Record fields: arxiv_id, title, abstract, link, authors, updated, published
2407.06303
Unsupervised Fault Detection using SAM with a Moving Window Approach
Automated fault detection and monitoring in engineering are critical but frequently difficult owing to the necessity for collecting and labeling large amounts of defective samples. We present an unsupervised method that uses the high-end Segment Anything Model (SAM) and a moving window approach. SAM has gained recognition in AI image segmentation communities for its accuracy and versatility. However, its performance can be inconsistent when dealing with certain unexpected shapes, such as shadows and subtle surface irregularities. This limitation raises concerns about its applicability for fault detection in real-world scenarios. We aim to overcome these challenges without requiring fine-tuning or labeled data. Our technique divides pictures into smaller windows, which are subsequently processed using SAM. This increases the accuracy of fault identification by focusing on localized details. We compute the sizes of the segmented sections and then use a clustering technique to discover consistent fault areas while filtering out noise. To further improve the method's robustness, we propose adding the Exponentially Weighted Moving Average (EWMA) technique for continuous monitoring in industrial settings, which would improve the method's capacity to trace faults over time. We compare our method to various well-established methods using a real case study where our model achieves 0.96 accuracy compared to 0.85 for the second best method. We also compare our method using two open-source datasets where our model attains a consistent 0.86 accuracy across the datasets compared to 0.53 and 0.54 for the second best models.
http://arxiv.org/pdf/2407.06303v1
[ "Ahmed Maged", "Herman Shen" ]
2024-07-08T18:12:29Z
2024-07-08T18:12:29Z
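The abstract above proposes EWMA monitoring of the segmented fault areas over time but does not give concrete parameters. Below is a minimal NumPy sketch of an EWMA control chart applied to a per-frame fault-area statistic; the smoothing weight, control-limit width, and warm-up window are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ewma_monitor(x, lam=0.2, L=3.0, warmup=20):
    """EWMA control chart over a sequence of per-frame fault-area statistics.

    x      : 1-D array, e.g. total segmented fault area per frame.
    lam    : smoothing weight in (0, 1]; smaller values average over more frames.
    L      : control-limit width in standard deviations.
    warmup : number of initial frames used to estimate in-control mean/std.
    """
    x = np.asarray(x, dtype=float)
    mu0, sigma0 = x[:warmup].mean(), x[:warmup].std(ddof=1)
    z = np.empty_like(x)
    alarms = np.zeros_like(x, dtype=bool)
    z_prev = mu0
    for t, xt in enumerate(x):
        z_prev = lam * xt + (1.0 - lam) * z_prev
        z[t] = z_prev
        # time-varying variance of the EWMA statistic
        var_t = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * (t + 1))) * sigma0 ** 2
        alarms[t] = abs(z_prev - mu0) > L * np.sqrt(var_t)
    return z, alarms

# usage: flag frames where the monitored fault area drifts out of control
areas = np.r_[np.random.normal(100, 5, 200), np.random.normal(130, 5, 50)]
trace, alarms = ewma_monitor(areas)
print("first alarm at frame", int(np.argmax(alarms)) if alarms.any() else None)
```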
2310.02011
Decoding Human Activities: Analyzing Wearable Accelerometer and Gyroscope Data for Activity Recognition
A person's movement or relative positioning can be effectively captured by different types of sensors and corresponding sensor output can be utilized in various manipulative techniques for the classification of different human activities. This letter proposes an effective scheme for human activity recognition, which introduces two unique approaches within a multi-structural architecture, named FusionActNet. The first approach aims to capture the static and dynamic behavior of a particular action by using two dedicated residual networks and the second approach facilitates the final decision-making process by introducing a guidance module. A two-stage training process is designed where at the first stage, residual networks are pre-trained separately by using static (where the human body is immobile) and dynamic (involving movement of the human body) data. In the next stage, the guidance module along with the pre-trained static or dynamic models are used to train the given sensor data. Here the guidance module learns to emphasize the most relevant prediction vector obtained from the static or dynamic models, which helps to effectively classify different human activities. The proposed scheme is evaluated using two benchmark datasets and compared with state-of-the-art methods. The results clearly demonstrate that our method outperforms existing approaches in terms of accuracy, precision, recall, and F1 score, achieving 97.35% and 95.35% accuracy on the UCI HAR and Motion-Sense datasets, respectively, which highlights both the effectiveness and stability of the proposed scheme.
http://arxiv.org/pdf/2310.02011v3
[ "Utsab Saha", "Sawradip Saha", "Tahmid Kabir", "Shaikh Anowarul Fattah", "Mohammad Saquib" ]
2024-07-08T18:09:11Z
2023-10-03T12:34:31Z
2407.06298
Multi-Label Plant Species Classification with Self-Supervised Vision Transformers
We present a transfer learning approach using a self-supervised Vision Transformer (DINOv2) for the PlantCLEF 2024 competition, focusing on multi-label plant species classification. Our method leverages both base and fine-tuned DINOv2 models to extract generalized feature embeddings. We train classifiers to predict multiple plant species within a single image using these rich embeddings. To address the computational challenges of the large-scale dataset, we employ Spark for distributed data processing, ensuring efficient memory management and processing across a cluster of workers. Our data processing pipeline transforms images into grids of tiles, classifies each tile, and aggregates these predictions into a consolidated set of probabilities. Our results demonstrate the efficacy of combining transfer learning with advanced data processing techniques for multi-label image classification tasks. Our code is available at https://github.com/dsgt-kaggle-clef/plantclef-2024.
http://arxiv.org/pdf/2407.06298v1
[ "Murilo Gustineli", "Anthony Miyaguchi", "Ian Stalter" ]
2024-07-08T18:07:33Z
2024-07-08T18:07:33Z
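The pipeline above tiles each image, classifies each tile, and aggregates the predictions. A minimal PyTorch sketch of that idea follows; the tile size, stride, and mean aggregation are illustrative choices rather than the competition pipeline's exact settings, and `classifier` stands in for any trained tile-level model.

```python
import torch
import torch.nn.functional as F

def classify_by_tiles(image, classifier, tile=224, stride=224):
    """Split an image tensor (C, H, W) into a grid of tiles, classify each tile,
    and aggregate the per-tile probabilities by averaging.

    `classifier` is any callable mapping a (N, C, tile, tile) batch to logits.
    The image is assumed to be at least `tile` pixels in each dimension.
    """
    c, h, w = image.shape
    tiles = []
    for top in range(0, h - tile + 1, stride):
        for left in range(0, w - tile + 1, stride):
            tiles.append(image[:, top:top + tile, left:left + tile])
    batch = torch.stack(tiles)                      # (num_tiles, C, tile, tile)
    with torch.no_grad():
        probs = F.softmax(classifier(batch), dim=-1)
    return probs.mean(dim=0)                        # consolidated species probabilities
```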
2407.06295
Engineering morphogenesis of cell clusters with differentiable programming
Understanding the rules underlying organismal development is a major unsolved problem in biology. Each cell in a developing organism responds to signals in its local environment by dividing, excreting, consuming, or reorganizing, yet how these individual actions coordinate over a macroscopic number of cells to grow complex structures with exquisite functionality is unknown. Here we use recent advances in automatic differentiation to discover local interaction rules and genetic networks that yield emergent, systems-level characteristics in a model of development. We consider a growing tissue in which cellular interactions are mediated by morphogen diffusion, differential cell adhesion and mechanical stress. Each cell has an internal genetic network that it uses to make decisions based on its local environment. We show that one can simultaneously learn parameters governing the cell interactions and the genetic network for complex developmental scenarios, including the symmetry breaking of an embryo from an initial cell, the creation of emergent chemical gradients, homogenization of growth via mechanical stress, programmed growth into a prespecified shape, and the ability to repair from damage. When combined with recent experimental advances measuring spatio-temporal dynamics and gene expression of cells in a growing tissue, the methodology outlined here offers a promising path to unravelling the cellular basis of development.
http://arxiv.org/pdf/2407.06295v1
[ "Ramya Deshpande", "Francesco Mottes", "Ariana-Dalia Vlad", "Michael P. Brenner", "Alma dal Co" ]
2024-07-08T18:05:11Z
2024-07-08T18:05:11Z
2407.06286
Characterization of topological structures in different neural network architectures
One of the most crucial tasks in the future will be to understand what is going on in neural networks, as they become even more powerful and widely deployed. This work aims to use TDA methods to analyze neural representations. We develop methods for analyzing representations from different architectures and check how one should use them to obtain valid results. Our findings indicate that removing outliers does not have much impact on the results and that we should compare representations with the same number of elements. We applied these methods to ResNet, VGG19, and ViT architectures and found substantial differences along with some similarities. Additionally, we determined that models with similar architecture tend to have a similar topology of representations and models with a larger number of layers change their topology more smoothly. Furthermore, we found that the topology of pre-trained and fine-tuned models starts to differ in the middle and final layers while remaining quite similar in the initial layers. These findings demonstrate the efficacy of TDA in the analysis of neural network behavior.
http://arxiv.org/pdf/2407.06286v1
[ "Paweł Świder" ]
2024-07-08T18:02:18Z
2024-07-08T18:02:18Z
2407.06183
Stepping on the Edge: Curvature Aware Learning Rate Tuners
Curvature information -- particularly, the largest eigenvalue of the loss Hessian, known as the sharpness -- often forms the basis for learning rate tuners. However, recent work has shown that the curvature information undergoes complex dynamics during training, going from a phase of increasing sharpness to eventual stabilization. We analyze the closed-loop feedback effect between learning rate tuning and curvature. We find that classical learning rate tuners may yield greater one-step loss reduction, yet they ultimately underperform in the long term when compared to constant learning rates in the full batch regime. These models break the stabilization of the sharpness, which we explain using a simplified model of the joint dynamics of the learning rate and the curvature. To further investigate these effects, we introduce a new learning rate tuning method, Curvature Dynamics Aware Tuning (CDAT), which prioritizes long term curvature stabilization over instantaneous progress on the objective. In the full batch regime, CDAT shows behavior akin to prefixed warm-up schedules on deep learning objectives, outperforming tuned constant learning rates. In the mini batch regime, we observe that stochasticity introduces confounding effects that explain the previous success of some learning rate tuners at appropriate batch sizes. Our findings highlight the critical role of understanding the joint dynamics of the learning rate and curvature, beyond greedy minimization, to diagnose failures and design effective adaptive learning rate tuners.
http://arxiv.org/pdf/2407.06183v1
[ "Vincent Roulet", "Atish Agarwala", "Jean-Bastien Grill", "Grzegorz Swirszcz", "Mathieu Blondel", "Fabian Pedregosa" ]
2024-07-08T17:56:00Z
2024-07-08T17:56:00Z
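The abstract above centers learning rate tuning on the sharpness (largest Hessian eigenvalue) but does not spell out CDAT's update rule. The sketch below only illustrates the common building block: estimating sharpness by power iteration on Hessian-vector products and capping the step near the classical 2/sharpness stability bound. Function names, the iteration count, and the safety factor are assumptions for illustration, not the paper's method.

```python
import torch

def estimate_sharpness(loss, params, iters=20):
    """Approximate the largest eigenvalue of the loss Hessian (the 'sharpness')
    via power iteration on Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = torch.tensor(0.0)
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        eig = sum((hvi * vi).sum() for hvi, vi in zip(hv, v))   # Rayleigh quotient
        v = [hvi.detach() for hvi in hv]
    return float(eig)

def capped_lr(base_lr, sharpness, safety=0.9):
    """Keep the step inside the local stability bound lr < 2 / sharpness."""
    return min(base_lr, safety * 2.0 / max(sharpness, 1e-12))

# e.g. lr = capped_lr(0.1, estimate_sharpness(loss, list(model.parameters())))
```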
2405.14734
SimPO: Simple Preference Optimization with a Reference-Free Reward
Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectiveness of SimPO is attributed to a key design: using the average log probability of a sequence as the implicit reward. This reward formulation better aligns with model generation and eliminates the need for a reference model, making it more compute and memory efficient. Additionally, we introduce a target reward margin to the Bradley-Terry objective to encourage a larger margin between the winning and losing responses, further enhancing the algorithm's performance. We compare SimPO to DPO and its latest variants across various state-of-the-art training setups, including both base and instruction-tuned models like Mistral and Llama3. We evaluated on extensive instruction-following benchmarks, including AlpacaEval 2, MT-Bench, and the recent challenging Arena-Hard benchmark. Our results demonstrate that SimPO consistently and significantly outperforms existing approaches without substantially increasing response length. Specifically, SimPO outperforms DPO by up to 6.4 points on AlpacaEval 2 and by up to 7.5 points on Arena-Hard. Our top-performing model, built on Llama3-8B-Instruct, achieves a remarkable 53.7 length-controlled win rate on AlpacaEval 2 -- surpassing Claude 3 Opus on the leaderboard, and a 36.5 win rate on Arena-Hard -- making it the strongest 8B open-source model.
http://arxiv.org/pdf/2405.14734v2
[ "Yu Meng", "Mengzhou Xia", "Danqi Chen" ]
2024-07-08T17:55:24Z
2024-05-23T16:01:46Z
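The abstract describes SimPO's implicit reward as the length-normalized average log probability of a response, combined with a target margin in a Bradley-Terry-style objective. Here is a short PyTorch sketch of that loss; the beta and gamma values are placeholders, and in practice the token log-probabilities would come from the policy being fine-tuned.

```python
import torch
import torch.nn.functional as F

def simpo_loss(logp_chosen, logp_rejected, len_chosen, len_rejected,
               beta=2.0, gamma=1.0):
    """SimPO-style preference loss (sketch).

    logp_* : summed token log-probabilities of the chosen / rejected responses.
    len_*  : response lengths, used to form the average log probability that
             serves as the implicit, reference-free reward.
    beta   : reward scaling; gamma: target reward margin (placeholder values).
    """
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    return -F.logsigmoid(reward_chosen - reward_rejected - gamma).mean()

# toy usage on a batch of two preference pairs
loss = simpo_loss(torch.tensor([-40.0, -55.0]), torch.tensor([-60.0, -58.0]),
                  torch.tensor([20.0, 25.0]), torch.tensor([22.0, 24.0]))
```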
2407.06178
Transfer Learning with Self-Supervised Vision Transformers for Snake Identification
We present our approach for the SnakeCLEF 2024 competition to predict snake species from images. We explore and use Meta's DINOv2 vision transformer model for feature extraction to tackle species' high variability and visual similarity in a dataset of 182,261 images. We perform exploratory analysis on embeddings to understand their structure, and train a linear classifier on the embeddings to predict species. Despite achieving a score of 39.69, our results show promise for DINOv2 embeddings in snake identification. All code for this project is available at https://github.com/dsgt-kaggle-clef/snakeclef-2024.
http://arxiv.org/pdf/2407.06178v1
[ "Anthony Miyaguchi", "Murilo Gustineli", "Austin Fischer", "Ryan Lundqvist" ]
2024-07-08T17:52:23Z
2024-07-08T17:52:23Z
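The approach above extracts frozen DINOv2 embeddings and fits a linear classifier on them. A minimal sketch follows, assuming the public torch.hub entry point for DINOv2 and a generic scikit-learn linear probe; this is not the authors' exact pipeline, and the dummy tensors stand in for the real SnakeCLEF images and labels.

```python
import torch
from sklearn.linear_model import LogisticRegression

# Load a pre-trained DINOv2 backbone from torch.hub (entry point name assumed;
# adjust to the variant you actually use).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
backbone.eval()

@torch.no_grad()
def embed(images):
    """images: float tensor (N, 3, 224, 224), ImageNet-normalized."""
    return backbone(images).cpu().numpy()            # (N, embed_dim) image embeddings

# Linear probe on frozen embeddings (dummy data for illustration).
images = torch.randn(8, 3, 224, 224)
labels = [0, 1, 0, 1, 2, 2, 0, 1]                    # integer species ids
clf = LogisticRegression(max_iter=1000).fit(embed(images), labels)
print(clf.predict(embed(images[:2])))
```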
2405.08698
Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises
Federated learning (FL) shows great promise in large scale machine learning, but brings new risks in terms of privacy and security. We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and private from other users. The scheme builds on the preexisting non-private FLTrust scheme, which tolerates malicious users through trust scores (TS) that attenuate or amplify the users' gradients. The trust scores are based on the ReLU function, which we approximate by a polynomial. The distributed and privacy-preserving computation in ByITFL is designed using a combination of Lagrange coded computing, verifiable secret sharing and re-randomization steps. ByITFL is the first Byzantine resilient scheme for FL with full information-theoretic privacy.
http://arxiv.org/pdf/2405.08698v2
[ "Yue Xia", "Christoph Hofmeister", "Maximilian Egger", "Rawad Bitar" ]
2024-07-08T17:48:43Z
2024-05-14T15:37:56Z
2407.06169
Potential Based Diffusion Motion Planning
Effective motion planning in high dimensional spaces is a long-standing open problem in robotics. One class of traditional motion planning algorithms corresponds to potential-based motion planning. An advantage of potential based motion planning is composability -- different motion constraints can be easily combined by adding corresponding potentials. However, constructing motion paths from potentials requires solving a global optimization across the configuration space potential landscape, which is often prone to local minima. We propose a new approach towards learning potential based motion planning, where we train a neural network to capture and learn easily optimizable potentials over motion planning trajectories. We illustrate the effectiveness of this approach, significantly outperforming both classical and recent learned motion planning approaches and avoiding issues with local minima. We further illustrate its inherent composability, enabling us to generalize to a multitude of different motion constraints.
http://arxiv.org/pdf/2407.06169v1
[ "Yunhao Luo", "Chen Sun", "Joshua B. Tenenbaum", "Yilun Du" ]
2024-07-08T17:48:39Z
2024-07-08T17:48:39Z
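The composability the abstract emphasizes, that constraints combine by adding their potentials, is easy to show in a toy form. The PyTorch sketch below sums hand-written goal, start, obstacle, and smoothness potentials over a waypoint trajectory and descends the total; the paper instead learns the potentials with a neural network, so everything here (potential definitions, weights, optimizer settings) is an illustrative stand-in.

```python
import torch

def anchor_potential(point, target):
    """Quadratic attraction of a single waypoint toward a target."""
    return ((point - target) ** 2).sum()

def obstacle_potential(traj, center, radius):
    """Penalize waypoints that enter a circular obstacle."""
    d = torch.linalg.norm(traj - center, dim=-1)
    return torch.clamp(radius - d, min=0.0).pow(2).sum()

start, goal = torch.tensor([0.0, 0.0]), torch.tensor([1.0, 1.0])
obstacle_center = torch.tensor([0.5, 0.5])

# straight-line initialization of 20 waypoints
traj = (torch.linspace(0, 1, 20).unsqueeze(-1) * (goal - start) + start).requires_grad_(True)
opt = torch.optim.Adam([traj], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    # different motion constraints compose by simply adding their potentials
    total = (anchor_potential(traj[0], start)
             + anchor_potential(traj[-1], goal)
             + obstacle_potential(traj, obstacle_center, radius=0.2)
             + ((traj[1:] - traj[:-1]) ** 2).sum())   # smoothness term
    total.backward()
    opt.step()
```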
2407.06167
DεpS: Delayed ε-Shrinking for Faster Once-For-All Training
CNNs are increasingly deployed across different hardware, dynamic environments, and low-power embedded devices. This has led to the design and training of CNN architectures with the goal of maximizing accuracy subject to such variable deployment constraints. As the number of deployment scenarios grows, there is a need to find scalable solutions to design and train specialized CNNs. Once-for-all training has emerged as a scalable approach that jointly co-trains many models (subnets) at once with a constant training cost and finds specialized CNNs later. The scalability is achieved by training the full model and simultaneously reducing it to smaller subnets that share model weights (weight-shared shrinking). However, existing once-for-all training approaches incur huge training costs reaching 1200 GPU hours. We argue this is because they either start the process of shrinking the full model too early or too late. Hence, we propose Delayed $\epsilon$-Shrinking (D$\epsilon$pS) that starts the process of shrinking the full model when it is partially trained (~50%), which leads to training cost improvement and better in-place knowledge distillation to smaller models. The proposed approach also consists of novel heuristics that dynamically adjust subnet learning rates incrementally (E), leading to improved weight-shared knowledge distillation from larger to smaller subnets as well. As a result, D$\epsilon$pS outperforms state-of-the-art once-for-all training techniques across different datasets including CIFAR10/100, ImageNet-100, and ImageNet-1k on accuracy and cost. It achieves 1.83% higher ImageNet-1k top-1 accuracy or the same accuracy with 1.3x reduction in FLOPs and 2.5x drop in training cost (GPU*hrs).
http://arxiv.org/pdf/2407.06167v1
[ "Aditya Annavajjala", "Alind Khare", "Animesh Agrawal", "Igor Fedorov", "Hugo Latapie", "Myungjin Lee", "Alexey Tumanov" ]
2024-07-08T17:45:40Z
2024-07-08T17:45:40Z
2406.17119
Accelerating Phase Field Simulations Through a Hybrid Adaptive Fourier Neural Operator with U-Net Backbone
Prolonged contact between a corrosive liquid and metal alloys can cause progressive dealloying. For such liquid-metal dealloying (LMD) processes, phase field models have been developed. However, the governing equations often involve coupled non-linear partial differential equations (PDE), which are challenging to solve numerically. In particular, stiffness in the PDEs requires extremely small time steps (e.g. $10^{-12}$ or smaller). This computational bottleneck is especially problematic when running LMD simulations until a late time horizon is required. This motivates the development of surrogate models capable of leaping forward in time, by skipping several consecutive time steps at once. In this paper, we propose U-Shaped Adaptive Fourier Neural Operators (U-AFNO), a machine learning (ML) model inspired by recent advances in neural operator learning. U-AFNO employs U-Nets for extracting and reconstructing local features within the physical fields, and passes the latent space through a vision transformer (ViT) implemented in the Fourier space (AFNO). We use U-AFNOs to learn the dynamics mapping the field at a current time step into a later time step. We also identify global quantities of interest (QoI) describing the corrosion process (e.g. the deformation of the liquid-metal interface) and show that our proposed U-AFNO model is able to accurately predict the field dynamics, in spite of the chaotic nature of LMD. Our model reproduces the key micro-structure statistics and QoIs with a level of accuracy on par with the high-fidelity numerical solver. We also investigate the opportunity of using hybrid simulations, in which we alternate forward leaps in time using the U-AFNO with high-fidelity time stepping. We demonstrate that while advantageous for some surrogate model design choices, our proposed U-AFNO model in fully auto-regressive settings consistently outperforms hybrid schemes.
http://arxiv.org/pdf/2406.17119v2
[ "Christophe Bonneville", "Nathan Bieberdorf", "Arun Hegde", "Mark Asta", "Habib N. Najm", "Laurent Capolungo", "Cosmin Safta" ]
2024-07-08T17:23:22Z
2024-06-24T20:13:23Z
2405.05968
A Universal Growth Rate for Learning with Smooth Surrogate Losses
This paper presents a comprehensive analysis of the growth rate of $H$-consistency bounds (and excess error bounds) for various surrogate losses used in classification. We prove a square-root growth rate near zero for smooth margin-based surrogate losses in binary classification, providing both upper and lower bounds under mild assumptions. This result also translates to excess error bounds. Our lower bound requires weaker conditions than those in previous work for excess error bounds, and our upper bound is entirely novel. Moreover, we extend this analysis to multi-class classification with a series of novel results, demonstrating a universal square-root growth rate for smooth comp-sum and constrained losses, covering common choices for training neural networks in multi-class classification. Given this universal rate, we turn to the question of choosing among different surrogate losses. We first examine how $H$-consistency bounds vary across surrogates based on the number of classes. Next, ignoring constants and focusing on behavior near zero, we identify minimizability gaps as the key differentiating factor in these bounds. Thus, we thoroughly analyze these gaps, to guide surrogate loss selection, covering: comparisons across different comp-sum losses, conditions where gaps become zero, and general conditions leading to small gaps. Additionally, we demonstrate the key role of minimizability gaps in comparing excess error bounds and $H$-consistency bounds.
http://arxiv.org/pdf/2405.05968v2
[ "Anqi Mao", "Mehryar Mohri", "Yutao Zhong" ]
2024-07-08T17:20:19Z
2024-05-09T17:59:55Z
2307.13885
Characterizing Data Point Vulnerability via Average-Case Robustness
Studying the robustness of machine learning models is important to ensure consistent model behaviour across real-world settings. To this end, adversarial robustness is a standard framework, which views robustness of predictions through a binary lens: either a worst-case adversarial misclassification exists in the local region around an input, or it does not. However, this binary perspective does not account for the degrees of vulnerability, as data points with a larger number of misclassified examples in their neighborhoods are more vulnerable. In this work, we consider a complementary framework for robustness, called average-case robustness, which measures the fraction of points in a local region that provide consistent predictions. However, computing this quantity is hard, as standard Monte Carlo approaches are inefficient especially for high-dimensional inputs. In this work, we propose the first analytical estimators for average-case robustness for multi-class classifiers. We show empirically that our estimators are accurate and efficient for standard deep learning models and demonstrate their usefulness for identifying vulnerable data points, as well as quantifying robustness bias of models. Overall, our tools provide a complementary view to robustness, improving our ability to characterize model behaviour.
http://arxiv.org/pdf/2307.13885v6
[ "Tessa Han", "Suraj Srinivas", "Himabindu Lakkaraju" ]
2024-07-08T17:00:16Z
2023-07-26T01:10:29Z
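The paper's contribution is the analytical estimators; the sketch below only shows the naive Monte Carlo baseline the abstract contrasts them with: the fraction of Gaussian-perturbed neighbors on which the classifier keeps its original prediction. The noise scale and sample count are arbitrary placeholders.

```python
import torch

@torch.no_grad()
def mc_average_case_robustness(model, x, sigma=0.1, n_samples=1000, batch=200):
    """Monte Carlo estimate of average-case robustness at a single input `x`:
    the fraction of points in a Gaussian neighborhood of `x` on which the
    classifier's predicted class matches the prediction at `x`.

    model : callable mapping (N, ...) inputs to (N, num_classes) logits.
    """
    base_pred = model(x.unsqueeze(0)).argmax(dim=-1)
    agree, done = 0, 0
    while done < n_samples:
        n = min(batch, n_samples - done)
        noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
        agree += (model(noisy).argmax(dim=-1) == base_pred).sum().item()
        done += n
    return agree / n_samples
```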
2405.16642
Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning
A key challenge in lifelong reinforcement learning (RL) is the loss of plasticity, where previous learning progress hinders an agent's adaptation to new tasks. While regularization and resetting can help, they require precise hyperparameter selection at the outset and environment-dependent adjustments. Building on the principled theory of online convex optimization, we present a parameter-free optimizer for lifelong RL, called TRAC, which requires no tuning or prior knowledge about the distribution shifts. Extensive experiments on Procgen, Atari, and Gym Control environments show that TRAC works surprisingly well, mitigating loss of plasticity and rapidly adapting to challenging distribution shifts, despite the underlying optimization problem being nonconvex and nonstationary.
http://arxiv.org/pdf/2405.16642v2
[ "Aneesh Muppidi", "Zhiyu Zhang", "Heng Yang" ]
2024-07-08T17:00:07Z
2024-05-26T17:38:44Z
2407.01512
Open-TeleVision: Teleoperation with Immersive Active Visual Feedback
Teleoperation serves as a powerful method for collecting on-robot data essential for robot learning from demonstrations. The intuitiveness and ease of use of the teleoperation system are crucial for ensuring high-quality, diverse, and scalable data. To achieve this, we propose an immersive teleoperation system Open-TeleVision that allows operators to actively perceive the robot's surroundings in a stereoscopic manner. Additionally, the system mirrors the operator's arm and hand movements on the robot, creating an immersive experience as if the operator's mind is transmitted to a robot embodiment. We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks (Can Sorting, Can Insertion, Folding, and Unloading) for 2 different humanoid robots and deploy them in the real world. The system is open-sourced at: https://robot-tv.github.io/
http://arxiv.org/pdf/2407.01512v2
[ "Xuxin Cheng", "Jialong Li", "Shiqi Yang", "Ge Yang", "Xiaolong Wang" ]
2024-07-08T16:59:38Z
2024-07-01T17:55:35Z
2407.06121
Periodic agent-state based Q-learning for POMDPs
The standard approach for Partially Observable Markov Decision Processes (POMDPs) is to convert them to a fully observed belief-state MDP. However, the belief state depends on the system model and is therefore not viable in reinforcement learning (RL) settings. A widely used alternative is to use an agent state, which is a model-free, recursively updateable function of the observation history. Examples include frame stacking and recurrent neural networks. Since the agent state is model-free, it is used to adapt standard RL algorithms to POMDPs. However, standard RL algorithms like Q-learning learn a stationary policy. Our main thesis, which we illustrate via examples, is that because the agent state does not satisfy the Markov property, non-stationary agent-state based policies can outperform stationary ones. To leverage this feature, we propose PASQL (periodic agent-state based Q-learning), which is a variant of agent-state-based Q-learning that learns periodic policies. By combining ideas from periodic Markov chains and stochastic approximation, we rigorously establish that PASQL converges to a cyclic limit and characterize the approximation error of the converged periodic policy. Finally, we present a numerical experiment to highlight the salient features of PASQL and demonstrate the benefit of learning periodic policies over stationary policies.
http://arxiv.org/pdf/2407.06121v1
[ "Amit Sinha", "Mathieu Geist", "Aditya Mahajan" ]
2024-07-08T16:58:57Z
2024-07-08T16:58:57Z
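The abstract gives no pseudocode, but the core idea, learning a periodic rather than stationary policy, can be sketched in tabular form: keep one Q-table per phase of a period and cycle through them. The environment interface (reset/step with integer agent states) and all hyperparameters below are assumptions for illustration.

```python
import numpy as np

def periodic_q_learning(env, n_states, n_actions, period=2,
                        episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular sketch of periodic agent-state-based Q-learning: one Q-table per
    phase; the policy at time t acts greedily with respect to table t % period.

    `env` is assumed to expose reset() -> state and step(a) -> (state, reward, done),
    with integer agent states in [0, n_states).
    """
    Q = np.zeros((period, n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, t, done = env.reset(), 0, False
        while not done:
            phase, nxt = t % period, (t + 1) % period
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[phase, s].argmax())
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * Q[nxt, s2].max())
            Q[phase, s, a] += alpha * (target - Q[phase, s, a])
            s, t = s2, t + 1
    return Q   # the learned periodic policy is the per-phase argmax over actions
```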
2407.06120
Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning
We revisit data selection in a modern context of finetuning from a fundamental perspective. Extending the classical wisdom of variance minimization in low dimensions to high-dimensional finetuning, our generalization analysis unveils the importance of additionally reducing bias induced by low-rank approximation. Inspired by the variance-bias tradeoff in high dimensions from the theory, we introduce Sketchy Moment Matching (SkMM), a scalable data selection scheme with two stages. (i) First, the bias is controlled using gradient sketching that explores the finetuning parameter space for an informative low-dimensional subspace $\mathcal{S}$; (ii) then the variance is reduced over $\mathcal{S}$ via moment matching between the original and selected datasets. Theoretically, we show that gradient sketching is fast and provably accurate: selecting $n$ samples by reducing variance over $\mathcal{S}$ preserves the fast-rate generalization $O(\dim(\mathcal{S})/n)$, independent of the parameter dimension. Empirically, we concretize the variance-bias balance via synthetic experiments and demonstrate the effectiveness of SkMM for finetuning in real vision tasks.
http://arxiv.org/pdf/2407.06120v1
[ "Yijun Dong", "Hoang Phan", "Xiang Pan", "Qi Lei" ]
2024-07-08T16:57:26Z
2024-07-08T16:57:26Z
2407.02539
Research on Autonomous Robots Navigation based on Reinforcement Learning
Reinforcement learning continuously optimizes decision-making based on real-time feedback reward signals through continuous interaction with the environment, demonstrating strong adaptive and self-learning capabilities. In recent years, it has become one of the key methods to achieve autonomous navigation of robots. In this work, an autonomous robot navigation method based on reinforcement learning is introduced. We use the Deep Q Network (DQN) and Proximal Policy Optimization (PPO) models to optimize the path planning and decision-making process through the continuous interaction between the robot and the environment, and the reward signals with real-time feedback. By combining the Q-value function with a deep neural network, the deep Q network can handle high-dimensional state spaces and thereby realize path planning in complex environments. Proximal policy optimization is a policy gradient-based method, which enables robots to explore and utilize environmental information more efficiently by optimizing policy functions. These methods not only improve the robot's navigation ability in unknown environments, but also enhance its adaptive and self-learning capabilities. Through multiple training and simulation experiments, we have verified the effectiveness and robustness of these models in various complex scenarios.
http://arxiv.org/pdf/2407.02539v2
[ "Zixiang Wang", "Hao Yan", "Yining Wang", "Zhengjia Xu", "Zhuoyue Wang", "Zhizhong Wu" ]
2024-07-08T16:50:48Z
2024-07-02T00:44:06Z
2406.05540
A Fine-tuning Dataset and Benchmark for Large Language Models for Protein Understanding
The parallels between protein sequences and natural language in their sequential structures have inspired the application of large language models (LLMs) to protein understanding. Despite the success of LLMs in NLP, their effectiveness in comprehending protein sequences remains an open question, largely due to the absence of datasets linking protein sequences to descriptive text. Researchers have then attempted to adapt LLMs for protein understanding by integrating a protein sequence encoder with a pre-trained LLM. However, this adaptation raises a fundamental question: "Can LLMs, originally designed for NLP, effectively comprehend protein sequences as a form of language?" Current datasets fall short in addressing this question due to the lack of a direct correlation between protein sequences and corresponding text descriptions, limiting the ability to train and evaluate LLMs for protein understanding effectively. To bridge this gap, we introduce ProteinLMDataset, a dataset specifically designed for further self-supervised pretraining and supervised fine-tuning (SFT) of LLMs to enhance their capability for protein sequence comprehension. Specifically, ProteinLMDataset includes 17.46 billion tokens for pretraining and 893,000 instructions for SFT. Additionally, we present ProteinLMBench, the first benchmark dataset consisting of 944 manually verified multiple-choice questions for assessing the protein understanding capabilities of LLMs. ProteinLMBench incorporates protein-related details and sequences in multiple languages, establishing a new standard for evaluating LLMs' abilities in protein comprehension. The large language model InternLM2-7B, pretrained and fine-tuned on the ProteinLMDataset, outperforms GPT-4 on ProteinLMBench, achieving the highest accuracy score.
http://arxiv.org/pdf/2406.05540v2
[ "Yiqing Shen", "Zan Chen", "Michail Mamalakis", "Luhan He", "Haiyang Xia", "Tianbin Li", "Yanzhou Su", "Junjun He", "Yu Guang Wang" ]
2024-07-08T16:39:35Z
2024-06-08T18:11:30Z
2407.06100
Leveraging data-driven weather models for improving numerical weather prediction skill through large-scale spectral nudging
Operational meteorological forecasting has long relied on physics-based numerical weather prediction (NWP) models. Recently, this landscape has been disrupted by the advent of data-driven artificial intelligence (AI)-based weather models, which offer tremendous computational performance and competitive forecasting skill. However, data-driven models for medium-range forecasting generally suffer from major limitations, including low effective resolution and a narrow range of predicted variables. This study illustrates the relative strengths and weaknesses of these competing paradigms using the GEM (Global Environmental Multiscale) and GraphCast models to represent physics-based and AI-based approaches, respectively. By analyzing global predictions from these two models against observations and analyses in both physical and spectral spaces, this study demonstrates that GraphCast-predicted large scales outperform GEM, particularly for longer lead times. Building on this insight, a hybrid NWP-AI system is proposed, wherein GEM-predicted large-scale state variables are spectrally nudged toward GraphCast predictions, while allowing GEM to freely generate fine-scale details critical for weather extremes. Results indicate that this hybrid approach is capable of leveraging the strengths of GraphCast to enhance the prediction skill of the GEM model. Importantly, trajectories of tropical cyclones are predicted with enhanced accuracy without significant changes in intensity. Furthermore, this new hybrid system ensures that meteorologists have access to a complete set of forecast variables, including those relevant for high-impact weather events.
http://arxiv.org/pdf/2407.06100v1
[ "Syed Zahid Husain", "Leo Separovic", "Jean-François Caron", "Rabah Aider", "Mark Buehner", "Stéphane Chamberland", "Ervig Lapalme", "Ron McTaggart-Cowan", "Christopher Subich", "Paul Vaillancourt", "Jing Yang", "Ayrton Zadra" ]
2024-07-08T16:39:25Z
2024-07-08T16:39:25Z
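Spectral nudging, as described above, relaxes only the large-scale components of the physics-based forecast toward the AI forecast while leaving the fine scales to the NWP model. Below is a 2-D toy sketch using NumPy FFTs; the wavenumber cutoff and relaxation strength are placeholders and bear no relation to GEM's operational configuration.

```python
import numpy as np

def spectral_nudge(field_nwp, field_ai, cutoff=10, strength=0.5):
    """Nudge the large scales of `field_nwp` toward `field_ai`.

    Both fields are 2-D arrays on the same grid. Fourier components with total
    wavenumber below `cutoff` are relaxed toward the AI forecast with weight
    `strength`; all smaller scales (higher wavenumbers) are kept from the NWP
    model unchanged.
    """
    F_nwp = np.fft.fft2(field_nwp)
    F_ai = np.fft.fft2(field_ai)
    ky = np.fft.fftfreq(field_nwp.shape[0]) * field_nwp.shape[0]
    kx = np.fft.fftfreq(field_nwp.shape[1]) * field_nwp.shape[1]
    k = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    mask = k < cutoff                                   # large-scale components only
    F_nwp[mask] = (1 - strength) * F_nwp[mask] + strength * F_ai[mask]
    return np.real(np.fft.ifft2(F_nwp))

# usage: blended = spectral_nudge(gem_like_field, graphcast_like_field)
```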
2407.06099
Physics-Informed Machine Learning Towards A Real-Time Spacecraft Thermal Simulator
Modeling thermal states for complex space missions, such as the surface exploration of airless bodies, requires high computation, whether used in ground-based analysis for spacecraft design or during onboard reasoning for autonomous operations. For example, a finite-element thermal model with hundreds of elements can take significant time to simulate, which makes it unsuitable for onboard reasoning during time-sensitive scenarios such as descent and landing, proximity operations, or in-space assembly. Further, the lack of fast and accurate thermal modeling drives thermal designs to be more conservative and leads to spacecraft with larger mass and higher power budgets. The emerging paradigm of physics-informed machine learning (PIML) presents a class of hybrid modeling architectures that address this challenge by combining simplified physics models with machine learning (ML) models resulting in models which maintain both interpretability and robustness. Such techniques enable designs with reduced mass and power through onboard thermal-state estimation and control and may lead to improved onboard handling of off-nominal states, including unplanned down-time. The PIML model or hybrid model presented here consists of a neural network which predicts reduced nodalizations (distribution and size of coarse mesh) given on-orbit thermal load conditions, and subsequently a (relatively coarse) finite-difference model operates on this mesh to predict thermal states. We compare the computational performance and accuracy of the hybrid model to a data-driven neural net model, and a high-fidelity finite-difference model of a prototype Earth-orbiting small spacecraft. The PIML based active nodalization approach provides significantly better generalization than the neural net model and coarse mesh model, while reducing computing cost by up to 1.7x compared to the high-fidelity model.
http://arxiv.org/pdf/2407.06099v1
[ "Manaswin Oddiraju", "Zaki Hasnain", "Saptarshi Bandyopadhyay", "Eric Sunada", "Souma Chowdhury" ]
2024-07-08T16:38:52Z
2024-07-08T16:38:52Z
2407.06092
Assessing Cardiomegaly in Dogs Using a Simple CNN Model
This paper introduces DogHeart, a dataset comprising 1400 training, 200 validation, and 400 test images categorized as small, normal, and large based on VHS score. A custom CNN model is developed, featuring a straightforward architecture with 4 convolutional layers and 4 fully connected layers. Despite the absence of data augmentation, the model achieves a 72% accuracy in classifying cardiomegaly severity. The study contributes to automated assessment of cardiac conditions in dogs, highlighting the potential for early detection and intervention in veterinary care.
http://arxiv.org/pdf/2407.06092v1
[ "Nikhil Deekonda" ]
2024-07-08T16:31:49Z
2024-07-08T16:31:49Z
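The abstract specifies a straightforward architecture with 4 convolutional and 4 fully connected layers for three severity classes. The PyTorch sketch below matches those layer counts, but the channel widths, pooling, and 224x224 input size are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DogHeartCNN(nn.Module):
    """Simple CNN with 4 convolutional and 4 fully connected layers for
    3-class (small / normal / large) cardiomegaly classification."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(7),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# usage: logits for a batch of two radiographs
logits = DogHeartCNN()(torch.randn(2, 3, 224, 224))   # shape (2, 3)
```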
2405.20993
Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise
We consider a prototypical problem of Bayesian inference for a structured spiked model: a low-rank signal is corrupted by additive noise. While both information-theoretic and algorithmic limits are well understood when the noise is a Gaussian Wigner matrix, the more realistic case of structured noise still proves to be challenging. To capture the structure while maintaining mathematical tractability, a line of work has focused on rotationally invariant noise. However, existing studies either provide sub-optimal algorithms or are limited to special cases of noise ensembles. In this paper, using tools from statistical physics (replica method) and random matrix theory (generalized spherical integrals) we establish the first characterization of the information-theoretic limits for a noise matrix drawn from a general trace ensemble. Remarkably, our analysis unveils the asymptotic equivalence between the rotationally invariant model and a surrogate Gaussian one. Finally, we show how to saturate the predicted statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer (TAP) equations.
http://arxiv.org/pdf/2405.20993v2
[ "Jean Barbier", "Francesco Camilli", "Marco Mondelli", "Yizhou Xu" ]
2024-07-08T16:26:03Z
2024-05-31T16:38:35Z
2406.00853
A Tutorial on Doubly Robust Learning for Causal Inference
Doubly robust learning offers a robust framework for causal inference from observational data by integrating propensity score and outcome modeling. Despite its theoretical appeal, practical adoption remains limited due to perceived complexity and inaccessible software. This tutorial aims to demystify doubly robust methods and demonstrate their application using the EconML package. We provide an introduction to causal inference, discuss the principles of outcome modeling and propensity scores, and illustrate the doubly robust approach through simulated case studies. By simplifying the methodology and offering practical coding examples, we intend to make doubly robust learning accessible to researchers and practitioners in data science and statistics.
http://arxiv.org/pdf/2406.00853v2
[ "Hlynur Davíð Hlynsson" ]
2024-07-08T16:15:08Z
2024-06-02T20:18:40Z
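The tutorial itself demonstrates the EconML package; to keep the mechanics visible, here is a minimal from-scratch AIPW (doubly robust) estimate of the average treatment effect using scikit-learn models. The propensity and outcome learners are simple placeholders (no cross-fitting), and the toy data with a known effect of 2.0 is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_ate(X, T, Y):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    X: (n, d) covariates, T: (n,) binary treatment, Y: (n,) outcome.
    Combines an outcome model (mu0, mu1) with a propensity model (e); the
    estimate stays consistent if either model is correctly specified.
    """
    e = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
    e = np.clip(e, 0.01, 0.99)                         # avoid extreme weights
    mu1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)
    mu0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)
    psi = mu1 - mu0 + T * (Y - mu1) / e - (1 - T) * (Y - mu0) / (1 - e)
    return psi.mean()

# toy data with a true effect of 2.0
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = 2.0 * T + X @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=2000)
print(aipw_ate(X, T, Y))   # should be close to 2.0
```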
2407.06060
MERGE -- A Bimodal Dataset for Static Music Emotion Recognition
The Music Emotion Recognition (MER) field has seen steady developments in recent years, with contributions from feature engineering, machine learning, and deep learning. The landscape has also shifted from audio-centric systems to bimodal ensembles that combine audio and lyrics. However, a severe lack of public and sizeable bimodal databases has hampered the development and improvement of bimodal audio-lyrics systems. This article proposes three new audio, lyrics, and bimodal MER research datasets, collectively called MERGE, created using a semi-automatic approach. To comprehensively assess the proposed datasets and establish a baseline for benchmarking, we conducted several experiments for each modality, using feature engineering, machine learning, and deep learning methodologies. In addition, we propose and validate fixed train-validate-test splits. The obtained results confirm the viability of the proposed datasets, achieving the best overall result of 79.21% F1-score for bimodal classification using a deep neural network.
http://arxiv.org/pdf/2407.06060v1
[ "Pedro Lima Louro", "Hugo Redinho", "Ricardo Santos", "Ricardo Malheiro", "Renato Panda", "Rui Pedro Paiva" ]
2024-07-08T16:01:04Z
2024-07-08T16:01:04Z
2407.06057
Variational Best-of-N Alignment
Best-of-N (BoN) is a popular and effective algorithm for aligning language models to human preferences. The algorithm works as follows: at inference time, N samples are drawn from the language model, and the sample with the highest reward, as judged by a reward model, is returned as the output. Despite its effectiveness, BoN is computationally expensive; it reduces sampling throughput by a factor of N. To make BoN more efficient at inference time, one strategy is to fine-tune the language model to mimic what BoN does during inference. To achieve this, we derive the distribution induced by the BoN algorithm. We then propose to fine-tune the language model to minimize backward KL divergence to the BoN distribution. Our approach is analogous to mean-field variational inference and, thus, we term it variational BoN (vBoN). To the extent this fine-tuning is successful and we end up with a good approximation, we have reduced the inference cost by a factor of N. Our experiments on a controlled generation task suggest that while variational BoN is not as effective as BoN in aligning language models, it is close to BoN performance as vBoN appears more often on the Pareto frontier of reward and KL divergence compared to models trained with a KL-constrained RL objective.
http://arxiv.org/pdf/2407.06057v1
[ "Afra Amini", "Tim Vieira", "Ryan Cotterell" ]
2024-07-08T15:59:44Z
2024-07-08T15:59:44Z
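The BoN procedure the paper starts from is simple enough to state in a few lines. The sketch below is written against generic `generate` and `reward` callables so it does not commit to any particular model or reward-model API; the toy usage at the end is purely illustrative.

```python
import random
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 8) -> str:
    """Best-of-N inference: draw n samples from the language model and return
    the one the reward model scores highest."""
    samples: List[str] = [generate(prompt) for _ in range(n)]
    return max(samples, key=lambda y: reward(prompt, y))

# toy usage with stand-in callables
print(best_of_n("2+2=",
                generate=lambda p: p + random.choice(["4", "5"]),
                reward=lambda p, y: float(y.endswith("4")),
                n=4))
```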
2306.08670
Simple Opinion Dynamics for No-Regret Learning
We study a cooperative multi-agent bandit setting in the distributed GOSSIP model: in every round, each of $n$ agents chooses an action from a common set, observes the action's corresponding reward, and subsequently exchanges information with a single randomly chosen neighbor, which may inform its choice in the next round. We introduce and analyze families of memoryless and time-independent protocols for this setting, inspired by opinion dynamics that are well-studied for other algorithmic tasks in the GOSSIP model. For stationary reward settings, we prove for the first time that these simple protocols exhibit best-of-both-worlds behavior, simultaneously obtaining constant cumulative regret scaling like $R(T)/T = \widetilde{O}(1/T)$, and also reaching consensus on the highest-mean action within $\widetilde{O}(\sqrt{n})$ rounds. We obtain these results by showing a new connection between the global evolution of these decentralized protocols and a class of zero-sum multiplicative weights update processes. Using this connection, we establish a general framework for analyzing the population-level regret and other properties of our protocols. Finally, we show our protocols are also surprisingly robust to adversarial rewards, and in this regime we obtain sublinear regret scaling like $R(T)/T = \widetilde{O}(1/\sqrt{T})$ as long as the number of rounds does not grow too fast as a function of $n$.
http://arxiv.org/pdf/2306.08670v4
[ "John Lazarsfeld", "Dan Alistarh" ]
2024-07-08T15:56:39Z
2023-06-14T17:59:15Z
2008.08718
Minimum discrepancy principle strategy for choosing $k$ in $k$-NN regression
We present a novel data-driven strategy to choose the hyperparameter $k$ in the $k$-NN regression estimator without using any hold-out data. We treat the problem of choosing the hyperparameter as an iterative procedure (over $k$) and propose a strategy, easily implemented in practice, based on the idea of early stopping and the minimum discrepancy principle. This model selection strategy is proven to be minimax-optimal over some smoothness function classes, for instance, the Lipschitz functions class on a bounded domain. The novel method often improves statistical performance on artificial and real-world data sets in comparison to other model selection strategies, such as the Hold-out method, 5-fold cross-validation, and the AIC criterion. The novelty of the strategy comes from reducing the computational time of the model selection procedure while preserving the statistical (minimax) optimality of the resulting estimator. More precisely, given a sample of size $n$, if one should choose $k$ among $\left\{ 1, \ldots, n \right\}$, and $\left\{ f^1, \ldots, f^n \right\}$ are the estimators of the regression function, the minimum discrepancy principle requires the calculation of only a fraction of the estimators, while this is not the case for generalized cross-validation, Akaike's AIC criterion, or the Lepskii principle.
http://arxiv.org/pdf/2008.08718v7
[ "Yaroslav Averyanov", "Alain Celisse" ]
2024-07-08T15:43:59Z
2020-08-20T00:13:19Z
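To make the early-stopping idea concrete, here is a heavily simplified scikit-learn sketch: iterate over $k$ and stop as soon as the training residual first reaches the noise level, so only a fraction of the $n$ candidate estimators are ever fitted. The exact stopping threshold, iteration order, and optimality conditions in the paper differ in detail, and the noise variance is assumed known or pre-estimated.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def choose_k_min_discrepancy(X, y, sigma2, k_max=None):
    """Discrepancy-principle-style choice of k for k-NN regression (sketch).

    Increase k (i.e. the amount of smoothing) and stop at the first k whose
    mean squared training residual reaches the noise level sigma2.
    """
    n = len(y)
    k_max = k_max or n
    for k in range(1, k_max + 1):
        pred = KNeighborsRegressor(n_neighbors=k).fit(X, y).predict(X)
        if np.mean((y - pred) ** 2) >= sigma2:
            return k          # only a fraction of the candidate estimators are fitted
    return k_max

# toy usage on noisy 1-D data
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.3, size=300)
print(choose_k_min_discrepancy(X, y, sigma2=0.3 ** 2))
```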
2305.16269
UDPM: Upsampling Diffusion Probabilistic Models
Denoising Diffusion Probabilistic Models (DDPM) have recently gained significant attention. DDPMs compose a Markovian process that begins in the data domain and gradually adds noise until reaching pure white noise. DDPMs generate high-quality samples from complex data distributions by defining an inverse process and training a deep neural network to learn this mapping. However, these models are inefficient because they require many diffusion steps to produce aesthetically pleasing samples. Additionally, unlike generative adversarial networks (GANs), the latent space of diffusion models is less interpretable. In this work, we propose to generalize the denoising diffusion process into an Upsampling Diffusion Probabilistic Model (UDPM). In the forward process, we reduce the latent variable dimension through downsampling, followed by the traditional noise perturbation. As a result, the reverse process gradually denoises and upsamples the latent variable to produce a sample from the data distribution. We formalize the Markovian diffusion processes of UDPM and demonstrate its generation capabilities on the popular FFHQ, AFHQv2, and CIFAR10 datasets. UDPM generates images with as few as three network evaluations, whose overall computational cost is less than a single DDPM or EDM step, while achieving an FID score of 6.86. This surpasses current state-of-the-art efficient diffusion models that use a single denoising step for sampling. Additionally, UDPM offers an interpretable and interpolable latent space, which gives it an advantage over traditional DDPMs. Our code is available online: https://github.com/shadyabh/UDPM/
http://arxiv.org/pdf/2305.16269v3
[ "Shady Abu-Hussein", "Raja Giryes" ]
2024-07-08T15:32:52Z
2023-05-25T17:25:14Z
2407.06018
Leveraging Transformers for Weakly Supervised Object Localization in Unconstrained Videos
Weakly-Supervised Video Object Localization (WSVOL) involves localizing an object in videos using only video-level labels, also referred to as tags. State-of-the-art WSVOL methods like Temporal CAM (TCAM) rely on class activation mapping (CAM) and typically require a pre-trained CNN classifier. However, their localization accuracy is affected by their tendency to minimize the mutual information between different instances of a class and exploit temporal information during training for downstream tasks, e.g., detection and tracking. In the absence of bounding box annotation, it is challenging to exploit precise information about objects from temporal cues because the model struggles to locate objects over time. To address these issues, a novel method called transformer based CAM for videos (TrCAM-V), is proposed for WSVOL. It consists of a DeiT backbone with two heads for classification and localization. The classification head is trained using standard classification loss (CL), while the localization head is trained using pseudo-labels that are extracted using a pre-trained CLIP model. From these pseudo-labels, the high and low activation values are considered to be foreground and background regions, respectively. Our TrCAM-V method allows training a localization network by sampling pseudo-pixels on the fly from these regions. Additionally, a conditional random field (CRF) loss is employed to align the object boundaries with the foreground map. During inference, the model can process individual frames for real-time localization applications. Extensive experiments on challenging YouTube-Objects unconstrained video datasets show that our TrCAM-V method achieves new state-of-the-art performance in terms of classification and localization accuracy.
http://arxiv.org/pdf/2407.06018v1
[ "Shakeeb Murtaza", "Marco Pedersoli", "Aydin Sarraf", "Eric Granger" ]
2024-07-08T15:08:41Z
2024-07-08T15:08:41Z
2305.16614
Physics-Regulated Deep Reinforcement Learning: Invariant Embeddings
This paper proposes the Phy-DRL: a physics-regulated deep reinforcement learning (DRL) framework for safety-critical autonomous systems. The Phy-DRL has three distinguished invariant-embedding designs: i) residual action policy (i.e., integrating data-driven-DRL action policy and physics-model-based action policy), ii) automatically constructed safety-embedded reward, and iii) physics-model-guided neural network (NN) editing, including link editing and activation editing. Theoretically, the Phy-DRL exhibits 1) a mathematically provable safety guarantee and 2) strict compliance of critic and actor networks with physics knowledge about the action-value function and action policy. Finally, we evaluate the Phy-DRL on a cart-pole system and a quadruped robot. The experiments validate our theoretical results and demonstrate that Phy-DRL features guaranteed safety compared to purely data-driven DRL and solely model-based design while offering remarkably fewer learning parameters and fast training towards safety guarantee.
http://arxiv.org/pdf/2305.16614v2
[ "Hongpeng Cao", "Yanbing Mao", "Lui Sha", "Marco Caccamo" ]
2024-07-08T15:08:16Z
2023-05-26T04:20:02Z
2407.06015
Simulation-based Benchmarking for Causal Structure Learning in Gene Perturbation Experiments
Causal structure learning (CSL) refers to the task of learning causal relationships from data. Advances in CSL now allow learning of causal graphs in diverse application domains, which has the potential to facilitate data-driven causal decision-making. Real-world CSL performance depends on a number of $\textit{context-specific}$ factors, including context-specific data distributions and non-linear dependencies, that are important in practical use-cases. However, our understanding of how to assess and select CSL methods in specific contexts remains limited. To address this gap, we present $\textit{CausalRegNet}$, a multiplicative effect structural causal model that allows for generating observational and interventional data incorporating context-specific properties, with a focus on the setting of gene perturbation experiments. Using real-world gene perturbation data, we show that CausalRegNet generates accurate distributions and scales far better than current simulation frameworks. We illustrate the use of CausalRegNet in assessing CSL methods in the context of interventional experiments in biology.
http://arxiv.org/pdf/2407.06015v1
[ "Luka Kovačević", "Izzy Newsham", "Sach Mukherjee", "John Whittaker" ]
2024-07-08T15:06:03Z
2024-07-08T15:06:03Z
2403.04482
On the Topology Awareness and Generalization Performance of Graph Neural Networks
Many computer vision and machine learning problems are modelled as learning tasks on graphs, where graph neural networks (GNNs) have emerged as a dominant tool for learning representations of graph-structured data. A key feature of GNNs is their use of graph structures as input, enabling them to exploit the graphs' inherent topological properties, known as the topology awareness of GNNs. Despite the empirical successes of GNNs, the influence of topology awareness on generalization performance remains unexplored, particularly for node-level tasks that diverge from the assumption of data being independent and identically distributed (IID). The precise definition and characterization of the topology awareness of GNNs, especially concerning different topological features, are still unclear. This paper introduces a comprehensive framework to characterize the topology awareness of GNNs across any topological feature. Using this framework, we investigate the effects of topology awareness on GNN generalization performance. Contrary to the prevailing belief that enhancing the topology awareness of GNNs is always advantageous, our analysis reveals a critical insight: improving the topology awareness of GNNs may inadvertently lead to unfair generalization across structural groups, which might not be desired in some scenarios. Additionally, we conduct a case study using the intrinsic graph metric, the shortest-path distance, on various benchmark datasets. The empirical results of this case study confirm our theoretical insights. Moreover, we demonstrate the practical applicability of our framework by using it to tackle the cold-start problem in graph active learning.
http://arxiv.org/pdf/2403.04482v2
[ "Junwei Su", "Chuan Wu" ]
2024-07-08T14:49:14Z
2024-03-07T13:33:30Z
2407.04538
PDiscoFormer: Relaxing Part Discovery Constraints with Vision Transformers
Computer vision methods that explicitly detect object parts and reason on them are a step towards inherently interpretable models. Existing approaches that perform part discovery driven by a fine-grained classification task make very restrictive assumptions on the geometric properties of the discovered parts; they should be small and compact. Although this prior is useful in some cases, in this paper we show that pre-trained transformer-based vision models, such as self-supervised DINOv2 ViT, enable the relaxation of these constraints. In particular, we find that a total variation (TV) prior, which allows for multiple connected components of any size, substantially outperforms previous work. We test our approach on three fine-grained classification benchmarks: CUB, PartImageNet and Oxford Flowers, and compare our results to previously published methods as well as a re-implementation of the state-of-the-art method PDiscoNet with a transformer-based backbone. We consistently obtain substantial improvements across the board, both on part discovery metrics and the downstream classification task, showing that the strong inductive biases in self-supervised ViT models call for rethinking the geometric priors that can be used for unsupervised part discovery.
http://arxiv.org/pdf/2407.04538v2
[ "Ananthu Aniraj", "Cassio F. Dantas", "Dino Ienco", "Diego Marcos" ]
2024-07-08T14:44:06Z
2024-07-05T14:24:37Z
2407.05986
KidSat: satellite imagery to map childhood poverty dataset and benchmark
Satellite imagery has emerged as an important tool to analyse demographic, health, and development indicators. While various deep learning models have been built for these tasks, each is specific to a particular problem, with few standard benchmarks available. We propose a new dataset pairing satellite imagery and high-quality survey data on child poverty to benchmark satellite feature representations. Our dataset consists of 33,608 images, each 10 km $\times$ 10 km, from 19 countries in Eastern and Southern Africa in the time period 1997-2022. As defined by UNICEF, multidimensional child poverty covers six dimensions and it can be calculated from the face-to-face Demographic and Health Surveys (DHS) Program. As part of the benchmark, we test spatial as well as temporal generalization, by testing on unseen locations, and on data after the training years. Using our dataset we benchmark multiple models, from low-level satellite imagery models such as MOSAIKS, to deep learning foundation models, which include both generic vision models such as Self-Distillation with no Labels (DINOv2) models and specific satellite imagery models such as SatMAE. We provide open source code for building the satellite dataset, obtaining ground truth data from DHS and running various models assessed in our work.
http://arxiv.org/pdf/2407.05986v1
[ "Makkunda Sharma", "Fan Yang", "Duy-Nhat Vo", "Esra Suel", "Swapnil Mishra", "Samir Bhatt", "Oliver Fiala", "William Rudgard", "Seth Flaxman" ]
2024-07-08T14:26:30Z
2024-07-08T14:26:30Z
2406.18682
The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm
A key concern with the concept of "alignment" is the implicit question of "alignment to what?". AI systems are increasingly used across the world, yet safety alignment is often focused on homogeneous monolingual settings. Additionally, preference training and safety measures often overfit to harms common in Western-centric datasets. Here, we explore the viability of different alignment approaches when balancing dual objectives: addressing and optimizing for a non-homogeneous set of languages and cultural preferences while minimizing both global and local harms. We collect the first set of human annotated red-teaming prompts in different languages distinguishing between global and local harm, which serve as a laboratory for understanding the reliability of alignment techniques when faced with preference distributions that are non-stationary across geographies and languages. While this setting is seldom covered by the literature to date, which primarily centers on English harm mitigation, it captures real-world interactions with AI systems around the world. We establish a new precedent for state-of-the-art alignment techniques across 6 languages with minimal degradation in general performance. Our work provides important insights into cross-lingual transfer and novel optimization approaches to safeguard AI systems designed to serve global populations.
http://arxiv.org/pdf/2406.18682v2
[ "Aakanksha", "Arash Ahmadian", "Beyza Ermis", "Seraphina Goldfarb-Tarrant", "Julia Kreutzer", "Marzieh Fadaee", "Sara Hooker" ]
2024-07-08T14:26:16Z
2024-06-26T18:39:08Z
2407.05982
MTL-Split: Multi-Task Learning for Edge Devices using Split Computing
Split Computing (SC), where a Deep Neural Network (DNN) is intelligently split, with a part of it deployed on an edge device and the rest on a remote server, is emerging as a promising approach. It allows the power of DNNs to be leveraged for latency-sensitive applications that cannot deploy the entire DNN remotely, yet lack sufficient computational bandwidth to run it locally. In many such embedded systems scenarios, such as those in the automotive domain, computational resource constraints also necessitate Multi-Task Learning (MTL), where the same DNN is used for multiple inference tasks instead of having dedicated DNNs for each task, which would need more computing bandwidth. However, how to partition such a multi-tasking DNN to be deployed within an SC framework has not been sufficiently studied. This paper studies this problem, and MTL-Split, our novel proposed architecture, shows encouraging results on both synthetic and real-world data. The source code is available at https://github.com/intelligolabs/MTL-Split.
http://arxiv.org/pdf/2407.05982v1
[ "Luigi Capogrosso", "Enrico Fraccaroli", "Samarjit Chakraborty", "Franco Fummi", "Marco Cristani" ]
2024-07-08T14:25:39Z
2024-07-08T14:25:39Z
2401.12196
Learning Dynamics from Multicellular Graphs with Deep Neural Networks
Multicellular self-assembly into functional structures is a dynamic process that is critical in development and disease, including embryo development, organ formation, and tumor invasion. Being able to infer collective cell migratory dynamics from a static configuration is valuable for both understanding and predicting these complex processes. However, the identification of structural features that can indicate multicellular motion has been difficult, and existing metrics largely rely on physical intuition. Here we show that using a graph neural network (GNN), the motion of multicellular collectives can be inferred from a static snapshot of cell positions, in both experimental and synthetic datasets.
http://arxiv.org/pdf/2401.12196v2
[ "Haiqian Yang", "Florian Meyer", "Shaoxun Huang", "Liu Yang", "Cristiana Lungu", "Monilola A. Olayioye", "Markus J. Buehler", "Ming Guo" ]
2024-07-08T14:24:40Z
2024-01-22T18:36:29Z
2205.15745
HyperMAML: Few-Shot Adaptation of Deep Models with Hypernetworks
The aim of Few-Shot learning methods is to train models which can easily adapt to previously unseen tasks, based on small amounts of data. One of the most popular and elegant Few-Shot learning approaches is Model-Agnostic Meta-Learning (MAML). The main idea behind this method is to learn the general weights of the meta-model, which are further adapted to specific problems in a small number of gradient steps. However, the model's main limitation lies in the fact that the update procedure is realized by gradient-based optimisation. As a consequence, MAML cannot always modify the weights to the required extent in one or even a few gradient iterations. On the other hand, using many gradient steps results in a complex and time-consuming optimization procedure, which is hard to train in practice, and may lead to overfitting. In this paper, we propose HyperMAML, a novel generalization of MAML, where the training of the update procedure is also part of the model. Namely, in HyperMAML, instead of updating the weights with gradient descent, we use a trainable Hypernetwork for this purpose. Consequently, in this framework, the model can generate significant updates whose range is not limited to a fixed number of gradient steps. Experiments show that HyperMAML consistently outperforms MAML and performs comparably to other state-of-the-art techniques in a number of standard Few-Shot learning benchmarks.
http://arxiv.org/pdf/2205.15745v3
[ "M. Przewięźlikowski", "P. Przybysz", "J. Tabor", "M. Zięba", "P. Spurek" ]
2024-07-08T14:21:59Z
2022-05-31T12:31:21Z
2407.05973
Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise
The robustness of supervised deep learning-based medical image classification is significantly undermined by label noise. Although several methods have been proposed to enhance classification performance in the presence of noisy labels, they face some challenges: 1) a struggle with class-imbalanced datasets, leading to the frequent overlooking of minority classes as noisy samples; 2) a singular focus on maximizing performance using noisy datasets, without incorporating experts-in-the-loop for actively cleaning the noisy labels. To mitigate these challenges, we propose a two-phase approach that combines Learning with Noisy Labels (LNL) and active learning. This approach not only improves the robustness of medical image classification in the presence of noisy labels, but also iteratively improves the quality of the dataset by relabeling the important incorrect labels, under a limited annotation budget. Furthermore, we introduce a novel Variance of Gradients approach in the LNL phase, which complements the loss-based sample selection by also sampling under-represented samples. Using two imbalanced noisy medical classification datasets, we demonstrate that our proposed technique is superior to its predecessors at handling class imbalance by not misidentifying clean samples from minority classes as mostly noisy samples.
http://arxiv.org/pdf/2407.05973v1
[ "Bidur Khanal", "Tianhong Dai", "Binod Bhattarai", "Cristian Linte" ]
2024-07-08T14:16:05Z
2024-07-08T14:16:05Z
2407.05966
On Bellman equations for continuous-time policy evaluation I: discretization and approximation
We study the problem of computing the value function from a discretely-observed trajectory of a continuous-time diffusion process. We develop a new class of algorithms based on easily implementable numerical schemes that are compatible with discrete-time reinforcement learning (RL) with function approximation. We establish high-order numerical accuracy as well as the approximation error guarantees for the proposed approach. In contrast to discrete-time RL problems where the approximation factor depends on the effective horizon, we obtain a bounded approximation factor using the underlying elliptic structures, even if the effective horizon diverges to infinity.
http://arxiv.org/pdf/2407.05966v1
[ "Wenlong Mou", "Yuhua Zhu" ]
2024-07-08T14:05:03Z
2024-07-08T14:05:03Z
2407.05965
T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models
The recent development of Sora has led to a new era in text-to-video (T2V) generation. Along with this comes rising concern about its security risks. The generated videos may contain illegal or unethical content, and there is a lack of comprehensive quantitative understanding of their safety, posing a challenge to their reliability and practical deployment. Previous evaluations primarily focus on the quality of video generation. While some evaluations of text-to-image models have considered safety, they cover fewer aspects and do not address the unique temporal risk inherent in video generation. To bridge this research gap, we introduce T2VSafetyBench, a new benchmark designed for conducting safety-critical assessments of text-to-video models. We define 12 critical aspects of video generation safety and construct a malicious prompt dataset using LLMs and jailbreaking prompt attacks. Based on our evaluation results, we draw several important findings, including: 1) no single model excels in all aspects, with different models showing various strengths; 2) the correlation between GPT-4 assessments and manual reviews is generally high; 3) there is a trade-off between the usability and safety of text-to-video generative models. This indicates that as the field of video generation rapidly advances, safety risks are set to surge, highlighting the urgency of prioritizing video safety. We hope that T2VSafetyBench can provide insights for better understanding the safety of video generation in the era of generative AI.
http://arxiv.org/pdf/2407.05965v1
[ "Yibo Miao", "Yifan Zhu", "Yinpeng Dong", "Lijia Yu", "Jun Zhu", "Xiao-Shan Gao" ]
2024-07-08T14:04:58Z
2024-07-08T14:04:58Z
2307.10246
Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey)
Can we obtain insights about the brain using AI models? How is the information in deep learning models related to brain recordings? Can we improve AI models with the help of brain recordings? Such questions can be tackled by studying brain recordings like functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, pictures, and movies. Encoding and decoding models using these datasets have also been proposed in the past two decades. These models serve as additional tools for basic cognitive science and neuroscience research. Encoding models aim at generating fMRI brain representations given a stimulus automatically. They have several practical applications in evaluating and diagnosing neurological conditions and thus may also help design therapies for brain damage. Decoding models solve the inverse problem of reconstructing the stimuli given the fMRI. They are useful for designing brain-machine or brain-computer interfaces. Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have been recently proposed. In this survey, we will first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. Further, we will review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we will conclude with a summary and discussion about future trends. Given the large amount of recently published work in the computational cognitive neuroscience (CCN) community, we believe that this survey enables an entry point for DNN researchers to diversify into CCN research.
http://arxiv.org/pdf/2307.10246v2
[ "Subba Reddy Oota", "Zijiao Chen", "Manish Gupta", "Raju S. Bapi", "Gael Jobard", "Frederic Alexandre", "Xavier Hinaut" ]
2024-07-08T13:44:56Z
2023-07-17T06:54:36Z
2407.05934
Graph Anomaly Detection with Noisy Labels by Reinforcement Learning
Graph anomaly detection (GAD) has been widely applied in many areas, e.g., fraud detection in finance and robot accounts in social networks. Existing methods are dedicated to identifying the outlier nodes that deviate from normal ones. However, they rely heavily on high-quality annotations, which are hard to obtain in real-world scenarios, and their performance can degrade severely under noisy labels. Thus, we are motivated to cut the edges of suspicious nodes to alleviate the impact of noise. However, it remains difficult to precisely identify the nodes with noisy labels. Moreover, it is hard to quantitatively evaluate the regret of cutting the edges, which may have either positive or negative influences. To this end, we propose a novel framework REGAD, i.e., REinforced Graph Anomaly Detector. Specifically, we aim to maximize the performance improvement (AUC) of a base detector by cutting noisy edges approximated through the nodes with high-confidence labels. (i) We design a tailored action and search space to train a policy network to carefully prune edges step by step, where only a few suspicious edges are prioritized in each step. (ii) We design a policy-in-the-loop mechanism to iteratively optimize the policy based on the feedback from the base detector. The overall performance is evaluated by the cumulative rewards. Extensive experiments are conducted on three datasets under different anomaly ratios. The results indicate the superior performance of our proposed REGAD.
http://arxiv.org/pdf/2407.05934v1
[ "Zhu Wang", "Shuang Zhou", "Junnan Dong", "Chang Yang", "Xiao Huang", "Shengjie Zhao" ]
2024-07-08T13:41:21Z
2024-07-08T13:41:21Z
2406.16426
Fault Detection for agents on power grid topology optimization: A Comprehensive analysis
The topology optimization of transmission networks using Deep Reinforcement Learning (DRL) has increasingly come into focus. Various researchers have proposed different DRL agents, which are often benchmarked on the Grid2Op environment from the Learning to Run a Power Network (L2RPN) challenges. The environments have many advantages with their realistic chronics and underlying power flow backends. However, the interpretation of agent survival or failure is not always clear, as there are a variety of potential causes. In this work, we focus on the failures of the power grid to identify patterns and detect them a priori. We collect the failed chronics of three different agents on the WCCI 2022 L2RPN environment, totaling about 40k data points. By clustering, we are able to detect five distinct clusters, identifying different failure types. Further, we propose a multi-class prediction approach to detect failures beforehand and evaluate five different models. Here, the Light Gradient-Boosting Machine (LightGBM) shows the best performance, with an accuracy of 86%. It also correctly identifies failure and survival observations 91% of the time. Finally, we provide a detailed feature importance analysis that identifies critical features and regions in the grid.
http://arxiv.org/pdf/2406.16426v2
[ "Malte Lehna", "Mohamed Hassouna", "Dmitry Degtyar", "Sven Tomforde", "Christoph Scholz" ]
2024-07-08T13:35:12Z
2024-06-24T08:20:43Z
2407.05921
TAPVid-3D: A Benchmark for Tracking Any Point in 3D
We introduce a new benchmark, TAPVid-3D, for evaluating the task of long-range Tracking Any Point in 3D (TAP-3D). While point tracking in two dimensions (TAP) has many benchmarks measuring performance on real-world videos, such as TAPVid-DAVIS, three-dimensional point tracking has none. To this end, leveraging existing footage, we build a new benchmark for 3D point tracking featuring 4,000+ real-world videos, composed of three different data sources spanning a variety of object types, motion patterns, and indoor and outdoor environments. To measure performance on the TAP-3D task, we formulate a collection of metrics that extend the Jaccard-based metric used in TAP to handle the complexities of ambiguous depth scales across models, occlusions, and multi-track spatio-temporal smoothness. We manually verify a large sample of trajectories to ensure correct video annotations, and assess the current state of the TAP-3D task by constructing competitive baselines using existing tracking models. We anticipate this benchmark will serve as a guidepost to improve our ability to understand precise 3D motion and surface deformation from monocular video. Code for dataset download, generation, and model evaluation is available at https://tapvid3d.github.io
http://arxiv.org/pdf/2407.05921v1
[ "Skanda Koppula", "Ignacio Rocco", "Yi Yang", "Joe Heyward", "João Carreira", "Andrew Zisserman", "Gabriel Brostow", "Carl Doersch" ]
2024-07-08T13:28:47Z
2024-07-08T13:28:47Z
2407.05920
LPGD: A General Framework for Backpropagation through Embedded Optimization Layers
Embedding parameterized optimization problems as layers into machine learning architectures serves as a powerful inductive bias. Training such architectures with stochastic gradient descent requires care, as degenerate derivatives of the embedded optimization problem often render the gradients uninformative. We propose Lagrangian Proximal Gradient Descent (LPGD), a flexible framework for training architectures with embedded optimization layers that seamlessly integrates into automatic differentiation libraries. LPGD efficiently computes meaningful replacements of the degenerate optimization layer derivatives by re-running the forward solver oracle on a perturbed input. LPGD captures various previously proposed methods as special cases, while fostering deep links to traditional optimization methods. We theoretically analyze our method and demonstrate on historical and synthetic data that LPGD converges faster than gradient descent even in a differentiable setup.
http://arxiv.org/pdf/2407.05920v1
[ "Anselm Paulus", "Georg Martius", "Vít Musil" ]
2024-07-08T13:27:41Z
2024-07-08T13:27:41Z
2407.05919
Fostering Trust and Quantifying Value of AI and ML
Artificial Intelligence (AI) and Machine Learning (ML) providers have a responsibility to develop valid and reliable systems. Much has been discussed about trusting AI and ML inferences (the process of running live data through a trained AI model to make a prediction or solve a task), but little has been done to define what that means. Those in the space of ML-based products are familiar with topics such as transparency, explainability, safety, bias, and so forth. Yet, there are no frameworks to quantify and measure those. Producing ever more trustworthy machine learning inferences is a path to increase the value of products (i.e., increased trust in the results) and to engage in conversations with users to gather feedback to improve products. In this paper, we begin by examining the dynamic of trust between a provider (Trustor) and users (Trustees). Trustors are required to be trusting and trustworthy, whereas trustees need not be trusting nor trustworthy. The challenge for trustors is to provide results that are good enough to make a trustee increase their level of trust above a minimum threshold for: 1- doing business together; 2- continuation of service. We conclude by defining and proposing a framework, and a set of viable metrics, to be used for computing a trust score and objectively understand how trustworthy a machine learning system can claim to be, plus their behavior over time.
http://arxiv.org/pdf/2407.05919v1
[ "Dalmo Cirne", "Veena Calambur" ]
2024-07-08T13:25:28Z
2024-07-08T13:25:28Z
2407.01848
UniFIDES: Universal Fractional Integro-Differential Equation Solvers
The development of data-driven approaches for solving differential equations has been followed by a plethora of applications in science and engineering across a multitude of disciplines and remains a central focus of active scientific inquiry. However, a large body of natural phenomena incorporates memory effects that are best described via fractional integro-differential equations (FIDEs), in which the integral or differential operators accept non-integer orders. Addressing the challenges posed by nonlinear FIDEs is a recognized difficulty, necessitating the application of generic methods with immediate practical relevance. This work introduces the Universal Fractional Integro-Differential Equation Solvers (UniFIDES), a comprehensive machine learning platform designed to expeditiously solve a variety of FIDEs in both forward and inverse directions, without the need for ad hoc manipulation of the equations. The effectiveness of UniFIDES is demonstrated through a collection of integer-order and fractional problems in science and engineering. Our results highlight UniFIDES' ability to accurately solve a wide spectrum of integro-differential equations and offer the prospect of using machine learning platforms universally for discovering and describing dynamical and complex systems.
http://arxiv.org/pdf/2407.01848v2
[ "Milad Saadat", "Deepak Mangal", "Safa Jamali" ]
2024-07-08T13:18:17Z
2024-07-01T23:16:34Z
2407.05895
Link Representation Learning for Probabilistic Travel Time Estimation
Travel time estimation is a crucial application in navigation apps and web mapping services. Current deterministic and probabilistic methods primarily focus on modeling individual trips, assuming independence among trips. However, in real-world scenarios, we often observe strong inter-trip correlations due to factors such as weather conditions, traffic management, and road works. In this paper, we propose to model trip-level link travel time using a Gaussian hierarchical model, which can characterize both inter-trip and intra-trip correlations. The joint distribution of travel time of multiple trips becomes a multivariate Gaussian parameterized by learnable link representations. To effectively use the sparse GPS trajectories, we also propose a data augmentation method based on trip sub-sampling, which allows for fine-grained gradient backpropagation in learning link representations. During inference, we estimate the probability distribution of the travel time of a queried trip conditional on the completed trips that are spatiotemporally adjacent. We refer to the overall framework as ProbTTE. We evaluate ProbTTE on two real-world GPS trajectory datasets, and the results demonstrate its superior performance compared to state-of-the-art deterministic and probabilistic baselines. Additionally, we find that the learned link representations align well with the physical geometry of the network, making them suitable as input for other applications.
http://arxiv.org/pdf/2407.05895v1
[ "Chen Xu", "Qiang Wang", "Lijun Sun" ]
2024-07-08T13:01:53Z
2024-07-08T13:01:53Z
2307.10352
Properties of Discrete Sliced Wasserstein Losses
The Sliced Wasserstein (SW) distance has become a popular alternative to the Wasserstein distance for comparing probability measures. Widespread applications include image processing, domain adaptation and generative modelling, where it is common to optimise some parameters in order to minimise SW, which serves as a loss function between discrete probability measures (since measures admitting densities are numerically unattainable). All these optimisation problems bear the same sub-problem, which is minimising the Sliced Wasserstein energy. In this paper we study the properties of $\mathcal{E}: Y \longmapsto \mathrm{SW}_2^2(\gamma_Y, \gamma_Z)$, i.e. the SW distance between two uniform discrete measures with the same amount of points as a function of the support $Y \in \mathbb{R}^{n \times d}$ of one of the measures. We investigate the regularity and optimisation properties of this energy, as well as its Monte-Carlo approximation $\mathcal{E}_p$ (estimating the expectation in SW using only $p$ samples) and show convergence results on the critical points of $\mathcal{E}_p$ to those of $\mathcal{E}$, as well as an almost-sure uniform convergence and a uniform Central Limit result on the process $\mathcal{E}_p(Y)$. Finally, we show that in a certain sense, Stochastic Gradient Descent methods minimising $\mathcal{E}$ and $\mathcal{E}_p$ converge towards (Clarke) critical points of these energies.
http://arxiv.org/abs/2307.10352v6
[ "Eloi Tanguy", "Rémi Flamary", "Julie Delon" ]
2024-07-08T12:52:12Z
2023-07-19T21:21:18Z
2407.05887
Generation and De-Identification of Indian Clinical Discharge Summaries using LLMs
The consequences of a healthcare data breach can be devastating for the patients, providers, and payers. The average financial impact of a data breach in recent months has been estimated to be close to USD 10 million. This is especially significant for healthcare organizations in India that are managing rapid digitization while still establishing data governance procedures that align with the letter and spirit of the law. Computer-based systems for de-identification of personal information are vulnerable to data drift, often rendering them ineffective in cross-institution settings. Therefore, a rigorous assessment of existing de-identification against local health datasets is imperative to support the safe adoption of digital health initiatives in India. Using a small set of de-identified patient discharge summaries provided by an Indian healthcare institution, in this paper, we report the nominal performance of de-identification algorithms (based on language models) trained on publicly available non-Indian datasets, pointing towards a lack of cross-institutional generalization. Similarly, experimentation with off-the-shelf de-identification systems reveals potential risks associated with the approach. To overcome data scarcity, we explore generating synthetic clinical reports (using publicly available and Indian summaries) by performing in-context learning over Large Language Models (LLMs). Our experiments demonstrate the use of generated reports as an effective strategy for creating high-performing de-identification systems with good generalization capabilities.
http://arxiv.org/pdf/2407.05887v1
[ "Sanjeet Singh", "Shreya Gupta", "Niralee Gupta", "Naimish Sharma", "Lokesh Srivastava", "Vibhu Agarwal", "Ashutosh Modi" ]
2024-07-08T12:47:03Z
2024-07-08T12:47:03Z
2407.05876
Efficiently Training Neural Networks for Imperfect Information Games by Sampling Information Sets
In imperfect information games, the evaluation of a game state not only depends on the observable world but also relies on hidden parts of the environment. As accessing the obstructed information trivialises state evaluations, one approach to tackle such problems is to estimate the value of the imperfect state as a combination of all states in the information set, i.e., all possible states that are consistent with the current imperfect information. In this work, the goal is to learn a function that maps from the imperfect game information state to its expected value. However, constructing a perfect training set, i.e. an enumeration of the whole information set for numerous imperfect states, is often infeasible. To compute the expected values for an imperfect information game like \textit{Reconnaissance Blind Chess}, one would need to evaluate thousands of chess positions just to obtain the training target for a single state. Still, the expected value of a state can already be approximated with appropriate accuracy from a much smaller set of evaluations. Thus, in this paper, we empirically investigate how a budget of perfect information game evaluations should be distributed among training samples to maximise the return. Our results show that sampling a small number of states, in our experiments roughly 3, for a larger number of separate positions is preferable over repeatedly sampling a smaller quantity of states. Thus, we find that in our case, the quantity of different samples seems to be more important than higher target quality.
http://arxiv.org/pdf/2407.05876v1
[ "Timo Bertram", "Johannes Fürnkranz", "Martin Müller" ]
2024-07-08T12:37:07Z
2024-07-08T12:37:07Z
2407.05872
Scaling Exponents Across Parameterizations and Optimizers
Robust and effective scaling of models from small to large width typically requires the precise adjustment of many algorithmic and architectural details, such as parameterization and optimizer choices. In this work, we propose a new perspective on parameterization by investigating a key assumption in prior work about the alignment between parameters and data and derive new theoretical results under weaker assumptions and a broader set of optimizers. Our extensive empirical investigation includes tens of thousands of models trained with all combinations of three optimizers, four parameterizations, several alignment assumptions, more than a dozen learning rates, and fourteen model sizes up to 26.8B parameters. We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work. Our results show that all parameterizations, not just maximal update parameterization (muP), can achieve hyperparameter transfer; moreover, our novel per-layer learning rate prescription for standard parameterization outperforms muP. Finally, we demonstrate that an overlooked aspect of parameterization, the epsilon parameter in Adam, must be scaled correctly to avoid gradient underflow and propose Adam-atan2, a new numerically stable, scale-invariant version of Adam that eliminates the epsilon hyperparameter entirely.
http://arxiv.org/pdf/2407.05872v1
[ "Katie Everett", "Lechao Xiao", "Mitchell Wortsman", "Alexander A. Alemi", "Roman Novak", "Peter J. Liu", "Izzeddin Gur", "Jascha Sohl-Dickstein", "Leslie Pack Kaelbling", "Jaehoon Lee", "Jeffrey Pennington" ]
2024-07-08T12:32:51Z
2024-07-08T12:32:51Z
2407.05864
Neural Network-based Information Set Weighting for Playing Reconnaissance Blind Chess
In imperfect information games, the game state is generally not fully observable to players. Therefore, good gameplay requires policies that deal with the different information that is hidden from each player. To combat this, effective algorithms often reason about information sets; the sets of all possible game states that are consistent with a player's observations. While there is no way to distinguish between the states within an information set, this property does not imply that all states are equally likely to occur in play. We extend previous research on assigning weights to the states in an information set in order to facilitate better gameplay in the imperfect information game of Reconnaissance Blind Chess. For this, we train two different neural networks which estimate the likelihood of each state in an information set from historical game data. Experimentally, we find that a Siamese neural network is able to achieve higher accuracy and is more efficient than a classical convolutional neural network for the given domain. Finally, we evaluate an RBC-playing agent that is based on the generated weightings and compare different parameter settings that influence how strongly it should rely on them. The resulting best player is ranked 5th on the public leaderboard.
http://arxiv.org/abs/2407.05864v1
[ "Timo Bertram", "Johannes Fürnkranz", "Martin Müller" ]
2024-07-08T12:29:29Z
2024-07-08T12:29:29Z
2202.01602
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
As various post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to develop a deeper understanding of if and when the explanations output by these methods disagree with each other, and how such disagreements are resolved in practice. However, there is little to no research that provides answers to these critical questions. In this work, we introduce and study the disagreement problem in explainable machine learning. More specifically, we formalize the notion of disagreement between explanations, analyze how often such disagreements occur in practice, and how practitioners resolve these disagreements. We first conduct interviews with data scientists to understand what constitutes disagreement between explanations generated by different methods for the same model prediction and introduce a novel quantitative framework to formalize this understanding. We then leverage this framework to carry out a rigorous empirical analysis with four real-world datasets, six state-of-the-art post hoc explanation methods, and six different predictive models, to measure the extent of disagreement between the explanations generated by various popular explanation methods. In addition, we carry out an online user study with data scientists to understand how they resolve the aforementioned disagreements. Our results indicate that (1) state-of-the-art explanation methods often disagree in terms of the explanations they output, and (2) machine learning practitioners often employ ad hoc heuristics when resolving such disagreements. These findings suggest that practitioners may be relying on misleading explanations when making consequential decisions. They also underscore the importance of developing principled frameworks for effectively evaluating and comparing explanations output by various explanation techniques.
http://arxiv.org/pdf/2202.01602v4
[ "Satyapriya Krishna", "Tessa Han", "Alex Gu", "Steven Wu", "Shahin Jabbari", "Himabindu Lakkaraju" ]
2024-07-08T12:11:38Z
2022-02-03T14:19:23Z
2402.09132
Exploring the Adversarial Capabilities of Large Language Models
The proliferation of large language models (LLMs) has sparked widespread and general interest due to their strong language generation capabilities, offering great potential for both industry and research. While previous research delved into the security and privacy issues of LLMs, the extent to which these models can exhibit adversarial behavior remains largely unexplored. Addressing this gap, we investigate whether common publicly available LLMs have inherent capabilities to perturb text samples to fool safety measures, i.e., so-called adversarial examples or attacks. More specifically, we investigate whether LLMs are inherently able to craft adversarial examples out of benign samples to fool existing safety rails. Our experiments, which focus on hate speech detection, reveal that LLMs succeed in finding adversarial perturbations, effectively undermining hate speech detection systems. Our findings carry significant implications for (semi-)autonomous systems relying on LLMs, highlighting potential challenges in their interaction with existing systems and safety measures.
http://arxiv.org/pdf/2402.09132v4
[ "Lukas Struppek", "Minh Hieu Le", "Dominik Hintersdorf", "Kristian Kersting" ]
2024-07-08T12:10:58Z
2024-02-14T12:28:38Z
2310.06549
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
Label smoothing -- using softened labels instead of hard ones -- is a widely adopted regularization method for deep learning, showing diverse benefits such as enhanced generalization and calibration. Its implications for preserving model privacy, however, have remained unexplored. To fill this gap, we investigate the impact of label smoothing on model inversion attacks (MIAs), which aim to generate class-representative samples by exploiting the knowledge encoded in a classifier, thereby inferring sensitive information about its training data. Through extensive analyses, we uncover that traditional label smoothing fosters MIAs, thereby increasing a model's privacy leakage. Even more, we reveal that smoothing with negative factors counters this trend, impeding the extraction of class-related information and leading to privacy preservation, beating state-of-the-art defenses. This establishes a practical and powerful novel way for enhancing model resilience against MIAs.
http://arxiv.org/pdf/2310.06549v5
[ "Lukas Struppek", "Dominik Hintersdorf", "Kristian Kersting" ]
2024-07-08T12:05:50Z
2023-10-10T11:51:12Z
2407.07922
Vulnerability Detection in Smart Contracts: A Comprehensive Survey
In the growing field of blockchain technology, smart contracts exist as transformative digital agreements that execute transactions autonomously in decentralised networks. However, these contracts face challenges in the form of security vulnerabilities, posing significant financial and operational risks. While traditional methods to detect and mitigate vulnerabilities in smart contracts are limited due to a lack of comprehensiveness and effectiveness, integrating advanced machine learning technologies presents an attractive approach to increasing effective vulnerability countermeasures. We endeavour to fill an important gap in the existing literature by conducting a rigorous systematic review, exploring the intersection between machine learning and smart contracts. Specifically, the study examines the potential of machine learning techniques to improve the detection and mitigation of vulnerabilities in smart contracts. We analysed 88 articles published between 2018 and 2023 from the following databases: IEEE, ACM, ScienceDirect, Scopus, and Google Scholar. The findings reveal that classical machine learning techniques, including KNN, RF, DT, XG-Boost, and SVM, outperform static tools in vulnerability detection. Moreover, multi-model approaches integrating deep learning and classical machine learning show significant improvements in precision and recall, while hybrid models employing various techniques achieve near-perfect performance in vulnerability detection accuracy. By integrating state-of-the-art solutions, this work synthesises current methods, thoroughly investigates research gaps, and suggests directions for future studies. The insights gathered from this study are intended to serve as a seminal reference for academics, industry experts, and bodies interested in leveraging machine learning to enhance smart contract security.
http://arxiv.org/pdf/2407.07922v1
[ "Christopher De Baets", "Basem Suleiman", "Armin Chitizadeh", "Imran Razzak" ]
2024-07-08T11:51:15Z
2024-07-08T11:51:15Z
2407.05841
An Empirical Comparison of Vocabulary Expansion and Initialization Approaches for Language Models
Language Models (LMs) excel in natural language processing tasks for English but show reduced performance in most other languages. This problem is commonly tackled by continually pre-training and fine-tuning these models for said languages. A significant issue in this process is the limited vocabulary coverage in the original model's tokenizer, leading to inadequate representation of new languages and necessitating an expansion of the tokenizer. The initialization of the embeddings corresponding to new vocabulary items presents a further challenge. Current strategies require cross-lingual embeddings and lack a solid theoretical foundation as well as comparisons with strong baselines. In this paper, we first establish theoretically that initializing within the convex hull of existing embeddings is a good initialization, followed by a novel but simple approach, Constrained Word2Vec (CW2V), which does not require cross-lingual embeddings. Our study evaluates different initialization methods for expanding RoBERTa and LLaMA 2 across four languages and five tasks. The results show that CW2V performs as well as or even better than more advanced techniques. Additionally, simpler approaches like multivariate initialization perform on par with these advanced methods, indicating that efficient large-scale multilingual continued pretraining can be achieved even with simpler initialization methods.
http://arxiv.org/pdf/2407.05841v1
[ "Nandini Mundra", "Aditya Nanda Kishore", "Raj Dabre", "Ratish Puduppully", "Anoop Kunchukuttan", "Mitesh M. Khapra" ]
2024-07-08T11:38:49Z
2024-07-08T11:38:49Z
2405.10040
SynthesizRR: Generating Diverse Datasets with Retrieval Augmentation
It is often desirable to distill the capabilities of large language models (LLMs) into smaller student models due to compute and memory constraints. One way to do this for classification tasks is via dataset synthesis, which can be accomplished by generating examples of each label from the LLM. Prior approaches to synthesis use few-shot prompting, which relies on the LLM's parametric knowledge to generate usable examples. However, this leads to issues of repetition, bias towards popular entities, and stylistic differences from human text. In this work, we propose Synthesize by Retrieval and Refinement (SynthesizRR), which uses retrieval augmentation to introduce variety into the dataset synthesis process: as retrieved passages vary, the LLM is seeded with different content to generate its examples. We empirically study the synthesis of six datasets, covering topic classification, sentiment analysis, tone detection, and humor, requiring complex synthesis strategies. We find that SynthesizRR greatly improves lexical and semantic diversity, similarity to human-written text, and distillation performance, when compared to 32-shot prompting and four prior approaches. We release our extensive codebase at https://github.com/amazon-science/synthesizrr
http://arxiv.org/pdf/2405.10040v2
[ "Abhishek Divekar", "Greg Durrett" ]
2024-07-08T11:20:42Z
2024-05-16T12:22:41Z
2309.10301
Prominent Roles of Conditionally Invariant Components in Domain Adaptation: Theory and Algorithms
Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to evaluate the model. While many DA algorithms have demonstrated considerable empirical success, blindly applying these algorithms can often lead to worse performance on new datasets. To address this, it is crucial to clarify the assumptions under which a DA algorithm has good target performance. In this work, we focus on the assumption of the presence of conditionally invariant components (CICs), which are relevant for prediction and remain conditionally invariant across the source and target data. We demonstrate that CICs, which can be estimated through conditional invariant penalty (CIP), play three prominent roles in providing target risk guarantees in DA. First, we propose a new algorithm based on CICs, importance-weighted conditional invariant penalty (IW-CIP), which has target risk guarantees beyond simple settings such as covariate shift and label shift. Second, we show that CICs help identify large discrepancies between source and target risks of other DA algorithms. Finally, we demonstrate that incorporating CICs into the domain invariant projection (DIP) algorithm can address its failure scenario caused by label-flipping features. We support our new algorithms and theoretical findings via numerical experiments on synthetic data, MNIST, CelebA, Camelyon17, and DomainNet datasets.
http://arxiv.org/pdf/2309.10301v2
[ "Keru Wu", "Yuansi Chen", "Wooseok Ha", "Bin Yu" ]
2024-07-08T11:11:51Z
2023-09-19T04:04:59Z
2311.06322
Post-training Quantization for Text-to-Image Diffusion Models with Progressive Calibration and Activation Relaxing
High computational overhead is a troublesome problem for diffusion models. Recent studies have leveraged post-training quantization (PTQ) to compress diffusion models. However, most of them only focus on unconditional models, leaving the quantization of widely-used pretrained text-to-image models, e.g., Stable Diffusion, largely unexplored. In this paper, we propose a novel post-training quantization method PCR (Progressive Calibration and Relaxing) for text-to-image diffusion models, which consists of a progressive calibration strategy that considers the accumulated quantization error across timesteps, and an activation relaxing strategy that improves the performance with negligible cost. Additionally, we demonstrate the previous metrics for text-to-image diffusion model quantization are not accurate due to the distribution gap. To tackle the problem, we propose a novel QDiffBench benchmark, which utilizes data in the same domain for more accurate evaluation. Besides, QDiffBench also considers the generalization performance of the quantized model outside the calibration dataset. Extensive experiments on Stable Diffusion and Stable Diffusion XL demonstrate the superiority of our method and benchmark. Moreover, we are the first to achieve quantization for Stable Diffusion XL while maintaining the performance.
http://arxiv.org/pdf/2311.06322v3
[ "Siao Tang", "Xin Wang", "Hong Chen", "Chaoyu Guan", "Zewen Wu", "Yansong Tang", "Wenwu Zhu" ]
2024-07-08T11:02:47Z
2023-11-10T09:10:09Z
2404.01964
CAM-Based Methods Can See through Walls
CAM-based methods are widely used post-hoc interpretability methods that produce a saliency map to explain the decision of an image classification model. The saliency map highlights the important areas of the image relevant to the prediction. In this paper, we show that most of these methods can incorrectly attribute an importance score to parts of the image that the model cannot see. We show that this phenomenon occurs both theoretically and experimentally. On the theory side, we analyze the behavior of GradCAM on a simple masked CNN model at initialization. Experimentally, we train a VGG-like model constrained to not use the lower part of the image and nevertheless observe positive scores in the unseen part of the image. This behavior is evaluated quantitatively on two new datasets. We believe that this is problematic, potentially leading to misinterpretation of the model's behavior.
http://arxiv.org/pdf/2404.01964v2
[ "Magamed Taimeskhanov", "Ronan Sicre", "Damien Garreau" ]
2024-07-08T11:00:51Z
2024-04-02T13:57:30Z
2207.01789
Improved Global Guarantees for the Nonconvex Burer--Monteiro Factorization via Rank Overparameterization
We consider minimizing a twice-differentiable, $L$-smooth, and $\mu$-strongly convex objective $\phi$ over an $n\times n$ positive semidefinite matrix $M\succeq0$, under the assumption that the minimizer $M^{\star}$ has low rank $r^{\star}\ll n$. Following the Burer--Monteiro approach, we instead minimize the nonconvex objective $f(X)=\phi(XX^{T})$ over a factor matrix $X$ of size $n\times r$. This substantially reduces the number of variables from $O(n^{2})$ to as few as $O(n)$ and also enforces positive semidefiniteness for free, but at the cost of giving up the convexity of the original problem. In this paper, we prove that if the search rank $r\ge r^{\star}$ is overparameterized by a \emph{constant factor} with respect to the true rank $r^{\star}$, namely as in $r>\frac{1}{4}(L/\mu-1)^{2}r^{\star}$, then despite nonconvexity, local optimization is guaranteed to globally converge from any initial point to the global optimum. This significantly improves upon a previous rank overparameterization threshold of $r\ge n$, which we show is sharp in the absence of smoothness and strong convexity, but would increase the number of variables back up to $O(n^{2})$. Conversely, without rank overparameterization, we prove that such a global guarantee is possible if and only if $\phi$ is almost perfectly conditioned, with a condition number of $L/\mu<3$. Therefore, we conclude that a small amount of overparameterization can lead to large improvements in theoretical guarantees for the nonconvex Burer--Monteiro factorization.
http://arxiv.org/pdf/2207.01789v2
[ "Richard Y. Zhang" ]
2024-07-08T10:58:33Z
2022-07-05T03:18:17Z
2407.05816
Graph Reasoning Networks
Graph neural networks (GNNs) are the predominant approach for graph-based machine learning. While neural networks have shown great performance at learning useful representations, they are often criticized for their limited high-level reasoning abilities. In this work, we present Graph Reasoning Networks (GRNs), a novel approach to combine the strengths of fixed and learned graph representations and a reasoning module based on a differentiable satisfiability solver. While results on real-world datasets show comparable performance to GNNs, experiments on synthetic datasets demonstrate the potential of the newly proposed method.
http://arxiv.org/pdf/2407.05816v1
[ "Markus Zopf", "Francesco Alesiani" ]
2024-07-08T10:53:49Z
2024-07-08T10:53:49Z
2209.14272
Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results
Humor is a substantial element of human social behavior, affect, and cognition. Its automatic understanding can facilitate a more naturalistic human-AI interaction. Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications. We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor (Passau-SFCH) dataset, comprising about 11 hours of recordings. The Passau-SFCH dataset is annotated for the presence of humor and its dimensions (sentiment and direction) as proposed in Martin's Humor Style Questionnaire. We conduct a series of experiments employing pretrained Transformers, convolutional neural networks, and expert-designed features. The performance of each modality (text, audio, video) for spontaneous humor recognition is analyzed and their complementarity is investigated. Our findings suggest that for the automatic analysis of humor and its sentiment, facial expressions are most promising, while humor direction can be best modeled via text-based features. Further, we experiment with different multimodal approaches to humor recognition, including decision-level fusion and MulT, a multimodal Transformer approach. In this context, we propose a novel multimodal architecture that yields the best overall results. Finally, we make our code publicly available at https://www.github.com/lc0197/passau-sfch. The Passau-SFCH dataset is available upon request.
http://arxiv.org/pdf/2209.14272v3
[ "Lukas Christ", "Shahin Amiriparian", "Alexander Kathan", "Niklas Müller", "Andreas König", "Björn W. Schuller" ]
2024-07-08T10:50:56Z
2022-09-28T17:36:47Z
2407.05800
FedMRL: Data Heterogeneity Aware Federated Multi-agent Deep Reinforcement Learning for Medical Imaging
Despite recent advancements in federated learning (FL) for medical image diagnosis, addressing data heterogeneity among clients remains a significant challenge for practical implementation. A primary hurdle in FL arises from the non-IID nature of data samples across clients, which typically results in a decline in the performance of the aggregated global model. In this study, we introduce FedMRL, a novel federated multi-agent deep reinforcement learning framework designed to address data heterogeneity. FedMRL incorporates a novel loss function to facilitate fairness among clients, preventing bias in the final global model. Additionally, it employs a multi-agent reinforcement learning (MARL) approach to calculate the proximal term $(\mu)$ for the personalized local objective function, ensuring convergence to the global optimum. Furthermore, FedMRL integrates an adaptive weight adjustment method using a Self-organizing map (SOM) on the server side to counteract distribution shifts among clients' local data distributions. We assess our approach using two publicly available real-world medical datasets, and the results demonstrate that FedMRL significantly outperforms state-of-the-art techniques, showing its efficacy in addressing data heterogeneity in federated learning. The code can be found here: \url{https://github.com/Pranabiitp/FedMRL}.
http://arxiv.org/pdf/2407.05800v1
[ "Pranab Sahoo", "Ashutosh Tripathi", "Sriparna Saha", "Samrat Mondal" ]
2024-07-08T10:10:07Z
2024-07-08T10:10:07Z
2407.05793
A Primal-Dual Online Learning Approach for Dynamic Pricing of Sequentially Displayed Complementary Items under Sale Constraints
We address the challenging problem of dynamically pricing complementary items that are sequentially displayed to customers. An illustrative example is the online sale of flight tickets, where customers navigate through multiple web pages. Initially, they view the ticket cost, followed by ancillary expenses such as insurance and additional luggage fees. Coherent pricing policies for complementary items are essential because optimizing the pricing of each item individually is ineffective. Our scenario also involves a sales constraint, which specifies a minimum number of items to sell, and uncertainty regarding customer demand curves. To tackle this problem, we originally formulate it as a Markov Decision Process with constraints. Leveraging online learning tools, we design a primal-dual online optimization algorithm. We empirically evaluate our approach using synthetic settings randomly generated from real-world data, covering various configurations from stationary to non-stationary, and compare its performance, in terms of constraint violations and regret, against well-known baselines that optimize each state individually.
http://arxiv.org/pdf/2407.05793v1
[ "Francesco Emanuele Stradi", "Filippo Cipriani", "Lorenzo Ciampiconi", "Marco Leonardi", "Alessandro Rozza", "Nicola Gatti" ]
2024-07-08T09:55:31Z
2024-07-08T09:55:31Z
2309.12325
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 inter-disciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on 6 guiding principles for trustworthy AI in healthcare, i.e. Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices were defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation towards clinical practice of medical AI.
http://arxiv.org/pdf/2309.12325v3
[ "Karim Lekadir", "Aasa Feragen", "Abdul Joseph Fofanah", "Alejandro F Frangi", "Alena Buyx", "Anais Emelie", "Andrea Lara", "Antonio R Porras", "An-Wen Chan", "Arcadi Navarro", "Ben Glocker", "Benard O Botwe", "Bishesh Khanal", "Brigit Beger", "Carol C Wu", "Celia Cintas", "Curtis P Langlotz", "Daniel Rueckert", "Deogratias Mzurikwao", "Dimitrios I Fotiadis", "Doszhan Zhussupov", "Enzo Ferrante", "Erik Meijering", "Eva Weicken", "Fabio A González", "Folkert W Asselbergs", "Fred Prior", "Gabriel P Krestin", "Gary Collins", "Geletaw S Tegenaw", "Georgios Kaissis", "Gianluca Misuraca", "Gianna Tsakou", "Girish Dwivedi", "Haridimos Kondylakis", "Harsha Jayakody", "Henry C Woodruf", "Horst Joachim Mayer", "Hugo JWL Aerts", "Ian Walsh", "Ioanna Chouvarda", "Irène Buvat", "Isabell Tributsch", "Islem Rekik", "James Duncan", "Jayashree Kalpathy-Cramer", "Jihad Zahir", "Jinah Park", "John Mongan", "Judy W Gichoya", "Julia A Schnabel", "Kaisar Kushibar", "Katrine Riklund", "Kensaku Mori", "Kostas Marias", "Lameck M Amugongo", "Lauren A Fromont", "Lena Maier-Hein", "Leonor Cerdá Alberich", "Leticia Rittner", "Lighton Phiri", "Linda Marrakchi-Kacem", "Lluís Donoso-Bach", "Luis Martí-Bonmatí", "M Jorge Cardoso", "Maciej Bobowicz", "Mahsa Shabani", "Manolis Tsiknakis", "Maria A Zuluaga", "Maria Bielikova", "Marie-Christine Fritzsche", "Marina Camacho", "Marius George Linguraru", "Markus Wenzel", "Marleen De Bruijne", "Martin G Tolsgaard", "Marzyeh Ghassemi", "Md Ashrafuzzaman", "Melanie Goisauf", "Mohammad Yaqub", "Mónica Cano Abadía", "Mukhtar M E Mahmoud", "Mustafa Elattar", "Nicola Rieke", "Nikolaos Papanikolaou", "Noussair Lazrak", "Oliver Díaz", "Olivier Salvado", "Oriol Pujol", "Ousmane Sall", "Pamela Guevara", "Peter Gordebeke", "Philippe Lambin", "Pieta Brown", "Purang Abolmaesumi", "Qi Dou", "Qinghua Lu", "Richard Osuala", "Rose Nakasi", "S Kevin Zhou", "Sandy Napel", "Sara Colantonio", "Shadi Albarqouni", "Smriti Joshi", "Stacy Carter", "Stefan Klein", "Steffen E Petersen", "Susanna Aussó", "Suyash Awate", "Tammy Riklin Raviv", "Tessa Cook", "Tinashe E M Mutsvangwa", "Wendy A Rogers", "Wiro J Niessen", "Xènia Puig-Bosch", "Yi Zeng", "Yunusa G Mohammed", "Yves Saint James Aquino", "Zohaib Salahuddin", "Martijn P A Starmans" ]
2024-07-08T09:54:09Z
2023-08-11T10:49:05Z
2407.05789
CANDID DAC: Leveraging Coupled Action Dimensions with Importance Differences in DAC
High-dimensional action spaces remain a challenge for dynamic algorithm configuration (DAC). Interdependencies and varying importance between action dimensions are further known key characteristics of DAC problems. We argue that these Coupled Action Dimensions with Importance Differences (CANDID) represent aspects of the DAC problem that are not yet fully explored. To address this gap, we introduce a new white-box benchmark within the DACBench suite that simulates the properties of CANDID. Further, we propose sequential policies as an effective strategy for managing these properties. Such policies factorize the action space and mitigate exponential growth by learning a policy per action dimension. At the same time, these policies accommodate the interdependence of action dimensions by fostering implicit coordination. We show this in an experimental study of value-based policies on our new benchmark. This study demonstrates that sequential policies significantly outperform independent learning of factorized policies in CANDID action spaces. In addition, they overcome the scalability limitations associated with learning a single policy across all action dimensions. The code used for our experiments is available under https://github.com/PhilippBordne/candidDAC.
http://arxiv.org/pdf/2407.05789v1
[ "Philipp Bordne", "M. Asif Hasan", "Eddie Bergman", "Noor Awad", "André Biedenkapp" ]
2024-07-08T09:51:02Z
2024-07-08T09:51:02Z
2407.05788
Automated Computational Energy Minimization of ML Algorithms using Constrained Bayesian Optimization
Bayesian optimization (BO) is an efficient framework for optimization of black-box objectives when function evaluations are costly and gradient information is not easily accessible. BO has been successfully applied to automate the task of hyperparameter optimization (HPO) in machine learning (ML) models with the primary objective of optimizing predictive performance on held-out data. In recent years, however, with ever-growing model sizes, the energy cost associated with model training has become an important factor for ML applications. Here we evaluate Constrained Bayesian Optimization (CBO) with the primary objective of minimizing energy consumption and subject to the constraint that the generalization performance is above some threshold. We evaluate our approach on regression and classification tasks and demonstrate that CBO achieves lower energy consumption without compromising the predictive performance of ML models.
http://arxiv.org/pdf/2407.05788v1
[ "Pallavi Mitra", "Felix Biessmann" ]
2024-07-08T09:49:38Z
2024-07-08T09:49:38Z
2401.11849
Self-Labeling the Job Shop Scheduling Problem
In this work, we propose a Self-Supervised training strategy specifically designed for combinatorial problems. One of the main obstacles in applying supervised paradigms to such problems is the requirement of expensive target solutions as ground-truth, often produced with costly exact solvers. Inspired by Semi- and Self-Supervised learning, we show that it is possible to easily train generative models by sampling multiple solutions and using the best one according to the problem objective as a pseudo-label. In this way, we iteratively improve the model generation capability by relying only on its self-supervision, completely removing the need for optimality information. We prove the effectiveness of this Self-Labeling strategy on the Job Shop Scheduling (JSP), a complex combinatorial problem that is receiving much attention from the Reinforcement Learning community. We propose a generative model based on the well-known Pointer Network and train it with our strategy. Experiments on popular benchmarks demonstrate the potential of this approach as the resulting models outperform constructive heuristics and current state-of-the-art learning proposals for the JSP.
http://arxiv.org/pdf/2401.11849v2
[ "Andrea Corsini", "Angelo Porrello", "Simone Calderara", "Mauro Dell'Amico" ]
2024-07-08T09:47:59Z
2024-01-22T11:08:36Z
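The self-labeling loop described in the abstract above (2401.11849) is simple enough to sketch. The following is a minimal, hypothetical illustration in Python: it replaces the paper's Pointer Network and the Job Shop Scheduling instance with a logit-table policy and a single-machine sequencing toy problem, keeping only the core idea of sampling several solutions and training on the best one as a pseudo-label.

```python
# Illustrative sketch (not the paper's model): sample several solutions, keep the
# best according to the problem objective, and use it as a pseudo-label to update
# the generative policy. The stand-in problem and update rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_jobs = 8
proc = rng.uniform(1, 10, n_jobs)          # toy processing times (stand-in problem)
logits = np.zeros(n_jobs)                   # policy: preference score per job

def sample_sequence(logits, temp=1.0):
    """Sample a job ordering, favoring jobs with larger logits (Gumbel trick)."""
    noise = rng.gumbel(size=logits.shape)
    return np.argsort(-(logits / temp + noise))

def objective(seq):
    """Total completion time on a single machine (lower is better)."""
    return np.cumsum(proc[seq]).sum()

for step in range(200):
    candidates = [sample_sequence(logits) for _ in range(16)]
    best = min(candidates, key=objective)            # pseudo-label = best sample
    # crude policy update: jobs placed earlier in the best sequence get larger logits
    target = np.empty(n_jobs)
    target[best] = np.linspace(1.0, -1.0, n_jobs)
    logits += 0.1 * (target - logits)

order = np.argsort(-logits)
print("learned order:", order, "objective:", objective(order))
```

The point of the sketch is the absence of ground-truth solutions: the only supervision signal is the problem objective evaluated on the model's own samples.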
2407.05782
Sequential Contrastive Audio-Visual Learning
Contrastive learning has emerged as a powerful technique in audio-visual representation learning, leveraging the natural co-occurrence of audio and visual modalities in extensive web-scale video datasets to achieve significant advancements. However, conventional contrastive audio-visual learning methodologies often rely on aggregated representations derived through temporal aggregation, which neglects the intrinsic sequential nature of the data. This oversight raises concerns regarding the ability of standard approaches to capture and utilize fine-grained information within sequences, information that is vital for distinguishing between semantically similar yet distinct examples. In response to this limitation, we propose sequential contrastive audio-visual learning (SCAV), which contrasts examples based on their non-aggregated representation space using sequential distances. Retrieval experiments with the VGGSound and Music datasets demonstrate the effectiveness of SCAV, showing 2-3x relative improvements against traditional aggregation-based contrastive learning and other methods from the literature. We also show that models trained with SCAV exhibit a high degree of flexibility regarding the metric employed for retrieval, allowing them to operate on a spectrum of efficiency-accuracy trade-offs, potentially making them applicable in multiple scenarios, from small- to large-scale retrieval.
http://arxiv.org/pdf/2407.05782v1
[ "Ioannis Tsiamas", "Santiago Pascual", "Chunghsin Yeh", "Joan Serrà" ]
2024-07-08T09:45:20Z
2024-07-08T09:45:20Z
2407.05781
Regret Analysis of Multi-task Representation Learning for Linear-Quadratic Adaptive Control
Representation learning is a powerful tool that enables learning over large multitudes of agents or domains by enforcing that all agents operate on a shared set of learned features. However, many robotics or controls applications that would benefit from collaboration operate in settings with changing environments and goals, whereas most guarantees for representation learning are stated for static settings. Toward rigorously establishing the benefit of representation learning in dynamic settings, we analyze the regret of multi-task representation learning for linear-quadratic control. This setting introduces unique challenges. Firstly, we must account for and balance the $\textit{misspecification}$ introduced by an approximate representation. Secondly, we cannot rely on the parameter update schemes of single-task online LQR, for which least-squares often suffices, and must devise a novel scheme to ensure sufficient improvement. We demonstrate that for settings where exploration is "benign", the regret of any agent after $T$ timesteps scales as $\tilde{O}(\sqrt{T/H})$, where $H$ is the number of agents. In settings with "difficult" exploration, the regret scales as $\tilde{\mathcal{O}}(\sqrt{d_u d_\theta} \sqrt{T} + T^{3/4}/H^{1/5})$, where $d_x$ is the state-space dimension, $d_u$ is the input dimension, and $d_\theta$ is the task-specific parameter count. In both cases, by comparing to the minimax single-task regret $\tilde{\mathcal{O}}(\sqrt{d_x d_u^2}\sqrt{T})$, we see a benefit of a large number of agents. Notably, in the difficult exploration case, by sharing a representation across tasks, the effective task-specific parameter count can often be small $d_\theta < d_x d_u$. Lastly, we provide numerical validation of the trends we predict.
http://arxiv.org/pdf/2407.05781v1
[ "Bruce D. Lee", "Leonardo F. Toso", "Thomas T. Zhang", "James Anderson", "Nikolai Matni" ]
2024-07-08T09:41:42Z
2024-07-08T09:41:42Z
2307.10003
TbExplain: A Text-based Explanation Method for Scene Classification Models with the Statistical Prediction Correction
The field of Explainable Artificial Intelligence (XAI) aims to improve the interpretability of black-box machine learning models. Building a heatmap based on the importance value of input features is a popular method for explaining the underlying functions of such models in producing their predictions. Heatmaps are broadly understandable to humans, yet they are not without flaws. Non-expert users, for example, may not fully understand the logic of heatmaps (the logic in which pixels relevant to the model's prediction are highlighted with different intensities or colors). Additionally, objects and regions of the input image that are relevant to the model prediction are frequently not entirely differentiated by heatmaps. In this paper, we propose a framework called TbExplain that employs XAI techniques and a pre-trained object detector to present text-based explanations of scene classification models. Moreover, TbExplain incorporates a novel method to correct predictions and textually explain them based on the statistics of objects in the input image when the initial prediction is unreliable. To assess the trustworthiness and validity of the text-based explanations, we conducted a qualitative experiment, and the findings indicated that these explanations are sufficiently reliable. Furthermore, our quantitative and qualitative experiments on TbExplain with scene classification datasets reveal an improvement in classification accuracy over ResNet variants.
http://arxiv.org/abs/2307.10003v2
[ "Amirhossein Aminimehr", "Pouya Khani", "Amirali Molaei", "Amirmohammad Kazemeini", "Erik Cambria" ]
2024-07-08T09:40:03Z
2023-07-19T14:23:26Z
2403.07965
Conditional computation in neural networks: principles and research trends
This article summarizes principles and ideas from the emerging area of applying \textit{conditional computation} methods to the design of neural networks. In particular, we focus on neural networks that can dynamically activate or de-activate parts of their computational graph conditionally on their input. Examples include the dynamic selection of, e.g., input tokens, layers (or sets of layers), and sub-modules inside each layer (e.g., channels in a convolutional filter). We first provide a general formalism to describe these techniques in a uniform way. Then, we introduce three notable implementations of these principles: mixture-of-experts (MoEs) networks, token selection mechanisms, and early-exit neural networks. The paper aims to provide a tutorial-like introduction to this growing field. To this end, we analyze the benefits of these modular designs in terms of efficiency, explainability, and transfer learning, with a focus on emerging applicative areas ranging from automated scientific discovery to semantic communication.
http://arxiv.org/abs/2403.07965v2
[ "Simone Scardapane", "Alessandro Baiocchi", "Alessio Devoto", "Valerio Marsocci", "Pasquale Minervini", "Jary Pomponi" ]
2024-07-08T09:21:00Z
2024-03-12T11:56:38Z
2406.19015
Lithium-Ion Battery System Health Monitoring and Fault Analysis from Field Data Using Gaussian Processes
Health monitoring, fault analysis, and detection are critical for the safe and sustainable operation of battery systems. We apply Gaussian process resistance models on lithium iron phosphate battery field data to effectively separate the time-dependent and operating point-dependent resistance. The data set contains 29 battery systems returned to the manufacturer for warranty, each with eight cells in series, totaling 232 cells and 131 million data rows. We develop probabilistic fault detection rules using recursive spatiotemporal Gaussian processes. These processes allow the quick processing of over a million data points, enabling advanced online monitoring and furthering the understanding of battery pack failure in the field. The analysis underlines that often, only a single cell shows abnormal behavior or a knee point, consistent with weakest-link failure for cells connected in series, amplified by local resistive heating. The results further the understanding of how batteries degrade and fail in the field and demonstrate the potential of efficient online monitoring based on data. We open-source the code and publish the large data set upon completion of the review of this article.
http://arxiv.org/pdf/2406.19015v2
[ "Joachim Schaeffer", "Eric Lenz", "Duncan Gulla", "Martin Z. Bazant", "Richard D. Braatz", "Rolf Findeisen" ]
2024-07-08T09:07:51Z
2024-06-27T09:00:05Z
2402.04836
On the Completeness of Invariant Geometric Deep Learning Models
Invariant models, one important class of geometric deep learning models, are capable of generating meaningful geometric representations by leveraging informative geometric features in point clouds. These models are characterized by their simplicity, good experimental results and computational efficiency. However, their theoretical expressive power still remains unclear, restricting a deeper understanding of the potential of such models. In this work, we concentrate on characterizing the theoretical expressiveness of a wide range of invariant models. We first rigorously bound the expressiveness of the most classic invariant model, message-passing neural networks incorporating distance (DisGNN), restricting its unidentifiable cases to be only highly symmetric point clouds. We then show that GeoNGNN, the geometric counterpart of one of the simplest subgraph graph neural networks (subgraph GNNs), can effectively break these corner cases' symmetry and thus achieve E(3)-completeness. By leveraging GeoNGNN as a theoretical tool, we further prove that: 1) most subgraph GNNs developed in traditional graph learning can be seamlessly extended to geometric scenarios with E(3)-completeness; 2) DimeNet, GemNet and SphereNet, three well-established invariant models, are also all capable of achieving E(3)-completeness. Our theoretical results fill the gap in the theoretical power of invariant models, contributing to a rigorous and comprehensive understanding of their capabilities. We also empirically evaluated GeoNGNN, the simplest model within the large E(3)-complete family we established, which achieves competitive results to models relying on high-order invariant/equivariant representations on molecule-relevant tasks.
http://arxiv.org/pdf/2402.04836v2
[ "Zian Li", "Xiyuan Wang", "Shijia Kang", "Muhan Zhang" ]
2024-07-08T08:57:35Z
2024-02-07T13:32:53Z
2407.05749
LDGCN: An Edge-End Lightweight Dual GCN Based on Single-Channel EEG for Driver Drowsiness Monitoring
Driver drowsiness electroencephalography (EEG) signal monitoring can timely alert drivers of their drowsiness status, thereby reducing the probability of traffic accidents. Graph convolutional networks (GCNs) have shown significant advancements in processing the non-stationary, time-varying, and non-Euclidean nature of EEG signals. However, the existing single-channel EEG adjacency graph construction process lacks interpretability, which hinders the ability of GCNs to effectively extract adjacency graph features, thus affecting the performance of drowsiness monitoring. To address this issue, we propose an edge-end lightweight dual graph convolutional network (LDGCN). Specifically, we are the first to incorporate neurophysiological knowledge to design a Baseline Drowsiness Status Adjacency Graph (BDSAG), which characterizes driver drowsiness status. Additionally, to express more features within limited EEG data, we introduce the Augmented Graph-level Module (AGM). This module captures global and local information at the graph level, ensuring that BDSAG features remain intact while enhancing effective feature expression capability. Furthermore, to deploy our method on the fourth-generation Raspberry Pi, we utilize Adaptive Pruning Optimization (APO) on both channels and neurons, reducing inference latency by almost half. Experiments on benchmark datasets demonstrate that LDGCN offers the best trade-off between monitoring performance and hardware resource utilization compared to existing state-of-the-art algorithms. All our source code can be found at https://github.com/BryantDom/Driver-Drowsiness-Monitoring.
http://arxiv.org/pdf/2407.05749v1
[ "Jingwei Huang", "Chuansheng Wang", "Jiayan Huang", "Haoyi Fan", "Antoni Grau", "Fuquan Zhang" ]
2024-07-08T08:55:25Z
2024-07-08T08:55:25Z
2407.05732
FairPFN: Transformers Can do Counterfactual Fairness
Machine Learning systems are increasingly prevalent across healthcare, law enforcement, and finance but often operate on historical data, which may carry biases against certain demographic groups. Causal and counterfactual fairness provides an intuitive way to define fairness that closely aligns with legal standards. Despite its theoretical benefits, counterfactual fairness comes with several practical limitations, largely related to the reliance on domain knowledge and approximate causal discovery techniques in constructing a causal model. In this study, we take a fresh perspective on counterfactually fair prediction, building upon recent work in in-context learning (ICL) and prior-fitted networks (PFNs) to learn a transformer called FairPFN. This model is pretrained using synthetic fairness data to eliminate the causal effects of protected attributes directly from observational data, removing the requirement of access to the correct causal model in practice. In our experiments, we thoroughly assess the effectiveness of FairPFN in eliminating the causal impact of protected attributes on a series of synthetic case studies and real-world datasets. Our findings pave the way for a new and promising research area: transformers for causal and counterfactual fairness.
http://arxiv.org/pdf/2407.05732v1
[ "Jake Robertson", "Noah Hollmann", "Noor Awad", "Frank Hutter" ]
2024-07-08T08:36:44Z
2024-07-08T08:36:44Z
2406.19670
Function+Data Flow: A Framework to Specify Machine Learning Pipelines for Digital Twinning
The development of digital twins (DTs) for physical systems increasingly leverages artificial intelligence (AI), particularly for combining data from different sources or for creating computationally efficient, reduced-dimension models. Indeed, even in very different application domains, twinning employs common techniques such as model order reduction and modelization with hybrid data (that is, data sourced from both physics-based models and sensors). Despite this apparent generality, current development practices are ad-hoc, making the design of AI pipelines for digital twinning complex and time-consuming. Here we propose Function+Data Flow (FDF), a domain-specific language (DSL) to describe AI pipelines within DTs. FDF aims to facilitate the design and validation of digital twins. Specifically, FDF treats functions as first-class citizens, enabling effective manipulation of models learned with AI. We illustrate the benefits of FDF on two concrete use cases from different domains: predicting the plastic strain of a structure and modeling the electromagnetic behavior of a bearing.
http://arxiv.org/abs/2406.19670v2
[ "Eduardo de Conto", "Blaise Genest", "Arvind Easwaran" ]
2024-07-08T08:28:34Z
2024-06-28T05:44:47Z
2407.03118
Can machine learning solve the challenge of adaptive learning and the individualization of learning paths? A field experiment in an online learning platform
The individualization of learning contents based on digital technologies promises large individual and social benefits. However, it remains an open question how this individualization can be implemented. To tackle this question we conduct a randomized controlled trial on a large digital self-learning platform. We develop an algorithm based on two convolutional neural networks that assigns tasks to $4,365$ learners according to their learning paths. Learners are randomized into three groups: two treatment groups -- a group-based adaptive treatment group and an individual adaptive treatment group -- and one control group. We analyze the difference between the three groups with respect to effort learners provide and their performance on the platform. Our null results shed light on the multiple challenges associated with the individualization of learning paths.
http://arxiv.org/pdf/2407.03118v2
[ "Marius Köppel", "Tim Klausmann", "Isabell Zipperle", "Daniel Schunk" ]
2024-07-08T08:07:35Z
2024-07-03T14:04:05Z
2407.05704
Narrowing the Gap between Adversarial and Stochastic MDPs via Policy Optimization
In this paper, we consider the problem of learning in adversarial Markov decision processes (MDPs) with an oblivious adversary in a full-information setting. The agent interacts with an environment during $T$ episodes, each of which consists of $H$ stages, and each episode is evaluated with respect to a reward function that will be revealed only at the end of the episode. We propose an algorithm, called APO-MVP, that achieves a regret bound of order $\tilde{\mathcal{O}}(\mathrm{poly}(H)\sqrt{SAT})$, where $S$ and $A$ are sizes of the state and action spaces, respectively. This result improves upon the best-known regret bound by a factor of $\sqrt{S}$, bridging the gap between adversarial and stochastic MDPs, and matching the minimax lower bound $\Omega(\sqrt{H^3SAT})$ as far as the dependencies in $S,A,T$ are concerned. The proposed algorithm and analysis completely avoid the typical tool given by occupancy measures; instead, it performs policy optimization based only on dynamic programming and on a black-box online linear optimization strategy run over estimated advantage functions, making it easy to implement. The analysis leverages two recent techniques: policy optimization based on online linear optimization strategies (Jonckheere et al., 2023) and a refined martingale analysis of the impact on values of estimating transition kernels (Zhang et al., 2023).
http://arxiv.org/pdf/2407.05704v1
[ "Daniil Tiapkin", "Evgenii Chzhen", "Gilles Stoltz" ]
2024-07-08T08:06:45Z
2024-07-08T08:06:45Z
2406.09776
Faster Convergence on Heterogeneous Federated Edge Learning: An Adaptive Clustered Data Sharing Approach
Federated Edge Learning (FEEL) emerges as a pioneering distributed machine learning paradigm for the 6G Hyper-Connectivity, harnessing data from the Internet of Things (IoT) devices while upholding data privacy. However, current FEEL algorithms struggle with non-independent and non-identically distributed (non-IID) data, leading to elevated communication costs and compromised model accuracy. To address these statistical imbalances within FEEL, we introduce a clustered data sharing framework, mitigating data heterogeneity by selectively sharing partial data from cluster heads to trusted associates through sidelink-aided multicasting. The collective communication pattern is integral to FEEL training, where both cluster formation and the efficiency of communication and computation impact training latency and accuracy simultaneously. To tackle the strictly coupled data sharing and resource optimization, we decompose the overall optimization problem into the client clustering and effective data sharing subproblems. Specifically, a distribution-based adaptive clustering algorithm (DACA) is devised based on three deductive cluster-forming conditions, which ensures the maximum sharing yield. Meanwhile, we design a stochastic optimization-based joint computing frequency and shared data volume optimization (JFVO) algorithm, determining the optimal resource allocation with an uncertain objective function. The experiments show that the proposed framework facilitates FEEL on non-IID datasets with a faster convergence rate and higher model accuracy in a limited communication environment.
http://arxiv.org/pdf/2406.09776v2
[ "Gang Hu", "Yinglei Teng", "Nan Wang", "Zhu Han" ]
2024-07-08T08:06:00Z
2024-06-14T07:22:39Z
2407.05694
On the Limitations of Compute Thresholds as a Governance Strategy
At face value, this essay is about understanding a fairly esoteric governance tool called compute thresholds. However, in order to grapple with whether these thresholds will achieve anything, we must first understand how they came to be. This requires engaging with a decades-old debate at the heart of computer science progress, namely, is bigger always better? Hence, this essay may be of interest not only to policymakers and the wider public but also to computer scientists interested in understanding the role of compute in unlocking breakthroughs. Does a certain inflection point of compute result in changes to the risk profile of a model? This discussion is increasingly urgent given the wide adoption of governance approaches that suggest greater compute equates with higher propensity for harm. Several leading frontier AI companies have released responsible scaling policies. Both the White House Executive Orders on AI Safety (EO) and the EU AI Act encode the use of FLOP or floating-point operations as a way to identify more powerful systems. What is striking about the choice of compute thresholds to-date is that no models currently deployed in the wild fulfill the current criteria set by the EO. This implies that the emphasis is often not on auditing the risks and harms incurred by currently deployed models - but rather is based upon the belief that future levels of compute will introduce unforeseen new risks. A key conclusion of this essay is that compute thresholds as currently implemented are shortsighted and likely to fail to mitigate risk. Governance that is overly reliant on compute fails to understand that the relationship between compute and risk is highly uncertain and rapidly changing. It also overestimates our ability to predict what abilities emerge at different scales. This essay ends with recommendations for a better way forward.
http://arxiv.org/pdf/2407.05694v1
[ "Sara Hooker" ]
2024-07-08T07:53:06Z
2024-07-08T07:53:06Z
2212.12561
An active learning method for solving competitive multi-agent decision-making and control problems
To identify a stationary action profile for a population of competitive agents, each executing private strategies, we introduce a novel active-learning scheme where a centralized external observer (or entity) can probe the agents' reactions and recursively update simple local parametric estimates of the action-reaction mappings. Under very general working assumptions (not even assuming that a stationary profile exists), sufficient conditions are established to assess the asymptotic properties of the proposed active learning methodology so that, if the parameters characterizing the action-reaction mappings converge, a stationary action profile is achieved. Such conditions hence act also as certificates for the existence of such a profile. Extensive numerical simulations involving typical competitive multi-agent control and decision-making problems illustrate the practical effectiveness of the proposed learning-based approach.
http://arxiv.org/pdf/2212.12561v4
[ "Filippo Fabiani", "Alberto Bemporad" ]
2024-07-08T07:52:09Z
2022-12-23T19:37:39Z
2407.05693
Sub-SA: Strengthen In-context Learning via Submodular Selective Annotation
In-context learning (ICL) leverages in-context examples as prompts for the predictions of Large Language Models (LLMs). These prompts play a crucial role in achieving strong performance. However, the selection of suitable prompts from a large pool of labeled examples often entails significant annotation costs. To address this challenge, we propose \textbf{Sub-SA} (\textbf{Sub}modular \textbf{S}elective \textbf{A}nnotation), a submodule-based selective annotation method. The aim of Sub-SA is to reduce annotation costs while improving the quality of in-context examples and minimizing the time consumption of the selection process. In Sub-SA, we design a submodular function that facilitates effective subset selection for annotation and demonstrates the properties of monotonicity and submodularity from a theoretical perspective. Specifically, we propose \textbf{RPR} (\textbf{R}eward and \textbf{P}enalty \textbf{R}egularization) to better balance the diversity and representativeness of the unlabeled dataset attributed to a reward term and a penalty term, respectively. Consequently, the selection for annotations can be effectively addressed with a simple yet effective greedy search algorithm based on the submodular function. Finally, we apply the similarity prompt retrieval to get the examples for ICL.
http://arxiv.org/pdf/2407.05693v1
[ "Jian Qian", "Miao Sun", "Sifan Zhou", "Ziyu Zhao", "Ruizhi Hun", "Patrick Chiang" ]
2024-07-08T07:47:30Z
2024-07-08T07:47:30Z
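As a rough illustration of the greedy submodular selection described in the Sub-SA abstract above (2407.05693), the sketch below picks a small annotation budget from an unlabeled pool. The facility-location coverage term and the pairwise redundancy penalty are stand-ins chosen for brevity, not the paper's exact RPR formulation.

```python
# Hypothetical sketch: greedy maximization of a submodular "coverage minus
# redundancy" objective over an unlabeled pool of embeddings.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))                       # unlabeled pool embeddings
X /= np.linalg.norm(X, axis=1, keepdims=True)
sim = X @ X.T                                        # cosine similarity matrix

def gain(selected, j, lam=0.5):
    """Marginal gain of adding j: coverage improvement minus redundancy penalty."""
    cur = sim[:, selected].max(axis=1) if selected else np.zeros(len(X))
    reward = np.maximum(sim[:, j] - cur, 0).sum()    # facility-location improvement
    penalty = sim[j, selected].sum() if selected else 0.0
    return reward - lam * penalty

budget, selected = 20, []
for _ in range(budget):
    remaining = [j for j in range(len(X)) if j not in selected]
    selected.append(max(remaining, key=lambda j: gain(selected, j)))

print("indices chosen for annotation:", selected)
```

Because both terms are submodular, the simple greedy loop is the natural search strategy, which matches the spirit of the abstract's "simple yet effective greedy search algorithm".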
2407.05684
Multi-Fidelity Bayesian Neural Network for Uncertainty Quantification in Transonic Aerodynamic Loads
Multi-fidelity models are becoming more prevalent in engineering, particularly in aerospace, as they combine both the computational efficiency of low-fidelity models with the high accuracy of higher-fidelity simulations. Various state-of-the-art techniques exist for fusing data from different fidelity sources, including Co-Kriging and transfer learning in neural networks. This paper aims to implement a multi-fidelity Bayesian neural network model that applies transfer learning to fuse data generated by models at different fidelities. Bayesian neural networks use probability distributions over network weights, enabling them to provide predictions along with estimates of their confidence. This approach harnesses the predictive and data fusion capabilities of neural networks while also quantifying uncertainty. The results demonstrate that the multi-fidelity Bayesian model outperforms the state-of-the-art Co-Kriging in terms of overall accuracy and robustness on unseen data.
http://arxiv.org/pdf/2407.05684v1
[ "Andrea Vaiuso", "Gabriele Immordino", "Marcello Righi", "Andrea Da Ronch" ]
2024-07-08T07:34:35Z
2024-07-08T07:34:35Z
2401.04385
Machine unlearning through fine-grained model parameters perturbation
Machine unlearning techniques, which involve retracting data records and reducing the influence of said data on trained models, help with the user privacy protection objective but incur significant computational costs. Weight perturbation-based unlearning is a general approach, but it typically involves globally modifying the parameters. We propose fine-grained Top-K and Random-k parameters perturbed inexact machine unlearning strategies that address the privacy needs while keeping the computational costs tractable. In order to demonstrate the efficacy of our strategies, we also tackle the challenge of evaluating the effectiveness of machine unlearning by considering the model's generalization performance across both unlearning and remaining data. To better assess the unlearning effect and model generalization, we propose novel metrics, namely, the forgetting rate and memory retention rate. However, for inexact machine unlearning, current metrics are inadequate in quantifying the degree of forgetting that occurs after unlearning strategies are applied. To address this, we introduce SPD-GAN, which subtly perturbs the distribution of data targeted for unlearning. Then, we evaluate the degree of unlearning by measuring the performance difference of the models on the perturbed unlearning data before and after the unlearning process. By implementing these innovative techniques and metrics, we achieve computationally efficacious privacy protection in machine learning applications without significant sacrifice of model performance. Furthermore, this approach provides a novel method for evaluating the degree of unlearning.
http://arxiv.org/pdf/2401.04385v3
[ "Zhiwei Zuo", "Zhuo Tang", "Kenli Li", "Anwitaman Datta" ]
2024-07-08T07:24:49Z
2024-01-09T07:14:45Z
2407.05664
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_{1}$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering number argument for compositionality, and the $F_{1}$-norm (or the related Barron norm) for large width adaptivity. We show that the global minimizer of the regularized loss of DNNs can fit, for example, the composition of two functions $f^{*}=h\circ g$ from a small number of observations, assuming $g$ is smooth/regular and reduces the dimensionality (e.g. $g$ could be the modulo map of the symmetries of $f^{*}$), so that $h$ can be learned in spite of its low regularity. The measure of regularity we consider is the Sobolev norm with different levels of differentiability, which is well adapted to the $F_{1}$ norm. We compute scaling laws empirically and observe phase transitions depending on whether $g$ or $h$ is harder to learn, as predicted by our theory.
http://arxiv.org/pdf/2407.05664v1
[ "Arthur Jacot", "Seok Hoan Choi", "Yuxiao Wen" ]
2024-07-08T06:59:29Z
2024-07-08T06:59:29Z
2312.06441
Revisiting Graph-Based Fraud Detection in Sight of Heterophily and Spectrum
Graph-based fraud detection (GFD) can be regarded as a challenging semi-supervised node binary classification task. In recent years, Graph Neural Networks (GNNs) have been widely applied to GFD, characterizing the anomalous possibility of a node by aggregating neighbor information. However, fraud graphs are inherently heterophilic, so most GNNs perform poorly due to their assumption of homophily. In addition, due to the existence of heterophily and the class imbalance problem, the existing models do not fully utilize the precious node label information. To address the above issues, this paper proposes a semi-supervised GNN-based fraud detector SEC-GFD. This detector includes a hybrid filtering module and a local environmental constraint module; the two modules are utilized to solve the heterophily and label utilization problems, respectively. The first module starts from the perspective of the spectral domain, and solves the heterophily problem to a certain extent. Specifically, it divides the spectrum into various mixed-frequency bands based on the correlation between spectrum energy distribution and heterophily. Then in order to make full use of the node label information, a local environmental constraint module is adaptively designed. The comprehensive experimental results on four real-world fraud detection datasets demonstrate that SEC-GFD outperforms other competitive graph-based fraud detectors. We release our code at https://github.com/Sunxkissed/SEC-GFD.
http://arxiv.org/pdf/2312.06441v3
[ "Fan Xu", "Nan Wang", "Hao Wu", "Xuezhi Wen", "Xibin Zhao", "Hai Wan" ]
2024-07-08T06:54:37Z
2023-12-11T15:18:51Z
2407.05658
Random Features Hopfield Networks generalize retrieval to previously unseen examples
It has been recently shown that a learning transition happens when a Hopfield Network stores examples generated as superpositions of random features, where new attractors corresponding to such features appear in the model. In this work we reveal that the network also develops attractors corresponding to previously unseen examples generated with the same set of features. We explain this surprising behaviour in terms of spurious states of the learned features: we argue that, increasing the number of stored examples beyond the learning transition, the model also learns to mix the features to represent both stored and previously unseen examples. We support this claim with the computation of the phase diagram of the model.
http://arxiv.org/pdf/2407.05658v1
[ "Silvio Kalaj", "Clarissa Lauditi", "Gabriele Perugini", "Carlo Lucibello", "Enrico M. Malatesta", "Matteo Negri" ]
2024-07-08T06:35:13Z
2024-07-08T06:35:13Z
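The storage-and-retrieval setting in the abstract above (2407.05658) can be mimicked with a few lines of numpy. This toy sketch only illustrates the measurement (overlap of the retrieved state with a previously unseen example built from the same features); whether the unseen example actually becomes an attractor depends on the load and sparsity parameters, which are arbitrary assumptions here.

```python
# Toy sketch: Hebbian storage of examples built as superpositions of random
# features, then synchronous Hopfield retrieval from a corrupted unseen example.
import numpy as np

rng = np.random.default_rng(0)
N, n_feat, n_examples = 500, 20, 60
features = rng.choice([-1, 1], size=(n_feat, N))

def make_example():
    """Superpose a sparse random combination of features and binarize."""
    coeffs = rng.choice([-1, 1], size=n_feat) * (rng.random(n_feat) < 0.3)
    return np.sign(coeffs @ features + 1e-9)

stored = np.array([make_example() for _ in range(n_examples)])
W = stored.T @ stored / N                            # Hebbian weights
np.fill_diagonal(W, 0)

probe = make_example()                               # previously *unseen* example
state = np.where(rng.random(N) < 0.1, -probe, probe) # corrupt 10% of the bits
for _ in range(20):                                  # synchronous retrieval dynamics
    state = np.sign(W @ state + 1e-9)

print("overlap with unseen probe:", float(state @ probe) / N)
```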
2407.05656
Multi-label Learning with Random Circular Vectors
The extreme multi-label classification (XMC) task involves learning a classifier that can predict from a large label set the most relevant subset of labels for a data instance. While deep neural networks (DNNs) have demonstrated remarkable success in XMC problems, the task is still challenging because it must deal with a large number of output labels, which make the DNN training computationally expensive. This paper addresses the issue by exploring the use of random circular vectors, where each vector component is represented as a complex amplitude. In our framework, we can develop an output layer and loss function of DNNs for XMC by representing the final output layer as a fully connected layer that directly predicts a low-dimensional circular vector encoding a set of labels for a data instance. We conducted experiments on synthetic datasets to verify that circular vectors have better label encoding capacity and retrieval ability than normal real-valued vectors. Then, we conducted experiments on actual XMC datasets and found that these appealing properties of circular vectors contribute to significant improvements in task performance compared with a previous model using random real-valued vectors, while reducing the size of the output layers by up to 99%.
http://arxiv.org/pdf/2407.05656v1
[ "Ken Nishida", "Kojiro Machi", "Kazuma Onishi", "Katsuhiko Hayashi", "Hidetaka Kamigaito" ]
2024-07-08T06:29:46Z
2024-07-08T06:29:46Z
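The label-encoding mechanism described in the abstract above (2407.05656) can be illustrated directly: each label is assigned a random vector of unit-modulus complex components, a label set is encoded as their normalized sum, and retrieval ranks labels by correlation with the code. The dimensions and label counts below are arbitrary assumptions, and the DNN that would learn to predict the code is omitted.

```python
# Sketch of encoding a label set into one low-dimensional complex vector and
# retrieving the labels back by correlation.
import numpy as np

rng = np.random.default_rng(0)
n_labels, dim = 10_000, 512
phases = rng.uniform(0, 2 * np.pi, size=(n_labels, dim))
label_vecs = np.exp(1j * phases)                     # unit-modulus complex components

def encode(label_ids):
    """Superpose the chosen label vectors into a single dim-dimensional code."""
    return label_vecs[label_ids].sum(axis=0) / np.sqrt(len(label_ids))

def retrieve(code, top_k=5):
    """Rank labels by (real) correlation with the code."""
    scores = (label_vecs.conj() @ code).real / dim
    return np.argsort(-scores)[:top_k]

true_labels = [3, 42, 777]
code = encode(true_labels)                           # the low-dimensional target a DNN could predict
print("retrieved:", sorted(retrieve(code, top_k=3).tolist()))  # expected to recover [3, 42, 777]
```

Because the code has only 512 components while the label set has 10,000 entries, this illustrates how the output layer can shrink drastically relative to a one-hot output.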
2407.05650
The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks
We present a novel intelligent-system architecture called "Dynamic Net Architecture" (DNA) that relies on recurrence-stabilized networks and discuss it in application to vision. Our architecture models a (cerebral cortical) area wherein elementary feature neurons encode details of visual structures, and coherent nets of such neurons model holistic object structures. By interpreting smaller or larger coherent pieces of an area network as complex features, our model encodes hierarchical feature representations essentially different than artificial neural networks (ANNs). DNA models operate on a dynamic connectionism principle, wherein neural activations stemming from initial afferent signals undergo stabilization through a self-organizing mechanism facilitated by Hebbian plasticity alongside periodically tightening inhibition. In contrast to ANNs, which rely on feed-forward connections and backpropagation of error, we posit that this processing paradigm leads to highly robust representations, as by employing dynamic lateral connections, irrelevant details in neural activations are filtered out, freeing further processing steps from distracting noise and premature decisions. We empirically demonstrate the viability of the DNA by composing line fragments into longer lines and show that the construction of nets representing lines remains robust even with the introduction of up to $59\%$ noise at each spatial location. Furthermore, we demonstrate the model's capability to reconstruct anticipated features from partially obscured inputs and that it can generalize to patterns not observed during training. In this work, we limit the DNA to one cortical area and focus on its internals while providing insights into a standalone area's strengths and shortcomings. Additionally, we provide an outlook on how future work can implement invariant object recognition by combining multiple areas.
http://arxiv.org/pdf/2407.05650v1
[ "Pascal J. Sager", "Jan M. Deriu", "Benjamin F. Grewe", "Thilo Stadelmann", "Christoph von der Malsburg" ]
2024-07-08T06:22:10Z
2024-07-08T06:22:10Z
2407.05649
Graph Attention with Random Rewiring
Graph Neural Networks (GNNs) have become fundamental in graph-structured deep learning. Key paradigms of modern GNNs include message passing, graph rewiring, and Graph Transformers. This paper introduces Graph-Rewiring Attention with Stochastic Structures (GRASS), a novel GNN architecture that combines the advantages of these three paradigms. GRASS rewires the input graph by superimposing a random regular graph, enhancing long-range information propagation while preserving structural features of the input graph. It also employs a unique additive attention mechanism tailored for graph-structured data, providing a graph inductive bias while remaining computationally efficient. Our empirical evaluations demonstrate that GRASS achieves state-of-the-art performance on multiple benchmark datasets, confirming its practical efficacy.
http://arxiv.org/pdf/2407.05649v1
[ "Tongzhou Liao", "Barnabás Póczos" ]
2024-07-08T06:21:56Z
2024-07-08T06:21:56Z
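A minimal sketch of the rewiring step named in the GRASS abstract above (2407.05649), assuming networkx is available: superimposing a random regular graph on the input graph shortens effective distances while keeping the original edges. The attention mechanism and feature handling are not shown, and the degree chosen below is an arbitrary assumption.

```python
# Sketch: rewire an input graph by overlaying the edges of a random regular graph.
import networkx as nx

def rewire_with_random_regular(g: nx.Graph, degree: int = 3, seed: int = 0) -> nx.Graph:
    n = g.number_of_nodes()
    random_reg = nx.random_regular_graph(degree, n, seed=seed)
    # relabel the random graph onto the original node ids, then overlay its edges
    mapping = dict(zip(range(n), list(g.nodes())))
    random_reg = nx.relabel_nodes(random_reg, mapping)
    rewired = g.copy()
    rewired.add_edges_from(random_reg.edges())
    return rewired

g = nx.path_graph(100)                               # long chain: hard case for message passing
rewired = rewire_with_random_regular(g)
print(nx.diameter(g), "->", nx.diameter(rewired))    # the diameter drops sharply
```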
2407.05639
Deep Learning-based Anomaly Detection and Log Analysis for Computer Networks
Computer network anomaly detection and log analysis, as important topics in the field of network security, have been key tasks to ensure network security and system reliability. First, existing network anomaly detection and log analysis methods are often challenged by high-dimensional data and complex network topologies, resulting in unstable performance and high false-positive rates. In addition, traditional methods usually struggle to handle time-series data, which is crucial for anomaly detection and log analysis. Therefore, we need a more efficient and accurate method to cope with these problems. To compensate for the shortcomings of current methods, we propose an innovative fusion model that integrates Isolation Forest, GAN (Generative Adversarial Network), and Transformer with each other, and each of them plays a unique role. Isolation Forest is used to quickly identify anomalous data points, and GAN is used to generate synthetic data with the real data distribution characteristics to augment the training dataset, while the Transformer is used for modeling and context extraction on time series data. The synergy of these three components makes our model more accurate and robust in anomaly detection and log analysis tasks. We validate the effectiveness of this fusion model in an extensive experimental evaluation. Experimental results show that our model significantly improves the accuracy of anomaly detection while reducing the false alarm rate, which helps to detect potential network problems in advance. The model also performs well in the log analysis task and is able to quickly identify anomalous behaviors, which helps to improve the stability of the system. The significance of this study is that it introduces advanced deep learning techniques for anomaly detection and log analysis.
http://arxiv.org/pdf/2407.05639v1
[ "Shuzhan Wang", "Ruxue Jiang", "Zhaoqi Wang", "Yan Zhou" ]
2024-07-08T06:07:51Z
2024-07-08T06:07:51Z
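Of the three components named in the abstract above (2407.05639), the Isolation Forest stage is the easiest to illustrate. The sketch below assumes scikit-learn and uses made-up two-dimensional log features (throughput and error rate); the GAN-based augmentation and the Transformer modeling stages are omitted.

```python
# Sketch: flag anomalous log records with an Isolation Forest on toy features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 0.2], scale=[10, 0.05], size=(2000, 2))   # e.g. bytes/s, error rate
attacks = rng.normal(loc=[400, 0.9], scale=[30, 0.05], size=(20, 2))
X = np.vstack([normal, attacks])

detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = detector.fit_predict(X)                     # -1 = anomaly, 1 = normal
print("flagged rows:", np.flatnonzero(labels == -1))
```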
2405.09802
Analysis and Predictive Modeling of Solar Coronal Holes Using Computer Vision and LSTM Networks
In the era of space exploration, coronal holes on the sun play a significant role due to their impact on satellites and aircraft through their open magnetic fields and increased solar wind emissions. This study employs computer vision techniques to detect coronal hole regions and estimate their sizes using imagery from the Solar Dynamics Observatory (SDO). Additionally, we utilize deep learning methods, specifically Long Short-Term Memory (LSTM) networks, to analyze trends in the area of coronal holes and predict their areas across various solar regions over a span of seven days. By examining time series data, we aim to identify patterns in coronal hole behavior and understand their potential effects on space weather. This research enhances our ability to anticipate and prepare for space weather events that could affect Earth's technological systems.
http://arxiv.org/pdf/2405.09802v2
[ "Juyoung Yun", "Jungmin Shin" ]
2024-07-08T06:06:34Z
2024-05-16T04:21:09Z
2407.05633
AdaPI: Facilitating DNN Model Adaptivity for Efficient Private Inference in Edge Computing
Private inference (PI) has emerged as a promising solution to execute computations on encrypted data, safeguarding user privacy and model parameters in edge computing. However, existing PI methods are predominantly developed considering constant resource constraints, overlooking the varied and dynamic resource constraints in diverse edge devices, like energy budgets. Consequently, model providers have to design specialized models for different devices, where all of them have to be stored on the edge server, resulting in inefficient deployment. To fill this gap, this work presents AdaPI, a novel approach that achieves adaptive PI by allowing a model to perform well across edge devices with diverse energy budgets. AdaPI employs a PI-aware training strategy that optimizes the model weights alongside weight-level and feature-level soft masks. These soft masks are subsequently transformed into multiple binary masks to enable adjustments in communication and computation workloads. Through sequentially training the model with increasingly dense binary masks, AdaPI attains optimal accuracy for each energy budget, which outperforms the state-of-the-art PI methods by 7.3% in terms of test accuracy on CIFAR-100. The code of AdaPI can be accessed via https://github.com/jiahuiiiiii/AdaPI.
http://arxiv.org/pdf/2407.05633v1
[ "Tong Zhou", "Jiahui Zhao", "Yukui Luo", "Xi Xie", "Wujie Wen", "Caiwen Ding", "Xiaolin Xu" ]
2024-07-08T05:58:49Z
2024-07-08T05:58:49Z