arxiv_id
stringlengths 7
11
| title
stringlengths 7
243
| abstract
stringlengths 3
2.79k
| link
stringlengths 21
49
| authors
listlengths 1
451
| updated
stringlengths 20
20
| published
stringlengths 20
20
|
---|---|---|---|---|---|---|
2401.12476
|
Bayesian identification of nonseparable Hamiltonians with multiplicative
noise using deep learning and reduced-order modeling
|
This paper presents a structure-preserving Bayesian approach for learning nonseparable Hamiltonian systems using stochastic dynamic models allowing for statistically-dependent, vector-valued additive and multiplicative measurement noise. The approach is comprised of three main facets. First, we derive a Gaussian filter for a statistically-dependent, vector-valued, additive and multiplicative noise model that is needed to evaluate the likelihood within the Bayesian posterior. Second, we develop a novel algorithm for cost-effective application of Bayesian system identification to high-dimensional systems. Third, we demonstrate how structure-preserving methods can be incorporated into the proposed framework, using nonseparable Hamiltonians as an illustrative system class. We assess the method's performance based on the forecasting accuracy of a model estimated from-single trajectory data. We compare the Bayesian method to a state-of-the-art machine learning method on a canonical nonseparable Hamiltonian model and a chaotic double pendulum model with small, noisy training datasets. The results show that using the Bayesian posterior as a training objective can yield upwards of 724 times improvement in Hamiltonian mean squared error using training data with up to 10% multiplicative noise compared to a standard training objective. Lastly, we demonstrate the utility of the novel algorithm for parameter estimation of a 64-dimensional model of the spatially-discretized nonlinear Schr"odinger equation with data corrupted by up to 20% multiplicative noise.
|
http://arxiv.org/pdf/2401.12476v2
|
[
"Nicholas Galioto",
"Harsh Sharma",
"Boris Kramer",
"Alex Arkady Gorodetsky"
] |
2024-06-26T23:55:27Z
|
2024-01-23T04:05:26Z
|
2305.09605
|
To smooth a cloud or to pin it down: Guarantees and Insights on Score
Matching in Denoising Diffusion Models
|
Denoising diffusion models are a class of generative models which have recently achieved state-of-the-art results across many domains. Gradual noise is added to the data using a diffusion process, which transforms the data distribution into a Gaussian. Samples from the generative model are then obtained by simulating an approximation of the time reversal of this diffusion initialized by Gaussian samples. Recent research has explored adapting diffusion models for sampling and inference tasks. In this paper, we leverage known connections to stochastic control akin to the F"ollmer drift to extend established neural network approximation results for the F"ollmer drift to denoising diffusion models and samplers.
|
http://arxiv.org/pdf/2305.09605v3
|
[
"Francisco Vargas",
"Teodora Reu",
"Anna Kerekes",
"Michael M Bronstein"
] |
2024-06-26T23:41:36Z
|
2023-05-16T16:56:19Z
|
2311.06554
|
PGODE: Towards High-quality System Dynamics Modeling
|
This paper studies the problem of modeling multi-agent dynamical systems, where agents could interact mutually to influence their behaviors. Recent research predominantly uses geometric graphs to depict these mutual interactions, which are then captured by powerful graph neural networks (GNNs). However, predicting interacting dynamics in challenging scenarios such as out-of-distribution shift and complicated underlying rules remains unsolved. In this paper, we propose a new approach named Prototypical Graph ODE (PGODE) to address the problem. The core of PGODE is to incorporate prototype decomposition from contextual knowledge into a continuous graph ODE framework. Specifically, PGODE employs representation disentanglement and system parameters to extract both object-level and system-level contexts from historical trajectories, which allows us to explicitly model their independent influence and thus enhances the generalization capability under system changes. Then, we integrate these disentangled latent representations into a graph ODE model, which determines a combination of various interacting prototypes for enhanced model expressivity. The entire model is optimized using an end-to-end variational inference framework to maximize the likelihood. Extensive experiments in both in-distribution and out-of-distribution settings validate the superiority of PGODE compared to various baselines.
|
http://arxiv.org/pdf/2311.06554v2
|
[
"Xiao Luo",
"Yiyang Gu",
"Huiyu Jiang",
"Hang Zhou",
"Jinsheng Huang",
"Wei Ju",
"Zhiping Xiao",
"Ming Zhang",
"Yizhou Sun"
] |
2024-06-26T23:37:46Z
|
2023-11-11T12:04:47Z
|
2406.18787
|
Unified Uncertainties: Combining Input, Data and Model Uncertainty into
a Single Formulation
|
Modelling uncertainty in Machine Learning models is essential for achieving safe and reliable predictions. Most research on uncertainty focuses on output uncertainty (predictions), but minimal attention is paid to uncertainty at inputs. We propose a method for propagating uncertainty in the inputs through a Neural Network that is simultaneously able to estimate input, data, and model uncertainty. Our results show that this propagation of input uncertainty results in a more stable decision boundary even under large amounts of input noise than comparatively simple Monte Carlo sampling. Additionally, we discuss and demonstrate that input uncertainty, when propagated through the model, results in model uncertainty at the outputs. The explicit incorporation of input uncertainty may be beneficial in situations where the amount of input uncertainty is known, though good datasets for this are still needed.
|
http://arxiv.org/pdf/2406.18787v1
|
[
"Matias Valdenegro-Toro",
"Ivo Pascal de Jong",
"Marco Zullich"
] |
2024-06-26T23:13:45Z
|
2024-06-26T23:13:45Z
|
2406.18781
|
Learning to Remove Cuts in Integer Linear Programming
|
Cutting plane methods are a fundamental approach for solving integer linear programs (ILPs). In each iteration of such methods, additional linear constraints (cuts) are introduced to the constraint set with the aim of excluding the previous fractional optimal solution while not affecting the optimal integer solution. In this work, we explore a novel approach within cutting plane methods: instead of only adding new cuts, we also consider the removal of previous cuts introduced at any of the preceding iterations of the method under a learnable parametric criteria. We demonstrate that in fundamental combinatorial optimization settings such cut removal policies can lead to significant improvements over both human-based and machine learning-guided cut addition policies even when implemented with simple models.
|
http://arxiv.org/pdf/2406.18781v1
|
[
"Pol Puigdemont",
"Stratis Skoulakis",
"Grigorios Chrysos",
"Volkan Cevher"
] |
2024-06-26T22:50:43Z
|
2024-06-26T22:50:43Z
|
2406.18777
|
Aligning Model Properties via Conformal Risk Control
|
AI model alignment is crucial due to inadvertent biases in training data and the underspecified pipeline in modern machine learning, where numerous models with excellent test set metrics can be produced, yet they may not meet end-user requirements. Recent advances demonstrate that post-training model alignment via human feedback can address some of these challenges. However, these methods are often confined to settings (such as generative AI) where humans can interpret model outputs and provide feedback. In traditional non-generative settings, where model outputs are numerical values or classes, detecting misalignment through single-sample outputs is highly challenging. In this paper we consider an alternative strategy. We propose interpreting model alignment through property testing, defining an aligned model $f$ as one belonging to a subset $mathcal{P}$ of functions that exhibit specific desired behaviors. We focus on post-processing a pre-trained model $f$ to better align with $mathcal{P}$ using conformal risk control. Specifically, we develop a general procedure for converting queries for a given property $mathcal{P}$ to a collection of loss functions suitable for use in a conformal risk control algorithm. We prove a probabilistic guarantee that the resulting conformal interval around $f$ contains a function approximately satisfying $mathcal{P}$. Given the capabilities of modern AI models with extensive parameters and training data, one might assume alignment issues will resolve naturally. However, increasing training data or parameters in a random feature model doesn't eliminate the need for alignment techniques when pre-training data is biased. We demonstrate our alignment methodology on supervised learning datasets for properties like monotonicity and concavity. Our flexible procedure can be applied to various desired properties.
|
http://arxiv.org/pdf/2406.18777v1
|
[
"William Overman",
"Jacqueline Jil Vallon",
"Mohsen Bayati"
] |
2024-06-26T22:24:46Z
|
2024-06-26T22:24:46Z
|
2406.14815
|
Latent diffusion models for parameterization and data assimilation of
facies-based geomodels
|
Geological parameterization entails the representation of a geomodel using a small set of latent variables and a mapping from these variables to grid-block properties such as porosity and permeability. Parameterization is useful for data assimilation (history matching), as it maintains geological realism while reducing the number of variables to be determined. Diffusion models are a new class of generative deep-learning procedures that have been shown to outperform previous methods, such as generative adversarial networks, for image generation tasks. Diffusion models are trained to "denoise", which enables them to generate new geological realizations from input fields characterized by random noise. Latent diffusion models, which are the specific variant considered in this study, provide dimension reduction through use of a low-dimensional latent variable. The model developed in this work includes a variational autoencoder for dimension reduction and a U-net for the denoising process. Our application involves conditional 2D three-facies (channel-levee-mud) systems. The latent diffusion model is shown to provide realizations that are visually consistent with samples from geomodeling software. Quantitative metrics involving spatial and flow-response statistics are evaluated, and general agreement between the diffusion-generated models and reference realizations is observed. Stability tests are performed to assess the smoothness of the parameterization method. The latent diffusion model is then used for ensemble-based data assimilation. Two synthetic "true" models are considered. Significant uncertainty reduction, posterior P$_{10}$-P$_{90}$ forecasts that generally bracket observed data, and consistent posterior geomodels, are achieved in both cases.
|
http://arxiv.org/pdf/2406.14815v2
|
[
"Guido Di Federico",
"Louis J. Durlofsky"
] |
2024-06-26T22:23:23Z
|
2024-06-21T01:32:03Z
|
2308.05239
|
Machine Learning-Enabled Software and System Architecture Frameworks
|
Various architecture frameworks for software, systems, and enterprises have been proposed in the literature. They identified several stakeholders and defined modeling perspectives, architecture viewpoints, and views to frame and address stakeholder concerns. However, the stakeholders with data science and Machine Learning (ML) related concerns, such as data scientists and data engineers, are yet to be included in existing architecture frameworks. Only this way can we envision a holistic system architecture description of an ML-enabled system. Note that the ML component behavior and functionalities are special and should be distinguished from traditional software system behavior and functionalities. The main reason is that the actual functionality should be inferred from data instead of being specified at design time. Additionally, the structural models of ML components, such as ML model architectures, are typically specified using different notations and formalisms from what the Software Engineering (SE) community uses for software structural models. Yet, these two aspects, namely ML and non-ML, are becoming so intertwined that it necessitates an extension of software architecture frameworks and modeling practices toward supporting ML-enabled system architectures. In this paper, we address this gap through an empirical study using an online survey instrument. We surveyed 61 subject matter experts from over 25 organizations in 10 countries.
|
http://arxiv.org/pdf/2308.05239v2
|
[
"Armin Moin",
"Atta Badii",
"Stephan Günnemann",
"Moharram Challenger"
] |
2024-06-26T22:09:04Z
|
2023-08-09T21:54:34Z
|
2305.11957
|
Towards understanding neural collapse in supervised contrastive learning
with the information bottleneck method
|
Neural collapse describes the geometry of activation in the final layer of a deep neural network when it is trained beyond performance plateaus. Open questions include whether neural collapse leads to better generalization and, if so, why and how training beyond the plateau helps. We model neural collapse as an information bottleneck (IB) problem in order to investigate whether such a compact representation exists and discover its connection to generalization. We demonstrate that neural collapse leads to good generalization specifically when it approaches an optimal IB solution of the classification problem. Recent research has shown that two deep neural networks independently trained with the same contrastive loss objective are linearly identifiable, meaning that the resulting representations are equivalent up to a matrix transformation. We leverage linear identifiability to approximate an analytical solution of the IB problem. This approximation demonstrates that when class means exhibit $K$-simplex Equiangular Tight Frame (ETF) behavior (e.g., $K$=10 for CIFAR10 and $K$=100 for CIFAR100), they coincide with the critical phase transitions of the corresponding IB problem. The performance plateau occurs once the optimal solution for the IB problem includes all of these phase transitions. We also show that the resulting $K$-simplex ETF can be packed into a $K$-dimensional Gaussian distribution using supervised contrastive learning with a ResNet50 backbone. This geometry suggests that the $K$-simplex ETF learned by supervised contrastive learning approximates the optimal features for source coding. Hence, there is a direct correspondence between optimal IB solutions and generalization in contrastive learning.
|
http://arxiv.org/pdf/2305.11957v2
|
[
"Siwei Wang",
"Stephanie E Palmer"
] |
2024-06-26T21:52:52Z
|
2023-05-19T18:41:17Z
|
2406.18770
|
ADO-LLM: Analog Design Bayesian Optimization with In-Context Learning of
Large Language Models
|
Analog circuit design requires substantial human expertise and involvement, which is a significant roadblock to design productivity. Bayesian Optimization (BO), a popular machine learning based optimization strategy, has been leveraged to automate analog design given its applicability across various circuit topologies and technologies. Traditional BO methods employ black box Gaussian Process surrogate models and optimized labeled data queries to find optimization solutions by trading off between exploration and exploitation. However, the search for the optimal design solution in BO can be expensive from both a computational and data usage point of view, particularly for high dimensional optimization problems. This paper presents ADO-LLM, the first work integrating large language models (LLMs) with Bayesian Optimization for analog design optimization. ADO-LLM leverages the LLM's ability to infuse domain knowledge to rapidly generate viable design points to remedy BO's inefficiency in finding high value design areas specifically under the limited design space coverage of the BO's probabilistic surrogate model. In the meantime, sampling of design points evaluated in the iterative BO process provides quality demonstrations for the LLM to generate high quality design points while leveraging infused broad design knowledge. Furthermore, the diversity brought by BO's exploration enriches the contextual understanding of the LLM and allows it to more broadly search in the design space and prevent repetitive and redundant suggestions. We evaluate the proposed framework on two different types of analog circuits and demonstrate notable improvements in design efficiency and effectiveness.
|
http://arxiv.org/pdf/2406.18770v1
|
[
"Yuxuan Yin",
"Yu Wang",
"Boxun Xu",
"Peng Li"
] |
2024-06-26T21:42:50Z
|
2024-06-26T21:42:50Z
|
2406.18765
|
WV-Net: A foundation model for SAR WV-mode satellite imagery trained
using contrastive self-supervised learning on 10 million images
|
The European Space Agency's Copernicus Sentinel-1 (S-1) mission is a constellation of C-band synthetic aperture radar (SAR) satellites that provide unprecedented monitoring of the world's oceans. S-1's wave mode (WV) captures 20x20 km image patches at 5 m pixel resolution and is unaffected by cloud cover or time-of-day. The mission's open data policy has made SAR data easily accessible for a range of applications, but the need for manual image annotations is a bottleneck that hinders the use of machine learning methods. This study uses nearly 10 million WV-mode images and contrastive self-supervised learning to train a semantic embedding model called WV-Net. In multiple downstream tasks, WV-Net outperforms a comparable model that was pre-trained on natural images (ImageNet) with supervised learning. Experiments show improvements for estimating wave height (0.50 vs 0.60 RMSE using linear probing), estimating near-surface air temperature (0.90 vs 0.97 RMSE), and performing multilabel-classification of geophysical and atmospheric phenomena (0.96 vs 0.95 micro-averaged AUROC). WV-Net embeddings are also superior in an unsupervised image-retrieval task and scale better in data-sparse settings. Together, these results demonstrate that WV-Net embeddings can support geophysical research by providing a convenient foundation model for a variety of data analysis and exploration tasks.
|
http://arxiv.org/pdf/2406.18765v1
|
[
"Yannik Glaser",
"Justin E. Stopa",
"Linnea M. Wolniewicz",
"Ralph Foster",
"Doug Vandemark",
"Alexis Mouche",
"Bertrand Chapron",
"Peter Sadowski"
] |
2024-06-26T21:30:41Z
|
2024-06-26T21:30:41Z
|
2406.18763
|
Conformalized Link Prediction on Graph Neural Networks
|
Graph Neural Networks (GNNs) excel in diverse tasks, yet their applications in high-stakes domains are often hampered by unreliable predictions. Although numerous uncertainty quantification methods have been proposed to address this limitation, they often lack textit{rigorous} uncertainty estimates. This work makes the first attempt to introduce a distribution-free and model-agnostic uncertainty quantification approach to construct a predictive interval with a statistical guarantee for GNN-based link prediction. We term it as textit{conformalized link prediction.} Our approach builds upon conformal prediction (CP), a framework that promises to construct statistically robust prediction sets or intervals. We first theoretically and empirically establish a permutation invariance condition for the application of CP in link prediction tasks, along with an exact test-time coverage. Leveraging the important structural information in graphs, we then identify a novel and crucial connection between a graph's adherence to the power law distribution and the efficiency of CP. This insight leads to the development of a simple yet effective sampling-based method to align the graph structure with a power law distribution prior to the standard CP procedure. Extensive experiments demonstrate that for conformalized link prediction, our approach achieves the desired marginal coverage while significantly improving the efficiency of CP compared to baseline methods.
|
http://arxiv.org/pdf/2406.18763v1
|
[
"Tianyi Zhao",
"Jian Kang",
"Lu Cheng"
] |
2024-06-26T21:17:37Z
|
2024-06-26T21:17:37Z
|
2402.04929
|
Source-Free Domain Adaptation with Diffusion-Guided Source Data
Generation
|
This paper introduces a novel approach to leverage the generalizability of Diffusion Models for Source-Free Domain Adaptation (DM-SFDA). Our proposed DMSFDA method involves fine-tuning a pre-trained text-to-image diffusion model to generate source domain images using features from the target images to guide the diffusion process. Specifically, the pre-trained diffusion model is fine-tuned to generate source samples that minimize entropy and maximize confidence for the pre-trained source model. We then use a diffusion model-based image mixup strategy to bridge the domain gap between the source and target domains. We validate our approach through comprehensive experiments across a range of datasets, including Office-31, Office-Home, and VisDA. The results demonstrate significant improvements in SFDA performance, highlighting the potential of diffusion models in generating contextually relevant, domain-specific images.
|
http://arxiv.org/pdf/2402.04929v3
|
[
"Shivang Chopra",
"Suraj Kothawade",
"Houda Aynaou",
"Aman Chadha"
] |
2024-06-26T20:57:15Z
|
2024-02-07T14:56:13Z
|
2406.17173
|
Diff3Dformer: Leveraging Slice Sequence Diffusion for Enhanced 3D CT
Classification with Transformer Networks
|
The manifestation of symptoms associated with lung diseases can vary in different depths for individual patients, highlighting the significance of 3D information in CT scans for medical image classification. While Vision Transformer has shown superior performance over convolutional neural networks in image classification tasks, their effectiveness is often demonstrated on sufficiently large 2D datasets and they easily encounter overfitting issues on small medical image datasets. To address this limitation, we propose a Diffusion-based 3D Vision Transformer (Diff3Dformer), which utilizes the latent space of the Diffusion model to form the slice sequence for 3D analysis and incorporates clustering attention into ViT to aggregate repetitive information within 3D CT scans, thereby harnessing the power of the advanced transformer in 3D classification tasks on small datasets. Our method exhibits improved performance on two different scales of small datasets of 3D lung CT scans, surpassing the state of the art 3D methods and other transformer-based approaches that emerged during the COVID-19 pandemic, demonstrating its robust and superior performance across different scales of data. Experimental results underscore the superiority of our proposed method, indicating its potential for enhancing medical image classification tasks in real-world scenarios.
|
http://arxiv.org/pdf/2406.17173v2
|
[
"Zihao Jin",
"Yingying Fang",
"Jiahao Huang",
"Caiwen Xu",
"Simon Walsh",
"Guang Yang"
] |
2024-06-26T20:54:45Z
|
2024-06-24T23:23:18Z
|
2404.03828
|
Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
|
We introduce an Outlier-Efficient Modern Hopfield Model (termed $mathrm{OutEffHop}$) and use it to address the outlier inefficiency problem of {training} gigantic transformer-based models. Our main contribution is a novel associative memory model facilitating textit{outlier-efficient} associative memory retrievals. Interestingly, this memory model manifests a model-based interpretation of an outlier-efficient attention mechanism (${rm Softmax}_1$): it is an approximation of the memory retrieval process of $mathrm{OutEffHop}$. Methodologically, this allows us to introduce novel outlier-efficient Hopfield layers as powerful alternatives to traditional attention mechanisms, with superior post-quantization performance. Theoretically, the Outlier-Efficient Modern Hopfield Model retains and improves the desirable properties of standard modern Hopfield models, including fixed point convergence and exponential storage capacity. Empirically, we demonstrate the efficacy of the proposed model across large-scale transformer-based and Hopfield-based models (including BERT, OPT, ViT, and STanHop-Net), benchmarking against state-of-the-art methods like $mathtt{Clipped_Softmax}$ and $mathtt{Gated_Attention}$. Notably, $mathrm{OutEffHop}$ achieves an average reduction of 22+% in average kurtosis and 26+% in the maximum infinity norm of model outputs across four models. Code is available at href{https://github.com/MAGICS-LAB/OutEffHop}{GitHub}; models are on href{https://huggingface.co/collections/magicslabnu/outeffhop-6610fcede8d2cda23009a98f}{Hugging Face Hub}; future updates are on href{https://arxiv.org/abs/2404.03828}{arXiv}.
|
http://arxiv.org/pdf/2404.03828v2
|
[
"Jerry Yao-Chieh Hu",
"Pei-Hsuan Chang",
"Robin Luo",
"Hong-Yu Chen",
"Weijian Li",
"Wei-Po Wang",
"Han Liu"
] |
2024-06-26T20:50:18Z
|
2024-04-04T23:08:43Z
|
2403.05750
|
Decoding the AI Pen: Techniques and Challenges in Detecting AI-Generated
Text
|
Large Language Models (LLMs) have revolutionized the field of Natural Language Generation (NLG) by demonstrating an impressive ability to generate human-like text. However, their widespread usage introduces challenges that necessitate thoughtful examination, ethical scrutiny, and responsible practices. In this study, we delve into these challenges, explore existing strategies for mitigating them, with a particular emphasis on identifying AI-generated text as the ultimate solution. Additionally, we assess the feasibility of detection from a theoretical perspective and propose novel research directions to address the current limitations in this domain.
|
http://arxiv.org/abs/2403.05750v3
|
[
"Sara Abdali",
"Richard Anarfi",
"CJ Barberan",
"Jia He"
] |
2024-06-26T20:49:32Z
|
2024-03-09T01:13:54Z
|
2406.18752
|
Competitive Algorithms for Online Knapsack with Succinct Predictions
|
In the online knapsack problem, the goal is to pack items arriving online with different values and weights into a capacity-limited knapsack to maximize the total value of the accepted items. We study textit{learning-augmented} algorithms for this problem, which aim to use machine-learned predictions to move beyond pessimistic worst-case guarantees. Existing learning-augmented algorithms for online knapsack consider relatively complicated prediction models that give an algorithm substantial information about the input, such as the total weight of items at each value. In practice, such predictions can be error-sensitive and difficult to learn. Motivated by this limitation, we introduce a family of learning-augmented algorithms for online knapsack that use emph{succinct predictions}. In particular, the machine-learned prediction given to the algorithm is just a single value or interval that estimates the minimum value of any item accepted by an offline optimal solution. By leveraging a relaxation to online emph{fractional} knapsack, we design algorithms that can leverage such succinct predictions in both the trusted setting (i.e., with perfect prediction) and the untrusted setting, where we prove that a simple meta-algorithm achieves a nearly optimal consistency-robustness trade-off. Empirically, we show that our algorithms significantly outperform baselines that do not use predictions and often outperform algorithms based on more complex prediction models.
|
http://arxiv.org/pdf/2406.18752v1
|
[
"Mohammadreza Daneshvaramoli",
"Helia Karisani",
"Adam Lechowicz",
"Bo Sun",
"Cameron Musco",
"Mohammad Hajiesmaili"
] |
2024-06-26T20:38:00Z
|
2024-06-26T20:38:00Z
|
2406.18747
|
A Stem-Agnostic Single-Decoder System for Music Source Separation Beyond
Four Stems
|
Despite significant recent progress across multiple subtasks of audio source separation, few music source separation systems support separation beyond the four-stem vocals, drums, bass, and other (VDBO) setup. Of the very few current systems that support source separation beyond this setup, most continue to rely on an inflexible decoder setup that can only support a fixed pre-defined set of stems. Increasing stem support in these inflexible systems correspondingly requires increasing computational complexity, rendering extensions of these systems computationally infeasible for long-tail instruments. In this work, we propose Banquet, a system that allows source separation of multiple stems using just one decoder. A bandsplit source separation model is extended to work in a query-based setup in tandem with a music instrument recognition PaSST model. On the MoisesDB dataset, Banquet, at only 24.9 M trainable parameters, approached the performance level of the significantly more complex 6-stem Hybrid Transformer Demucs on VDBO stems and outperformed it on guitar and piano. The query-based setup allows for the separation of narrow instrument classes such as clean acoustic guitars, and can be successfully applied to the extraction of less common stems such as reeds and organs. Implementation is available at https://github.com/kwatcharasupat/query-bandit.
|
http://arxiv.org/pdf/2406.18747v1
|
[
"Karn N. Watcharasupat",
"Alexander Lerch"
] |
2024-06-26T20:25:53Z
|
2024-06-26T20:25:53Z
|
2311.06153
|
GRAM: An Interpretable Approach for Graph Anomaly Detection using
Gradient Attention Maps
|
Detecting unusual patterns in graph data is a crucial task in data mining. However, existing methods face challenges in consistently achieving satisfactory performance and often lack interpretability, which hinders our understanding of anomaly detection decisions. In this paper, we propose a novel approach to graph anomaly detection that leverages the power of interpretability to enhance performance. Specifically, our method extracts an attention map derived from gradients of graph neural networks, which serves as a basis for scoring anomalies. Notably, our approach is flexible and can be used in various anomaly detection settings. In addition, we conduct theoretical analysis using synthetic data to validate our method and gain insights into its decision-making process. To demonstrate the effectiveness of our method, we extensively evaluate our approach against state-of-the-art graph anomaly detection techniques on real-world graph classification and wireless network datasets. The results consistently demonstrate the superior performance of our method compared to the baselines.
|
http://arxiv.org/abs/2311.06153v2
|
[
"Yifei Yang",
"Peng Wang",
"Xiaofan He",
"Dongmian Zou"
] |
2024-06-26T20:24:08Z
|
2023-11-10T16:14:21Z
|
2405.10443
|
Simultaneous Masking, Not Prompting Optimization: A Paradigm Shift in
Fine-tuning LLMs for Simultaneous Translation
|
Large language models (LLMs) have achieved state-of-the-art performance in various language processing tasks, motivating their adoption in simultaneous translation. Current fine-tuning methods to adapt LLMs for simultaneous translation focus on prompting optimization strategies using either data augmentation or prompt structure modifications. However, these methods suffer from several issues, such as unnecessarily expanded training sets, computational inefficiency from dumping the key and value cache, increased prompt sizes, or restriction to a single decision policy. To eliminate these issues, in this work, we propose SimulMask, a new paradigm for fine-tuning LLMs for simultaneous translation. It utilizes a novel attention mask approach that models simultaneous translation during fine-tuning by masking attention for a desired decision policy. Applying the proposed SimulMask on a Falcon LLM for the IWSLT 2017 dataset, we have observed a significant translation quality improvement compared to state-of-the-art prompting optimization strategies on five language pairs while reducing the computational cost.
|
http://arxiv.org/pdf/2405.10443v2
|
[
"Matthew Raffel",
"Victor Agostinelli",
"Lizhong Chen"
] |
2024-06-26T20:22:31Z
|
2024-05-16T21:07:42Z
|
2406.18745
|
QBI: Quantile-based Bias Initialization for Efficient Private Data
Reconstruction in Federated Learning
|
Federated learning enables the training of machine learning models on distributed data without compromising user privacy, as data remains on personal devices and only model updates, such as gradients, are shared with a central coordinator. However, recent research has shown that the central entity can perfectly reconstruct private data from shared model updates by maliciously initializing the model's parameters. In this paper, we propose QBI, a novel bias initialization method that significantly enhances reconstruction capabilities. This is accomplished by directly solving for bias values yielding sparse activation patterns. Further, we propose PAIRS, an algorithm that builds on QBI. PAIRS can be deployed when a separate dataset from the target domain is available to further increase the percentage of data that can be fully recovered. Measured by the percentage of samples that can be perfectly reconstructed from batches of various sizes, our approach achieves significant improvements over previous methods with gains of up to 50% on ImageNet and up to 60% on the IMDB sentiment analysis text dataset. Furthermore, we establish theoretical limits for attacks leveraging stochastic gradient sparsity, providing a foundation for understanding the fundamental constraints of these attacks. We empirically assess these limits using synthetic datasets. Finally, we propose and evaluate AGGP, a defensive framework designed to prevent gradient sparsity attacks, contributing to the development of more secure and private federated learning systems.
|
http://arxiv.org/pdf/2406.18745v1
|
[
"Micha V. Nowak",
"Tim P. Bott",
"David Khachaturov",
"Frank Puppe",
"Adrian Krenzer",
"Amar Hekalo"
] |
2024-06-26T20:19:32Z
|
2024-06-26T20:19:32Z
|
2406.18741
|
Decentralized Semantic Traffic Control in AVs Using RL and DQN for
Dynamic Roadblocks
|
Autonomous Vehicles (AVs), furnished with sensors capable of capturing essential vehicle dynamics such as speed, acceleration, and precise location, possess the capacity to execute intelligent maneuvers, including lane changes, in anticipation of approaching roadblocks. Nevertheless, the sheer volume of sensory data and the processing necessary to derive informed decisions can often overwhelm the vehicles, rendering them unable to handle the task independently. Consequently, a common approach in traffic scenarios involves transmitting the data to servers for processing, a practice that introduces challenges, particularly in situations demanding real-time processing. In response to this challenge, we present a novel DL-based semantic traffic control system that entrusts semantic encoding responsibilities to the vehicles themselves. This system processes driving decisions obtained from a Reinforcement Learning (RL) agent, streamlining the decision-making process. Specifically, our framework envisions scenarios where abrupt roadblocks materialize due to factors such as road maintenance, accidents, or vehicle repairs, necessitating vehicles to make determinations concerning lane-keeping or lane-changing actions to navigate past these obstacles. To formulate this scenario mathematically, we employ a Markov Decision Process (MDP) and harness the Deep Q Learning (DQN) algorithm to unearth viable solutions.
|
http://arxiv.org/pdf/2406.18741v1
|
[
"Emanuel Figetakis",
"Yahuza Bello",
"Ahmed Refaey",
"Abdallah Shami"
] |
2024-06-26T20:12:48Z
|
2024-06-26T20:12:48Z
|
2406.18739
|
RetroGFN: Diverse and Feasible Retrosynthesis using GFlowNets
|
Single-step retrosynthesis aims to predict a set of reactions that lead to the creation of a target molecule, which is a crucial task in molecular discovery. Although a target molecule can often be synthesized with multiple different reactions, it is not clear how to verify the feasibility of a reaction, because the available datasets cover only a tiny fraction of the possible solutions. Consequently, the existing models are not encouraged to explore the space of possible reactions sufficiently. In this paper, we propose a novel single-step retrosynthesis model, RetroGFN, that can explore outside the limited dataset and return a diverse set of feasible reactions by leveraging a feasibility proxy model during the training. We show that RetroGFN achieves competitive results on standard top-k accuracy while outperforming existing methods on round-trip accuracy. Moreover, we provide empirical arguments in favor of using round-trip accuracy which expands the notion of feasibility with respect to the standard top-k accuracy metric.
|
http://arxiv.org/pdf/2406.18739v1
|
[
"Piotr Gaiński",
"Michał Koziarski",
"Krzysztof Maziarz",
"Marwin Segler",
"Jacek Tabor",
"Marek Śmieja"
] |
2024-06-26T20:10:03Z
|
2024-06-26T20:10:03Z
|
2406.18726
|
Data-driven identification of port-Hamiltonian DAE systems by Gaussian
processes
|
Port-Hamiltonian systems (pHS) allow for a structure-preserving modeling of dynamical systems. Coupling pHS via linear relations between input and output defines an overall pHS, which is structure preserving. However, in multiphysics applications, some subsystems do not allow for a physical pHS description, as (a) this is not available or (b) too expensive. Here, data-driven approaches can be used to deliver a pHS for such subsystems, which can then be coupled to the other subsystems in a structure-preserving way. In this work, we derive a data-driven identification approach for port-Hamiltonian differential algebraic equation (DAE) systems. The approach uses input and state space data to estimate nonlinear effort functions of pH-DAEs. As underlying technique, we us (multi-task) Gaussian processes. This work thereby extends over the current state of the art, in which only port-Hamiltonian ordinary differential equation systems could be identified via Gaussian processes. We apply this approach successfully to two applications from network design and constrained multibody system dynamics, based on pH-DAE system of index one and three, respectively.
|
http://arxiv.org/pdf/2406.18726v1
|
[
"Peter Zaspel",
"Michael Günther"
] |
2024-06-26T19:51:53Z
|
2024-06-26T19:51:53Z
|
2406.18725
|
Jailbreaking LLMs with Arabic Transliteration and Arabizi
|
This study identifies the potential vulnerabilities of Large Language Models (LLMs) to 'jailbreak' attacks, specifically focusing on the Arabic language and its various forms. While most research has concentrated on English-based prompt manipulation, our investigation broadens the scope to investigate the Arabic language. We initially tested the AdvBench benchmark in Standardized Arabic, finding that even with prompt manipulation techniques like prefix injection, it was insufficient to provoke LLMs into generating unsafe content. However, when using Arabic transliteration and chatspeak (or arabizi), we found that unsafe content could be produced on platforms like OpenAI GPT-4 and Anthropic Claude 3 Sonnet. Our findings suggest that using Arabic and its various forms could expose information that might remain hidden, potentially increasing the risk of jailbreak attacks. We hypothesize that this exposure could be due to the model's learned connection to specific words, highlighting the need for more comprehensive safety training across all language forms.
|
http://arxiv.org/pdf/2406.18725v1
|
[
"Mansour Al Ghanim",
"Saleh Almohaimeed",
"Mengxin Zheng",
"Yan Solihin",
"Qian Lou"
] |
2024-06-26T19:48:48Z
|
2024-06-26T19:48:48Z
|
2211.11695
|
Disentangled Representation Learning
|
Disentangled Representation Learning (DRL) aims to learn a model capable of identifying and disentangling the underlying factors hidden in the observable data in representation form. The process of separating underlying factors of variation into variables with semantic meaning benefits in learning explainable representations of data, which imitates the meaningful understanding process of humans when observing an object or relation. As a general learning strategy, DRL has demonstrated its power in improving the model explainability, controlability, robustness, as well as generalization capacity in a wide range of scenarios such as computer vision, natural language processing, and data mining. In this article, we comprehensively investigate DRL from various aspects including motivations, definitions, methodologies, evaluations, applications, and model designs. We first present two well-recognized definitions, i.e., Intuitive Definition and Group Theory Definition for disentangled representation learning. We further categorize the methodologies for DRL into four groups from the following perspectives, the model type, representation structure, supervision signal, and independence assumption. We also analyze principles to design different DRL models that may benefit different tasks in practical applications. Finally, we point out challenges in DRL as well as potential research directions deserving future investigations. We believe this work may provide insights for promoting the DRL research in the community.
|
http://arxiv.org/pdf/2211.11695v4
|
[
"Xin Wang",
"Hong Chen",
"Si'ao Tang",
"Zihao Wu",
"Wenwu Zhu"
] |
2024-06-26T19:29:39Z
|
2022-11-21T18:14:38Z
|
2406.10426
|
Towards Neural Scaling Laws for Foundation Models on Temporal Graphs
|
The field of temporal graph learning aims to learn from evolving network data to forecast future interactions. Given a collection of observed temporal graphs, is it possible to predict the evolution of an unseen network from the same domain? To answer this question, we first present the Temporal Graph Scaling (TGS) dataset, a large collection of temporal graphs consisting of eighty-four ERC20 token transaction networks collected from 2017 to 2023. Next, we evaluate the transferability of Temporal Graph Neural Networks (TGNNs) for the temporal graph property prediction task by pre-training on a collection of up to sixty-four token transaction networks and then evaluating the downstream performance on twenty unseen token networks. We find that the neural scaling law observed in NLP and Computer Vision also applies in temporal graph learning, where pre-training on greater number of networks leads to improved downstream performance. To the best of our knowledge, this is the first empirical demonstration of the transferability of temporal graphs learning. On downstream token networks, the largest pre-trained model outperforms single model TGNNs on thirteen unseen test networks. Therefore, we believe that this is a promising first step towards building foundation models for temporal graphs.
|
http://arxiv.org/pdf/2406.10426v2
|
[
"Razieh Shirzadkhani",
"Tran Gia Bao Ngo",
"Kiarash Shamsi",
"Shenyang Huang",
"Farimah Poursafaei",
"Poupak Azad",
"Reihaneh Rabbany",
"Baris Coskunuzer",
"Guillaume Rabusseau",
"Cuneyt Gurcan Akcora"
] |
2024-06-26T19:26:58Z
|
2024-06-14T22:07:11Z
|
2406.18708
|
Learn it or Leave it: Module Composition and Pruning for Continual
Learning
|
In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned. While pretrained language models have shown impressive capabilities on various static tasks, applying them to continual learning poses significant challenges, including avoiding catastrophic forgetting, facilitating knowledge transfer, and maintaining parameter efficiency. In this paper, we introduce MoCL-P, a novel lightweight continual learning method that addresses these challenges simultaneously. Unlike traditional approaches that continuously expand parameters for newly arriving tasks, MoCL-P integrates task representation-guided module composition with adaptive pruning, effectively balancing knowledge integration and computational overhead. Our evaluation across three continual learning benchmarks with up to 176 tasks shows that MoCL-P achieves state-of-the-art performance and improves parameter efficiency by up to three times, demonstrating its potential for practical applications where resource requirements are constrained.
|
http://arxiv.org/pdf/2406.18708v1
|
[
"Mingyang Wang",
"Heike Adel",
"Lukas Lange",
"Jannik Strötgen",
"Hinrich Schütze"
] |
2024-06-26T19:18:28Z
|
2024-06-26T19:18:28Z
|
2406.18701
|
Fast Optimizer Benchmark
|
In this paper, we present the Fast Optimizer Benchmark (FOB), a tool designed for evaluating deep learning optimizers during their development. The benchmark supports tasks from multiple domains such as computer vision, natural language processing, and graph learning. The focus is on convenient usage, featuring human-readable YAML configurations, SLURM integration, and plotting utilities. FOB can be used together with existing hyperparameter optimization (HPO) tools as it handles training and resuming of runs. The modular design enables integration into custom pipelines, using it simply as a collection of tasks. We showcase an optimizer comparison as a usage example of our tool. FOB can be found on GitHub: https://github.com/automl/FOB.
|
http://arxiv.org/pdf/2406.18701v1
|
[
"Simon Blauth",
"Tobias Bürger",
"Zacharias Häringer",
"Jörg Franke",
"Frank Hutter"
] |
2024-06-26T19:10:34Z
|
2024-06-26T19:10:34Z
|
2406.18695
|
Learning to Correct for QA Reasoning with Black-box LLMs
|
An open challenge in recent machine learning is about how to improve the reasoning capability of large language models (LLMs) in a black-box setting, i.e., without access to detailed information such as output token probabilities. Existing approaches either rely on accessibility (which is often unrealistic) or involve significantly increased train- and inference-time costs. This paper addresses those limitations or shortcomings by proposing a novel approach, namely CoBB (Correct for improving QA reasoning of Black-Box LLMs). It uses a trained adaptation model to perform a seq2seq mapping from the often-imperfect reasonings of the original black-box LLM to the correct or improved reasonings. Specifically, the adaptation model is initialized with a relatively small open-source LLM and adapted over a collection of sub-sampled training pairs. To select the representative pairs of correct and incorrect reasonings, we formulated the dataset construction as an optimization problem that minimizes the statistical divergence between the sampled subset and the entire collection, and solved it via a genetic algorithm. We then train the adaptation model over the sampled pairs by contrasting the likelihoods of correct and incorrect reasonings. Our experimental results demonstrate that CoBB significantly improves reasoning accuracy across various QA benchmarks, compared to the best-performing adaptation baselines.
|
http://arxiv.org/pdf/2406.18695v1
|
[
"Jaehyung Kim",
"Dongyoung Kim",
"Yiming Yang"
] |
2024-06-26T18:57:32Z
|
2024-06-26T18:57:32Z
|
2405.18641
|
Lazy Safety Alignment for Large Language Models against Harmful
Fine-tuning
|
Recent studies show that Large Language Models (LLMs) with safety alignment can be jail-broken by fine-tuning on a dataset mixed with harmful data. First time in the literature, we show that the jail-broken effect can be mitigated by separating states in the finetuning stage to optimize the alignment and user datasets. Unfortunately, our subsequent study shows that this simple Bi-State Optimization (BSO) solution experiences convergence instability when steps invested in its alignment state is too small, leading to downgraded alignment performance. By statistical analysis, we show that the textit{excess drift} towards consensus could be a probable reason for the instability. To remedy this issue, we propose textbf{L}azy(textbf{i}) textbf{s}afety textbf{a}lignment (textbf{Lisa}), which introduces a proximal term to constraint the drift of each state. Theoretically, the benefit of the proximal term is supported by the convergence analysis, wherein we show that a sufficient large proximal factor is necessary to guarantee Lisa's convergence. Empirically, our results on four downstream finetuning tasks show that Lisa with a proximal term can significantly increase alignment performance while maintaining the LLM's accuracy on the user tasks. Code is available at url{https://github.com/git-disl/Lisa}.
|
http://arxiv.org/pdf/2405.18641v4
|
[
"Tiansheng Huang",
"Sihao Hu",
"Fatih Ilhan",
"Selim Furkan Tekin",
"Ling Liu"
] |
2024-06-26T18:54:59Z
|
2024-05-28T22:53:43Z
|
2406.18690
|
Petal-X: Human-Centered Visual Explanations to Improve Cardiovascular
Risk Communication
|
Cardiovascular diseases (CVDs), the leading cause of death worldwide, can be prevented in most cases through behavioral interventions. Therefore, effective communication of CVD risk and projected risk reduction by risk factor modification plays a crucial role in reducing CVD risk at the individual level. However, despite interest in refining risk estimation with improved prediction models such as SCORE2, the guidelines for presenting these risk estimations in clinical practice remained essentially unchanged in the last few years, with graphical score charts (GSCs) continuing to be one of the prevalent systems. This work describes the design and implementation of Petal-X, a novel tool to support clinician-patient shared decision-making by explaining the CVD risk contributions of different factors and facilitating what-if analysis. Petal-X relies on a novel visualization, Petal Product Plots, and a tailor-made global surrogate model of SCORE2, whose fidelity is comparable to that of the GSCs used in clinical practice. We evaluated Petal-X compared to GSCs in a controlled experiment with 88 healthcare students, all but one with experience with chronic patients. The results show that Petal-X outperforms GSC in critical tasks, such as comparing the contribution to the patient's 10-year CVD risk of each modifiable risk factor, without a significant loss of perceived transparency, trust, or intent to use. Our study provides an innovative approach to the visualization and explanation of risk in clinical practice that, due to its model-agnostic nature, could continue to support next-generation artificial intelligence risk assessment models.
|
http://arxiv.org/pdf/2406.18690v1
|
[
"Diego Rojo",
"Houda Lamqaddam",
"Lucija Gosak",
"Katrien Verbert"
] |
2024-06-26T18:48:50Z
|
2024-06-26T18:48:50Z
|
2406.08654
|
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks:
Margin Improvement and Fast Optimization
|
The typical training of neural networks using large stepsize gradient descent (GD) under the logistic loss often involves two distinct phases, where the empirical risk oscillates in the first phase but decreases monotonically in the second phase. We investigate this phenomenon in two-layer networks that satisfy a near-homogeneity condition. We show that the second phase begins once the empirical risk falls below a certain threshold, dependent on the stepsize. Additionally, we show that the normalized margin grows nearly monotonically in the second phase, demonstrating an implicit bias of GD in training non-homogeneous predictors. If the dataset is linearly separable and the derivative of the activation function is bounded away from zero, we show that the average empirical risk decreases, implying that the first phase must stop in finite steps. Finally, we demonstrate that by choosing a suitably large stepsize, GD that undergoes this phase transition is more efficient than GD that monotonically decreases the risk. Our analysis applies to networks of any width, beyond the well-known neural tangent kernel and mean-field regimes.
|
http://arxiv.org/pdf/2406.08654v2
|
[
"Yuhang Cai",
"Jingfeng Wu",
"Song Mei",
"Michael Lindsey",
"Peter L. Bartlett"
] |
2024-06-26T18:40:57Z
|
2024-06-12T21:33:22Z
|
2406.18679
|
Speakers Unembedded: Embedding-free Approach to Long-form Neural
Diarization
|
End-to-end neural diarization (EEND) models offer significant improvements over traditional embedding-based Speaker Diarization (SD) approaches but falls short on generalizing to long-form audio with large number of speakers. EEND-vector-clustering method mitigates this by combining local EEND with global clustering of speaker embeddings from local windows, but this requires an additional speaker embedding framework alongside the EEND module. In this paper, we propose a novel framework applying EEND both locally and globally for long-form audio without separate speaker embeddings. This approach achieves significant relative DER reduction of 13% and 10% over the conventional 1-pass EEND on Callhome American English and RT03-CTS datasets respectively and marginal improvements over EEND-vector-clustering without the need for additional speaker embeddings. Furthermore, we discuss the computational complexity of our proposed framework and explore strategies for reducing processing times.
|
http://arxiv.org/pdf/2406.18679v1
|
[
"Xiang Li",
"Vivek Govindan",
"Rohit Paturi",
"Sundararajan Srinivasan"
] |
2024-06-26T18:32:16Z
|
2024-06-26T18:32:16Z
|
2406.18678
|
Few-shot Personalization of LLMs with Mis-aligned Responses
|
As the diversity of users increases, the capability of providing personalized responses by large language models (LLMs) has become increasingly important. Existing approaches have only limited successes in LLM personalization, due to the absence of personalized learning or the reliance on shared personal data. This paper proposes a new approach for a few-shot personalization of LLMs with their mis-aligned responses (Fermi). Our key idea is to learn a set of personalized prompts for each user by progressively improving the prompts using LLMs, based on user profile (e.g., demographic information) and a few examples of previous opinions. During an iterative process of prompt improvement, we incorporate the contexts of mis-aligned responses by LLMs, which are especially crucial for the effective personalization of LLMs. In addition, we develop an effective inference method to further leverage the context of the test query and the personalized prompts. Our experimental results demonstrate that Fermi significantly improves performance across various benchmarks, compared to the best-performing baselines.
|
http://arxiv.org/pdf/2406.18678v1
|
[
"Jaehyung Kim",
"Yiming Yang"
] |
2024-06-26T18:29:12Z
|
2024-06-26T18:29:12Z
|
2406.18676
|
Understand What LLM Needs: Dual Preference Alignment for
Retrieval-Augmented Generation
|
Retrieval-augmented generation (RAG) has demonstrated effectiveness in mitigating the hallucination problem of large language models (LLMs). However, the difficulty of aligning the retriever with the diverse LLMs' knowledge preferences inevitably poses an inevitable challenge in developing a reliable RAG system. To address this issue, we propose DPA-RAG, a universal framework designed to align diverse knowledge preferences within RAG systems. Specifically, we initially introduce a preference knowledge construction pipline and incorporate five novel query augmentation strategies to alleviate preference data scarcity. Based on preference data, DPA-RAG accomplishes both external and internal preference alignment: 1) It jointly integrate pair-wise, point-wise, and contrastive preference alignment abilities into the reranker, achieving external preference alignment among RAG components. 2) It further introduces a pre-aligned stage before vanilla Supervised Fine-tuning (SFT), enabling LLMs to implicitly capture knowledge aligned with their reasoning preferences, achieving LLMs' internal alignment. Experimental results across four knowledge-intensive QA datasets demonstrate that DPA-RAG outperforms all baselines and seamlessly integrates both black-box and open-sourced LLM readers. Further qualitative analysis and discussions also provide empirical guidance for achieving reliable RAG systems. Our code is publicly available at https://github.com/dongguanting/DPA-RAG.
|
http://arxiv.org/pdf/2406.18676v1
|
[
"Guanting Dong",
"Yutao Zhu",
"Chenghao Zhang",
"Zechen Wang",
"Zhicheng Dou",
"Ji-Rong Wen"
] |
2024-06-26T18:26:53Z
|
2024-06-26T18:26:53Z
|
2406.18672
|
A simple and improved algorithm for noisy, convex, zeroth-order
optimisation
|
In this paper, we study the problem of noisy, convex, zeroth order optimisation of a function $f$ over a bounded convex set $bar{mathcal X}subset mathbb{R}^d$. Given a budget $n$ of noisy queries to the function $f$ that can be allocated sequentially and adaptively, our aim is to construct an algorithm that returns a point $hat xin bar{mathcal X}$ such that $f(hat x)$ is as small as possible. We provide a conceptually simple method inspired by the textbook center of gravity method, but adapted to the noisy and zeroth order setting. We prove that this method is such that the $f(hat x) - min_{xin bar{mathcal X}} f(x)$ is of smaller order than $d^2/sqrt{n}$ up to poly-logarithmic terms. We slightly improve upon existing literature, where to the best of our knowledge the best known rate is in [Lattimore, 2024] is of order $d^{2.5}/sqrt{n}$, albeit for a more challenging problem. Our main contribution is however conceptual, as we believe that our algorithm and its analysis bring novel ideas and are significantly simpler than existing approaches.
|
http://arxiv.org/pdf/2406.18672v1
|
[
"Alexandra Carpentier"
] |
2024-06-26T18:19:10Z
|
2024-06-26T18:19:10Z
|
2406.18671
|
A Zero Auxiliary Knowledge Membership Inference Attack on Aggregate
Location Data
|
Location data is frequently collected from populations and shared in aggregate form to guide policy and decision making. However, the prevalence of aggregated data also raises the privacy concern of membership inference attacks (MIAs). MIAs infer whether an individual's data contributed to the aggregate release. Although effective MIAs have been developed for aggregate location data, these require access to an extensive auxiliary dataset of individual traces over the same locations, which are collected from a similar population. This assumption is often impractical given common privacy practices surrounding location data. To measure the risk of an MIA performed by a realistic adversary, we develop the first Zero Auxiliary Knowledge (ZK) MIA on aggregate location data, which eliminates the need for an auxiliary dataset of real individual traces. Instead, we develop a novel synthetic approach, such that suitable synthetic traces are generated from the released aggregate. We also develop methods to correct for bias and noise, to show that our synthetic-based attack is still applicable when privacy mechanisms are applied prior to release. Using two large-scale location datasets, we demonstrate that our ZK MIA matches the state-of-the-art Knock-Knock (KK) MIA across a wide range of settings, including popular implementations of differential privacy (DP) and suppression of small counts. Furthermore, we show that ZK MIA remains highly effective even when the adversary only knows a small fraction (10%) of their target's location history. This demonstrates that effective MIAs can be performed by realistic adversaries, highlighting the need for strong DP protection.
|
http://arxiv.org/pdf/2406.18671v1
|
[
"Vincent Guan",
"Florent Guépin",
"Ana-Maria Cretu",
"Yves-Alexandre de Montjoye"
] |
2024-06-26T18:14:36Z
|
2024-06-26T18:14:36Z
|
2406.18651
|
Contraction of Private Quantum Channels and Private Quantum Hypothesis
Testing
|
A quantum generalized divergence by definition satisfies the data-processing inequality; as such, the relative decrease in such a divergence under the action of a quantum channel is at most one. This relative decrease is formally known as the contraction coefficient of the channel and the divergence. Interestingly, there exist combinations of channels and divergences for which the contraction coefficient is strictly less than one. Furthermore, understanding the contraction coefficient is fundamental for the study of statistical tasks under privacy constraints. To this end, here we establish upper bounds on contraction coefficients for the hockey-stick divergence under privacy constraints, where privacy is quantified with respect to the quantum local differential privacy (QLDP) framework, and we fully characterize the contraction coefficient for the trace distance under privacy constraints. With the machinery developed, we also determine an upper bound on the contraction of both the Bures distance and quantum relative entropy relative to the normalized trace distance, under QLDP constraints. Next, we apply our findings to establish bounds on the sample complexity of quantum hypothesis testing under privacy constraints. Furthermore, we study various scenarios in which the sample complexity bounds are tight, while providing order-optimal quantum channels that achieve those bounds. Lastly, we show how private quantum channels provide fairness and Holevo information stability in quantum learning settings.
|
http://arxiv.org/pdf/2406.18651v1
|
[
"Theshani Nuradha",
"Mark M. Wilde"
] |
2024-06-26T18:00:03Z
|
2024-06-26T18:00:03Z
|
2406.18630
|
Improving Hyperparameter Optimization with Checkpointed Model Weights
|
When training deep learning models, the performance depends largely on the selected hyperparameters. However, hyperparameter optimization (HPO) is often one of the most expensive parts of model design. Classical HPO methods treat this as a black-box optimization problem. However, gray-box HPO methods, which incorporate more information about the setup, have emerged as a promising direction for more efficient optimization. For example, using intermediate loss evaluations to terminate bad selections. In this work, we propose an HPO method for neural networks using logged checkpoints of the trained weights to guide future hyperparameter selections. Our method, Forecasting Model Search (FMS), embeds weights into a Gaussian process deep kernel surrogate model, using a permutation-invariant graph metanetwork to be data-efficient with the logged network weights. To facilitate reproducibility and further research, we open-source our code at https://github.com/NVlabs/forecasting-model-search.
|
http://arxiv.org/pdf/2406.18630v1
|
[
"Nikhil Mehta",
"Jonathan Lorraine",
"Steve Masson",
"Ramanathan Arunachalam",
"Zaid Pervaiz Bhat",
"James Lucas",
"Arun George Zachariah"
] |
2024-06-26T17:59:54Z
|
2024-06-26T17:59:54Z
|
2406.07544
|
Situational Awareness Matters in 3D Vision Language Reasoning
|
Being able to carry out complicated vision language reasoning tasks in 3D space represents a significant milestone in developing household robots and human-centered embodied AI. In this work, we demonstrate that a critical and distinct challenge in 3D vision language reasoning is situational awareness, which incorporates two key components: (1) The autonomous agent grounds its self-location based on a language prompt. (2) The agent answers open-ended questions from the perspective of its calculated position. To address this challenge, we introduce SIG3D, an end-to-end Situation-Grounded model for 3D vision language reasoning. We tokenize the 3D scene into sparse voxel representation and propose a language-grounded situation estimator, followed by a situated question answering module. Experiments on the SQA3D and ScanQA datasets show that SIG3D outperforms state-of-the-art models in situation estimation and question answering by a large margin (e.g., an enhancement of over 30% on situation estimation accuracy). Subsequent analysis corroborates our architectural design choices, explores the distinct functions of visual and textual tokens, and highlights the importance of situational awareness in the domain of 3D question answering.
|
http://arxiv.org/pdf/2406.07544v2
|
[
"Yunze Man",
"Liang-Yan Gui",
"Yu-Xiong Wang"
] |
2024-06-26T17:59:50Z
|
2024-06-11T17:59:45Z
|
2306.13928
|
On Convex Data-Driven Inverse Optimal Control for Nonlinear,
Non-stationary and Stochastic Systems
|
This paper is concerned with a finite-horizon inverse control problem, which has the goal of reconstructing, from observations, the possibly non-convex and non-stationary cost driving the actions of an agent. In this context, we present a result enabling cost reconstruction by solving an optimization problem that is convex even when the agent cost is not and when the underlying dynamics is nonlinear, non-stationary and stochastic. To obtain this result, we also study a finite-horizon forward control problem that has randomized policies as decision variables. We turn our findings into algorithmic procedures and show the effectiveness of our approach via in-silico and hardware validations. All experiments confirm the effectiveness of our approach.
|
http://arxiv.org/pdf/2306.13928v2
|
[
"Emiland Garrabe",
"Hozefa Jesawada",
"Carmen Del Vecchio",
"Giovanni Russo"
] |
2024-06-26T17:59:37Z
|
2023-06-24T10:25:53Z
|
2406.18534
|
Towards Compositionality in Concept Learning
|
Concept-based interpretability methods offer a lens into the internals of foundation models by decomposing their embeddings into high-level concepts. These concept representations are most useful when they are compositional, meaning that the individual concepts compose to explain the full sample. We show that existing unsupervised concept extraction methods find concepts which are not compositional. To automatically discover compositional concept representations, we identify two salient properties of such representations, and propose Compositional Concept Extraction (CCE) for finding concepts which obey these properties. We evaluate CCE on five different datasets over image and text data. Our evaluation shows that CCE finds more compositional concept representations than baselines and yields better accuracy on four downstream classification tasks. Code and data are available at https://github.com/adaminsky/compositional_concepts .
|
http://arxiv.org/pdf/2406.18534v1
|
[
"Adam Stein",
"Aaditya Naik",
"Yinjun Wu",
"Mayur Naik",
"Eric Wong"
] |
2024-06-26T17:59:30Z
|
2024-06-26T17:59:30Z
|
2406.18532
|
Symbolic Learning Enables Self-Evolving Agents
|
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents", which are complex large language models (LLMs) pipelines involving both prompting techniques and tool usage methods. While language agents have demonstrated impressive capabilities for many real-world tasks, a fundamental limitation of current language agents research is that they are model-centric, or engineering-centric. That's to say, the progress on prompts, tools, and pipelines of language agents requires substantial manual engineering efforts from human experts rather than automatically learning from data. We believe the transition from model-centric, or engineering-centric, to data-centric, i.e., the ability of language agents to autonomously learn and evolve in environments, is the key for them to possibly achieve AGI. In this work, we introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own in a data-centric way using symbolic optimizers. Specifically, we consider agents as symbolic networks where learnable weights are defined by prompts, tools, and the way they are stacked together. Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning: back-propagation and gradient descent. Instead of dealing with numeric weights, agent symbolic learning works with natural language simulacrums of weights, loss, and gradients. We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks and show that agent symbolic learning enables language agents to update themselves after being created and deployed in the wild, resulting in "self-evolving agents".
|
http://arxiv.org/pdf/2406.18532v1
|
[
"Wangchunshu Zhou",
"Yixin Ou",
"Shengwei Ding",
"Long Li",
"Jialong Wu",
"Tiannan Wang",
"Jiamin Chen",
"Shuai Wang",
"Xiaohua Xu",
"Ningyu Zhang",
"Huajun Chen",
"Yuchen Eleanor Jiang"
] |
2024-06-26T17:59:18Z
|
2024-06-26T17:59:18Z
|
2406.18529
|
Confident Natural Policy Gradient for Local Planning in
$q_π$-realizable Constrained MDPs
|
The constrained Markov decision process (CMDP) framework emerges as an important reinforcement learning approach for imposing safety or other critical objectives while maximizing cumulative reward. However, the current understanding of how to learn efficiently in a CMDP environment with a potentially infinite number of states remains under investigation, particularly when function approximation is applied to the value functions. In this paper, we address the learning problem given linear function approximation with $q_{pi}$-realizability, where the value functions of all policies are linearly representable with a known feature map, a setting known to be more general and challenging than other linear settings. Utilizing a local-access model, we propose a novel primal-dual algorithm that, after $tilde{O}(text{poly}(d) epsilon^{-3})$ queries, outputs with high probability a policy that strictly satisfies the constraints while nearly optimizing the value with respect to a reward function. Here, $d$ is the feature dimension and $epsilon > 0$ is a given error. The algorithm relies on a carefully crafted off-policy evaluation procedure to evaluate the policy using historical data, which informs policy updates through policy gradients and conserves samples. To our knowledge, this is the first result achieving polynomial sample complexity for CMDP in the $q_{pi}$-realizable setting.
|
http://arxiv.org/pdf/2406.18529v1
|
[
"Tian Tian",
"Lin F. Yang",
"Csaba Szepesvári"
] |
2024-06-26T17:57:13Z
|
2024-06-26T17:57:13Z
|
2406.18518
|
APIGen: Automated Pipeline for Generating Verifiable and Diverse
Function-Calling Datasets
|
The advancement of function-calling agent models requires diverse, reliable, and high-quality datasets. This paper presents APIGen, an automated data generation pipeline designed to synthesize verifiable high-quality datasets for function-calling applications. We leverage APIGen and collect 3,673 executable APIs across 21 different categories to generate diverse function-calling datasets in a scalable and structured manner. Each data in our dataset is verified through three hierarchical stages: format checking, actual function executions, and semantic verification, ensuring its reliability and correctness. We demonstrate that models trained with our curated datasets, even with only 7B parameters, can achieve state-of-the-art performance on the Berkeley Function-Calling Benchmark, outperforming multiple GPT-4 models. Moreover, our 1B model achieves exceptional performance, surpassing GPT-3.5-Turbo and Claude-3 Haiku. We release a dataset containing 60,000 high-quality entries, aiming to advance the field of function-calling agent domains. The dataset is available on Huggingface: https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k and the project homepage: https://apigen-pipeline.github.io/
|
http://arxiv.org/pdf/2406.18518v1
|
[
"Zuxin Liu",
"Thai Hoang",
"Jianguo Zhang",
"Ming Zhu",
"Tian Lan",
"Shirley Kokane",
"Juntao Tan",
"Weiran Yao",
"Zhiwei Liu",
"Yihao Feng",
"Rithesh Murthy",
"Liangwei Yang",
"Silvio Savarese",
"Juan Carlos Niebles",
"Huan Wang",
"Shelby Heinecke",
"Caiming Xiong"
] |
2024-06-26T17:49:11Z
|
2024-06-26T17:49:11Z
|
2406.18629
|
Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of
LLMs
|
Mathematical reasoning presents a significant challenge for Large Language Models (LLMs) due to the extensive and precise chain of reasoning required for accuracy. Ensuring the correctness of each reasoning step is critical. To address this, we aim to enhance the robustness and factuality of LLMs by learning from human feedback. However, Direct Preference Optimization (DPO) has shown limited benefits for long-chain mathematical reasoning, as models employing DPO struggle to identify detailed errors in incorrect answers. This limitation stems from a lack of fine-grained process supervision. We propose a simple, effective, and data-efficient method called Step-DPO, which treats individual reasoning steps as units for preference optimization rather than evaluating answers holistically. Additionally, we have developed a data construction pipeline for Step-DPO, enabling the creation of a high-quality dataset containing 10K step-wise preference pairs. We also observe that in DPO, self-generated data is more effective than data generated by humans or GPT-4, due to the latter's out-of-distribution nature. Our findings demonstrate that as few as 10K preference data pairs and fewer than 500 Step-DPO training steps can yield a nearly 3% gain in accuracy on MATH for models with over 70B parameters. Notably, Step-DPO, when applied to Qwen2-72B-Instruct, achieves scores of 70.8% and 94.0% on the test sets of MATH and GSM8K, respectively, surpassing a series of closed-source models, including GPT-4-1106, Claude-3-Opus, and Gemini-1.5-Pro. Our code, data, and models are available at https://github.com/dvlab-research/Step-DPO.
|
http://arxiv.org/pdf/2406.18629v1
|
[
"Xin Lai",
"Zhuotao Tian",
"Yukang Chen",
"Senqiao Yang",
"Xiangru Peng",
"Jiaya Jia"
] |
2024-06-26T17:43:06Z
|
2024-06-26T17:43:06Z
|
2311.01248
|
Multimodal and Force-Matched Imitation Learning with a See-Through
Visuotactile Sensor
|
Contact-rich tasks continue to present a variety of challenges for robotic manipulation. In this work, we leverage a multimodal visuotactile sensor within the framework of imitation learning (IL) to perform contact rich tasks that involve relative motion (slipping/sliding) between the end-effector and object. We introduce two algorithmic contributions, tactile force matching and learned mode switching, as complimentary methods for improving IL. Tactile force matching enhances kinesthetic teaching by reading approximate forces during the demonstration and generating an adapted robot trajectory that recreates the recorded forces. Learned mode switching uses IL to couple visual and tactile sensor modes with the learned motion policy, simplifying the transition from reaching to contacting. We perform robotic manipulation experiments on four door opening tasks with a variety of observation and method configurations to study the utility of our proposed improvements and multimodal visuotactile sensing. Our results show that the inclusion of force matching raises average policy success rates by 62.5%, visuotactile mode switching by 30.3%, and visuotactile data as a policy input by 42.5%, emphasizing the value of see-through tactile sensing for IL, both for data collection to allow force matching, and for policy execution to allow accurate task feedback.
|
http://arxiv.org/pdf/2311.01248v3
|
[
"Trevor Ablett",
"Oliver Limoyo",
"Adam Sigal",
"Affan Jilani",
"Jonathan Kelly",
"Kaleem Siddiqi",
"Francois Hogan",
"Gregory Dudek"
] |
2024-06-26T17:40:14Z
|
2023-11-02T14:02:42Z
|
2407.01603
|
A Review of Large Language Models and Autonomous Agents in Chemistry
|
Large language models (LLMs) are emerging as a powerful tool in chemistry across multiple domains. In chemistry, LLMs are able to accurately predict properties, design new molecules, optimize synthesis pathways, and accelerate drug and material discovery. A core emerging idea is combining LLMs with chemistry-specific tools like synthesis planners and databases, leading to so-called "agents." This review covers LLMs' recent history, current capabilities, design, challenges specific to chemistry, and future directions. Particular attention is given to agents and their emergence as a cross-chemistry paradigm. Agents have proven effective in diverse domains of chemistry, but challenges remain. It is unclear if creating domain-specific versus generalist agents and developing autonomous pipelines versus "co-pilot" systems will accelerate chemistry. An emerging direction is the development of multi-agent systems using a human-in-the-loop approach. Due to the incredibly fast development of this field, a repository has been built to keep track of the latest studies: https://github.com/ur-whitelab/LLMs-in-science.
|
http://arxiv.org/pdf/2407.01603v1
|
[
"Mayk Caldas Ramos",
"Christopher J. Collison",
"Andrew D. White"
] |
2024-06-26T17:33:21Z
|
2024-06-26T17:33:21Z
|
2404.15778
|
BASS: Batched Attention-optimized Speculative Sampling
|
Speculative decoding has emerged as a powerful method to improve latency and throughput in hosting large language models. However, most existing implementations focus on generating a single sequence. Real-world generative AI applications often require multiple responses and how to perform speculative decoding in a batched setting while preserving its latency benefits poses non-trivial challenges. This paper describes a system of batched speculative decoding that sets a new state of the art in multi-sequence generation latency and that demonstrates superior GPU utilization as well as quality of generations within a time budget. For example, for a 7.8B-size model on a single A100 GPU and with a batch size of 8, each sequence is generated at an average speed of 5.8ms per token, the overall throughput being 1.1K tokens per second. These results represent state-of-the-art latency and a 2.15X speed-up over optimized regular decoding. Within a time budget that regular decoding does not finish, our system is able to generate sequences with HumanEval Pass@First of 43% and Pass@All of 61%, far exceeding what's feasible with single-sequence speculative decoding. Our peak GPU utilization during decoding reaches as high as 15.8%, more than 3X the highest of that of regular decoding and around 10X of single-sequence speculative decoding.
|
http://arxiv.org/pdf/2404.15778v2
|
[
"Haifeng Qian",
"Sujan Kumar Gonugondla",
"Sungsoo Ha",
"Mingyue Shang",
"Sanjay Krishna Gouda",
"Ramesh Nallapati",
"Sudipta Sengupta",
"Xiaofei Ma",
"Anoop Deoras"
] |
2024-06-26T17:29:46Z
|
2024-04-24T09:57:11Z
|
2406.18505
|
Mental Modeling of Reinforcement Learning Agents by Language Models
|
Can emergent language models faithfully model the intelligence of decision-making agents? Though modern language models exhibit already some reasoning ability, and theoretically can potentially express any probable distribution over tokens, it remains underexplored how the world knowledge these pretrained models have memorized can be utilized to comprehend an agent's behaviour in the physical world. This study empirically examines, for the first time, how well large language models (LLMs) can build a mental model of agents, termed agent mental modelling, by reasoning about an agent's behaviour and its effect on states from agent interaction history. This research may unveil the potential of leveraging LLMs for elucidating RL agent behaviour, addressing a key challenge in eXplainable reinforcement learning (XRL). To this end, we propose specific evaluation metrics and test them on selected RL task datasets of varying complexity, reporting findings on agent mental model establishment. Our results disclose that LLMs are not yet capable of fully mental modelling agents through inference alone without further innovations. This work thus provides new insights into the capabilities and limitations of modern LLMs.
|
http://arxiv.org/pdf/2406.18505v1
|
[
"Wenhao Lu",
"Xufeng Zhao",
"Josua Spisak",
"Jae Hee Lee",
"Stefan Wermter"
] |
2024-06-26T17:14:45Z
|
2024-06-26T17:14:45Z
|
2310.16792
|
Learning Generalizable Program and Architecture Representations for
Performance Modeling
|
Performance modeling is an essential tool in many areas, including performance characterization/optimization, design space exploration, and resource allocation problems, to name a few. However, existing performance modeling approaches have limitations, such as high computational cost for discrete-event simulators, narrow flexibility of hardware emulators, or restricted accuracy/generality of analytical/data-driven models. To address these limitations, this paper proposes PerfVec, a novel deep learning-based performance modeling framework that learns high-dimensional and independent/orthogonal program and microarchitecture representations. Once learned, a program representation can be used to predict its performance on any microarchitecture, and likewise, a microarchitecture representation can be applied in the performance prediction of any program. Additionally, PerfVec yields a foundation model that captures the performance essence of instructions, which can be directly used by developers in numerous performance modeling related tasks without incurring its training cost. The evaluation demonstrates that PerfVec is more general and efficient than previous approaches.
|
http://arxiv.org/pdf/2310.16792v2
|
[
"Lingda Li",
"Thomas Flynn",
"Adolfy Hoisie"
] |
2024-06-26T17:12:21Z
|
2023-10-25T17:24:01Z
|
2405.00592
|
Scaling and renormalization in high-dimensional regression
|
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models using the basic tools of random matrix theory and free probability. We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning. Analytic formulas for the training and generalization errors are obtained in a few lines of algebra directly from the properties of the $S$-transform of free probability. This allows for a straightforward identification of the sources of power-law scaling in model performance. We compute the generalization error of a broad class of random feature models. We find that in all models, the $S$-transform corresponds to the train-test generalization gap, and yields an analogue of the generalized-cross-validation estimator. Using these techniques, we derive fine-grained bias-variance decompositions for a very general class of random feature models with structured covariates. These novel results allow us to discover a scaling regime for random feature models where the variance due to the features limits performance in the overparameterized setting. We also demonstrate how anisotropic weight structure in random feature models can limit performance and lead to nontrivial exponents for finite-width corrections in the overparameterized setting. Our results extend and provide a unifying perspective on earlier models of neural scaling laws.
|
http://arxiv.org/pdf/2405.00592v3
|
[
"Alexander Atanasov",
"Jacob A. Zavatone-Veth",
"Cengiz Pehlevan"
] |
2024-06-26T16:56:06Z
|
2024-05-01T15:59:00Z
|
2406.18491
|
Enhancing Federated Learning with Adaptive Differential Privacy and
Priority-Based Aggregation
|
Federated learning (FL), a novel branch of distributed machine learning (ML), develops global models through a private procedure without direct access to local datasets. However, it is still possible to access the model updates (gradient updates of deep neural networks) transferred between clients and servers, potentially revealing sensitive local information to adversaries using model inversion attacks. Differential privacy (DP) offers a promising approach to addressing this issue by adding noise to the parameters. On the other hand, heterogeneities in data structure, storage, communication, and computational capabilities of devices can cause convergence problems and delays in developing the global model. A personalized weighted averaging of local parameters based on the resources of each device can yield a better aggregated model in each round. In this paper, to efficiently preserve privacy, we propose a personalized DP framework that injects noise based on clients' relative impact factors and aggregates parameters while considering heterogeneities and adjusting properties. To fulfill the DP requirements, we first analyze the convergence boundary of the FL algorithm when impact factors are personalized and fixed throughout the learning process. We then further study the convergence property considering time-varying (adaptive) impact factors.
|
http://arxiv.org/pdf/2406.18491v1
|
[
"Mahtab Talaei",
"Iman Izadi"
] |
2024-06-26T16:55:07Z
|
2024-06-26T16:55:07Z
|
2402.08609
|
Mixtures of Experts Unlock Parameter Scaling for Deep RL
|
The recent rapid progress in (self) supervised learning models is in large part predicted by empirical scaling laws: a model's performance scales proportionally to its size. Analogous scaling laws remain elusive for reinforcement learning domains, however, where increasing the parameter count of a model often hurts its final performance. In this paper, we demonstrate that incorporating Mixture-of-Expert (MoE) modules, and in particular Soft MoEs (Puigcerver et al., 2023), into value-based networks results in more parameter-scalable models, evidenced by substantial performance increases across a variety of training regimes and model sizes. This work thus provides strong empirical evidence towards developing scaling laws for reinforcement learning.
|
http://arxiv.org/pdf/2402.08609v3
|
[
"Johan Obando-Ceron",
"Ghada Sokar",
"Timon Willi",
"Clare Lyle",
"Jesse Farebrother",
"Jakob Foerster",
"Gintare Karolina Dziugaite",
"Doina Precup",
"Pablo Samuel Castro"
] |
2024-06-26T16:50:01Z
|
2024-02-13T17:18:56Z
|
2404.08722
|
VADA: a Data-Driven Simulator for Nanopore Sequencing
|
Nanopore sequencing offers the ability for real-time analysis of long DNA sequences at a low cost, enabling new applications such as early detection of cancer. Due to the complex nature of nanopore measurements and the high cost of obtaining ground truth datasets, there is a need for nanopore simulators. Existing simulators rely on handcrafted rules and parameters and do not learn an internal representation that would allow for analysing underlying biological factors of interest. Instead, we propose VADA, a purely data-driven method for simulating nanopores based on an autoregressive latent variable model. We embed subsequences of DNA and introduce a conditional prior to address the challenge of a collapsing conditioning. We introduce an auxiliary regressor on the latent variable to encourage our model to learn an informative latent representation. We empirically demonstrate that our model achieves competitive simulation performance on experimental nanopore data. Moreover, we show we have learned an informative latent representation that is predictive of the DNA labels. We hypothesize that other biological factors of interest, beyond the DNA labels, can potentially be extracted from such a learned latent representation.
|
http://arxiv.org/pdf/2404.08722v2
|
[
"Jonas Niederle",
"Simon Koop",
"Marc Pagès-Gallego",
"Vlado Menkovski"
] |
2024-06-26T16:46:19Z
|
2024-04-12T13:24:28Z
|
2402.11039
|
Robustness to Subpopulation Shift with Domain Label Noise via
Regularized Annotation of Domains
|
Existing methods for last layer retraining that aim to optimize worst-group accuracy (WGA) rely heavily on well-annotated groups in the training data. We show, both in theory and practice, that annotation-based data augmentations using either downsampling or upweighting for WGA are susceptible to domain annotation noise, and in high-noise regimes approach the WGA of a model trained with vanilla empirical risk minimization. We introduce Regularized Annotation of Domains (RAD) in order to train robust last layer classifiers without the need for explicit domain annotations. Our results show that RAD is competitive with other recently proposed domain annotation-free techniques. Most importantly, RAD outperforms state-of-the-art annotation-reliant methods even with only 5% noise in the training data for several publicly available datasets.
|
http://arxiv.org/pdf/2402.11039v2
|
[
"Nathan Stromberg",
"Rohan Ayyagari",
"Monica Welfert",
"Sanmi Koyejo",
"Richard Nock",
"Lalitha Sankar"
] |
2024-06-26T16:35:16Z
|
2024-02-16T19:35:42Z
|
2407.06204
|
A Survey on Mixture of Experts
|
Large language models (LLMs) have garnered unprecedented advancements across diverse fields, ranging from natural language processing to computer vision and beyond. The prowess of LLMs is underpinned by their substantial model size, extensive and diverse datasets, and the vast computational power harnessed during training, all of which contribute to the emergent abilities of LLMs (e.g., in-context learning) that are not present in small models. Within this context, the mixture of experts (MoE) has emerged as an effective method for substantially scaling up model capacity with minimal computation overhead, gaining significant attention from academia and industry. Despite its growing prevalence, there lacks a systematic and comprehensive review of the literature on MoE. This survey seeks to bridge that gap, serving as an essential resource for researchers delving into the intricacies of MoE. We first briefly introduce the structure of the MoE layer, followed by proposing a new taxonomy of MoE. Next, we overview the core designs for various MoE models including both algorithmic and systemic aspects, alongside collections of available open-source implementations, hyperparameter configurations and empirical evaluations. Furthermore, we delineate the multifaceted applications of MoE in practice, and outline some potential directions for future research. To facilitate ongoing updates and the sharing of cutting-edge developments in MoE research, we have established a resource repository accessible at https://github.com/withinmiaov/A-Survey-on-Mixture-of-Experts.
|
http://arxiv.org/pdf/2407.06204v1
|
[
"Weilin Cai",
"Juyong Jiang",
"Fan Wang",
"Jing Tang",
"Sunghun Kim",
"Jiayi Huang"
] |
2024-06-26T16:34:33Z
|
2024-06-26T16:34:33Z
|
2305.15598
|
ReLU Neural Networks with Linear Layers are Biased Towards Single- and
Multi-Index Models
|
Neural networks often operate in the overparameterized regime, in which there are far more parameters than training samples, allowing the training data to be fit perfectly. That is, training the network effectively learns an interpolating function, and properties of the interpolant affect predictions the network will make on new samples. This manuscript explores how properties of such functions learned by neural networks of depth greater than two layers. Our framework considers a family of networks of varying depths that all have the same capacity but different representation costs. The representation cost of a function induced by a neural network architecture is the minimum sum of squared weights needed for the network to represent the function; it reflects the function space bias associated with the architecture. Our results show that adding additional linear layers to the input side of a shallow ReLU network yields a representation cost favoring functions with low mixed variation - that is, it has limited variation in directions orthogonal to a low-dimensional subspace and can be well approximated by a single- or multi-index model. Such functions may be represented by the composition of a function with low two-layer representation cost and a low-rank linear operator. Our experiments confirm this behavior in standard network training regimes. They additionally show that linear layers can improve generalization and the learned network is well-aligned with the true latent low-dimensional linear subspace when data is generated using a multi-index model.
|
http://arxiv.org/pdf/2305.15598v3
|
[
"Suzanna Parkinson",
"Greg Ongie",
"Rebecca Willett"
] |
2024-06-26T16:29:56Z
|
2023-05-24T22:10:12Z
|
2405.02466
|
ProFLingo: A Fingerprinting-based Intellectual Property Protection
Scheme for Large Language Models
|
Large language models (LLMs) have attracted significant attention in recent years. Due to their "Large" nature, training LLMs from scratch consumes immense computational resources. Since several major players in the artificial intelligence (AI) field have open-sourced their original LLMs, an increasing number of individual researchers and smaller companies are able to build derivative LLMs based on these open-sourced models at much lower costs. However, this practice opens up possibilities for unauthorized use or reproduction that may not comply with licensing agreements, and fine-tuning can change the model's behavior, thus complicating the determination of model ownership. Current intellectual property (IP) protection schemes for LLMs are either designed for white-box settings or require additional modifications to the original model, which restricts their use in real-world settings. In this paper, we propose ProFLingo, a black-box fingerprinting-based IP protection scheme for LLMs. ProFLingo generates queries that elicit specific responses from an original model, thereby establishing unique fingerprints. Our scheme assesses the effectiveness of these queries on a suspect model to determine whether it has been derived from the original model. ProFLingo offers a non-invasive approach, which neither requires knowledge of the suspect model nor modifications to the base model or its training process. To the best of our knowledge, our method represents the first black-box fingerprinting technique for IP protection for LLMs. Our source code and generated queries are available at: https://github.com/hengvt/ProFLingo.
|
http://arxiv.org/pdf/2405.02466v2
|
[
"Heng Jin",
"Chaoyu Zhang",
"Shanghao Shi",
"Wenjing Lou",
"Y. Thomas Hou"
] |
2024-06-26T16:22:43Z
|
2024-05-03T20:00:40Z
|
2406.18464
|
Bayesian inverse Navier-Stokes problems: joint flow field reconstruction
and parameter learning
|
We formulate and solve a Bayesian inverse Navier-Stokes (N-S) problem that assimilates velocimetry data in order to jointly reconstruct a 3D flow field and learn the unknown N-S parameters, including the boundary position. By hardwiring a generalised N-S problem, and regularising its unknown parameters using Gaussian prior distributions, we learn the most likely parameters in a collapsed search space. The most likely flow field reconstruction is then the N-S solution that corresponds to the learned parameters. We develop the method in the variational setting and use a stabilised Nitsche weak form of the N-S problem that permits the control of all N-S parameters. To regularise the inferred the geometry, we use a viscous signed distance field (vSDF) as an auxiliary variable, which is given as the solution of a viscous Eikonal boundary value problem. We devise an algorithm that solves this inverse problem, and numerically implement it using an adjoint-consistent stabilised cut-cell finite element method. We then use this method to reconstruct magnetic resonance velocimetry (flow-MRI) data of a 3D steady laminar flow through a physical model of an aortic arch for two different Reynolds numbers and signal-to-noise ratio (SNR) levels (low/high). We find that the method can accurately i) reconstruct the low SNR data by filtering out the noise/artefacts and recovering flow features that are obscured by noise, and ii) reproduce the high SNR data without overfitting. Although the framework that we develop applies to 3D steady laminar flows in complex geometries, it readily extends to time-dependent laminar and Reynolds-averaged turbulent flows, as well as non-Newtonian (e.g. viscoelastic) fluids.
|
http://arxiv.org/pdf/2406.18464v1
|
[
"Alexandros Kontogiannis",
"Scott V. Elgersma",
"Andrew J. Sederman",
"Matthew P. Juniper"
] |
2024-06-26T16:16:36Z
|
2024-06-26T16:16:36Z
|
2407.01602
|
Clustering in pure-attention hardmax transformers and its role in
sentiment analysis
|
Transformers are extremely successful machine learning models whose mathematical properties remain poorly understood. Here, we rigorously characterize the behavior of transformers with hardmax self-attention and normalization sublayers as the number of layers tends to infinity. By viewing such transformers as discrete-time dynamical systems describing the evolution of points in a Euclidean space, and thanks to a geometric interpretation of the self-attention mechanism based on hyperplane separation, we show that the transformer inputs asymptotically converge to a clustered equilibrium determined by special points called leaders. We then leverage this theoretical understanding to solve sentiment analysis problems from language processing using a fully interpretable transformer model, which effectively captures `context' by clustering meaningless words around leader words carrying the most meaning. Finally, we outline remaining challenges to bridge the gap between the mathematical analysis of transformers and their real-life implementation.
|
http://arxiv.org/pdf/2407.01602v1
|
[
"Albert Alcalde",
"Giovanni Fantuzzi",
"Enrique Zuazua"
] |
2024-06-26T16:13:35Z
|
2024-06-26T16:13:35Z
|
2407.01464
|
Graph Neural Network as Computationally Efficient Emulator of Ice-sheet
and Sea-level System Model (ISSM)
|
The Ice-sheet and Sea-level System Model (ISSM) provides solutions for Stokes equations relevant to ice sheet dynamics by employing finite element and fine mesh adaption. However, since its finite element method is compatible only with Central Processing Units (CPU), the ISSM has limits on further economizing computational time. Thus, by taking advantage of Graphics Processing Units (GPUs), we design a graph convolutional network (GCN) as a fast emulator for ISSM. The GCN is trained and tested using the 20-year transient ISSM simulations in the Pine Island Glacier (PIG). The GCN reproduces ice thickness and velocity with a correlation coefficient greater than 0.998, outperforming the traditional convolutional neural network (CNN). Additionally, GCN shows 34 times faster computational speed than the CPU-based ISSM modeling. The GPU-based GCN emulator allows us to predict how the PIG will change in the future under different melting rate scenarios with high fidelity and much faster computational time.
|
http://arxiv.org/pdf/2407.01464v1
|
[
"Younghyun Koo",
"Maryam Rahnemoonfar"
] |
2024-06-26T16:13:11Z
|
2024-06-26T16:13:11Z
|
2407.08750
|
ROLCH: Regularized Online Learning for Conditional Heteroskedasticity
|
Large-scale streaming data are common in modern machine learning applications and have led to the development of online learning algorithms. Many fields, such as supply chain management, weather and meteorology, energy markets, and finance, have pivoted towards using probabilistic forecasts, which yields the need not only for accurate learning of the expected value but also for learning the conditional heteroskedasticity. Against this backdrop, we present a methodology for online estimation of regularized linear distributional models for conditional heteroskedasticity. The proposed algorithm is based on a combination of recent developments for the online estimation of LASSO models and the well-known GAMLSS framework. We provide a case study on day-ahead electricity price forecasting, in which we show the competitive performance of the adaptive estimation combined with strongly reduced computational effort. Our algorithms are implemented in a computationally efficient Python package.
|
http://arxiv.org/pdf/2407.08750v1
|
[
"Simon Hirsch",
"Jonathan Berrisch",
"Florian Ziel"
] |
2024-06-26T16:04:49Z
|
2024-06-26T16:04:49Z
|
2406.18451
|
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in
Deep Robust Classifiers
|
Despite extensive research on adversarial training strategies to improve robustness, the decisions of even the most robust deep learning models can still be quite sensitive to imperceptible perturbations, creating serious risks when deploying them for high-stakes real-world applications. While detecting such cases may be critical, evaluating a model's vulnerability at a per-instance level using adversarial attacks is computationally too intensive and unsuitable for real-time deployment scenarios. The input space margin is the exact score to detect non-robust samples and is intractable for deep neural networks. This paper introduces the concept of margin consistency -- a property that links the input space margins and the logit margins in robust models -- for efficient detection of vulnerable samples. First, we establish that margin consistency is a necessary and sufficient condition to use a model's logit margin as a score for identifying non-robust samples. Next, through comprehensive empirical analysis of various robustly trained models on CIFAR10 and CIFAR100 datasets, we show that they indicate strong margin consistency with a strong correlation between their input space margins and the logit margins. Then, we show that we can effectively use the logit margin to confidently detect brittle decisions with such models and accurately estimate robust accuracy on an arbitrarily large test set by estimating the input margins only on a small subset. Finally, we address cases where the model is not sufficiently margin-consistent by learning a pseudo-margin from the feature representation. Our findings highlight the potential of leveraging deep representations to efficiently assess adversarial vulnerability in deployment scenarios.
|
http://arxiv.org/pdf/2406.18451v1
|
[
"Jonas Ngnawé",
"Sabyasachi Sahoo",
"Yann Pequignot",
"Frédéric Precioso",
"Christian Gagné"
] |
2024-06-26T16:00:35Z
|
2024-06-26T16:00:35Z
|
2406.18450
|
Preference Elicitation for Offline Reinforcement Learning
|
Applying reinforcement learning (RL) to real-world problems is often made challenging by the inability to interact with the environment and the difficulty of designing reward functions. Offline RL addresses the first challenge by considering access to an offline dataset of environment interactions labeled by the reward function. In contrast, Preference-based RL does not assume access to the reward function and learns it from preferences, but typically requires an online interaction with the environment. We bridge the gap between these frameworks by exploring efficient methods for acquiring preference feedback in a fully offline setup. We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm, which leverages a learned environment model to elicit preference feedback on simulated rollouts. Drawing on insights from both the offline RL and the preference-based RL literature, our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy. We provide theoretical guarantees regarding the sample complexity of our approach, dependent on how well the offline data covers the optimal policy. Finally, we demonstrate the empirical performance of Sim-OPRL in different environments.
|
http://arxiv.org/pdf/2406.18450v1
|
[
"Alizée Pace",
"Bernhard Schölkopf",
"Gunnar Rätsch",
"Giorgia Ramponi"
] |
2024-06-26T15:59:13Z
|
2024-06-26T15:59:13Z
|
2406.03346
|
Normalizing Flows for Conformal Regression
|
Conformal Prediction (CP) algorithms estimate the uncertainty of a prediction model by calibrating its outputs on labeled data. The same calibration scheme usually applies to any model and data without modifications. The obtained prediction intervals are valid by construction but could be inefficient, i.e. unnecessarily big, if the prediction errors are not uniformly distributed over the input space. We present a general scheme to localize the intervals by training the calibration process. The standard prediction error is replaced by an optimized distance metric that depends explicitly on the object attributes. Learning the optimal metric is equivalent to training a Normalizing Flow that acts on the joint distribution of the errors and the inputs. Unlike the Error Reweighting CP algorithm of Papadopoulos et al. (2008), the framework allows estimating the gap between nominal and empirical conditional validity. The approach is compatible with existing locally-adaptive CP strategies based on re-weighting the calibration samples and applies to any point-prediction model without retraining.
|
http://arxiv.org/pdf/2406.03346v2
|
[
"Nicolo Colombo"
] |
2024-06-26T15:55:02Z
|
2024-06-05T15:04:28Z
|
2312.05818
|
ICTSurF: Implicit Continuous-Time Survival Functions with Neural
Networks
|
Survival analysis is a widely known method for predicting the likelihood of an event over time. The challenge of dealing with censored samples still remains. Traditional methods, such as the Cox Proportional Hazards (CPH) model, hinge on the limitations due to the strong assumptions of proportional hazards and the predetermined relationships between covariates. The rise of models based on deep neural networks (DNNs) has demonstrated enhanced effectiveness in survival analysis. This research introduces the Implicit Continuous-Time Survival Function (ICTSurF), built on a continuous-time survival model, and constructs survival distribution through implicit representation. As a result, our method is capable of accepting inputs in continuous-time space and producing survival probabilities in continuous-time space, independent of neural network architecture. Comparative assessments with existing methods underscore the high competitiveness of our proposed approach. Our implementation of ICTSurF is available at https://github.com/44REAM/ICTSurF.
|
http://arxiv.org/pdf/2312.05818v2
|
[
"Chanon Puttanawarut",
"Panu Looareesuwan",
"Romen Samuel Wabina",
"Prut Saowaprut"
] |
2024-06-26T15:51:44Z
|
2023-12-10T08:29:00Z
|
2406.18445
|
An Autotuning-based Optimization Framework for Mixed-kernel SVM
Classifications in Smart Pixel Datasets and Heterojunction Transistors
|
Support Vector Machine (SVM) is a state-of-the-art classification method widely used in science and engineering due to its high accuracy, its ability to deal with high dimensional data, and its flexibility in modeling diverse sources of data. In this paper, we propose an autotuning-based optimization framework to quantify the ranges of hyperparameters in SVMs to identify their optimal choices, and apply the framework to two SVMs with the mixed-kernel between Sigmoid and Gaussian kernels for smart pixel datasets in high energy physics (HEP) and mixed-kernel heterojunction transistors (MKH). Our experimental results show that the optimal selection of hyperparameters in the SVMs and the kernels greatly varies for different applications and datasets, and choosing their optimal choices is critical for a high classification accuracy of the mixed kernel SVMs. Uninformed choices of hyperparameters C and coef0 in the mixed-kernel SVMs result in severely low accuracy, and the proposed framework effectively quantifies the proper ranges for the hyperparameters in the SVMs to identify their optimal choices to achieve the highest accuracy 94.6% for the HEP application and the highest average accuracy 97.2% with far less tuning time for the MKH application.
|
http://arxiv.org/pdf/2406.18445v1
|
[
"Xingfu Wu",
"Tupendra Oli",
"ustin H. Qian",
"Valerie Taylor",
"Mark C. Hersam",
"Vinod K. Sangwan"
] |
2024-06-26T15:50:13Z
|
2024-06-26T15:50:13Z
|
2406.01389
|
RL in Latent MDPs is Tractable: Online Guarantees via Off-Policy
Evaluation
|
In many real-world decision problems there is partially observed, hidden or latent information that remains fixed throughout an interaction. Such decision problems can be modeled as Latent Markov Decision Processes (LMDPs), where a latent variable is selected at the beginning of an interaction and is not disclosed to the agent. In the last decade, there has been significant progress in solving LMDPs under different structural assumptions. However, for general LMDPs, there is no known learning algorithm that provably matches the existing lower bound (Kwon et al., 2021). We introduce the first sample-efficient algorithm for LMDPs without any additional structural assumptions. Our result builds off a new perspective on the role of off-policy evaluation guarantees and coverage coefficients in LMDPs, a perspective, that has been overlooked in the context of exploration in partially observed environments. Specifically, we establish a novel off-policy evaluation lemma and introduce a new coverage coefficient for LMDPs. Then, we show how these can be used to derive near-optimal guarantees of an optimistic exploration algorithm. These results, we believe, can be valuable for a wide range of interactive learning problems beyond LMDPs, and especially, for partially observed environments.
|
http://arxiv.org/pdf/2406.01389v2
|
[
"Jeongyeol Kwon",
"Shie Mannor",
"Constantine Caramanis",
"Yonathan Efroni"
] |
2024-06-26T15:42:57Z
|
2024-06-03T14:51:27Z
|
2406.17002
|
Benchmarking mortality risk prediction from electrocardiograms
|
Several recent high-impact studies leverage large hospital-owned electrocardiographic (ECG) databases to model and predict patient mortality. MIMIC-IV, released September 2023, is the first comparable public dataset and includes 800,000 ECGs from a U.S. hospital system. Previously, the largest public ECG dataset was Code-15, containing 345,000 ECGs collected during routine care in Brazil. These datasets now provide an excellent resource for a broader audience to explore ECG survival modeling. Here, we benchmark survival model performance on Code-15 and MIMIC-IV with two neural network architectures, compare four deep survival modeling approaches to Cox regressions trained on classifier outputs, and evaluate performance at one to ten years. Our results yield AUROC and concordance scores comparable to past work (circa 0.8) and reasonable AUPRC scores (MIMIC-IV: 0.4-0.5, Code-15: 0.05-0.13) considering the fraction of ECG samples linked to a mortality (MIMIC-IV: 27%, Code-15: 4%). When evaluating models on the opposite dataset, AUROC and concordance values drop by 0.1-0.15, which may be due to cohort differences. All code and results are made public.
|
http://arxiv.org/pdf/2406.17002v2
|
[
"Platon Lukyanenko",
"Joshua Mayourian",
"Mingxuan Liu",
"John K. Triedman",
"Sunil J. Ghelani",
"William G. La Cava"
] |
2024-06-26T15:27:16Z
|
2024-06-24T14:37:17Z
|
2406.18423
|
Graph Neural Networks for Emulation of Finite-Element Ice Dynamics in
Greenland and Antarctic Ice Sheets
|
Although numerical models provide accurate solutions for ice sheet dynamics based on physics laws, they accompany intensified computational demands to solve partial differential equations. In recent years, convolutional neural networks (CNNs) have been widely used as statistical emulators for those numerical models. However, since CNNs operate on regular grids, they cannot represent the refined meshes and computational efficiency of finite-element numerical models. Therefore, instead of CNNs, this study adopts an equivariant graph convolutional network (EGCN) as an emulator for the ice sheet dynamics modeling. EGCN reproduces ice thickness and velocity changes in the Helheim Glacier, Greenland, and Pine Island Glacier, Antarctica, with 260 times and 44 times faster computation time, respectively. Compared to the traditional CNN and graph convolutional network, EGCN shows outstanding accuracy in thickness prediction near fast ice streams by preserving the equivariance to the translation and rotation of graphs.
|
http://arxiv.org/pdf/2406.18423v1
|
[
"Younghyun Koo",
"Maryam Rahnemoonfar"
] |
2024-06-26T15:18:49Z
|
2024-06-26T15:18:49Z
|
2406.18420
|
Mixture of Experts in a Mixture of RL settings
|
Mixtures of Experts (MoEs) have gained prominence in (self-)supervised learning due to their enhanced inference efficiency, adaptability to distributed training, and modularity. Previous research has illustrated that MoEs can significantly boost Deep Reinforcement Learning (DRL) performance by expanding the network's parameter count while reducing dormant neurons, thereby enhancing the model's learning capacity and ability to deal with non-stationarity. In this work, we shed more light on MoEs' ability to deal with non-stationarity and investigate MoEs in DRL settings with "amplified" non-stationarity via multi-task training, providing further evidence that MoEs improve learning capacity. In contrast to previous work, our multi-task results allow us to better understand the underlying causes for the beneficial effect of MoE in DRL training, the impact of the various MoE components, and insights into how best to incorporate them in actor-critic-based DRL networks. Finally, we also confirm results from previous work.
|
http://arxiv.org/pdf/2406.18420v1
|
[
"Timon Willi",
"Johan Obando-Ceron",
"Jakob Foerster",
"Karolina Dziugaite",
"Pablo Samuel Castro"
] |
2024-06-26T15:15:15Z
|
2024-06-26T15:15:15Z
|
2406.18418
|
Differential error feedback for communication-efficient decentralized
learning
|
Communication-constrained algorithms for decentralized learning and optimization rely on local updates coupled with the exchange of compressed signals. In this context, differential quantization is an effective technique to mitigate the negative impact of compression by leveraging correlations between successive iterates. In addition, the use of error feedback, which consists of incorporating the compression error into subsequent steps, is a powerful mechanism to compensate for the bias caused by the compression. Under error feedback, performance guarantees in the literature have so far focused on algorithms employing a fusion center or a special class of contractive compressors that cannot be implemented with a finite number of bits. In this work, we propose a new decentralized communication-efficient learning approach that blends differential quantization with error feedback. The approach is specifically tailored for decentralized learning problems where agents have individual risk functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. We show that, under some general conditions on the compression noise, and for sufficiently small step-sizes $mu$, the resulting communication-efficient strategy is stable both in terms of mean-square error and average bit rate: by reducing $mu$, it is possible to keep the estimation errors small (on the order of $mu$) without increasing indefinitely the bit rate as $murightarrow 0$. The results establish that, in the small step-size regime and with a finite number of bits, it is possible to attain the performance achievable in the absence of compression.
|
http://arxiv.org/pdf/2406.18418v1
|
[
"Roula Nassif",
"Stefan Vlaski",
"Marco Carpentiero",
"Vincenzo Matta",
"Ali H. Sayed"
] |
2024-06-26T15:11:26Z
|
2024-06-26T15:11:26Z
|
2406.18417
|
Towards diffusion models for large-scale sea-ice modelling
|
We make the first steps towards diffusion models for unconditional generation of multivariate and Arctic-wide sea-ice states. While targeting to reduce the computational costs by diffusion in latent space, latent diffusion models also offer the possibility to integrate physical knowledge into the generation process. We tailor latent diffusion models to sea-ice physics with a censored Gaussian distribution in data space to generate data that follows the physical bounds of the modelled variables. Our latent diffusion models reach similar scores as the diffusion model trained in data space, but they smooth the generated fields as caused by the latent mapping. While enforcing physical bounds cannot reduce the smoothing, it improves the representation of the marginal ice zone. Therefore, for large-scale Earth system modelling, latent diffusion models can have many advantages compared to diffusion in data space if the significant barrier of smoothing can be resolved.
|
http://arxiv.org/pdf/2406.18417v1
|
[
"Tobias Sebastian Finn",
"Charlotte Durand",
"Alban Farchi",
"Marc Bocquet",
"Julien Brajard"
] |
2024-06-26T15:11:15Z
|
2024-06-26T15:11:15Z
|
2312.12009
|
Active Preference Inference using Language Models and Probabilistic
Reasoning
|
Actively inferring user preferences, for example by asking good questions, is important for any human-facing decision-making system. Active inference allows such systems to adapt and personalize themselves to nuanced individual preferences. To enable this ability for instruction-tuned large language models (LLMs), one may prompt them to ask users questions to infer their preferences, transforming the language models into more robust, interactive systems. However, out of the box, these models are not efficient at extracting preferences: the questions they generate are not informative, requiring a high number of user interactions and impeding the usability of the downstream system. In this work, we introduce an inference-time algorithm that helps LLMs quickly infer preferences by using more informative questions. Our algorithm uses a probabilistic model whose conditional distributions are defined by prompting an LLM, and returns questions that optimize expected entropy and expected model change. Results in a simplified interactive web shopping setting with real product items show that an LLM equipped with our entropy reduction algorithm outperforms baselines with the same underlying LLM on task performance while using fewer user interactions.
|
http://arxiv.org/pdf/2312.12009v2
|
[
"Wasu Top Piriyakulkij",
"Volodymyr Kuleshov",
"Kevin Ellis"
] |
2024-06-26T15:00:52Z
|
2023-12-19T09:58:54Z
|
2406.15713
|
Efficient Low-rank Identification via Accelerated Iteratively Reweighted
Nuclear Norm Minimization
|
This paper considers the problem of minimizing the sum of a smooth function and the Schatten-$p$ norm of the matrix. Our contribution involves proposing accelerated iteratively reweighted nuclear norm methods designed for solving the nonconvex low-rank minimization problem. Two major novelties characterize our approach. Firstly, the proposed method possesses a rank identification property, enabling the provable identification of the "correct" rank of the stationary point within a finite number of iterations. Secondly, we introduce an adaptive updating strategy for smoothing parameters. This strategy automatically fixes parameters associated with zero singular values as constants upon detecting the "correct" rank while quickly driving the rest of the parameters to zero. This adaptive behavior transforms the algorithm into one that effectively solves smooth problems after a few iterations, setting our work apart from existing iteratively reweighted methods for low-rank optimization. We prove the global convergence of the proposed algorithm, guaranteeing that every limit point of the iterates is a critical point. Furthermore, a local convergence rate analysis is provided under the Kurdyka-{L}ojasiewicz property. We conduct numerical experiments using both synthetic and real data to showcase our algorithm's efficiency and superiority over existing methods.
|
http://arxiv.org/pdf/2406.15713v2
|
[
"Hao Wang",
"Ye Wang",
"Xiangyu Yang"
] |
2024-06-26T15:00:46Z
|
2024-06-22T02:37:13Z
|
2405.02140
|
An Information Theoretic Perspective on Conformal Prediction
|
Conformal Prediction (CP) is a distribution-free uncertainty estimation framework that constructs prediction sets guaranteed to contain the true answer with a user-specified probability. Intuitively, the size of the prediction set encodes a general notion of uncertainty, with larger sets associated with higher degrees of uncertainty. In this work, we leverage information theory to connect conformal prediction to other notions of uncertainty. More precisely, we prove three different ways to upper bound the intrinsic uncertainty, as described by the conditional entropy of the target variable given the inputs, by combining CP with information theoretical inequalities. Moreover, we demonstrate two direct and useful applications of such connection between conformal prediction and information theory: (i) more principled and effective conformal training objectives that generalize previous approaches and enable end-to-end training of machine learning models from scratch, and (ii) a natural mechanism to incorporate side information into conformal prediction. We empirically validate both applications in centralized and federated learning settings, showing our theoretical results translate to lower inefficiency (average prediction set size) for popular CP methods.
|
http://arxiv.org/pdf/2405.02140v2
|
[
"Alvaro H. C. Correia",
"Fabio Valerio Massoli",
"Christos Louizos",
"Arash Behboodi"
] |
2024-06-26T14:58:25Z
|
2024-05-03T14:43:07Z
|
2406.18400
|
Do LLMs dream of elephants (when told not to)? Latent concept
association and associative memory in transformers
|
Large Language Models (LLMs) have the capacity to store and recall facts. Through experimentation with open-source models, we observe that this ability to retrieve facts can be easily manipulated by changing contexts, even without altering their factual meanings. These findings highlight that LLMs might behave like an associative memory model where certain tokens in the contexts serve as clues to retrieving facts. We mathematically explore this property by studying how transformers, the building blocks of LLMs, can complete such memory tasks. We study a simple latent concept association problem with a one-layer transformer and we show theoretically and empirically that the transformer gathers information using self-attention and uses the value matrix for associative memory.
|
http://arxiv.org/pdf/2406.18400v1
|
[
"Yibo Jiang",
"Goutham Rajendran",
"Pradeep Ravikumar",
"Bryon Aragam"
] |
2024-06-26T14:49:54Z
|
2024-06-26T14:49:54Z
|
2406.18627
|
AssertionBench: A Benchmark to Evaluate Large-Language Models for
Assertion Generation
|
Assertions have been the de facto collateral for simulation-based and formal verification of hardware designs for over a decade. The quality of hardware verification, ie, detection and diagnosis of corner-case design bugs, is critically dependent on the quality of the assertions. There has been a considerable amount of research leveraging a blend of data-driven statistical analysis and static analysis to generate high-quality assertions from hardware design source code and design execution trace data. Despite such concerted effort, all prior research struggles to scale to industrial-scale large designs, generates too many low-quality assertions, often fails to capture subtle and non-trivial design functionality, and does not produce any easy-to-comprehend explanations of the generated assertions to understand assertions' suitability to different downstream validation tasks. Recently, with the advent of Large-Language Models (LLMs), there has been a widespread effort to leverage prompt engineering to generate assertions. However, there is little effort to quantitatively establish the effectiveness and suitability of various LLMs for assertion generation. In this paper, we present AssertionBench, a novel benchmark to evaluate LLMs' effectiveness for assertion generation quantitatively. AssertioBench contains 100 curated Verilog hardware designs from OpenCores and formally verified assertions for each design generated from GoldMine and HARM. We use AssertionBench to compare state-of-the-art LLMs to assess their effectiveness in inferring functionally correct assertions for hardware designs. Our experiments demonstrate how LLMs perform relative to each other, the benefits of using more in-context exemplars in generating a higher fraction of functionally correct assertions, and the significant room for improvement for LLM-based assertion generators.
|
http://arxiv.org/pdf/2406.18627v1
|
[
"Vaishnavi Pulavarthi",
"Deeksha Nandal",
"Soham Dan",
"Debjit Pal"
] |
2024-06-26T14:47:28Z
|
2024-06-26T14:47:28Z
|
2402.17775
|
WhaleNet: a Novel Deep Learning Architecture for Marine Mammals
Vocalizations on Watkins Marine Mammal Sound Database
|
Marine mammal communication is a complex field, hindered by the diversity of vocalizations and environmental factors. The Watkins Marine Mammal Sound Database (WMMD) constitutes a comprehensive labeled dataset employed in machine learning applications. Nevertheless, the methodologies for data preparation, preprocessing, and classification documented in the literature exhibit considerable variability and are typically not applied to the dataset in its entirety. This study initially undertakes a concise review of the state-of-the-art benchmarks pertaining to the dataset, with a particular focus on clarifying data preparation and preprocessing techniques. Subsequently, we explore the utilization of the Wavelet Scattering Transform (WST) and Mel spectrogram as preprocessing mechanisms for feature extraction. In this paper, we introduce textbf{WhaleNet} (Wavelet Highly Adaptive Learning Ensemble Network), a sophisticated deep ensemble architecture for the classification of marine mammal vocalizations, leveraging both WST and Mel spectrogram for enhanced feature discrimination. By integrating the insights derived from WST and Mel representations, we achieved an improvement in classification accuracy by $8-10%$ over existing architectures, corresponding to a classification accuracy of $97.61%$.
|
http://arxiv.org/pdf/2402.17775v2
|
[
"Alessandro Licciardi",
"Davide Carbone"
] |
2024-06-26T14:34:13Z
|
2024-02-20T11:36:23Z
|
2210.13382
|
Emergent World Representations: Exploring a Sequence Model Trained on a
Synthetic Task
|
Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.
|
http://arxiv.org/pdf/2210.13382v5
|
[
"Kenneth Li",
"Aspen K. Hopkins",
"David Bau",
"Fernanda Viégas",
"Hanspeter Pfister",
"Martin Wattenberg"
] |
2024-06-26T14:27:49Z
|
2022-10-24T16:29:55Z
|
2402.12235
|
The Fundamental Limits of Least-Privilege Learning
|
The promise of least-privilege learning -- to find feature representations that are useful for a learning task but prevent inference of any sensitive information unrelated to this task -- is highly appealing. However, so far this concept has only been stated informally. It thus remains an open question whether and how we can achieve this goal. In this work, we provide the first formalisation of the least-privilege principle for machine learning and characterise its feasibility. We prove that there is a fundamental trade-off between a representation's utility for a given task and its leakage beyond the intended task: it is not possible to learn representations that have high utility for the intended task but, at the same time prevent inference of any attribute other than the task label itself. This trade-off holds under realistic assumptions on the data distribution and regardless of the technique used to learn the feature mappings that produce these representations. We empirically validate this result for a wide range of learning techniques, model architectures, and datasets.
|
http://arxiv.org/pdf/2402.12235v2
|
[
"Theresa Stadler",
"Bogdan Kulynych",
"Michael C. Gastpar",
"Nicolas Papernot",
"Carmela Troncoso"
] |
2024-06-26T14:18:44Z
|
2024-02-19T15:44:54Z
|
2406.18370
|
Learning pure quantum states (almost) without regret
|
We initiate the study of quantum state tomography with minimal regret. A learner has sequential oracle access to an unknown pure quantum state, and in each round selects a pure probe state. Regret is incurred if the unknown state is measured orthogonal to this probe, and the learner's goal is to minimise the expected cumulative regret over $T$ rounds. The challenge is to find a balance between the most informative measurements and measurements incurring minimal regret. We show that the cumulative regret scales as $Theta(operatorname{polylog} T)$ using a new tomography algorithm based on a median of means least squares estimator. This algorithm employs measurements biased towards the unknown state and produces online estimates that are optimal (up to logarithmic terms) in the number of observed samples.
|
http://arxiv.org/pdf/2406.18370v1
|
[
"Josep Lumbreras",
"Mikhail Terekhov",
"Marco Tomamichel"
] |
2024-06-26T14:13:50Z
|
2024-06-26T14:13:50Z
|
2306.03341
|
Inference-Time Intervention: Eliciting Truthful Answers from a Language
Model
|
We introduce Inference-Time Intervention (ITI), a technique designed to enhance the "truthfulness" of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface.
|
http://arxiv.org/pdf/2306.03341v6
|
[
"Kenneth Li",
"Oam Patel",
"Fernanda Viégas",
"Hanspeter Pfister",
"Martin Wattenberg"
] |
2024-06-26T14:11:53Z
|
2023-06-06T01:26:53Z
|
2405.16265
|
MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time
|
Although Large Language Models (LLMs) achieve remarkable performance across various tasks, they often struggle with complex reasoning tasks, such as answering mathematical questions. Recent efforts to address this issue have primarily focused on leveraging mathematical datasets through supervised fine-tuning or self-improvement techniques. However, these methods often depend on high-quality datasets that are difficult to prepare, or they require substantial computational resources for fine-tuning. Inspired by findings that LLMs know how to produce the right answer but struggle to select the correct reasoning path, we propose a purely inference-based searching method -- MindStar (M*). This method formulates reasoning tasks as searching problems and proposes two search ideas to identify the optimal reasoning paths. We evaluate the M* framework on both the GSM8K and MATH datasets, comparing its performance with existing open and closed-source LLMs. Our results demonstrate that M* significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1, but with substantially reduced model size and computational costs.
|
http://arxiv.org/pdf/2405.16265v4
|
[
"Jikun Kang",
"Xin Zhe Li",
"Xi Chen",
"Amirreza Kazemi",
"Qianyi Sun",
"Boxing Chen",
"Dong Li",
"Xu He",
"Quan He",
"Feng Wen",
"Jianye Hao",
"Jun Yao"
] |
2024-06-26T14:01:15Z
|
2024-05-25T15:07:33Z
|
2406.18354
|
Kolmogorov-Arnold Graph Neural Networks
|
Graph neural networks (GNNs) excel in learning from network-like data but often lack interpretability, making their application challenging in domains requiring transparent decision-making. We propose the Graph Kolmogorov-Arnold Network (GKAN), a novel GNN model leveraging spline-based activation functions on edges to enhance both accuracy and interpretability. Our experiments on five benchmark datasets demonstrate that GKAN outperforms state-of-the-art GNN models in node classification, link prediction, and graph classification tasks. In addition to the improved accuracy, GKAN's design inherently provides clear insights into the model's decision-making process, eliminating the need for post-hoc explainability techniques. This paper discusses the methodology, performance, and interpretability of GKAN, highlighting its potential for applications in domains where interpretability is crucial.
|
http://arxiv.org/pdf/2406.18354v1
|
[
"Gianluca De Carlo",
"Andrea Mastropietro",
"Aris Anagnostopoulos"
] |
2024-06-26T13:54:59Z
|
2024-06-26T13:54:59Z
|
2406.18351
|
Reinforcement Learning with Intrinsically Motivated Feedback Graph for
Lost-sales Inventory Control
|
Reinforcement learning (RL) has proven to be well-performed and general-purpose in the inventory control (IC). However, further improvement of RL algorithms in the IC domain is impeded due to two limitations of online experience. First, online experience is expensive to acquire in real-world applications. With the low sample efficiency nature of RL algorithms, it would take extensive time to train the RL policy to convergence. Second, online experience may not reflect the true demand due to the lost sales phenomenon typical in IC, which makes the learning process more challenging. To address the above challenges, we propose a decision framework that combines reinforcement learning with feedback graph (RLFG) and intrinsically motivated exploration (IME) to boost sample efficiency. In particular, we first take advantage of the inherent properties of lost-sales IC problems and design the feedback graph (FG) specially for lost-sales IC problems to generate abundant side experiences aid RL updates. Then we conduct a rigorous theoretical analysis of how the designed FG reduces the sample complexity of RL methods. Based on the theoretical insights, we design an intrinsic reward to direct the RL agent to explore to the state-action space with more side experiences, further exploiting FG's power. Experimental results demonstrate that our method greatly improves the sample efficiency of applying RL in IC. Our code is available at https://anonymous.4open.science/r/RLIMFG4IC-811D/
|
http://arxiv.org/pdf/2406.18351v1
|
[
"Zifan Liu",
"Xinran Li",
"Shibo Chen",
"Gen Li",
"Jiashuo Jiang",
"Jun Zhang"
] |
2024-06-26T13:52:47Z
|
2024-06-26T13:52:47Z
|
2406.18345
|
EmT: A Novel Transformer for Generalized Cross-subject EEG Emotion
Recognition
|
Integrating prior knowledge of neurophysiology into neural network architecture enhances the performance of emotion decoding. While numerous techniques emphasize learning spatial and short-term temporal patterns, there has been limited emphasis on capturing the vital long-term contextual information associated with emotional cognitive processes. In order to address this discrepancy, we introduce a novel transformer model called emotion transformer (EmT). EmT is designed to excel in both generalized cross-subject EEG emotion classification and regression tasks. In EmT, EEG signals are transformed into a temporal graph format, creating a sequence of EEG feature graphs using a temporal graph construction module (TGC). A novel residual multi-view pyramid GCN module (RMPG) is then proposed to learn dynamic graph representations for each EEG feature graph within the series, and the learned representations of each graph are fused into one token. Furthermore, we design a temporal contextual transformer module (TCT) with two types of token mixers to learn the temporal contextual information. Finally, the task-specific output module (TSO) generates the desired outputs. Experiments on four publicly available datasets show that EmT achieves higher results than the baseline methods for both EEG emotion classification and regression tasks. The code is available at https://github.com/yi-ding-cs/EmT.
|
http://arxiv.org/pdf/2406.18345v1
|
[
"Yi Ding",
"Chengxuan Tong",
"Shuailei Zhang",
"Muyun Jiang",
"Yong Li",
"Kevin Lim Jun Liang",
"Cuntai Guan"
] |
2024-06-26T13:42:11Z
|
2024-06-26T13:42:11Z
|
2306.02411
|
Topological data quality via 0-dimensional persistence matching
|
Data quality is crucial for the successful training, generalization and performance of artificial intelligence models. We propose to measure data quality for supervised learning using topological data analysis techniques. Specifically, we provide a novel topological invariant based on persistence matchings induced by inclusions and using $0$-dimensional persistent homology. We show that such an invariant is stable. We provide an algorithm and relate it to images, kernels, and cokernels of the induced morphisms. Also, we show that the invariant allows us to understand whether the subset "represents well" the clusters from the larger dataset or not, and we also use it to estimate bounds for the Hausdorff distance between the subset and the complete dataset. This approach enables us to explain why the chosen dataset will lead to poor performance.
|
http://arxiv.org/pdf/2306.02411v2
|
[
"Álvaro Torras-Casas",
"Eduardo Paluzo-Hidalgo",
"Rocio Gonzalez-Diaz"
] |
2024-06-26T13:37:58Z
|
2023-06-04T17:08:41Z
|
2311.14533
|
Introducing 3DCNN ResNets for ASD full-body kinematic assessment: a
comparison with hand-crafted features
|
Autism Spectrum Disorder (ASD) is characterized by challenges in social communication and restricted patterns, with motor abnormalities gaining traction for early detection. However, kinematic analysis in ASD is limited, often lacking robust validation and relying on hand-crafted features for single tasks, leading to inconsistencies across studies. End-to-end models have emerged as promising methods to overcome the need for feature engineering. Our aim is to propose a newly adapted 3DCNN ResNet from and compare it to widely used hand-crafted features for motor ASD assessment. Specifically, we developed a virtual reality environment with multiple motor tasks and trained models using both approaches. We prioritized a reliable validation framework with repeated cross-validation. Results show the proposed model achieves a maximum accuracy of 85$pm$3%, outperforming state-of-the-art end-to-end models with short 1-to-3 minute samples. Our comparative analysis with hand-crafted features shows feature-engineered models outperformed our end-to-end model in certain tasks. However, our end-to-end model achieved a higher mean AUC of 0.80$pm$0.03. Additionally, statistical differences were found in model variance, with our end-to-end model providing more consistent results with less variability across all VR tasks, demonstrating domain generalization and reliability. These findings show that end-to-end models enable less variable and context-independent ASD classification without requiring domain knowledge or task specificity. However, they also recognize the effectiveness of hand-crafted features in specific task scenarios.
|
http://arxiv.org/pdf/2311.14533v3
|
[
"Alberto Altozano",
"Maria Eleonora Minissi",
"Mariano Alcañiz",
"Javier Marín-Morales"
] |
2024-06-26T13:29:12Z
|
2023-11-24T14:56:36Z
|
2302.05342
|
Combining Reconstruction and Contrastive Methods for Multimodal
Representations in RL
|
Learning self-supervised representations using reconstruction or contrastive losses improves performance and sample complexity of image-based and multimodal reinforcement learning (RL). Here, different self-supervised loss functions have distinct advantages and limitations depending on the information density of the underlying sensor modality. Reconstruction provides strong learning signals but is susceptible to distractions and spurious information. While contrastive approaches can ignore those, they may fail to capture all relevant details and can lead to representation collapse. For multimodal RL, this suggests that different modalities should be treated differently based on the amount of distractions in the signal. We propose Contrastive Reconstructive Aggregated representation Learning (CoRAL), a unified framework enabling us to choose the most appropriate self-supervised loss for each sensor modality and allowing the representation to better focus on relevant aspects. We evaluate CoRAL's benefits on a wide range of tasks with images containing distractions or occlusions, a new locomotion suite, and a challenging manipulation suite with visually realistic distractions. Our results show that learning a multimodal representation by combining contrastive and reconstruction-based losses can significantly improve performance and solve tasks that are out of reach for more naive representation learning approaches and other recent baselines.
|
http://arxiv.org/pdf/2302.05342v4
|
[
"Philipp Becker",
"Sebastian Mossburger",
"Fabian Otto",
"Gerhard Neumann"
] |
2024-06-26T13:28:35Z
|
2023-02-10T15:57:20Z
|
2406.18334
|
Efficient and Accurate Explanation Estimation with Distribution
Compression
|
Exact computation of various machine learning explanations requires numerous model evaluations and in extreme cases becomes impractical. The computational cost of approximation increases with an ever-increasing size of data and model parameters. Many heuristics have been proposed to approximate post-hoc explanations efficiently. This paper shows that the standard i.i.d. sampling used in a broad spectrum of algorithms for explanation estimation leads to an approximation error worthy of improvement. To this end, we introduce Compress Then Explain (CTE), a new paradigm for more efficient and accurate explanation estimation. CTE uses distribution compression through kernel thinning to obtain a data sample that best approximates the marginal distribution. We show that CTE improves the estimation of removal-based local and global explanations with negligible computational overhead. It often achieves an on-par explanation approximation error using 2-3x less samples, i.e. requiring 2-3x less model evaluations. CTE is a simple, yet powerful, plug-in for any explanation method that now relies on i.i.d. sampling.
|
http://arxiv.org/pdf/2406.18334v1
|
[
"Hubert Baniecki",
"Giuseppe Casalicchio",
"Bernd Bischl",
"Przemyslaw Biecek"
] |
2024-06-26T13:21:24Z
|
2024-06-26T13:21:24Z
|
2406.18330
|
Molecular Diffusion Models with Virtual Receptors
|
Machine learning approaches to Structure-Based Drug Design (SBDD) have proven quite fertile over the last few years. In particular, diffusion-based approaches to SBDD have shown great promise. We present a technique which expands on this diffusion approach in two crucial ways. First, we address the size disparity between the drug molecule and the target/receptor, which makes learning more challenging and inference slower. We do so through the notion of a Virtual Receptor, which is a compressed version of the receptor; it is learned so as to preserve key aspects of the structural information of the original receptor, while respecting the relevant group equivariance. Second, we incorporate a protein language embedding used originally in the context of protein folding. We experimentally demonstrate the contributions of both the virtual receptors and the protein embeddings: in practice, they lead to both better performance, as well as significantly faster computations.
|
http://arxiv.org/pdf/2406.18330v1
|
[
"Matan Halfon",
"Eyal Rozenberg",
"Ehud Rivlin",
"Daniel Freedman"
] |
2024-06-26T13:18:42Z
|
2024-06-26T13:18:42Z
|
2406.18327
|
Multi-modal Evidential Fusion Network for Trusted PET/CT Tumor
Segmentation
|
Accurate segmentation of tumors in PET/CT images is important in computer-aided diagnosis and treatment of cancer. The key issue of such a segmentation problem lies in the effective integration of complementary information from PET and CT images. However, the quality of PET and CT images varies widely in clinical settings, which leads to uncertainty in the modality information extracted by networks. To take the uncertainty into account in multi-modal information fusion, this paper proposes a novel Multi-modal Evidential Fusion Network (MEFN) comprising a Cross-Modal Feature Learning (CFL) module and a Multi-modal Trusted Fusion (MTF) module. The CFL module reduces the domain gap upon modality conversion and highlights common tumor features, thereby alleviating the needs of the segmentation module to handle modality specificity. The MTF module utilizes mutual attention mechanisms and an uncertainty calibrator to fuse modality features based on modality uncertainty and then fuse the segmentation results under the guidance of Dempster-Shafer Theory. Besides, a new uncertainty perceptual loss is introduced to force the model focusing on uncertain features and hence improve its ability to extract trusted modality information. Extensive comparative experiments are conducted on two publicly available PET/CT datasets to evaluate the performance of our proposed method whose results demonstrate that our MEFN significantly outperforms state-of-the-art methods with improvements of 2.15% and 3.23% in DSC scores on the AutoPET dataset and the Hecktor dataset, respectively. More importantly, our model can provide radiologists with credible uncertainty of the segmentation results for their decision in accepting or rejecting the automatic segmentation results, which is particularly important for clinical applications. Our code will be available at https://github.com/QPaws/MEFN.
|
http://arxiv.org/pdf/2406.18327v1
|
[
"Yuxuan Qi",
"Li Lin",
"Jiajun Wang",
"Jingya Zhang",
"Bin Zhang"
] |
2024-06-26T13:14:24Z
|
2024-06-26T13:14:24Z
|
2402.03864
|
The Challenges of the Nonlinear Regime for Physics-Informed Neural
Networks
|
The Neural Tangent Kernel (NTK) viewpoint is widely employed to analyze the training dynamics of overparameterized Physics-Informed Neural Networks (PINNs). However, unlike the case of linear Partial Differential Equations (PDEs), we show how the NTK perspective falls short in the nonlinear scenario. Specifically, we establish that the NTK yields a random matrix at initialization that is not constant during training, contrary to conventional belief. Another significant difference from the linear regime is that, even in the idealistic infinite-width limit, the Hessian does not vanish and hence it cannot be disregarded during training. This motivates the adoption of second-order optimization methods. We explore the convergence guarantees of such methods in both linear and nonlinear cases, addressing challenges such as spectral bias and slow convergence. Every theoretical result is supported by numerical examples with both linear and nonlinear PDEs, and we highlight the benefits of second-order methods in benchmark test cases.
|
http://arxiv.org/pdf/2402.03864v2
|
[
"Andrea Bonfanti",
"Giuseppe Bruno",
"Cristina Cipriani"
] |
2024-06-26T13:05:18Z
|
2024-02-06T10:24:36Z
|
2406.18316
|
Trade-off between Gradient Measurement Efficiency and Expressivity in
Deep Quantum Neural Networks
|
Quantum neural networks (QNNs) require an efficient training algorithm to achieve practical quantum advantages. A promising approach is the use of gradient-based optimization algorithms, where gradients are estimated through quantum measurements. However, it is generally difficult to efficiently measure gradients in QNNs because the quantum state collapses upon measurement. In this work, we prove a general trade-off between gradient measurement efficiency and expressivity in a wide class of deep QNNs, elucidating the theoretical limits and possibilities of efficient gradient estimation. This trade-off implies that a more expressive QNN requires a higher measurement cost in gradient estimation, whereas we can increase gradient measurement efficiency by reducing the QNN expressivity to suit a given task. We further propose a general QNN ansatz called the stabilizer-logical product ansatz (SLPA), which can reach the upper limit of the trade-off inequality by leveraging the symmetric structure of the quantum circuit. In learning an unknown symmetric function, the SLPA drastically reduces the quantum resources required for training while maintaining accuracy and trainability compared to a well-designed symmetric circuit based on the parameter-shift method. Our results not only reveal a theoretical understanding of efficient training in QNNs but also provide a standard and broadly applicable efficient QNN design.
|
http://arxiv.org/pdf/2406.18316v1
|
[
"Koki Chinzei",
"Shinichiro Yamano",
"Quoc Hoan Tran",
"Yasuhiro Endo",
"Hirotaka Oshima"
] |
2024-06-26T12:59:37Z
|
2024-06-26T12:59:37Z
|
2406.18314
|
ContactNet: Geometric-Based Deep Learning Model for Predicting
Protein-Protein Interactions
|
Deep learning approaches achieved significant progress in predicting protein structures. These methods are often applied to protein-protein interactions (PPIs) yet require Multiple Sequence Alignment (MSA) which is unavailable for various interactions, such as antibody-antigen. Computational docking methods are capable of sampling accurate complex models, but also produce thousands of invalid configurations. The design of scoring functions for identifying accurate models is a long-standing challenge. We develop a novel attention-based Graph Neural Network (GNN), ContactNet, for classifying PPI models obtained from docking algorithms into accurate and incorrect ones. When trained on docked antigen and modeled antibody structures, ContactNet doubles the accuracy of current state-of-the-art scoring functions, achieving accurate models among its Top-10 at 43% of the test cases. When applied to unbound antibodies, its Top-10 accuracy increases to 65%. This performance is achieved without MSA and the approach is applicable to other types of interactions, such as host-pathogens or general PPIs.
|
http://arxiv.org/pdf/2406.18314v1
|
[
"Matan Halfon",
"Tomer Cohen",
"Raanan Fattal",
"Dina Schneidman-Duhovny"
] |
2024-06-26T12:54:41Z
|
2024-06-26T12:54:41Z
|
2406.18310
|
Spatial-temporal Hierarchical Reinforcement Learning for Interpretable
Pathology Image Super-Resolution
|
Pathology image are essential for accurately interpreting lesion cells in cytopathology screening, but acquiring high-resolution digital slides requires specialized equipment and long scanning times. Though super-resolution (SR) techniques can alleviate this problem, existing deep learning models recover pathology image in a black-box manner, which can lead to untruthful biological details and misdiagnosis. Additionally, current methods allocate the same computational resources to recover each pixel of pathology image, leading to the sub-optimal recovery issue due to the large variation of pathology image. In this paper, we propose the first hierarchical reinforcement learning framework named Spatial-Temporal hierARchical Reinforcement Learning (STAR-RL), mainly for addressing the aforementioned issues in pathology image super-resolution problem. We reformulate the SR problem as a Markov decision process of interpretable operations and adopt the hierarchical recovery mechanism in patch level, to avoid sub-optimal recovery. Specifically, the higher-level spatial manager is proposed to pick out the most corrupted patch for the lower-level patch worker. Moreover, the higher-level temporal manager is advanced to evaluate the selected patch and determine whether the optimization should be stopped earlier, thereby avoiding the over-processed problem. Under the guidance of spatial-temporal managers, the lower-level patch worker processes the selected patch with pixel-wise interpretable actions at each time step. Experimental results on medical images degraded by different kernels show the effectiveness of STAR-RL. Furthermore, STAR-RL validates the promotion in tumor diagnosis with a large margin and shows generalizability under various degradations. The source code is available at https://github.com/CUHK-AIM-Group/STAR-RL.
|
http://arxiv.org/pdf/2406.18310v1
|
[
"Wenting Chen",
"Jie Liu",
"Tommy W. S. Chow",
"Yixuan Yuan"
] |
2024-06-26T12:50:10Z
|
2024-06-26T12:50:10Z
|
2406.18309
|
Automated Immunophenotyping Assessment for Diagnosing Childhood Acute
Leukemia using Set-Transformers
|
Acute Leukemia is the most common hematologic malignancy in children and adolescents. A key methodology in the diagnostic evaluation of this malignancy is immunophenotyping based on Multiparameter Flow Cytometry (FCM). However, this approach is manual, and thus time-consuming and subjective. To alleviate this situation, we propose in this paper the FCM-Former, a machine learning, self-attention based FCM-diagnostic tool, automating the immunophenotyping assessment in Childhood Acute Leukemia. The FCM-Former is trained in a supervised manner, by directly using flow cytometric data. Our FCM-Former achieves an accuracy of 96.5% assigning lineage to each sample among 960 cases of either acute B-cell, T-cell lymphoblastic, and acute myeloid leukemia (B-ALL, T-ALL, AML). To the best of our knowledge, the FCM-Former is the first work that automates the immunophenotyping assessment with FCM data in diagnosing pediatric Acute Leukemia.
|
http://arxiv.org/pdf/2406.18309v1
|
[
"Elpiniki Maria Lygizou",
"Michael Reiter",
"Margarita Maurer-Granofszky",
"Michael Dworzak",
"Radu Grosu"
] |
2024-06-26T12:50:07Z
|
2024-06-26T12:50:07Z
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.