arxiv_id | title | abstract | link | authors | updated | published
---|---|---|---|---|---|---|
2406.11650
|
Multimodal Learning With Intraoperative CBCT & Variably Aligned
Preoperative CT Data To Improve Segmentation
|
Cone-beam computed tomography (CBCT) is an important tool facilitating computer aided interventions, despite often suffering from artifacts that pose challenges for accurate interpretation. While the degraded image quality can affect downstream segmentation, the availability of high-quality preoperative scans represents potential for improvement. Here we consider a setting where preoperative CT and intraoperative CBCT scans are available; however, the alignment (registration) between the scans is imperfect. We propose a multimodal learning method that fuses roughly aligned CBCT and CT scans and investigate the effect of CBCT quality and misalignment on the final segmentation performance. For that purpose, we make use of a synthetically generated data set containing real CT and synthetic CBCT volumes. As an application scenario, we focus on liver and liver tumor segmentation. We show that the fusion of preoperative CT and simulated, intraoperative CBCT mostly improves segmentation performance (compared to using intraoperative CBCT only) and that even clearly misaligned preoperative data has the potential to improve segmentation performance.
|
http://arxiv.org/pdf/2406.11650v2
|
[
"Maximilian E. Tschuchnig",
"Philipp Steininger",
"Michael Gadermayr"
] |
2024-07-01T09:57:32Z
|
2024-06-17T15:31:54Z
|
2311.13580
|
$σ$-PCA: a building block for neural learning of identifiable
linear transformations
|
Linear principal component analysis (PCA) learns (semi-)orthogonal transformations by orienting the axes to maximize variance. Consequently, it can only identify orthogonal axes whose variances are clearly distinct, but it cannot identify the subsets of axes whose variances are roughly equal. It cannot eliminate the subspace rotational indeterminacy: it fails to disentangle components with equal variances (eigenvalues), resulting, in each eigen subspace, in randomly rotated axes. In this paper, we propose $\sigma$-PCA, a method that (1) formulates a unified model for linear and nonlinear PCA, the latter being a special case of linear independent component analysis (ICA), and (2) introduces a missing piece into nonlinear PCA that allows it to eliminate, from the canonical linear PCA solution, the subspace rotational indeterminacy -- without whitening the inputs. Whitening, a preprocessing step which converts the inputs into unit-variance inputs, has generally been a prerequisite step for linear ICA methods, which meant that conventional nonlinear PCA could not necessarily preserve the orthogonality of the overall transformation, could not directly reduce dimensionality, and could not intrinsically order by variances. We offer insights on the relationship between linear PCA, nonlinear PCA, and linear ICA -- three methods with autoencoder formulations for learning special linear transformations from data, transformations that are (semi-)orthogonal for PCA, and arbitrary unit-variance for ICA. As part of our formulation, nonlinear PCA can be seen as a method that maximizes both variance and statistical independence, lying in the middle between linear PCA and linear ICA, serving as a building block for learning linear transformations that are identifiable.
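The rotational indeterminacy described in this abstract is easy to see numerically. Below is a minimal NumPy sketch (synthetic data, not from the paper) showing that PCA recovers the distinct-variance axis but leaves the equal-variance subspace determined only up to rotation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three latent sources: one with variance 4, two with EQUAL variance 1.
S = rng.standard_normal((10000, 3)) * np.array([2.0, 1.0, 1.0])
A = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # random orthogonal mixing
X = S @ A.T

eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))     # PCA via the covariance
print(np.round(eigvals[::-1], 2))                  # approx [4.0, 1.0, 1.0]

# The variance-4 axis is identifiable, but any rotation of the two
# eigenvalue-1 axes is an equally valid PCA solution: the subspace
# rotational indeterminacy that sigma-PCA aims to eliminate.
```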
|
http://arxiv.org/pdf/2311.13580v4
|
[
"Fahdi Kanavati",
"Lucy Katsnith",
"Masayuki Tsuneki"
] |
2024-07-01T09:55:40Z
|
2023-11-22T18:34:49Z
|
2407.01122
|
Calibrated Large Language Models for Binary Question Answering
|
Quantifying the uncertainty of predictions made by large language models (LLMs) in binary text classification tasks remains a challenge. Calibration, in the context of LLMs, refers to the alignment between the model's predicted probabilities and the actual correctness of its predictions. A well-calibrated model should produce probabilities that accurately reflect the likelihood of its predictions being correct. We propose a novel approach that utilizes the inductive Venn--Abers predictor (IVAP) to calibrate the probabilities associated with the output tokens corresponding to the binary labels. Our experiments on the BoolQ dataset using the Llama 2 model demonstrate that IVAP consistently outperforms the commonly used temperature scaling method for various label token choices, achieving well-calibrated probabilities while maintaining high predictive quality. Our findings contribute to the understanding of calibration techniques for LLMs and provide a practical solution for obtaining reliable uncertainty estimates in binary question answering tasks, enhancing the interpretability and trustworthiness of LLM predictions.
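For readers unfamiliar with Venn--Abers calibration, here is a hedged sketch of the standard inductive Venn--Abers (IVAP) construction using scikit-learn's isotonic regression; the synthetic calibration data and function name are illustrative assumptions, not the authors' code:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def ivap_interval(cal_scores, cal_labels, test_score):
    """Inductive Venn-Abers sketch: refit isotonic regression twice on the
    calibration set augmented with the test score labelled 0 and then 1,
    returning the multiprobability pair (p0, p1)."""
    probs = []
    for hypothetical_label in (0, 1):
        s = np.append(cal_scores, test_score)
        y = np.append(cal_labels, hypothetical_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(s, y)
        probs.append(float(iso.predict([test_score])[0]))
    return tuple(probs)   # p0 <= p1 brackets the calibrated probability

# Toy usage on synthetic scores (illustrative assumption only).
rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, 200)
labels = (rng.uniform(0, 1, 200) < scores).astype(int)
print(ivap_interval(scores, labels, 0.7))
```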
|
http://arxiv.org/pdf/2407.01122v1
|
[
"Patrizio Giovannotti",
"Alexander Gammerman"
] |
2024-07-01T09:31:03Z
|
2024-07-01T09:31:03Z
|
2402.07025
|
Generalization Error of Graph Neural Networks in the Mean-field Regime
|
This work provides a theoretical framework for assessing the generalization error of graph neural networks in the over-parameterized regime, where the number of parameters surpasses the quantity of data points. We explore two widely utilized types of graph neural networks: graph convolutional neural networks and message passing graph neural networks. Prior to this study, existing bounds on the generalization error in the over-parameterized regime were uninformative, limiting our understanding of over-parameterized network performance. Our novel approach involves deriving upper bounds within the mean-field regime for evaluating the generalization error of these graph neural networks. We establish upper bounds with a convergence rate of $O(1/n)$, where $n$ is the number of graph samples. These upper bounds offer a theoretical assurance of the networks' performance on unseen data in the challenging over-parameterized regime and overall contribute to our understanding of their performance.
|
http://arxiv.org/pdf/2402.07025v3
|
[
"Gholamali Aminian",
"Yixuan He",
"Gesine Reinert",
"Łukasz Szpruch",
"Samuel N. Cohen"
] |
2024-07-01T09:27:34Z
|
2024-02-10T19:12:31Z
|
2407.01115
|
Enabling Mixed Effects Neural Networks for Diverse, Clustered Data Using
Monte Carlo Methods
|
Neural networks often assume independence among input data samples, disregarding correlations arising from inherent clustering patterns in real-world datasets (e.g., due to different sites or repeated measurements). Recently, mixed effects neural networks (MENNs) which separate cluster-specific 'random effects' from cluster-invariant 'fixed effects' have been proposed to improve generalization and interpretability for clustered data. However, existing methods only allow for approximate quantification of cluster effects and are limited to regression and binary targets with only one clustering feature. We present MC-GMENN, a novel approach employing Monte Carlo methods to train Generalized Mixed Effects Neural Networks. We empirically demonstrate that MC-GMENN outperforms existing mixed effects deep learning models in terms of generalization performance, time complexity, and quantification of inter-cluster variance. Additionally, MC-GMENN is applicable to a wide range of datasets, including multi-class classification tasks with multiple high-cardinality categorical features. For these datasets, we show that MC-GMENN outperforms conventional encoding and embedding methods, simultaneously offering a principled methodology for interpreting the effects of clustering patterns.
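As a rough illustration of the idea (not the MC-GMENN implementation), the PyTorch sketch below combines a fixed-effect MLP with per-cluster Gaussian random intercepts whose contribution is marginalized by Monte Carlo sampling; all layer sizes and names are assumptions:

```python
import torch
import torch.nn as nn

class MCMixedEffectsNet(nn.Module):
    """Sketch: fixed-effect MLP plus per-cluster Gaussian random effects,
    marginalized by Monte Carlo sampling (an assumed simplification of the
    mixed effects neural network idea, not MC-GMENN itself)."""
    def __init__(self, d_in, n_clusters, n_classes, n_mc=8):
        super().__init__()
        self.fixed = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                   nn.Linear(64, n_classes))
        self.re_mu = nn.Parameter(torch.zeros(n_clusters, n_classes))
        self.re_logvar = nn.Parameter(torch.zeros(n_clusters, n_classes))
        self.n_mc = n_mc

    def forward(self, x, cluster_id):
        logits = self.fixed(x)                    # cluster-invariant effects
        mu = self.re_mu[cluster_id]               # cluster-specific effects
        std = (0.5 * self.re_logvar[cluster_id]).exp()
        # Average predictions over Monte Carlo draws of the random effects.
        mc = [logits + mu + std * torch.randn_like(std)
              for _ in range(self.n_mc)]
        return torch.stack(mc).mean(0)
```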
|
http://arxiv.org/pdf/2407.01115v1
|
[
"Andrej Tschalzev",
"Paul Nitschke",
"Lukas Kirchdorfer",
"Stefan Lüdtke",
"Christian Bartelt",
"Heiner Stuckenschmidt"
] |
2024-07-01T09:24:04Z
|
2024-07-01T09:24:04Z
|
2407.01111
|
Proximity Matters: Local Proximity Preserved Balancing for Treatment
Effect Estimation
|
Heterogeneous treatment effect (HTE) estimation from observational data poses significant challenges due to treatment selection bias. Existing methods address this bias by minimizing distribution discrepancies between treatment groups in latent space, focusing on global alignment. However, the fruitful aspect of local proximity, where similar units exhibit similar outcomes, is often overlooked. In this study, we propose Proximity-aware Counterfactual Regression (PCR) to exploit proximity for representation balancing within the HTE estimation context. Specifically, we introduce a local proximity preservation regularizer based on optimal transport to depict the local proximity in discrepancy calculation. Furthermore, to overcome the curse of dimensionality that renders the estimation of discrepancy ineffective, exacerbated by limited data availability for HTE estimation, we develop an informative subspace projector, which trades off minimal distance precision for improved sample complexity. Extensive experiments demonstrate that PCR accurately matches units across different treatment groups, effectively mitigates treatment selection bias, and significantly outperforms competitors. Code is available at https://anonymous.4open.science/status/ncr-B697.
|
http://arxiv.org/pdf/2407.01111v1
|
[
"Hao Wang",
"Zhichao Chen",
"Yuan Shen",
"Jiajun Fan",
"Zhaoran Liu",
"Degui Yang",
"Xinggao Liu",
"Haoxuan Li"
] |
2024-07-01T09:20:26Z
|
2024-07-01T09:20:26Z
|
2407.01110
|
SecGenAI: Enhancing Security of Cloud-based Generative AI Applications
within Australian Critical Technologies of National Interest
|
The rapid advancement of Generative AI (GenAI) technologies offers transformative opportunities within Australia's critical technologies of national interest while introducing unique security challenges. This paper presents SecGenAI, a comprehensive security framework for cloud-based GenAI applications, with a focus on Retrieval-Augmented Generation (RAG) systems. SecGenAI addresses functional, infrastructure, and governance requirements, integrating end-to-end security analysis to generate specifications emphasizing data privacy, secure deployment, and shared responsibility models. Aligned with Australian Privacy Principles, AI Ethics Principles, and guidelines from the Australian Cyber Security Centre and Digital Transformation Agency, SecGenAI mitigates threats such as data leakage, adversarial attacks, and model inversion. The framework's novel approach combines advanced machine learning techniques with robust security measures, ensuring compliance with Australian regulations while enhancing the reliability and trustworthiness of GenAI systems. This research contributes to the field of intelligent systems by providing actionable strategies for secure GenAI implementation in industry, fostering innovation in AI applications, and safeguarding national interests.
|
http://arxiv.org/pdf/2407.01110v1
|
[
"Christoforus Yoga Haryanto",
"Minh Hieu Vu",
"Trung Duc Nguyen",
"Emily Lomempow",
"Yulia Nurliana",
"Sona Taheri"
] |
2024-07-01T09:19:50Z
|
2024-07-01T09:19:50Z
|
2406.14969
|
Uni-Mol2: Exploring Molecular Pretraining Model at Scale
|
In recent years, pretraining models have made significant advancements in the fields of natural language processing (NLP), computer vision (CV), and life sciences. The significant advancements in NLP and CV are predominantly driven by the expansion of model parameters and data size, a phenomenon now recognized as the scaling laws. However, the scaling law in molecular pretraining models remains unexplored. In this work, we present Uni-Mol2, an innovative molecular pretraining model that leverages a two-track transformer to effectively integrate features at the atomic level, graph level, and geometry structure level. Along with this, we systematically investigate the scaling law within molecular pretraining models, characterizing the power-law correlations between validation loss and model size, dataset size, and computational resources. Consequently, we successfully scale Uni-Mol2 to 1.1 billion parameters through pretraining on 800 million conformations, making it the largest molecular pretraining model to date. Extensive experiments show consistent improvement in the downstream tasks as the model size grows. The Uni-Mol2 with 1.1B parameters also outperforms existing methods, achieving an average improvement of 27% on the QM9 dataset and 14% on the COMPAS-1D dataset.
|
http://arxiv.org/pdf/2406.14969v2
|
[
"Xiaohong Ji",
"Zhen Wang",
"Zhifeng Gao",
"Hang Zheng",
"Linfeng Zhang",
"Guolin Ke",
"Weinan E"
] |
2024-07-01T09:08:44Z
|
2024-06-21T08:28:54Z
|
2407.01100
|
Eliminating Position Bias of Language Models: A Mechanistic Approach
|
Position bias has proven to be a prevalent issue of modern language models (LMs), where the models prioritize content based on its position within the given context. This bias often leads to unexpected model failures and hurts performance, robustness, and reliability across various applications. Our mechanistic analysis attributes the position bias to two components employed in nearly all state-of-the-art LMs: causal attention and relative positional encodings. Specifically, we find that causal attention generally causes models to favor distant content, while relative positional encodings like RoPE prefer nearby ones based on the analysis of retrieval-augmented question answering (QA). Further, our empirical study on object detection reveals that position bias is also present in vision-language models (VLMs). Based on the above analyses, we propose to ELIMINATE position bias caused by different input segment orders (e.g., options in LM-as-a-judge, retrieved documents in QA) in a TRAINING-FREE ZERO-SHOT manner. Our method changes the causal attention to bidirectional attention between segments and utilizes model attention values to decide the relative orders of segments instead of using the order provided in input prompts, therefore enabling Position-INvariant inferencE (PINE) at the segment level. By eliminating position bias, models achieve better performance and reliability in downstream tasks where position bias widely exists, such as LM-as-a-judge and retrieval-augmented QA. Notably, PINE is especially useful when adapting LMs for evaluating reasoning pairs: it consistently provides 8 to 10 percentage points performance gains in most cases, and makes Llama-3-70B-Instruct perform even better than GPT-4-0125-preview on the RewardBench reasoning subset.
|
http://arxiv.org/pdf/2407.01100v1
|
[
"Ziqi Wang",
"Hanlin Zhang",
"Xiner Li",
"Kuan-Hao Huang",
"Chi Han",
"Shuiwang Ji",
"Sham M. Kakade",
"Hao Peng",
"Heng Ji"
] |
2024-07-01T09:06:57Z
|
2024-07-01T09:06:57Z
|
2407.01092
|
Kolmogorov-Arnold Convolutions: Design Principles and Empirical Studies
|
The emergence of Kolmogorov-Arnold Networks (KANs) has sparked significant interest and debate within the scientific community. This paper explores the application of KANs in the domain of computer vision (CV). We examine the convolutional version of KANs, considering various nonlinearity options beyond splines, such as Wavelet transforms and a range of polynomials. We propose a parameter-efficient design for Kolmogorov-Arnold convolutional layers and a parameter-efficient finetuning algorithm for pre-trained KAN models, as well as KAN convolutional versions of self-attention and focal modulation layers. We provide empirical evaluations conducted on MNIST, CIFAR10, CIFAR100, Tiny ImageNet, ImageNet1k, and HAM10000 datasets for image classification tasks. Additionally, we explore segmentation tasks, proposing U-Net-like architectures with KAN convolutions, and achieving state-of-the-art results on BUSI, GlaS, and CVC datasets. We summarize all of our findings in a preliminary design guide of KAN convolutional models for computer vision tasks. Furthermore, we investigate regularization techniques for KANs. All experimental code and implementations of convolutional layers and models, along with ImageNet1k pre-trained weights, are available on GitHub: https://github.com/IvanDrokin/torch-conv-kan
|
http://arxiv.org/pdf/2407.01092v1
|
[
"Ivan Drokin"
] |
2024-07-01T08:49:33Z
|
2024-07-01T08:49:33Z
|
2407.04734
|
Neuro-Symbolic Fusion of Wi-Fi Sensing Data for Passive Radar with
Inter-Modal Knowledge Transfer
|
Wi-Fi devices, akin to passive radars, can discern human activities within indoor settings due to the human body's interaction with electromagnetic signals. Current Wi-Fi sensing applications predominantly employ data-driven learning techniques to associate the fluctuations in the physical properties of the communication channel with the human activity causing them. However, these techniques often lack the desired flexibility and transparency. This paper introduces DeepProbHAR, a neuro-symbolic architecture for Wi-Fi sensing, providing initial evidence that Wi-Fi signals can differentiate between simple movements, such as leg or arm movements, which are integral to human activities like running or walking. The neuro-symbolic approach affords gathering such evidence without needing additional specialised data collection or labelling. The training of DeepProbHAR is facilitated by declarative domain knowledge obtained from a camera feed and by fusing signals from various antennas of the Wi-Fi receivers. DeepProbHAR achieves results comparable to the state-of-the-art in human activity recognition. Moreover, as a by-product of the learning process, DeepProbHAR generates specialised classifiers for simple movements that match the accuracy of models trained on finely labelled datasets, which would be particularly costly.
|
http://arxiv.org/pdf/2407.04734v1
|
[
"Marco Cominelli",
"Francesco Gringoli",
"Lance M. Kaplan",
"Mani B. Srivastava",
"Trevor Bihl",
"Erik P. Blasch",
"Nandini Iyer",
"Federico Cerutti"
] |
2024-07-01T08:43:27Z
|
2024-07-01T08:43:27Z
|
2407.01085
|
Rethinking LLM-based Preference Evaluation
|
Recently, large language model (LLM)-based preference evaluation has been widely adopted to compare pairs of model responses. However, a severe bias towards lengthy responses has been observed, raising concerns about the reliability of this evaluation method. In this work, we designed a series of controlled experiments to study the major factors affecting the metric of LLM-based preference evaluation, i.e., win rate, and conclude that the win rate is affected by two axes of model response: desirability and information mass, where the former is length-independent and related to trustworthiness, and the latter is length-dependent and can be represented by conditional entropy. We find that length impacts the existing evaluations by influencing information mass. However, a reliable evaluation metric should not only assess content quality but also ensure that the assessment is not confounded by extraneous factors such as response length. Therefore, we propose a simple yet effective adjustment, AdapAlpaca, to the existing practice of win rate measurement. Specifically, by adjusting the lengths of reference answers to match the test model's answers within the same interval, we debias information mass relative to length, ensuring a fair model evaluation.
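A minimal sketch of the length-interval matching that AdapAlpaca proposes, under the assumption that matching is done by bucketing word counts (the helper below is hypothetical, not the released implementation):

```python
def adap_alpaca_pairs(test_answers, reference_pool, bin_width=100):
    """Hypothetical sketch of AdapAlpaca-style length matching: for each test
    answer, pick a reference answer whose length falls in the same interval,
    so the measured win rate is not confounded by response length."""
    def length_bin(text):
        return len(text.split()) // bin_width

    pools = {}
    for ref in reference_pool:
        pools.setdefault(length_bin(ref), []).append(ref)

    pairs = []
    for ans in test_answers:
        candidates = pools.get(length_bin(ans))
        if candidates:  # compare only within the same length interval
            pairs.append((ans, candidates[0]))
    return pairs
```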
|
http://arxiv.org/pdf/2407.01085v1
|
[
"Zhengyu Hu",
"Linxin Song",
"Jieyu Zhang",
"Zheyuan Xiao",
"Jingang Wang",
"Zhenyu Chen",
"Jieyu Zhao",
"Hui Xiong"
] |
2024-07-01T08:37:41Z
|
2024-07-01T08:37:41Z
|
2407.01079
|
On Statistical Rates and Provably Efficient Criteria of Latent Diffusion
Transformers (DiTs)
|
We investigate the statistical and computational limits of latent \textbf{Di}ffusion \textbf{T}ransformers (\textbf{DiT}s) under the low-dimensional linear latent space assumption. Statistically, we study the universal approximation and sample complexity of the DiTs score function, as well as the distribution recovery property of the initial data. Specifically, under mild data assumptions, we derive an approximation error bound for the score network of latent DiTs, which is sub-linear in the latent space dimension. Additionally, we derive the corresponding sample complexity bound and show that the data distribution generated from the estimated score function converges toward a proximate area of the original one. Computationally, we characterize the hardness of both forward inference and backward computation of latent DiTs, assuming the Strong Exponential Time Hypothesis (SETH). For forward inference, we identify efficient criteria for all possible latent DiTs inference algorithms and showcase our theory by pushing the efficiency toward almost-linear time inference. For backward computation, we leverage the low-rank structure within the gradient computation of DiTs training for possible algorithmic speedup. Specifically, we show that such speedup achieves almost-linear time latent DiTs training by casting the DiTs gradient as a series of chained low-rank approximations with bounded error. Under the low-dimensional assumption, we show that the convergence rate and the computational efficiency are both dominated by the dimension of the subspace, suggesting that latent DiTs have the potential to bypass the challenges associated with the high dimensionality of initial data.
|
http://arxiv.org/pdf/2407.01079v1
|
[
"Jerry Yao-Chieh Hu",
"Weimin Wu",
"Zhuoru Li",
"Zhao Song",
"Han Liu"
] |
2024-07-01T08:34:40Z
|
2024-07-01T08:34:40Z
|
2407.04733
|
Accurate Passive Radar via an Uncertainty-Aware Fusion of Wi-Fi Sensing
Data
|
Wi-Fi devices can effectively be used as passive radar systems that sense what happens in the surroundings and can even discern human activity. We propose, for the first time, a principled architecture which employs Variational Auto-Encoders for estimating a latent distribution responsible for generating the data, and Evidential Deep Learning for its ability to sense out-of-distribution activities. We verify that the fused data processed by different antennas of the same Wi-Fi receiver results in increased accuracy of human activity recognition compared with the most recent benchmarks, while still being informative when facing out-of-distribution samples and enabling semantic interpretation of latent variables in terms of physical phenomena. The results of this paper are a first contribution toward the ultimate goal of providing a flexible, semantic characterisation of black-swan events, i.e., events for which we have limited to no training data.
|
http://arxiv.org/abs/2407.04733v1
|
[
"Marco Cominelli",
"Francesco Gringoli",
"Lance M. Kaplan",
"Mani B. Srivastava",
"Federico Cerutti"
] |
2024-07-01T08:26:15Z
|
2024-07-01T08:26:15Z
|
2309.09134
|
Total Variation Distance Meets Probabilistic Inference
|
In this paper, we establish a novel connection between total variation (TV) distance estimation and probabilistic inference. In particular, we present an efficient, structure-preserving reduction from relative approximation of TV distance to probabilistic inference over directed graphical models. This reduction leads to a fully polynomial randomized approximation scheme (FPRAS) for estimating TV distances between same-structure distributions over any class of Bayes nets for which there is an efficient probabilistic inference algorithm. In particular, it leads to an FPRAS for estimating TV distances between distributions that are defined over a common Bayes net of small treewidth. Prior to this work, such approximation schemes only existed for estimating TV distances between product distributions. Our approach employs a new notion of partial couplings of high-dimensional distributions, which might be of independent interest.
|
http://arxiv.org/pdf/2309.09134v2
|
[
"Arnab Bhattacharyya",
"Sutanu Gayen",
"Kuldeep S. Meel",
"Dimitrios Myrisiotis",
"A. Pavan",
"N. V. Vinodchandran"
] |
2024-07-01T08:24:41Z
|
2023-09-17T02:12:36Z
|
2407.01067
|
Human-like object concept representations emerge naturally in multimodal
large language models
|
The conceptualization and categorization of natural objects in the human mind have long intrigued cognitive scientists and neuroscientists, offering crucial insights into human perception and cognition. Recently, the rapid development of Large Language Models (LLMs) has raised the attractive question of whether these models can also develop human-like object representations through exposure to vast amounts of linguistic and multimodal data. In this study, we combined behavioral and neuroimaging analysis methods to uncover how the object concept representations in LLMs correlate with those of humans. By collecting large-scale datasets of 4.7 million triplet judgments from LLM and Multimodal LLM (MLLM), we were able to derive low-dimensional embeddings that capture the underlying similarity structure of 1,854 natural objects. The resulting 66-dimensional embeddings were found to be highly stable and predictive, and exhibited semantic clustering akin to human mental representations. Interestingly, the interpretability of the dimensions underlying these embeddings suggests that LLM and MLLM have developed human-like conceptual representations of natural objects. Further analysis demonstrated strong alignment between the identified model embeddings and neural activity patterns in many functionally defined brain ROIs (e.g., EBA, PPA, RSC and FFA). This provides compelling evidence that the object representations in LLMs, while not identical to those in the human brain, share fundamental commonalities that reflect key schemas of human conceptual knowledge. This study advances our understanding of machine intelligence and informs the development of more human-like artificial cognitive systems.
|
http://arxiv.org/pdf/2407.01067v1
|
[
"Changde Du",
"Kaicheng Fu",
"Bincheng Wen",
"Yi Sun",
"Jie Peng",
"Wei Wei",
"Ying Gao",
"Shengpei Wang",
"Chuncheng Zhang",
"Jinpeng Li",
"Shuang Qiu",
"Le Chang",
"Huiguang He"
] |
2024-07-01T08:17:19Z
|
2024-07-01T08:17:19Z
|
2407.01065
|
Improve ROI with Causal Learning and Conformal Prediction
|
In the commercial sphere, such as operations and maintenance, advertising, and marketing recommendations, intelligent decision-making utilizing data mining and neural network technologies is crucial, especially in resource allocation to optimize ROI. This study delves into the Cost-aware Binary Treatment Assignment Problem (C-BTAP) across different industries, with a focus on the state-of-the-art Direct ROI Prediction (DRP) method. However, the DRP model confronts issues like covariate shift and insufficient training data, hindering its real-world effectiveness. Addressing these challenges is essential for ensuring dependable and robust predictions in varied operational contexts. This paper presents a robust Direct ROI Prediction (rDRP) method, designed to address challenges in real-world deployment of neural network-based uplift models, particularly under conditions of covariate shift and insufficient training data. The rDRP method, enhancing the standard DRP model, does not alter the model's structure or require retraining. It utilizes conformal prediction and Monte Carlo dropout for interval estimation, adapting to model uncertainty and data distribution shifts. A heuristic calibration method, inspired by a Kaggle competition, combines point and interval estimates. The effectiveness of these approaches is validated through offline tests and online A/B tests in various settings, demonstrating significant improvements in target rewards compared to the state-of-the-art method.
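The interval-estimation ingredients named here (MC dropout plus conformal prediction) can be sketched generically; the snippet below is an assumed combination for illustration, not the exact rDRP procedure:

```python
import numpy as np
import torch

def mc_dropout_interval(model, x, n_samples=100, alpha=0.1, cal_residuals=None):
    """Interval estimate from MC dropout, optionally widened by a split-
    conformal quantile of held-out calibration residuals (an assumed
    combination sketch, not the paper's calibration method)."""
    model.train()                    # keep dropout layers active at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    lo = preds.quantile(alpha / 2, dim=0)
    hi = preds.quantile(1 - alpha / 2, dim=0)
    if cal_residuals is not None:    # conformal widening step
        q = float(np.quantile(np.abs(cal_residuals), 1 - alpha))
        lo, hi = lo - q, hi + q
    return lo, hi
```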
|
http://arxiv.org/pdf/2407.01065v1
|
[
"Meng Ai",
"Zhuo Chen",
"Jibin Wang",
"Jing Shang",
"Tao Tao",
"Zhen Li"
] |
2024-07-01T08:16:25Z
|
2024-07-01T08:16:25Z
|
2407.01054
|
Joint Pruning and Channel-wise Mixed-Precision Quantization for
Efficient Deep Neural Networks
|
The resource requirements of deep neural networks (DNNs) pose significant challenges to their deployment on edge devices. Common approaches to address this issue are pruning and mixed-precision quantization, which lead to latency and memory occupation improvements. These optimization techniques are usually applied independently. We propose a novel methodology to apply them jointly via a lightweight gradient-based search, and in a hardware-aware manner, greatly reducing the time required to generate Pareto-optimal DNNs in terms of accuracy versus cost (i.e., latency or memory). We test our approach on three edge-relevant benchmarks, namely CIFAR-10, Google Speech Commands, and Tiny ImageNet. When targeting the optimization of the memory footprint, we are able to achieve a size reduction of 47.50% and 69.54% at iso-accuracy with the baseline networks with all weights quantized at 8 and 2-bit, respectively. Our method surpasses a previous state-of-the-art approach with up to 56.17% size reduction at iso-accuracy. With respect to the sequential application of state-of-the-art pruning and mixed-precision optimizations, we obtain comparable or superior results, but with a significantly lowered training time. In addition, we show how well-tailored cost models can improve the cost versus accuracy trade-offs when targeting specific hardware for deployment.
|
http://arxiv.org/pdf/2407.01054v1
|
[
"Beatrice Alessandra Motetti",
"Matteo Risso",
"Alessio Burrello",
"Enrico Macii",
"Massimo Poncino",
"Daniele Jahier Pagliari"
] |
2024-07-01T08:07:02Z
|
2024-07-01T08:07:02Z
|
2307.11465
|
A Deep Learning Approach for Overall Survival Prediction in Lung Cancer
with Missing Values
|
In the field of lung cancer research, particularly in the analysis of overall survival (OS), artificial intelligence (AI) serves crucial roles with specific aims. Given the prevalent issue of missing data in the medical domain, our primary objective is to develop an AI model capable of dynamically handling this missing data. Additionally, we aim to leverage all accessible data, effectively analyzing both uncensored patients who have experienced the event of interest and censored patients who have not, by embedding a specialized technique within our AI model, not commonly utilized in other AI tasks. Through the realization of these objectives, our model aims to provide precise OS predictions for non-small cell lung cancer (NSCLC) patients, thus overcoming these significant challenges. We present a novel approach to survival analysis with missing values in the context of NSCLC, which exploits the strengths of the transformer architecture to account only for available features without requiring any imputation strategy. More specifically, this model tailors the transformer architecture to tabular data by adapting its feature embedding and masked self-attention to mask missing data and fully exploit the available ones. By making use of ad-hoc designed losses for OS, it is able to account for both censored and uncensored patients, as well as changes in risks over time. We compared our method with state-of-the-art models for survival analysis coupled with different imputation strategies. We evaluated the results obtained over a period of 6 years using different time granularities obtaining a Ct-index, a time-dependent variant of the C-index, of 71.97, 77.58 and 80.72 for time units of 1 month, 1 year and 2 years, respectively, outperforming all state-of-the-art methods regardless of the imputation method used.
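To make the masked self-attention idea concrete, here is a hedged PyTorch sketch that embeds each tabular feature as a token and excludes missing ones via `key_padding_mask`; dimensions and names are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MaskedTabularAttention(nn.Module):
    """Sketch: treat each tabular feature as a token and mask missing ones in
    self-attention, so predictions use only observed features without any
    imputation (an assumed simplification of the paper's mechanism)."""
    def __init__(self, n_features, d_model=32, n_heads=4):
        super().__init__()
        self.value_proj = nn.Linear(1, d_model)
        self.feature_emb = nn.Embedding(n_features, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x, missing_mask):
        # x: (batch, n_features); missing_mask: bool, True where value missing
        ids = torch.arange(x.shape[1], device=x.device)
        tokens = (self.value_proj(x.nan_to_num().unsqueeze(-1))
                  + self.feature_emb(ids))
        out, _ = self.attn(tokens, tokens, tokens,
                           key_padding_mask=missing_mask)
        return out
```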
|
http://arxiv.org/abs/2307.11465v5
|
[
"Camillo Maria Caruso",
"Valerio Guarrasi",
"Sara Ramella",
"Paolo Soda"
] |
2024-07-01T08:01:56Z
|
2023-07-21T10:01:55Z
|
2407.01049
|
SE(3)-Hyena Operator for Scalable Equivariant Learning
|
Modeling global geometric context while maintaining equivariance is crucial for accurate predictions in many fields such as biology, chemistry, or vision. Yet, this is challenging due to the computational demands of processing high-dimensional data at scale. Existing approaches, such as equivariant self-attention or distance-based message passing, suffer from quadratic complexity with respect to sequence length, while localized methods sacrifice global information. Inspired by the recent success of state-space and long-convolutional models, in this work, we introduce the SE(3)-Hyena operator, an equivariant long-convolutional model based on the Hyena operator. SE(3)-Hyena captures global geometric context at sub-quadratic complexity while maintaining equivariance to rotations and translations. Evaluated on equivariant associative recall and n-body modeling, SE(3)-Hyena matches or outperforms equivariant self-attention while requiring significantly less memory and computational resources for long sequences. Our model processes the geometric context of 20k tokens 3.5x faster than the equivariant transformer and allows a 175x longer context within the same memory budget.
|
http://arxiv.org/pdf/2407.01049v1
|
[
"Artem Moskalev",
"Mangal Prakash",
"Rui Liao",
"Tommaso Mansi"
] |
2024-07-01T07:56:48Z
|
2024-07-01T07:56:48Z
|
2402.15993
|
Model Compression Method for S4 with Diagonal State Space Layers using
Balanced Truncation
|
To implement deep learning models on edge devices, model compression methods have been widely recognized as useful. However, it remains unclear which model compression methods are effective for Structured State Space Sequence (S4) models incorporating Diagonal State Space (DSS) layers, tailored for processing long-sequence data. In this paper, we propose using balanced truncation, a prevalent model reduction technique in control theory, applied specifically to the DSS layers of a pre-trained S4 model, as a novel model compression method. Moreover, we propose using the reduced model parameters obtained by the balanced truncation as initial parameters of S4 models with DSS layers during the main training process. Numerical experiments demonstrate that our trained models combined with the balanced truncation surpass conventionally trained models with Skew-HiPPO initialization in accuracy, even with fewer parameters. Furthermore, our observations reveal a positive correlation: higher accuracy in the original model consistently leads to increased accuracy in models trained using our model compression method, suggesting that our approach effectively leverages the strengths of the original model.
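Balanced truncation itself is a standard control-theory routine; the sketch below applies it to a toy state-space system standing in for one DSS layer, assuming the python-control package and its `balred` API (real DSS layers use complex diagonal state matrices, simplified here to a real one):

```python
import numpy as np
import control   # pip install control

# Toy stable diagonal system standing in for one DSS layer.
n, r = 8, 3                                   # full and reduced state orders
A = np.diag(-np.linspace(0.5, 4.0, n))
B = np.ones((n, 1))
C = np.random.default_rng(0).standard_normal((1, n))
sys_full = control.ss(A, B, C, 0)

# Balanced truncation keeps the r states with the largest Hankel singular
# values; the reduced (A, B, C) could then re-initialize the DSS layer.
sys_red = control.balred(sys_full, r, method="truncate")
print(sys_red.nstates)                        # -> 3
```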
|
http://arxiv.org/pdf/2402.15993v3
|
[
"Haruka Ezoe",
"Kazuhiro Sato"
] |
2024-07-01T07:55:40Z
|
2024-02-25T05:22:45Z
|
2407.01036
|
Ranking by Lifts: A Cost-Benefit Approach to Large-Scale A/B Tests
|
A/B testers conducting large-scale tests prioritize lifts and want to be able to control false rejections of the null. This work develops a decision-theoretic framework for maximizing profits subject to false discovery rate (FDR) control. We build an empirical Bayes solution for the problem via the greedy knapsack approach. We derive an oracle rule based on ranking the ratio of expected lifts and the cost of wrong rejections using the local false discovery rate (lfdr) statistic. Our oracle decision rule is valid and optimal for large-scale tests. Further, we establish asymptotic validity for the data-driven procedure and demonstrate finite-sample validity in experimental studies. We also demonstrate the merit of the proposed method over other FDR control methods. Finally, we discuss an application to actual Optimizely experiments.
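A minimal sketch of the greedy ranking rule described here, assuming the expected lifts, costs, and lfdr statistics have already been estimated (function and variable names are illustrative, not the paper's procedure):

```python
import numpy as np

def rank_by_lifts(lift, cost, lfdr, alpha=0.05):
    """Sketch of the greedy knapsack rule: rank tests by expected lift per
    unit cost of a wrong rejection, then accept the longest prefix whose
    running mean lfdr (expected false discovery proportion) stays <= alpha."""
    order = np.argsort(-lift / cost)              # highest benefit/cost first
    running = np.cumsum(lfdr[order]) / np.arange(1, len(order) + 1)
    ok = np.nonzero(running <= alpha)[0]
    k = ok[-1] + 1 if ok.size else 0              # largest valid prefix
    return order[:k]                              # indices of rejected nulls

# Toy usage on synthetic quantities (illustrative assumption only).
rng = np.random.default_rng(2)
lift, cost = rng.uniform(0, 1, 50), rng.uniform(0.5, 2, 50)
lfdr = rng.uniform(0, 0.3, 50)
print(rank_by_lifts(lift, cost, lfdr))
```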
|
http://arxiv.org/pdf/2407.01036v1
|
[
"Pallavi Basu",
"Ron Berman"
] |
2024-07-01T07:40:08Z
|
2024-07-01T07:40:08Z
|
2403.12975
|
Training morphological neural networks with gradient descent: some
theoretical insights
|
Morphological neural networks, or layers, can be a powerful tool to boost the progress in mathematical morphology, either on theoretical aspects such as the representation of complete lattice operators, or in the development of image processing pipelines. However, these architectures turn out to be difficult to train when they count more than a few morphological layers, at least within popular machine learning frameworks which use gradient descent based optimization algorithms. In this paper we investigate the potential and limitations of differentiation based approaches and back-propagation applied to morphological networks, in light of the non-smooth optimization concept of Bouligand derivative. We provide insights and first theoretical guidelines, in particular regarding initialization and learning rates.
|
http://arxiv.org/pdf/2403.12975v2
|
[
"Samy Blusseau"
] |
2024-07-01T07:40:03Z
|
2024-02-05T12:11:15Z
|
2407.01033
|
Neural Networks Trained by Weight Permutation are Universal
Approximators
|
The universal approximation property is fundamental to the success of neural networks, and has traditionally been achieved by training networks without any constraints on their parameters. However, recent experimental research proposed a novel permutation-based training method, which exhibited a desired classification performance without modifying the exact weight values. In this paper, we provide a theoretical guarantee of this permutation training method by proving its ability to guide a ReLU network to approximate one-dimensional continuous functions. Our numerical results further validate this method's efficiency in regression tasks with various initializations. The notable observations during weight permutation suggest that permutation training can provide an innovative tool for describing network learning behavior.
|
http://arxiv.org/pdf/2407.01033v1
|
[
"Yongqiang Cai",
"Gaohang Chen",
"Zhonghua Qiao"
] |
2024-07-01T07:33:00Z
|
2024-07-01T07:33:00Z
|
2407.01032
|
Overcoming Common Flaws in the Evaluation of Selective Classification
Systems
|
Selective Classification, wherein models can reject low-confidence predictions, promises reliable translation of machine-learning based classification systems to real-world scenarios such as clinical diagnostics. While current evaluation of these systems typically assumes fixed working points based on pre-defined rejection thresholds, methodological progress requires benchmarking the general performance of systems akin to the $\mathrm{AUROC}$ in standard classification. In this work, we define 5 requirements for multi-threshold metrics in selective classification regarding task alignment, interpretability, and flexibility, and show how current approaches fail to meet them. We propose the Area under the Generalized Risk Coverage curve ($\mathrm{AUGRC}$), which meets all requirements and can be directly interpreted as the average risk of undetected failures. We empirically demonstrate the relevance of $\mathrm{AUGRC}$ on a comprehensive benchmark spanning 6 data sets and 13 confidence scoring functions. We find that the proposed metric substantially changes metric rankings on 5 out of the 6 data sets.
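Under the stated interpretation of $\mathrm{AUGRC}$ as the average risk of undetected failures, a hedged NumPy sketch might look as follows (an assumed reading of the metric, not the authors' reference implementation):

```python
import numpy as np

def augrc(confidence, is_error):
    """Assumed reading of AUGRC: sweep coverage by accepting cases in order
    of decreasing confidence; at each coverage level the generalized risk is
    the JOINT rate of accepted-and-wrong cases, and AUGRC averages this
    risk over coverage."""
    order = np.argsort(-np.asarray(confidence))
    errors = np.asarray(is_error, dtype=float)[order]
    n = len(errors)
    coverage = np.arange(1, n + 1) / n
    joint_risk = np.cumsum(errors) / n        # P(error AND accepted)
    return np.trapz(joint_risk, coverage)

# Toy usage: synthetic confidence scores (illustrative assumption only).
rng = np.random.default_rng(3)
conf = rng.uniform(size=1000)
err = rng.uniform(size=1000) < 0.3 * (1 - conf)
print(round(augrc(conf, err), 4))
```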
|
http://arxiv.org/pdf/2407.01032v1
|
[
"Jeremias Traub",
"Till J. Bungert",
"Carsten T. Lüth",
"Michael Baumgartner",
"Klaus H. Maier-Hein",
"Lena Maier-Hein",
"Paul F Jaeger"
] |
2024-07-01T07:32:58Z
|
2024-07-01T07:32:58Z
|
2407.01031
|
PocketLLM: Enabling On-Device Fine-Tuning for Personalized LLMs
|
Recent advancements in large language models (LLMs) have indeed showcased their impressive capabilities. On mobile devices, the wealth of valuable, non-public data generated daily holds great promise for locally fine-tuning personalized LLMs, while maintaining privacy through on-device processing. However, the constraints of mobile device resources pose challenges to direct on-device LLM fine-tuning, mainly due to the memory-intensive nature of derivative-based optimization required for saving gradients and optimizer states. To tackle this, we propose employing derivative-free optimization techniques to enable on-device fine-tuning of LLMs, even on memory-limited mobile devices. Empirical results demonstrate that the RoBERTa-large model and OPT-1.3B can be fine-tuned locally on the OPPO Reno 6 smartphone using around 4GB and 6.5GB of memory, respectively, using derivative-free optimization techniques. This highlights the feasibility of on-device LLM fine-tuning on mobile devices, paving the way for personalized LLMs on resource-constrained devices while safeguarding data privacy.
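The memory savings come from replacing backpropagation with forward-only gradient estimates. Below is a hedged sketch of a MeZO/SPSA-style zeroth-order step, which re-seeds the random direction so no perturbation tensor needs to be stored; this is a generic technique sketch, not the paper's exact method:

```python
import torch

def zeroth_order_step(model, loss_fn, batch, lr=1e-6, eps=1e-3, seed=0):
    """MeZO/SPSA-style derivative-free update sketch: perturb all weights
    along one seeded random direction, estimate the directional derivative
    from two forward passes, and update in place; no gradients or optimizer
    states are ever stored."""
    def perturb(scale):
        torch.manual_seed(seed)   # regenerate the SAME direction each time
        for p in model.parameters():
            p.data.add_(scale * eps * torch.randn_like(p))

    with torch.no_grad():
        perturb(+1); loss_plus = loss_fn(model, batch)    # weights at +eps
        perturb(-2); loss_minus = loss_fn(model, batch)   # weights at -eps
        perturb(+1)                                       # restore weights
        g = (loss_plus - loss_minus) / (2 * eps)          # projected gradient
        torch.manual_seed(seed)
        for p in model.parameters():
            p.data.add_(-lr * g * torch.randn_like(p))
```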
|
http://arxiv.org/pdf/2407.01031v1
|
[
"Dan Peng",
"Zhihui Fu",
"Jun Wang"
] |
2024-07-01T07:26:56Z
|
2024-07-01T07:26:56Z
|
2407.01023
|
DistML.js: Installation-free Distributed Deep Learning Framework for Web
Browsers
|
We present "DistML.js", a library designed for training and inference of machine learning models within web browsers. Not only does DistML.js facilitate model training on local devices, but it also supports distributed learning through communication with servers. Its design and define-by-run API for deep learning model construction resemble PyTorch, thereby reducing the learning curve for prototyping. Matrix computations involved in model training and inference are executed on the backend utilizing WebGL, enabling high-speed calculations. We provide a comprehensive explanation of DistML.js's design, API, and implementation, alongside practical applications including data parallelism in learning. The source code is publicly available at https://github.com/mil-tokyo/distmljs.
|
http://arxiv.org/pdf/2407.01023v1
|
[
"Masatoshi Hidaka",
"Tomohiro Hashimoto",
"Yuto Nishizawa",
"Tatsuya Harada"
] |
2024-07-01T07:13:14Z
|
2024-07-01T07:13:14Z
|
2406.18380
|
KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning
|
In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs typically consist of a sequence of neighborhood aggregation (a.k.a., message passing) layers. Within each of these layers, the representation of each node is updated from an aggregation and transformation of its neighbours' representations at the previous layer. The upper bound for the expressive power of message passing GNNs was reached through the use of MLPs as a transformation, due to their universal approximation capabilities. However, MLPs suffer from well-known limitations, which recently motivated the introduction of Kolmogorov-Arnold Networks (KANs). KANs rely on the Kolmogorov-Arnold representation theorem, rendering them a promising alternative to MLPs. In this work, we compare the performance of KANs against that of MLPs in graph learning tasks. We perform extensive experiments on node classification, graph classification and graph regression datasets. Our preliminary results indicate that while KANs are on par with MLPs in classification tasks, they seem to have a clear advantage in the graph regression tasks. Code is available at https://github.com/RomanBresson/KAGNN.
|
http://arxiv.org/pdf/2406.18380v2
|
[
"Roman Bresson",
"Giannis Nikolentzos",
"George Panagopoulos",
"Michail Chatzianastasis",
"Jun Pang",
"Michalis Vazirgiannis"
] |
2024-07-01T07:13:08Z
|
2024-06-26T14:21:21Z
|
2402.05162
|
Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank
Modifications
|
Large language models (LLMs) show inherent brittleness in their safety mechanisms, as evidenced by their susceptibility to jailbreaking and even non-malicious fine-tuning. This study explores this brittleness of safety alignment by leveraging pruning and low-rank modifications. We develop methods to identify critical regions that are vital for safety guardrails, and that are disentangled from utility-relevant regions at both the neuron and rank levels. Surprisingly, the isolated regions we find are sparse, comprising about $3\%$ at the parameter level and $2.5\%$ at the rank level. Removing these regions compromises safety without significantly impacting utility, corroborating the inherent brittleness of the model's safety mechanisms. Moreover, we show that LLMs remain vulnerable to low-cost fine-tuning attacks even when modifications to the safety-critical regions are restricted. These findings underscore the urgent need for more robust safety strategies in LLMs.
|
http://arxiv.org/pdf/2402.05162v3
|
[
"Boyi Wei",
"Kaixuan Huang",
"Yangsibo Huang",
"Tinghao Xie",
"Xiangyu Qi",
"Mengzhou Xia",
"Prateek Mittal",
"Mengdi Wang",
"Peter Henderson"
] |
2024-07-01T07:11:17Z
|
2024-02-07T18:34:38Z
|
2407.01015
|
Bayesian Entropy Neural Networks for Physics-Aware Prediction
|
This paper addresses the need for deep learning models to integrate well-defined constraints into their outputs, driven by their application in surrogate models, learning with limited data and partial information, and scenarios requiring flexible model behavior to incorporate non-data sample information. We introduce Bayesian Entropy Neural Networks (BENN), a framework grounded in Maximum Entropy (MaxEnt) principles, designed to impose constraints on Bayesian Neural Network (BNN) predictions. BENN is capable of constraining not only the predicted values but also their derivatives and variances, ensuring a more robust and reliable model output. To achieve simultaneous uncertainty quantification and constraint satisfaction, we employ the method of multipliers approach. This allows for the concurrent estimation of neural network parameters and the Lagrangian multipliers associated with the constraints. Our experiments, spanning diverse applications such as beam deflection modeling and microstructure generation, demonstrate the effectiveness of BENN. The results highlight significant improvements over traditional BNNs and showcase competitive performance relative to contemporary constrained deep learning methods.
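The method of multipliers mentioned here is a standard augmented-Lagrangian scheme; the PyTorch sketch below shows one generic constrained training step (an illustration of the technique, not the BENN implementation):

```python
import torch

def multiplier_training_step(model, x, y, constraint_fn, lam, rho, opt):
    """Generic method-of-multipliers step: minimize
    data_loss + lam * c + (rho/2) * c^2 over the network parameters, then
    take a dual ascent step lam <- lam + rho * c on the multiplier."""
    opt.zero_grad()
    pred = model(x)
    c = constraint_fn(pred)                 # constraint violation, want c == 0
    loss = (torch.nn.functional.mse_loss(pred, y)
            + lam * c + 0.5 * rho * c ** 2)
    loss.backward()
    opt.step()
    with torch.no_grad():
        lam = lam + rho * c.detach()        # dual ascent on the multiplier
    return loss.item(), lam
```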
|
http://arxiv.org/pdf/2407.01015v1
|
[
"Rahul Rathnakumar",
"Jiayu Huang",
"Hao Yan",
"Yongming Liu"
] |
2024-07-01T07:00:44Z
|
2024-07-01T07:00:44Z
|
2407.01649
|
FAFE: Immune Complex Modeling with Geodesic Distance Loss on Noisy Group
Frames
|
Despite the striking success of general protein folding models such as AlphaFold2 (AF2; Jumper et al., 2021), the accurate computational modeling of antibody-antigen complexes remains a challenging task. In this paper, we first analyze AF2's primary loss function, known as the Frame Aligned Point Error (FAPE), and raise a previously overlooked issue: FAPE tends to suffer from vanishing gradients on high-rotational-error targets. To address this fundamental limitation, we propose a novel geodesic loss called Frame Aligned Frame Error (FAFE, denoted as F2E to distinguish it from FAPE), which enables the model to better optimize both the rotational and translational errors between two frames. We then prove that F2E can be reformulated as a group-aware geodesic loss, which translates the optimization of the residue-to-residue error to optimizing group-to-group geodesic frame distance. By fine-tuning AF2 with our proposed new loss function, we attain a correct rate of 52.3% (DockQ $>$ 0.23) on an evaluation set and a 43.8% correct rate on a subset with low homology, substantial improvements over AF2 of 182% and 100%, respectively.
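The core quantity behind such a geodesic loss is the angular distance between rotation frames. Here is a hedged PyTorch sketch of the generic SO(3) geodesic (not the paper's full F2E loss, which also handles translations and frame grouping):

```python
import torch

def rotation_geodesic(R1, R2):
    """Geodesic distance on SO(3): the angle of the relative rotation
    R1^T R2. Unlike pointwise L2 errors, it stays informative for targets
    with large rotational error, the FAPE failure mode motivating F2E."""
    rel = R1.transpose(-1, -2) @ R2
    trace = torch.diagonal(rel, dim1=-2, dim2=-1).sum(-1)
    cos = ((trace - 1) / 2).clamp(-1 + 1e-7, 1 - 1e-7)  # keep acos stable
    return torch.acos(cos)   # rotation angle in [0, pi]
```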
|
http://arxiv.org/pdf/2407.01649v1
|
[
"Ruidong Wu",
"Ruihan Guo",
"Rui Wang",
"Shitong Luo",
"Yue Xu",
"Jiahan Li",
"Jianzhu Ma",
"Qiang Liu",
"Yunan Luo",
"Jian Peng"
] |
2024-07-01T06:47:21Z
|
2024-07-01T06:47:21Z
|
2404.09562
|
σ-GPTs: A New Approach to Autoregressive Models
|
Autoregressive models, such as the GPT family, use a fixed order, usually left-to-right, to generate sequences. However, this is not a necessity. In this paper, we challenge this assumption and show that by simply adding a positional encoding for the output, this order can be modulated on-the-fly per-sample, which offers key advantageous properties. It allows for the sampling of, and conditioning on, arbitrary subsets of tokens, and it also allows dynamically sampling multiple tokens in one shot according to a rejection strategy, leading to a sub-linear number of model evaluations. We evaluate our method across various domains, including language modeling, path-solving, and aircraft vertical rate prediction, decreasing the number of steps required for generation by an order of magnitude.
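A hedged sketch of the input scheme this implies: alongside the usual token and position embeddings, each step also encodes the position of the token to be predicted next, so the decoding order can be shuffled per sample (names and sizes are assumptions, not the released model):

```python
import torch
import torch.nn as nn

class ShuffledOrderEmbedding(nn.Module):
    """Sketch of a sigma-GPT-style input: each step encodes the token, the
    position it sits at, AND the position of the token to predict next, so
    a transformer decoder can generate in an arbitrary per-sample order."""
    def __init__(self, vocab, max_len, d_model):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos_in = nn.Embedding(max_len, d_model)   # where this token is
        self.pos_out = nn.Embedding(max_len, d_model)  # where the next one goes

    def forward(self, tokens, positions, target_positions):
        # All inputs: (batch, seq), given in an arbitrary shuffled order.
        return (self.tok(tokens) + self.pos_in(positions)
                + self.pos_out(target_positions))
```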
|
http://arxiv.org/pdf/2404.09562v2
|
[
"Arnaud Pannatier",
"Evann Courdier",
"François Fleuret"
] |
2024-07-01T06:46:36Z
|
2024-04-15T08:22:47Z
|
2407.01005
|
MARLP: Time-series Forecasting Control for Agricultural Managed Aquifer
Recharge
|
The rapid decline in groundwater around the world poses a significant challenge to sustainable agriculture. To address this issue, agricultural managed aquifer recharge (Ag-MAR) is proposed to recharge the aquifer by artificially flooding agricultural lands using surface water. Ag-MAR requires a carefully selected flooding schedule to avoid affecting the oxygen absorption of crop roots. However, current Ag-MAR scheduling does not take into account complex environmental factors such as weather and soil oxygen, resulting in crop damage and insufficient recharging amounts. This paper proposes MARLP, the first end-to-end data-driven control system for Ag-MAR. We first formulate Ag-MAR as an optimization problem. To that end, we analyze four-year in-field datasets, which reveal the multi-periodicity feature of the soil oxygen level trends and the opportunity to use external weather forecasts and flooding proposals as exogenous clues for soil oxygen prediction. Then, we design a two-stage forecasting framework. In the first stage, it extracts both the cross-variate dependency and the periodic patterns from historical data to conduct preliminary forecasting. In the second stage, it uses weather-soil and flooding-soil causality to facilitate an accurate prediction of soil oxygen levels. Finally, we conduct model predictive control (MPC) for Ag-MAR flooding. To address the challenge of large action spaces, we devise a heuristic planning module to reduce the number of flooding proposals to enable the search for optimal solutions. Real-world experiments show that MARLP reduces the oxygen deficit ratio by 86.8% while improving the recharging amount in unit time by 35.8%, compared with the previous four years.
|
http://arxiv.org/abs/2407.01005v1
|
[
"Yuning Chen",
"Kang Yang",
"Zhiyu An",
"Brady Holder",
"Luke Paloutzian",
"Khaled Bali",
"Wan Du"
] |
2024-07-01T06:36:40Z
|
2024-07-01T06:36:40Z
|
2407.01004
|
CURLS: Causal Rule Learning for Subgroups with Significant Treatment
Effect
|
In causal inference, estimating heterogeneous treatment effects (HTE) is critical for identifying how different subgroups respond to interventions, with broad applications in fields such as precision medicine and personalized advertising. Although HTE estimation methods aim to improve accuracy, how to provide explicit subgroup descriptions remains unclear, hindering data interpretation and strategic intervention management. In this paper, we propose CURLS, a novel rule learning method leveraging HTE, which can effectively describe subgroups with significant treatment effects. Specifically, we frame causal rule learning as a discrete optimization problem, finely balancing treatment effect with variance and considering the rule interpretability. We design an iterative procedure based on the minorize-maximization algorithm and solve a submodular lower bound as an approximation for the original. Quantitative experiments and qualitative case studies verify that compared with state-of-the-art methods, CURLS can find subgroups where the estimated and true effects are 16.1% and 13.8% higher and the variance is 12.0% smaller, while maintaining similar or better estimation accuracy and rule interpretability. Code is available at https://osf.io/zwp2k/.
|
http://arxiv.org/pdf/2407.01004v1
|
[
"Jiehui Zhou",
"Linxiao Yang",
"Xingyu Liu",
"Xinyue Gu",
"Liang Sun",
"Wei Chen"
] |
2024-07-01T06:36:27Z
|
2024-07-01T06:36:27Z
|
2407.01001
|
Flood Prediction Using Classical and Quantum Machine Learning Models
|
This study investigates the potential of quantum machine learning to improve flood forecasting. We focus on daily flood events along Germany's Wupper River in 2023. Our approach combines classical machine learning techniques with QML techniques. This hybrid model leverages quantum properties like superposition and entanglement to achieve better accuracy and efficiency. Classical and QML models are compared based on training time, accuracy, and scalability. Results show that QML models offer competitive training times and improved prediction accuracy. This research signifies a step towards utilizing quantum technologies for climate change adaptation. We emphasize collaboration and continuous innovation to implement this model in real-world flood management, ultimately enhancing global resilience against floods.
|
http://arxiv.org/pdf/2407.01001v1
|
[
"Marek Grzesiak",
"Param Thakkar"
] |
2024-07-01T06:31:41Z
|
2024-07-01T06:31:41Z
|
2407.00996
|
Can Small Language Models Learn, Unlearn, and Retain Noise Patterns?
|
Small Language Models (SLMs) are generally considered to be more compact versions of large language models (LLMs), typically having fewer than 7 billion parameters. This study investigates the ability of small language models to learn, retain, and subsequently eliminate noise that is typically not found on the internet, where most pretraining datasets are sourced. For this, four pre-trained SLMs were utilized: Olmo 1B, Qwen1.5 1.8B, Gemma 2B, and Phi2 2.7B. The models were instruction-tuned without noise and tested for task execution with in-context learning. Afterward, noise patterns were introduced to evaluate the models' learning and unlearning capabilities. We evaluated the models' performance at various training levels. Phi consistently excelled with word-level noise but performed the worst with character-level noise. Despite being the smallest with approximately 1 billion parameters, Olmo performed consistently well on tasks.
|
http://arxiv.org/pdf/2407.00996v1
|
[
"Nicy Scaria",
"Silvester John Joseph Kennedy",
"Deepak Subramani"
] |
2024-07-01T06:22:38Z
|
2024-07-01T06:22:38Z
|
2407.01648
|
Aligning Target-Aware Molecule Diffusion Models with Exact Energy
Optimization
|
Generating ligand molecules for specific protein targets, known as structure-based drug design, is a fundamental problem in therapeutics development and biological discovery. Recently, target-aware generative models, especially diffusion models, have shown great promise in modeling protein-ligand interactions and generating candidate drugs. However, existing models primarily focus on learning the chemical distribution of all drug candidates, which lacks effective steerability on the chemical quality of model generations. In this paper, we propose a novel and general alignment framework to align pretrained target diffusion models with preferred functional properties, named AliDiff. AliDiff shifts the target-conditioned chemical distribution towards regions with higher binding affinity and structural rationality, specified by user-defined reward functions, via the preference optimization approach. To avoid the overfitting problem in common preference optimization objectives, we further develop an improved Exact Energy Preference Optimization method to yield an exact and efficient alignment of the diffusion models, and provide the closed-form expression for the converged distribution. Empirical studies on the CrossDocked2020 benchmark show that AliDiff can generate molecules with state-of-the-art binding energies with up to -7.07 Avg. Vina Score, while maintaining strong molecular properties.
|
http://arxiv.org/pdf/2407.01648v1
|
[
"Siyi Gu",
"Minkai Xu",
"Alexander Powers",
"Weili Nie",
"Tomas Geffner",
"Karsten Kreis",
"Jure Leskovec",
"Arash Vahdat",
"Stefano Ermon"
] |
2024-07-01T06:10:29Z
|
2024-07-01T06:10:29Z
|
2402.05330
|
Classification under Nuisance Parameters and Generalized Label Shift in
Likelihood-Free Inference
|
An open scientific challenge is how to classify events with reliable measures of uncertainty, when we have a mechanistic model of the data-generating process but the distribution over both labels and latent nuisance parameters is different between train and target data. We refer to this type of distributional shift as generalized label shift (GLS). Direct classification using observed data $\mathbf{X}$ as covariates leads to biased predictions and invalid uncertainty estimates of labels $Y$. We overcome these biases by proposing a new method for robust uncertainty quantification that casts classification as a hypothesis testing problem under nuisance parameters. The key idea is to estimate the classifier's receiver operating characteristic (ROC) across the entire nuisance parameter space, which allows us to devise cutoffs that are invariant under GLS. Our method effectively endows a pre-trained classifier with domain adaptation capabilities and returns valid prediction sets while maintaining high power. We demonstrate its performance on two challenging scientific problems in biology and astroparticle physics with data from realistic mechanistic models.
|
http://arxiv.org/pdf/2402.05330v2
|
[
"Luca Masserano",
"Alex Shen",
"Michele Doro",
"Tommaso Dorigo",
"Rafael Izbicki",
"Ann B. Lee"
] |
2024-07-01T05:51:25Z
|
2024-02-08T00:12:18Z
|
2406.18665
|
RouteLLM: Learning to Route LLMs with Preference Data
|
Large language models (LLMs) exhibit impressive capabilities across a wide range of tasks, yet the choice of which model to use often involves a trade-off between performance and cost. More powerful models, though effective, come with higher expenses, while less capable models are more cost-effective. To address this dilemma, we propose several efficient router models that dynamically select between a stronger and a weaker LLM during inference, aiming to optimize the balance between cost and response quality. We develop a training framework for these routers leveraging human preference data and data augmentation techniques to enhance performance. Our evaluation on widely-recognized benchmarks shows that our approach significantly reduces costs, by over 2 times in certain cases, without compromising the quality of responses. Interestingly, our router models also demonstrate significant transfer learning capabilities, maintaining their performance even when the strong and weak models are changed at test time. This highlights the potential of these routers to provide a cost-effective yet high-performance solution for deploying LLMs.
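A minimal sketch of the routing idea follows: a learned scorer estimates how likely a query is to need the stronger model, and a cost/quality threshold decides the route. The bag-of-words logistic scorer and the tiny training set here are illustrative assumptions; the paper's routers are trained on human preference data.

```python
# Toy router: predict "needs strong model" from query text, then route.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

queries = ["what is the capital of France", "prove the Riemann hypothesis",
           "convert 10 km to miles", "derive the Navier-Stokes equations"]
labels = [0, 1, 0, 1]  # 1 = route to the strong (expensive) model

vec = CountVectorizer().fit(queries)
router = LogisticRegression().fit(vec.transform(queries), labels)

def route(query: str, threshold: float = 0.5) -> str:
    p_hard = router.predict_proba(vec.transform([query]))[0, 1]
    return "strong-llm" if p_hard >= threshold else "weak-llm"

print(route("what is the capital of Spain"))   # likely the cheap model
print(route("prove this theorem rigorously"))  # likely the strong model
```

Raising the threshold trades response quality for lower cost, which is the dial the paper's evaluation sweeps.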
|
http://arxiv.org/pdf/2406.18665v2
|
[
"Isaac Ong",
"Amjad Almahairi",
"Vincent Wu",
"Wei-Lin Chiang",
"Tianhao Wu",
"Joseph E. Gonzalez",
"M Waleed Kadous",
"Ion Stoica"
] |
2024-07-01T05:38:08Z
|
2024-06-26T18:10:22Z
|
2407.00978
|
Hybrid RAG-empowered Multi-modal LLM for Secure Healthcare Data
Management: A Diffusion-based Contract Theory Approach
|
Secure data management and effective data sharing have become paramount in the rapidly evolving healthcare landscape. The advancement of generative artificial intelligence has positioned Multi-modal Large Language Models (MLLMs) as crucial tools for managing healthcare data. MLLMs can support multi-modal inputs and generate diverse types of content by leveraging large-scale training on vast amounts of multi-modal data. However, critical challenges persist in developing medical MLLMs, including healthcare data security and freshness issues, affecting the output quality of MLLMs. In this paper, we propose a hybrid Retrieval-Augmented Generation (RAG)-empowered medical MLLMs framework for healthcare data management. This framework leverages a hierarchical cross-chain architecture to facilitate secure data training. Moreover, it enhances the output quality of MLLMs through hybrid RAG, which employs multi-modal metrics to filter various unimodal RAG results and incorporates these retrieval results as additional inputs to MLLMs. Additionally, we employ age of information to indirectly evaluate the data freshness impact of MLLMs and utilize contract theory to incentivize healthcare data holders to share fresh data, mitigating information asymmetry in data sharing. Finally, we utilize a generative diffusion model-based reinforcement learning algorithm to identify the optimal contract for efficient data sharing. Numerical results demonstrate the effectiveness of the proposed schemes, which achieve secure and efficient healthcare data management.
|
http://arxiv.org/pdf/2407.00978v1
|
[
"Cheng Su",
"Jinbo Wen",
"Jiawen Kang",
"Yonghua Wang",
"Hudan Pan",
"M. Shamim Hossain"
] |
2024-07-01T05:28:40Z
|
2024-07-01T05:28:40Z
|
2407.01647
|
Optimizing PM2.5 Forecasting Accuracy with Hybrid Meta-Heuristic and
Machine Learning Models
|
Timely alerts about hazardous air pollutants are crucial for public health. However, existing forecasting models often overlook key factors like baseline parameters and missing data, limiting their accuracy. This study introduces a hybrid approach to address these issues, focusing on forecasting hourly PM2.5 concentrations using Support Vector Regression (SVR). Meta-heuristic algorithms, Grey Wolf Optimization (GWO) and Particle Swarm Optimization (PSO), optimize SVR Hyper-parameters "C" and "Gamma" to enhance prediction accuracy. Evaluation metrics include R-squared (R2), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). Results show significant improvements with PSO-SVR (R2: 0.9401, RMSE: 0.2390, MAE: 0.1368) and GWO-SVR (R2: 0.9408, RMSE: 0.2376, MAE: 0.1373), indicating robust and accurate models suitable for similar research applications.
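The optimization loop can be sketched in a few lines: a small particle swarm searches log-scaled C and Gamma to minimize validation RMSE. The swarm constants, search box, and synthetic data below are illustrative assumptions, not the study's settings.

```python
# Minimal PSO over SVR hyper-parameters (log10 C, log10 gamma) on synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=300)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def rmse(p):  # p = [log10 C, log10 gamma]
    model = SVR(C=10 ** p[0], gamma=10 ** p[1]).fit(Xtr, ytr)
    return mean_squared_error(yva, model.predict(Xva)) ** 0.5

n, lo, hi = 10, np.array([-2.0, -4.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([rmse(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(15):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([rmse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]
print("best C=%.3g, gamma=%.3g, RMSE=%.4f"
      % (10 ** gbest[0], 10 ** gbest[1], pbest_f.min()))
```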
|
http://arxiv.org/pdf/2407.01647v1
|
[
"Parviz Ghafariasl",
"Masoomeh Zeinalnezhad",
"Amir Ahmadishokooh"
] |
2024-07-01T05:24:19Z
|
2024-07-01T05:24:19Z
|
2209.13836
|
Mutual Information Assisted Ensemble Recommender System for Identifying
Critical Risk Factors in Healthcare Prognosis
|
Purpose: Health recommenders act as important decision support systems, aiding patients and medical professionals in taking actions that lead to patients' well-being. These systems extract the information which may be of particular relevance to the end-user, helping them in making appropriate decisions. The present study proposes a feature recommender, as a part of a disease management system, that identifies and recommends the most important risk factors for an illness. Methods: A novel mutual information and ensemble-based feature ranking approach for identifying critical risk factors in healthcare prognosis is proposed. Results: To establish the effectiveness of the proposed method, experiments have been conducted on four benchmark datasets of diverse diseases (clear cell renal cell carcinoma (ccRCC), chronic kidney disease, Indian liver patient, and cervical cancer risk factors). The performance of the proposed recommender is compared with four state-of-the-art methods using recommender systems' performance metrics like average precision@K, precision@K, recall@K, F1@K, reciprocal rank@K. The method is able to recommend all relevant critical risk factors for ccRCC. It also attains a higher accuracy (96.6% and 98.6% using support vector machine and neural network, respectively) for ccRCC staging with a reduced feature set as compared to existing methods. Moreover, the top two features recommended using the proposed method with ccRCC, viz. size of tumor and metastasis status, are medically validated from the existing TNM system. Results are also found to be superior for the other three datasets. Conclusion: The proposed recommender can identify and recommend risk factors that have the most discriminating power for detecting diseases.
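The core recipe can be sketched as: score every feature under several criteria, convert scores to ranks, and aggregate the ranks into one recommendation list. The specific scorers and the mean-rank aggregation below are illustrative assumptions standing in for the paper's mutual-information-assisted ensemble.

```python
# Ensemble feature ranking: mutual information plus two other scorers, mean-rank fused.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif, f_classif
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
scores = [
    mutual_info_classif(X, y, random_state=0),
    f_classif(X, y)[0],
    RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
]
# Higher score -> better (lower) rank; fuse by mean rank across scorers.
ranks = np.mean([np.argsort(np.argsort(-s)) for s in scores], axis=0)
top_k = np.argsort(ranks)[:5]
print("recommended risk-factor indices:", top_k)
```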
|
http://arxiv.org/pdf/2209.13836v3
|
[
"Abhishek Dey",
"Debayan Goswami",
"Rahul Roy",
"Susmita Ghosh",
"Yu Shrike Zhang",
"Jonathan H. Chan"
] |
2024-07-01T05:06:56Z
|
2022-09-28T04:54:37Z
|
2405.01041
|
Efficient and Flexible Method for Reducing Moderate-size Deep Neural
Networks with Condensation
|
Neural networks have been extensively applied to a variety of tasks, achieving astounding results. Applying neural networks in the scientific field is an important research direction that is gaining increasing attention. In scientific applications, the scale of neural networks is generally moderate-size, mainly to ensure the speed of inference during application. Additionally, comparing neural networks to traditional algorithms in scientific applications is inevitable. These applications often require rapid computations, making the reduction of neural network sizes increasingly important. Existing work has found that the powerful capabilities of neural networks are primarily due to their non-linearity. Theoretical work has discovered that under strong non-linearity, neurons in the same layer tend to behave similarly, a phenomenon known as condensation. Condensation offers an opportunity to reduce the scale of neural networks to a smaller subnetwork with similar performance. In this article, we propose a condensation reduction algorithm to verify the feasibility of this idea in practical problems. Our reduction method can currently be applied to both fully connected networks and convolutional networks, achieving positive results. In complex combustion acceleration tasks, we reduced the size of the neural network to 41.7% of its original scale while maintaining prediction accuracy. In the CIFAR10 image classification task, we reduced the network size to 11.5% of the original scale, still maintaining a satisfactory validation accuracy. Our method can be applied to most trained neural networks, reducing computational pressure and improving inference speed.
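In miniature, condensation-style reduction of one fully connected layer looks like the sketch below: hidden neurons whose incoming weight vectors are nearly identical are grouped, one representative is kept, and the outgoing weights of each group are summed so the layer's function is approximately preserved. The cosine threshold and the merge rule are illustrative assumptions, not the paper's algorithm.

```python
# Merge hidden neurons with nearly identical incoming weights; sum outgoing weights.
import numpy as np

def merge_similar_neurons(W1, b1, W2, tau=0.99):
    """W1: (h, d) incoming weights, b1: (h,) biases, W2: (o, h) outgoing weights."""
    Wn = W1 / np.linalg.norm(W1, axis=1, keepdims=True)
    used = np.zeros(len(W1), dtype=bool)
    keep, merged_cols = [], []
    for i in range(len(W1)):
        if used[i]:
            continue
        group = np.where((Wn @ Wn[i] > tau) & ~used)[0]  # near-parallel neurons
        used[group] = True
        keep.append(i)
        merged_cols.append(W2[:, group].sum(axis=1))     # sum outgoing weights
    return W1[keep], b1[keep], np.stack(merged_cols, axis=1)

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
W1 = np.vstack([base, base + 1e-3 * rng.normal(size=(4, 8))])  # near-duplicates
b1, W2 = np.zeros(8), rng.normal(size=(3, 8))
W1s, b1s, W2s = merge_similar_neurons(W1, b1, W2)
print("hidden neurons: 8 ->", len(W1s))
```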
|
http://arxiv.org/pdf/2405.01041v2
|
[
"Tianyi Chen",
"Zhi-Qin John Xu"
] |
2024-07-01T05:06:32Z
|
2024-05-02T06:53:40Z
|
2211.13314
|
CoMadOut -- A Robust Outlier Detection Algorithm based on CoMAD
|
Unsupervised learning methods are well established in the area of anomaly detection and achieve state-of-the-art performance on outlier datasets. Outliers play a significant role, since they bear the potential to distort the predictions of a machine learning algorithm on a given dataset. Especially among PCA-based methods, outliers have an additional destructive potential regarding the result: they may not only distort the orientation and translation of the principal components, but also make it more complicated to detect outliers. To address this problem, we propose the robust outlier detection algorithm CoMadOut, which satisfies two required properties: (1) being robust towards outliers and (2) detecting them. Our CoMadOut outlier detection variants using comedian PCA define, dependent on its variant, an inlier region with a robust noise margin by measures of in-distribution (variant CMO) and optimized scores by measures of out-of-distribution (variants CMO*), e.g. kurtosis-weighting by CMO+k. These measures allow distribution-based outlier scoring for each principal component, and thus, an appropriate alignment of the degree of outlierness between normal and abnormal instances. Experiments comparing CoMadOut with traditional, deep and other comparable robust outlier detection methods showed that the performance of the introduced CoMadOut approach is competitive with well-established methods in terms of average precision (AP), area under the precision recall curve (AUPRC) and area under the receiver operating characteristic (AUROC) curve. In summary, our approach can be seen as a robust alternative for outlier detection tasks.
|
http://arxiv.org/abs/2211.13314v2
|
[
"Andreas Lohrer",
"Daniyal Kazempour",
"Maximilian Hünemörder",
"Peer Kröger"
] |
2024-07-01T05:05:59Z
|
2022-11-23T21:33:34Z
|
2407.00968
|
How Does Overparameterization Affect Features?
|
Overparameterization, the condition where models have more parameters than necessary to fit their training loss, is a crucial factor for the success of deep learning. However, the characteristics of the features learned by overparameterized networks are not well understood. In this work, we explore this question by comparing models with the same architecture but different widths. We first examine the expressivity of the features of these models, and show that the feature space of overparameterized networks cannot be spanned by concatenating many underparameterized features, and vice versa. This reveals that both overparameterized and underparameterized networks acquire some distinctive features. We then evaluate the performance of these models, and find that overparameterized networks outperform underparameterized networks, even when many of the latter are concatenated. We corroborate these findings using a VGG-16 and ResNet18 on CIFAR-10 and a Transformer on the MNLI classification dataset. Finally, we propose a toy setting to explain how overparameterized networks can learn some important features that the underparameterized networks cannot learn.
|
http://arxiv.org/pdf/2407.00968v1
|
[
"Ahmet Cagri Duzgun",
"Samy Jelassi",
"Yuanzhi Li"
] |
2024-07-01T05:01:03Z
|
2024-07-01T05:01:03Z
|
2407.00966
|
Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension
|
In traditional models of supervised learning, the goal of a learner -- given examples from an arbitrary joint distribution on $\mathbb{R}^d \times \{\pm 1\}$ -- is to output a hypothesis that is competitive (to within $\epsilon$) with the best-fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes, we introduce a smoothed-analysis framework that requires a learner to compete only with the best classifier that is robust to small random Gaussian perturbation. This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka multi-index model) and (2) has a bounded Gaussian surface area. This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians. Surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of $k$-halfspaces in time $k^{\mathrm{poly}(\frac{\log k}{\epsilon \gamma})}$ where $\gamma$ is the margin parameter. Before our work, the best-known runtime was exponential in $k$ (Arriaga and Vempala, 1999).
|
http://arxiv.org/pdf/2407.00966v1
|
[
"Gautam Chandrasekaran",
"Adam Klivans",
"Vasilis Kontonis",
"Raghu Meka",
"Konstantinos Stavropoulos"
] |
2024-07-01T04:58:36Z
|
2024-07-01T04:58:36Z
|
2406.19549
|
ASCENT: Amplifying Power Side-Channel Resilience via Learning &
Monte-Carlo Tree Search
|
Power side-channel (PSC) analysis is pivotal for securing cryptographic hardware. Prior art focused on securing gate-level netlists obtained as-is from chip design automation, neglecting all the complexities and potential side-effects for security arising from the design automation process. That is, automation traditionally prioritizes power, performance, and area (PPA), sidelining security. We propose a "security-first" approach, refining the logic synthesis stage to enhance the overall resilience of PSC countermeasures. We introduce ASCENT, a learning-and-search-based framework that (i) drastically reduces the time for post-design PSC evaluation and (ii) explores the security-vs-PPA design space. Thus, ASCENT enables an efficient exploration of a large number of candidate netlists, leading to an improvement in PSC resilience compared to regular PPA-optimized netlists. ASCENT is up to 120x faster than traditional PSC analysis and yields a 3.11x improvement for PSC resilience of state-of-the-art PSC countermeasures.
|
http://arxiv.org/pdf/2406.19549v2
|
[
"Jitendra Bhandari",
"Animesh Basak Chowdhury",
"Mohammed Nabeel",
"Ozgur Sinanoglu",
"Siddharth Garg",
"Ramesh Karri",
"Johann Knechtel"
] |
2024-07-01T04:52:56Z
|
2024-06-27T22:01:00Z
|
2406.04824
|
FunBO: Discovering Acquisition Functions for Bayesian Optimization with
FunSearch
|
The sample efficiency of Bayesian optimization algorithms depends on carefully crafted acquisition functions (AFs) guiding the sequential collection of function evaluations. The best-performing AF can vary significantly across optimization problems, often requiring ad-hoc and problem-specific choices. This work tackles the challenge of designing novel AFs that perform well across a variety of experimental settings. Based on FunSearch, a recent work using Large Language Models (LLMs) for discovery in mathematical sciences, we propose FunBO, an LLM-based method that can be used to learn new AFs written in computer code by leveraging access to a limited number of evaluations for a set of objective functions. We provide the analytic expression of all discovered AFs and evaluate them on various global optimization benchmarks and hyperparameter optimization tasks. We show how FunBO identifies AFs that generalize well in and out of the training distribution of functions, thus outperforming established general-purpose AFs and achieving competitive performance against AFs that are customized to specific function types and are learned via transfer-learning algorithms.
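Since FunBO's search space is acquisition functions written as code, it helps to see one in that form. Below is the classic Expected Improvement in the style a discovered AF would take; this is standard EI, not one of the paper's discovered functions.

```python
# Expected Improvement (for minimization) given a GP posterior at candidate points.
import numpy as np
from scipy.stats import norm

def acquisition(mu, sigma, best_f, xi=0.01):
    """FunBO would evolve the body of a function with this signature."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best_f - mu - xi) / sigma
    return (best_f - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.array([0.2, 0.5, -0.1])
sigma = np.array([0.1, 0.4, 0.05])
print(acquisition(mu, sigma, best_f=0.0))  # pick the argmax as the next query point
```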
|
http://arxiv.org/pdf/2406.04824v2
|
[
"Virginia Aglietti",
"Ira Ktena",
"Jessica Schrouff",
"Eleni Sgouritsa",
"Francisco J. R. Ruiz",
"Alan Malek",
"Alexis Bellot",
"Silvia Chiappa"
] |
2024-07-01T04:48:24Z
|
2024-06-07T10:49:59Z
|
2403.17480
|
Capacity Provisioning Motivated Online Non-Convex Optimization Problem
with Memory and Switching Cost
|
An online non-convex optimization problem is considered where the goal is to minimize the flow time (total delay) of a set of jobs by modulating the number of active servers, but with a switching cost associated with changing the number of active servers over time. Each job can be processed by at most one fixed speed server at any time. Compared to the usual online convex optimization (OCO) problem with switching cost, the objective function considered is non-convex and more importantly, at each time, it depends on all past decisions and not just the present one. Both worst-case and stochastic inputs are considered; for both cases, competitive algorithms are derived.
|
http://arxiv.org/pdf/2403.17480v2
|
[
"Rahul Vaze",
"Jayakrishnan Nair"
] |
2024-07-01T04:35:14Z
|
2024-03-26T08:22:09Z
|
2407.00958
|
Universal Approximation Theory: The basic theory for large language
models
|
Language models have emerged as a critical area of focus in artificial intelligence, particularly with the introduction of groundbreaking innovations like ChatGPT. Large-scale Transformer networks have quickly become the leading approach for advancing natural language processing algorithms. Built on the Transformer architecture, these models enable interactions that closely mimic human communication and, equipped with extensive knowledge, can even assist in guiding human tasks. Despite their impressive capabilities and growing complexity, a key question remains: the theoretical foundations of large language models (LLMs). What makes the Transformer so effective for powering intelligent language applications, such as translation and coding? What underlies LLMs' ability for In-Context Learning (ICL)? How does the LoRA scheme enhance the fine-tuning of LLMs? And what supports the practicality of pruning LLMs? To address these critical questions and explore the technological strategies within LLMs, we leverage the Universal Approximation Theory (UAT) to offer a theoretical backdrop, shedding light on the mechanisms that underpin these advancements.
|
http://arxiv.org/pdf/2407.00958v1
|
[
"Wei Wang",
"Qing Li"
] |
2024-07-01T04:29:35Z
|
2024-07-01T04:29:35Z
|
2407.00956
|
A Closer Look at Deep Learning on Tabular Data
|
Tabular data is prevalent across various domains in machine learning. Although Deep Neural Network (DNN)-based methods have shown promising performance comparable to tree-based ones, in-depth evaluation of these methods is challenging due to varying performance ranks across diverse datasets. In this paper, we propose a comprehensive benchmark comprising 300 tabular datasets, covering a wide range of task types, size distributions, and domains. We perform an extensive comparison between state-of-the-art deep tabular methods and tree-based methods, revealing the average rank of all methods and highlighting the key factors that influence the success of deep tabular methods. Next, we analyze deep tabular methods based on their training dynamics, including changes in validation metrics and other statistics. For each dataset-method pair, we learn a mapping from both the meta-features of datasets and the first part of the validation curve to the final validation set performance and even the evolution of validation curves. This mapping extracts essential meta-features that influence prediction accuracy, helping the analysis of tabular methods from novel aspects. Based on the performance of all methods on this large benchmark, we identify two subsets of 45 datasets each. The first subset contains datasets that favor either tree-based methods or DNN-based methods, serving as effective analysis tools to evaluate strategies (e.g., attribute encoding strategies) for improving deep tabular models. The second subset contains datasets where the ranks of methods are consistent with the overall benchmark, acting as a probe for tabular analysis. These "tiny tabular benchmarks" will facilitate further studies on tabular data.
|
http://arxiv.org/pdf/2407.00956v1
|
[
"Han-Jia Ye",
"Si-Yang Liu",
"Hao-Run Cai",
"Qi-Le Zhou",
"De-Chuan Zhan"
] |
2024-07-01T04:24:07Z
|
2024-07-01T04:24:07Z
|
2407.00952
|
SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large
Language Models
|
The scalability of large language models (LLMs) in handling high-complexity models and large-scale datasets has led to tremendous successes in pivotal domains. While there is an urgent need to acquire more training data for LLMs, a concerning reality is the depletion of high-quality public datasets within a few years. In view of this, the federated learning (FL) LLM fine-tuning paradigm recently has been proposed to facilitate collaborative LLM fine-tuning on distributed private data, where multiple data owners collaboratively fine-tune a shared LLM without sharing raw data. However, the staggering model size of LLMs imposes heavy computing and communication burdens on clients, posing significant barriers to the democratization of the FL LLM fine-tuning paradigm. To address this issue, split learning (SL) has emerged as a promising solution by offloading the primary training workload to a server via model partitioning while exchanging activations and activation gradients with smaller data sizes rather than the entire LLM. Unfortunately, research on the SL LLM fine-tuning paradigm is still in its nascent stage. To fill this gap, in this paper, we propose the first SL LLM fine-tuning framework, named SplitLoRA. SplitLoRA is built on the split federated learning (SFL) framework, amalgamating the advantages of parallel training from FL and model splitting from SL and thus greatly enhancing the training efficiency. It is worth noting that SplitLoRA is the inaugural open-source benchmark for SL LLM fine-tuning, providing a foundation for research efforts dedicated to advancing SL LLM fine-tuning. Extensive simulations validate that SplitLoRA achieves target accuracy in significantly less time than state-of-the-art LLM fine-tuning frameworks, demonstrating the superior training performance of SplitLoRA. The project page is available at https://fduinc.github.io/splitlora/.
|
http://arxiv.org/pdf/2407.00952v1
|
[
"Zheng Lin",
"Xuanjie Hu",
"Yuxin Zhang",
"Zhe Chen",
"Zihan Fang",
"Xianhao Chen",
"Ang Li",
"Praneeth Vepakomma",
"Yue Gao"
] |
2024-07-01T04:13:25Z
|
2024-07-01T04:13:25Z
|
2407.00950
|
Causal Bandits: The Pareto Optimal Frontier of Adaptivity, a Reduction
to Linear Bandits, and Limitations around Unknown Marginals
|
In this work, we investigate the problem of adapting to the presence or absence of causal structure in multi-armed bandit problems. In addition to the usual reward signal, we assume the learner has access to additional variables, observed in each round after acting. When these variables $d$-separate the action from the reward, existing work in causal bandits demonstrates that one can achieve strictly better (minimax) rates of regret (Lu et al., 2020). Our goal is to adapt to this favorable "conditionally benign" structure, if it is present in the environment, while simultaneously recovering worst-case minimax regret, if it is not. Notably, the learner has no prior knowledge of whether the favorable structure holds. In this paper, we establish the Pareto optimal frontier of adaptive rates. We prove upper and matching lower bounds on the possible trade-offs in the performance of learning in conditionally benign and arbitrary environments, resolving an open question raised by Bilodeau et al. (2022). Furthermore, we are the first to obtain instance-dependent bounds for causal bandits, by reducing the problem to the linear bandit setting. Finally, we examine the common assumption that the marginal distributions of the post-action contexts are known and show that a nontrivial estimate is necessary for better-than-worst-case minimax rates.
|
http://arxiv.org/pdf/2407.00950v1
|
[
"Ziyi Liu",
"Idan Attias",
"Daniel M. Roy"
] |
2024-07-01T04:12:15Z
|
2024-07-01T04:12:15Z
|
2407.00948
|
The House Always Wins: A Framework for Evaluating Strategic Deception in
LLMs
|
We propose a framework for evaluating strategic deception in large language models (LLMs). In this framework, an LLM acts as a game master in two scenarios: one with random game mechanics and another where it can choose between random or deliberate actions. As an example, we use blackjack because neither the action space nor the strategies involve deception. We benchmark Llama3-70B, GPT-4-Turbo, and Mixtral in blackjack, comparing outcomes against expected distributions in fair play to determine if LLMs develop strategies favoring the "house." Our findings reveal that the LLMs exhibit significant deviations from fair play when given implicit randomness instructions, suggesting a tendency towards strategic manipulation in ambiguous scenarios. However, when presented with an explicit choice, the LLMs largely adhere to fair play, indicating that the framing of instructions plays a crucial role in eliciting or mitigating potentially deceptive behaviors in AI systems.
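The statistical check behind the benchmark can be sketched simply: tally the outcomes the LLM game master deals and test them against the distribution expected under fair play. The expected win/loss/push frequencies below are placeholder assumptions, not exact blackjack odds.

```python
# Goodness-of-fit test of dealt outcomes vs. an assumed fair-play distribution.
from scipy.stats import chisquare

observed = [380, 520, 100]           # player wins, losses, pushes over 1000 hands
expected_frac = [0.42, 0.49, 0.09]   # assumed fair-play frequencies (placeholder)
expected = [f * sum(observed) for f in expected_frac]
stat, p = chisquare(observed, f_exp=expected)
verdict = "deviates from fair play" if p < 0.05 else "consistent with fair play"
print(f"chi2={stat:.2f}, p={p:.4f} -> {verdict}")
```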
|
http://arxiv.org/pdf/2407.00948v1
|
[
"Tanush Chopra",
"Michael Li"
] |
2024-07-01T04:07:49Z
|
2024-07-01T04:07:49Z
|
2405.13300
|
FAITH: Frequency-domain Attention In Two Horizons for Time Series
Forecasting
|
Time Series Forecasting plays a crucial role in various fields such as industrial equipment maintenance, meteorology, energy consumption, traffic flow and financial investment. However, despite their considerable advantages over traditional statistical approaches, current deep learning-based predictive models often exhibit a significant deviation between their forecasting outcomes and the ground truth. This discrepancy is largely due to an insufficient emphasis on extracting the sequence's latent information, particularly its global information within the frequency domain and the relationship between different variables. To address this issue, we propose a novel model, Frequency-domain Attention In Two Horizons (FAITH), which decomposes time series into trend and seasonal components using a multi-scale sequence adaptive decomposition and fusion architecture, and processes them separately. FAITH utilizes a Frequency Channel feature Extraction Module and a Frequency Temporal feature Extraction Module to capture inter-channel relationships and temporal global information in the sequence, significantly improving its ability to handle long-term dependencies and complex patterns. Furthermore, FAITH achieves theoretically linear complexity by modifying the time-frequency domain transformation method, effectively reducing computational costs. Extensive experiments on 6 benchmarks for long-term forecasting and 3 benchmarks for short-term forecasting demonstrate that FAITH outperforms existing models in many fields, such as electricity, weather and traffic, proving its effectiveness and superiority both in long-term and short-term time series forecasting tasks. Our codes and data are available at https://github.com/LRQ577/FAITH.
|
http://arxiv.org/pdf/2405.13300v3
|
[
"Ruiqi Li",
"Maowei Jiang",
"Kai Wang",
"Kaiduo Feng",
"Quangao Liu",
"Yue Sun",
"Xiufang Zhou"
] |
2024-07-01T04:01:11Z
|
2024-05-22T02:37:02Z
|
2407.00945
|
Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models:
Enhancing Performance and Reducing Inference Costs
|
The rapid advancement of large language models (LLMs) has led to architectures with billions to trillions of parameters, posing significant deployment challenges due to their substantial demands on memory, processing power, and energy consumption. Sparse Mixture-of-Experts (SMoE) architectures have emerged as a solution, activating only a subset of parameters per token, thereby achieving faster inference while maintaining performance. However, SMoE models still face limitations in broader deployment due to their large parameter counts and significant GPU memory requirements. In this work, we introduce a gradient-free evolutionary strategy named EEP (Efficient Expert Pruning) to enhance the pruning of experts in SMoE models. EEP relies solely on model inference (i.e., no gradient computation) and achieves greater sparsity while maintaining or even improving performance on downstream tasks. EEP can be used to reduce both the total number of experts (thus saving GPU memory) and the number of active experts (thus accelerating inference). For example, we demonstrate that pruning up to 75% of experts in Mixtral $8\times 7$B-Instruct results in a substantial reduction in parameters with minimal performance loss. Remarkably, we observe improved performance on certain tasks, such as a significant increase in accuracy on the SQuAD dataset (from 53.4% to 75.4%), when pruning half of the experts. With these results, EEP not only lowers the barrier to deploying SMoE models, but also challenges the conventional understanding of model pruning by showing that fewer experts can lead to better task-specific performance without any fine-tuning. Code is available at https://github.com/imagination-research/EEP.
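Because EEP needs no gradients, the search skeleton is simple: mutate a binary keep-mask over experts and keep the mutation whenever forward evaluation improves. The (1+1) evolutionary loop and the synthetic fitness standing in for downstream-task accuracy are illustrative assumptions.

```python
# Gradient-free expert pruning in miniature: evolve a keep-mask over experts.
import numpy as np

rng = np.random.default_rng(0)
n_experts = 8
utility = rng.random(n_experts)  # hypothetical per-expert usefulness

def fitness(mask):  # stand-in for evaluating the pruned model on a task
    if mask.sum() == 0:
        return -1.0
    return utility[mask].mean() - 0.05 * mask.sum()  # quality minus memory cost

best = rng.random(n_experts) < 0.5
best_f = fitness(best)
for _ in range(200):  # (1+1) evolutionary strategy: flip one expert at a time
    child = best.copy()
    i = rng.integers(n_experts)
    child[i] = ~child[i]
    f = fitness(child)
    if f >= best_f:
        best, best_f = child, f
print("kept experts:", np.where(best)[0], "fitness:", round(float(best_f), 3))
```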
|
http://arxiv.org/pdf/2407.00945v1
|
[
"Enshu Liu",
"Junyi Zhu",
"Zinan Lin",
"Xuefei Ning",
"Matthew B. Blaschko",
"Shengen Yan",
"Guohao Dai",
"Huazhong Yang",
"Yu Wang"
] |
2024-07-01T03:57:35Z
|
2024-07-01T03:57:35Z
|
2407.00935
|
Look Ahead or Look Around? A Theoretical Comparison Between
Autoregressive and Masked Pretraining
|
In recent years, the rise of generative self-supervised learning (SSL) paradigms has exhibited impressive performance across visual, language, and multi-modal domains. While the varied designs of generative SSL objectives lead to distinct properties in downstream tasks, a theoretical understanding of these differences remains largely unexplored. In this paper, we establish the first theoretical comparisons between two leading generative SSL paradigms: autoregressive SSL and masked SSL. Through establishing theoretical frameworks, we elucidate the strengths and limitations of autoregressive and masked SSL within the primary evaluation tasks of classification and content generation. Our findings demonstrate that in classification tasks, the flexibility of targeted tokens in masked SSL fosters more inter-sample connections compared to the fixed position of target tokens in autoregressive SSL, which yields superior clustering performance. In content generation tasks, the misalignment between the flexible lengths of test samples and the fixed length of unmasked texts in masked SSL (vs. flexible lengths of conditional texts in autoregressive SSL) hinders its generation performance. To leverage each other's strengths and mitigate weaknesses, we propose diversity-enhanced autoregressive and variable-length masked objectives, which substantially improve the classification performance of autoregressive SSL and the generation performance of masked SSL. Code is available at https://github.com/PKU-ML/LookAheadLookAround.
|
http://arxiv.org/pdf/2407.00935v1
|
[
"Qi Zhang",
"Tianqi Du",
"Haotian Huang",
"Yifei Wang",
"Yisen Wang"
] |
2024-07-01T03:35:59Z
|
2024-07-01T03:35:59Z
|
2303.01504
|
Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based
Artificial Bias
|
With the swift advancement of deep learning, state-of-the-art algorithms have been utilized in various social situations. Nonetheless, some algorithms have been discovered to exhibit biases and provide unequal results. The current debiasing methods face challenges such as poor utilization of data or intricate training requirements. In this work, we found that the backdoor attack can construct an artificial bias similar to the model bias derived in standard training. Considering the strong adjustability of backdoor triggers, we are motivated to mitigate the model bias by carefully designing a reverse artificial bias created from a backdoor attack. Based on this, we propose a backdoor debiasing framework based on knowledge distillation, which effectively reduces the model bias from original data and minimizes security risks from the backdoor attack. The proposed solution is validated on both image and structured datasets, showing promising results. This work advances the understanding of backdoor attacks and highlights its potential for beneficial applications. The code for the study can be found at https://anonymous.4open.science/r/DwB-BC07/.
|
http://arxiv.org/pdf/2303.01504v3
|
[
"Shangxi Wu",
"Qiuyang He",
"Jian Yu",
"Jitao Sang"
] |
2024-07-01T03:33:55Z
|
2023-03-01T12:31:07Z
|
2405.16522
|
Multi-State TD Target for Model-Free Reinforcement Learning
|
Temporal difference (TD) learning is a fundamental technique in reinforcement learning that updates value estimates for states or state-action pairs using a TD target. This target represents an improved estimate of the true value by incorporating both immediate rewards and the estimated value of subsequent states. Traditionally, TD learning relies on the value of a single subsequent state. We propose an enhanced multi-state TD (MSTD) target that utilizes the estimated values of multiple subsequent states. Building on this new MSTD concept, we develop complete actor-critic algorithms that include management of replay buffers in two modes, and integrate with deep deterministic policy gradient (DDPG) and soft actor-critic (SAC). Experimental results demonstrate that algorithms employing the MSTD target significantly improve learning performance compared to traditional methods. The code is provided on GitHub.
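The shape of such a target is easy to sketch: build the k-step bootstrapped return for each of the next m states and combine them into a single target for V(s_t). Equal-weight averaging is an illustrative assumption; the paper's exact combination may differ.

```python
# Multi-state TD target: average k-step returns bootstrapped from m next states.
import numpy as np

def mstd_target(rewards, values, gamma=0.99, m=3):
    """rewards = [r_t, ..., r_{t+m-1}]; values = [V(s_{t+1}), ..., V(s_{t+m})]."""
    targets, ret = [], 0.0
    for k in range(m):
        ret += (gamma ** k) * rewards[k]                      # running reward sum
        targets.append(ret + (gamma ** (k + 1)) * values[k])  # (k+1)-step target
    return float(np.mean(targets))

rewards = [1.0, 0.5, 0.0]
values = [2.0, 1.8, 1.5]
print(mstd_target(rewards, values))
```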
|
http://arxiv.org/pdf/2405.16522v3
|
[
"Wuhao Wang",
"Zhiyong Chen",
"Lepeng Zhang"
] |
2024-07-01T03:21:38Z
|
2024-05-26T11:17:49Z
|
2407.00928
|
FoldGPT: Simple and Effective Large Language Model Compression Scheme
|
The demand for deploying large language models (LLMs) on mobile devices continues to increase, driven by escalating data security concerns and cloud costs. However, network bandwidth and memory limitations pose challenges for deploying billion-level models on mobile devices. In this study, we investigate the outputs of different layers across various scales of LLMs and find that the outputs of most layers exhibit significant similarity. Moreover, this similarity becomes more pronounced as the model size increases, indicating substantial redundancy in the depth direction of the LLMs. Based on this observation, we propose an efficient model volume compression strategy, termed FoldGPT, which combines block removal and block parameter sharing. This strategy consists of three parts: (1) Based on the learnable gating parameters, we determine the block importance ranking while modeling the coupling effect between blocks. Then we delete some redundant layers based on the given removal rate. (2) For the retained blocks, we apply a specially designed group parameter sharing strategy, where blocks within the same group share identical weights, significantly compressing the number of parameters and slightly reducing latency overhead. (3) After sharing these blocks, we "cure" the mismatch caused by sparsity with a minor amount of fine-tuning and introduce a tail-layer distillation strategy to improve the performance. Experiments demonstrate that FoldGPT outperforms previous state-of-the-art (SOTA) methods in efficient model compression, demonstrating the feasibility of achieving model lightweighting through straightforward block removal and parameter sharing.
|
http://arxiv.org/pdf/2407.00928v1
|
[
"Songwei Liu",
"Chao Zeng",
"Lianqiang Li",
"Chenqian Yan",
"Lean Fu",
"Xing Mei",
"Fangmin Chen"
] |
2024-07-01T03:17:53Z
|
2024-07-01T03:17:53Z
|
2407.00927
|
Learnability of Parameter-Bounded Bayes Nets
|
Bayes nets are extensively used in practice to efficiently represent joint probability distributions over a set of random variables and capture dependency relations. In a seminal paper, Chickering et al. (JMLR 2004) showed that given a distribution $P$, that is defined as the marginal distribution of a Bayes net, it is $\mathsf{NP}$-hard to decide whether there is a parameter-bounded Bayes net that represents $P$. They called this problem LEARN. In this work, we extend the $\mathsf{NP}$-hardness result of LEARN and prove the $\mathsf{NP}$-hardness of a promise search variant of LEARN, whereby the Bayes net in question is guaranteed to exist and one is asked to find such a Bayes net. We complement our hardness result with a positive result about the sample complexity that is sufficient to recover a parameter-bounded Bayes net that is close (in TV distance) to a given distribution $P$, that is represented by some parameter-bounded Bayes net, generalizing a degree-bounded sample complexity result of Brustle et al. (EC 2020).
|
http://arxiv.org/pdf/2407.00927v1
|
[
"Arnab Bhattacharyya",
"Davin Choo",
"Sutanu Gayen",
"Dimitrios Myrisiotis"
] |
2024-07-01T03:17:34Z
|
2024-07-01T03:17:34Z
|
2307.10529
|
Fast Unsupervised Deep Outlier Model Selection with Hypernetworks
|
Outlier detection (OD) finds many applications with a rich literature of numerous techniques. Deep neural network based OD (DOD) has seen a recent surge of attention thanks to the many advances in deep learning. In this paper, we consider a critical-yet-understudied challenge with unsupervised DOD, that is, effective hyperparameter (HP) tuning/model selection. While several prior works report the sensitivity of OD models to HPs, it becomes ever so critical for the modern DOD models that exhibit a long list of HPs. We introduce HYPER for tuning DOD models, tackling two fundamental challenges: (1) validation without supervision (due to lack of labeled anomalies), and (2) efficient search of the HP/model space (due to exponential growth in the number of HPs). A key idea is to design and train a novel hypernetwork (HN) that maps HPs onto optimal weights of the main DOD model. In turn, HYPER capitalizes on a single HN that can dynamically generate weights for many DOD models (corresponding to varying HPs), which offers significant speed-up. In addition, it employs meta-learning on historical OD tasks with labels to train a proxy validation function, likewise trained with our proposed HN efficiently. Extensive experiments on 35 OD tasks show that HYPER achieves high performance against 8 baselines with significant efficiency gains.
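The central device, a hypernetwork that maps a hyperparameter vector to the main model's weights, fits in a few lines; here the main model is a one-layer scorer and all dimensions are illustrative assumptions.

```python
# Toy hypernetwork: HP vector in, weights of a small main model out.
import torch
import torch.nn as nn

in_dim, out_dim, hp_dim = 16, 1, 3
hypernet = nn.Sequential(nn.Linear(hp_dim, 64), nn.ReLU(),
                         nn.Linear(64, in_dim * out_dim + out_dim))

def main_model(x, hp):
    """Run the main (linear) model with weights generated from the HP vector."""
    params = hypernet(hp)
    W = params[: in_dim * out_dim].view(out_dim, in_dim)
    b = params[in_dim * out_dim:]
    return x @ W.t() + b

x = torch.randn(8, in_dim)
hp = torch.tensor([0.1, 2.0, 0.5])  # an encoded hyperparameter setting
print(main_model(x, hp).shape)      # torch.Size([8, 1])
```

Training the hypernetwork once and then sweeping `hp` is what lets a single network stand in for many differently configured detection models.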
|
http://arxiv.org/abs/2307.10529v2
|
[
"Xueying Ding",
"Yue Zhao",
"Leman Akoglu"
] |
2024-07-01T03:10:34Z
|
2023-07-20T02:07:20Z
|
2402.11838
|
UniST: A Prompt-Empowered Universal Model for Urban Spatio-Temporal
Prediction
|
Urban spatio-temporal prediction is crucial for informed decision-making, such as traffic management, resource optimization, and emergency response. Despite remarkable breakthroughs in pretrained natural language models that enable one model to handle diverse tasks, a universal solution for spatio-temporal prediction remains challenging. Existing prediction approaches are typically tailored for specific spatio-temporal scenarios, requiring task-specific model designs and extensive domain-specific training data. In this study, we introduce UniST, a universal model designed for general urban spatio-temporal prediction across a wide range of scenarios. Inspired by large language models, UniST achieves success through: (i) utilizing diverse spatio-temporal data from different scenarios, (ii) effective pre-training to capture complex spatio-temporal dynamics, and (iii) knowledge-guided prompts to enhance generalization capabilities. These designs together unlock the potential of building a universal model for various scenarios. Extensive experiments on more than 20 spatio-temporal scenarios demonstrate UniST's efficacy in advancing state-of-the-art performance, especially in few-shot and zero-shot prediction. The datasets and code implementation are released on https://github.com/tsinghua-fib-lab/UniST.
|
http://arxiv.org/abs/2402.11838v5
|
[
"Yuan Yuan",
"Jingtao Ding",
"Jie Feng",
"Depeng Jin",
"Yong Li"
] |
2024-07-01T02:51:58Z
|
2024-02-19T05:04:11Z
|
2407.00918
|
Robust and Reliable Early-Stage Website Fingerprinting Attacks via
Spatial-Temporal Distribution Analysis
|
Website Fingerprinting (WF) attacks identify the websites visited by users by performing traffic analysis, compromising user privacy. Particularly, DL-based WF attacks demonstrate impressive attack performance. However, the effectiveness of DL-based WF attacks relies on the collected complete and pure traffic during the page loading, which impacts the practicality of these attacks. The WF performance is rather low under dynamic network conditions and various WF defenses, particularly when the analyzed traffic is only a small part of the complete traffic. In this paper, we propose Holmes, a robust and reliable early-stage WF attack. Holmes utilizes temporal and spatial distribution analysis of website traffic to effectively identify websites in the early stages of page loading. Specifically, Holmes develops adaptive data augmentation based on the temporal distribution of website traffic and utilizes a supervised contrastive learning method to extract the correlations between the early-stage traffic and the pre-collected complete traffic. Holmes accurately identifies traffic in the early stages of page loading by computing the correlation of the traffic with the spatial distribution information, which ensures robust and reliable detection according to early-stage traffic. We extensively evaluate Holmes using six datasets. Compared to nine existing DL-based WF attacks, Holmes improves the F1-score of identifying early-stage traffic by an average of 169.18%. Furthermore, we replay the traffic of visiting real-world dark web websites. Holmes successfully identifies dark web websites when the ratio of page loading on average is only 21.71%, with an average precision improvement of 169.36% over the existing WF attacks.
|
http://arxiv.org/pdf/2407.00918v1
|
[
"Xinhao Deng",
"Qi Li",
"Ke Xu"
] |
2024-07-01T02:51:26Z
|
2024-07-01T02:51:26Z
|
2407.00913
|
SecureSpectra: Safeguarding Digital Identity from Deep Fake Threats via
Intelligent Signatures
|
Advancements in DeepFake (DF) audio models pose a significant threat to voice authentication systems, leading to unauthorized access and the spread of misinformation. We introduce a defense mechanism, SecureSpectra, addressing DF threats by embedding orthogonal, irreversible signatures within audio. SecureSpectra leverages the inability of DF models to replicate high-frequency content, which we empirically identify across diverse datasets and DF models. Integrating differential privacy into the pipeline protects signatures from reverse engineering and strikes a delicate balance between enhanced security and minimal performance compromises. Our evaluations on Mozilla Common Voice, LibriSpeech, and VoxCeleb datasets showcase SecureSpectra's superior performance, outperforming recent works by up to 71% in detection accuracy. We open-source SecureSpectra to benefit the research community.
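The high-frequency intuition can be sketched directly: add a keyed pattern to a high-frequency band of the spectrum and later check for its presence by correlation. This FFT-domain toy ignores the paper's differential-privacy protection and irreversibility machinery; the band, key, and strength are illustrative assumptions.

```python
# Toy spectral signature: embed and verify a keyed high-frequency pattern.
import numpy as np

SR = 16000  # sample rate (Hz)

def sign(audio, key=42, band=(6000, 7000), strength=0.5):
    rng = np.random.default_rng(key)
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1 / SR)
    idx = (freqs >= band[0]) & (freqs < band[1])
    pattern = rng.standard_normal(idx.sum())
    spec[idx] += strength * pattern * np.abs(spec).mean()  # keyed HF pattern
    return np.fft.irfft(spec, n=len(audio)), pattern

def verify(audio, pattern, band=(6000, 7000)):
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1 / SR)
    idx = (freqs >= band[0]) & (freqs < band[1])
    return np.corrcoef(np.real(spec[idx]), pattern)[0, 1]  # high = signed

audio = np.random.default_rng(0).standard_normal(SR)  # 1 s stand-in waveform
signed, pattern = sign(audio)
print("signed corr: %.2f | unsigned corr: %.2f"
      % (verify(signed, pattern), verify(audio, pattern)))
```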
|
http://arxiv.org/pdf/2407.00913v1
|
[
"Oguzhan Baser",
"Kaan Kale",
"Sandeep P. Chinchali"
] |
2024-07-01T02:36:27Z
|
2024-07-01T02:36:27Z
|
2402.16710
|
Cost Aware Best Arm Identification
|
In this paper, we study a best arm identification problem with dual objectives. In addition to the classic reward, each arm is associated with a cost distribution, and the goal is to identify the largest-reward arm using the minimum expected cost. We call it Cost Aware Best Arm Identification (CABAI), which captures the separation of testing and implementation phases in product development pipelines and models the objective shift between phases, i.e., cost for testing and reward for implementation. We first derive a theoretical lower bound for CABAI and propose an algorithm called $\mathsf{CTAS}$ to match it asymptotically. To reduce the computation of $\mathsf{CTAS}$, we further propose a simple algorithm called Chernoff Overlap (CO), based on a square-root rule, which we prove is optimal in simplified two-armed models and generalizes well in numerical experiments. Our results show that (i) ignoring the heterogeneous action cost results in sub-optimality in practice, and (ii) simple algorithms can deliver near-optimal performance over a wide range of problems.
|
http://arxiv.org/pdf/2402.16710v2
|
[
"Kellen Kanarios",
"Qining Zhang",
"Lei Ying"
] |
2024-07-01T02:35:19Z
|
2024-02-26T16:27:08Z
|
2407.00911
|
Deep Image-to-Recipe Translation
|
The modern saying, "You Are What You Eat" resonates on a profound level, reflecting the intricate connection between our identities and the food we consume. Our project, Deep Image-to-Recipe Translation, is an intersection of computer vision and natural language generation that aims to bridge the gap between cherished food memories and the art of culinary creation. Our primary objective involves predicting ingredients from a given food image. For this task, we first develop a custom convolutional network and then compare its performance to a model that leverages transfer learning. We pursue an additional goal of generating a comprehensive set of recipe steps from a list of ingredients. We frame this process as a sequence-to-sequence task and develop a recurrent neural network that utilizes pre-trained word embeddings. We address several challenges of deep learning including imbalanced datasets, data cleaning, overfitting, and hyperparameter selection. Our approach emphasizes the importance of metrics such as Intersection over Union (IoU) and F1 score in scenarios where accuracy alone might be misleading. For our recipe prediction model, we employ perplexity, a commonly used and important metric for language models. We find that transfer learning via pre-trained ResNet-50 weights and GloVe embeddings provide an exceptional boost to model performance, especially when considering training resource constraints. Although we have made progress on the image-to-recipe translation, there is an opportunity for future exploration with advancements in model architectures, dataset scalability, and enhanced user interaction.
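Why IoU and F1 rather than plain accuracy for ingredient prediction is easy to see in code; the ingredient lists below are made up for illustration.

```python
# Set-based IoU and F1 for multi-label ingredient prediction.
def iou(pred: set, true: set) -> float:
    return len(pred & true) / len(pred | true) if pred | true else 1.0

def f1(pred: set, true: set) -> float:
    tp = len(pred & true)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(true)
    return 2 * precision * recall / (precision + recall)

true = {"flour", "egg", "sugar", "butter"}
pred = {"flour", "egg", "salt"}
print(f"IoU={iou(pred, true):.2f}, F1={f1(pred, true):.2f}")
# With thousands of possible ingredients, per-label accuracy would look high
# even for this mediocre prediction, which is why accuracy alone misleads.
```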
|
http://arxiv.org/pdf/2407.00911v1
|
[
"Jiangqin Ma",
"Bilal Mawji",
"Franz Williams"
] |
2024-07-01T02:33:07Z
|
2024-07-01T02:33:07Z
|
2311.02798
|
From molecules to scaffolds to functional groups: building
context-dependent molecular representation via multi-channel learning
|
Reliable molecular property prediction is essential for various scientific endeavors and industrial applications, such as drug discovery. However, the data scarcity, combined with the highly non-linear causal relationships between physicochemical and biological properties and conventional molecular featurization schemes, complicates the development of robust molecular machine learning models. Self-supervised learning (SSL) has emerged as a popular solution, utilizing large-scale, unannotated molecular data to learn a foundational representation of chemical space that might be advantageous for downstream tasks. Yet, existing molecular SSL methods largely overlook chemical knowledge, including molecular structure similarity, scaffold composition, and the context-dependent aspects of molecular properties when operating over the chemical space. They also struggle to learn the subtle variations in structure-activity relationship. This paper introduces a novel pre-training framework that learns robust and generalizable chemical knowledge. It leverages the structural hierarchy within the molecule, embeds them through distinct pre-training tasks across channels, and aggregates channel information in a task-specific manner during fine-tuning. Our approach demonstrates competitive performance across various molecular property benchmarks and offers strong advantages in particularly challenging yet ubiquitous scenarios like activity cliffs.
|
http://arxiv.org/pdf/2311.02798v2
|
[
"Yue Wan",
"Jialu Wu",
"Tingjun Hou",
"Chang-Yu Hsieh",
"Xiaowei Jia"
] |
2024-07-01T02:19:36Z
|
2023-11-05T23:47:52Z
|
2407.00906
|
GSO-YOLO: Global Stability Optimization YOLO for Construction Site
Detection
|
Safety issues at construction sites have long plagued the industry, posing risks to worker safety and causing economic damage due to potential hazards. With the advancement of artificial intelligence, particularly in the field of computer vision, the automation of safety monitoring on construction sites has emerged as a solution to this longstanding issue. Despite achieving impressive performance, advanced object detection methods like YOLOv8 still face challenges in handling the complex conditions found at construction sites. To solve these problems, this study presents the Global Stability Optimization YOLO (GSO-YOLO) model to address challenges in complex construction sites. The model integrates the Global Optimization Module (GOM) and Steady Capture Module (SCM) to enhance global contextual information capture and detection stability. The innovative AIoU loss function, which combines CIoU and EIoU, improves detection accuracy and efficiency. Experiments on datasets like SODA, MOCS, and CIS show that GSO-YOLO outperforms existing methods, achieving SOTA performance.
|
http://arxiv.org/pdf/2407.00906v1
|
[
"Yuming Zhang",
"Dongzhi Guan",
"Shouxin Zhang",
"Junhao Su",
"Yunzhi Han",
"Jiabin Liu"
] |
2024-07-01T02:15:27Z
|
2024-07-01T02:15:27Z
|
2405.18334
|
SketchQL Demonstration: Zero-shot Video Moment Querying with Sketches
|
In this paper, we will present SketchQL, a video database management system (VDBMS) for retrieving video moments with a sketch-based query interface. This novel interface allows users to specify object trajectory events with simple mouse drag-and-drop operations. Users can use trajectories of single objects as building blocks to compose complex events. Using a pre-trained model that encodes trajectory similarity, SketchQL achieves zero-shot video moments retrieval by performing similarity searches over the video to identify clips that are the most similar to the visual query. In this demonstration, we introduce the graphic user interface of SketchQL and detail its functionalities and interaction mechanisms. We also demonstrate the end-to-end usage of SketchQL from query composition to video moments retrieval using real-world scenarios.
|
http://arxiv.org/pdf/2405.18334v3
|
[
"Renzhi Wu",
"Pramod Chunduri",
"Dristi J Shah",
"Ashmitha Julius Aravind",
"Ali Payani",
"Xu Chu",
"Joy Arulraj",
"Kexin Rong"
] |
2024-07-01T02:10:50Z
|
2024-05-28T16:28:51Z
|
2406.19602
|
A Survey on Deep Clustering: From the Prior Perspective
|
Facilitated by the powerful feature extraction ability of neural networks, deep clustering has achieved great success in analyzing high-dimensional and complex real-world data. The performance of deep clustering methods is affected by various factors such as network structures and learning objectives. However, as pointed out in this survey, the essence of deep clustering lies in the incorporation and utilization of prior knowledge, which is largely ignored by existing works. From pioneering deep clustering methods based on data structure assumptions to recent contrastive clustering methods based on data augmentation invariances, the development of deep clustering intrinsically corresponds to the evolution of prior knowledge. In this survey, we provide a comprehensive review of deep clustering methods by categorizing them into six types of prior knowledge. We find that in general the prior innovation follows two trends, namely, i) from mining to constructing, and ii) from internal to external. Besides, we provide a benchmark on five widely-used datasets and analyze the performance of methods with diverse priors. By providing a novel prior knowledge perspective, we hope this survey could provide some novel insights and inspire future research in the deep clustering community.
|
http://arxiv.org/pdf/2406.19602v2
|
[
"Yiding Lu",
"Haobin Li",
"Yunfan Li",
"Yijie Lin",
"Xi Peng"
] |
2024-07-01T02:10:16Z
|
2024-06-28T02:18:16Z
|
2407.01645
|
Sign Gradient Descent-based Neuronal Dynamics: ANN-to-SNN Conversion
Beyond ReLU Network
|
Spiking neural network (SNN) is studied in multidisciplinary domains to (i) enable order-of-magnitudes energy-efficient AI inference and (ii) computationally simulate neuro-scientific mechanisms. The lack of discrete theory obstructs the practical application of SNN by limiting its performance and nonlinearity support. We present a new optimization-theoretic perspective of the discrete dynamics of spiking neurons. We prove that a discrete dynamical system of simple integrate-and-fire models approximates the sub-gradient method over unconstrained optimization problems. We practically extend our theory to introduce a novel sign gradient descent (signGD)-based neuronal dynamics that can (i) approximate diverse nonlinearities beyond ReLU and (ii) advance ANN-to-SNN conversion performance in low time steps. Experiments on large-scale datasets show that our technique achieves (i) state-of-the-art performance in ANN-to-SNN conversion and (ii) is the first to convert new DNN architectures, e.g., ConvNext, MLP-Mixer, and ResMLP. We publicly share our source code at https://github.com/snuhcs/snn_signgd.
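The conversion setting the paper builds on rests on a standard observation that is quick to simulate: an integrate-and-fire neuron's firing rate approximates ReLU of its input current, up to clipping at one spike per step. The soft-reset simulation below illustrates that baseline, not the signGD dynamics themselves.

```python
# Firing rate of a soft-reset integrate-and-fire neuron vs. ReLU.
def if_rate(current, T=100, v_th=1.0):
    v, spikes = 0.0, 0
    for _ in range(T):
        v += current       # integrate the input
        if v >= v_th:      # fire and soft-reset
            spikes += 1
            v -= v_th
    return spikes / T

for c in [-0.5, 0.0, 0.3, 0.7, 1.0]:
    print(f"input={c:+.1f}  rate={if_rate(c):.2f}  relu={max(c, 0.0):.2f}")
```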
|
http://arxiv.org/pdf/2407.01645v1
|
[
"Hyunseok Oh",
"Youngki Lee"
] |
2024-07-01T02:09:20Z
|
2024-07-01T02:09:20Z
|
2310.11829
|
Towards Graph Foundation Models: A Survey and Beyond
|
Foundation models have emerged as critical components in a variety of artificial intelligence applications, and showcase significant success in natural language processing and several other domains. Meanwhile, the field of graph machine learning is witnessing a paradigm transition from shallow methods to more sophisticated deep learning approaches. The capabilities of foundation models to generalize and adapt motivate graph machine learning researchers to discuss the potential of developing a new graph learning paradigm. This paradigm envisions models that are pre-trained on extensive graph data and can be adapted for various graph tasks. Despite this burgeoning interest, there is a noticeable lack of clear definitions and systematic analyses pertaining to this new domain. To this end, this article introduces the concept of Graph Foundation Models (GFMs), and offers an exhaustive explanation of their key characteristics and underlying technologies. We proceed to classify the existing work related to GFMs into three distinct categories, based on their dependence on graph neural networks and large language models. In addition to providing a thorough review of the current state of GFMs, this article also outlooks potential avenues for future research in this rapidly evolving domain.
|
http://arxiv.org/pdf/2310.11829v3
|
[
"Jiawei Liu",
"Cheng Yang",
"Zhiyuan Lu",
"Junze Chen",
"Yibo Li",
"Mengmei Zhang",
"Ting Bai",
"Yuan Fang",
"Lichao Sun",
"Philip S. Yu",
"Chuan Shi"
] |
2024-07-01T02:06:42Z
|
2023-10-18T09:31:21Z
|
2407.00902
|
From Introspection to Best Practices: Principled Analysis of
Demonstrations in Multimodal In-Context Learning
|
Motivated by the in-context learning (ICL) capabilities of Large Language Models (LLMs), multimodal LLMs with an additional visual modality also exhibit similar ICL abilities when multiple image-text pairs are provided as demonstrations. However, relatively less work has been done to investigate the principles behind how and why multimodal ICL works. We conduct a systematic and principled evaluation of multimodal ICL for models of different scales on a broad spectrum of new yet critical tasks. Through perturbations over different modality information, we show that modalities matter differently across tasks in multimodal ICL. Considering such modality impact, we further utilize modality-driven demonstration strategies to boost ICL performance. We also identify that demonstration selection is closely related to the models' ability to capture task inductive biases from multimodal ICL. Our principled analysis provides a comprehensive way of understanding the role of demonstrations in multimodal in-context learning, and sheds light on effectively improving multimodal ICL on a wide range of tasks even if those tasks are not seen in or even contradict pretraining data.
|
http://arxiv.org/pdf/2407.00902v1
|
[
"Nan Xu",
"Fei Wang",
"Sheng Zhang",
"Hoifung Poon",
"Muhao Chen"
] |
2024-07-01T01:57:21Z
|
2024-07-01T01:57:21Z
|
2407.00891
|
ZeroDDI: A Zero-Shot Drug-Drug Interaction Event Prediction Method with
Semantic Enhanced Learning and Dual-Modal Uniform Alignment
|
Drug-drug interactions (DDIs) can result in various pharmacological changes, which can be categorized into different classes known as DDI events (DDIEs). In recent years, previously unobserved/unseen DDIEs have been emerging, posing a new classification task when unseen classes have no labelled instances in the training stage, which is formulated as a zero-shot DDIE prediction (ZS-DDIE) task. However, existing computational methods are not directly applicable to ZS-DDIE, which has two primary challenges: obtaining suitable DDIE representations and handling the class imbalance issue. To overcome these challenges, we propose a novel method named ZeroDDI for the ZS-DDIE task. Specifically, we design a biological semantic enhanced DDIE representation learning module, which emphasizes the key biological semantics and distills discriminative molecular substructure-related semantics for DDIE representation learning. Furthermore, we propose a dual-modal uniform alignment strategy to distribute drug pair representations and DDIE semantic representations uniformly in a unit sphere and align the matched ones, which can mitigate the issue of class imbalance. Extensive experiments show that ZeroDDI surpasses the baselines, indicating that it is a promising tool for detecting unseen DDIEs. Our code has been released at https://github.com/wzy-Sarah/ZeroDDI.
|
http://arxiv.org/pdf/2407.00891v1
|
[
"Ziyan Wang",
"Zhankun Xiong",
"Feng Huang",
"Xuan Liu",
"Wen Zhang"
] |
2024-07-01T01:28:14Z
|
2024-07-01T01:28:14Z
|
2407.00890
|
Macroeconomic Forecasting with Large Language Models
|
This paper presents a comparative analysis evaluating the accuracy of Large Language Models (LLMs) against traditional macro time series forecasting approaches. In recent times, LLMs have surged in popularity for forecasting due to their ability to capture intricate patterns in data and quickly adapt across very different domains. However, their effectiveness in forecasting macroeconomic time series data compared to conventional methods remains an area of interest. To address this, we conduct a rigorous evaluation of LLMs against traditional macro forecasting methods, using as common ground the FRED-MD database. Our findings provide valuable insights into the strengths and limitations of LLMs in forecasting macroeconomic time series, shedding light on their applicability in real-world scenarios.
|
http://arxiv.org/pdf/2407.00890v1
|
[
"Andrea Carriero",
"Davide Pettenuzzo",
"Shubhranshu Shekhar"
] |
2024-07-01T01:25:26Z
|
2024-07-01T01:25:26Z
|
2407.00888
|
Papez: Resource-Efficient Speech Separation with Auditory Working Memory
|
Transformer-based models recently reached state-of-the-art single-channel speech separation accuracy; however, their extreme computational load makes it difficult to deploy them in resource-constrained mobile or IoT devices. We thus present Papez, a lightweight and computation-efficient single-channel speech separation model. Papez is based on three key techniques. We first replace the inter-chunk Transformer with small-sized auditory working memory. Second, we adaptively prune the input tokens that do not need further processing. Finally, we reduce the number of parameters through a recurrent transformer. Our extensive evaluation shows that Papez achieves the best resource and accuracy tradeoffs by a large margin. We publicly share our source code at https://github.com/snuhcs/Papez
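Of the three techniques, adaptive token pruning is the easiest to sketch in isolation: tokens whose learned keep-score falls below a threshold are dropped before later blocks process the sequence. A toy sketch under assumed shapes, not the released Papez implementation:

```python
import torch

def prune_tokens(x, keep_scores, threshold=0.5):
    """Drop tokens judged 'finished' so later layers process fewer tokens.

    x: (seq_len, dim) token features; keep_scores: (seq_len,) in [0, 1].
    Returns the kept tokens plus their original positions, so the pruned
    tokens can be re-inserted at the output.
    """
    keep = keep_scores > threshold
    return x[keep], keep.nonzero(as_tuple=True)[0]

x = torch.randn(10, 16)
scores = torch.sigmoid(torch.randn(10))   # stand-in for a learned scorer
kept, positions = prune_tokens(x, scores)
print(kept.shape, positions.tolist())
```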
|
http://arxiv.org/pdf/2407.00888v1
|
[
"Hyunseok Oh",
"Juheon Yi",
"Youngki Lee"
] |
2024-07-01T01:23:06Z
|
2024-07-01T01:23:06Z
|
2407.00886
|
Mechanistic Interpretation through Contextual Decomposition in
Transformers
|
Transformers exhibit impressive capabilities but are often regarded as black boxes due to challenges in understanding the complex nonlinear relationships between features. Interpreting machine learning models is of paramount importance to mitigate risks, and mechanistic interpretability is in particular of current interest as it opens up a window for guiding manual modifications and reverse-engineering solutions. In this work, we introduce contextual decomposition for transformers (CD-T), extending prior work on CD for RNNs and CNNs, to address mechanistic interpretation in a computationally efficient manner. CD-T is a flexible interpretation method for transformers. It can capture contributions of combinations of input features or source internal components (e.g. attention heads, feed-forward networks) to (1) final predictions or (2) the output of any target internal component. Using CD-T, we propose a novel algorithm for circuit discovery. On a real-world pathology report classification task, we show that CD-T distills a more faithful circuit of attention heads, with improved computational efficiency (a 2x speedup), than a prior benchmark, path patching. As a versatile interpretation method, CD-T also exhibits exceptional capabilities for local interpretations. CD-T is shown to reliably find words and phrases of contrasting sentiment/topic on SST-2 and AGNews datasets. Through human experiments, we demonstrate CD-T enables users to identify the more accurate of two models and to better trust a model's outputs compared to alternative interpretation methods such as SHAP and LIME.
|
http://arxiv.org/pdf/2407.00886v1
|
[
"Aliyah R. Hsu",
"Yeshwanth Cherapanamjeri",
"Anobel Y. Odisho",
"Peter R. Carroll",
"Bin Yu"
] |
2024-07-01T01:12:20Z
|
2024-07-01T01:12:20Z
|
2407.01644
|
Evaluating the Role of Data Enrichment Approaches Towards Rare Event
Analysis in Manufacturing
|
Rare events are occurrences that take place with a significantly lower frequency than more common regular events. In manufacturing, predicting such events is particularly important, as they lead to unplanned downtime, shortened equipment lifespan, and high energy consumption. The occurrence of events is considered frequently-rare if observed in more than 10% of all instances, moderately-rare if it is 5-10%, very-rare if it is 1-5%, and extremely-rare if less than 1%. The rarity of events is inversely correlated with the maturity of a manufacturing industry. Typically, the rarity of events causes the multivariate data generated within a manufacturing process to be highly imbalanced, which leads to bias in predictive models. This paper evaluates the role of data enrichment techniques combined with supervised machine-learning techniques for rare event detection and prediction. To address the data scarcity, we use time series data augmentation and sampling methods to amplify the dataset with more multivariate features and data points while preserving the underlying time series patterns in the combined alterations. Imputation techniques are used in handling null values in datasets. Considering 15 learning models ranging from statistical learning to machine learning to deep learning methods, the best-performing model for the selected datasets is obtained and the efficacy of data enrichment is evaluated. Based on this evaluation, our results show that the enrichment procedure improves the F1 measure of supervised prediction models by up to 48% in rare failure event detection and prediction. We also conduct empirical and ablation experiments on the datasets to derive dataset-specific novel insights. Finally, we investigate the interpretability aspect of models for rare event prediction, considering multiple methods.
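As a concrete instance of such enrichment, a common recipe is to oversample the rare class with jittered copies of its time series windows. The sketch below assumes window-shaped arrays and is illustrative of the general idea, not the paper's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(windows, sigma=0.03):
    """Additive Gaussian noise, a standard time series augmentation."""
    return windows + rng.normal(0.0, sigma, windows.shape)

def oversample_rare(X, y, rare_label=1, factor=5, sigma=0.03):
    """Replicate rare-class windows with jitter to rebalance the data."""
    rare = X[y == rare_label]
    aug = np.concatenate([jitter(rare, sigma) for _ in range(factor)])
    X_out = np.concatenate([X, aug])
    y_out = np.concatenate([y, np.full(len(aug), rare_label)])
    return X_out, y_out

X = rng.normal(size=(100, 50, 3))          # 100 windows, 50 steps, 3 sensors
y = (rng.random(100) < 0.05).astype(int)   # ~5% rare events
X_aug, y_aug = oversample_rare(X, y)
print(y.mean(), y_aug.mean())              # rare-class share before/after
```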
|
http://arxiv.org/pdf/2407.01644v1
|
[
"Chathurangi Shyalika",
"Ruwan Wickramarachchi",
"Fadi El Kalach",
"Ramy Harik",
"Amit Sheth"
] |
2024-07-01T00:05:56Z
|
2024-07-01T00:05:56Z
|
2010.02318
|
MIMOSA: Multi-constraint Molecule Sampling for Molecule Optimization
|
Molecule optimization is a fundamental task for accelerating drug discovery, with the goal of generating new valid molecules that maximize multiple drug properties while maintaining similarity to the input molecule. Existing generative models and reinforcement learning approaches made initial success, but still face difficulties in simultaneously optimizing multiple drug properties. To address such challenges, we propose the MultI-constraint MOlecule SAmpling (MIMOSA) approach, a sampling framework that uses the input molecule as an initial guess and samples molecules from the target distribution. MIMOSA first pretrains two property-agnostic graph neural networks (GNNs) for molecule topology and substructure-type prediction, where a substructure can be either an atom or a single ring. For each iteration, MIMOSA uses the GNNs' predictions and employs three basic substructure operations (add, replace, delete) to generate new molecules and associated weights. The weights can encode multiple constraints, including similarity and drug property constraints, upon which we select promising molecules for the next iteration. MIMOSA enables flexible encoding of multiple property and similarity constraints and can efficiently generate new molecules that satisfy various property constraints, achieving up to 49.6% relative improvement over the best baseline in terms of success rate. The code repository (including the readme file, data preprocessing, model construction, and evaluation) is available at https://github.com/futianfan/MIMOSA.
|
http://arxiv.org/pdf/2010.02318v4
|
[
"Tianfan Fu",
"Cao Xiao",
"Xinhao Li",
"Lucas M. Glass",
"Jimeng Sun"
] |
2024-06-30T23:58:07Z
|
2020-10-05T20:18:42Z
|
2403.19708
|
Cost-Efficient Large Language Model Serving for Multi-turn Conversations
with CachedAttention
|
Interacting with humans through multi-turn conversations is a fundamental feature of large language models (LLMs). However, existing LLM serving engines executing multi-turn conversations are inefficient due to the need to repeatedly compute the key-value (KV) caches of historical tokens, incurring high serving costs. To address the problem, this paper proposes CachedAttention, a new attention mechanism that enables reuse of KV caches across multi-turn conversations, significantly reducing the repetitive computation overheads. CachedAttention maintains a hierarchical KV caching system that leverages cost-effective memory/storage mediums to save KV caches for all requests. To reduce KV cache access overheads from slow mediums, CachedAttention employs layer-wise pre-loading and asynchronous saving schemes to overlap the KV cache access with the GPU computation. To ensure that the KV caches to be accessed are placed in the fastest hierarchy, CachedAttention employs scheduler-aware fetching and eviction schemes to consciously place the KV caches in different layers based on the hints from the inference job scheduler. To avoid the invalidation of the saved KV caches incurred by context window overflow, CachedAttention enables the saved KV caches to remain valid via decoupling the positional encoding and effectively truncating the KV caches. Extensive experimental results demonstrate that CachedAttention significantly decreases the time to the first token (TTFT) by up to 87%, improves the prompt prefilling throughput by up to 7.8$\times$ for multi-turn conversations, and reduces the end-to-end inference cost by up to 70%.
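The core reuse logic can be sketched independently of the hierarchical placement and async I/O machinery: keep the per-conversation KV tensors between turns and prefill only the new tokens. A single-layer toy sketch with assumed shapes, not the paper's system:

```python
import torch

class ConversationKVCache:
    """Toy per-conversation KV store illustrating cross-turn reuse.

    The real system adds hierarchical placement (HBM/DRAM/SSD), async
    I/O, and scheduler-aware prefetching; only the reuse logic is shown.
    """
    def __init__(self):
        self._store = {}  # conv_id -> (K, V) for a single toy layer

    def get(self, conv_id):
        return self._store.get(conv_id)

    def put(self, conv_id, kv):
        self._store[conv_id] = kv

def prefill(cache, conv_id, new_k, new_v):
    """Compute KV only for the new turn's tokens and append to history."""
    saved = cache.get(conv_id)
    if saved is not None:
        k, v = saved
        new_k, new_v = torch.cat([k, new_k]), torch.cat([v, new_v])
    cache.put(conv_id, (new_k, new_v))
    return new_k, new_v

cache = ConversationKVCache()
prefill(cache, "c1", torch.randn(5, 8), torch.randn(5, 8))         # turn 1
k, v = prefill(cache, "c1", torch.randn(3, 8), torch.randn(3, 8))  # turn 2
print(k.shape)  # torch.Size([8, 8]): history KV reused, not recomputed
```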
|
http://arxiv.org/pdf/2403.19708v3
|
[
"Bin Gao",
"Zhuomin He",
"Puru Sharma",
"Qingxuan Kang",
"Djordje Jevdjic",
"Junbo Deng",
"Xingkun Yang",
"Zhou Yu",
"Pengfei Zuo"
] |
2024-06-30T23:50:38Z
|
2024-03-23T10:42:49Z
|
2407.01643
|
A Deep Generative Framework for Joint Households and Individuals
Population Synthesis
|
Household and individual-level sociodemographic data are essential for understanding human-infrastructure interaction and policymaking. However, the Public Use Microdata Sample (PUMS) offers only a sample at the state level, while census tract data only provides the marginal distributions of variables without correlations. Therefore, we need an accurate synthetic population dataset that maintains consistent variable correlations observed in microdata, preserves household-individual and individual-individual relationships, adheres to state-level statistics, and accurately represents the geographic distribution of the population. We propose a deep generative framework leveraging the variational autoencoder (VAE) to generate a synthetic population with the aforementioned features. The methodological contributions include (1) a new data structure for capturing household-individual and individual-individual relationships, (2) a transfer learning process with pre-training and fine-tuning steps to generate households and individuals whose aggregated distributions align with the census tract marginal distribution, and (3) a decoupled binary cross-entropy (D-BCE) loss function enabling distribution shift and out-of-sample record generation. Model results for an application in Delaware, USA demonstrate the ability to ensure the realism of generated household-individual records and accurately describe population statistics at the census tract level compared to existing methods. Furthermore, testing in North Carolina, USA yielded promising results, supporting the transferability of our method.
|
http://arxiv.org/pdf/2407.01643v1
|
[
"Xiao Qian",
"Utkarsh Gangwal",
"Shangjia Dong",
"Rachel Davidson"
] |
2024-06-30T23:01:58Z
|
2024-06-30T23:01:58Z
|
2407.00849
|
Towards Understanding Sensitive and Decisive Patterns in Explainable AI:
A Case Study of Model Interpretation in Geometric Deep Learning
|
The interpretability of machine learning models has gained increasing attention, particularly in scientific domains where high precision and accountability are crucial. This research focuses on distinguishing between two critical data patterns -- sensitive patterns (model-related) and decisive patterns (task-related) -- which are commonly used as model interpretations but often lead to confusion. Specifically, this study compares the effectiveness of two main streams of interpretation methods: post-hoc methods and self-interpretable methods, in detecting these patterns. Recently, geometric deep learning (GDL) has shown superior predictive performance in various scientific applications, creating an urgent need for principled interpretation methods. Therefore, we conduct our study using several representative GDL applications as case studies. We evaluate thirteen interpretation methods applied to three major GDL backbone models, using four scientific datasets to assess how well these methods identify sensitive and decisive patterns. Our findings indicate that post-hoc methods tend to provide interpretations better aligned with sensitive patterns, whereas certain self-interpretable methods exhibit strong and stable performance in detecting decisive patterns. Additionally, our study offers valuable insights into improving the reliability of these interpretation methods. For example, ensembling post-hoc interpretations from multiple models trained on the same task can effectively uncover the task's decisive patterns.
|
http://arxiv.org/pdf/2407.00849v1
|
[
"Jiajun Zhu",
"Siqi Miao",
"Rex Ying",
"Pan Li"
] |
2024-06-30T22:59:15Z
|
2024-06-30T22:59:15Z
|
2208.00287
|
Simplex Clustering via sBeta with Applications to Online Adjustment of
Black-Box Predictions
|
We explore clustering the softmax predictions of deep neural networks and introduce a novel probabilistic clustering method, referred to as k-sBetas. In the general context of clustering discrete distributions, the existing methods focused on exploring distortion measures tailored to simplex data, such as the KL divergence, as alternatives to the standard Euclidean distance. We provide a general maximum a posteriori (MAP) perspective of clustering distributions, emphasizing that the statistical models underlying the existing distortion-based methods may not be descriptive enough. Instead, we optimize a mixed-variable objective measuring data conformity within each cluster to the introduced sBeta density function, whose parameters are constrained and estimated jointly with binary assignment variables. Our versatile formulation approximates various parametric densities for modeling simplex data and enables the control of the cluster-balance bias. This yields highly competitive performances for the unsupervised adjustment of black-box model predictions in various scenarios. Our code and comparisons with the existing simplex-clustering approaches and our introduced softmax-prediction benchmarks are publicly available: https://github.com/fchiaroni/Clustering_Softmax_Predictions.
|
http://arxiv.org/pdf/2208.00287v4
|
[
"Florent Chiaroni",
"Malik Boudiaf",
"Amar Mitiche",
"Ismail Ben Ayed"
] |
2024-06-30T22:46:54Z
|
2022-07-30T18:29:11Z
|
2311.12023
|
LQ-LoRA: Low-rank Plus Quantized Matrix Decomposition for Efficient
Language Model Finetuning
|
We propose a simple approach for memory-efficient adaptation of pretrained language models. Our approach uses an iterative algorithm to decompose each pretrained matrix into a high-precision low-rank component and a memory-efficient quantized component. During finetuning, the quantized component remains fixed and only the low-rank component is updated. We present an integer linear programming formulation of the quantization component which enables dynamic configuration of quantization parameters (e.g., bit-width, block size) for each matrix given an overall target memory budget. We further explore a data-aware version of the algorithm which uses an approximation of the Fisher information matrix to weight the reconstruction objective during matrix decomposition. Experiments on finetuning RoBERTa and LLaMA-2 (7B and 70B) demonstrate that our low-rank plus quantized matrix decomposition approach (LQ-LoRA) outperforms strong QLoRA and GPTQ-LoRA baselines and enables aggressive quantization to sub-3 bits with only minor performance degradations. When finetuned on a language modeling calibration dataset, LQ-LoRA can also be used for model compression; in this setting our 2.75-bit LLaMA-2-70B model (which has 2.85 bits on average when including the low-rank components and requires 27GB of GPU memory) performs respectably compared to the 16-bit baseline.
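The iterative decomposition itself is compact: alternate between quantizing the residual and taking a truncated SVD of what the quantizer cannot represent. The sketch below substitutes a crude symmetric uniform quantizer for the paper's quantization scheme and omits the ILP configuration and Fisher weighting; it is an illustration of the alternating structure, not the released method.

```python
import numpy as np

def quantize_uniform(W, bits=2):
    """Crude symmetric uniform quantizer (stand-in for NF-style schemes)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / max(levels, 1)
    return np.round(W / scale) * scale

def lowrank_plus_quant(W, rank=8, bits=2, iters=10):
    """Alternate: Q <- quant(W - L); L <- best rank-r approx of (W - Q)."""
    L = np.zeros_like(W)
    for _ in range(iters):
        Q = quantize_uniform(W - L, bits)
        U, s, Vt = np.linalg.svd(W - Q, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return Q, L

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
Q, L = lowrank_plus_quant(W)
print(np.linalg.norm(W - (Q + L)) / np.linalg.norm(W))  # relative error
```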
|
http://arxiv.org/pdf/2311.12023v3
|
[
"Han Guo",
"Philip Greengard",
"Eric P. Xing",
"Yoon Kim"
] |
2024-06-30T22:43:35Z
|
2023-11-20T18:57:41Z
|
2407.00843
|
A Unified Approach to Extract Interpretable Rules from Tree Ensembles via
Integer Programming
|
Tree ensemble methods represent a popular machine learning model, known for their effectiveness in supervised classification and regression tasks. Their performance derives from aggregating predictions of multiple decision trees, which are renowned for their interpretability properties. However, tree ensemble methods do not reliably exhibit interpretable output. Our work aims to extract an optimized list of rules from a trained tree ensemble, providing the user with a condensed, interpretable model that retains most of the predictive power of the full model. Our approach consists of solving a clean set partitioning problem formulated through Integer Programming. The proposed method works with either tabular or time series data, for both classification and regression tasks, and does not require parameter tuning under the most common setting. Through rigorous computational experiments, we offer statistically significant evidence that our method is competitive with other rule extraction methods and effectively handles time series.
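A minimal version of such a set partitioning model is easy to state in an off-the-shelf ILP library: binary variables select rules, each sample must be covered by exactly one selected rule, and the objective trades rule errors against rule count. The instance below is a toy, hypothetical example (coverage sets and errors are made up), not the paper's exact formulation:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Toy instance: 4 samples, 5 candidate rules mined from a tree ensemble.
# cover[r] = samples the rule fires on; err[r] = its misclassified count.
cover = {0: {0, 1}, 1: {2}, 2: {3}, 3: {1, 2, 3}, 4: {0}}
err = {0: 0, 1: 1, 2: 0, 3: 2, 4: 0}
samples, lam = range(4), 0.1

prob = LpProblem("rule_set_partitioning", LpMinimize)
x = {r: LpVariable(f"x_{r}", cat=LpBinary) for r in cover}
prob += lpSum(err[r] * x[r] for r in cover) + lam * lpSum(x.values())
for s in samples:  # every sample explained by exactly one chosen rule
    prob += lpSum(x[r] for r in cover if s in cover[r]) == 1

prob.solve()
print([r for r in cover if x[r].value() == 1])  # expected: [0, 1, 2]
```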
|
http://arxiv.org/pdf/2407.00843v1
|
[
"Lorenzo Bonasera",
"Emilio Carrizosa"
] |
2024-06-30T22:33:47Z
|
2024-06-30T22:33:47Z
|
2407.00840
|
MUSE-Net: Missingness-aware mUlti-branching Self-attention Encoder for
Irregular Longitudinal Electronic Health Records
|
The era of big data has made vast amounts of clinical data readily available, particularly in the form of electronic health records (EHRs), which provide unprecedented opportunities for developing data-driven diagnostic tools to enhance clinical decision making. However, the application of EHRs in data-driven modeling faces challenges such as irregularly spaced multi-variate time series, issues of incompleteness, and data imbalance. Realizing the full data potential of EHRs hinges on the development of advanced analytical models. In this paper, we propose a novel Missingness-aware mUlti-branching Self-attention Encoder (MUSE-Net) to cope with the challenges in modeling longitudinal EHRs for data-driven disease prediction. The MUSE-Net leverages a multi-task Gaussian process (MGP) with missing value masks for data imputation, a multi-branching architecture to address the data imbalance problem, and a time-aware self-attention encoder to account for the irregularly spaced time intervals in longitudinal EHRs. We evaluate the proposed MUSE-Net using both synthetic and real-world datasets. Experimental results show that our MUSE-Net outperforms existing methods that are widely used to investigate longitudinal signals.
|
http://arxiv.org/pdf/2407.00840v1
|
[
"Zekai Wang",
"Tieming Liu",
"Bing Yao"
] |
2024-06-30T21:54:41Z
|
2024-06-30T21:54:41Z
|
2407.01641
|
NeurIPS 2024 ML4CFD Competition: Harnessing Machine Learning for
Computational Fluid Dynamics in Airfoil Design
|
The integration of machine learning (ML) techniques for addressing intricate physics problems is increasingly recognized as a promising avenue for expediting simulations. However, assessing ML-derived physical models poses a significant challenge for their adoption within industrial contexts. This competition is designed to promote the development of innovative ML approaches for tackling physical challenges, leveraging our recently introduced unified evaluation framework known as Learning Industrial Physical Simulations (LIPS). Building upon the preliminary edition held from November 2023 to March 2024, this iteration centers on a task fundamental to a well-established physical application: airfoil design simulation, utilizing our proposed AirfRANS dataset. The competition evaluates solutions based on various criteria encompassing ML accuracy, computational efficiency, Out-Of-Distribution performance, and adherence to physical principles. Notably, this competition represents a pioneering effort in exploring ML-driven surrogate methods aimed at optimizing the trade-off between computational efficiency and accuracy in physical simulations. Hosted on the Codabench platform, the competition offers online training and evaluation for all participating solutions.
|
http://arxiv.org/pdf/2407.01641v1
|
[
"Mouadh Yagoubi",
"David Danan",
"Milad Leyli-abadi",
"Jean-Patrick Brunet",
"Jocelyn Ahmed Mazari",
"Florent Bonnet",
"maroua gmati",
"Asma Farjallah",
"Paola Cinnella",
"Patrick Gallinari",
"Marc Schoenauer"
] |
2024-06-30T21:48:38Z
|
2024-06-30T21:48:38Z
|
2405.04715
|
Causality Pursuit from Heterogeneous Environments via Neural Adversarial
Invariance Learning
|
Pursuing causality from data is a fundamental problem in scientific discovery, treatment intervention, and transfer learning. This paper introduces a novel algorithmic method for addressing nonparametric invariance and causality learning in regression models across multiple environments, where the joint distribution of response variables and covariates varies, but the conditional expectations of outcome given an unknown set of quasi-causal variables are invariant. The challenge of finding such an unknown set of quasi-causal or invariant variables is compounded by the presence of endogenous variables that have heterogeneous effects across different environments; including even one of them in the regression would make the estimation inconsistent. The proposed Focused Adversarial Invariant Regularization (FAIR) framework utilizes an innovative minimax optimization approach that breaks down the barriers, driving regression models toward prediction-invariant solutions through adversarial testing. Leveraging the representation power of neural networks, FAIR neural networks (FAIR-NN) are introduced for causality pursuit. It is shown that FAIR-NN can find the invariant variables and quasi-causal variables under a minimal identification condition and that the resulting procedure is adaptive to low-dimensional composition structures in a non-asymptotic analysis. Under a structural causal model, variables identified by FAIR-NN represent pragmatic causality and provably align with exact causal mechanisms under conditions of sufficient heterogeneity. Computationally, FAIR-NN employs a novel Gumbel approximation with decreased temperature and a stochastic gradient descent ascent algorithm. The procedures are convincingly demonstrated using simulated and real-data examples.
|
http://arxiv.org/pdf/2405.04715v2
|
[
"Yihong Gu",
"Cong Fang",
"Peter Bühlmann",
"Jianqing Fan"
] |
2024-06-30T21:37:56Z
|
2024-05-07T23:37:40Z
|
2407.01640
|
BADM: Batch ADMM for Deep Learning
|
Stochastic gradient descent-based algorithms are widely used for training deep neural networks but often suffer from slow convergence. To address the challenge, we leverage the framework of the alternating direction method of multipliers (ADMM) to develop a novel data-driven algorithm, called batch ADMM (BADM). The fundamental idea of the proposed algorithm is to split the training data into batches, which are further divided into sub-batches where primal and dual variables are updated to generate global parameters through aggregation. We evaluate the performance of BADM across various deep learning tasks, including graph modelling, computer vision, image generation, and natural language processing. Extensive numerical experiments demonstrate that BADM achieves faster convergence and superior testing accuracy compared to other state-of-the-art optimizers.
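The splitting-and-aggregation skeleton can be illustrated on a convex stand-in (least squares), where each sub-batch solves a local proximal problem and the global parameters are the aggregated average corrected by dual variables. This is a consensus-ADMM sketch of the structure, not the deep-learning BADM updates themselves:

```python
import numpy as np

def badm_least_squares(A, b, n_sub=4, rho=1.0, iters=50):
    """Consensus-ADMM sketch: sub-batches update local primal/dual
    variables; the global parameters come from aggregation."""
    n, d = A.shape
    idx = np.array_split(np.arange(n), n_sub)
    x = np.zeros((n_sub, d))   # local primal variables
    u = np.zeros((n_sub, d))   # scaled dual variables
    z = np.zeros(d)            # global (consensus) parameters
    for _ in range(iters):
        for i, ids in enumerate(idx):
            Ai, bi = A[ids], b[ids]
            # argmin_x (1/2)||Ai x - bi||^2 + (rho/2)||x - z + u_i||^2
            x[i] = np.linalg.solve(Ai.T @ Ai + rho * np.eye(d),
                                   Ai.T @ bi + rho * (z - u[i]))
        z = (x + u).mean(axis=0)   # aggregation step
        u += x - z                 # dual ascent enforcing consensus
    return z

rng = np.random.default_rng(0)
A, x_true = rng.normal(size=(200, 5)), rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=200)
print(np.round(badm_least_squares(A, b) - x_true, 3))  # near zero
```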
|
http://arxiv.org/pdf/2407.01640v1
|
[
"Ouya Wang",
"Shenglong Zhou",
"Geoffrey Ye Li"
] |
2024-06-30T20:47:15Z
|
2024-06-30T20:47:15Z
|
2407.01639
|
ModelVerification.jl: a Comprehensive Toolbox for Formally Verifying
Deep Neural Networks
|
Deep Neural Networks (DNN) are crucial in approximating nonlinear functions across diverse applications, ranging from image classification to control. Verifying specific input-output properties can be a highly challenging task due to the lack of a single, self-contained framework that allows a complete range of verification types. To this end, we present ModelVerification.jl (MV), the first comprehensive, cutting-edge toolbox that contains a suite of state-of-the-art methods for verifying different types of DNNs and safety specifications. This versatile toolbox is designed to empower developers and machine learning practitioners with robust tools for verifying and ensuring the trustworthiness of their DNN models.
|
http://arxiv.org/pdf/2407.01639v1
|
[
"Tianhao Wei",
"Luca Marzari",
"Kai S. Yun",
"Hanjiang Hu",
"Peizhi Niu",
"Xusheng Luo",
"Changliu Liu"
] |
2024-06-30T20:15:31Z
|
2024-06-30T20:15:31Z
|
2407.00809
|
Kernel Neural Operators (KNOs) for Scalable, Memory-efficient,
Geometrically-flexible Operator Learning
|
This paper introduces the Kernel Neural Operator (KNO), a novel operator learning technique that uses deep kernel-based integral operators in conjunction with quadrature for function-space approximation of operators (maps from functions to functions). KNOs use parameterized, closed-form, finitely-smooth, and compactly-supported kernels with trainable sparsity parameters within the integral operators to significantly reduce the number of parameters that must be learned relative to existing neural operators. Moreover, the use of quadrature for numerical integration endows the KNO with geometric flexibility that enables operator learning on irregular geometries. Numerical results demonstrate that on existing benchmarks the training and test accuracy of KNOs is higher than popular operator learning techniques while using at least an order of magnitude fewer trainable parameters. KNOs thus represent a new paradigm of low-memory, geometrically-flexible, deep operator learning, while retaining the implementation simplicity and transparency of traditional kernel methods from both scientific computing and machine learning.
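The basic building block, a kernel integral operator evaluated by quadrature, fits in a few lines. The sketch below uses a Wendland C2 kernel in one dimension with trapezoid weights; in a KNO the kernel's shape and support parameters would be trainable, so this is a fixed-parameter illustration of the forward pass only.

```python
import numpy as np

def wendland_c2(r, eps):
    """Compactly supported, finitely smooth Wendland C2 kernel."""
    r = np.clip(r / eps, 0.0, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

def kernel_integral_operator(f_vals, x_nodes, y_query, w_quad, eps=0.3):
    """(K f)(y) ~= sum_j w_j k(y, x_j) f(x_j): quadrature over the nodes."""
    r = np.abs(y_query[:, None] - x_nodes[None, :])  # pairwise distances (1-D)
    return wendland_c2(r, eps) @ (w_quad * f_vals)

x = np.linspace(0, 1, 101)
w = np.full_like(x, 1 / 100)
w[[0, -1]] /= 2                      # trapezoid quadrature weights
f = np.sin(2 * np.pi * x)
y = np.linspace(0, 1, 7)
print(kernel_integral_operator(f, x, y, w).round(3))
```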
|
http://arxiv.org/pdf/2407.00809v1
|
[
"Matthew Lowery",
"John Turnage",
"Zachary Morrow",
"John D. Jakeman",
"Akil Narayan",
"Shandian Zhe",
"Varun Shankar"
] |
2024-06-30T19:28:12Z
|
2024-06-30T19:28:12Z
|
2305.19476
|
Accelerating Reinforcement Learning with Value-Conditional State Entropy
Exploration
|
A promising technique for exploration is to maximize the entropy of visited state distribution, i.e., state entropy, by encouraging uniform coverage of visited state space. While it has been effective for an unsupervised setup, it tends to struggle in a supervised setup with a task reward, where an agent prefers to visit high-value states to exploit the task reward. Such a preference can cause an imbalance between the distributions of high-value states and low-value states, which biases exploration towards low-value state regions as a result of the state entropy increasing when the distribution becomes more uniform. This issue is exacerbated when high-value states are narrowly distributed within the state space, making it difficult for the agent to complete the tasks. In this paper, we present a novel exploration technique that maximizes the value-conditional state entropy, which separately estimates the state entropies that are conditioned on the value estimates of each state, then maximizes their average. By only considering the visited states with similar value estimates for computing the intrinsic bonus, our method prevents the distribution of low-value states from affecting exploration around high-value states, and vice versa. We demonstrate that the proposed alternative to the state entropy baseline significantly accelerates various reinforcement learning algorithms across a variety of tasks within MiniGrid, DeepMind Control Suite, and Meta-World benchmarks. Source code is available at https://sites.google.com/view/rl-vcse.
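A simplified version of the bonus: bin the visited states by their value estimates and compute a k-NN entropy proxy (log of the k-th neighbor distance) only within each bin, so coverage of low-value regions cannot distort the bonus around high-value states. This is an illustrative sketch, not the authors' exact estimator:

```python
import numpy as np

def vcse_bonus(states, values, k=5, n_bins=4):
    """Intrinsic bonus ~ k-NN state entropy computed within value bins."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    bin_id = np.digitize(values, edges)
    bonus = np.zeros(len(states))
    for b in np.unique(bin_id):
        ids = np.where(bin_id == b)[0]
        S = states[ids]
        d = np.linalg.norm(S[:, None] - S[None, :], axis=-1)
        d.sort(axis=1)                  # row j: distances from state j, ascending
        kk = min(k, len(ids) - 1)
        if kk >= 1:
            # k-th neighbor distance is a standard nonparametric entropy proxy
            bonus[ids] = np.log(d[:, kk] + 1e-8)
    return bonus

rng = np.random.default_rng(0)
states = rng.normal(size=(256, 3))
values = rng.random(256)               # stand-in for learned value estimates
print(vcse_bonus(states, values)[:5].round(3))
```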
|
http://arxiv.org/pdf/2305.19476v2
|
[
"Dongyoung Kim",
"Jinwoo Shin",
"Pieter Abbeel",
"Younggyo Seo"
] |
2024-06-30T19:27:07Z
|
2023-05-31T01:09:28Z
|
2407.00806
|
Benchmarks for Reinforcement Learning with Biased Offline Data and
Imperfect Simulators
|
In many reinforcement learning (RL) applications one cannot easily let the agent act in the world; this is true for autonomous vehicles, healthcare applications, and even some recommender systems, to name a few examples. Offline RL provides a way to train agents without real-world exploration, but is often faced with biases due to data distribution shifts, limited coverage, and incomplete representation of the environment. To address these issues, practical applications have tried to combine simulators with grounded offline data, using so-called hybrid methods. However, constructing a reliable simulator is in itself often challenging due to intricate system complexities as well as missing or incomplete information. In this work, we outline four principal challenges for combining offline data with imperfect simulators in RL: simulator modeling error, partial observability, state and action discrepancies, and hidden confounding. To help drive the RL community to pursue these problems, we construct ``Benchmarks for Mechanistic Offline Reinforcement Learning'' (B4MRL), which provide dataset-simulator benchmarks for the aforementioned challenges. Our results suggest the key necessity of such benchmarks for future research.
|
http://arxiv.org/pdf/2407.00806v1
|
[
"Ori Linial",
"Guy Tennenholtz",
"Uri Shalit"
] |
2024-06-30T19:22:59Z
|
2024-06-30T19:22:59Z
|
2407.00801
|
Model-Free Active Exploration in Reinforcement Learning
|
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution. We adopt an information-theoretical viewpoint and start from the instance-specific lower bound of the number of samples that have to be collected to identify a nearly-optimal policy. Deriving this lower bound along with the optimal exploration strategy entails solving an intricate optimization problem and requires a model of the system. In turn, most existing sample optimal exploration algorithms rely on estimating the model. We derive an approximation of the instance-specific lower bound that only involves quantities that can be inferred using model-free approaches. Leveraging this approximation, we devise an ensemble-based model-free exploration strategy applicable to both tabular and continuous Markov decision processes. Numerical results demonstrate that our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
|
http://arxiv.org/pdf/2407.00801v1
|
[
"Alessio Russo",
"Alexandre Proutiere"
] |
2024-06-30T19:00:49Z
|
2024-06-30T19:00:49Z
|
2311.04886
|
SEMQA: Semi-Extractive Multi-Source Question Answering
|
Recently proposed long-form question answering (QA) systems, supported by large language models (LLMs), have shown promising capabilities. Yet, attributing and verifying their generated abstractive answers can be difficult, and automatically evaluating their accuracy remains an ongoing challenge. In this work, we introduce a new QA task for answering multi-answer questions by summarizing multiple diverse sources in a semi-extractive fashion. Specifically, Semi-extractive Multi-source QA (SEMQA) requires models to output a comprehensive answer, while mixing factual quoted spans -- copied verbatim from given input sources -- and non-factual free-text connectors that glue these spans together into a single cohesive passage. This setting bridges the gap between the outputs of well-grounded but constrained extractive QA systems and more fluent but harder-to-attribute fully abstractive answers. Particularly, it enables a new mode for language models that leverages their advanced language generation capabilities, while also producing fine in-line attributions by design that are easy to verify, interpret, and evaluate. To study this task, we create the first dataset of this kind, QuoteSum, with human-written semi-extractive answers to natural and generated questions, and define text-based evaluation metrics. Experimenting with several LLMs in various settings, we find this task to be surprisingly challenging, demonstrating the importance of QuoteSum for developing and studying such consolidation capabilities.
|
http://arxiv.org/pdf/2311.04886v2
|
[
"Tal Schuster",
"Adam D. Lelkes",
"Haitian Sun",
"Jai Gupta",
"Jonathan Berant",
"William W. Cohen",
"Donald Metzler"
] |
2024-06-30T18:53:22Z
|
2023-11-08T18:46:32Z
|
2404.01216
|
Novel Node Category Detection Under Subpopulation Shift
|
In real-world graph data, distribution shifts can manifest in various ways, such as the emergence of new categories and changes in the relative proportions of existing categories. It is often important to detect nodes of novel categories under such distribution shifts for safety or insight discovery purposes. We introduce a new approach, Recall-Constrained Optimization with Selective Link Prediction (RECO-SLIP), to detect nodes belonging to novel categories in attributed graphs under subpopulation shifts. By integrating a recall-constrained learning framework with a sample-efficient link prediction mechanism, RECO-SLIP addresses the dual challenges of resilience against subpopulation shifts and the effective exploitation of graph structure. Our extensive empirical evaluation across multiple graph datasets demonstrates the superior performance of RECO-SLIP over existing methods. The experimental code is available at https://github.com/hsinghuan/novel-node-category-detection.
|
http://arxiv.org/pdf/2404.01216v2
|
[
"Hsing-Huan Chung",
"Shravan Chaudhari",
"Yoav Wald",
"Xing Han",
"Joydeep Ghosh"
] |
2024-06-30T18:40:10Z
|
2024-04-01T16:16:19Z
|
2407.00787
|
Enhancing Travel Decision-Making: A Contrastive Learning Approach for
Personalized Review Rankings in Accommodations
|
User-generated reviews significantly influence consumer decisions, particularly in the travel domain when selecting accommodations. This paper's contribution comprises two main elements. Firstly, we present a novel dataset of authentic guest reviews sourced from a prominent online travel platform, totaling over two million reviews from 50,000 distinct accommodations. Secondly, we propose an innovative approach for personalized review ranking. Our method employs contrastive learning to intricately capture the relationship between a review and the contextual information of its respective reviewer. Through a comprehensive experimental study, we demonstrate that our approach surpasses several baselines across all reported metrics. Augmented by a comparative analysis, we showcase the efficacy of our method in elevating personalized review ranking. The implications of our research extend beyond the travel domain, with potential applications in other sectors where personalized review ranking is paramount, such as online e-commerce platforms.
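One plausible instantiation of such a contrastive objective is in-batch InfoNCE between a review embedding and its reviewer's context embedding; the encoders, temperature, and dimensions below are illustrative assumptions, not the paper's specification:

```python
import torch
import torch.nn.functional as F

def info_nce(review_emb, context_emb, tau=0.07):
    """Pull each review toward its own reviewer's context and push it
    away from other reviewers' contexts in the same batch."""
    r = F.normalize(review_emb, dim=-1)
    c = F.normalize(context_emb, dim=-1)
    logits = r @ c.T / tau                 # (B, B) similarity matrix
    labels = torch.arange(len(r))          # positives on the diagonal
    return F.cross_entropy(logits, labels)

B, d = 32, 128
loss = info_nce(torch.randn(B, d), torch.randn(B, d))
print(loss.item())
```

At ranking time, the learned review-context similarity would then serve as the personalized score for ordering a reviewer's candidate reviews.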
|
http://arxiv.org/pdf/2407.00787v1
|
[
"Reda Igebaria",
"Eran Fainman",
"Sarai Mizrachi",
"Moran Beladev",
"Fengjun Wang"
] |
2024-06-30T18:04:16Z
|
2024-06-30T18:04:16Z
|
2407.00779
|
Towards Faster Matrix Diagonalization with Graph Isomorphism Networks
and the AlphaZero Framework
|
In this paper, we introduce innovative approaches for accelerating the Jacobi method for matrix diagonalization, specifically through the formulation of large-matrix diagonalization as a Semi-Markov Decision Process and small-matrix diagonalization as a Markov Decision Process. Furthermore, we examine the potential of utilizing a scalable architecture across matrices of different sizes. During a short training period, our method achieved a significant reduction in the number of steps required for diagonalization and exhibited efficient inference capabilities. Importantly, this approach demonstrated possible scalability to large-sized matrices, indicating its potential for wide-ranging applicability. Upon training completion, we obtain action-state probabilities and transition graphs, which depict transitions between different states. These outputs not only provide insights into the diagonalization process but also pave the way for cost savings pertinent to large-scale matrices. The advancements made in this research enhance the efficacy and scalability of matrix diagonalization, opening up new possibilities for deployment in practical applications in scientific and engineering domains.
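For reference, the classical baseline such a learned pivot policy competes against is the greedy Jacobi rule: repeatedly rotate away the largest off-diagonal entry of a symmetric matrix. A minimal sketch of that baseline step:

```python
import numpy as np

def jacobi_step(A):
    """One classical Jacobi rotation: annihilate the largest off-diagonal
    entry of a symmetric matrix A. An RL agent would learn the pivot
    choice instead of this greedy rule."""
    n = A.shape[0]
    off = np.abs(A - np.diag(np.diag(A)))
    i, j = np.unravel_index(off.argmax(), off.shape)
    theta = 0.5 * np.arctan2(2.0 * A[i, j], A[j, j] - A[i, i])
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(n)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G.T @ A @ G          # similarity transform zeroes A[i, j]

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.0]])
for _ in range(10):
    A = jacobi_step(A)
print(np.round(A, 4))           # approaches diag(eigenvalues)
```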
|
http://arxiv.org/pdf/2407.00779v1
|
[
"Geigh Zollicoffer",
"Kshitij Bhatta",
"Manish Bhattarai",
"Phil Romero",
"Christian F. A. Negre",
"Anders M. N. Niklasson",
"Adetokunbo Adedoyin"
] |
2024-06-30T17:45:01Z
|
2024-06-30T17:45:01Z
|
2407.00774
|
Advantages of quantum support vector machine in cross-domain
classification of quantum states
|
In this study, we apply quantum machine learning to cross-domain classification to address the entanglement versus separability paradigm and assess the resulting quantum advantages. We further demonstrate the efficient classification of Bell diagonal states into zero and non-zero discord classes. The inherited structure of quantum states and its relation to a particular class of quantum states are exploited to intuitively approach the classification of different-domain testing states, referred to here as cross-domain classification. In addition, we extend our analysis to evaluate the robustness of our model for the analyzed problem using random unitary transformations. Using numerical analysis, our results clearly demonstrate the potential of the quantum support vector machine (QSVM) for classifying quantum states across the multidimensional Hilbert space.
|
http://arxiv.org/pdf/2407.00774v1
|
[
"Diksha Sharma",
"Vivek Balasaheb Sabale",
"Parvinder Singh",
"Atul Kumar"
] |
2024-06-30T17:24:55Z
|
2024-06-30T17:24:55Z
|