arxiv_id
stringlengths 7
11
| title
stringlengths 7
243
| abstract
stringlengths 3
2.79k
| link
stringlengths 21
49
| authors
listlengths 1
451
| updated
stringlengths 20
20
| published
stringlengths 20
20
|
---|---|---|---|---|---|---|
2406.17490
|
BricksRL: A Platform for Democratizing Robotics and Reinforcement
Learning Research and Education with LEGO
|
We present BricksRL, a platform designed to democratize access to robotics for reinforcement learning research and education. BricksRL facilitates the creation, design, and training of custom LEGO robots in the real world by interfacing them with the TorchRL library for reinforcement learning agents. The integration of TorchRL with the LEGO hubs, via Bluetooth bidirectional communication, enables state-of-the-art reinforcement learning training on GPUs for a wide variety of LEGO builds. This offers a flexible and cost-efficient approach for scaling and also provides a robust infrastructure for robot-environment-algorithm communication. We present various experiments across tasks and robot configurations, providing built plans and training results. Furthermore, we demonstrate that inexpensive LEGO robots can be trained end-to-end in the real world to achieve simple tasks, with training times typically under 120 minutes on a normal laptop. Moreover, we show how users can extend the capabilities, exemplified by the successful integration of non-LEGO sensors. By enhancing accessibility to both robotics and reinforcement learning, BricksRL establishes a strong foundation for democratized robotic learning in research and educational settings.
|
http://arxiv.org/pdf/2406.17490v1
|
[
"Sebastian Dittert",
"Vincent Moens",
"Gianni De Fabritiis"
] |
2024-06-25T12:17:44Z
|
2024-06-25T12:17:44Z
|
2406.17477
|
Towards Federated Low-Rank Adaptation with Rank-Heterogeneous
Communication
|
Low-rank adaptation (LoRA) is an attractive alternative of adapting full weights for the federated fine-tuning of large pretrained models, which can significantly reduce the memory and communication burden. In principle, federated LoRA can provide an effective mean to allocate different resources to each client by tuning ranks for each client, which can be useful in achieving a better communication-performance tradeoff. We find, however, that the empirical performance of LoRA is highly unstable with respect to such rank-heterogeneity, severely limiting the applicability to the scenarios where it is desirable or even required to allocate nonuniform communication bandwidth to each client due to constrained total bandwidth. Our investigation reveals that the root cause of this instability is the zero-padding-based aggregation strategy adopted in conventional federated LoRA frameworks, which causes the information from high rank clients to get diluted during the aggregation process. To address this issue, we propose a new replication-based padding strategy, which allows us to better leverage the information from clients with high-quality datasets. This method ensures that valuable information from high rank clients is retained during the aggregation process, accelerating the convergence speed and enhancing the overall prediction quality of the global model.
|
http://arxiv.org/pdf/2406.17477v1
|
[
"Yuji Byun",
"Jaeho Lee"
] |
2024-06-25T11:49:33Z
|
2024-06-25T11:49:33Z
|
2406.07266
|
Efficient 3D Molecular Generation with Flow Matching and Scale Optimal
Transport
|
Generative models for 3D drug design have gained prominence recently for their potential to design ligands directly within protein pockets. Current approaches, however, often suffer from very slow sampling times or generate molecules with poor chemical validity. Addressing these limitations, we propose Semla, a scalable E(3)-equivariant message passing architecture. We further introduce a molecular generation model, SemlaFlow, which is trained using flow matching along with scale optimal transport, a novel extension of equivariant optimal transport. Our model produces state-of-the-art results on benchmark datasets with just 100 sampling steps. Crucially, SemlaFlow samples high quality molecules with as few as 20 steps, corresponding to a two order-of-magnitude speed-up compared to state-of-the-art, without sacrificing performance. Furthermore, we highlight limitations of current evaluation methods for 3D generation and propose new benchmark metrics for unconditional molecular generators. Finally, using these new metrics, we compare our model's ability to generate high quality samples against current approaches and further demonstrate SemlaFlow's strong performance.
|
http://arxiv.org/pdf/2406.07266v2
|
[
"Ross Irwin",
"Alessandro Tibo",
"Jon Paul Janet",
"Simon Olsson"
] |
2024-06-25T11:42:09Z
|
2024-06-11T13:51:51Z
|
2406.17475
|
Performative Debias with Fair-exposure Optimization Driven by Strategic
Agents in Recommender Systems
|
Data bias, e.g., popularity impairs the dynamics of two-sided markets within recommender systems. This overshadows the less visible but potentially intriguing long-tail items that could capture user interest. Despite the abundance of research surrounding this issue, it still poses challenges and remains a hot topic in academic circles. Along this line, in this paper, we developed a re-ranking approach in dynamic settings with fair-exposure optimization driven by strategic agents. Designed for the producer side, the execution of agents assumes content creators can modify item features based on strategic incentives to maximize their exposure. This iterative process entails an end-to-end optimization, employing differentiable ranking operators that simultaneously target accuracy and fairness. Joint objectives ensure the performance of recommendations while enhancing the visibility of tail items. We also leveraged the performativity nature of predictions to illustrate how strategic learning influences content creators to shift towards fairness efficiently, thereby incentivizing features of tail items. Through comprehensive experiments on both public and industrial datasets, we have substantiated the effectiveness and dominance of the proposed method especially on unveiling the potential of tail items.
|
http://arxiv.org/abs/2406.17475v1
|
[
"Zhichen Xiang",
"Hongke Zhao",
"Chuang Zhao",
"Ming He",
"Jianping Fan"
] |
2024-06-25T11:41:50Z
|
2024-06-25T11:41:50Z
|
2406.17470
|
Dynamic Scheduling for Vehicle-to-Vehicle Communications Enhanced
Federated Learning
|
Leveraging the computing and sensing capabilities of vehicles, vehicular federated learning (VFL) has been applied to edge training for connected vehicles. The dynamic and interconnected nature of vehicular networks presents unique opportunities to harness direct vehicle-to-vehicle (V2V) communications, enhancing VFL training efficiency. In this paper, we formulate a stochastic optimization problem to optimize the VFL training performance, considering the energy constraints and mobility of vehicles, and propose a V2V-enhanced dynamic scheduling (VEDS) algorithm to solve it. The model aggregation requirements of VFL and the limited transmission time due to mobility result in a stepwise objective function, which presents challenges in solving the problem. We thus propose a derivative-based drift-plus-penalty method to convert the long-term stochastic optimization problem to an online mixed integer nonlinear programming (MINLP) problem, and provide a theoretical analysis to bound the performance gap between the online solution and the offline optimal solution. Further analysis of the scheduling priority reduces the original problem into a set of convex optimization problems, which are efficiently solved using the interior-point method. Experimental results demonstrate that compared with the state-of-the-art benchmarks, the proposed algorithm enhances the image classification accuracy on the CIFAR-10 dataset by 3.18% and reduces the average displacement errors on the Argoverse trajectory prediction dataset by 10.21%.
|
http://arxiv.org/pdf/2406.17470v1
|
[
"Jintao Yan",
"Tan Chen",
"Yuxuan Sun",
"Zhaojun Nan",
"Sheng Zhou",
"Zhisheng Niu"
] |
2024-06-25T11:15:53Z
|
2024-06-25T11:15:53Z
|
2406.17467
|
Early learning of the optimal constant solution in neural networks and
humans
|
Deep neural networks learn increasingly complex functions over the course of training. Here, we show both empirically and theoretically that learning of the target function is preceded by an early phase in which networks learn the optimal constant solution (OCS) - that is, initial model responses mirror the distribution of target labels, while entirely ignoring information provided in the input. Using a hierarchical category learning task, we derive exact solutions for learning dynamics in deep linear networks trained with bias terms. Even when initialized to zero, this simple architectural feature induces substantial changes in early dynamics. We identify hallmarks of this early OCS phase and illustrate how these signatures are observed in deep linear networks and larger, more complex (and nonlinear) convolutional neural networks solving a hierarchical learning task based on MNIST and CIFAR10. We explain these observations by proving that deep linear networks necessarily learn the OCS during early learning. To further probe the generality of our results, we train human learners over the course of three days on the category learning task. We then identify qualitative signatures of this early OCS phase in terms of the dynamics of true negative (correct-rejection) rates. Surprisingly, we find the same early reliance on the OCS in the behaviour of human learners. Finally, we show that learning of the OCS can emerge even in the absence of bias terms and is equivalently driven by generic correlations in the input data. Overall, our work suggests the OCS as a universal learning principle in supervised, error-corrective learning, and the mechanistic reasons for its prevalence.
|
http://arxiv.org/pdf/2406.17467v1
|
[
"Jirko Rubruck",
"Jan P. Bauer",
"Andrew Saxe",
"Christopher Summerfield"
] |
2024-06-25T11:12:52Z
|
2024-06-25T11:12:52Z
|
2303.15057
|
Towards Unbiased Calibration using Meta-Regularization
|
Model miscalibration has been frequently identified in modern deep neural networks. Recent work aims to improve model calibration directly through a differentiable calibration proxy. However, the calibration produced is often biased due to the binning mechanism. In this work, we propose to learn better-calibrated models via meta-regularization, which has two components: (1) gamma network (gamma-net), a meta learner that outputs sample-wise gamma values (continuous variable) for Focal loss for regularizing the backbone network; (2) smooth expected calibration error (SECE), a Gaussian-kernel based, unbiased, and differentiable surrogate to ECE that enables the smooth optimization of gamma-Net. We evaluate the effectiveness of the proposed approach in regularizing neural networks towards improved and unbiased calibration on three computer vision datasets. We empirically demonstrate that: (a) learning sample-wise gamma as continuous variables can effectively improve calibration; (b) SECE smoothly optimizes gamma-net towards unbiased and robust calibration with respect to the binning schemes; and (c) the combination of gamma-net and SECE achieves the best calibration performance across various calibration metrics while retaining very competitive predictive performance as compared to multiple recently proposed methods.
|
http://arxiv.org/pdf/2303.15057v3
|
[
"Cheng Wang",
"Jacek Golebiowski"
] |
2024-06-25T11:00:05Z
|
2023-03-27T10:00:50Z
|
2406.10918
|
Embodied Question Answering via Multi-LLM Systems
|
Embodied Question Answering (EQA) is an important problem, which involves an agent exploring the environment to answer user queries. In the existing literature, EQA has exclusively been studied in single-agent scenarios, where exploration can be time-consuming and costly. In this work, we consider EQA in a multi-agent framework involving multiple large language models (LLM) based agents independently answering queries about a household environment. To generate one answer for each query, we use the individual responses to train a Central Answer Model (CAM) that aggregates responses for a robust answer. Using CAM, we observe a $50%$ higher EQA accuracy when compared against aggregation methods for ensemble LLM, such as voting schemes and debates. CAM does not require any form of agent communication, alleviating it from the associated costs. We ablate CAM with various nonlinear (neural network, random forest, decision tree, XGBoost) and linear (logistic regression classifier, SVM) algorithms. Finally, we present a feature importance analysis for CAM via permutation feature importance (PFI), quantifying CAMs reliance on each independent agent and query context.
|
http://arxiv.org/pdf/2406.10918v3
|
[
"Bhrij Patel",
"Vishnu Sashank Dorbala",
"Dinesh Manocha",
"Amrit Singh Bedi"
] |
2024-06-25T10:50:09Z
|
2024-06-16T12:46:40Z
|
2311.04517
|
High-Performance Hybrid Algorithm for Minimum Sum-of-Squares Clustering
of Infinitely Tall Data
|
This paper introduces a novel formulation of the clustering problem, namely the Minimum Sum-of-Squares Clustering of Infinitely Tall Data (MSSC-ITD), and presents HPClust, an innovative set of hybrid parallel approaches for its effective solution. By utilizing modern high-performance computing techniques, HPClust enhances key clustering metrics: effectiveness, computational efficiency, and scalability. In contrast to vanilla data parallelism, which only accelerates processing time through the MapReduce framework, our approach unlocks superior performance by leveraging the multi-strategy competitive-cooperative parallelism and intricate properties of the objective function landscape. Unlike other available algorithms that struggle to scale, our algorithm is inherently parallel in nature, improving solution quality through increased scalability and parallelism, and outperforming even advanced algorithms designed for small and medium-sized datasets. Our evaluation of HPClust, featuring four parallel strategies, demonstrates its superiority over traditional and cutting-edge methods by offering better performance in the key metrics. These results also show that parallel processing not only enhances the clustering efficiency, but the accuracy as well. Additionally, we explore the balance between computational efficiency and clustering quality, providing insights into optimal parallel strategies based on dataset specifics and resource availability. This research advances our understanding of parallelism in clustering algorithms, demonstrating that a judicious hybridization of advanced parallel approaches yields optimal results for MSSC-ITD. Experiments on synthetic data further confirm HPClust's exceptional scalability and robustness to noise.
|
http://arxiv.org/abs/2311.04517v5
|
[
"Ravil Mussabayev",
"Rustam Mussabayev"
] |
2024-06-25T10:49:06Z
|
2023-11-08T08:02:52Z
|
2402.09910
|
DE-COP: Detecting Copyrighted Content in Language Models Training Data
|
How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content was included in training. DE-COP's core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model's training cutoff, along with their paraphrases. Our experiments show that DE-COP surpasses the prior best method by 9.6% in detection performance (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models where prior methods give approximately 4% accuracy. The code and datasets are available at https://github.com/LeiLiLab/DE-COP.
|
http://arxiv.org/pdf/2402.09910v2
|
[
"André V. Duarte",
"Xuandong Zhao",
"Arlindo L. Oliveira",
"Lei Li"
] |
2024-06-25T10:33:41Z
|
2024-02-15T12:17:15Z
|
2401.16843
|
Evaluating ML-Based Anomaly Detection Across Datasets of Varied
Integrity: A Case Study
|
Cybersecurity remains a critical challenge in the digital age, with network traffic flow anomaly detection being a key pivotal instrument in the fight against cyber threats. In this study, we address the prevalent issue of data integrity in network traffic datasets, which are instrumental in developing machine learning (ML) models for anomaly detection. We introduce two refined versions of the CICIDS-2017 dataset, NFS-2023-nTE and NFS-2023-TE, processed using NFStream to ensure methodologically sound flow expiration and labeling. Our research contrasts the performance of the Random Forest (RF) algorithm across the original CICIDS-2017, its refined counterparts WTMC-2021 and CRiSIS-2022, and our NFStream-generated datasets, in both binary and multi-class classification contexts. We observe that the RF model exhibits exceptional robustness, achieving consistent high-performance metrics irrespective of the underlying dataset quality, which prompts a critical discussion on the actual impact of data integrity on ML efficacy. Our study underscores the importance of continual refinement and methodological rigor in dataset generation for network security research. As the landscape of network threats evolves, so must the tools and techniques used to detect and analyze them.
|
http://arxiv.org/abs/2401.16843v2
|
[
"Adrian Pekar",
"Richard Jozsa"
] |
2024-06-25T10:27:26Z
|
2024-01-30T09:34:15Z
|
2406.04163
|
Essentially Sharp Estimates on the Entropy Regularization Error in
Discrete Discounted Markov Decision Processes
|
We study the error introduced by entropy regularization of infinite-horizon discrete discounted Markov decision processes. We show that this error decreases exponentially in the inverse regularization strength both in a weighted KL-divergence and in value with a problem-specific exponent. We provide a lower bound matching our upper bound up to a polynomial factor. Our proof relies on the correspondence of the solutions of entropy-regularized Markov decision processes with gradient flows of the unregularized reward with respect to a Riemannian metric common in natural policy gradient methods. Further, this correspondence allows us to identify the limit of the gradient flow as the generalized maximum entropy optimal policy, thereby characterizing the implicit bias of the Kakade gradient flow which corresponds to a time-continuous version of the natural policy gradient method. We use this to show that for entropy-regularized natural policy gradient methods the overall error decays exponentially in the square root of the number of iterations improving existing sublinear guarantees.
|
http://arxiv.org/pdf/2406.04163v2
|
[
"Johannes Müller",
"Semih Cayci"
] |
2024-06-25T10:26:49Z
|
2024-06-06T15:20:37Z
|
2403.11795
|
Low-Cost Privacy-Aware Decentralized Learning
|
This paper introduces ZIP-DL, a novel privacy-aware decentralized learning (DL) algorithm that exploits correlated noise to provide strong privacy protection against a local adversary while yielding efficient convergence guarantees for a low communication cost. The progressive neutralization of the added noise during the distributed aggregation process results in ZIP-DL fostering a high model accuracy under privacy guarantees. ZIP-DL further uses a single communication round between each gradient descent, thus minimizing communication overhead. We provide theoretical guarantees for both convergence speed and privacy guarantees, thereby making ZIP-DL applicable to practical scenarios. Our extensive experimental study shows that ZIP-DL significantly outperforms the state-of-the-art in terms of vulnerability/accuracy trade-off. In particular, ZIP-DL (i) reduces the efficacy of linkability attacks by up to 52 percentage points compared to baseline DL, (ii) improves accuracy by up to 37 percent w.r.t. the state-of-the-art privacy-preserving mechanism operating under the same threat model as ours, when configured to provide the same protection against membership inference attacks, and (iii) reduces communication by up to 10.5x against the same competitor for the same level of protection.
|
http://arxiv.org/pdf/2403.11795v2
|
[
"Sayan Biswas",
"Davide Frey",
"Romaric Gaudel",
"Anne-Marie Kermarrec",
"Dimitri Lerévérend",
"Rafael Pires",
"Rishi Sharma",
"François Taïani"
] |
2024-06-25T10:20:49Z
|
2024-03-18T13:53:17Z
|
2406.17433
|
Mind the Graph When Balancing Data for Fairness or Robustness
|
Failures of fairness or robustness in machine learning predictive settings can be due to undesired dependencies between covariates, outcomes and auxiliary factors of variation. A common strategy to mitigate these failures is data balancing, which attempts to remove those undesired dependencies. In this work, we define conditions on the training distribution for data balancing to lead to fair or robust models. Our results display that, in many cases, the balanced distribution does not correspond to selectively removing the undesired dependencies in a causal graph of the task, leading to multiple failure modes and even interference with other mitigation techniques such as regularization. Overall, our results highlight the importance of taking the causal graph into account before performing data balancing.
|
http://arxiv.org/pdf/2406.17433v1
|
[
"Jessica Schrouff",
"Alexis Bellot",
"Amal Rannen-Triki",
"Alan Malek",
"Isabela Albuquerque",
"Arthur Gretton",
"Alexander D'Amour",
"Silvia Chiappa"
] |
2024-06-25T10:16:19Z
|
2024-06-25T10:16:19Z
|
2406.17427
|
A Critical Analysis of the Theoretical Framework of the Extreme Learning
Machine
|
Despite the number of successful applications of the Extreme Learning Machine (ELM), we show that its underlying foundational principles do not have a rigorous mathematical justification. Specifically, we refute the proofs of two main statements, and we also create a dataset that provides a counterexample to the ELM learning algorithm and explain its design, which leads to many such counterexamples. Finally, we provide alternative statements of the foundations, which justify the efficiency of ELM in some theoretical cases.
|
http://arxiv.org/pdf/2406.17427v1
|
[
"Irina Perfilievaa",
"Nicolas Madrid",
"Manuel Ojeda-Aciego",
"Piotr Artiemjew",
"Agnieszka Niemczynowicz"
] |
2024-06-25T10:06:07Z
|
2024-06-25T10:06:07Z
|
2406.17425
|
CuDA2: An approach for Incorporating Traitor Agents into Cooperative
Multi-Agent Systems
|
Cooperative Multi-Agent Reinforcement Learning (CMARL) strategies are well known to be vulnerable to adversarial perturbations. Previous works on adversarial attacks have primarily focused on white-box attacks that directly perturb the states or actions of victim agents, often in scenarios with a limited number of attacks. However, gaining complete access to victim agents in real-world environments is exceedingly difficult. To create more realistic adversarial attacks, we introduce a novel method that involves injecting traitor agents into the CMARL system. We model this problem as a Traitor Markov Decision Process (TMDP), where traitors cannot directly attack the victim agents but can influence their formation or positioning through collisions. In TMDP, traitors are trained using the same MARL algorithm as the victim agents, with their reward function set as the negative of the victim agents' reward. Despite this, the training efficiency for traitors remains low because it is challenging for them to directly associate their actions with the victim agents' rewards. To address this issue, we propose the Curiosity-Driven Adversarial Attack (CuDA2) framework. CuDA2 enhances the efficiency and aggressiveness of attacks on the specified victim agents' policies while maintaining the optimal policy invariance of the traitors. Specifically, we employ a pre-trained Random Network Distillation (RND) module, where the extra reward generated by the RND module encourages traitors to explore states unencountered by the victim agents. Extensive experiments on various scenarios from SMAC demonstrate that our CuDA2 framework offers comparable or superior adversarial attack capabilities compared to other baselines.
|
http://arxiv.org/pdf/2406.17425v1
|
[
"Zhen Chen",
"Yong Liao",
"Youpeng Zhao",
"Zipeng Dai",
"Jian Zhao"
] |
2024-06-25T09:59:31Z
|
2024-06-25T09:59:31Z
|
2406.17418
|
SE-VGAE: Unsupervised Disentangled Representation Learning for
Interpretable Architectural Layout Design Graph Generation
|
Despite the suitability of graphs for capturing the relational structures inherent in architectural layout designs, there is a notable dearth of research on interpreting architectural design space using graph-based representation learning and exploring architectural design graph generation. Concurrently, disentangled representation learning in graph generation faces challenges such as node permutation invariance and representation expressiveness. To address these challenges, we introduce an unsupervised disentangled representation learning framework, Style-based Edge-augmented Variational Graph Auto-Encoder (SE-VGAE), aiming to generate architectural layout in the form of attributed adjacency multi-graphs while prioritizing representation disentanglement. The framework is designed with three alternative pipelines, each integrating a transformer-based edge-augmented encoder, a latent space disentanglement module, and a style-based decoder. These components collectively facilitate the decomposition of latent factors influencing architectural layout graph generation, enhancing generation fidelity and diversity. We also provide insights into optimizing the framework by systematically exploring graph feature augmentation schemes and evaluating their effectiveness for disentangling architectural layout representation through extensive experiments. Additionally, we contribute a new benchmark large-scale architectural layout graph dataset extracted from real-world floor plan images to facilitate the exploration of graph data-based architectural design representation space interpretation. This study pioneered disentangled representation learning for the architectural layout graph generation. The code and dataset of this study will be open-sourced.
|
http://arxiv.org/pdf/2406.17418v1
|
[
"Jielin Chen",
"Rudi Stouffs"
] |
2024-06-25T09:40:47Z
|
2024-06-25T09:40:47Z
|
2202.09289
|
A Numerical Proof of Shell Model Turbulence Closure
|
The development of turbulence closure models, parametrizing the influence of small non-resolved scales on the dynamics of large resolved ones, is an outstanding theoretical challenge with vast applicative relevance. We present a closure, based on deep recurrent neural networks, that quantitatively reproduces, within statistical errors, Eulerian and Lagrangian structure functions and the intermittent statistics of the energy cascade, including those of subgrid fluxes. To achieve high-order statistical accuracy, and thus a stringent statistical test, we employ shell models of turbulence. Our results encourage the development of similar approaches for 3D Navier-Stokes turbulence.
|
http://arxiv.org/abs/2202.09289v2
|
[
"Giulio Ortali",
"Alessandro Corbetta",
"Gianluigi Rozza",
"Federico Toschi"
] |
2024-06-25T09:40:14Z
|
2022-02-18T16:31:57Z
|
2401.10710
|
Classification with neural networks with quadratic decision functions
|
Neural networks with quadratic decision functions have been introduced as alternatives to standard neural networks with affine linear ones. They are advantageous when the objects or classes to be identified are compact and of basic geometries like circles, ellipses etc. In this paper we investigate the use of such ansatz functions for classification. In particular we test and compare the algorithm on the MNIST dataset for classification of handwritten digits and for classification of subspecies. We also show, that the implementation can be based on the neural network structure in the software Tensorflow and Keras, respectively.
|
http://arxiv.org/pdf/2401.10710v2
|
[
"Leon Frischauf",
"Otmar Scherzer",
"Cong Shi"
] |
2024-06-25T09:37:40Z
|
2024-01-19T14:18:32Z
|
2407.01596
|
Maze Discovery using Multiple Robots via Federated Learning
|
This work presents a use case of federated learning (FL) applied to discovering a maze with LiDAR sensors-equipped robots. Goal here is to train classification models to accurately identify the shapes of grid areas within two different square mazes made up with irregular shaped walls. Due to the use of different shapes for the walls, a classification model trained in one maze that captures its structure does not generalize for the other. This issue is resolved by adopting FL framework between the robots that explore only one maze so that the collective knowledge allows them to operate accurately in the unseen maze. This illustrates the effectiveness of FL in real-world applications in terms of enhancing classification accuracy and robustness in maze discovery tasks.
|
http://arxiv.org/pdf/2407.01596v1
|
[
"Kalpana Ranasinghe",
"H. P. Madushanka",
"Rafaela Scaciota",
"Sumudu Samarakoon",
"Mehdi Bennis"
] |
2024-06-25T09:34:11Z
|
2024-06-25T09:34:11Z
|
2403.04666
|
Telecom Language Models: Must They Be Large?
|
The increasing interest in Large Language Models (LLMs) within the telecommunications sector underscores their potential to revolutionize operational efficiency. However, the deployment of these sophisticated models is often hampered by their substantial size and computational demands, raising concerns about their viability in resource-constrained environments. Addressing this challenge, recent advancements have seen the emergence of small language models that surprisingly exhibit performance comparable to their larger counterparts in many tasks, such as coding and common-sense reasoning. Phi-2, a compact yet powerful model, exemplifies this new wave of efficient small language models. This paper conducts a comprehensive evaluation of Phi-2's intrinsic understanding of the telecommunications domain. Recognizing the scale-related limitations, we enhance Phi-2's capabilities through a Retrieval-Augmented Generation approach, meticulously integrating an extensive knowledge base specifically curated with telecom standard specifications. The enhanced Phi-2 model demonstrates a profound improvement in accuracy, answering questions about telecom standards with a precision that closely rivals the more resource-intensive GPT-3.5. The paper further explores the refined capabilities of Phi-2 in addressing problem-solving scenarios within the telecom sector, highlighting its potential and limitations.
|
http://arxiv.org/pdf/2403.04666v2
|
[
"Nicola Piovesan",
"Antonio De Domenico",
"Fadhel Ayed"
] |
2024-06-25T09:28:43Z
|
2024-03-07T17:13:12Z
|
2406.17404
|
Make Some Noise: Unlocking Language Model Parallel Inference Capability
through Noisy Training
|
Existing speculative decoding methods typically require additional model structure and training processes to assist the model for draft token generation. This makes the migration of acceleration methods to the new model more costly and more demanding on device memory. To address this problem, we propose the Make Some Noise (MSN) training framework as a replacement for the supervised fine-tuning stage of the large language model. The training method simply introduces some noise at the input for the model to learn the denoising task. It significantly enhances the parallel decoding capability of the model without affecting the original task capability. In addition, we propose a tree-based retrieval-augmented Jacobi (TR-Jacobi) decoding strategy to further improve the inference speed of MSN models. Experiments in both the general and code domains have shown that MSN can improve inference speed by 2.3-2.7x times without compromising model performance. The MSN model also achieves comparable acceleration ratios to the SOTA model with additional model structure on Spec-Bench.
|
http://arxiv.org/pdf/2406.17404v1
|
[
"Yixuan Wang",
"Xianzhen Luo",
"Fuxuan Wei",
"Yijun Liu",
"Qingfu Zhu",
"Xuanyu Zhang",
"Qing Yang",
"Dongliang Xu",
"Wanxiang Che"
] |
2024-06-25T09:25:39Z
|
2024-06-25T09:25:39Z
|
2406.17399
|
GradCheck: Analyzing classifier guidance gradients for conditional
diffusion sampling
|
To sample from an unconditionally trained Denoising Diffusion Probabilistic Model (DDPM), classifier guidance adds conditional information during sampling, but the gradients from classifiers, especially those not trained on noisy images, are often unstable. This study conducts a gradient analysis comparing robust and non-robust classifiers, as well as multiple gradient stabilization techniques. Experimental results demonstrate that these techniques significantly improve the quality of class-conditional samples for non-robust classifiers by providing more stable and informative classifier guidance gradients. The findings highlight the importance of gradient stability in enhancing the performance of classifier guidance, especially on non-robust classifiers.
|
http://arxiv.org/pdf/2406.17399v1
|
[
"Philipp Vaeth",
"Alexander M. Fruehwald",
"Benjamin Paassen",
"Magda Gregorova"
] |
2024-06-25T09:23:25Z
|
2024-06-25T09:23:25Z
|
2406.17822
|
AI for the prediction of early stages of Alzheimer's disease from
neuroimaging biomarkers -- A narrative review of a growing field
|
Objectives: The objectives of this narrative review are to summarize the current state of AI applications in neuroimaging for early Alzheimer's disease (AD) prediction and to highlight the potential of AI techniques in improving early AD diagnosis, prognosis, and management. Methods: We conducted a narrative review of studies using AI techniques applied to neuroimaging data for early AD prediction. We examined single-modality studies using structural MRI and PET imaging, as well as multi-modality studies integrating multiple neuroimaging techniques and biomarkers. Furthermore, they reviewed longitudinal studies that model AD progression and identify individuals at risk of rapid decline. Results: Single-modality studies using structural MRI and PET imaging have demonstrated high accuracy in classifying AD and predicting progression from mild cognitive impairment (MCI) to AD. Multi-modality studies, integrating multiple neuroimaging techniques and biomarkers, have shown improved performance and robustness compared to single-modality approaches. Longitudinal studies have highlighted the value of AI in modeling AD progression and identifying individuals at risk of rapid decline. However, challenges remain in data standardization, model interpretability, generalizability, clinical integration, and ethical considerations. Conclusion: AI techniques applied to neuroimaging data have the potential to improve early AD diagnosis, prognosis, and management. Addressing challenges related to data standardization, model interpretability, generalizability, clinical integration, and ethical considerations is crucial for realizing the full potential of AI in AD research and clinical practice. Collaborative efforts among researchers, clinicians, and regulatory agencies are needed to develop reliable, robust, and ethical AI tools that can benefit AD patients and society.
|
http://arxiv.org/abs/2406.17822v1
|
[
"Thorsten Rudroff",
"Oona Rainio",
"Riku Klén"
] |
2024-06-25T09:22:53Z
|
2024-06-25T09:22:53Z
|
2404.09871
|
Explainable Online Unsupervised Anomaly Detection for Cyber-Physical
Systems via Causal Discovery from Time Series
|
Online unsupervised detection of anomalies is crucial to guarantee the correct operation of cyber-physical systems and the safety of humans interacting with them. State-of-the-art approaches based on deep learning via neural networks achieve outstanding performance at anomaly recognition, evaluating the discrepancy between a normal model of the system (with no anomalies) and the real-time stream of sensor time series. However, large training data and time are typically required, and explainability is still a challenge to identify the root of the anomaly and implement predictive maintainance. In this paper, we use causal discovery to learn a normal causal graph of the system, and we evaluate the persistency of causal links during real-time acquisition of sensor data to promptly detect anomalies. On two benchmark anomaly detection datasets, we show that our method has higher training efficiency, outperforms the accuracy of state-of-the-art neural architectures and correctly identifies the sources of >10 different anomalies. The code is at https://github.com/Isla-lab/causal_anomaly_detection.
|
http://arxiv.org/pdf/2404.09871v3
|
[
"Daniele Meli"
] |
2024-06-25T09:10:46Z
|
2024-04-15T15:42:12Z
|
2406.17386
|
Double Momentum Method for Lower-Level Constrained Bilevel Optimization
|
Bilevel optimization (BO) has recently gained prominence in many machine learning applications due to its ability to capture the nested structure inherent in these problems. Recently, many hypergradient methods have been proposed as effective solutions for solving large-scale problems. However, current hypergradient methods for the lower-level constrained bilevel optimization (LCBO) problems need very restrictive assumptions, namely, where optimality conditions satisfy the differentiability and invertibility conditions and lack a solid analysis of the convergence rate. What's worse, existing methods require either double-loop updates, which are sometimes less efficient. To solve this problem, in this paper, we propose a new hypergradient of LCBO leveraging the theory of nonsmooth implicit function theorem instead of using the restrive assumptions. In addition, we propose a textit{single-loop single-timescale} algorithm based on the double-momentum method and adaptive step size method and prove it can return a $(delta, epsilon)$-stationary point with $tilde{mathcal{O}}(d_2^2epsilon^{-4})$ iterations. Experiments on two applications demonstrate the effectiveness of our proposed method.
|
http://arxiv.org/pdf/2406.17386v1
|
[
"Wanli Shi",
"Yi Chang",
"Bin Gu"
] |
2024-06-25T09:05:22Z
|
2024-06-25T09:05:22Z
|
2308.08634
|
FedPop: Federated Population-based Hyperparameter Tuning
|
Federated Learning (FL) is a distributed machine learning (ML) paradigm, in which multiple clients collaboratively train ML models without centralizing their local data. Similar to conventional ML pipelines, the client local optimization and server aggregation procedure in FL are sensitive to the hyperparameter (HP) selection. Despite extensive research on tuning HPs for centralized ML, these methods yield suboptimal results when employed in FL. This is mainly because their "training-after-tuning" framework is unsuitable for FL with limited client computation power. While some approaches have been proposed for HP-Tuning in FL, they are limited to the HPs for client local updates. In this work, we propose a novel HP-tuning algorithm, called Federated Population-based Hyperparameter Tuning (FedPop), to address this vital yet challenging problem. FedPop employs population-based evolutionary algorithms to optimize the HPs, which accommodates various HP types at both client and server sides. Compared with prior tuning methods, FedPop employs an online "tuning-while-training" framework, offering computational efficiency and enabling the exploration of a broader HP search space. Our empirical validation on the common FL benchmarks and complex real-world FL datasets demonstrates the effectiveness of the proposed method, which substantially outperforms the concurrent state-of-the-art HP tuning methods for FL.
|
http://arxiv.org/pdf/2308.08634v2
|
[
"Haokun Chen",
"Denis Krompass",
"Jindong Gu",
"Volker Tresp"
] |
2024-06-25T09:04:08Z
|
2023-08-16T19:14:52Z
|
2401.04364
|
SoK: Facial Deepfake Detectors
|
Deepfakes have rapidly emerged as a profound and serious threat to society, primarily due to their ease of creation and dissemination. This situation has triggered an accelerated development of deepfake detection technologies. However, many existing detectors rely heavily on lab-generated datasets for validation, which may not effectively prepare them for novel, emerging, and real-world deepfake techniques. In this paper, we conduct an extensive and comprehensive review and analysis of the latest state-of-the-art deepfake detectors, evaluating them against several critical criteria. These criteria facilitate the categorization of these detectors into 4 high-level groups and 13 fine-grained sub-groups, all aligned with a unified standard conceptual framework. This classification and framework offer deep and practical insights into the factors that affect detector efficacy. We assess the generalizability of 16 leading detectors across various standard attack scenarios, including black-box, white-box, and gray-box settings. Our systematized analysis and experimentation lay the groundwork for a deeper understanding of deepfake detectors and their generalizability, paving the way for future research focused on creating detectors adept at countering various attack scenarios. Additionally, this work offers insights for developing more proactive defenses against deepfakes.
|
http://arxiv.org/pdf/2401.04364v2
|
[
"Binh M. Le",
"Jiwon Kim",
"Shahroz Tariq",
"Kristen Moore",
"Alsharif Abuadbba",
"Simon S. Woo"
] |
2024-06-25T09:02:42Z
|
2024-01-09T05:32:22Z
|
2406.17381
|
Forget but Recall: Incremental Latent Rectification in Continual
Learning
|
Intrinsic capability to continuously learn a changing data stream is a desideratum of deep neural networks (DNNs). However, current DNNs suffer from catastrophic forgetting, which hinders remembering past knowledge. To mitigate this issue, existing Continual Learning (CL) approaches either retain exemplars for replay, regularize learning, or allocate dedicated capacity for new tasks. This paper investigates an unexplored CL direction for incremental learning called Incremental Latent Rectification or ILR. In a nutshell, ILR learns to propagate with correction (or rectify) the representation from the current trained DNN backward to the representation space of the old task, where performing predictive decisions is easier. This rectification process only employs a chain of small representation mapping networks, called rectifier units. Empirical experiments on several continual learning benchmarks, including CIFAR10, CIFAR100, and Tiny ImageNet, demonstrate the effectiveness and potential of this novel CL direction compared to existing representative CL methods.
|
http://arxiv.org/pdf/2406.17381v1
|
[
"Nghia D. Nguyen",
"Hieu Trung Nguyen",
"Ang Li",
"Hoang Pham",
"Viet Anh Nguyen",
"Khoa D. Doan"
] |
2024-06-25T08:57:47Z
|
2024-06-25T08:57:47Z
|
2401.02736
|
On the numerical reliability of nonsmooth autodiff: a MaxPool case study
|
This paper considers the reliability of automatic differentiation (AD) for neural networks involving the nonsmooth MaxPool operation. We investigate the behavior of AD across different precision levels (16, 32, 64 bits) and convolutional architectures (LeNet, VGG, and ResNet) on various datasets (MNIST, CIFAR10, SVHN, and ImageNet). Although AD can be incorrect, recent research has shown that it coincides with the derivative almost everywhere, even in the presence of nonsmooth operations (such as MaxPool and ReLU). On the other hand, in practice, AD operates with floating-point numbers (not real numbers), and there is, therefore, a need to explore subsets on which AD can be numerically incorrect. These subsets include a bifurcation zone (where AD is incorrect over reals) and a compensation zone (where AD is incorrect over floating-point numbers but correct over reals). Using SGD for the training process, we study the impact of different choices of the nonsmooth Jacobian for the MaxPool function on the precision of 16 and 32 bits. These findings suggest that nonsmooth MaxPool Jacobians with lower norms help maintain stable and efficient test accuracy, whereas those with higher norms can result in instability and decreased performance. We also observe that the influence of MaxPool's nonsmooth Jacobians on learning can be reduced by using batch normalization, Adam-like optimizers, or increasing the precision level.
|
http://arxiv.org/pdf/2401.02736v2
|
[
"Ryan Boustany"
] |
2024-06-25T08:55:16Z
|
2024-01-05T10:14:39Z
|
2402.14103
|
Computational-Statistical Gaps for Improper Learning in Sparse Linear
Regression
|
We study computational-statistical gaps for improper learning in sparse linear regression. More specifically, given $n$ samples from a $k$-sparse linear model in dimension $d$, we ask what is the minimum sample complexity to efficiently (in time polynomial in $d$, $k$, and $n$) find a potentially dense estimate for the regression vector that achieves non-trivial prediction error on the $n$ samples. Information-theoretically this can be achieved using $Theta(k log (d/k))$ samples. Yet, despite its prominence in the literature, there is no polynomial-time algorithm known to achieve the same guarantees using less than $Theta(d)$ samples without additional restrictions on the model. Similarly, existing hardness results are either restricted to the proper setting, in which the estimate must be sparse as well, or only apply to specific algorithms. We give evidence that efficient algorithms for this task require at least (roughly) $Omega(k^2)$ samples. In particular, we show that an improper learning algorithm for sparse linear regression can be used to solve sparse PCA problems (with a negative spike) in their Wishart form, in regimes in which efficient algorithms are widely believed to require at least $Omega(k^2)$ samples. We complement our reduction with low-degree and statistical query lower bounds for the sparse PCA problems from which we reduce. Our hardness results apply to the (correlated) random design setting in which the covariates are drawn i.i.d. from a mean-zero Gaussian distribution with unknown covariance.
|
http://arxiv.org/pdf/2402.14103v2
|
[
"Rares-Darius Buhai",
"Jingqiu Ding",
"Stefan Tiegel"
] |
2024-06-25T08:50:33Z
|
2024-02-21T19:55:01Z
|
2406.17374
|
Generalizability of experimental studies
|
Experimental studies are a cornerstone of machine learning (ML) research. A common, but often implicit, assumption is that the results of a study will generalize beyond the study itself, e.g. to new data. That is, there is a high probability that repeating the study under different conditions will yield similar results. Despite the importance of the concept, the problem of measuring generalizability remains open. This is probably due to the lack of a mathematical formalization of experimental studies. In this paper, we propose such a formalization and develop a quantifiable notion of generalizability. This notion allows to explore the generalizability of existing studies and to estimate the number of experiments needed to achieve the generalizability of new studies. To demonstrate its usefulness, we apply it to two recently published benchmarks to discern generalizable and non-generalizable results. We also publish a Python module that allows our analysis to be repeated for other experimental studies.
|
http://arxiv.org/pdf/2406.17374v1
|
[
"Federico Matteucci",
"Vadim Arzamasov",
"Jose Cribeiro-Ramallo",
"Marco Heyden",
"Konstantin Ntounas",
"Klemens Böhm"
] |
2024-06-25T08:49:07Z
|
2024-06-25T08:49:07Z
|
2406.14868
|
Direct Multi-Turn Preference Optimization for Language Agents
|
Adapting Large Language Models (LLMs) for agent tasks is critical in developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation with the alleviation of compounding errors, offering a means to directly optimize Reinforcement Learning (RL) objectives. However, applying DPO to multi-turn tasks presents challenges due to the inability to cancel the partition function. Overcoming this obstacle involves making the partition function independent of the current state and addressing length disparities between preferred and dis-preferred trajectories. In this light, we replace the policy constraint with the state-action occupancy measure constraint in the RL objective and add length normalization to the Bradley-Terry model, yielding a novel loss function named DMPO for multi-turn agent tasks with theoretical explanations. Extensive experiments on three multi-turn agent task datasets confirm the effectiveness and superiority of the DMPO loss.
|
http://arxiv.org/pdf/2406.14868v2
|
[
"Wentao Shi",
"Mengqi Yuan",
"Junkang Wu",
"Qifan Wang",
"Fuli Feng"
] |
2024-06-25T08:44:24Z
|
2024-06-21T05:13:20Z
|
2406.17819
|
Automatically Adaptive Conformal Risk Control
|
Science and technology have a growing need for effective mechanisms that ensure reliable, controlled performance from black-box machine learning algorithms. These performance guarantees should ideally hold conditionally on the input-that is the performance guarantees should hold, at least approximately, no matter what the input. However, beyond stylized discrete groupings such as ethnicity and gender, the right notion of conditioning can be difficult to define. For example, in problems such as image segmentation, we want the uncertainty to reflect the intrinsic difficulty of the test sample, but this may be difficult to capture via a conditioning event. Building on the recent work of Gibbs et al. [2023], we propose a methodology for achieving approximate conditional control of statistical risks-the expected value of loss functions-by adapting to the difficulty of test samples. Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning. We apply this framework to various regression and segmentation tasks, enabling finer-grained control over model performance and demonstrating that by continuously monitoring and adjusting these parameters, we can achieve superior precision compared to conventional risk-control methods.
|
http://arxiv.org/pdf/2406.17819v1
|
[
"Vincent Blot",
"Anastasios N Angelopoulos",
"Michael I Jordan",
"Nicolas J-B Brunel"
] |
2024-06-25T08:29:32Z
|
2024-06-25T08:29:32Z
|
2402.13418
|
Efficiently Predicting Mutational Effect on Homologous Proteins by
Evolution Encoding
|
Predicting protein properties is paramount for biological and medical advancements. Current protein engineering mutates on a typical protein, called the wild-type, to construct a family of homologous proteins and study their properties. Yet, existing methods easily neglect subtle mutations, failing to capture the effect on the protein properties. To this end, we propose EvolMPNN, Evolution-aware Message Passing Neural Network, an efficient model to learn evolution-aware protein embeddings. EvolMPNN samples sets of anchor proteins, computes evolutionary information by means of residues and employs a differentiable evolution-aware aggregation scheme over these sampled anchors. This way, EvolMPNN can efficiently utilise a novel message-passing method to capture the mutation effect on proteins with respect to the anchor proteins. Afterwards, the aggregated evolution-aware embeddings are integrated with sequence embeddings to generate final comprehensive protein embeddings. Our model shows up to 6.4% better than state-of-the-art methods and attains 36X inference speedup in comparison with large pre-trained models. Code and models are available at https://github.com/zhiqiangzhongddu/EvolMPNN.
|
http://arxiv.org/pdf/2402.13418v2
|
[
"Zhiqiang Zhong",
"Davide Mottin"
] |
2024-06-25T08:26:33Z
|
2024-02-20T23:06:21Z
|
2402.13414
|
Harnessing Large Language Models as Post-hoc Correctors
|
As Machine Learning (ML) models grow in size and demand higher-quality training data, the expenses associated with re-training and fine-tuning these models are escalating rapidly. Inspired by recent impressive achievements of Large Language Models (LLMs) in different fields, this paper delves into the question: can LLMs efficiently improve an ML's performance at a minimal cost? We show that, through our proposed training-free framework LlmCorr, an LLM can work as a post-hoc corrector to propose corrections for the predictions of an arbitrary ML model. In particular, we form a contextual knowledge database by incorporating the dataset's label information and the ML model's predictions on the validation dataset. Leveraging the in-context learning capability of LLMs, we ask the LLM to summarise the instances in which the ML model makes mistakes and the correlation between primary predictions and true labels. Following this, the LLM can transfer its acquired knowledge to suggest corrections for the ML model's predictions. Our experimental results on text analysis and the challenging molecular predictions show that model improves the performance of a number of models by up to 39%.
|
http://arxiv.org/pdf/2402.13414v2
|
[
"Zhiqiang Zhong",
"Kuangyu Zhou",
"Davide Mottin"
] |
2024-06-25T08:26:19Z
|
2024-02-20T22:50:41Z
|
2406.17352
|
Development of a digital tool for monitoring the behaviour of pre-weaned
calves using accelerometer neck-collars
|
Automatic monitoring of calf behaviour is a promising way of assessing animal welfare from their first week on farms. This study aims to (i) develop machine learning models from accelerometer data to classify the main behaviours of pre-weaned calves and (ii) set up a digital tool for monitoring the behaviour of pre-weaned calves from the models' prediction. Thirty pre-weaned calves were equipped with a 3-D accelerometer attached to a neck-collar for two months and filmed simultaneously. The behaviours were annotated, resulting in 27.4 hours of observation aligned with the accelerometer data. The time-series were then split into 3 seconds windows. Two machine learning models were tuned using data from 80% of the calves: (i) a Random Forest model to classify between active and inactive behaviours using a set of 11 hand-craft features [model 1] and (ii) a RidgeClassifierCV model to classify between lying, running, drinking milk and other behaviours using ROCKET features [model 2]. The performance of the models was tested using data from the remaining 20% of the calves. Model 1 achieved a balanced accuracy of 0.92. Model 2 achieved a balanced accuracy of 0.84. Behavioural metrics such as daily activity ratio and episodes of running, lying, drinking milk, and other behaviours expressed over time were deduced from the predictions. All the development was finally embedded into a Python dashboard so that the individual calf metrics could be displayed directly from the raw accelerometer files.
|
http://arxiv.org/pdf/2406.17352v1
|
[
"Oshana Dissanayake",
"Sarah E. Mcpherson",
"Joseph Allyndrée",
"Emer Kennedy",
"Pádraig Cunningham",
"Lucile Riaboff"
] |
2024-06-25T08:11:22Z
|
2024-06-25T08:11:22Z
|
2406.17818
|
Temporal Prototype-Aware Learning for Active Voltage Control on Power
Distribution Networks
|
Active Voltage Control (AVC) on the Power Distribution Networks (PDNs) aims to stabilize the voltage levels to ensure efficient and reliable operation of power systems. With the increasing integration of distributed energy resources, recent efforts have explored employing multi-agent reinforcement learning (MARL) techniques to realize effective AVC. Existing methods mainly focus on the acquisition of short-term AVC strategies, i.e., only learning AVC within the short-term training trajectories of a singular diurnal cycle. However, due to the dynamic nature of load demands and renewable energy, the operation states of real-world PDNs may exhibit significant distribution shifts across varying timescales (e.g., daily and seasonal changes). This can render those short-term strategies suboptimal or even obsolete when performing continuous AVC over extended periods. In this paper, we propose a novel temporal prototype-aware learning method, abbreviated as TPA, to learn time-adaptive AVC under short-term training trajectories. At the heart of TPA are two complementary components, namely multi-scale dynamic encoder and temporal prototype-aware policy, that can be readily incorporated into various MARL methods. The former component integrates a stacked transformer network to learn underlying temporal dependencies at different timescales of the PDNs, while the latter implements a learnable prototype matching mechanism to construct a dedicated AVC policy that can dynamically adapt to the evolving operation states. Experimental results on the AVC benchmark with different PDN sizes demonstrate that the proposed TPA surpasses the state-of-the-art counterparts not only in terms of control performance but also by offering model transferability. Our code is available at https://github.com/Canyizl/TPA-for-AVC.
|
http://arxiv.org/abs/2406.17818v1
|
[
"Feiyang Xu",
"Shunyu Liu",
"Yunpeng Qing",
"Yihe Zhou",
"Yuwen Wang",
"Mingli Song"
] |
2024-06-25T08:07:00Z
|
2024-06-25T08:07:00Z
|
2406.17346
|
Stacked Confusion Reject Plots (SCORE)
|
Machine learning is more and more applied in critical application areas like health and driver assistance. To minimize the risk of wrong decisions, in such applications it is necessary to consider the certainty of a classification to reject uncertain samples. An established tool for this are reject curves that visualize the trade-off between the number of rejected samples and classification performance metrics. We argue that common reject curves are too abstract and hard to interpret by non-experts. We propose Stacked Confusion Reject Plots (SCORE) that offer a more intuitive understanding of the used data and the classifier's behavior. We present example plots on artificial Gaussian data to document the different options of SCORE and provide the code as a Python package.
|
http://arxiv.org/pdf/2406.17346v1
|
[
"Stephan Hasler",
"Lydia Fischer"
] |
2024-06-25T07:59:29Z
|
2024-06-25T07:59:29Z
|
2406.17341
|
Generative Modelling of Structurally Constrained Graphs
|
Graph diffusion models have emerged as state-of-the-art techniques in graph generation, yet integrating domain knowledge into these models remains challenging. Domain knowledge is particularly important in real-world scenarios, where invalid generated graphs hinder deployment in practical applications. Unconstrained and conditioned graph generative models fail to guarantee such domain-specific structural properties. We present ConStruct, a novel framework that allows for hard-constraining graph diffusion models to incorporate specific properties, such as planarity or acyclicity. Our approach ensures that the sampled graphs remain within the domain of graphs that verify the specified property throughout the entire trajectory in both the forward and reverse processes. This is achieved by introducing a specific edge-absorbing noise model and a new projector operator. ConStruct demonstrates versatility across several structural and edge-deletion invariant constraints and achieves state-of-the-art performance for both synthetic benchmarks and attributed real-world datasets. For example, by leveraging planarity in digital pathology graph datasets, the proposed method outperforms existing baselines and enhances generated data validity by up to 71.1 percentage points.
|
http://arxiv.org/pdf/2406.17341v1
|
[
"Manuel Madeira",
"Clement Vignac",
"Dorina Thanou",
"Pascal Frossard"
] |
2024-06-25T07:54:32Z
|
2024-06-25T07:54:32Z
|
2406.17338
|
Robustly Optimized Deep Feature Decoupling Network for Fatty Liver
Diseases Detection
|
Current medical image classification efforts mainly aim for higher average performance, often neglecting the balance between different classes. This can lead to significant differences in recognition accuracy between classes and obvious recognition weaknesses. Without the support of massive data, deep learning faces challenges in fine-grained classification of fatty liver. In this paper, we propose an innovative deep learning framework that combines feature decoupling and adaptive adversarial training. Firstly, we employ two iteratively compressed decouplers to supervised decouple common features and specific features related to fatty liver in abdominal ultrasound images. Subsequently, the decoupled features are concatenated with the original image after transforming the color space and are fed into the classifier. During adversarial training, we adaptively adjust the perturbation and balance the adversarial strength by the accuracy of each class. The model will eliminate recognition weaknesses by correctly classifying adversarial samples, thus improving recognition robustness. Finally, the accuracy of our method improved by 4.16%, achieving 82.95%. As demonstrated by extensive experiments, our method is a generalized learning framework that can be directly used to eliminate the recognition weaknesses of any classifier while improving its average performance. Code is available at https://github.com/HP-ML/MICCAI2024.
|
http://arxiv.org/pdf/2406.17338v1
|
[
"Peng Huang",
"Shu Hu",
"Bo Peng",
"Jiashu Zhang",
"Xi Wu",
"Xin Wang"
] |
2024-06-25T07:50:09Z
|
2024-06-25T07:50:09Z
|
2407.02521
|
Performance Comparison of Deep RL Algorithms for Mixed Traffic
Cooperative Lane-Changing
|
Lane-changing (LC) is a challenging scenario for connected and automated vehicles (CAVs) because of the complex dynamics and high uncertainty of the traffic environment. This challenge can be handled by deep reinforcement learning (DRL) approaches, leveraging their data-driven and model-free nature. Our previous work proposed a cooperative lane-changing in mixed traffic (CLCMT) mechanism based on TD3 to facilitate an optimal lane-changing strategy. This study enhances the current CLCMT mechanism by considering both the uncertainty of the human-driven vehicles (HVs) and the microscopic interactions between HVs and CAVs. The state-of-the-art (SOTA) DRL algorithms including DDPG, TD3, SAC, and PPO are utilized to deal with the formulated MDP with continuous actions. Performance comparison among the four DRL algorithms demonstrates that DDPG, TD3, and PPO algorithms can deal with uncertainty in traffic environments and learn well-performed LC strategies in terms of safety, efficiency, comfort, and ecology. The PPO algorithm outperforms the other three algorithms, regarding a higher reward, fewer exploration mistakes and crashes, and a more comfortable and ecology LC strategy. The improvements promise CLCMT mechanism greater advantages in the LC motion planning of CAVs.
|
http://arxiv.org/pdf/2407.02521v1
|
[
"Xue Yao",
"Shengren Hou",
"Serge P. Hoogendoorn",
"Simeon C. Calvert"
] |
2024-06-25T07:49:25Z
|
2024-06-25T07:49:25Z
|
2404.17990
|
TabVFL: Improving Latent Representation in Vertical Federated Learning
|
Autoencoders are popular neural networks that are able to compress high dimensional data to extract relevant latent information. TabNet is a state-of-the-art neural network model designed for tabular data that utilizes an autoencoder architecture for training. Vertical Federated Learning (VFL) is an emerging distributed machine learning paradigm that allows multiple parties to train a model collaboratively on vertically partitioned data while maintaining data privacy. The existing design of training autoencoders in VFL is to train a separate autoencoder in each participant and aggregate the latent representation later. This design could potentially break important correlations between feature data of participating parties, as each autoencoder is trained on locally available features while disregarding the features of others. In addition, traditional autoencoders are not specifically designed for tabular data, which is ubiquitous in VFL settings. Moreover, the impact of client failures during training on the model robustness is under-researched in the VFL scene. In this paper, we propose TabVFL, a distributed framework designed to improve latent representation learning using the joint features of participants. The framework (i) preserves privacy by mitigating potential data leakage with the addition of a fully-connected layer, (ii) conserves feature correlations by learning one latent representation vector, and (iii) provides enhanced robustness against client failures during training phase. Extensive experiments on five classification datasets show that TabVFL can outperform the prior work design, with 26.12% of improvement on f1-score.
|
http://arxiv.org/pdf/2404.17990v2
|
[
"Mohamed Rashad",
"Zilong Zhao",
"Jeremie Decouchant",
"Lydia Y. Chen"
] |
2024-06-25T07:46:30Z
|
2024-04-27T19:40:35Z
|
2406.17335
|
A Thorough Performance Benchmarking on Lightweight Embedding-based
Recommender Systems
|
Since the creation of the Web, recommender systems (RSs) have been an indispensable mechanism in information filtering. State-of-the-art RSs primarily depend on categorical features, which ecoded by embedding vectors, resulting in excessively large embedding tables. To prevent over-parameterized embedding tables from harming scalability, both academia and industry have seen increasing efforts in compressing RS embeddings. However, despite the prosperity of lightweight embedding-based RSs (LERSs), a wide diversity is seen in evaluation protocols, resulting in obstacles when relating LERS performance to real-world usability. Moreover, despite the common goal of lightweight embeddings, LERSs are evaluated with a single choice between the two main recommendation tasks -- collaborative filtering and content-based recommendation. This lack of discussions on cross-task transferability hinders the development of unified, more scalable solutions. Motivated by these issues, this study investigates various LERSs' performance, efficiency, and cross-task transferability via a thorough benchmarking process. Additionally, we propose an efficient embedding compression method using magnitude pruning, which is an easy-to-deploy yet highly competitive baseline that outperforms various complex LERSs. Our study reveals the distinct performance of LERSs across the two tasks, shedding light on their effectiveness and generalizability. To support edge-based recommendations, we tested all LERSs on a Raspberry Pi 4, where the efficiency bottleneck is exposed. Finally, we conclude this paper with critical summaries of LERS performance, model selection suggestions, and underexplored challenges around LERSs for future research. To encourage future research, we publish source codes and artifacts at href{this link}{https://github.com/chenxing1999/recsys-benchmark}.
|
http://arxiv.org/pdf/2406.17335v1
|
[
"Hung Vinh Tran",
"Tong Chen",
"Quoc Viet Hung Nguyen",
"Zi Huang",
"Lizhen Cui",
"Hongzhi Yin"
] |
2024-06-25T07:45:00Z
|
2024-06-25T07:45:00Z
|
2303.17879
|
CoSMo: a Framework to Instantiate Conditioned Process Simulation Models
|
Process simulation is gaining attention for its ability to assess potential performance improvements and risks associated with business process changes. The existing literature presents various techniques, generally grounded in process models discovered from event log data or built upon deep learning algorithms. These techniques have specific strengths and limitations. Traditional data-driven approaches offer increased interpretability, while deep learning-based excel at generalizing changes across large event logs. However, the practical application of deep learning faces challenges related to managing stochasticity and integrating information for what-if analysis. This paper introduces a novel recurrent neural architecture tailored to discover COnditioned process Simulation MOdels (CoSMo) based on user-based constraints or any other nature of a-priori knowledge. This architecture facilitates the simulation of event logs that adhere to specific constraints by incorporating declarative-based rules into the learning phase as an attempt to fill the gap of incorporating information into deep learning models to perform what-if analysis. Experimental validation illustrates CoSMo's efficacy in simulating event logs while adhering to predefined declarative conditions, emphasizing both control-flow and data-flow perspectives.
|
http://arxiv.org/pdf/2303.17879v4
|
[
"Rafael S. Oyamada",
"Gabriel M. Tavares",
"Sylvio Barbon Junior",
"Paolo Ceravolo"
] |
2024-06-25T07:44:31Z
|
2023-03-31T08:26:18Z
|
2301.13584
|
Straight-Through meets Sparse Recovery: the Support Exploration
Algorithm
|
The {it straight-through estimator} (STE) is commonly used to optimize quantized neural networks, yet its contexts of effective performance are still unclear despite empirical successes.To make a step forward in this comprehension, we apply STE to a well-understood problem: {it sparse support recovery}. We introduce the {it Support Exploration Algorithm} (SEA), a novel algorithm promoting sparsity, and we analyze its performance in support recovery (a.k.a. model selection) problems. SEA explores more supports than the state-of-the-art, leading to superior performance in experiments, especially when the columns of $A$ are strongly coherent.The theoretical analysis considers recovery guarantees when the linear measurements matrix $A$ satisfies the {it Restricted Isometry Property} (RIP).The sufficient conditions of recovery are comparable but more stringent than those of the state-of-the-art in sparse support recovery. Their significance lies mainly in their applicability to an instance of the STE.
|
http://arxiv.org/pdf/2301.13584v3
|
[
"Mimoun Mohamed",
"François Malgouyres",
"Valentin Emiya",
"Caroline Chaux"
] |
2024-06-25T07:42:54Z
|
2023-01-31T12:31:13Z
|
2406.12193
|
Adaptive Collaborative Correlation Learning-based Semi-Supervised
Multi-Label Feature Selection
|
Semi-supervised multi-label feature selection has recently been developed to solve the curse of dimensionality problem in high-dimensional multi-label data with certain samples missing labels. Although many efforts have been made, most existing methods use a predefined graph approach to capture the sample similarity or the label correlation. In this manner, the presence of noise and outliers within the original feature space can undermine the reliability of the resulting sample similarity graph. It also fails to precisely depict the label correlation due to the existence of unknown labels. Besides, these methods only consider the discriminative power of selected features, while neglecting their redundancy. In this paper, we propose an Adaptive Collaborative Correlation lEarning-based Semi-Supervised Multi-label Feature Selection (Access-MFS) method to address these issues. Specifically, a generalized regression model equipped with an extended uncorrelated constraint is introduced to select discriminative yet irrelevant features and maintain consistency between predicted and ground-truth labels in labeled data, simultaneously. Then, the instance correlation and label correlation are integrated into the proposed regression model to adaptively learn both the sample similarity graph and the label similarity graph, which mutually enhance feature selection performance. Extensive experimental results demonstrate the superiority of the proposed Access-MFS over other state-of-the-art methods.
|
http://arxiv.org/pdf/2406.12193v2
|
[
"Yanyong Huang",
"Li Yang",
"Dongjie Wang",
"Ke Li",
"Xiuwen Yi",
"Fengmao Lv",
"Tianrui Li"
] |
2024-06-25T07:25:23Z
|
2024-06-18T01:47:38Z
|
2407.00087
|
ARES: Alternating Reinforcement Learning and Supervised Fine-Tuning for
Enhanced Multi-Modal Chain-of-Thought Reasoning Through Diverse AI Feedback
|
Large Multimodal Models (LMMs) excel at comprehending human instructions and demonstrate remarkable results across a broad spectrum of tasks. Reinforcement Learning from Human Feedback (RLHF) and AI Feedback (RLAIF) further refine LLMs by aligning them with specific preferences. These methods primarily use ranking-based feedback for entire generations. With advanced AI models (Teacher), such as GPT-4 and Claude 3 Opus, we can request various types of detailed feedback that are expensive for humans to provide. We propose a two-stage algorithm ARES that Alternates REinforcement Learning (RL) and Supervised Fine-Tuning (SFT). First, we request the Teacher to score how much each sentence contributes to solving the problem in a Chain-of-Thought (CoT). This sentence-level feedback allows us to consider individual valuable segments, providing more granular rewards for the RL procedure. Second, we ask the Teacher to correct the wrong reasoning after the RL stage. The RL procedure requires massive efforts for hyperparameter tuning and often generates errors like repetitive words and incomplete sentences. With the correction feedback, we stabilize the RL fine-tuned model through SFT. We conduct experiments on multi-model dataset ScienceQA and A-OKVQA to demonstrate the effectiveness of our proposal. ARES rationale reasoning achieves around 70% win rate against baseline models judged by GPT-4o. Additionally, we observe that the improved rationale reasoning leads to a 2.5% increase in inference answer accuracy on average for the multi-modal datasets.
|
http://arxiv.org/pdf/2407.00087v1
|
[
"Ju-Seung Byun",
"Jiyun Chun",
"Jihyung Kil",
"Andrew Perrault"
] |
2024-06-25T07:20:11Z
|
2024-06-25T07:20:11Z
|
2406.17323
|
XAMI -- A Benchmark Dataset for Artefact Detection in XMM-Newton Optical
Images
|
Reflected or scattered light produce artefacts in astronomical observations that can negatively impact the scientific study. Hence, automated detection of these artefacts is highly beneficial, especially with the increasing amounts of data gathered. Machine learning methods are well-suited to this problem, but currently there is a lack of annotated data to train such approaches to detect artefacts in astronomical observations. In this work, we present a dataset of images from the XMM-Newton space telescope Optical Monitoring camera showing different types of artefacts. We hand-annotated a sample of 1000 images with artefacts which we use to train automated ML methods. We further demonstrate techniques tailored for accurate detection and masking of artefacts using instance segmentation. We adopt a hybrid approach, combining knowledge from both convolutional neural networks (CNNs) and transformer-based models and use their advantages in segmentation. The presented method and dataset will advance artefact detection in astronomical observations by providing a reproducible baseline. All code and data are made available (https://github.com/ESA-Datalabs/XAMI-model and https://github.com/ESA-Datalabs/XAMI-dataset).
|
http://arxiv.org/pdf/2406.17323v1
|
[
"Elisabeta-Iulia Dima",
"Pablo Gómez",
"Sandor Kruk",
"Peter Kretschmar",
"Simon Rosen",
"Călin-Adrian Popa"
] |
2024-06-25T07:14:15Z
|
2024-06-25T07:14:15Z
|
2406.17322
|
ALPBench: A Benchmark for Active Learning Pipelines on Tabular Data
|
In settings where only a budgeted amount of labeled data can be afforded, active learning seeks to devise query strategies for selecting the most informative data points to be labeled, aiming to enhance learning algorithms' efficiency and performance. Numerous such query strategies have been proposed and compared in the active learning literature. However, the community still lacks standardized benchmarks for comparing the performance of different query strategies. This particularly holds for the combination of query strategies with different learning algorithms into active learning pipelines and examining the impact of the learning algorithm choice. To close this gap, we propose ALPBench, which facilitates the specification, execution, and performance monitoring of active learning pipelines. It has built-in measures to ensure evaluations are done reproducibly, saving exact dataset splits and hyperparameter settings of used algorithms. In total, ALPBench consists of 86 real-world tabular classification datasets and 5 active learning settings, yielding 430 active learning problems. To demonstrate its usefulness and broad compatibility with various learning algorithms and query strategies, we conduct an exemplary study evaluating 9 query strategies paired with 8 learning algorithms in 2 different settings. We provide ALPBench here: https://github.com/ValentinMargraf/ActiveLearningPipelines.
|
http://arxiv.org/pdf/2406.17322v1
|
[
"Valentin Margraf",
"Marcel Wever",
"Sandra Gilhuber",
"Gabriel Marques Tavares",
"Thomas Seidl",
"Eyke Hüllermeier"
] |
2024-06-25T07:14:14Z
|
2024-06-25T07:14:14Z
|
2106.14642
|
Expert Q-learning: Deep Reinforcement Learning with Coarse State Values
from Offline Expert Examples
|
In this article, we propose a novel algorithm for deep reinforcement learning named Expert Q-learning. Expert Q-learning is inspired by Dueling Q-learning and aims at incorporating semi-supervised learning into reinforcement learning through splitting Q-values into state values and action advantages. We require that an offline expert assesses the value of a state in a coarse manner using three discrete values. An expert network is designed in addition to the Q-network, which updates each time following the regular offline minibatch update whenever the expert example buffer is not empty. Using the board game Othello, we compare our algorithm with the baseline Q-learning algorithm, which is a combination of Double Q-learning and Dueling Q-learning. Our results show that Expert Q-learning is indeed useful and more resistant to the overestimation bias. The baseline Q-learning algorithm exhibits unstable and suboptimal behavior in non-deterministic settings, whereas Expert Q-learning demonstrates more robust performance with higher scores, illustrating that our algorithm is indeed suitable to integrate state values from expert examples into Q-learning.
|
http://arxiv.org/abs/2106.14642v5
|
[
"Li Meng",
"Anis Yazidi",
"Morten Goodwin",
"Paal Engelstad"
] |
2024-06-25T07:08:34Z
|
2021-06-28T12:41:45Z
|
2310.07710
|
A Resilient and Accessible Distribution-Preserving Watermark for Large
Language Models
|
Watermarking techniques offer a promising way to identify machine-generated content via embedding covert information into the contents generated from language models. A challenge in the domain lies in preserving the distribution of original generated content after watermarking. Our research extends and improves upon existing watermarking framework, placing emphasis on the importance of a textbf{Di}stribution-textbf{P}reserving (DiP) watermark. Contrary to the current strategies, our proposed DiPmark simultaneously preserves the original token distribution during watermarking (distribution-preserving), is detectable without access to the language model API and prompts (accessible), and is provably robust to moderate changes of tokens (resilient). DiPmark operates by selecting a random set of tokens prior to the generation of a word, then modifying the token distribution through a distribution-preserving reweight function to enhance the probability of these selected tokens during the sampling process. Extensive empirical evaluation on various language models and tasks demonstrates our approach's distribution-preserving property, accessibility, and resilience, making it a effective solution for watermarking tasks that demand impeccable quality preservation.
|
http://arxiv.org/pdf/2310.07710v2
|
[
"Yihan Wu",
"Zhengmian Hu",
"Junfeng Guo",
"Hongyang Zhang",
"Heng Huang"
] |
2024-06-25T07:08:17Z
|
2023-10-11T17:57:35Z
|
2406.17316
|
A review of unsupervised learning in astronomy
|
This review summarizes popular unsupervised learning methods, and gives an overview of their past, current, and future uses in astronomy. Unsupervised learning aims to organise the information content of a dataset, in such a way that knowledge can be extracted. Traditionally this has been achieved through dimensionality reduction techniques that aid the ranking of a dataset, for example through principal component analysis or by using auto-encoders, or simpler visualisation of a high dimensional space, for example through the use of a self organising map. Other desirable properties of unsupervised learning include the identification of clusters, i.e. groups of similar objects, which has traditionally been achieved by the k-means algorithm and more recently through density-based clustering such as HDBSCAN. More recently, complex frameworks have emerged, that chain together dimensionality reduction and clustering methods. However, no dataset is fully unknown. Thus, nowadays a lot of research has been directed towards self-supervised and semi-supervised methods that stand to gain from both supervised and unsupervised learning.
|
http://arxiv.org/abs/2406.17316v1
|
[
"Sotiria Fotopoulou"
] |
2024-06-25T06:57:47Z
|
2024-06-25T06:57:47Z
|
2406.17308
|
Improving Realized LGD Approximation: A Novel Framework with XGBoost for
Handling Missing Cash-Flow Data
|
The scope for the accurate calculation of the Loss Given Default (LGD) parameter is comprehensive in terms of financial data. In this research, we aim to explore methods for improving the approximation of realized LGD in conditions of limited access to the cash-flow data. We enhance the performance of the method which relies on the differences between exposure values (delta outstanding approach) by employing machine learning (ML) techniques. The research utilizes the data from the mortgage portfolio of one of the European countries and assumes a close resemblance to similar economic contexts. It incorporates non-financial variables and macroeconomic data related to the housing market, improving the accuracy of loss severity approximation. The proposed methodology attempts to mitigate the country-specific (related to the local legal) or portfolio-specific factors in aim to show the general advantage of applying ML techniques, rather than case-specific relation. We developed an XGBoost model that does not rely on cash-flow data yet enhances the accuracy of realized LGD estimation compared to results obtained with the delta outstanding approach. A novel aspect of our work is the detailed exploration of the delta outstanding approach and the methodology for addressing conditions of limited access to cash-flow data through machine learning models.
|
http://arxiv.org/pdf/2406.17308v1
|
[
"Zuzanna Kostecka",
"Robert Ślepaczuk"
] |
2024-06-25T06:41:09Z
|
2024-06-25T06:41:09Z
|
2101.11932
|
Approximation Theory of Tree Tensor Networks: Tensorized Multivariate
Functions
|
We study the approximation of multivariate functions with tensor networks (TNs). The main conclusion of this work is an answer to the following two questions: ``What are the approximation capabilities of TNs?" and "What is an appropriate model class of functions that can be approximated with TNs?" To answer the former, we show that TNs can (near to) optimally replicate $h$-uniform and $h$-adaptive approximation, for any smoothness order of the target function. Tensor networks thus exhibit universal expressivity w.r.t. isotropic, anisotropic and mixed smoothness spaces that is comparable with more general neural networks families such as deep rectified linear unit (ReLU) networks. Put differently, TNs have the capacity to (near to) optimally approximate many function classes -- without being adapted to the particular class in question. To answer the latter, as a candidate model class we consider approximation classes of TNs and show that these are (quasi-)Banach spaces, that many types of classical smoothness spaces are continuously embedded into said approximation classes and that TN approximation classes are themselves not embedded in any classical smoothness space.
|
http://arxiv.org/pdf/2101.11932v5
|
[
"Mazen Ali",
"Anthony Nouy"
] |
2024-06-25T06:24:52Z
|
2021-01-28T11:09:40Z
|
2306.02568
|
Latent Optimal Paths by Gumbel Propagation for Variational Bayesian
Dynamic Programming
|
We propose the stochastic optimal path which solves the classical optimal path problem by a probability-softening solution. This unified approach transforms a wide range of DP problems into directed acyclic graphs in which all paths follow a Gibbs distribution. We show the equivalence of the Gibbs distribution to a message-passing algorithm by the properties of the Gumbel distribution and give all the ingredients required for variational Bayesian inference of a latent path, namely Bayesian dynamic programming (BDP). We demonstrate the usage of BDP in the latent space of variational autoencoders (VAEs) and propose the BDP-VAE which captures structured sparse optimal paths as latent variables. This enables end-to-end training for generative tasks in which models rely on unobserved structural information. At last, we validate the behavior of our approach and showcase its applicability in two real-world applications: text-to-speech and singing voice synthesis. Our implementation code is available at url{https://github.com/XinleiNIU/LatentOptimalPathsBayesianDP}.
|
http://arxiv.org/pdf/2306.02568v3
|
[
"Xinlei Niu",
"Christian Walder",
"Jing Zhang",
"Charles Patrick Martin"
] |
2024-06-25T06:13:38Z
|
2023-06-05T03:47:59Z
|
2406.17298
|
Towards Efficient and Scalable Training of Differentially Private Deep
Learning
|
Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The major drawback of DP-SGD is the drop in utility which prior work has comprehensively studied. However, in practice another major drawback that hinders the large-scale deployment is the significantly higher computational cost. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP and benchmark methods that aim at reducing the cost. Among these are more efficient implementations of DP-SGD and training with lower precision. Finally, we study the scaling behaviour using up to 80 GPUs.
|
http://arxiv.org/pdf/2406.17298v1
|
[
"Sebastian Rodriguez Beltran",
"Marlon Tobaben",
"Niki Loppi",
"Antti Honkela"
] |
2024-06-25T06:04:58Z
|
2024-06-25T06:04:58Z
|
2406.17296
|
BlockLLM: Memory-Efficient Adaptation of LLMs by Selecting and
Optimizing the Right Coordinate Blocks
|
Training large language models (LLMs) for pretraining or adapting to new tasks and domains has become increasingly critical as their applications expand. However, as the model and the data sizes grow, the training process presents significant memory challenges, often requiring a prohibitive amount of GPU memory that may not be readily available. Existing methods such as low-rank adaptation (LoRA) add trainable low-rank matrix factorizations, altering the training dynamics and limiting the model's parameter search to a low-rank subspace. GaLore, a more recent method, employs Gradient Low-Rank Projection to reduce the memory footprint, in the full parameter training setting. However GaLore can only be applied to a subset of the LLM layers that satisfy the "reversibility" property, thus limiting their applicability. In response to these challenges, we introduce BlockLLM, an approach inspired by block coordinate descent. Our method carefully selects and updates a very small subset of the trainable parameters without altering any part of its architecture and training procedure. BlockLLM achieves state-of-the-art performance in both finetuning and pretraining tasks, while reducing the memory footprint of the underlying optimization process. Our experiments demonstrate that fine-tuning with only less than 5% of the parameters, BlockLLM achieves state-of-the-art perplexity scores on the GLUE benchmarks. On Llama model pretrained on C4 dataset, BlockLLM is able to train with significantly less memory than the state-of-the-art, while still maintaining competitive performance.
|
http://arxiv.org/pdf/2406.17296v1
|
[
"Amrutha Varshini Ramesh",
"Vignesh Ganapathiraman",
"Issam H. Laradji",
"Mark Schmidt"
] |
2024-06-25T05:45:12Z
|
2024-06-25T05:45:12Z
|
2406.17285
|
EON-1: A Brain-Inspired Processor for Near-Sensor Extreme Edge Online
Feature Extraction
|
For Edge AI applications, deploying online learning and adaptation on resource-constrained embedded devices can deal with fast sensor-generated streams of data in changing environments. However, since maintaining low-latency and power-efficient inference is paramount at the Edge, online learning and adaptation on the device should impose minimal additional overhead for inference. With this goal in mind, we explore energy-efficient learning and adaptation on-device for streaming-data Edge AI applications using Spiking Neural Networks (SNNs), which follow the principles of brain-inspired computing, such as high-parallelism, neuron co-located memory and compute, and event-driven processing. We propose EON-1, a brain-inspired processor for near-sensor extreme edge online feature extraction, that integrates a fast online learning and adaptation algorithm. We report results of only 1% energy overhead for learning, by far the lowest overhead when compared to other SoTA solutions, while attaining comparable inference accuracy. Furthermore, we demonstrate that EON-1 is up for the challenge of low-latency processing of HD and UHD streaming video in real-time, with learning enabled.
|
http://arxiv.org/pdf/2406.17285v1
|
[
"Alexandra Dobrita",
"Amirreza Yousefzadeh",
"Simon Thorpe",
"Kanishkan Vadivel",
"Paul Detterer",
"Guangzhi Tang",
"Gert-Jan van Schaik",
"Mario Konijnenburg",
"Anteneh Gebregiorgis",
"Said Hamdioui",
"Manolis Sifalakis"
] |
2024-06-25T05:23:41Z
|
2024-06-25T05:23:41Z
|
2406.17281
|
Distance Recomputator and Topology Reconstructor for Graph Neural
Networks
|
This paper introduces novel methodologies, the Distance Recomputator and Topology Reconstructor, aimed at enhancing Graph Neural Networks (GNNs). The Distance Recomputator dynamically recalibrates node distances within k-hop neighborhoods using a dynamic encoding scheme, thereby improving the accuracy and adaptability of node representations. Concurrently, the Topology Reconstructor adjusts local graph structures based on computed "similarity distances," optimizing network configurations for improved learning outcomes. These methods address the limitations of static node representations and fixed aggregation schemes in traditional GNNs, offering a more nuanced approach to modeling complex and dynamic graph topologies. Furthermore, our experimental evaluations demonstrate significant performance advantages over existing methods across various benchmark datasets. The proposed Distance Recomputator and Topology Reconstructor not only enhance node relationship modeling accuracy but also optimize information aggregation efficiency through an asynchronous aggregation mechanism. This approach proves particularly effective in scenarios involving dynamic or large-scale graphs, showcasing the methods' robustness and applicability in real-world graph learning tasks.
|
http://arxiv.org/pdf/2406.17281v1
|
[
"Dong Liu",
"Meng Jiang"
] |
2024-06-25T05:12:51Z
|
2024-06-25T05:12:51Z
|
2406.17814
|
Distribution Learnability and Robustness
|
We examine the relationship between learnability and robust (or agnostic) learnability for the problem of distribution learning. We show that, contrary to other learning settings (e.g., PAC learning of function classes), realizable learnability of a class of probability distributions does not imply its agnostic learnability. We go on to examine what type of data corruption can disrupt the learnability of a distribution class and what is such learnability robust against. We show that realizable learnability of a class of distributions implies its robust learnability with respect to only additive corruption, but not against subtractive corruption. We also explore related implications in the context of compression schemes and differentially private learnability.
|
http://arxiv.org/pdf/2406.17814v1
|
[
"Shai Ben-David",
"Alex Bie",
"Gautam Kamath",
"Tosca Lechner"
] |
2024-06-25T05:09:54Z
|
2024-06-25T05:09:54Z
|
2309.05519
|
NExT-GPT: Any-to-Any Multimodal LLM
|
While recently Multimodal Large Language Models (MM-LLMs) have made exciting strides, they mostly fall prey to the limitation of only input-side multimodal understanding, without the ability to produce content in multiple modalities. As we humans always perceive the world and communicate with people through various modalities, developing any-to-any MM-LLMs capable of accepting and delivering content in any modality becomes essential to human-level AI. To fill the gap, we present an end-to-end general-purpose any-to-any MM-LLM system, NExT-GPT. We connect an LLM with multimodal adaptors and different diffusion decoders, enabling NExT-GPT to perceive inputs and generate outputs in arbitrary combinations of text, images, videos, and audio. By leveraging the existing well-trained highly-performing encoders and decoders, NExT-GPT is tuned with only a small amount of parameter (1%) of certain projection layers, which not only benefits low-cost training and also facilitates convenient expansion to more potential modalities. Moreover, we introduce a modality-switching instruction tuning (MosIT) and manually curate a high-quality dataset for MosIT, based on which NExT-GPT is empowered with complex cross-modal semantic understanding and content generation. Overall, our research showcases the promising possibility of building an AI agent capable of modeling universal modalities, paving the way for more human-like AI research in the community. Project page: https://next-gpt.github.io/
|
http://arxiv.org/pdf/2309.05519v3
|
[
"Shengqiong Wu",
"Hao Fei",
"Leigang Qu",
"Wei Ji",
"Tat-Seng Chua"
] |
2024-06-25T05:01:09Z
|
2023-09-11T15:02:25Z
|
2308.00177
|
Pretrained deep models outperform GBDTs in Learning-To-Rank under label
scarcity
|
On tabular data, a significant body of literature has shown that current deep learning (DL) models perform at best similarly to Gradient Boosted Decision Trees (GBDTs), while significantly underperforming them on outlier data. However, these works often study idealized problem settings which may fail to capture complexities of real-world scenarios. We identify a natural tabular data setting where DL models can outperform GBDTs: tabular Learning-to-Rank (LTR) under label scarcity. Tabular LTR applications, including search and recommendation, often have an abundance of unlabeled data, and scarce labeled data. We show that DL rankers can utilize unsupervised pretraining to exploit this unlabeled data. In extensive experiments over both public and proprietary datasets, we show that pretrained DL rankers consistently outperform GBDT rankers on ranking metrics -- sometimes by as much as 38% -- both overall and on outliers.
|
http://arxiv.org/pdf/2308.00177v4
|
[
"Charlie Hou",
"Kiran Koshy Thekumparampil",
"Michael Shavlovsky",
"Giulia Fanti",
"Yesh Dattatreya",
"Sujay Sanghavi"
] |
2024-06-25T04:41:56Z
|
2023-07-31T22:19:45Z
|
2406.17274
|
Can We Trust the Performance Evaluation of Uncertainty Estimation
Methods in Text Summarization?
|
Text summarization, a key natural language generation (NLG) task, is vital in various domains. However, the high cost of inaccurate summaries in risk-critical applications, particularly those involving human-in-the-loop decision-making, raises concerns about the reliability of uncertainty estimation on text summarization (UE-TS) evaluation methods. This concern stems from the dependency of uncertainty model metrics on diverse and potentially conflicting NLG metrics. To address this issue, we introduce a comprehensive UE-TS benchmark incorporating 31 NLG metrics across four dimensions. The benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model on three datasets, with human-annotation analysis incorporated where applicable. We also assess the performance of 14 common uncertainty estimation methods within this benchmark. Our findings emphasize the importance of considering multiple uncorrelated NLG metrics and diverse uncertainty estimation methods to ensure reliable and efficient evaluation of UE-TS techniques.
|
http://arxiv.org/pdf/2406.17274v1
|
[
"Jianfeng He",
"Runing Yang",
"Linlin Yu",
"Changbin Li",
"Ruoxi Jia",
"Feng Chen",
"Ming Jin",
"Chang-Tien Lu"
] |
2024-06-25T04:41:17Z
|
2024-06-25T04:41:17Z
|
2307.01753
|
Local primordial non-Gaussianity from the large-scale clustering of
photometric DESI luminous red galaxies
|
We use angular clustering of luminous red galaxies from the Dark Energy Spectroscopic Instrument (DESI) imaging surveys to constrain the local primordial non-Gaussianity parameter $fnl$. Our sample comprises over 12 million targets, covering 14,000 square degrees of the sky, with redshifts in the range $0.2< z < 1.35$. We identify Galactic extinction, survey depth, and astronomical seeing as the primary sources of systematic error, and employ linear regression and artificial neural networks to alleviate non-cosmological excess clustering on large scales. Our methods are tested against simulations with and without $fnl$ and systematics, showing superior performance of the neural network treatment. The neural network with a set of nine imaging property maps passes our systematic null test criteria, and is chosen as the fiducial treatment. Assuming the universality relation, we find $fnl = 34^{+24(+50)}_{-44(-73)}$ at 68%(95%) confidence. We apply a series of robustness tests (e.g., cuts on imaging, declination, or scales used) that show consistency in the obtained constraints. We study how the regression method biases the measured angular power-spectrum and degrades the $fnl$ constraining power. The use of the nine maps more than doubles the uncertainty compared to using only the three primary maps in the regression. Our results thus motivate the development of more efficient methods that avoid over-correction, protect large-scale clustering information, and preserve constraining power. Additionally, our results encourage further studies of $fnl$ with DESI spectroscopic samples, where the inclusion of 3D clustering modes should help separate imaging systematics and lessen the degradation in the $fnl$ uncertainty.
|
http://arxiv.org/abs/2307.01753v3
|
[
"Mehdi Rezaie",
"Ashley J. Ross",
"Hee-Jong Seo",
"Hui Kong",
"Anna Porredon",
"Lado Samushia",
"Edmond Chaussidon",
"Alex Krolewski",
"Arnaud de Mattia",
"Florian Beutler",
"Jessica Nicole Aguilar",
"Steven Ahlen",
"Shadab Alam",
"Santiago Avila",
"Benedict Bahr-Kalus",
"Jose Bermejo-Climent",
"David Brooks",
"Todd Claybaugh",
"Shaun Cole",
"Kyle Dawson",
"Axel de la Macorra",
"Peter Doel",
"Andreu Font-Ribera",
"Jaime E. Forero-Romero",
"Satya Gontcho A Gontcho",
"Julien Guy",
"Klaus Honscheid",
"Dragan Huterer",
"Theodore Kisner",
"Martin Landriau",
"Michael Levi",
"Marc Manera",
"Aaron Meisner",
"Ramon Miquel",
"Eva-Maria Mueller",
"Adam Myers",
"Jeffrey A. Newman",
"Jundan Nie",
"Nathalie Palanque-Delabrouille",
"Will Percival",
"Claire Poppett",
"Graziano Rossi",
"Eusebio Sanchez",
"Michael Schubnell",
"Gregory Tarlé",
"Benjamin Alan Weaver",
"Christophe Yèche",
"Zhimin Zhou",
"Hu Zou"
] |
2024-06-25T04:39:44Z
|
2023-07-04T14:49:23Z
|
2406.17272
|
A Comprehensive Solution to Connect Speech Encoder and Large Language
Model for ASR
|
Recent works have shown promising results in connecting speech encoders to large language models (LLMs) for speech recognition. However, several limitations persist, including limited fine-tuning options, a lack of mechanisms to enforce speech-text alignment, and high insertion errors especially in domain mismatch conditions. This paper presents a comprehensive solution to address these issues. We begin by investigating more thoughtful fine-tuning schemes. Next, we propose a matching loss to enhance alignment between modalities. Finally, we explore training and inference methods to mitigate high insertion errors. Experimental results on the Librispeech corpus demonstrate that partially fine-tuning the encoder and LLM using parameter-efficient methods, such as LoRA, is the most cost-effective approach. Additionally, the matching loss improves modality alignment, enhancing performance. The proposed training and inference methods significantly reduce insertion errors.
|
http://arxiv.org/pdf/2406.17272v1
|
[
"Van Tung Pham",
"Yist Lin",
"Tao Han",
"Wei Li",
"Jun Zhang",
"Lu Lu",
"Yuxuan Wang"
] |
2024-06-25T04:35:50Z
|
2024-06-25T04:35:50Z
|
2401.17802
|
Distillation Enhanced Time Series Forecasting Network with Momentum
Contrastive Learning
|
Contrastive representation learning is crucial in time series analysis as it alleviates the issue of data noise and incompleteness as well as sparsity of supervision signal. However, existing constrastive learning frameworks usually focus on intral-temporal features, which fails to fully exploit the intricate nature of time series data. To address this issue, we propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting. Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp to obtain optimized sub-sequences. Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series to learn the underlying structure feature on the unlabeled time series. Meanwhile, we design a supervised task to learn more robust representations and facilitate the contrastive learning process. Finally, we jointly optimize the above two tasks. By developing model loss from multiple tasks, we can learn effective representations for downstream forecasting task. Extensive experiments, in comparison with state-of-the-arts, well demonstrate the effectiveness of DE-TSMCL, where the maximum improvement can reach to 27.3%.
|
http://arxiv.org/abs/2401.17802v2
|
[
"Haozhi Gao",
"Qianqian Ren",
"Jinbao Li"
] |
2024-06-25T04:34:38Z
|
2024-01-31T12:52:10Z
|
2406.16026
|
CEST-KAN: Kolmogorov-Arnold Networks for CEST MRI Data Analysis
|
Purpose: This study aims to propose and investigate the feasibility of using Kolmogorov-Arnold Network (KAN) for CEST MRI data analysis (CEST-KAN). Methods: CEST MRI data were acquired from twelve healthy volunteers at 3T. Data from ten subjects were used for training, while the remaining two were reserved for testing. The performance of multi-layer perceptron (MLP) and KAN models with the same network settings were evaluated and compared to the conventional multi-pool Lorentzian fitting (MPLF) method in generating water and multiple CEST contrasts, including amide, relayed nuclear Overhauser effect (rNOE), and magnetization transfer (MT). Results: The water and CEST maps generated by both MLP and KAN were visually comparable to the MPLF results. However, the KAN model demonstrated higher accuracy in extrapolating the CEST fitting metrics, as evidenced by the smaller validation loss during training and smaller absolute error during testing. Voxel-wise correlation analysis showed that all four CEST fitting metrics generated by KAN consistently exhibited higher Pearson coefficients than the MLP results, indicating superior performance. Moreover, the KAN models consistently outperformed the MLP models in varying hidden layer numbers despite longer training time. Conclusion: In this study, we demonstrated for the first time the feasibility of utilizing KAN for CEST MRI data analysis, highlighting its superiority over MLP in this task. The findings suggest that CEST-KAN has the potential to be a robust and reliable post-analysis tool for CEST MRI in clinical settings.
|
http://arxiv.org/pdf/2406.16026v2
|
[
"Jiawen Wang",
"Pei Cai",
"Ziyan Wang",
"Huabin Zhang",
"Jianpan Huang"
] |
2024-06-25T04:28:09Z
|
2024-06-23T06:23:12Z
|
2406.17266
|
AG-LSEC: Audio Grounded Lexical Speaker Error Correction
|
Speaker Diarization (SD) systems are typically audio-based and operate independently of the ASR system in traditional speech transcription pipelines and can have speaker errors due to SD and/or ASR reconciliation, especially around speaker turns and regions of speech overlap. To reduce these errors, a Lexical Speaker Error Correction (LSEC), in which an external language model provides lexical information to correct the speaker errors, was recently proposed. Though the approach achieves good Word Diarization error rate (WDER) improvements, it does not use any additional acoustic information and is prone to miscorrections. In this paper, we propose to enhance and acoustically ground the LSEC system with speaker scores directly derived from the existing SD pipeline. This approach achieves significant relative WDER reductions in the range of 25-40% over the audio-based SD, ASR system and beats the LSEC system by 15-25% relative on RT03-CTS, Callhome American English and Fisher datasets.
|
http://arxiv.org/pdf/2406.17266v1
|
[
"Rohit Paturi",
"Xiang Li",
"Sundararajan Srinivasan"
] |
2024-06-25T04:20:49Z
|
2024-06-25T04:20:49Z
|
2406.17263
|
Efficient, Multimodal, and Derivative-Free Bayesian Inference With
Fisher-Rao Gradient Flows
|
In this paper, we study efficient approximate sampling for probability distributions known up to normalization constants. We specifically focus on a problem class arising in Bayesian inference for large-scale inverse problems in science and engineering applications. The computational challenges we address with the proposed methodology are: (i) the need for repeated evaluations of expensive forward models; (ii) the potential existence of multiple modes; and (iii) the fact that gradient of, or adjoint solver for, the forward model might not be feasible. While existing Bayesian inference methods meet some of these challenges individually, we propose a framework that tackles all three systematically. Our approach builds upon the Fisher-Rao gradient flow in probability space, yielding a dynamical system for probability densities that converges towards the target distribution at a uniform exponential rate. This rapid convergence is advantageous for the computational burden outlined in (i). We apply Gaussian mixture approximations with operator splitting techniques to simulate the flow numerically; the resulting approximation can capture multiple modes thus addressing (ii). Furthermore, we employ the Kalman methodology to facilitate a derivative-free update of these Gaussian components and their respective weights, addressing the issue in (iii). The proposed methodology results in an efficient derivative-free sampler flexible enough to handle multi-modal distributions: Gaussian Mixture Kalman Inversion (GMKI). The effectiveness of GMKI is demonstrated both theoretically and numerically in several experiments with multimodal target distributions, including proof-of-concept and two-dimensional examples, as well as a large-scale application: recovering the Navier-Stokes initial condition from solution data at positive times.
|
http://arxiv.org/pdf/2406.17263v1
|
[
"Yifan Chen",
"Daniel Zhengyu Huang",
"Jiaoyang Huang",
"Sebastian Reich",
"Andrew M. Stuart"
] |
2024-06-25T04:07:22Z
|
2024-06-25T04:07:22Z
|
2406.17251
|
TopoGCL: Topological Graph Contrastive Learning
|
Graph contrastive learning (GCL) has recently emerged as a new concept which allows for capitalizing on the strengths of graph neural networks (GNNs) to learn rich representations in a wide variety of applications which involve abundant unlabeled information. However, existing GCL approaches largely tend to overlook the important latent information on higher-order graph substructures. We address this limitation by introducing the concepts of topological invariance and extended persistence on graphs to GCL. In particular, we propose a new contrastive mode which targets topological representations of the two augmented views from the same graph, yielded by extracting latent shape properties of the graph at multiple resolutions. Along with the extended topological layer, we introduce a new extended persistence summary, namely, extended persistence landscapes (EPL) and derive its theoretical stability guarantees. Our extensive numerical results on biological, chemical, and social interaction graphs show that the new Topological Graph Contrastive Learning (TopoGCL) model delivers significant performance gains in unsupervised graph classification for 11 out of 12 considered datasets and also exhibits robustness under noisy scenarios.
|
http://arxiv.org/pdf/2406.17251v1
|
[
"Yuzhou Chen",
"Jose Frias",
"Yulia R. Gel"
] |
2024-06-25T03:35:20Z
|
2024-06-25T03:35:20Z
|
2406.17245
|
Unlocking Continual Learning Abilities in Language Models
|
Language models (LMs) exhibit impressive performance and generalization capabilities. However, LMs struggle with the persistent challenge of catastrophic forgetting, which undermines their long-term sustainability in continual learning (CL). Existing approaches usually address the issue by incorporating old task data or task-wise inductive bias into LMs. However, old data and accurate task information are often unavailable or costly to collect, hindering the availability of current CL approaches for LMs. To address this limitation, we introduce $textbf{MIGU}$ ($textbf{M}$agn$textbf{I}$tude-based $textbf{G}$radient $textbf{U}$pdating for continual learning), a rehearsal-free and task-label-free method that only updates the model parameters with large magnitudes of output in LMs' linear layers. MIGU is based on our observation that the L1-normalized magnitude distribution of the output in LMs' linear layers is different when the LM models deal with different task data. By imposing this simple constraint on the gradient update process, we can leverage the inherent behaviors of LMs, thereby unlocking their innate CL abilities. Our experiments demonstrate that MIGU is universally applicable to all three LM architectures (T5, RoBERTa, and Llama2), delivering state-of-the-art or on-par performance across continual finetuning and continual pre-training settings on four CL benchmarks. For example, MIGU brings a 15.2% average accuracy improvement over conventional parameter-efficient finetuning baselines in a 15-task CL benchmark. MIGU can also seamlessly integrate with all three existing CL types to further enhance performance. Code is available at href{https://github.com/wenyudu/MIGU}{this https URL}.
|
http://arxiv.org/pdf/2406.17245v1
|
[
"Wenyu Du",
"Shuang Cheng",
"Tongxu Luo",
"Zihan Qiu",
"Zeyu Huang",
"Ka Chun Cheung",
"Reynold Cheng",
"Jie Fu"
] |
2024-06-25T03:24:06Z
|
2024-06-25T03:24:06Z
|
2406.16252
|
Graph-Augmented LLMs for Personalized Health Insights: A Case Study in
Sleep Analysis
|
Health monitoring systems have revolutionized modern healthcare by enabling the continuous capture of physiological and behavioral data, essential for preventive measures and early health intervention. While integrating this data with Large Language Models (LLMs) has shown promise in delivering interactive health advice, traditional methods like Retrieval-Augmented Generation (RAG) and fine-tuning often fail to fully utilize the complex, multi-dimensional, and temporally relevant data from wearable devices. These conventional approaches typically provide limited actionable and personalized health insights due to their inadequate capacity to dynamically integrate and interpret diverse health data streams. In response, this paper introduces a graph-augmented LLM framework designed to significantly enhance the personalization and clarity of health insights. Utilizing a hierarchical graph structure, the framework captures inter and intra-patient relationships, enriching LLM prompts with dynamic feature importance scores derived from a Random Forest Model. The effectiveness of this approach is demonstrated through a sleep analysis case study involving 20 college students during the COVID-19 lockdown, highlighting the potential of our model to generate actionable and personalized health insights efficiently. We leverage another LLM to evaluate the insights for relevance, comprehensiveness, actionability, and personalization, addressing the critical need for models that process and interpret complex health data effectively. Our findings show that augmenting prompts with our framework yields significant improvements in all 4 criteria. Through our framework, we can elicit well-crafted, more thoughtful responses tailored to a specific patient.
|
http://arxiv.org/pdf/2406.16252v2
|
[
"Ajan Subramanian",
"Zhongqi Yang",
"Iman Azimi",
"Amir M. Rahmani"
] |
2024-06-25T03:17:40Z
|
2024-06-24T01:22:54Z
|
2402.10963
|
GLoRe: When, Where, and How to Improve LLM Reasoning via Global and
Local Refinements
|
State-of-the-art language models can exhibit impressive reasoning refinement capabilities on math, science or coding tasks. However, recent work demonstrates that even the best models struggle to identify textit{when and where to refine} without access to external feedback. Outcome-based Reward Models (textbf{ORMs}), trained to predict correctness of the final answer indicating when to refine, offer one convenient solution for deciding when to refine. Process Based Reward Models (textbf{PRMs}), trained to predict correctness of intermediate steps, can then be used to indicate where to refine. But they are expensive to train, requiring extensive human annotations. In this paper, we propose Stepwise ORMs (textbf{SORMs}) which are trained, only on synthetic data, to approximate the expected future reward of the optimal policy or $V^{star}$. More specifically, SORMs are trained to predict the correctness of the final answer when sampling the current policy many times (rather than only once as in the case of ORMs). Our experiments show that SORMs can more accurately detect incorrect reasoning steps compared to ORMs, thus improving downstream accuracy when doing refinements. We then train textit{global} refinement models, which take only the question and a draft solution as input and predict a corrected solution, and textit{local} refinement models which also take as input a critique indicating the location of the first reasoning error. We generate training data for both models synthetically by reusing data used to train the SORM. We find combining global and local refinements, using the ORM as a reranker, significantly outperforms either one individually, as well as a best of three sample baseline. With this strategy we can improve the accuracy of a LLaMA-2 13B model (already fine-tuned with RL) on GSM8K from 53% to 65% when greedily sampled.
|
http://arxiv.org/pdf/2402.10963v2
|
[
"Alex Havrilla",
"Sharath Raparthy",
"Christoforus Nalmpantis",
"Jane Dwivedi-Yu",
"Maksym Zhuravinskyi",
"Eric Hambro",
"Roberta Raileanu"
] |
2024-06-25T03:14:10Z
|
2024-02-13T20:16:29Z
|
2406.17238
|
Expansive Synthesis: Generating Large-Scale Datasets from Minimal
Samples
|
The challenge of limited availability of data for training in machine learning arises in many applications and the impact on performance and generalization is serious. Traditional data augmentation methods aim to enhance training with a moderately sufficient data set. Generative models like Generative Adversarial Networks (GANs) often face problematic convergence when generating significant and diverse data samples. Diffusion models, though effective, still struggle with high computational cost and long training times. This paper introduces an innovative Expansive Synthesis model that generates large-scale, high-fidelity datasets from minimal samples. The proposed approach exploits expander graph mappings and feature interpolation to synthesize expanded datasets while preserving the intrinsic data distribution and feature structural relationships. The rationale of the model is rooted in the non-linear property of neural networks' latent space and in its capture by a Koopman operator to yield a linear space of features to facilitate the construction of larger and enriched consistent datasets starting with a much smaller dataset. This process is optimized by an autoencoder architecture enhanced with self-attention layers and further refined for distributional consistency by optimal transport. We validate our Expansive Synthesis by training classifiers on the generated datasets and comparing their performance to classifiers trained on larger, original datasets. Experimental results demonstrate that classifiers trained on synthesized data achieve performance metrics on par with those trained on full-scale datasets, showcasing the model's potential to effectively augment training data. This work represents a significant advancement in data generation, offering a robust solution to data scarcity and paving the way for enhanced data availability in machine learning applications.
|
http://arxiv.org/pdf/2406.17238v1
|
[
"Vahid Jebraeeli",
"Bo Jiang",
"Hamid Krim",
"Derya Cansever"
] |
2024-06-25T02:59:02Z
|
2024-06-25T02:59:02Z
|
2402.12397
|
Multi-class Temporal Logic Neural Networks
|
Time-series data can represent the behaviors of autonomous systems, such as drones and self-driving cars. The task of binary and multi-class classification for time-series data has become a prominent area of research. Neural networks represent a popular approach to classifying data; However, they lack interpretability, which poses a significant challenge in extracting meaningful information from them. Signal Temporal Logic (STL) is a formalism that describes the properties of timed behaviors. We propose a method that combines all of the above: neural networks that represent STL specifications for multi-class classification of time-series data. We offer two key contributions: 1) We introduce a notion of margin for multi-class classification, and 2) we introduce STL-based attributes for enhancing the interpretability of the results. We evaluate our method on two datasets and compare it with state-of-the-art baselines.
|
http://arxiv.org/pdf/2402.12397v2
|
[
"Danyang Li",
"Roberto Tron"
] |
2024-06-25T02:58:06Z
|
2024-02-17T00:22:29Z
|
2401.14555
|
Revisiting Active Learning in the Era of Vision Foundation Models
|
Foundation vision or vision-language models are trained on large unlabeled or noisy data and learn robust representations that can achieve impressive zero- or few-shot performance on diverse tasks. Given these properties, they are a natural fit for active learning (AL), which aims to maximize labeling efficiency. However, the full potential of foundation models has not been explored in the context of AL, specifically in the low-budget regime. In this work, we evaluate how foundation models influence three critical components of effective AL, namely, 1) initial labeled pool selection, 2) ensuring diverse sampling, and 3) the trade-off between representative and uncertainty sampling. We systematically study how the robust representations of foundation models (DINOv2, OpenCLIP) challenge existing findings in active learning. Our observations inform the principled construction of a new simple and elegant AL strategy that balances uncertainty estimated via dropout with sample diversity. We extensively test our strategy on many challenging image classification benchmarks, including natural images as well as out-of-domain biomedical images that are relatively understudied in the AL literature. We also provide a highly performant and efficient implementation of modern AL strategies (including our method) at https://github.com/sanketx/AL-foundation-models.
|
http://arxiv.org/pdf/2401.14555v2
|
[
"Sanket Rajan Gupte",
"Josiah Aklilu",
"Jeffrey J. Nirschl",
"Serena Yeung-Levy"
] |
2024-06-25T02:43:06Z
|
2024-01-25T22:50:39Z
|
2406.17229
|
Self-Supervised Embeddings for Detecting Individual Symptoms of
Depression
|
Depression, a prevalent mental health disorder impacting millions globally, demands reliable assessment systems. Unlike previous studies that focus solely on either detecting depression or predicting its severity, our work identifies individual symptoms of depression while also predicting its severity using speech input. We leverage self-supervised learning (SSL)-based speech models to better utilize the small-sized datasets that are frequently encountered in this task. Our study demonstrates notable performance improvements by utilizing SSL embeddings compared to conventional speech features. We compare various types of SSL pretrained models to elucidate the type of speech information (semantic, speaker, or prosodic) that contributes the most in identifying different symptoms. Additionally, we evaluate the impact of combining multiple SSL embeddings on performance. Furthermore, we show the significance of multi-task learning for identifying depressive symptoms effectively.
|
http://arxiv.org/pdf/2406.17229v1
|
[
"Sri Harsha Dumpala",
"Katerina Dikaios",
"Abraham Nunes",
"Frank Rudzicz",
"Rudolf Uher",
"Sageev Oore"
] |
2024-06-25T02:35:37Z
|
2024-06-25T02:35:37Z
|
2406.17228
|
Greedy equivalence search for nonparametric graphical models
|
One of the hallmark achievements of the theory of graphical models and Bayesian model selection is the celebrated greedy equivalence search (GES) algorithm due to Chickering and Meek. GES is known to consistently estimate the structure of directed acyclic graph (DAG) models in various special cases including Gaussian and discrete models, which are in particular curved exponential families. A general theory that covers general nonparametric DAG models, however, is missing. Here, we establish the consistency of greedy equivalence search for general families of DAG models that satisfy smoothness conditions on the Markov factorization, and hence may not be curved exponential families, or even parametric. The proof leverages recent advances in nonparametric Bayes to construct a test for comparing misspecified DAG models that avoids arguments based on the Laplace approximation. Nonetheless, when the Laplace approximation is valid and a consistent scoring function exists, we recover the classical result. As a result, we obtain a general consistency theorem for GES applied to general DAG models.
|
http://arxiv.org/pdf/2406.17228v1
|
[
"Bryon Aragam"
] |
2024-06-25T02:31:32Z
|
2024-06-25T02:31:32Z
|
2212.06338
|
Minimax Optimal Estimation of Stability Under Distribution Shift
|
The performance of decision policies and prediction models often deteriorates when applied to environments different from the ones seen during training. To ensure reliable operation, we analyze the stability of a system under distribution shift, which is defined as the smallest change in the underlying environment that causes the system's performance to deteriorate beyond a permissible threshold. In contrast to standard tail risk measures and distributionally robust losses that require the specification of a plausible magnitude of distribution shift, the stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation. We develop a minimax optimal estimator of stability and analyze its convergence rate, which exhibits a fundamental phase shift behavior. Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost. Empirically, we demonstrate the practical utility of our stability framework by using it to compare system designs on problems where robustness to distribution shift is critical.
|
http://arxiv.org/pdf/2212.06338v2
|
[
"Hongseok Namkoong",
"Yuanzhe Ma",
"Peter W. Glynn"
] |
2024-06-25T02:21:54Z
|
2022-12-13T02:40:30Z
|
2404.03876
|
Accurately Classifying Out-Of-Distribution Data in Facial Recognition
|
Standard classification theory assumes that the distribution of images in the test and training sets are identical. Unfortunately, real-life scenarios typically feature unseen data ("out-of-distribution data") which is different from data in the training distribution("in-distribution"). This issue is most prevalent in social justice problems where data from under-represented groups may appear in the test data without representing an equal proportion of the training data. This may result in a model returning confidently wrong decisions and predictions. We are interested in the following question: Can the performance of a neural network improve on facial images of out-of-distribution data when it is trained simultaneously on multiple datasets of in-distribution data? We approach this problem by incorporating the Outlier Exposure model and investigate how the model's performance changes when other datasets of facial images were implemented. We observe that the accuracy and other metrics of the model can be increased by applying Outlier Exposure, incorporating a trainable weight parameter to increase the machine's emphasis on outlier images, and by re-weighting the importance of different class labels. We also experimented with whether sorting the images and determining outliers via image features would have more of an effect on the metrics than sorting by average pixel value. Our goal was to make models not only more accurate but also more fair by scanning a more expanded range of images. We also tested the datasets in reverse order to see whether a more fair dataset with balanced features has an effect on the model's accuracy.
|
http://arxiv.org/pdf/2404.03876v3
|
[
"Gianluca Barone",
"Aashrit Cunchala",
"Rudy Nunez"
] |
2024-06-25T02:20:06Z
|
2024-04-05T03:51:19Z
|
2406.17224
|
Large Language Models are Interpretable Learners
|
The trade-off between expressiveness and interpretability remains a core challenge when building human-centric predictive models for classification and decision-making. While symbolic rules offer interpretability, they often lack expressiveness, whereas neural networks excel in performance but are known for being black boxes. In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge this gap. In the proposed LLM-based Symbolic Programs (LSPs), the pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts. Symbolic programs then integrate these modules into an interpretable decision rule. To train LSPs, we develop a divide-and-conquer approach to incrementally build the program from scratch, where the learning process of each step is guided by LLMs. To evaluate the effectiveness of LSPs in extracting interpretable and accurate knowledge from data, we introduce IL-Bench, a collection of diverse tasks, including both synthetic and real-world scenarios across different modalities. Empirical results demonstrate LSP's superior performance compared to traditional neurosymbolic programs and vanilla automatic prompt tuning methods. Moreover, as the knowledge learned by LSP is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable), and other LLMs, and generalizes well to out-of-distribution samples.
|
http://arxiv.org/pdf/2406.17224v1
|
[
"Ruochen Wang",
"Si Si",
"Felix Yu",
"Dorothea Wiesmann",
"Cho-Jui Hsieh",
"Inderjit Dhillon"
] |
2024-06-25T02:18:15Z
|
2024-06-25T02:18:15Z
|
2406.17216
|
Machine Unlearning Fails to Remove Data Poisoning Attacks
|
We revisit the efficacy of several practical methods for approximate machine unlearning developed for large-scale deep learning. In addition to complying with data deletion requests, one often-cited potential application for unlearning methods is to remove the effects of training on poisoned data. We experimentally demonstrate that, while existing unlearning methods have been demonstrated to be effective in a number of evaluation settings (e.g., alleviating membership inference attacks), they fail to remove the effects of data poisoning, across a variety of types of poisoning attacks (indiscriminate, targeted, and a newly-introduced Gaussian poisoning attack) and models (image classifiers and LLMs); even when granted a relatively large compute budget. In order to precisely characterize unlearning efficacy, we introduce new evaluation metrics for unlearning based on data poisoning. Our results suggest that a broader perspective, including a wider variety of evaluations, is required to avoid a false sense of confidence in machine unlearning procedures for deep learning without provable guarantees. Moreover, while unlearning methods show some signs of being useful to efficiently remove poisoned datapoints without having to retrain, our work suggests that these methods are not yet "ready for prime time", and currently provide limited benefit over retraining.
|
http://arxiv.org/pdf/2406.17216v1
|
[
"Martin Pawelczyk",
"Jimmy Z. Di",
"Yiwei Lu",
"Gautam Kamath",
"Ayush Sekhari",
"Seth Neel"
] |
2024-06-25T02:05:29Z
|
2024-06-25T02:05:29Z
|
2406.12723
|
BIOSCAN-5M: A Multimodal Dataset for Insect Biodiversity
|
As part of an ongoing worldwide effort to comprehend and monitor insect biodiversity, this paper presents the BIOSCAN-5M Insect dataset to the machine learning community and establish several benchmark tasks. BIOSCAN-5M is a comprehensive dataset containing multi-modal information for over 5 million insect specimens, and it significantly expands existing image-based biological datasets by including taxonomic labels, raw nucleotide barcode sequences, assigned barcode index numbers, and geographical information. We propose three benchmark experiments to demonstrate the impact of the multi-modal data types on the classification and clustering accuracy. First, we pretrain a masked language model on the DNA barcode sequences of the BIOSCAN-5M dataset, and demonstrate the impact of using this large reference library on species- and genus-level classification performance. Second, we propose a zero-shot transfer learning task applied to images and DNA barcodes to cluster feature embeddings obtained from self-supervised learning, to investigate whether meaningful clusters can be derived from these representation embeddings. Third, we benchmark multi-modality by performing contrastive learning on DNA barcodes, image data, and taxonomic information. This yields a general shared embedding space enabling taxonomic classification using multiple types of information and modalities. The code repository of the BIOSCAN-5M Insect dataset is available at https://github.com/zahrag/BIOSCAN-5M.
|
http://arxiv.org/pdf/2406.12723v3
|
[
"Zahra Gharaee",
"Scott C. Lowe",
"ZeMing Gong",
"Pablo Millan Arias",
"Nicholas Pellegrino",
"Austin T. Wang",
"Joakim Bruslund Haurum",
"Iuliia Zarubiieva",
"Lila Kari",
"Dirk Steinke",
"Graham W. Taylor",
"Paul Fieguth",
"Angel X. Chang"
] |
2024-06-25T02:00:48Z
|
2024-06-18T15:45:21Z
|
2405.18400
|
Superposed Decoding: Multiple Generations from a Single Autoregressive
Inference Pass
|
Many applications today provide users with multiple auto-complete drafts as they type, including GitHub's code completion, Gmail's smart compose, and Apple's messaging auto-suggestions. Under the hood, language models support this by running an autoregressive inference pass to provide a draft. Consequently, providing $k$ drafts to the user requires running an expensive language model $k$ times. To alleviate the computation cost of running $k$ inference passes, we propose Superposed Decoding, a new decoding algorithm that generates $k$ drafts at the computation cost of one autoregressive inference pass. We achieve this by feeding a superposition of the most recent token embeddings from the $k$ drafts as input to the next decoding step of the language model. At every inference step we combine the $k$ drafts with the top-$k$ tokens to get $k^2$ new drafts and cache the $k$ most likely options, using an n-gram interpolation with minimal compute overhead to filter out incoherent generations. Our experiments show that $k$ drafts from Superposed Decoding are at least as coherent and factual as Nucleus Sampling and Greedy Decoding respectively, while being at least $2.44times$ faster for $kge3$. In a compute-normalized setting, user evaluations demonstrably favor text generated by Superposed Decoding over Nucleus Sampling. Code and more examples open-sourced at https://github.com/RAIVNLab/SuperposedDecoding.
|
http://arxiv.org/pdf/2405.18400v3
|
[
"Ethan Shen",
"Alan Fan",
"Sarah M. Pratt",
"Jae Sung Park",
"Matthew Wallingford",
"Sham M. Kakade",
"Ari Holtzman",
"Ranjay Krishna",
"Ali Farhadi",
"Aditya Kusupati"
] |
2024-06-25T01:49:45Z
|
2024-05-28T17:40:48Z
|
2406.17199
|
Contrastive General Graph Matching with Adaptive Augmentation Sampling
|
Graph matching has important applications in pattern recognition and beyond. Current approaches predominantly adopt supervised learning, demanding extensive labeled data which can be limited or costly. Meanwhile, self-supervised learning methods for graph matching often require additional side information such as extra categorical information and input features, limiting their application to the general case. Moreover, designing the optimal graph augmentations for self-supervised graph matching presents another challenge to ensure robustness and efficacy. To address these issues, we introduce a novel Graph-centric Contrastive framework for Graph Matching (GCGM), capitalizing on a vast pool of graph augmentations for contrastive learning, yet without needing any side information. Given the variety of augmentation choices, we further introduce a Boosting-inspired Adaptive Augmentation Sampler (BiAS), which adaptively selects more challenging augmentations tailored for graph matching. Through various experiments, our GCGM surpasses state-of-the-art self-supervised methods across various datasets, marking a significant step toward more effective, efficient and general graph matching.
|
http://arxiv.org/pdf/2406.17199v1
|
[
"Jianyuan Bo",
"Yuan Fang"
] |
2024-06-25T01:08:03Z
|
2024-06-25T01:08:03Z
|
2406.17190
|
Sound Tagging in Infant-centric Home Soundscapes
|
Certain environmental noises have been associated with negative developmental outcomes for infants and young children. Though classifying or tagging sound events in a domestic environment is an active research area, previous studies focused on data collected from a non-stationary microphone placed in the environment or from the perspective of adults. Further, many of these works ignore infants or young children in the environment or have data collected from only a single family where noise from the fixed sound source can be moderate at the infant's position or vice versa. Thus, despite the recent success of large pre-trained models for noise event detection, the performance of these models on infant-centric noise soundscapes in the home is yet to be explored. To bridge this gap, we have collected and labeled noises in home soundscapes from 22 families in an unobtrusive manner, where the data are collected through an infant-worn recording device. In this paper, we explore the performance of a large pre-trained model (Audio Spectrogram Transformer [AST]) on our noise-conditioned infant-centric environmental data as well as publicly available home environmental datasets. Utilizing different training strategies such as resampling, utilizing public datasets, mixing public and infant-centric training sets, and data augmentation using noise and masking, we evaluate the performance of a large pre-trained model on sparse and imbalanced infant-centric data. Our results show that fine-tuning the large pre-trained model by combining our collected dataset with public datasets increases the F1-score from 0.11 (public datasets) and 0.76 (collected datasets) to 0.84 (combined datasets) and Cohen's Kappa from 0.013 (public datasets) and 0.77 (collected datasets) to 0.83 (combined datasets) compared to only training with public or collected datasets, respectively.
|
http://arxiv.org/pdf/2406.17190v1
|
[
"Mohammad Nur Hossain Khan",
"Jialu Li",
"Nancy L. McElwain",
"Mark Hasegawa-Johnson",
"Bashima Islam"
] |
2024-06-25T00:15:54Z
|
2024-06-25T00:15:54Z
|
2407.01595
|
Fairpriori: Improving Biased Subgroup Discovery for Deep Neural Network
Fairness
|
While deep learning has become a core functional module of most software systems, concerns regarding the fairness of ML predictions have emerged as a significant issue that affects prediction results due to discrimination. Intersectional bias, which disproportionately affects members of subgroups, is a prime example of this. For instance, a machine learning model might exhibit bias against darker-skinned women, while not showing bias against individuals with darker skin or women. This problem calls for effective fairness testing before the deployment of such deep learning models in real-world scenarios. However, research into detecting such bias is currently limited compared to research on individual and group fairness. Existing tools to investigate intersectional bias lack important features such as support for multiple fairness metrics, fast and efficient computation, and user-friendly interpretation. This paper introduces Fairpriori, a novel biased subgroup discovery method, which aims to address these limitations. Fairpriori incorporates the frequent itemset generation algorithm to facilitate effective and efficient investigation of intersectional bias by producing fast fairness metric calculations on subgroups of a dataset. Through comparison with the state-of-the-art methods (e.g., Themis, FairFictPlay, and TestSGD) under similar conditions, Fairpriori demonstrates superior effectiveness and efficiency when identifying intersectional bias. Specifically, Fairpriori is easier to use and interpret, supports a wider range of use cases by accommodating multiple fairness metrics, and exhibits higher efficiency in computing fairness metrics. These findings showcase Fairpriori's potential for effectively uncovering subgroups affected by intersectional bias, supported by its open-source tooling at https://anonymous.4open.science/r/Fairpriori-0320.
|
http://arxiv.org/pdf/2407.01595v1
|
[
"Kacy Zhou",
"Jiawen Wen",
"Nan Yang",
"Dong Yuan",
"Qinghua Lu",
"Huaming Chen"
] |
2024-06-25T00:15:13Z
|
2024-06-25T00:15:13Z
|
2406.17188
|
Geometric Median (GM) Matching for Robust Data Pruning
|
Data pruning, the combinatorial task of selecting a small and informative subset from a large dataset, is crucial for mitigating the enormous computational costs associated with training data-hungry modern deep learning models at scale. Since large-scale data collections are invariably noisy, developing data pruning strategies that remain robust even in the presence of corruption is critical in practice. Unfortunately, the existing heuristics for (robust) data pruning lack theoretical coherence and rely on heroic assumptions, that are, often unattainable, by the very nature of the problem setting. Moreover, these strategies often yield sub-optimal neural scaling laws even compared to random sampling, especially in scenarios involving strong corruption and aggressive pruning rates -- making provably robust data pruning an open challenge. In response, in this work, we propose Geometric Median ($gm$) Matching -- a herding~citep{welling2009herding} style greedy algorithm -- that yields a $k$-subset such that the mean of the subset approximates the geometric median of the (potentially) noisy dataset. Theoretically, we show that $gm$ Matching enjoys an improved $gO(1/k)$ scaling over $gO(1/sqrt{k})$ scaling of uniform sampling; while achieving the optimal breakdown point of 1/2 even under arbitrary corruption. Extensive experiments across popular deep learning benchmarks indicate that $gm$ Matching consistently outperforms prior state-of-the-art; the gains become more profound at high rates of corruption and aggressive pruning rates; making $gm$ Matching a strong baseline for future research in robust data pruning.
|
http://arxiv.org/pdf/2406.17188v1
|
[
"Anish Acharya",
"Inderjit S Dhillon",
"Sujay Sanghavi"
] |
2024-06-25T00:02:01Z
|
2024-06-25T00:02:01Z
|
2406.17184
|
Minimax Optimality in Contextual Dynamic Pricing with General Valuation
Models
|
Dynamic pricing, the practice of adjusting prices based on contextual factors, has gained significant attention due to its impact on revenue maximization. In this paper, we address the contextual dynamic pricing problem, which involves pricing decisions based on observable product features and customer characteristics. We propose a novel algorithm that achieves improved regret bounds while minimizing assumptions about the problem. Our algorithm discretizes the unknown noise distribution and combines the upper confidence bounds with a layered data partitioning technique to effectively regulate regret in each episode. These techniques effectively control the regret associated with pricing decisions, leading to the minimax optimality. Specifically, our algorithm achieves a regret upper bound of $tilde{mathcal{O}}(rho_{mathcal{V}}^{frac{1}{3}}(delta) T^{frac{2}{3}})$, where $rho_{mathcal{V}}(delta)$ represents the estimation error of the valuation function. Importantly, this bound matches the lower bound up to logarithmic terms, demonstrating the minimax optimality of our approach. Furthermore, our method extends beyond linear valuation models commonly used in dynamic pricing by considering general function spaces. We simplify the estimation process by reducing it to general offline regression oracles, making implementation more straightforward.
|
http://arxiv.org/pdf/2406.17184v1
|
[
"Xueping Gong",
"Jiheng Zhang"
] |
2024-06-24T23:43:56Z
|
2024-06-24T23:43:56Z
|
2306.13004
|
Can Differentiable Decision Trees Enable Interpretable Reward Learning
from Human Feedback?
|
Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for capturing human intent to alleviate the challenges of hand-crafting the reward values. Despite the increasing interest in RLHF, most works learn black box reward functions that while expressive are difficult to interpret and often require running the whole costly process of RL before we can even decipher if these frameworks are actually aligned with human preferences. We propose and evaluate a novel approach for learning expressive and interpretable reward functions from preferences using Differentiable Decision Trees (DDTs). Our experiments across several domains, including CartPole, Visual Gridworld environments and Atari games, provide evidence that the tree structure of our learned reward function is useful in determining the extent to which the reward function is aligned with human preferences. We also provide experimental evidence that not only shows that reward DDTs can often achieve competitive RL performance when compared with larger capacity deep neural network reward functions but also demonstrates the diagnostic utility of our framework in checking alignment of learned reward functions. We also observe that the choice between soft and hard (argmax) output of reward DDT reveals a tension between wanting highly shaped rewards to ensure good RL performance, while also wanting simpler, more interpretable rewards. Videos and code, are available at: https://sites.google.com/view/ddt-rlhf
|
http://arxiv.org/pdf/2306.13004v4
|
[
"Akansha Kalra",
"Daniel S. Brown"
] |
2024-06-24T23:43:30Z
|
2023-06-22T16:04:16Z
|
2406.17182
|
Debiased Recommendation with Noisy Feedback
|
Ratings of a user to most items in recommender systems are usually missing not at random (MNAR), largely because users are free to choose which items to rate. To achieve unbiased learning of the prediction model under MNAR data, three typical solutions have been proposed, including error-imputation-based (EIB), inverse-propensity-scoring (IPS), and doubly robust (DR) methods. However, these methods ignore an alternative form of bias caused by the inconsistency between the observed ratings and the users' true preferences, also known as noisy feedback or outcome measurement errors (OME), e.g., due to public opinion or low-quality data collection process. In this work, we study intersectional threats to the unbiased learning of the prediction model from data MNAR and OME in the collected data. First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios. Next, we theoretically prove the unbiasedness and generalization bound of the proposed estimators. We further propose an alternate denoising training approach to achieve unbiased learning of the prediction model under MNAR data with OME. Extensive experiments are conducted on three real-world datasets and one semi-synthetic dataset to show the effectiveness of our proposed approaches. The code is available at https://github.com/haoxuanli-pku/KDD24-OME-DR.
|
http://arxiv.org/pdf/2406.17182v1
|
[
"Haoxuan Li",
"Chunyuan Zheng",
"Wenjie Wang",
"Hao Wang",
"Fuli Feng",
"Xiao-Hua Zhou"
] |
2024-06-24T23:42:18Z
|
2024-06-24T23:42:18Z
|
2406.17813
|
Unsupervised Concept Drift Detection from Deep Learning Representations
in Real-time
|
Concept Drift is a phenomenon in which the underlying data distribution and statistical properties of a target domain change over time, leading to a degradation of the model's performance. Consequently, models deployed in production require continuous monitoring through drift detection techniques. Most drift detection methods to date are supervised, i.e., based on ground-truth labels. However, true labels are usually not available in many real-world scenarios. Although recent efforts have been made to develop unsupervised methods, they often lack the required accuracy, have a complexity that makes real-time implementation in production environments difficult, or are unable to effectively characterize drift. To address these challenges, we propose DriftLens, an unsupervised real-time concept drift detection framework. It works on unstructured data by exploiting the distribution distances of deep learning representations. DriftLens can also provide drift characterization by analyzing each label separately. A comprehensive experimental evaluation is presented with multiple deep learning classifiers for text, image, and speech. Results show that (i) DriftLens performs better than previous methods in detecting drift in $11/13$ use cases; (ii) it runs at least 5 times faster; (iii) its detected drift value is very coherent with the amount of drift (correlation $geq 0.85$); (iv) it is robust to parameter changes.
|
http://arxiv.org/pdf/2406.17813v1
|
[
"Salvatore Greco",
"Bartolomeo Vacchetti",
"Daniele Apiletti",
"Tania Cerquitelli"
] |
2024-06-24T23:41:46Z
|
2024-06-24T23:41:46Z
|
2406.06644
|
Latent Diffusion Model-Enabled Real-Time Semantic Communication
Considering Semantic Ambiguities and Channel Noises
|
Semantic communication (SemCom) has emerged as a new paradigm for 6G communication, with deep learning (DL) models being one of the key drives to shift from the accuracy of bit/symbol to the semantics and pragmatics of data. Nevertheless, DL-based SemCom systems often face performance bottlenecks due to overfitting, poor generalization, and sensitivity to outliers. Furthermore, the varying-fading gains and noises with uncertain signal-to-noise ratios (SNRs) commonly present in wireless channels usually restrict the accuracy of semantic information transmission. Consequently, this paper constructs a latent diffusion model-enabled SemCom system, and proposes three improvements compared to existing works: i) To handle potential outliers in the source data, semantic errors obtained by projected gradient descent based on the vulnerabilities of DL models, are utilized to update the parameters and obtain an outlier-robust encoder. ii) A lightweight single-layer latent space transformation adapter completes one-shot learning at the transmitter and is placed before the decoder at the receiver, enabling adaptation for out-of-distribution data and enhancing human-perceptual quality. iii) An end-to-end consistency distillation (EECD) strategy is used to distill the diffusion models trained in latent space, enabling deterministic single or few-step real-time denoising in various noisy channels while maintaining high semantic quality. Extensive numerical experiments across different datasets demonstrate the superiority of the proposed SemCom system, consistently proving its robustness to outliers, the capability to transmit data with unknown distributions, and the ability to perform real-time channel denoising tasks while preserving high human perceptual quality, outperforming the existing denoising approaches in semantic metrics.
|
http://arxiv.org/pdf/2406.06644v2
|
[
"Jianhua Pei",
"Cheng Feng",
"Ping Wang",
"Hina Tabassum",
"Dongyuan Shi"
] |
2024-06-24T23:41:23Z
|
2024-06-09T23:39:31Z
|
2406.17172
|
Robust Zero Trust Architecture: Joint Blockchain based Federated
learning and Anomaly Detection based Framework
|
This paper introduces a robust zero-trust architecture (ZTA) tailored for the decentralized system that empowers efficient remote work and collaboration within IoT networks. Using blockchain-based federated learning principles, our proposed framework includes a robust aggregation mechanism designed to counteract malicious updates from compromised clients, enhancing the security of the global learning process. Moreover, secure and reliable trust computation is essential for remote work and collaboration. The robust ZTA framework integrates anomaly detection and trust computation, ensuring secure and reliable device collaboration in a decentralized fashion. We introduce an adaptive algorithm that dynamically adjusts to varying user contexts, using unsupervised clustering to detect novel anomalies, like zero-day attacks. To ensure a reliable and scalable trust computation, we develop an algorithm that dynamically adapts to varying user contexts by employing incremental anomaly detection and clustering techniques to identify and share local and global anomalies between nodes. Future directions include scalability improvements, Dirichlet process for advanced anomaly detection, privacy-preserving techniques, and the integration of post-quantum cryptographic methods to safeguard against emerging quantum threats.
|
http://arxiv.org/pdf/2406.17172v1
|
[
"Shiva Raj Pokhrel",
"Luxing Yang",
"Sutharshan Rajasegarar",
"Gang Li"
] |
2024-06-24T23:15:19Z
|
2024-06-24T23:15:19Z
|
2406.17168
|
Reinforcement Learning via Auxiliary Task Distillation
|
We present Reinforcement Learning via Auxiliary Task Distillation (AuxDistill), a new method that enables reinforcement learning (RL) to perform long-horizon robot control problems by distilling behaviors from auxiliary RL tasks. AuxDistill achieves this by concurrently carrying out multi-task RL with auxiliary tasks, which are easier to learn and relevant to the main task. A weighted distillation loss transfers behaviors from these auxiliary tasks to solve the main task. We demonstrate that AuxDistill can learn a pixels-to-actions policy for a challenging multi-stage embodied object rearrangement task from the environment reward without demonstrations, a learning curriculum, or pre-trained skills. AuxDistill achieves $2.3 times$ higher success than the previous state-of-the-art baseline in the Habitat Object Rearrangement benchmark and outperforms methods that use pre-trained skills and expert demonstrations.
|
http://arxiv.org/pdf/2406.17168v1
|
[
"Abhinav Narayan Harish",
"Larry Heck",
"Josiah P. Hanna",
"Zsolt Kira",
"Andrew Szot"
] |
2024-06-24T23:02:18Z
|
2024-06-24T23:02:18Z
|
2406.17167
|
Learning on Transformers is Provable Low-Rank and Sparse: A One-layer
Analysis
|
Efficient training and inference algorithms, such as low-rank adaption and model pruning, have shown impressive performance for learning Transformer-based large foundation models. However, due to the technical challenges of the non-convex optimization caused by the complicated architecture of Transformers, the theoretical study of why these methods can be applied to learn Transformers is mostly elusive. To the best of our knowledge, this paper shows the first theoretical analysis of the property of low-rank and sparsity of one-layer Transformers by characterizing the trained model after convergence using stochastic gradient descent. By focusing on a data model based on label-relevant and label-irrelevant patterns, we quantify that the gradient updates of trainable parameters are low-rank, which depends on the number of label-relevant patterns. We also analyze how model pruning affects the generalization while improving computation efficiency and conclude that proper magnitude-based pruning has a slight effect on the testing performance. We implement numerical experiments to support our findings.
|
http://arxiv.org/pdf/2406.17167v1
|
[
"Hongkang Li",
"Meng Wang",
"Shuai Zhang",
"Sijia Liu",
"Pin-Yu Chen"
] |
2024-06-24T23:00:58Z
|
2024-06-24T23:00:58Z
|
2212.01529
|
Laplacian Convolutional Representation for Traffic Time Series
Imputation
|
Spatiotemporal traffic data imputation is of great significance in intelligent transportation systems and data-driven decision-making processes. To perform efficient learning and accurate reconstruction from partially observed traffic data, we assert the importance of characterizing both global and local trends in time series. In the literature, substantial works have demonstrated the effectiveness of utilizing the low-rank property of traffic data by matrix/tensor completion models. In this study, we first introduce a Laplacian kernel to temporal regularization for characterizing local trends in traffic time series, which can be formulated as a circular convolution. Then, we develop a low-rank Laplacian convolutional representation (LCR) model by putting the circulant matrix nuclear norm and the Laplacian kernelized temporal regularization together, which is proved to meet a unified framework that has a fast Fourier transform (FFT) solution in log-linear time complexity. Through extensive experiments on several traffic datasets, we demonstrate the superiority of LCR over several baseline models for imputing traffic time series of various time series behaviors (e.g., data noises and strong/weak periodicity) and reconstructing sparse speed fields of vehicular traffic flow. The proposed LCR model is also an efficient solution to large-scale traffic data imputation over the existing imputation models.
|
http://arxiv.org/pdf/2212.01529v3
|
[
"Xinyu Chen",
"Zhanhong Cheng",
"HanQin Cai",
"Nicolas Saunier",
"Lijun Sun"
] |
2024-06-24T22:52:28Z
|
2022-12-03T04:08:56Z
|
2406.17163
|
Paraphrase and Aggregate with Large Language Models for Minimizing
Intent Classification Errors
|
Large language models (LLM) have achieved remarkable success in natural language generation but lesser focus has been given to their applicability in decision making tasks such as classification. We show that LLMs like LLaMa can achieve high performance on large multi-class classification tasks but still make classification errors and worse, generate out-of-vocabulary class labels. To address these critical issues, we introduce Paraphrase and AGgregate (PAG)-LLM approach wherein an LLM generates multiple paraphrases of the input query (parallel queries), performs multi-class classification for the original query and each paraphrase, and at the end aggregate all the classification labels based on their confidence scores. We evaluate PAG-LLM on two large multi-class classication datasets: CLINC, and Banking and show 22.7% and 15.1% error reduction. We show that PAG-LLM is especially effective for hard examples where LLM is uncertain, and reduces the critical misclassification and hallucinated label generation errors
|
http://arxiv.org/pdf/2406.17163v1
|
[
"Vikas Yadav",
"Zheng Tang",
"Vijay Srinivasan"
] |
2024-06-24T22:30:26Z
|
2024-06-24T22:30:26Z
|
2406.17162
|
Virtual Mines -- Component-level recycling of printed circuit boards
using deep learning
|
This contribution gives an overview of an ongoing project using machine learning and computer vision components for improving the electronic waste recycling process. In circular economy, the "virtual mines" concept refers to production cycles where interesting raw materials are reclaimed in an efficient and cost-effective manner from end-of-life items. In particular, the growth of e-waste, due to the increasingly shorter life cycle of hi-tech goods, is a global problem. In this paper, we describe a pipeline based on deep learning model to recycle printed circuit boards at the component level. A pre-trained YOLOv5 model is used to analyze the results of the locally developed dataset. With a different distribution of class instances, YOLOv5 managed to achieve satisfactory precision and recall, with the ability to optimize with large component instances.
|
http://arxiv.org/pdf/2406.17162v1
|
[
"Muhammad Mohsin",
"Stefano Rovetta",
"Francesco Masulli",
"Alberto Cabri"
] |
2024-06-24T22:29:30Z
|
2024-06-24T22:29:30Z
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.