arxiv_id | title | abstract | link | authors | updated | published
---|---|---|---|---|---|---
2407.08270
|
SciQu: Accelerating Materials Properties Prediction with Automated
Literature Mining for Self-Driving Laboratories
|
Assessing different material properties to predict specific attributes, such as band gap, resistivity, Young's modulus, work function, and refractive index, is a fundamental requirement for materials science-based applications. However, the process is time-consuming and often requires extensive literature reviews and numerous experiments. Our study addresses these challenges by leveraging machine learning to analyze material properties with greater precision and efficiency. By automating the data extraction process and using the extracted information to train machine learning models, our developed model, SciQu, optimizes material properties. As a proof of concept, we predicted the refractive index of materials using data extracted from numerous research articles with SciQu, considering input descriptors such as space group, volume, and bandgap, achieving a Root Mean Square Error (RMSE) of 0.068 and an R² of 0.94. Thus, SciQu not only predicts the properties of materials but also plays a key role in self-driving laboratories by optimizing the synthesis parameters to achieve the precise shape, size, and phase of the materials given the input parameters.
|
http://arxiv.org/pdf/2407.08270v1
|
[
"Anand Babu"
] |
2024-07-11T08:12:46Z
|
2024-07-11T08:12:46Z
|
2407.08257
|
Knowledge distillation to effectively attain both region-of-interest and
global semantics from an image where multiple objects appear
|
Models based on convolutional neural networks (CNNs) and transformers have steadily improved and have been applied to various computer vision downstream tasks. However, in object detection tasks, accurately localizing and classifying almost infinite categories of foods in images remains challenging. To address these problems, we first segmented the food as the region-of-interest (ROI) using the segment-anything model (SAM) and masked the rest of the region outside the ROI as black pixels. This process simplified the problem into a single classification task, for which annotation and training were much simpler than for object detection. The images in which only the ROI was preserved were fed as inputs to fine-tune various off-the-shelf models that encoded their own inductive biases. Among them, Data-efficient image Transformers (DeiTs) had the best classification performance. Nonetheless, when foods' shapes and textures were similar, the contextual features of the ROI-only images were not enough for accurate classification. Therefore, we introduced a novel type of combined architecture, RveRNet, which consists of ROI, extra-ROI, and integration modules, allowing it to account for both the ROI's and global contexts. The RveRNet's F1 score was 10% better than those of other individual models when classifying ambiguous food images. The RveRNet performed best when its modules were DeiTs with knowledge distillation from the CNN. We investigated how architectures can be made robust against input noise caused by permutation and translocation. The results indicated a trade-off between how much of the CNN teacher's knowledge could be distilled to the DeiT and the DeiT's innate strength. Code is publicly available at: https://github.com/Seonwhee-Genome/RveRNet.
|
http://arxiv.org/pdf/2407.08257v1
|
[
"Seonwhee Jin"
] |
2024-07-11T07:57:33Z
|
2024-07-11T07:57:33Z
|
2407.08256
|
Adaptive Compressed Sensing with Diffusion-Based Posterior Sampling
|
Compressed Sensing (CS) facilitates rapid image acquisition by selecting a small subset of measurements sufficient for high-fidelity reconstruction. Adaptive CS seeks to further enhance this process by dynamically choosing future measurements based on information gleaned from data that is already acquired. However, many existing frameworks are often tailored to specific tasks and require intricate training procedures. We propose AdaSense, a novel Adaptive CS approach that leverages zero-shot posterior sampling with pre-trained diffusion models. By sequentially sampling from the posterior distribution, we can quantify the uncertainty of each possible future linear measurement throughout the acquisition process. AdaSense eliminates the need for additional training and boasts seamless adaptation to diverse domains with minimal tuning requirements. Our experiments demonstrate the effectiveness of AdaSense in reconstructing facial images from a small number of measurements. Furthermore, we apply AdaSense for active acquisition of medical images in the domains of magnetic resonance imaging (MRI) and computed tomography (CT), highlighting its potential for tangible real-world acceleration.
|
http://arxiv.org/pdf/2407.08256v1
|
[
"Noam Elata",
"Tomer Michaeli",
"Michael Elad"
] |
2024-07-11T07:56:17Z
|
2024-07-11T07:56:17Z
|
2407.08255
|
GraphMamba: An Efficient Graph Structure Learning Vision Mamba for
Hyperspectral Image Classification
|
Efficient extraction of spectral sequences and geospatial information has always been a hot topic in hyperspectral image classification. In terms of spectral sequence feature capture, RNN and Transformer have become mainstream classification frameworks due to their long-range feature capture capabilities. In terms of spatial information aggregation, CNN enhances the receptive field to retain integrated spatial information as much as possible. However, the spectral feature-capturing architectures exhibit low computational efficiency, and CNNs lack the flexibility to perceive spatial contextual information. To address these issues, this paper proposes GraphMamba--an efficient graph structure learning vision Mamba classification framework that fully considers HSI characteristics to achieve deep spatial-spectral information mining. Specifically, we propose a novel hyperspectral visual GraphMamba processing paradigm (HVGM) that preserves spatial-spectral features by constructing spatial-spectral cubes and utilizes linear spectral encoding to enhance the operability of subsequent tasks. The core components of GraphMamba include the HyperMamba module for improving computational efficiency and the SpectralGCN module for adaptive spatial context awareness. The HyperMamba mitigates clutter interference by employing the global mask (GM) and introduces a parallel training inference architecture to alleviate computational bottlenecks. The SpectralGCN incorporates weighted multi-hop aggregation (WMA) spatial encoding to focus on highly correlated spatial structural features, thus flexibly aggregating contextual information while mitigating spatial noise interference. Extensive experiments were conducted on three different scales of real HSI datasets, and compared with the state-of-the-art classification frameworks, GraphMamba achieved optimal performance.
|
http://arxiv.org/pdf/2407.08255v1
|
[
"Aitao Yang",
"Min Li",
"Yao Ding",
"Leyuan Fang",
"Yaoming Cai",
"Yujie He"
] |
2024-07-11T07:56:08Z
|
2024-07-11T07:56:08Z
|
2405.20278
|
Length independent generalization bounds for deep SSM architectures
|
Many state-of-the-art models trained on long-range sequences, for example S4, S5 or LRU, are made of sequential blocks combining State-Space Models (SSMs) with neural networks. In this paper we provide a PAC bound that holds for these kinds of architectures with stable SSM blocks and does not depend on the length of the input sequence. Imposing stability of the SSM blocks is a standard practice in the literature, and it is known to help performance. Our results provide a theoretical justification for the use of stable SSM blocks, as the proposed PAC bound decreases as the degree of stability of the SSM blocks increases.
|
http://arxiv.org/pdf/2405.20278v2
|
[
"Dániel Rácz",
"Mihály Petreczky",
"Bálint Daróczy"
] |
2024-07-11T07:55:14Z
|
2024-05-30T17:32:46Z
|
2407.08250
|
Gradient Boosting Reinforcement Learning
|
Neural networks (NNs) achieve remarkable results in various tasks, but lack key characteristics: interpretability, support for categorical features, and lightweight implementations suitable for edge devices. While ongoing efforts aim to address these challenges, Gradient Boosting Trees (GBTs) inherently meet these requirements. As a result, GBTs have become the go-to method for supervised learning tasks in many real-world applications and competitions. However, their application in online learning scenarios, notably in reinforcement learning (RL), has been limited. In this work, we bridge this gap by introducing Gradient-Boosting RL (GBRL), a framework that extends the advantages of GBTs to the RL domain. Using the GBRL framework, we implement various actor-critic algorithms and compare their performance with their NN counterparts. Inspired by shared backbones in NNs, we introduce a tree-sharing approach for policy and value functions with distinct learning rates, enhancing learning efficiency over millions of interactions. GBRL achieves competitive performance across a diverse array of tasks, excelling in domains with structured or categorical features. Additionally, we present a high-performance, GPU-accelerated implementation that integrates seamlessly with widely-used RL libraries (available at https://github.com/NVlabs/gbrl). GBRL expands the toolkit for RL practitioners, demonstrating the viability and promise of GBTs within the RL paradigm, particularly in domains characterized by structured or categorical features.
|
http://arxiv.org/pdf/2407.08250v1
|
[
"Benjamin Fuhrer",
"Chen Tessler",
"Gal Dalal"
] |
2024-07-11T07:52:33Z
|
2024-07-11T07:52:33Z
|
2407.08245
|
Feature Diversification and Adaptation for Federated Domain
Generalization
|
Federated learning, a distributed learning paradigm, utilizes multiple clients to build a robust global model. In real-world applications, local clients often operate within their limited domains, leading to a `domain shift' across clients. Privacy concerns limit each client's learning to its own domain data, which increases the risk of overfitting. Moreover, aggregating models trained on their own limited domains can potentially lead to a significant degradation in global model performance. To deal with these challenges, we introduce the concept of federated feature diversification. Each client diversifies its own limited domain data by leveraging global feature statistics, i.e., the aggregated average statistics over all participating clients, shared through the global model's parameters. This data diversification helps local models learn client-invariant representations while preserving privacy. Our resultant global model shows robust performance on unseen test domain data. To enhance performance further, we develop an instance-adaptive inference approach tailored for test domain data. Our proposed instance feature adapter dynamically adjusts feature statistics to align with the test input, thereby reducing the domain gap between the test and training domains. We show that our method achieves state-of-the-art performance on several domain generalization benchmarks within a federated learning setting.
|
http://arxiv.org/pdf/2407.08245v1
|
[
"Seunghan Yang",
"Seokeon Choi",
"Hyunsin Park",
"Sungha Choi",
"Simyung Chang",
"Sungrack Yun"
] |
2024-07-11T07:45:10Z
|
2024-07-11T07:45:10Z
|
2406.18664
|
Evaluating Copyright Takedown Methods for Language Models
|
Language models (LMs) derive their capabilities from extensive training on diverse data, including potentially copyrighted material. These models can memorize and generate content similar to their training data, posing potential concerns. Therefore, model creators are motivated to develop mitigation methods that prevent generating protected content. We term this procedure copyright takedowns for LMs, noting the conceptual similarity to (but legal distinction from) the DMCA takedown. This paper introduces the first evaluation of the feasibility and side effects of copyright takedowns for LMs. We propose CoTaEval, an evaluation framework to assess the effectiveness of copyright takedown methods, the impact on the model's ability to retain uncopyrightable factual knowledge from the training data whose recitation is embargoed, and how well the model maintains its general utility and efficiency. We examine several strategies, including adding system prompts, decoding-time filtering interventions, and unlearning approaches. Our findings indicate that no tested method excels across all metrics, showing significant room for research in this unique problem setting and indicating potential unresolved challenges for live policy proposals.
|
http://arxiv.org/pdf/2406.18664v3
|
[
"Boyi Wei",
"Weijia Shi",
"Yangsibo Huang",
"Noah A. Smith",
"Chiyuan Zhang",
"Luke Zettlemoyer",
"Kai Li",
"Peter Henderson"
] |
2024-07-11T07:45:04Z
|
2024-06-26T18:09:46Z
|
2401.05363
|
Generalizable Sleep Staging via Multi-Level Domain Alignment
|
Automatic sleep staging is essential for sleep assessment and disorder diagnosis. Most existing methods depend on one specific dataset and generalize poorly to other unseen datasets, since their training data and testing data come from the same dataset. In this paper, we introduce domain generalization into automatic sleep staging and propose the task of generalizable sleep staging, which aims to improve model generalization to unseen datasets. Inspired by existing domain generalization methods, we adopt the feature alignment idea and propose a framework called SleepDG to solve it. Considering that both local salient features and sequential features are important for sleep staging, we propose a Multi-level Feature Alignment combining epoch-level and sequence-level feature alignment to learn domain-invariant feature representations. Specifically, we design an Epoch-level Feature Alignment to align the feature distribution of each single sleep epoch among different domains, and a Sequence-level Feature Alignment to minimize the discrepancy of sequential features among different domains. SleepDG is validated on five public datasets, achieving state-of-the-art performance.
|
http://arxiv.org/pdf/2401.05363v4
|
[
"Jiquan Wang",
"Sha Zhao",
"Haiteng Jiang",
"Shijian Li",
"Tao Li",
"Gang Pan"
] |
2024-07-11T07:38:32Z
|
2023-12-13T14:26:37Z
|
2407.08239
|
An Unsupervised Domain Adaptation Method for Locating Manipulated Region
in partially fake Audio
|
When the task of locating manipulation regions in partially-fake audio (PFA) involves cross-domain datasets, the performance of deep learning models drops significantly due to the shift between the source and target domains. To address this issue, existing approaches often employ data augmentation before training. However, they overlook the characteristics of the target domain that are absent in the source domain. Inspired by the mixture-of-experts model, we propose an unsupervised method named Samples mining with Diversity and Entropy (SDE). Our method first learns from a collection of diverse experts that achieve strong performance from different perspectives in the source domain, but with ambiguity on target samples. We leverage these diverse experts to select the most informative samples by calculating their entropy. Furthermore, we introduce a label generation method tailored to these selected samples, which are incorporated into the source-domain training process to integrate target-domain information. We applied our method to a cross-domain partially fake audio detection dataset, ADD2023Track2. By introducing 10% of unknown samples from the target domain, we achieved an F1 score of 43.84%, a relative increase of 77.2% over the second-best method.
|
http://arxiv.org/pdf/2407.08239v1
|
[
"Siding Zeng",
"Jiangyan Yi",
"Jianhua Tao",
"Yujie Chen",
"Shan Liang",
"Yong Ren",
"Xiaohui Zhang"
] |
2024-07-11T07:32:16Z
|
2024-07-11T07:32:16Z
|
2407.08233
|
Differentially Private Neural Network Training under Hidden State
Assumption
|
We present a novel approach called differentially private stochastic block coordinate descent (DP-SBCD) for training neural networks with provable guarantees of differential privacy under the hidden state assumption. Our methodology incorporates Lipschitz neural networks and decomposes the training process of the neural network into sub-problems, each corresponding to the training of a specific layer. By doing so, we extend the analysis of differential privacy under the hidden state assumption to encompass non-convex problems and algorithms employing proximal gradient descent. Furthermore, in contrast to existing methods, we adopt a novel approach by utilizing calibrated noise sampled from adaptive distributions, yielding improved empirical trade-offs between utility and privacy.
|
http://arxiv.org/pdf/2407.08233v1
|
[
"Ding Chen",
"Chen Liu"
] |
2024-07-11T07:14:40Z
|
2024-07-11T07:14:40Z
|
2407.08232
|
SwishReLU: A Unified Approach to Activation Functions for Enhanced Deep
Neural Networks Performance
|
ReLU, a commonly used activation function in deep neural networks, is prone to the issue of "Dying ReLU". Several enhanced versions, such as ELU, SeLU, and Swish, have been introduced but remain less commonly utilized. However, replacing ReLU can be somewhat challenging due to the inconsistent advantages of these alternatives. While Swish offers a smoother transition similar to ReLU, its utilization generally incurs a greater computational burden than ReLU. This paper proposes SwishReLU, a novel activation function combining elements of ReLU and Swish. Our findings reveal that SwishReLU outperforms ReLU in performance with a lower computational cost than Swish. This paper examines and compares different ReLU variants with SwishReLU. Specifically, we compare ELU and SeLU along with Tanh on three datasets: CIFAR-10, CIFAR-100 and MNIST. Notably, applying SwishReLU in the VGG16 model described in Algorithm 2 yields a 6% accuracy improvement on the CIFAR-10 dataset.
|
http://arxiv.org/pdf/2407.08232v1
|
[
"Jamshaid Ul Rahman",
"Rubiqa Zulfiqar",
"Asad Khan",
"Nimra"
] |
2024-07-11T07:14:34Z
|
2024-07-11T07:14:34Z
|
2405.09858
|
Towards Realistic Incremental Scenario in Class Incremental Semantic
Segmentation
|
This paper addresses the unrealistic aspect of the commonly adopted Continuous Incremental Semantic Segmentation (CISS) scenario, termed overlapped. We point out that overlapped allows the same image to reappear in future tasks with different pixel labels, which is far from practical incremental learning scenarios. Moreover, we identified that this flawed scenario may lead to biased results for two commonly used techniques in CISS, pseudo-labeling and exemplar memory, resulting in unintended advantages or disadvantages for certain techniques. To mitigate this, a practical scenario called partitioned is proposed, in which the dataset is first divided into distinct subsets representing each class, and then the subsets are assigned to each corresponding task. This efficiently addresses the issue above while meeting the requirements of the CISS scenario, such as capturing background shifts. Furthermore, we identify and address the code implementation issues related to retrieving data from the exemplar memory, which were ignored in previous works. Lastly, we introduce a simple yet competitive memory-based baseline, MiB-AugM, that handles background shifts of current tasks in the exemplar memory. This baseline achieves state-of-the-art results across multiple tasks involving learning numerous new classes.
|
http://arxiv.org/pdf/2405.09858v2
|
[
"Jihwan Kwak",
"Sungmin Cha",
"Taesup Moon"
] |
2024-07-11T07:09:00Z
|
2024-05-16T07:25:15Z
|
2407.08227
|
DALL-M: Context-Aware Clinical Data Augmentation with LLMs
|
X-ray images are vital in medical diagnostics, but their effectiveness is limited without clinical context. Radiologists often find chest X-rays insufficient for diagnosing underlying diseases, necessitating the integration of comprehensive clinical features and data. To address this, we introduce an approach to clinical data augmentation that employs large language models (LLMs) to generate patient contextual synthetic data, enhancing the applicability and reliability of AI in medical diagnostics. This methodology is crucial for training more robust deep learning models in healthcare. It preserves the integrity of real patient data while enriching the dataset with contextually relevant synthetic features, significantly enhancing model performance. DALL-M uses a three-phase feature generation process: (i) clinical context storage, (ii) expert query generation, and (iii) context-aware feature augmentation. DALL-M generates new, clinically relevant features by synthesizing chest X-ray images and reports. Applied to 799 cases using nine features from the MIMIC-IV dataset, it created an augmented set of 91 features. This is the first work to generate contextual values for existing and new features based on patients' X-ray reports, gender, and age, and to produce new contextual knowledge during data augmentation. Empirical validation with machine learning models, including Decision Trees, Random Forests, XGBoost, and TabNet, showed significant performance improvements: incorporating the augmented features increased the F1 score by 16.5% and Precision and Recall by approximately 25%. DALL-M addresses a critical gap in clinical data augmentation, offering a robust framework for generating contextually enriched datasets.
|
http://arxiv.org/pdf/2407.08227v1
|
[
"Chihcheng Hsieh",
"Catarina Moreira",
"Isabel Blanco Nobre",
"Sandra Costa Sousa",
"Chun Ouyang",
"Margot Brereton",
"Joaquim Jorge",
"Jacinto C. Nascimento"
] |
2024-07-11T07:01:50Z
|
2024-07-11T07:01:50Z
|
2407.08215
|
Enhancing Performance and User Engagement in Everyday Stress Monitoring:
A Context-Aware Active Reinforcement Learning Approach
|
In today's fast-paced world, accurately monitoring stress levels is crucial. Sensor-based stress monitoring systems often need large datasets for training effective models. However, individual-specific models are necessary for personalized and interactive scenarios. Traditional methods like Ecological Momentary Assessments (EMAs) assess stress but struggle with efficient data collection without burdening users. The challenge is to timely send EMAs, especially during stress, balancing monitoring efficiency and user convenience. This paper introduces a novel context-aware active reinforcement learning (RL) algorithm for enhanced stress detection using Photoplethysmography (PPG) data from smartwatches and contextual data from smartphones. Our approach dynamically selects optimal times for deploying EMAs, utilizing the user's immediate context to maximize label accuracy and minimize intrusiveness. Initially, the study was executed in an offline environment to refine the label collection process, aiming to increase accuracy while reducing user burden. Later, we integrated a real-time label collection mechanism, transitioning to an online methodology. This shift resulted in an 11% improvement in stress detection efficiency. Incorporating contextual data improved model accuracy by 4%. Personalization studies indicated a 10% enhancement in AUC-ROC scores, demonstrating better stress level differentiation. This research marks a significant move towards personalized, context-driven real-time stress monitoring methods.
|
http://arxiv.org/pdf/2407.08215v1
|
[
"Seyed Amir Hossein Aqajari",
"Ziyu Wang",
"Ali Tazarv",
"Sina Labbaf",
"Salar Jafarlou",
"Brenda Nguyen",
"Nikil Dutt",
"Marco Levorato",
"Amir M. Rahmani"
] |
2024-07-11T06:33:11Z
|
2024-07-11T06:33:11Z
|
2407.08214
|
Towards stable training of parallel continual learning
|
Parallel Continual Learning (PCL) tasks investigate training methods for continual learning with multi-source input, where data from different tasks are learned as they arrive. PCL offers high training efficiency and is well-suited for complex multi-source data systems, such as autonomous vehicles equipped with multiple sensors. However, multiple tasks need to be trained simultaneously at any time, leading to severe training instability in PCL. This instability manifests during both forward and backward propagation, where features are entangled and gradients conflict. This paper introduces Stable Parallel Continual Learning (SPCL), a novel approach that enhances the training stability of PCL for both forward and backward propagation. For forward propagation, we apply Doubly-block Toeplitz (DBT) matrix-based orthogonality constraints to network parameters to ensure stable and consistent propagation. For backward propagation, we employ orthogonal decomposition for gradient management, which stabilizes backpropagation and mitigates gradient conflicts across tasks. By optimizing gradients to ensure orthogonality and minimize the condition number, SPCL effectively stabilizes gradient descent in complex optimization tasks. Experimental results demonstrate that SPCL outperforms state-of-the-art methods and achieves better training stability.
|
http://arxiv.org/pdf/2407.08214v1
|
[
"Li Yuepan",
"Fan Lyu",
"Yuyang Li",
"Wei Feng",
"Guangcan Liu",
"Fanhua Shang"
] |
2024-07-11T06:31:04Z
|
2024-07-11T06:31:04Z
|
2012.15408
|
Gated Ensemble of Spatio-temporal Mixture of Experts for Multi-task
Learning in Ride-hailing System
|
Ride-hailing systems require efficient management of dynamic demand and supply to ensure optimal service delivery, pricing strategies, and operational efficiency. Designing spatio-temporal forecasting models separately in a task-wise and city-wise manner to forecast demand and the supply-demand gap in a ride-hailing system poses a burden for expanding transportation network companies. Therefore, a multi-task learning architecture is proposed in this study by developing a gated ensemble of spatio-temporal mixture of experts network (GESME-Net) with convolutional recurrent neural network (CRNN), convolutional neural network (CNN), and recurrent neural network (RNN) for simultaneously forecasting these spatio-temporal tasks in a city as well as across different cities. Furthermore, a task adaptation layer is integrated with the architecture for learning joint representation in multi-task learning and revealing the contribution of the input features utilized in prediction. The proposed architecture is tested with data from Didi Chuxing for: (i) simultaneously forecasting demand and the supply-demand gap in Beijing, and (ii) simultaneously forecasting demand across Chengdu and Xi'an. In both scenarios, models from our proposed architecture outperformed the single-task and multi-task deep learning benchmarks and ensemble-based machine learning algorithms.
|
http://arxiv.org/pdf/2012.15408v5
|
[
"M. H. Rahman",
"S. M. Rifaat",
"S. N. Sadeek",
"M. Abrar",
"D. Wang"
] |
2024-07-11T06:18:12Z
|
2020-12-31T02:42:27Z
|
2407.08205
|
OPIMA: Optical Processing-In-Memory for Convolutional Neural Network
Acceleration
|
Recent advances in machine learning (ML) have spotlighted the pressing need for computing architectures that bridge the gap between memory bandwidth and processing power. The advent of deep neural networks has pushed traditional Von Neumann architectures to their limits due to the high latency and energy consumption costs associated with data movement between the processor and memory for these workloads. One of the solutions to overcome this bottleneck is to perform computation within the main memory through processing-in-memory (PIM), thereby limiting data movement and the costs associated with it. However, DRAM-based PIM struggles to achieve high throughput and energy efficiency due to internal data movement bottlenecks and the need for frequent refresh operations. In this work, we introduce OPIMA, a PIM-based ML accelerator, architected within an optical main memory. OPIMA has been designed to leverage the inherent massive parallelism within main memory while performing high-speed, low-energy optical computation to accelerate ML models based on convolutional neural networks. We present a comprehensive analysis of OPIMA to guide design choices and operational mechanisms. Additionally, we evaluate the performance and energy consumption of OPIMA, comparing it with conventional electronic computing systems and emerging photonic PIM architectures. The experimental results show that OPIMA can achieve 2.98x higher throughput and 137x better energy efficiency than the best-known prior work.
|
http://arxiv.org/pdf/2407.08205v1
|
[
"Febin Sunny",
"Amin Shafiee",
"Abhishek Balasubramaniam",
"Mahdi Nikdast",
"Sudeep Pasricha"
] |
2024-07-11T06:12:04Z
|
2024-07-11T06:12:04Z
|
2401.01325
|
LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
|
It is well known that LLMs cannot generalize well to long contexts whose lengths are larger than the training sequence length. This poses challenges when employing LLMs for processing long input sequences during inference. In this work, we argue that LLMs themselves have inherent capabilities to handle long contexts without fine-tuning. To achieve this goal, we propose SelfExtend to extend the context window of LLMs by constructing bi-level attention information: the grouped attention and the neighbor attention. The grouped attention captures the dependencies among tokens that are far apart, while neighbor attention captures dependencies among adjacent tokens within a specified range. The two-level attentions are computed based on the original model's self-attention mechanism during inference. With minor code modification, our SelfExtend can effortlessly extend existing LLMs' context window without any fine-tuning. We conduct comprehensive experiments on multiple benchmarks and the results show that our SelfExtend can effectively extend existing LLMs' context window length. The code can be found at https://github.com/datamllab/LongLM.
|
http://arxiv.org/pdf/2401.01325v3
|
[
"Hongye Jin",
"Xiaotian Han",
"Jingfeng Yang",
"Zhimeng Jiang",
"Zirui Liu",
"Chia-Yuan Chang",
"Huiyuan Chen",
"Xia Hu"
] |
2024-07-11T06:11:46Z
|
2024-01-02T18:30:51Z
|
2407.07457
|
GLBench: A Comprehensive Benchmark for Graph with Large Language Models
|
The emergence of large language models (LLMs) has revolutionized the way we interact with graphs, leading to a new paradigm called GraphLLM. Despite the rapid development of GraphLLM methods in recent years, the progress and understanding of this field remain unclear due to the lack of a benchmark with consistent experimental protocols. To bridge this gap, we introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios. GLBench provides a fair and thorough evaluation of different categories of GraphLLM methods, along with traditional baselines such as graph neural networks. Through extensive experiments on a collection of real-world datasets with consistent data processing and splitting strategies, we have uncovered several key findings. Firstly, GraphLLM methods outperform traditional baselines in supervised settings, with LLM-as-enhancers showing the most robust performance. However, using LLMs as predictors is less effective and often leads to uncontrollable output issues. We also notice that no clear scaling laws exist for current GraphLLM methods. In addition, both structures and semantics are crucial for effective zero-shot transfer, and our proposed simple baseline can even outperform several models tailored for zero-shot scenarios. The data and code of the benchmark can be found at https://github.com/NineAbyss/GLBench.
|
http://arxiv.org/pdf/2407.07457v2
|
[
"Yuhan Li",
"Peisong Wang",
"Xiao Zhu",
"Aochuan Chen",
"Haiyun Jiang",
"Deng Cai",
"Victor Wai Kin Chan",
"Jia Li"
] |
2024-07-11T06:06:33Z
|
2024-07-10T08:20:47Z
|
2401.07039
|
Quantum Generative Diffusion Model: A Fully Quantum-Mechanical Model for
Generating Quantum State Ensemble
|
Classical diffusion models have shown superior generative results. Exploring them in the quantum domain can advance the field of quantum generative learning. This work introduces the Quantum Generative Diffusion Model (QGDM) as their simple and elegant quantum counterpart. Through a non-unitary forward process, any target quantum state can be transformed into a completely mixed state that has the highest entropy and maximum uncertainty about the system. A trainable backward process is used to recover the former from the latter. The design requirements for its backward process include non-unitarity and a small parameter count. We introduce partial trace operations to enforce non-unitarity, and reduce the number of trainable parameters by using a parameter-sharing strategy and incorporating temporal information as an input in the backward process. We present QGDM's resource-efficient version to reduce auxiliary qubits while preserving generative capabilities. QGDM exhibits faster convergence than the Quantum Generative Adversarial Network (QGAN) because it adopts convex-based optimization. The results of comparing it with QGAN demonstrate its effectiveness in generating both pure and mixed quantum states. It can achieve 53.02% higher fidelity in mixed-state generation than QGAN. The results highlight its great potential to tackle challenging quantum generation tasks.
|
http://arxiv.org/pdf/2401.07039v3
|
[
"Chuangtao Chen",
"Qinglin Zhao",
"MengChu Zhou",
"Zhimin He",
"Zhili Sun",
"Haozhen Situ"
] |
2024-07-11T05:46:04Z
|
2024-01-13T10:56:34Z
|
2407.02419
|
Quantum Curriculum Learning
|
Quantum machine learning (QML) requires significant quantum resources to achieve quantum advantage. Research should prioritize both the efficient design of quantum architectures and the development of learning strategies to optimize resource usage. We propose a framework called quantum curriculum learning (Q-CurL) for quantum data, where the curriculum introduces simpler tasks or data to the learning model before progressing to more challenging ones. We define the curriculum criteria based on the data density ratio between tasks to determine the curriculum order. We also implement a dynamic learning schedule to emphasize the significance of quantum data in optimizing the loss function. Empirical evidence shows that Q-CurL significantly enhances the training convergence and the generalization for unitary learning tasks and improves the robustness of quantum phase recognition tasks. Our framework provides a general learning strategy, bringing QML closer to realizing practical advantages.
|
http://arxiv.org/pdf/2407.02419v2
|
[
"Quoc Hoan Tran",
"Yasuhiro Endo",
"Hirotaka Oshima"
] |
2024-07-11T05:42:23Z
|
2024-07-02T16:44:14Z
|
2407.08765
|
Approximating G(t)/GI/1 queues with deep learning
|
In this paper, we apply a supervised machine-learning approach to solve a fundamental problem in queueing theory: estimating the transient distribution of the number in the system for a G(t)/GI/1 queue. We develop a neural network mechanism that provides a fast and accurate predictor of these distributions for moderate horizon lengths and practical settings. It uses a Recurrent Neural Network (RNN) architecture fed with the first several moments of the time-dependent inter-arrival and the stationary service time distributions; we call it the Moment-Based Recurrent Neural Network (MBRNN) method. Our empirical study suggests the MBRNN requires only the first four inter-arrival and service time moments. We use simulation to generate a substantial training dataset and present a thorough performance evaluation to examine the accuracy of our method using two different test sets. We show that even under the configuration with the worst performance errors, the mean number of customers over the entire timeline has an error of less than 3%. While simulation modeling can achieve high accuracy, the advantage of the MBRNN over simulation is runtime: the MBRNN analyzes hundreds of systems within a fraction of a second. This paper focuses on a G(t)/GI/1 queue; however, the MBRNN approach demonstrated here can be extended to other queueing systems, as the training data labeling is based on simulations (which can be applied to more complex systems) and the training is based on deep learning, which can capture very complex time sequence tasks. In summary, the MBRNN can potentially revolutionize our ability to perform transient analyses of queueing systems.
|
http://arxiv.org/pdf/2407.08765v1
|
[
"Eliran Sherzer",
"Opher Baron",
"Dmitry Krass",
"Yehezkel Resheff"
] |
2024-07-11T05:25:45Z
|
2024-07-11T05:25:45Z
|
2403.14067
|
Automatic Outlier Rectification via Optimal Transport
|
In this paper, we propose a novel conceptual framework to detect outliers using optimal transport with a concave cost function. Conventional outlier detection approaches typically use a two-stage procedure: first, outliers are detected and removed, and then estimation is performed on the cleaned data. However, this approach does not inform outlier removal with the estimation task, leaving room for improvement. To address this limitation, we propose an automatic outlier rectification mechanism that integrates rectification and estimation within a joint optimization framework. We take the first step to utilize the optimal transport distance with a concave cost function to construct a rectification set in the space of probability distributions. Then, we select the best distribution within the rectification set to perform the estimation task. Notably, the concave cost function we introduced in this paper is the key to making our estimator effectively identify the outlier during the optimization process. We demonstrate the effectiveness of our approach over conventional approaches in simulations and empirical analyses for mean estimation, least absolute regression, and the fitting of option implied volatility surfaces.
|
http://arxiv.org/pdf/2403.14067v2
|
[
"Jose Blanchet",
"Jiajin Li",
"Markus Pelger",
"Greg Zanotti"
] |
2024-07-11T05:22:42Z
|
2024-03-21T01:30:24Z
|
2407.08192
|
ARCO:Adaptive Multi-Agent Reinforcement Learning-Based Hardware/Software
Co-Optimization Compiler for Improved Performance in DNN Accelerator Design
|
This paper presents ARCO, an adaptive Multi-Agent Reinforcement Learning (MARL)-based co-optimizing compilation framework designed to enhance the efficiency of mapping machine learning (ML) models - such as Deep Neural Networks (DNNs) - onto diverse hardware platforms. The framework incorporates three specialized actor-critic agents within MARL, each dedicated to a distinct aspect of compilation/optimization at an abstract level: one agent focuses on hardware, while two agents focus on software optimizations. This integration results in a collaborative hardware/software co-optimization strategy that improves the precision and speed of DNN deployments. Concentrating on high-confidence configurations simplifies the search space and delivers superior performance compared to current optimization methods. The ARCO framework surpasses existing leading frameworks, achieving a throughput increase of up to 37.95% while reducing the optimization time by up to 42.2% across various DNNs.
|
http://arxiv.org/pdf/2407.08192v1
|
[
"Arya Fayyazi",
"Mehdi Kamal",
"Massoud Pedram"
] |
2024-07-11T05:22:04Z
|
2024-07-11T05:22:04Z
|
2407.08188
|
Position: Measure Dataset Diversity, Don't Just Claim It
|
Machine learning (ML) datasets, often perceived as neutral, inherently encapsulate abstract and disputed social constructs. Dataset curators frequently employ value-laden terms such as diversity, bias, and quality to characterize datasets. Despite their prevalence, these terms lack clear definitions and validation. Our research explores the implications of this issue by analyzing "diversity" across 135 image and text datasets. Drawing from social sciences, we apply principles from measurement theory to identify considerations and offer recommendations for conceptualizing, operationalizing, and evaluating diversity in datasets. Our findings have broader implications for ML research, advocating for a more nuanced and precise approach to handling value-laden properties in dataset construction.
|
http://arxiv.org/pdf/2407.08188v1
|
[
"Dora Zhao",
"Jerone T. A. Andrews",
"Orestis Papakyriakopoulos",
"Alice Xiang"
] |
2024-07-11T05:13:27Z
|
2024-07-11T05:13:27Z
|
2407.08179
|
CoGS: Causality Constrained Counterfactual Explanations using
goal-directed ASP
|
Machine learning models are increasingly used in areas such as loan approvals and hiring, yet they often function as black boxes, obscuring their decision-making processes. Transparency is crucial, and individuals need explanations to understand decisions, especially for the ones not desired by the user. Ethical and legal considerations require informing individuals of changes in input attribute values (features) that could lead to a desired outcome for the user. Our work aims to generate counterfactual explanations by considering causal dependencies between features. We present the CoGS (Counterfactual Generation with s(CASP)) framework that utilizes the goal-directed Answer Set Programming system s(CASP) to generate counterfactuals from rule-based machine learning models, specifically the FOLD-SE algorithm. CoGS computes realistic and causally consistent changes to attribute values taking causal dependencies between them into account. It finds a path from an undesired outcome to a desired one using counterfactuals. We present details of the CoGS framework along with its evaluation.
|
http://arxiv.org/pdf/2407.08179v1
|
[
"Sopam Dasgupta",
"Joaquín Arias",
"Elmer Salazar",
"Gopal Gupta"
] |
2024-07-11T04:50:51Z
|
2024-07-11T04:50:51Z
|
2311.12550
|
Explainable Time Series Anomaly Detection using Masked Latent Generative
Modeling
|
We present a novel time series anomaly detection method that achieves excellent detection accuracy while offering a superior level of explainability. Our proposed method, TimeVQVAE-AD, leverages masked generative modeling adapted from the cutting-edge time series generation method known as TimeVQVAE. The prior model is trained on the discrete latent space of a time-frequency domain. Notably, the dimensional semantics of the time-frequency domain are preserved in the latent space, enabling us to compute anomaly scores across different frequency bands, which provides a better insight into the detected anomalies. Additionally, the generative nature of the prior model allows for sampling likely normal states for detected anomalies, enhancing the explainability of the detected anomalies through counterfactuals. Our experimental evaluation on the UCR Time Series Anomaly archive demonstrates that TimeVQVAE-AD significantly surpasses the existing methods in terms of detection accuracy and explainability. We provide our implementation on GitHub: https://github.com/ML4ITS/TimeVQVAE-AnomalyDetection.
|
http://arxiv.org/pdf/2311.12550v4
|
[
"Daesoo Lee",
"Sara Malacarne",
"Erlend Aune"
] |
2024-07-11T04:45:41Z
|
2023-11-21T11:59:16Z
|
2407.08176
|
Foundation Model Engineering: Engineering Foundation Models Just as
Engineering Software
|
By treating data and models as the source code, Foundation Models (FMs) become a new type of software. Mirroring the concept of the software crisis, the increasing complexity of FMs makes an FM crisis a tangible concern in the coming decade, calling for new theories and methodologies from the field of software engineering. In this paper, we outline our vision of introducing Foundation Model (FM) engineering, a strategic response to the anticipated FM crisis with principled engineering methodologies. FM engineering aims to mitigate potential issues in FM development and application through the introduction of declarative, automated, and unified programming interfaces for both data and model management, reducing the complexities involved in working with FMs by providing a more structured and intuitive process for developers. Through the establishment of FM engineering, we aim to provide a robust, automated, and extensible framework that addresses the imminent challenges and uncovers new research opportunities for the software engineering field.
|
http://arxiv.org/pdf/2407.08176v1
|
[
"Dezhi Ran",
"Mengzhou Wu",
"Wei Yang",
"Tao Xie"
] |
2024-07-11T04:40:02Z
|
2024-07-11T04:40:02Z
|
2407.08169
|
Faster Machine Unlearning via Natural Gradient Descent
|
We address the challenge of efficiently and reliably deleting data from machine learning models trained using Empirical Risk Minimization (ERM), a process known as machine unlearning. To avoid retraining models from scratch, we propose a novel algorithm leveraging Natural Gradient Descent (NGD). Our theoretical framework ensures strong privacy guarantees for convex models, while a practical Min/Max optimization algorithm is developed for non-convex models. Comprehensive evaluations show significant improvements in privacy, computational efficiency, and generalization compared to state-of-the-art methods, advancing both the theoretical and practical aspects of machine unlearning.
|
http://arxiv.org/pdf/2407.08169v1
|
[
"Omri Lev",
"Ashia Wilson"
] |
2024-07-11T04:19:28Z
|
2024-07-11T04:19:28Z
|
2407.08166
|
Synthetic Electroretinogram Signal Generation Using Conditional
Generative Adversarial Network for Enhancing Classification of Autism
Spectrum Disorder
|
The electroretinogram (ERG) is a clinical test that records the retina's electrical response to light. The ERG is a promising way to study different neurodevelopmental and neurodegenerative disorders, including autism spectrum disorder (ASD) - a neurodevelopmental condition that impacts language, communication, and reciprocal social interactions. However, in heterogeneous populations, such as ASD, where the ability to collect large datasets is limited, the application of artificial intelligence (AI) is complicated. Synthetic ERG signals generated from real ERG recordings carry similar information as natural ERGs and, therefore, could be used as an extension for natural data to increase datasets so that AI applications can be fully utilized. As proof of principle, this study presents a Generative Adversarial Network capable of generating synthetic ERG signals of children with ASD and typically developing control individuals. We applied a Time Series Transformer and Visual Transformer with Continuous Wavelet Transform to enhance classification results on the extended synthetic signals dataset. This approach may support classification models in related psychiatric conditions where the ERG may help classify disorders.
|
http://arxiv.org/pdf/2407.08166v1
|
[
"Mikhail Kulyabin",
"Paul A. Constable",
"Aleksei Zhdanov",
"Irene O. Lee",
"David H. Skuse",
"Dorothy A. Thompson",
"Andreas Maier"
] |
2024-07-11T04:11:52Z
|
2024-07-11T04:11:52Z
|
2310.07338
|
From Supervised to Generative: A Novel Paradigm for Tabular Deep
Learning with Large Language Models
|
Tabular data is foundational to predictive modeling in various crucial industries, including healthcare, finance, retail, sustainability, etc. Despite the progress made in specialized models, there is an increasing demand for universal models that can transfer knowledge, generalize from limited data, and follow human instructions. These are challenges that current tabular deep learning approaches have not fully tackled. Here we introduce Generative Tabular Learning (GTL), a novel framework that integrates the advanced functionalities of large language models (LLMs)-such as prompt-based zero-shot generalization and in-context learning-into tabular deep learning. GTL capitalizes on the pre-training of LLMs on diverse tabular data, enhancing their understanding of domain-specific knowledge, numerical sequences, and statistical dependencies critical for accurate predictions. Our empirical study spans 384 public datasets, rigorously analyzing GTL's convergence and scaling behaviors and assessing the impact of varied data templates. The GTL-enhanced LLaMA-2 model demonstrates superior zero-shot and in-context learning capabilities across numerous classification and regression tasks. Notably, it achieves this without fine-tuning, outperforming traditional methods and rivaling state-of-the-art models like GPT-4 in certain cases. Through GTL, we not only foster a deeper integration of LLMs' sophisticated abilities into tabular data comprehension and application but also offer a new training resource and a test bed for LLMs to enhance their ability to comprehend tabular data. To facilitate reproducible research, we release our code, data, and model checkpoints at https://github.com/microsoft/Industrial-Foundation-Models.
|
http://arxiv.org/pdf/2310.07338v4
|
[
"Xumeng Wen",
"Han Zhang",
"Shun Zheng",
"Wei Xu",
"Jiang Bian"
] |
2024-07-11T04:09:19Z
|
2023-10-11T09:37:38Z
|
2407.07796
|
Evaluating Large Language Models with Grid-Based Game Competitions: An
Extensible LLM Benchmark and Leaderboard
|
We introduce a novel and extensible benchmark for large language models (LLMs) through grid-based games such as Tic-Tac-Toe, Connect Four, and Gomoku. The open-source game simulation code, available on GitHub, allows LLMs to compete and generates detailed data files in JSON, CSV, TXT, and PNG formats for leaderboard rankings and further analysis. We present the results of games among leading LLMs, including Claude 3.5 Sonnet and Claude 3 Sonnet by Anthropic, Gemini 1.5 Pro and Gemini 1.5 Flash by Google, GPT-4 Turbo and GPT-4o by OpenAI, and Llama3-70B by Meta. We also encourage submissions of results from other LLMs. In total, we simulated 2,310 matches (5 sessions for each pair among 7 LLMs and a random player) across three types of games, using three distinct prompt types: list, illustration, and image. The results revealed significant variations in LLM performance across different games and prompt types, with analysis covering win and disqualification rates, missed opportunity analysis, and invalid move analysis. The details of the leaderboard and result matrix data are available as open-access data on GitHub. This study enhances our understanding of LLMs' capabilities in playing games they were not specifically trained for, helping to assess their rule comprehension and strategic thinking. On the path to Artificial General Intelligence (AGI), this study lays the groundwork for future exploration into their utility in complex decision-making scenarios, illuminating their strategic thinking abilities and offering directions for further inquiry into the limits of LLMs within game-based frameworks.
|
http://arxiv.org/pdf/2407.07796v2
|
[
"Oguzhan Topsakal",
"Colby Jacob Edell",
"Jackson Bailey Harper"
] |
2024-07-11T03:46:35Z
|
2024-07-10T16:14:34Z
|
2110.09326
|
Graph convolutional network for predicting abnormal grain growth in
Monte Carlo simulations of microstructural evolution
|
Recent developments in graph neural networks show promise for predicting the occurrence of abnormal grain growth, which has been a particularly challenging area of research due to its apparent stochastic nature. In this study, we generate a large dataset of Monte Carlo simulations of abnormal grain growth. We train simple graph convolution networks to predict which initial microstructures will exhibit abnormal grain growth, and compare the results to a standard computer vision approach for the same task. The graph neural network outperformed the computer vision method and achieved 73% prediction accuracy and fewer false positives. It also provided some physical insight into feature importance and the relevant length scale required to maximize predictive performance. Analysis of the uncertainty in the Monte Carlo simulations provides additional insights for ongoing work in this area.
|
http://arxiv.org/pdf/2110.09326v2
|
[
"Ryan Cohn",
"Elizabeth Holm"
] |
2024-07-11T03:45:01Z
|
2021-10-18T13:50:43Z
|
2310.05797
|
In-Context Explainers: Harnessing LLMs for Explaining Black Box Models
|
Recent advancements in Large Language Models (LLMs) have demonstrated exceptional capabilities in complex tasks like machine translation, commonsense reasoning, and language understanding. One of the primary reasons for the adaptability of LLMs in such diverse tasks is their in-context learning (ICL) capability, which allows them to perform well on new tasks by simply using a few task samples in the prompt. Despite their effectiveness in enhancing the performance of LLMs on diverse language and tabular tasks, these methods have not been thoroughly explored for their potential to generate post hoc explanations. In this work, we carry out one of the first explorations to analyze the effectiveness of LLMs in explaining other complex predictive models using ICL. To this end, we propose a novel framework, In-Context Explainers, comprising three novel approaches that exploit the ICL capabilities of LLMs to explain the predictions made by other predictive models. We conduct extensive analysis with these approaches on real-world tabular and text datasets and demonstrate that LLMs are capable of explaining other predictive models similar to state-of-the-art post hoc explainers, opening up promising avenues for future research into LLM-based post hoc explanations of complex predictive models.
|
http://arxiv.org/pdf/2310.05797v4
|
[
"Nicholas Kroeger",
"Dan Ley",
"Satyapriya Krishna",
"Chirag Agarwal",
"Himabindu Lakkaraju"
] |
2024-07-11T03:42:12Z
|
2023-10-09T15:31:03Z
|
2402.16075
|
Don't Start from Scratch: Behavioral Refinement via Interpolant-based
Policy Diffusion
|
Imitation learning empowers artificial agents to mimic behavior by learning from demonstrations. Recently, diffusion models, which have the ability to model high-dimensional and multimodal distributions, have shown impressive performance on imitation learning tasks. These models learn to shape a policy by diffusing actions (or states) from standard Gaussian noise. However, the target policy to be learned is often significantly different from Gaussian and this mismatch can result in poor performance when using a small number of diffusion steps (to improve inference speed) and under limited data. The key idea in this work is that initiating from a more informative source than Gaussian enables diffusion methods to mitigate the above limitations. We contribute both theoretical results, a new method, and empirical findings that show the benefits of using an informative source policy. Our method, which we call BRIDGER, leverages the stochastic interpolants framework to bridge arbitrary policies, thus enabling a flexible approach towards imitation learning. It generalizes prior work in that standard Gaussians can still be applied, but other source policies can be used if available. In experiments on challenging simulation benchmarks and on real robots, BRIDGER outperforms state-of-the-art diffusion policies. We provide further analysis on design considerations when applying BRIDGER. Code for BRIDGER is available at https://github.com/clear-nus/bridger.
|
http://arxiv.org/pdf/2402.16075v4
|
[
"Kaiqi Chen",
"Eugene Lim",
"Kelvin Lin",
"Yiyang Chen",
"Harold Soh"
] |
2024-07-11T03:41:42Z
|
2024-02-25T12:19:21Z
|
2407.08159
|
Model-agnostic clean-label backdoor mitigation in cybersecurity
environments
|
The training phase of machine learning models is a delicate step, especially in cybersecurity contexts. Recent research has surfaced a series of insidious training-time attacks that inject backdoors in models designed for security classification tasks without altering the training labels. With this work, we propose new techniques that leverage insights in cybersecurity threat models to effectively mitigate these clean-label poisoning attacks, while preserving the model utility. By performing density-based clustering on a carefully chosen feature subspace, and progressively isolating the suspicious clusters through a novel iterative scoring procedure, our defensive mechanism can mitigate the attacks without requiring many of the common assumptions in the existing backdoor defense literature. To show the generality of our proposed mitigation, we evaluate it on two clean-label model-agnostic attacks on two different classic cybersecurity data modalities: network flows classification and malware classification, using gradient boosting and neural network models.
|
http://arxiv.org/pdf/2407.08159v1
|
[
"Giorgio Severi",
"Simona Boboila",
"John Holodnak",
"Kendra Kratkiewicz",
"Rauf Izmailov",
"Alina Oprea"
] |
2024-07-11T03:25:40Z
|
2024-07-11T03:25:40Z
|
2407.08152
|
Privacy-Preserving Data Deduplication for Enhancing Federated Learning
of Language Models
|
Deduplication is a vital preprocessing step that enhances machine learning model performance and saves training time and energy. However, enhancing federated learning through deduplication poses challenges, especially regarding scalability and potential privacy violations if deduplication involves sharing all clients' data. In this paper, we address the problem of deduplication in a federated setup by introducing a pioneering protocol, Efficient Privacy-Preserving Multi-Party Deduplication (EP-MPD). It efficiently removes duplicates from multiple clients' datasets without compromising data privacy. EP-MPD is constructed in a modular fashion, utilizing two novel variants of the Private Set Intersection protocol. Our extensive experiments demonstrate the significant benefits of deduplication in federated learning of large language models. For instance, we observe up to 19.61% improvement in perplexity and up to 27.95% reduction in running time. EP-MPD effectively balances privacy and performance in federated learning, making it a valuable solution for large-scale applications.
|
http://arxiv.org/pdf/2407.08152v1
|
[
"Aydin Abadi",
"Vishnu Asutosh Dasu",
"Sumanta Sarkar"
] |
2024-07-11T03:10:27Z
|
2024-07-11T03:10:27Z
|
2407.06645
|
Entropy Law: The Story Behind Data Compression and LLM Performance
|
Data is the cornerstone of large language models (LLMs), but not all data is useful for model learning. Carefully selected data can better elicit the capabilities of LLMs with much less computational overhead. Most methods concentrate on evaluating the quality of individual samples in data selection, while the combinatorial effects among samples are neglected. Even if each sample is of perfect quality, their combinations may be suboptimal in teaching LLMs due to their intrinsic homogeneity or contradiction. In this paper, we aim to uncover the underlying relationships between LLM performance and data selection. Inspired by the information compression nature of LLMs, we uncover an "entropy law" that connects LLM performance with data compression ratio and first-epoch training loss, which reflect the information redundancy of a dataset and the mastery of inherent knowledge encoded in this dataset, respectively. Through both theoretical deduction and empirical evaluation, we find that model performance is negatively correlated to the compression ratio of training data, which usually yields a lower training loss. Based on the findings of the entropy law, we propose a quite efficient and universal data selection method named ZIP for training LLMs, which aims to prioritize data subsets exhibiting a low compression ratio. Based on a multi-stage algorithm that selects diverse data in a greedy manner, we can obtain a good data subset with satisfactory diversity. Extensive experiments have been conducted to validate the entropy law and the superiority of ZIP across different LLM backbones and alignment stages. We also present an interesting application of entropy law that can detect potential performance risks at the beginning of model training.
|
http://arxiv.org/pdf/2407.06645v3
|
[
"Mingjia Yin",
"Chuhan Wu",
"Yufei Wang",
"Hao Wang",
"Wei Guo",
"Yasheng Wang",
"Yong Liu",
"Ruiming Tang",
"Defu Lian",
"Enhong Chen"
] |
2024-07-11T03:06:45Z
|
2024-07-09T08:14:29Z
|
2407.07801
|
AVCap: Leveraging Audio-Visual Features as Text Tokens for Captioning
|
In recent years, advancements in representation learning and language models have propelled Automated Captioning (AC) to new heights, enabling the generation of human-level descriptions. Leveraging these advancements, we propose AVCap, an Audio-Visual Captioning framework, a simple yet powerful baseline approach applicable to audio-visual captioning. AVCap utilizes audio-visual features as text tokens, which has many advantages not only in performance but also in the extensibility and scalability of the model. AVCap is designed around three pivotal dimensions: the exploration of optimal audio-visual encoder architectures, the adaptation of pre-trained models according to the characteristics of generated text, and the investigation into the efficacy of modality fusion in captioning. Our method outperforms existing audio-visual captioning methods across all metrics and the code is available on https://github.com/JongSuk1/AVCap
|
http://arxiv.org/pdf/2407.07801v2
|
[
"Jongsuk Kim",
"Jiwon Shin",
"Junmo Kim"
] |
2024-07-11T02:38:14Z
|
2024-07-10T16:17:49Z
|
2407.08134
|
Highway Networks for Improved Surface Reconstruction: The Role of
Residuals and Weight Updates
|
Surface reconstruction from point clouds is a fundamental challenge in computer graphics and medical imaging. In this paper, we explore the application of advanced neural network architectures for the accurate and efficient reconstruction of surfaces from data points. We introduce a novel variant of the Highway network (Hw) called Square-Highway (SqrHw) within the context of multilayer perceptrons and investigate its performance alongside plain neural networks and a simplified Hw in various numerical examples. These examples include the reconstruction of simple and complex surfaces, such as spheres, human hands, and intricate models like the Stanford Bunny. We analyze the impact of factors such as the number of hidden layers, interior and exterior points, and data distribution on surface reconstruction quality. Our results show that the proposed SqrHw architecture outperforms other neural network configurations, achieving faster convergence and higher-quality surface reconstructions. Additionally, we demonstrate the SqrHw's ability to predict surfaces over missing data, a valuable feature for challenging applications like medical imaging. Furthermore, our study delves into further details, demonstrating that the proposed method based on highway networks yields more stable weight norms and backpropagation gradients compared to the Plain Network architecture. This research not only advances the field of computer graphics but also holds utility for other purposes such as function interpolation and physics-informed neural networks, which integrate multilayer perceptrons into their algorithms.
|
http://arxiv.org/pdf/2407.08134v1
|
[
"A. Noorizadegan",
"Y. C. Hon",
"D. L. Young",
"C. S. Chen"
] |
2024-07-11T02:15:21Z
|
2024-07-11T02:15:21Z
|
2301.13803
|
Fairness-aware Vision Transformer via Debiased Self-Attention
|
Vision Transformer (ViT) has recently gained significant attention in solving computer vision (CV) problems due to its capability of extracting informative features and modeling long-range dependencies through the attention mechanism. Whereas recent works have explored the trustworthiness of ViT, including its robustness and explainability, the issue of fairness has not yet been adequately addressed. We establish that the existing fairness-aware algorithms designed for CNNs do not perform well on ViT, which highlights the need to develop our novel framework via Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that enforces ViT to eliminate spurious features correlated with the sensitive label for bias mitigation and simultaneously retain real features for target prediction. Notably, DSA leverages adversarial examples to locate and mask the spurious features in the input image patches with an additional attention weights alignment regularizer in the training objective to encourage learning real features for target prediction. Importantly, our DSA framework leads to improved fairness guarantees over prior works on multiple prediction tasks without compromising target prediction performance. Code is available at https://github.com/qiangyao1988/DSA.
|
http://arxiv.org/pdf/2301.13803v3
|
[
"Yao Qiang",
"Chengyin Li",
"Prashant Khanduri",
"Dongxiao Zhu"
] |
2024-07-11T02:11:49Z
|
2023-01-31T17:44:59Z
|
2407.08125
|
Real-Time Summarization of Twitter
|
In this paper, we describe our approaches to TREC Real-Time Summarization of Twitter. We focus on the real-time push notification scenario, which requires a system to monitor the stream of sampled tweets and return the tweets that are relevant and novel to given interest profiles. Dirichlet scores with and with very little smoothing (baseline) are employed to classify whether a tweet is relevant to a given interest profile. Using metrics including Mean Average Precision (MAP), cumulative gain (CG), and discounted cumulative gain (DCG), the experiments indicate that our approach performs well. It is also desirable to remove redundant tweets from the push queue. Due to the precision limit, we only describe the algorithm in this paper.
|
http://arxiv.org/pdf/2407.08125v1
|
[
"Yixin Jin",
"Meiqi Wang",
"Meng Li",
"Wenjing Zhou",
"Yi Shen",
"Hao Liu"
] |
2024-07-11T01:56:31Z
|
2024-07-11T01:56:31Z
|
2401.00428
|
Training toward significance with the decorrelated event classifier
transformer neural network
|
Experimental particle physics uses machine learning for many tasks, where one application is to classify signal and background events. This classification can be used to bin an analysis region to enhance the expected significance for a mass resonance search. In natural language processing, one of the leading neural network architectures is the transformer. In this work, an event classifier transformer is proposed to bin an analysis region, in which the network is trained with special techniques. The techniques developed here can enhance the significance and reduce the correlation between the network's output and the reconstructed mass. It is found that this trained network can perform better than boosted decision trees and feed-forward networks.
|
http://arxiv.org/pdf/2401.00428v3
|
[
"Jaebak Kim"
] |
2024-07-11T01:50:17Z
|
2023-12-31T08:57:29Z
|
2401.10747
|
Multimodal Sentiment Analysis with Missing Modality: A
Knowledge-Transfer Approach
|
Multimodal sentiment analysis aims to identify the emotions expressed by individuals through visual, language, and acoustic cues. However, most of the existing research efforts assume that all modalities are available during both training and testing, making their algorithms susceptible to the missing modality scenario. In this paper, we propose a novel knowledge-transfer network to translate between different modalities to reconstruct the missing audio modalities. Moreover, we develop a cross-modality attention mechanism to retain the maximal information of the reconstructed and observed modalities for sentiment prediction. Extensive experiments on three publicly available datasets demonstrate significant improvements over baselines and achieve comparable results to the previous methods with complete multi-modality supervision.
|
http://arxiv.org/pdf/2401.10747v3
|
[
"Weide Liu",
"Huijing Zhan",
"Hao Chen",
"Fengmao Lv"
] |
2024-07-11T01:34:37Z
|
2023-12-28T06:47:18Z
|
2407.08112
|
How Well Can a Long Sequence Model Model Long Sequences? Comparing
Architectural Inductive Biases on Long-Context Abilities
|
Long sequences occur in abundance within real-world scenarios, hence properly modelling them opens numerous downstream use cases. Deep neural networks, however, have often struggled with these for a variety of reasons. Recent advances, both in system engineering as well as model design, have enabled the scaling up of models that are purported to support extended context lengths. In particular, the state-space and linear recurrent neural network families of models can hypothetically extend to infinite sequence length. However, is this too good to be true? We conduct an evaluation to show that while such claims may be sound theoretically, there remain large practical gaps that are empirically observed. In particular, recurrent models still suffer in the same settings as long-context LLMs with attention. We further show that different inductive biases have inconsistent extrapolation capabilities, highlighting the need to further study such paradigms and investigate why long-context models seemingly fail to behave as one might expect.
|
http://arxiv.org/pdf/2407.08112v1
|
[
"Jerry Huang"
] |
2024-07-11T01:08:39Z
|
2024-07-11T01:08:39Z
|
2407.08109
|
Urban Waterlogging Detection: A Challenging Benchmark and Large-Small
Model Co-Adapter
|
Urban waterlogging poses a major risk to public safety and infrastructure. Conventional methods using water-level sensors require high maintenance and can hardly achieve full coverage. Recent advances employ surveillance camera imagery and deep learning for detection, yet these struggle amidst scarce data and adverse environmental conditions. In this paper, we establish a challenging Urban Waterlogging Benchmark (UW-Bench) under diverse adverse conditions to advance real-world applications. We propose a Large-Small Model co-adapter paradigm (LSM-adapter), which harnesses the substantial generic segmentation potential of the large model and the specific task-directed guidance of the small model. Specifically, a Triple-S Prompt Adapter module alongside a Dynamic Prompt Combiner is proposed to generate and then merge multiple prompts for mask decoder adaptation. Meanwhile, a Histogram Equalization Adapter module is designed to infuse image-specific information for image encoder adaptation. Results and analysis show the challenge and superiority of our developed benchmark and algorithm. Project page: https://github.com/zhang-chenxu/LSM-Adapter
|
http://arxiv.org/pdf/2407.08109v1
|
[
"Suqi Song",
"Chenxu Zhang",
"Peng Zhang",
"Pengkun Li",
"Fenglong Song",
"Lei Zhang"
] |
2024-07-11T01:03:02Z
|
2024-07-11T01:03:02Z
|
2407.08108
|
CADC: Encoding User-Item Interactions for Compressing Recommendation
Model Training Data
|
Deep learning recommendation models (DLRMs) are at the heart of the current e-commerce industry. However, the amount of training data used to train these large models is growing exponentially, leading to substantial training hurdles. The training dataset contains two primary types of information: content-based information (features of users and items) and collaborative information (interactions between users and items). One approach to reduce the training dataset is to remove user-item interactions. But that significantly diminishes collaborative information, which is crucial for maintaining accuracy due to its inclusion of interaction histories. This loss profoundly impacts DLRM performance. This paper makes an important observation that if one can capture the user-item interaction history to enrich the user and item embeddings, then the interaction history can be compressed without losing model accuracy. Thus, this work, Collaborative Aware Data Compression (CADC), takes a two-step approach to training dataset compression. In the first step, we use matrix factorization of the user-item interaction matrix to create a novel embedding representation for both the users and items. Once the user and item embeddings are enriched by the interaction history information the approach then applies uniform random sampling of the training dataset to drastically reduce the training dataset size while minimizing model accuracy drop. The source code of CADC is available at https://anonymous.4open.science/r/DSS-RM-8C1D/README.md.
|
http://arxiv.org/pdf/2407.08108v1
|
[
"Hossein Entezari Zarch",
"Abdulla Alshabanah",
"Chaoyi Jiang",
"Murali Annavaram"
] |
2024-07-11T00:54:56Z
|
2024-07-11T00:54:56Z
|
2407.08107
|
Advanced Meta-Ensemble Machine Learning Models for Early and Accurate
Sepsis Prediction to Improve Patient Outcomes
|
Sepsis, a critical condition arising from the body's response to infection, poses a major global health crisis affecting all age groups. Timely detection and intervention are crucial for reducing healthcare expenses and improving patient outcomes. This paper examines the limitations of traditional sepsis screening tools like Systemic Inflammatory Response Syndrome, Modified Early Warning Score, and Quick Sequential Organ Failure Assessment, highlighting the need for advanced approaches. We propose using machine learning techniques - Random Forest, Extreme Gradient Boosting, and Decision Tree models - to predict sepsis onset. Our study evaluates these models individually and in a combined meta-ensemble approach using key metrics such as Accuracy, Precision, Recall, F1 score, and Area Under the Receiver Operating Characteristic Curve. Results show that the meta-ensemble model outperforms individual models, achieving an AUC-ROC score of 0.96, indicating superior predictive accuracy for early sepsis detection. The Random Forest model also performs well with an AUC-ROC score of 0.95, while Extreme Gradient Boosting and Decision Tree models score 0.94 and 0.90, respectively.
|
http://arxiv.org/pdf/2407.08107v1
|
[
"MohammadAmin Ansari Khoushabar",
"Parviz Ghafariasl"
] |
2024-07-11T00:51:32Z
|
2024-07-11T00:51:32Z
|
2310.00927
|
Understanding Transferable Representation Learning and Zero-shot
Transfer in CLIP
|
Multi-modal learning has become increasingly popular due to its ability to leverage information from different data sources (e.g., text and images) to improve the model performance. Recently, CLIP has emerged as an effective approach that employs vision-language contrastive pretraining to learn joint image and text representations and exhibits remarkable performance in zero-shot learning and text-guided natural image generation. Despite the huge practical success of CLIP, its theoretical understanding remains elusive. In this paper, we formally study transferable representation learning underlying CLIP and demonstrate how features from different modalities get aligned. We also analyze its zero-shot transfer performance on the downstream tasks. Inspired by our analysis, we propose a new CLIP-type approach, which achieves better performance than CLIP and other state-of-the-art methods on benchmark datasets.
|
http://arxiv.org/pdf/2310.00927v2
|
[
"Zixiang Chen",
"Yihe Deng",
"Yuanzhi Li",
"Quanquan Gu"
] |
2024-07-11T00:38:08Z
|
2023-10-02T06:41:30Z
|
2407.08100
|
Non-convergence of Adam and other adaptive stochastic gradient descent
optimization methods for non-vanishing learning rates
|
Deep learning algorithms - typically consisting of a class of deep neural networks trained by a stochastic gradient descent (SGD) optimization method - are nowadays the key ingredients in many artificial intelligence (AI) systems and have revolutionized our ways of working and living in modern societies. For example, SGD methods are used to train powerful large language models (LLMs) such as versions of ChatGPT and Gemini, SGD methods are employed to create successful generative AI based text-to-image creation models such as Midjourney, DALL-E, and Stable Diffusion, but SGD methods are also used to train DNNs to approximately solve scientific models such as partial differential equation (PDE) models from physics and biology and optimal control and stopping problems from engineering. It is known that the plain vanilla standard SGD method fails to converge even in the situation of several convex optimization problems if the learning rates are bounded away from zero. However, in many practically relevant training scenarios, often not the plain vanilla standard SGD method but instead adaptive SGD methods such as the RMSprop and the Adam optimizers, in which the learning rates are modified adaptively during the training process, are employed. This naturally raises the question of whether such adaptive optimizers converge in the situation of non-vanishing learning rates. In this work we answer this question negatively by proving that adaptive SGD methods such as the popular Adam optimizer fail to converge to any possible random limit point if the learning rates are asymptotically bounded away from zero. In our proof of this non-convergence result we establish suitable pathwise a priori bounds for a class of accelerated and adaptive SGD methods, which are also of independent interest.
|
http://arxiv.org/pdf/2407.08100v1
|
[
"Steffen Dereich",
"Robin Graeber",
"Arnulf Jentzen"
] |
2024-07-11T00:10:35Z
|
2024-07-11T00:10:35Z
|
2403.04221
|
Why Online Reinforcement Learning is Causal
|
Reinforcement learning (RL) and causal modelling naturally complement each other. The goal of causal modelling is to predict the effects of interventions in an environment, while the goal of reinforcement learning is to select interventions that maximize the rewards the agent receives from the environment. Reinforcement learning includes the two most powerful sources of information for estimating causal relationships: temporal ordering and the ability to act on an environment. This paper examines which reinforcement learning settings we can expect to benefit from causal modelling, and how. In online learning, the agent has the ability to interact directly with their environment, and learn from exploring it. Our main argument is that in online learning, conditional probabilities are causal, and therefore offline RL is the setting where causal learning has the most potential to make a difference. Essentially, the reason is that when an agent learns from their own experience, there are no unobserved confounders that influence both the agent's own exploratory actions and the rewards they receive. Our paper formalizes this argument. For offline RL, where an agent may and typically does learn from the experience of others, we describe previous and new methods for leveraging a causal model, including support for counterfactual queries.
|
http://arxiv.org/pdf/2403.04221v2
|
[
"Oliver Schulte",
"Pascal Poupart"
] |
2024-07-10T23:51:52Z
|
2024-03-07T04:49:48Z
|
2310.03716
|
A Long Way to Go: Investigating Length Correlations in RLHF
|
Great success has been reported using Reinforcement Learning from Human Feedback (RLHF) to align large language models, with open preference datasets enabling wider experimentation, particularly for "helpfulness" in tasks like dialogue and web question answering. Alongside these improvements, however, RLHF also often drives models to produce longer outputs. This paper demonstrates, on three diverse settings, that optimizing for response length is, much more than previously thought, a significant factor behind RLHF. Studying the strategies RL optimization uses to maximize reward, we find improvements in reward to largely be driven by increasing response length, instead of other features. Indeed, we find that even a purely length-based reward reproduces most downstream RLHF improvements over supervised fine-tuned models. Testing a comprehensive set of length-countering interventions, we identify the dominant source of these biases to be reward models, which, by studying training dynamics, we find are non-robust and easily influenced by length biases in preference data.
|
http://arxiv.org/pdf/2310.03716v2
|
[
"Prasann Singhal",
"Tanya Goyal",
"Jiacheng Xu",
"Greg Durrett"
] |
2024-07-10T23:15:49Z
|
2023-10-05T17:38:28Z
|
2407.08086
|
The GeometricKernels Package: Heat and Matérn Kernels for Geometric
Learning on Manifolds, Meshes, and Graphs
|
Kernels are a fundamental technical primitive in machine learning. In recent years, kernel-based methods such as Gaussian processes are becoming increasingly important in applications where quantifying uncertainty is of key interest. In settings that involve structured data defined on graphs, meshes, manifolds, or other related spaces, defining kernels with good uncertainty-quantification behavior, and computing their value numerically, is less straightforward than in the Euclidean setting. To address this difficulty, we present GeometricKernels, a software package which implements the geometric analogs of classical Euclidean squared exponential - also known as heat - and Matérn kernels, which are widely-used in settings where uncertainty is of key interest. As a byproduct, we obtain the ability to compute Fourier-feature-type expansions, which are widely used in their own right, on a wide set of geometric spaces. Our implementation supports automatic differentiation in every major current framework simultaneously via a backend-agnostic design. In this companion paper to the package and its documentation, we outline the capabilities of the package and present an illustrated example of its interface. We also include a brief overview of the theory the package is built upon and provide some historic context in the appendix.
|
http://arxiv.org/pdf/2407.08086v1
|
[
"Peter Mostowsky",
"Vincent Dutordoir",
"Iskander Azangulov",
"Noémie Jaquier",
"Michael John Hutchinson",
"Aditya Ravuri",
"Leonel Rozo",
"Alexander Terenin",
"Viacheslav Borovitskiy"
] |
2024-07-10T23:09:23Z
|
2024-07-10T23:09:23Z
|
2407.09571
|
ImPORTance -- Machine Learning-Driven Analysis of Global Port
Significance and Network Dynamics for Improved Operational Efficiency
|
Seaports play a crucial role in the global economy, and researchers have sought to understand their significance through various studies. In this paper, we aim to explore the common characteristics shared by important ports by analyzing the network of connections formed by vessel movement among them. To accomplish this task, we adopt a bottom-up network construction approach that combines three years' worth of AIS (Automatic Identification System) data from around the world, constructing a Ports Network that represents the connections between different ports. Through such representation, we use machine learning to measure the relative significance of different port features. Our model examined such features and revealed that geographical characteristics and the depth of the port are indicators of a port's significance to the Ports Network. Accordingly, this study employs a data-driven approach and utilizes machine learning to provide a comprehensive understanding of the factors contributing to ports' importance. The outcomes of our work aim to inform decision-making processes related to port development, resource allocation, and infrastructure planning in the industry.
|
http://arxiv.org/pdf/2407.09571v1
|
[
"Emanuele Carlini",
"Domenico Di Gangi",
"Vinicius Monteiro de Lira",
"Hanna Kavalionak",
"Gabriel Spadon",
"Amilcar Soares"
] |
2024-07-10T22:49:45Z
|
2024-07-10T22:49:45Z
|
2401.13098
|
Enhancing Global Maritime Traffic Network Forecasting with
Gravity-Inspired Deep Learning Models
|
Aquatic non-indigenous species (NIS) pose significant threats to biodiversity, disrupting ecosystems and inflicting substantial economic damages across agriculture, forestry, and fisheries. Due to the fast growth of global trade and transportation networks, NIS has been introduced and spread unintentionally in new environments. This study develops a new physics-informed model to forecast maritime shipping traffic between port regions worldwide. The predicted information provided by these models, in turn, is used as input for risk assessment of NIS spread through transportation networks to evaluate the capability of our solution. Inspired by the gravity model for international trades, our model considers various factors that influence the likelihood and impact of vessel activities, such as shipping flux density, distance between ports, trade flow, and centrality measures of transportation hubs. Accordingly, this paper introduces transformers to gravity models to rebuild the short- and long-term dependencies that make the risk analysis feasible. Thus, we introduce a physics-inspired framework that achieves an 89% binary accuracy for existing and non-existing trajectories and an 84.8% accuracy for the number of vessels flowing between key port areas, representing more than 10% improvement over the traditional deep-gravity model. Along these lines, this research contributes to a better understanding of NIS risk assessment. It allows policymakers, conservationists, and stakeholders to prioritize management actions by identifying high-risk invasion pathways. Besides, our model is versatile and can include new data sources, making it suitable for assessing international vessel traffic flow in a changing global landscape.
|
http://arxiv.org/pdf/2401.13098v3
|
[
"Ruixin Song",
"Gabriel Spadon",
"Ronald Pelot",
"Stan Matwin",
"Amilcar Soares"
] |
2024-07-10T22:33:58Z
|
2024-01-23T21:22:51Z
|
2407.08074
|
Smooth Like Butter: Evaluating Multi-Lattice Transitions in
Property-Augmented Latent Spaces
|
Additive manufacturing has revolutionized structural optimization by enhancing component strength and reducing material requirements. One approach used to achieve these improvements is the application of multi-lattice structures, where the macro-scale performance relies on the detailed design of mesostructural lattice elements. Many current approaches to designing such structures use data-driven design to generate multi-lattice transition regions, making use of machine learning models that are informed solely by the geometry of the mesostructures. However, it remains unclear if the integration of mechanical properties into the dataset used to train such machine learning models would be beneficial beyond using geometric data alone. To address this issue, this work implements and evaluates a hybrid geometry/property Variational Autoencoder (VAE) for generating multi-lattice transition regions. In our study, we found that hybrid VAEs demonstrate enhanced performance in maintaining stiffness continuity through transition regions, indicating their suitability for design tasks requiring smooth mechanical properties.
|
http://arxiv.org/abs/2407.08074v1
|
[
"Martha Baldwin",
"Nicholas A. Meisel",
"Christopher McComb"
] |
2024-07-10T22:28:13Z
|
2024-07-10T22:28:13Z
|
2407.08073
|
NDST: Neural Driving Style Transfer for Human-Like Vision-Based
Autonomous Driving
|
Autonomous Vehicles (AV) and Advanced Driver Assistant Systems (ADAS) prioritize safety over comfort. The intertwining factors of safety and comfort emerge as pivotal elements in ensuring the effectiveness of Autonomous Driving (AD). Users often experience discomfort when AV or ADAS drive the vehicle on their behalf. Providing a personalized human-like AD experience, tailored to match users' unique driving styles while adhering to safety prerequisites, presents a significant opportunity to boost the acceptance of AVs. This paper proposes a novel approach, Neural Driving Style Transfer (NDST), inspired by Neural Style Transfer (NST), to address this issue. NDST integrates a Personalized Block (PB) into the conventional Baseline Driving Model (BDM), allowing for the transfer of a user's unique driving style while adhering to safety parameters. The PB serves as a self-configuring system, learning and adapting to an individual's driving behavior without requiring modifications to the BDM. This approach enables the personalization of AV models, aligning the driving style more closely with user preferences while ensuring baseline safety critical actuation. Two contrasting driving styles (Style A and Style B) were used to validate the proposed NDST methodology, demonstrating its efficacy in transferring personal driving styles to the AV system. Our work highlights the potential of NDST to enhance user comfort in AVs by providing a personalized and familiar driving experience. The findings affirm the feasibility of integrating NDST into existing AV frameworks to bridge the gap between safety and individualized driving styles, promoting wider acceptance and improved user experiences.
|
http://arxiv.org/pdf/2407.08073v1
|
[
"Donghyun Kim",
"Aws Khalil",
"Haewoon Nam",
"Jaerock Kwon"
] |
2024-07-10T22:26:45Z
|
2024-07-10T22:26:45Z
|
2310.18948
|
Multi-Path Long-Term Vessel Trajectories Forecasting with Probabilistic
Feature Fusion for Problem Shifting
|
This paper addresses the challenge of boosting the precision of multi-path long-term vessel trajectory forecasting on engineered sequences of Automatic Identification System (AIS) data using feature fusion for problem shifting. We have developed a deep auto-encoder model and a phased framework approach to predict the next 12 hours of vessel trajectories using 1 to 3 hours of AIS data as input. To this end, we fuse the spatiotemporal features from the AIS messages with probabilistic features engineered from historical AIS data referring to potential routes and destinations. As a result, we reduce the forecasting uncertainty by shifting the problem into a trajectory reconstruction problem. The probabilistic features have an F1-Score of approximately 85% and 75% for the vessel route and destination prediction, respectively. Under such circumstances, we achieved an R2 Score of over 98% with different layer structures and varying feature combinations; the high R2 Score is a natural outcome of the well-defined shipping lanes in the study region. However, our proposal stands out among competing approaches as it demonstrates the capability of complex decision-making during turnings and route selection. Furthermore, we have shown that our model achieves more accurate forecasting with average and median errors of 11km and 6km, respectively, a 25% improvement from the current state-of-the-art approaches. The resulting model from this proposal is deployed as part of a broader Decision Support System to safeguard whales by preventing the risk of vessel-whale collisions under the smartWhales initiative and acting on the Gulf of St. Lawrence in Atlantic Canada.
|
http://arxiv.org/pdf/2310.18948v6
|
[
"Gabriel Spadon",
"Jay Kumar",
"Derek Eden",
"Josh van Berkel",
"Tom Foster",
"Amilcar Soares",
"Ronan Fablet",
"Stan Matwin",
"Ronald Pelot"
] |
2024-07-10T22:01:54Z
|
2023-10-29T09:15:22Z
|
2407.08065
|
Towards Interpretable Foundation Models of Robot Behavior: A Task
Specific Policy Generation Approach
|
Foundation models are a promising path toward general-purpose and user-friendly robots. The prevalent approach involves training a generalist policy that, like a reinforcement learning policy, uses observations to output actions. Although this approach has seen much success, several concerns arise when considering deployment and end-user interaction with these systems. In particular, the lack of modularity between tasks means that when model weights are updated (e.g., when a user provides feedback), the behavior in other, unrelated tasks may be affected. This can negatively impact the system's interpretability and usability. We present an alternative approach to the design of robot foundation models, Diffusion for Policy Parameters (DPP), which generates stand-alone, task-specific policies. Since these policies are detached from the foundation model, they are updated only when a user wants, either through feedback or personalization, allowing them to gain a high degree of familiarity with that policy. We demonstrate a proof-of-concept of DPP in simulation then discuss its limitations and the future of interpretable foundation models.
|
http://arxiv.org/pdf/2407.08065v1
|
[
"Isaac Sheidlower",
"Reuben Aronson",
"Elaine Schaertl Short"
] |
2024-07-10T21:55:44Z
|
2024-07-10T21:55:44Z
|
2407.08064
|
TinyGraph: Joint Feature and Node Condensation for Graph Neural Networks
|
Training graph neural networks (GNNs) on large-scale graphs can be challenging due to the high computational expense caused by the massive number of nodes and high-dimensional nodal features. Existing graph condensation studies tackle this problem only by reducing the number of nodes in the graph. However, the resulting condensed graph data can still be cumbersome. Specifically, although the nodes of the Citeseer dataset are reduced to 0.9% (30 nodes) in training, the number of features is 3,703, severely exceeding the training sample magnitude. Faced with this challenge, we study the problem of joint condensation for both features and nodes in large-scale graphs. This task is challenging mainly due to 1) the intertwined nature of the node features and the graph structure calls for the feature condensation solver to be structure-aware; and 2) the difficulty of keeping useful information in the condensed graph. To address these challenges, we propose a novel framework TinyGraph, to condense features and nodes simultaneously in graphs. Specifically, we cast the problem as matching the gradients of GNN weights trained on the condensed graph and the gradients obtained from training over the original graph, where the feature condensation is achieved by a trainable function. The condensed graph obtained by minimizing the matching loss along the training trajectory can henceforth retain critical information in the original graph. Extensive experiments were carried out to demonstrate the effectiveness of the proposed TinyGraph. For example, a GNN trained with TinyGraph retains 98.5% and 97.5% of the original test accuracy on the Cora and Citeseer datasets, respectively, while significantly reducing the number of nodes by 97.4% and 98.2%, and the number of features by 90.0% on both datasets.
|
http://arxiv.org/pdf/2407.08064v1
|
[
"Yezi Liu",
"Yanning Shen"
] |
2024-07-10T21:54:12Z
|
2024-07-10T21:54:12Z
|
2407.08056
|
Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences
|
Dealing with multi-task trade-offs during inference can be addressed via Pareto Front Learning (PFL) methods that parameterize the Pareto Front with a single model, contrary to traditional Multi-Task Learning (MTL) approaches that optimize for a single trade-off which has to be decided prior to training. However, recent PFL methodologies suffer from limited scalability, slow convergence and excessive memory requirements compared to MTL approaches while exhibiting inconsistent mappings from preference space to objective space. In this paper, we introduce PaLoRA, a novel parameter-efficient method that augments the original model with task-specific low-rank adapters and continuously parameterizes the Pareto Front in their convex hull. Our approach dedicates the original model and the adapters towards learning general and task-specific features, respectively. Additionally, we propose a deterministic sampling schedule of preference vectors that reinforces this division of labor, enabling faster convergence and scalability to real world networks. Our experimental results show that PaLoRA outperforms MTL and PFL baselines across various datasets, scales to large networks and provides a continuous parameterization of the Pareto Front, reducing the memory overhead $23.8-31.7$ times compared with competing PFL baselines in scene understanding benchmarks.
|
http://arxiv.org/pdf/2407.08056v1
|
[
"Nikolaos Dimitriadis",
"Pascal Frossard",
"Francois Fleuret"
] |
2024-07-10T21:25:51Z
|
2024-07-10T21:25:51Z
|
2407.08044
|
RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective
Weight-Activation Quantization
|
Low-Rank Adaptation (LoRA), as a representative Parameter-Efficient Fine-Tuning (PEFT) method, significantly enhances the training efficiency by updating only a small portion of the weights in Large Language Models (LLMs). Recently, weight-only quantization techniques have also been applied to LoRA methods to reduce the memory footprint of fine-tuning. However, applying weight-activation quantization to the LoRA pipeline is under-explored, and we observe substantial performance degradation primarily due to the presence of activation outliers. In this work, we propose RoLoRA, the first LoRA-based scheme for effective weight-activation quantization. RoLoRA utilizes rotation for outlier elimination and proposes rotation-aware fine-tuning to preserve the outlier-free characteristics in rotated LLMs. Experimental results show RoLoRA consistently improves low-bit LoRA convergence and post-training quantization robustness in weight-activation settings. We evaluate RoLoRA across LLaMA2-7B/13B, LLaMA3-8B models, achieving up to 29.5% absolute accuracy gain of 4-bit weight-activation quantized LLaMA2-13B on commonsense reasoning tasks compared to LoRA baseline. We further demonstrate its effectiveness on Large Multimodal Models (LLaVA-1.5-7B). Codes are available at https://github.com/HuangOwen/RoLoRA
|
http://arxiv.org/pdf/2407.08044v1
|
[
"Xijie Huang",
"Zechun Liu",
"Shih-Yang Liu",
"Kwang-Ting Cheng"
] |
2024-07-10T20:52:18Z
|
2024-07-10T20:52:18Z
|
2407.05625
|
New User Event Prediction Through the Lens of Causal Inference
|
Modeling and analysis for event series generated by heterogeneous users of various behavioral patterns are closely involved in our daily lives, including credit card fraud detection, online platform user recommendation, and social network analysis. The most commonly adopted approach to this task is to classify users into behavior-based categories and analyze each of them separately. However, this approach requires extensive data to fully understand user behavior, presenting challenges in modeling newcomers without historical knowledge. In this paper, we propose a novel discrete event prediction framework for new users through the lens of causal inference. Our method offers an unbiased prediction for new users without needing to know their categories. We treat the user event history as the "treatment" for future events and the user category as the key confounder. Thus, the prediction problem can be framed as counterfactual outcome estimation, with the new user model trained on an adjusted dataset where each event is re-weighted by its inverse propensity score. We demonstrate the superior performance of the proposed framework with a numerical simulation study and two real-world applications, including Netflix rating prediction and seller contact prediction for customer support at Amazon.
|
http://arxiv.org/pdf/2407.05625v2
|
[
"Henry Shaowu Yuchi",
"Shixiang Zhu",
"Li Dong",
"Yigit M. Arisoy",
"Matthew C. Spencer"
] |
2024-07-10T20:44:39Z
|
2024-07-08T05:35:54Z
|
2407.08029
|
A Critical Review of Causal Reasoning Benchmarks for Large Language
Models
|
Numerous benchmarks aim to evaluate the capabilities of Large Language Models (LLMs) for causal inference and reasoning. However, many of them can likely be solved through the retrieval of domain knowledge, questioning whether they achieve their purpose. In this review, we present a comprehensive overview of LLM benchmarks for causality. We highlight how recent benchmarks move towards a more thorough definition of causal reasoning by incorporating interventional or counterfactual reasoning. We derive a set of criteria that a useful benchmark or set of benchmarks should aim to satisfy. We hope this work will pave the way towards a general framework for the assessment of causal understanding in LLMs and the design of novel benchmarks.
|
http://arxiv.org/pdf/2407.08029v1
|
[
"Linying Yang",
"Vik Shirvaikar",
"Oscar Clivio",
"Fabian Falck"
] |
2024-07-10T20:11:51Z
|
2024-07-10T20:11:51Z
|
2407.08022
|
Deep Reinforcement Learning for Sequential Combinatorial Auctions
|
Revenue-optimal auction design is a challenging problem with significant theoretical and practical implications. Sequential auction mechanisms, known for their simplicity and strong strategyproofness guarantees, are often limited by theoretical results that are largely existential, except for certain restrictive settings. Although traditional reinforcement learning methods such as Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) are applicable in this domain, they struggle with computational demands and convergence issues when dealing with large and continuous action spaces. In light of this, and recognizing that transitions can be modeled differentiably in our settings, we propose using a new reinforcement learning framework tailored for sequential combinatorial auctions that leverages first-order gradients. Our extensive evaluations show that our approach achieves significant improvement in revenue over both analytical baselines and standard reinforcement learning algorithms. Furthermore, we scale our approach to scenarios involving up to 50 agents and 50 items, demonstrating its applicability in complex, real-world auction settings. As such, this work advances the computational tools available for auction design and contributes to bridging the gap between theoretical results and practical implementations in sequential auction design.
|
http://arxiv.org/pdf/2407.08022v1
|
[
"Sai Srivatsa Ravindranath",
"Zhe Feng",
"Di Wang",
"Manzil Zaheer",
"Aranyak Mehta",
"David C. Parkes"
] |
2024-07-10T20:00:22Z
|
2024-07-10T20:00:22Z
|
2407.08010
|
A New Self-organizing Interval Type-2 Fuzzy Neural Network for
Multi-Step Time Series Prediction
|
This paper proposes a new self-organizing interval type-2 fuzzy neural network with multiple outputs (SOIT2FNN-MO) for multi-step time series prediction. Differing from the traditional six-layer IT2FNN, a nine-layer network is developed to improve prediction accuracy, uncertainty handling and model interpretability. First, a new co-antecedent layer and a modified consequent layer are devised to improve the interpretability of the fuzzy model for multi-step predictions. Second, a new transformation layer is designed to address the potential issues of vanishing rule firing strengths caused by high-dimensional inputs. Third, a new link layer is proposed to build temporal connections between multi-step predictions. Furthermore, a two-stage self-organizing mechanism is developed to automatically generate the fuzzy rules, in which the first stage is used to create the rule base from scratch and perform the initial optimization, while the second stage is to fine-tune all network parameters. Finally, various simulations are carried out on chaotic and microgrid time series prediction problems, demonstrating the superiority of our approach in terms of prediction accuracy, uncertainty handling and model interpretability.
|
http://arxiv.org/pdf/2407.08010v1
|
[
"Fulong Yao",
"Wanqing Zhao",
"Matthew Forshaw",
"Yang Song"
] |
2024-07-10T19:35:44Z
|
2024-07-10T19:35:44Z
|
2407.08003
|
Machine Learning for ALSFRS-R Score Prediction: Making Sense of the
Sensor Data
|
Amyotrophic Lateral Sclerosis (ALS) is characterized as a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options in the realm of medical interventions and therapies. The disease showcases a diverse range of onset patterns and progression trajectories, emphasizing the critical importance of early detection of functional decline to enable tailored care strategies and timely therapeutic interventions. The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app. This data is used to construct various machine learning models specifically designed to forecast the advancement of the ALS Functional Rating Scale-Revised (ALSFRS-R) score, leveraging the dataset provided by the organizers. In our analysis, multiple predictive models were evaluated to determine their efficacy in handling ALS sensor data. The temporal aspect of the sensor data was compressed and amalgamated using statistical methods, thereby augmenting the interpretability and applicability of the gathered information for predictive modeling objectives. The models that demonstrated optimal performance were a naive baseline and ElasticNet regression. The naive model achieved a Mean Absolute Error (MAE) of 0.20 and a Root Mean Square Error (RMSE) of 0.49, slightly outperforming the ElasticNet model, which recorded an MAE of 0.22 and an RMSE of 0.50. Our comparative analysis suggests that while the naive approach yielded marginally better predictive accuracy, the ElasticNet model provides a robust framework for understanding feature contributions.
|
http://arxiv.org/pdf/2407.08003v1
|
[
"Ritesh Mehta",
"Aleksandar Pramov",
"Shashank Verma"
] |
2024-07-10T19:17:23Z
|
2024-07-10T19:17:23Z
|
2403.02178
|
Masked Thought: Simply Masking Partial Reasoning Steps Can Improve
Mathematical Reasoning Learning of Language Models
|
In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains. Earlier fine-tuning approaches sought to mitigate this by leveraging more precise supervisory signals from human labeling, larger models, or self-sampling, although at a high cost. Conversely, we develop a method that avoids external resources, relying instead on introducing perturbations to the input. Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks. When applied to fine-tuning with GSM8K on Llama-2-7B, this method achieved a 5% improvement in GSM8K accuracy and a 10% improvement in GSM-IC accuracy over standard supervised fine-tuning with only a few lines of code modified. Furthermore, it is complementary to existing methods. When integrated with related explicit data augmentation methods, it leads to improvements across five datasets of various augmentation methods, as well as two different base models. We further investigate the mechanisms behind this improvement through case studies and quantitative analysis, suggesting that our approach may provide superior support for the model in capturing long-distance dependencies, especially those related to questions. This enhancement could deepen understanding of the premises in questions and prior steps. Our code is available at Github.
|
http://arxiv.org/pdf/2403.02178v2
|
[
"Changyu Chen",
"Xiting Wang",
"Ting-En Lin",
"Ang Lv",
"Yuchuan Wu",
"Xin Gao",
"Ji-Rong Wen",
"Rui Yan",
"Yongbin Li"
] |
2024-07-10T19:15:24Z
|
2024-03-04T16:21:54Z
|
2309.05072
|
Uncertainty-Aware Probabilistic Graph Neural Networks for Road-Level
Traffic Accident Prediction
|
Traffic accidents present substantial challenges to human safety and socio-economic development in urban areas. Developing a reliable and responsible traffic accident prediction model is crucial to addressing growing public safety concerns and enhancing the safety of urban mobility systems. Traditional methods face limitations at fine spatiotemporal scales due to the sporadic nature of high-risk accidents and the predominance of non-accident characteristics. Furthermore, while most current models show promising occurrence prediction, they overlook the uncertainties arising from the inherent nature of accidents, and then fail to adequately map the hierarchical ranking of accident risk values for more precise insights. To address these issues, we introduce the Spatiotemporal Zero-Inflated Tweedie Graph Neural Network (STZITDGNN) -- the first uncertainty-aware probabilistic graph deep learning model for multi-step, road-level traffic accident prediction. This model integrates the interpretability of the statistical Tweedie family model and the expressive power of graph neural networks. Its decoder innovatively employs a compound Tweedie model: a Poisson distribution to model the frequency of accident occurrences and a Gamma distribution to assess injury severity, supplemented by a zero-inflated component to effectively identify excessive non-incident instances. Empirical tests using real-world traffic data from London, UK, demonstrate that the STZITDGNN surpasses other baseline models across multiple benchmarks and metrics, including accident risk value prediction, uncertainty minimisation, non-accident road identification and accident occurrence accuracy. Our study demonstrates that STZITDGNN can effectively inform targeted road monitoring, thereby improving urban road safety strategies.
|
http://arxiv.org/pdf/2309.05072v3
|
[
"Xiaowei Gao",
"Xinke Jiang",
"Dingyi Zhuang",
"Huanfa Chen",
"Shenhao Wang",
"Stephen Law",
"James Haworth"
] |
2024-07-10T19:05:37Z
|
2023-09-10T16:35:47Z
|
2407.07998
|
What's the score? Automated Denoising Score Matching for Nonlinear
Diffusions
|
Reversing a diffusion process by learning its score forms the heart of diffusion-based generative modeling and for estimating properties of scientific systems. The diffusion processes that are tractable center on linear processes with a Gaussian stationary distribution. This limits the kinds of models that can be built to those that target a Gaussian prior or more generally limits the kinds of problems that can be generically solved to those that have conditionally linear score functions. In this work, we introduce a family of tractable denoising score matching objectives, called local-DSM, built using local increments of the diffusion process. We show how local-DSM melded with Taylor expansions enables automated training and score estimation with nonlinear diffusion processes. To demonstrate these ideas, we use automated-DSM to train generative models using non-Gaussian priors on challenging low dimensional distributions and the CIFAR10 image dataset. Additionally, we use the automated-DSM to learn the scores for nonlinear processes studied in statistical physics.
|
http://arxiv.org/pdf/2407.07998v1
|
[
"Raghav Singhal",
"Mark Goldstein",
"Rajesh Ranganath"
] |
2024-07-10T19:02:19Z
|
2024-07-10T19:02:19Z
|
2407.07997
|
ICD Codes are Insufficient to Create Datasets for Machine Learning: An
Evaluation Using All of Us Data for Coccidioidomycosis and Myocardial
Infarction
|
In medicine, machine learning (ML) datasets are often built using the International Classification of Diseases (ICD) codes. As new models are being developed, there is a need for larger datasets. However, ICD codes are intended for billing. We aim to determine how suitable ICD codes are for creating datasets to train ML models. We focused on a rare and common disease using the All of Us database. First, we compared the patient cohort created using ICD codes for Valley fever (coccidioidomycosis, CM) with that identified via serological confirmation. Second, we compared two similarly created patient cohorts for myocardial infarction (MI) patients. We identified significant discrepancies between these two groups, and the patient overlap was small. The CM cohort had 811 patients in the ICD-10 group, 619 patients in the positive-serology group, and 24 with both. The MI cohort had 14,875 patients in the ICD-10 group, 23,598 in the MI laboratory-confirmed group, and 6,531 in both. Demographics, rates of disease symptoms, and other clinical data varied across our case study cohorts.
|
http://arxiv.org/pdf/2407.07997v1
|
[
"Abigail E. Whitlock",
"Gondy Leroy",
"Fariba M. Donovan",
"John N. Galgiani"
] |
2024-07-10T19:02:11Z
|
2024-07-10T19:02:11Z
|
2406.07767
|
Conformalized Teleoperation: Confidently Mapping Human Inputs to
High-Dimensional Robot Actions
|
Assistive robotic arms often have more degrees-of-freedom than a human teleoperator can control with a low-dimensional input, like a joystick. To overcome this challenge, existing approaches use data-driven methods to learn a mapping from low-dimensional human inputs to high-dimensional robot actions. However, determining if such a black-box mapping can confidently infer a user's intended high-dimensional action from low-dimensional inputs remains an open problem. Our key idea is to adapt the assistive map at training time to additionally estimate high-dimensional action quantiles, and then calibrate these quantiles via rigorous uncertainty quantification methods. Specifically, we leverage adaptive conformal prediction which adjusts the intervals over time, reducing the uncertainty bounds when the mapping is performant and increasing the bounds when the mapping consistently mis-predicts. Furthermore, we propose an uncertainty-interval-based mechanism for detecting high-uncertainty user inputs and robot states. We evaluate the efficacy of our proposed approach in a 2D assistive navigation task and two 7DOF Kinova Jaco tasks involving assistive cup grasping and goal reaching. Our findings demonstrate that conformalized assistive teleoperation manages to detect (but not differentiate between) high uncertainty induced by diverse preferences and induced by low-precision trajectories in the mapping's training dataset. On the whole, we see this work as a key step towards enabling robots to quantify their own uncertainty and proactively seek intervention when needed.
|
http://arxiv.org/pdf/2406.07767v2
|
[
"Michelle Zhao",
"Reid Simmons",
"Henny Admoni",
"Andrea Bajcsy"
] |
2024-07-10T18:34:05Z
|
2024-06-11T23:16:46Z
|
2407.07982
|
Automating Weak Label Generation for Data Programming with Clinicians in
the Loop
|
Large Deep Neural Networks (DNNs) are often data hungry and need high-quality labeled data in copious amounts for learning to converge. This is a challenge in the field of medicine since high quality labeled data is often scarce. Data programming has been the ray of hope in this regard, since it allows us to label unlabeled data using multiple weak labeling functions. Such functions are often supplied by a domain expert. Data-programming can combine multiple weak labeling functions and suggest labels better than simple majority voting over the different functions. However, it is not straightforward to express such weak labeling functions, especially in high-dimensional settings such as images and time-series data. What we propose in this paper is a way to bypass this issue, using distance functions. In high-dimensional spaces, it is easier to find meaningful distance metrics which can generalize across different labeling tasks. We propose an algorithm that queries an expert for labels of a few representative samples of the dataset. These samples are carefully chosen by the algorithm to capture the distribution of the dataset. The labels assigned by the expert on the representative subset induce a labeling on the full dataset, thereby generating weak labels to be used in the data programming pipeline. In our medical time series case study, labeling a subset of 50 to 130 out of 3,265 samples showed 17-28% improvement in accuracy and 13-28% improvement in F1 over the baseline using clinician-defined labeling functions. In our medical image case study, labeling a subset of about 50 to 120 images from 6,293 unlabeled medical images using our approach showed significant improvement over the baseline method, Snuba, with an increase of approximately 5-15% in accuracy and 12-19% in F1 score.
|
http://arxiv.org/pdf/2407.07982v1
|
[
"Jean Park",
"Sydney Pugh",
"Kaustubh Sridhar",
"Mengyu Liu",
"Navish Yarna",
"Ramneet Kaur",
"Souradeep Dutta",
"Elena Bernardis",
"Oleg Sokolsky",
"Insup Lee"
] |
2024-07-10T18:29:22Z
|
2024-07-10T18:29:22Z
|
2407.07972
|
Deconstructing What Makes a Good Optimizer for Language Models
|
Training language models becomes increasingly expensive with scale, prompting numerous attempts to improve optimization efficiency. Despite these efforts, the Adam optimizer remains the most widely used, due to a prevailing view that it is the most effective approach. We aim to compare several optimization algorithms, including SGD, Adafactor, Adam, and Lion, in the context of autoregressive language modeling across a range of model sizes, hyperparameters, and architecture variants. Our findings indicate that, except for SGD, these algorithms all perform comparably both in their optimal performance and also in terms of how they fare across a wide range of hyperparameter choices. Our results suggest to practitioners that the choice of optimizer can be guided by practical considerations like memory constraints and ease of implementation, as no single algorithm emerged as a clear winner in terms of performance or stability to hyperparameter misspecification. Given our findings, we further dissect these approaches, examining two simplified versions of Adam: a) signed momentum (Signum) which we see recovers both the performance and hyperparameter stability of Adam and b) Adalayer, a layerwise variant of Adam which we introduce to study Adam's preconditioning. Examining Adalayer leads us to the conclusion that the largest impact of Adam's preconditioning is restricted to the last layer and LayerNorm parameters, and, perhaps surprisingly, the remaining layers can be trained with SGD.
|
http://arxiv.org/pdf/2407.07972v1
|
[
"Rosie Zhao",
"Depen Morwani",
"David Brandfonbrener",
"Nikhil Vyas",
"Sham Kakade"
] |
2024-07-10T18:11:40Z
|
2024-07-10T18:11:40Z
|
2407.07896
|
Pentagonal Photonic Crystal Mirrors: Scalable Lightsails with Enhanced
Acceleration via Neural Topology Optimization
|
The Starshot Breakthrough Initiative aims to send one-gram microchip probes to Alpha Centauri within 20 years, using gram-scale lightsails propelled by laser-based radiation pressure, reaching velocities nearing a fifth of light speed. This mission requires lightsail materials that challenge the fundamentals of nanotechnology, requiring innovations in optics, material science and structural engineering. Unlike the microchip payload, which must be minimized in every dimension, such lightsails need meter-scale dimensions with nanoscale thickness and billions of nanoscale holes to enhance reflectivity and reduce mass. Our study employs neural topology optimization, revealing a novel pentagonal lattice-based photonic crystal (PhC) reflector. The optimized designs shorten acceleration times, therefore lowering launch costs significantly. Crucially, these designs also enable lightsail material fabrication with orders-of-magnitude reduction in costs. We have fabricated a 60 x 60 mm$^2$, 200nm thick, single-layer reflector perforated with over a billion nanoscale features; the highest aspect-ratio nanophotonic element to date. We achieve this with nearly 9,000 times cost reduction per m$^2$. Starshot lightsails will have several stringent requirements but will ultimately be driven by costs to build at scale. Here we highlight challenges and possible solutions in developing lightsail materials - showcasing the potential of scaling nanophotonics for cost-effective next-generation space exploration.
|
http://arxiv.org/pdf/2407.07896v1
|
[
"L. Norder",
"S. Yin",
"M. J. de Jong",
"F. Stallone",
"H. Aydogmus",
"P. M. Sberna",
"M. A. Bessa",
"R. A. Norte"
] |
2024-07-10T17:59:55Z
|
2024-07-10T17:59:55Z
|
2407.07895
|
LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large
Multimodal Models
|
Visual instruction tuning has made considerable strides in enhancing the capabilities of Large Multimodal Models (LMMs). However, existing open LMMs largely focus on single-image tasks, while their application to multi-image scenarios remains less explored. Additionally, prior LMM research separately tackles different scenarios, making it impossible to generalize across scenarios with new emerging capabilities. To this end, we introduce LLaVA-NeXT-Interleave, which simultaneously tackles Multi-image, Multi-frame (video), Multi-view (3D), and Multi-patch (single-image) scenarios in LMMs. To enable these capabilities, we regard the interleaved data format as a general template and compile the M4-Instruct dataset with 1,177.6k samples, spanning 4 primary domains with 14 tasks and 41 datasets. We also curate the LLaVA-Interleave Bench to comprehensively evaluate the multi-image performance of LMMs. Through extensive experiments, LLaVA-NeXT-Interleave achieves leading results in multi-image, video, and 3D benchmarks, while maintaining the performance of single-image tasks. Besides, our model also exhibits several emerging capabilities, e.g., transferring tasks across different settings and modalities. Code is available at https://github.com/LLaVA-VL/LLaVA-NeXT
|
http://arxiv.org/pdf/2407.07895v1
|
[
"Feng Li",
"Renrui Zhang",
"Hao Zhang",
"Yuanhan Zhang",
"Bo Li",
"Wei Li",
"Zejun Ma",
"Chunyuan Li"
] |
2024-07-10T17:59:43Z
|
2024-07-10T17:59:43Z
|
2406.06609
|
Mitigating Bias in Dataset Distillation
|
Dataset Distillation has emerged as a technique for compressing large datasets into smaller synthetic counterparts, facilitating downstream training tasks. In this paper, we study the impact of bias inside the original dataset on the performance of dataset distillation. With a comprehensive empirical evaluation on canonical datasets with color, corruption and background biases, we found that color and background biases in the original dataset will be amplified through the distillation process, resulting in a notable decline in the performance of models trained on the distilled dataset, while corruption bias is suppressed through the distillation process. To reduce bias amplification in dataset distillation, we introduce a simple yet highly effective approach based on a sample reweighting scheme utilizing kernel density estimation. Empirical results on multiple real-world and synthetic datasets demonstrate the effectiveness of the proposed method. Notably, on CMNIST with 5% bias-conflict ratio and IPC 50, our method achieves 91.5% test accuracy compared to 23.8% from vanilla DM, boosting the performance by 67.7%, whereas applying state-of-the-art debiasing method on the same dataset only achieves 53.7% accuracy. Our findings highlight the importance of addressing biases in dataset distillation and provide a promising avenue to address bias amplification in the process.
|
http://arxiv.org/pdf/2406.06609v2
|
[
"Justin Cui",
"Ruochen Wang",
"Yuanhao Xiong",
"Cho-Jui Hsieh"
] |
2024-07-10T17:58:14Z
|
2024-06-06T18:52:28Z
|
2407.07890
|
Training on the Test Task Confounds Evaluation and Emergence
|
We study a fundamental problem in the evaluation of large language models that we call training on the test task. Unlike wrongful practices like training on the test data, leakage, or data contamination, training on the test task is not a malpractice. Rather, the term describes a growing set of techniques to include task-relevant data in the pretraining stage of a language model. We demonstrate that training on the test task confounds both relative model evaluations and claims about emergent capabilities. We argue that the seeming superiority of one model family over another may be explained by a different degree of training on the test task. To this end, we propose an effective method to adjust for training on the test task by fine-tuning each model under comparison on the same task-relevant data before evaluation. We then show that instances of emergent behavior largely vanish once we adjust for training on the test task. This also applies to reported instances of emergent behavior that cannot be explained by the choice of evaluation metric. Our work promotes a new perspective on the evaluation of large language models with broad implications for benchmarking and the study of emergent capabilities.
|
http://arxiv.org/pdf/2407.07890v1
|
[
"Ricardo Dominguez-Olmedo",
"Florian E. Dorner",
"Moritz Hardt"
] |
2024-07-10T17:57:58Z
|
2024-07-10T17:57:58Z
|
2407.07889
|
AdaptiGraph: Material-Adaptive Graph-Based Neural Dynamics for Robotic
Manipulation
|
Predictive models are a crucial component of many robotic systems. Yet, constructing accurate predictive models for a variety of deformable objects, especially those with unknown physical properties, remains a significant challenge. This paper introduces AdaptiGraph, a learning-based dynamics modeling approach that enables robots to predict, adapt to, and control a wide array of challenging deformable materials with unknown physical properties. AdaptiGraph leverages the highly flexible graph-based neural dynamics (GBND) framework, which represents material bits as particles and employs a graph neural network (GNN) to predict particle motion. Its key innovation is a unified physical property-conditioned GBND model capable of predicting the motions of diverse materials with varying physical properties without retraining. Upon encountering new materials during online deployment, AdaptiGraph utilizes a physical property optimization process for a few-shot adaptation of the model, enhancing its fit to the observed interaction data. The adapted models can precisely simulate the dynamics and predict the motion of various deformable materials, such as ropes, granular media, rigid boxes, and cloth, while adapting to different physical properties, including stiffness, granular size, and center of pressure. On prediction and manipulation tasks involving a diverse set of real-world deformable objects, our method exhibits superior prediction accuracy and task proficiency over non-material-conditioned and non-adaptive models. The project page is available at https://robopil.github.io/adaptigraph/ .
|
http://arxiv.org/pdf/2407.07889v1
|
[
"Kaifeng Zhang",
"Baoyu Li",
"Kris Hauser",
"Yunzhu Li"
] |
2024-07-10T17:57:04Z
|
2024-07-10T17:57:04Z
|
2407.07885
|
Learning In-Hand Translation Using Tactile Skin With Shear and Normal
Force Sensing
|
Recent progress in reinforcement learning (RL) and tactile sensing has significantly advanced dexterous manipulation. However, these methods often utilize simplified tactile signals due to the gap between tactile simulation and the real world. We introduce a sensor model for tactile skin that enables zero-shot sim-to-real transfer of ternary shear and binary normal forces. Using this model, we develop an RL policy that leverages sliding contact for dexterous in-hand translation. We conduct extensive real-world experiments to assess how tactile sensing facilitates policy adaptation to various unseen object properties and robot hand orientations. We demonstrate that our 3-axis tactile policies consistently outperform baselines that use only shear forces, only normal forces, or only proprioception. Website: https://jessicayin.github.io/tactile-skin-rl/
|
http://arxiv.org/pdf/2407.07885v1
|
[
"Jessica Yin",
"Haozhi Qi",
"Jitendra Malik",
"James Pikul",
"Mark Yim",
"Tess Hellebrekers"
] |
2024-07-10T17:52:30Z
|
2024-07-10T17:52:30Z
|
2407.07884
|
Vegetable Peeling: A Case Study in Constrained Dexterous Manipulation
|
Recent studies have made significant progress in addressing dexterous manipulation problems, particularly in in-hand object reorientation. However, there are few existing works that explore the potential utilization of developed dexterous manipulation controllers for downstream tasks. In this study, we focus on constrained dexterous manipulation for food peeling. Food peeling presents various constraints on the reorientation controller, such as the requirement for the hand to securely hold the object after reorientation for peeling. We propose a simple system for learning a reorientation controller that facilitates the subsequent peeling task. Videos are available at: https://taochenshh.github.io/projects/veg-peeling.
|
http://arxiv.org/pdf/2407.07884v1
|
[
"Tao Chen",
"Eric Cousineau",
"Naveen Kuppuswamy",
"Pulkit Agrawal"
] |
2024-07-10T17:51:33Z
|
2024-07-10T17:51:33Z
|
2407.07880
|
Towards Robust Alignment of Language Models: Distributionally
Robustifying Direct Preference Optimization
|
This study addresses the challenge of noise in training datasets for Direct Preference Optimization (DPO), a method for aligning Large Language Models (LLMs) with human preferences. We categorize noise into pointwise noise, which includes low-quality data points, and pairwise noise, which encompasses erroneous data pair associations that affect preference rankings. Utilizing Distributionally Robust Optimization (DRO), we enhance DPO's resilience to these types of noise. Our theoretical insights reveal that DPO inherently embeds DRO principles, conferring robustness to pointwise noise, with the regularization coefficient $\beta$ playing a critical role in its noise resistance. Extending this framework, we introduce Distributionally Robustifying DPO (Dr. DPO), which integrates pairwise robustness by optimizing against worst-case pairwise scenarios. The novel hyperparameter $\beta'$ in Dr. DPO allows for fine-tuned control over data pair reliability, providing a strategic balance between exploration and exploitation in noisy training environments. Empirical evaluations demonstrate that Dr. DPO substantially improves the quality of generated text and response accuracy in preference datasets, showcasing enhanced performance in both noisy and noise-free settings. The code is available at https://github.com/junkangwu/Dr_DPO.
|
http://arxiv.org/pdf/2407.07880v1
|
[
"Junkang Wu",
"Yuexiang Xie",
"Zhengyi Yang",
"Jiancan Wu",
"Jiawei Chen",
"Jinyang Gao",
"Bolin Ding",
"Xiang Wang",
"Xiangnan He"
] |
2024-07-10T17:48:25Z
|
2024-07-10T17:48:25Z
|
2407.07875
|
Generative Image as Action Models
|
Image-generation diffusion models have been fine-tuned to unlock new capabilities such as image-editing and novel view synthesis. Can we similarly unlock image-generation models for visuomotor control? We present GENIMA, a behavior-cloning agent that fine-tunes Stable Diffusion to 'draw joint-actions' as targets on RGB images. These images are fed into a controller that maps the visual targets into a sequence of joint-positions. We study GENIMA on 25 RLBench and 9 real-world manipulation tasks. We find that, by lifting actions into image-space, internet pre-trained diffusion models can generate policies that outperform state-of-the-art visuomotor approaches, especially in robustness to scene perturbations and generalizing to novel objects. Our method is also competitive with 3D agents, despite lacking priors such as depth, keypoints, or motion-planners.
|
http://arxiv.org/pdf/2407.07875v1
|
[
"Mohit Shridhar",
"Yat Long Lo",
"Stephen James"
] |
2024-07-10T17:41:10Z
|
2024-07-10T17:41:10Z
|
2407.07873
|
Dynamical Measure Transport and Neural PDE Solvers for Sampling
|
The task of sampling from a probability density can be approached as transporting a tractable density function to the target, known as dynamical measure transport. In this work, we tackle it through a principled unified framework using deterministic or stochastic evolutions described by partial differential equations (PDEs). This framework incorporates prior trajectory-based sampling methods, such as diffusion models or Schrödinger bridges, without relying on the concept of time-reversals. Moreover, it allows us to propose novel numerical methods for solving the transport task and thus sampling from complicated targets without the need for the normalization constant or data samples. We employ physics-informed neural networks (PINNs) to approximate the respective PDE solutions, implying both conceptual and computational advantages. In particular, PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently, leading to significantly better mode coverage in the sampling task compared to alternative methods. Moreover, they can readily be fine-tuned with Gauss-Newton methods to achieve high accuracy in sampling.
|
http://arxiv.org/pdf/2407.07873v1
|
[
"Jingtong Sun",
"Julius Berner",
"Lorenz Richter",
"Marius Zeinhofer",
"Johannes Müller",
"Kamyar Azizzadenesheli",
"Anima Anandkumar"
] |
2024-07-10T17:39:50Z
|
2024-07-10T17:39:50Z
|
2310.05615
|
Adaptive Multi-head Contrastive Learning
|
In contrastive learning, two views of an original image, generated by different augmentations, are considered a positive pair, and their similarity is required to be high. Similarly, two views of distinct images form a negative pair, and their similarity is encouraged to be low. Typically, a single similarity measure, provided by a lone projection head, evaluates positive and negative sample pairs. However, due to diverse augmentation strategies and varying intra-sample similarity, views from the same image may not always be similar. Additionally, owing to inter-sample similarity, views from different images may be more akin than those from the same image. Consequently, enforcing high similarity for positive pairs and low similarity for negative pairs may be unattainable, and in some cases, such enforcement could detrimentally impact performance. To address this challenge, we propose using multiple projection heads, each producing a distinct set of features. Our pre-training loss function emerges from a solution to the maximum likelihood estimation over head-wise posterior distributions of positive samples given observations. This loss incorporates the similarity measure over positive and negative pairs, each re-weighted by an individual adaptive temperature, regulated to prevent ill solutions. Our approach, Adaptive Multi-Head Contrastive Learning (AMCL), can be applied to and experimentally enhances several popular contrastive learning methods such as SimCLR, MoCo, and Barlow Twins. The improvement remains consistent across various backbones and linear probing epochs, and becomes more significant when employing multiple augmentation methods.
|
http://arxiv.org/pdf/2310.05615v2
|
[
"Lei Wang",
"Piotr Koniusz",
"Tom Gedeon",
"Liang Zheng"
] |
2024-07-10T17:37:48Z
|
2023-10-09T11:08:34Z
|
2403.02502
|
Trial and Error: Exploration-Based Trajectory Optimization for LLM
Agents
|
Large Language Models (LLMs) have become integral components in various autonomous agent systems. In this study, we present an exploration-based trajectory optimization approach, referred to as ETO. This learning method is designed to enhance the performance of open LLM agents. Contrary to previous studies that exclusively train on successful expert trajectories, our method allows agents to learn from their exploration failures. This leads to improved performance through an iterative optimization framework. During the exploration phase, the agent interacts with the environment while completing given tasks, gathering failure trajectories to create contrastive trajectory pairs. In the subsequent training phase, the agent utilizes these trajectory preference pairs to update its policy using contrastive learning methods like DPO. This iterative cycle of exploration and training fosters continued improvement in the agents. Our experiments on three complex tasks demonstrate that ETO consistently surpasses baseline performance by a large margin. Furthermore, an examination of task-solving efficiency and potential in scenarios lacking expert trajectory underscores the effectiveness of our approach.
|
http://arxiv.org/pdf/2403.02502v2
|
[
"Yifan Song",
"Da Yin",
"Xiang Yue",
"Jie Huang",
"Sujian Li",
"Bill Yuchen Lin"
] |
2024-07-10T17:36:25Z
|
2024-03-04T21:50:29Z
|
2311.05657
|
Agent Lumos: Unified and Modular Training for Open-Source Language
Agents
|
Closed-source agents suffer from several issues such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce LUMOS, one of the first frameworks for training open-source LLM-based agents. LUMOS features a learnable, unified, and modular architecture with a planning module that learns high-level subgoal generation, and a grounding module trained to translate these into actions using various tools in the execution module. The design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, LUMOS exhibits several key advantages: (1) LUMOS outperforms multiple larger open-source agents on the held-out datasets (unused for training) for each task type. LUMOS even surpasses GPT agents on QA and web tasks; (2) LUMOS outperforms open-source agents produced by chain-of-thoughts and unmodularized integrated training; and (3) LUMOS effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents.
|
http://arxiv.org/pdf/2311.05657v3
|
[
"Da Yin",
"Faeze Brahman",
"Abhilasha Ravichander",
"Khyathi Chandu",
"Kai-Wei Chang",
"Yejin Choi",
"Bill Yuchen Lin"
] |
2024-07-10T17:36:02Z
|
2023-11-09T00:30:13Z
|
2404.09349
|
Adversarial Robustness Limits via Scaling-Law and Human-Alignment
Studies
|
This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$%. To understand this gap, we analyze how model size, dataset size, and synthetic data quality affect robustness by developing the first scaling laws for adversarial training. Our scaling laws reveal inefficiencies in prior art and provide actionable feedback to advance the field. For instance, we discovered that SOTA methods diverge notably from compute-optimal setups, using excess compute for their level of robustness. Leveraging a compute-efficient setup, we surpass the prior SOTA with $20$% ($70$%) fewer training (inference) FLOPs. We trained various compute-efficient models, with our best achieving $74$% AutoAttack accuracy ($+3$% gain). However, our scaling laws also predict robustness slowly grows then plateaus at $90$%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible. To better understand this predicted limit, we carry out a small-scale human evaluation on the AutoAttack data that fools our top-performing model. Concerningly, we estimate that human performance also plateaus near $90$%, which we show to be attributable to $\ell_{\infty}$-constrained attacks' generation of invalid images not consistent with their original labels. Having characterized limiting roadblocks, we outline promising paths for future research.
|
http://arxiv.org/pdf/2404.09349v2
|
[
"Brian R. Bartoldson",
"James Diffenderfer",
"Konstantinos Parasyris",
"Bhavya Kailkhura"
] |
2024-07-10T17:32:29Z
|
2024-04-14T20:14:38Z
|
2407.07868
|
Green Screen Augmentation Enables Scene Generalisation in Robotic
Manipulation
|
Generalising vision-based manipulation policies to novel environments remains a challenging area with limited exploration. Current practices involve collecting data in one location, training imitation learning or reinforcement learning policies with this data, and deploying the policy in the same location. However, this approach lacks scalability as it necessitates data collection in multiple locations for each task. This paper proposes a novel approach where data is collected in a location predominantly featuring green screens. We introduce Green-screen Augmentation (GreenAug), employing a chroma key algorithm to overlay background textures onto a green screen. Through extensive real-world empirical studies with over 850 training demonstrations and 8.2k evaluation episodes, we demonstrate that GreenAug surpasses no augmentation, standard computer vision augmentation, and prior generative augmentation methods in performance. While no algorithmic novelties are claimed, our paper advocates for a fundamental shift in data collection practices. We propose that real-world demonstrations in future research should utilise green screens, followed by the application of GreenAug. We believe GreenAug unlocks policy generalisation to visually distinct novel locations, addressing the current scene generalisation limitations in robot learning.
|
http://arxiv.org/pdf/2407.07868v1
|
[
"Eugene Teoh",
"Sumit Patidar",
"Xiao Ma",
"Stephen James"
] |
2024-07-10T17:32:05Z
|
2024-07-10T17:32:05Z
|
2407.07858
|
FACTS About Building Retrieval Augmented Generation-based Chatbots
|
Enterprise chatbots, powered by generative AI, are emerging as key applications to enhance employee productivity. Retrieval Augmented Generation (RAG), Large Language Models (LLMs), and orchestration frameworks like Langchain and Llamaindex are crucial for building these chatbots. However, creating effective enterprise chatbots is challenging and requires meticulous RAG pipeline engineering. This includes fine-tuning embeddings and LLMs, extracting documents from vector databases, rephrasing queries, reranking results, designing prompts, honoring document access controls, providing concise responses, including references, safeguarding personal information, and building orchestration agents. We present a framework for building RAG-based chatbots based on our experience with three NVIDIA chatbots: for IT/HR benefits, financial earnings, and general content. Our contributions are three-fold: introducing the FACTS framework (Freshness, Architectures, Cost, Testing, Security), presenting fifteen RAG pipeline control points, and providing empirical results on accuracy-latency tradeoffs between large and small LLMs. To the best of our knowledge, this is the first paper of its kind that provides a holistic view of the factors as well as solutions for building secure enterprise-grade chatbots.
|
http://arxiv.org/pdf/2407.07858v1
|
[
"Rama Akkiraju",
"Anbang Xu",
"Deepak Bora",
"Tan Yu",
"Lu An",
"Vishal Seth",
"Aaditya Shukla",
"Pritam Gundecha",
"Hridhay Mehta",
"Ashwin Jha",
"Prithvi Raj",
"Abhinav Balasubramanian",
"Murali Maram",
"Guru Muthusamy",
"Shivakesh Reddy Annepally",
"Sidney Knowles",
"Min Du",
"Nick Burnett",
"Sean Javiya",
"Ashok Marannan",
"Mamta Kumari",
"Surbhi Jha",
"Ethan Dereszenski",
"Anupam Chakraborty",
"Subhash Ranjan",
"Amina Terfai",
"Anoop Surya",
"Tracey Mercer",
"Vinodh Kumar Thanigachalam",
"Tamar Bar",
"Sanjana Krishnan",
"Samy Kilaru",
"Jasmine Jaksic",
"Nave Algarici",
"Jacob Liberman",
"Joey Conway",
"Sonu Nayyar",
"Justin Boitano"
] |
2024-07-10T17:20:59Z
|
2024-07-10T17:20:59Z
|
2407.07852
|
OpenDiLoCo: An Open-Source Framework for Globally Distributed
Low-Communication Training
|
OpenDiLoCo is an open-source implementation and replication of the Distributed Low-Communication (DiLoCo) training method for large language models. We provide a reproducible implementation of the DiLoCo experiments, offering it within a scalable, decentralized training framework using the Hivemind library. We demonstrate its effectiveness by training a model across two continents and three countries, while maintaining 90-95% compute utilization. Additionally, we conduct ablation studies focusing on the algorithm's compute efficiency and scalability in the number of workers, and show that its gradients can be all-reduced using FP16 without any performance degradation. Furthermore, we scale OpenDiLoCo to 3x the size of the original work, demonstrating its effectiveness for billion parameter models.
|
http://arxiv.org/pdf/2407.07852v1
|
[
"Sami Jaghouar",
"Jack Min Ong",
"Johannes Hagemann"
] |
2024-07-10T17:13:17Z
|
2024-07-10T17:13:17Z
|
2310.11366
|
Lie Group Decompositions for Equivariant Neural Networks
|
Invariance and equivariance to geometrical transformations have proven to be very useful inductive biases when training (convolutional) neural network models, especially in the low-data regime. Much work has focused on the case where the symmetry group employed is compact or abelian, or both. Recent work has explored enlarging the class of transformations used to the case of Lie groups, principally through the use of their Lie algebra, as well as the group exponential and logarithm maps. The applicability of such methods is limited by the fact that depending on the group of interest $G$, the exponential map may not be surjective. Further limitations are encountered when $G$ is neither compact nor abelian. Using the structure and geometry of Lie groups and their homogeneous spaces, we present a framework by which it is possible to work with such groups primarily focusing on the groups $G = \text{GL}^{+}(n, \mathbb{R})$ and $G = \text{SL}(n, \mathbb{R})$, as well as their representation as affine transformations $\mathbb{R}^{n} \rtimes G$. Invariant integration as well as a global parametrization is realized by a decomposition into subgroups and submanifolds which can be handled individually. Under this framework, we show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations. We evaluate the robustness and out-of-distribution generalisation capability of our model on the benchmark affine-invariant classification task, outperforming previous proposals.
|
http://arxiv.org/pdf/2310.11366v2
|
[
"Mircea Mironenco",
"Patrick Forré"
] |
2024-07-10T17:12:45Z
|
2023-10-17T16:04:33Z
|
2407.07848
|
Uncovering Layer-Dependent Activation Sparsity Patterns in ReLU
Transformers
|
Previous work has demonstrated that MLPs within ReLU Transformers exhibit high levels of sparsity, with many of their activations equal to zero for any given token. We build on that work to more deeply explore how token-level sparsity evolves over the course of training, and how it connects to broader sparsity patterns over the course of a sequence or batch, demonstrating that the different layers within small transformers exhibit distinctly layer-specific patterns on both of these fronts. In particular, we demonstrate that the first and last layer of the network have distinctive and in many ways inverted relationships to sparsity, and explore implications for the structure of feature representations being learned at different depths of the model. We additionally explore the phenomenon of ReLU dimensions "turning off", and show evidence suggesting that "neuron death" is being primarily driven by the dynamics of training, rather than simply occurring randomly or accidentally as a result of outliers.
|
http://arxiv.org/pdf/2407.07848v1
|
[
"Cody Wild",
"Jesper Anderson"
] |
2024-07-10T17:10:10Z
|
2024-07-10T17:10:10Z
|
2402.11354
|
Probabilistic Routing for Graph-Based Approximate Nearest Neighbor
Search
|
Approximate nearest neighbor search (ANNS) in high-dimensional spaces is a pivotal challenge in the field of machine learning. In recent years, graph-based methods have emerged as the superior approach to ANNS, establishing a new state of the art. Although various optimizations for graph-based ANNS have been introduced, they predominantly rely on heuristic methods that lack formal theoretical backing. This paper aims to enhance routing within graph-based ANNS by introducing a method that offers a probabilistic guarantee when exploring a node's neighbors in the graph. We formulate the problem as probabilistic routing and develop two baseline strategies by incorporating locality-sensitive techniques. Subsequently, we introduce PEOs, a novel approach that efficiently identifies which neighbors in the graph should be considered for exact distance calculation, thus significantly improving efficiency in practice. Our experiments demonstrate that equipping PEOs can increase throughput on commonly utilized graph indexes (HNSW and NSSG) by a factor of 1.6 to 2.5, and its efficiency consistently outperforms the leading-edge routing technique by 1.1 to 1.4 times.
|
http://arxiv.org/pdf/2402.11354v2
|
[
"Kejing Lu",
"Chuan Xiao",
"Yoshiharu Ishikawa"
] |
2024-07-10T17:05:43Z
|
2024-02-17T18:08:37Z
|
2405.19327
|
MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model
Series
|
Large Language Models (LLMs) have made great strides in recent years to achieve unprecedented performance across different tasks. However, due to commercial interest, the most competitive models like GPT, Gemini, and Claude have been gated behind proprietary interfaces without disclosing the training details. Recently, many institutions have open-sourced several strong LLMs like LLaMA-3, comparable to existing closed-source LLMs. However, only the model's weights are provided, with most details (e.g., intermediate checkpoints, pre-training corpus, and training code) being undisclosed. To improve the transparency of LLMs, the research community has moved to open-source truly open LLMs (e.g., Pythia, Amber, OLMo), where more details (e.g., pre-training corpus and training code) are being provided. These models have greatly advanced the scientific study of these large models including their strengths, weaknesses, biases and risks. However, we observe that the existing truly open LLMs on reasoning, knowledge, and coding tasks are still inferior to existing state-of-the-art LLMs with similar model sizes. To this end, we open-source MAP-Neo, a highly capable and transparent bilingual language model with 7B parameters trained from scratch on 4.5T high-quality tokens. Our MAP-Neo is the first fully open-sourced bilingual LLM with comparable performance to existing state-of-the-art LLMs. Moreover, we open-source all details to reproduce our MAP-Neo, where the cleaned pre-training corpus, data cleaning pipeline, checkpoints, and well-optimized training/evaluation framework are provided. Finally, we hope our MAP-Neo will enhance and strengthen the open research community and inspire more innovation and creativity to facilitate the further improvements of LLMs.
|
http://arxiv.org/pdf/2405.19327v4
|
[
"Ge Zhang",
"Scott Qu",
"Jiaheng Liu",
"Chenchen Zhang",
"Chenghua Lin",
"Chou Leuang Yu",
"Danny Pan",
"Esther Cheng",
"Jie Liu",
"Qunshu Lin",
"Raven Yuan",
"Tuney Zheng",
"Wei Pang",
"Xinrun Du",
"Yiming Liang",
"Yinghao Ma",
"Yizhi Li",
"Ziyang Ma",
"Bill Lin",
"Emmanouil Benetos",
"Huan Yang",
"Junting Zhou",
"Kaijing Ma",
"Minghao Liu",
"Morry Niu",
"Noah Wang",
"Quehry Que",
"Ruibo Liu",
"Sine Liu",
"Shawn Guo",
"Soren Gao",
"Wangchunshu Zhou",
"Xinyue Zhang",
"Yizhi Zhou",
"Yubo Wang",
"Yuelin Bai",
"Yuhan Zhang",
"Yuxiang Zhang",
"Zenith Wang",
"Zhenzhu Yang",
"Zijian Zhao",
"Jiajun Zhang",
"Wanli Ouyang",
"Wenhao Huang",
"Wenhu Chen"
] |
2024-07-10T16:55:47Z
|
2024-05-29T17:57:16Z
|
2407.07829
|
Disentangled Representation Learning through Geometry Preservation with
the Gromov-Monge Gap
|
Learning disentangled representations in an unsupervised manner is a fundamental challenge in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. While remarkably difficult to solve in general, recent works have shown that disentanglement is provably achievable under additional assumptions that can leverage geometrical constraints, such as local isometry. To use these insights, we propose a novel perspective on disentangled representation learning built on quadratic optimal transport. Specifically, we formulate the problem in the Gromov-Monge setting, which seeks isometric mappings between distributions supported on different spaces. We propose the Gromov-Monge-Gap (GMG), a regularizer that quantifies the geometry-preservation of an arbitrary push-forward map between two distributions supported on different spaces. We demonstrate the effectiveness of GMG regularization for disentanglement on four standard benchmarks. Moreover, we show that geometry preservation can even encourage unsupervised disentanglement without the standard reconstruction objective - making the underlying model decoder-free, and promising a more practically viable and scalable perspective on unsupervised disentanglement.
|
http://arxiv.org/pdf/2407.07829v1
|
[
"Théo Uscidda",
"Luca Eyring",
"Karsten Roth",
"Fabian Theis",
"Zeynep Akata",
"Marco Cuturi"
] |
2024-07-10T16:51:32Z
|
2024-07-10T16:51:32Z
|
2407.07827
|
Estimating the stability number of a random graph using convolutional
neural networks
|
Graph combinatorial optimization problems are widely applicable and notoriously difficult to compute; for example, consider the traveling salesman or facility location problems. In this paper, we explore the feasibility of using convolutional neural networks (CNNs) on graph images to predict the cardinality of combinatorial properties of random graphs and networks. Specifically, we use image representations of modified adjacency matrices of random graphs as training samples for a CNN model to predict the stability number of random graphs; where the stability number is the cardinality of a maximum set of vertices containing no pairwise adjacency. Our approach demonstrates the potential for applying deep learning in combinatorial optimization problems.
|
http://arxiv.org/pdf/2407.07827v1
|
[
"Randy Davila"
] |
2024-07-10T16:50:59Z
|
2024-07-10T16:50:59Z
|
2407.07821
|
When to Accept Automated Predictions and When to Defer to Human
Judgment?
|
Ensuring the reliability and safety of automated decision-making is crucial. It is well-known that data distribution shifts in machine learning can produce unreliable outcomes. This paper proposes a new approach for measuring the reliability of predictions under distribution shifts. We analyze how the outputs of a trained neural network change using clustering to measure distances between outputs and class centroids. We propose this distance as a metric to evaluate the confidence of predictions under distribution shifts. We assign each prediction to a cluster with centroid representing the mean softmax output for all correct predictions of a given class. We then define a safety threshold for a class as the smallest distance from an incorrect prediction to the given class centroid. We evaluate the approach on the MNIST and CIFAR-10 datasets using a Convolutional Neural Network and a Vision Transformer, respectively. The results show that our approach is consistent across these data sets and network models, and indicate that the proposed metric can offer an efficient way of determining when automated predictions are acceptable and when they should be deferred to human operators given a distribution shift.
|
http://arxiv.org/pdf/2407.07821v1
|
[
"Daniel Sikar",
"Artur Garcez",
"Tillman Weyde",
"Robin Bloomfield",
"Kaleem Peeroo"
] |
2024-07-10T16:45:52Z
|
2024-07-10T16:45:52Z
|
2407.07818
|
The Misclassification Likelihood Matrix: Some Classes Are More Likely To
Be Misclassified Than Others
|
This study introduces the Misclassification Likelihood Matrix (MLM) as a novel tool for quantifying the reliability of neural network predictions under distribution shifts. The MLM is obtained by leveraging softmax outputs and clustering techniques to measure the distances between the predictions of a trained neural network and class centroids. By analyzing these distances, the MLM provides a comprehensive view of the model's misclassification tendencies, enabling decision-makers to identify the most common and critical sources of errors. The MLM allows for the prioritization of model improvements and the establishment of decision thresholds based on acceptable risk levels. The approach is evaluated on the MNIST dataset using a Convolutional Neural Network (CNN) and a perturbed version of the dataset to simulate distribution shifts. The results demonstrate the effectiveness of the MLM in assessing the reliability of predictions and highlight its potential in enhancing the interpretability and risk mitigation capabilities of neural networks. The implications of this work extend beyond image classification, with ongoing applications in autonomous systems, such as self-driving cars, to improve the safety and reliability of decision-making in complex, real-world environments.
|
http://arxiv.org/pdf/2407.07818v1
|
[
"Daniel Sikar",
"Artur Garcez",
"Robin Bloomfield",
"Tillman Weyde",
"Kaleem Peeroo",
"Naman Singh",
"Maeve Hutchinson",
"Mirela Reljan-Delaney"
] |
2024-07-10T16:43:14Z
|
2024-07-10T16:43:14Z
|