arxiv_id
stringlengths 7
11
| title
stringlengths 7
243
| abstract
stringlengths 3
2.79k
| link
stringlengths 21
49
| authors
listlengths 1
451
| updated
stringlengths 20
20
| published
stringlengths 20
20
|
---|---|---|---|---|---|---|
2308.09571
|
Physics-Informed Boundary Integral Networks (PIBI-Nets): A Data-Driven
Approach for Solving Partial Differential Equations
|
Partial differential equations (PDEs) are widely used to describe relevant phenomena in dynamical systems. In real-world applications, we commonly need to combine formal PDE models with (potentially noisy) observations. This is especially relevant in settings where we lack information about boundary or initial conditions, or where we need to identify unknown model parameters. In recent years, Physics-Informed Neural Networks (PINNs) have become a popular tool for this kind of problems. In high-dimensional settings, however, PINNs often suffer from computational problems because they usually require dense collocation points over the entire computational domain. To address this problem, we present Physics-Informed Boundary Integral Networks (PIBI-Nets) as a data-driven approach for solving PDEs in one dimension less than the original problem space. PIBI-Nets only require points at the computational domain boundary, while still achieving highly accurate results. Moreover, PIBI-Nets clearly outperform PINNs in several practical settings. Exploiting elementary properties of fundamental solutions of linear differential operators, we present a principled and simple way to handle point sources in inverse problems. We demonstrate the excellent performance of PIBI- Nets for the Laplace and Poisson equations, both on artificial datasets and within a real-world application concerning the reconstruction of groundwater flows.
|
http://arxiv.org/abs/2308.09571v3
|
[
"Monika Nagy-Huber",
"Volker Roth"
] |
2024-07-03T20:31:06Z
|
2023-08-18T14:03:34Z
|
2310.17582
|
Convergence of flow-based generative models via proximal gradient
descent in Wasserstein space
|
Flow-based generative models enjoy certain advantages in computing the data generation and the likelihood, and have recently shown competitive empirical performance. Compared to the accumulating theoretical studies on related score-based diffusion models, analysis of flow-based models, which are deterministic in both forward (data-to-noise) and reverse (noise-to-data) directions, remain sparse. In this paper, we provide a theoretical guarantee of generating data distribution by a progressive flow model, the so-called JKO flow model, which implements the Jordan-Kinderleherer-Otto (JKO) scheme in a normalizing flow network. Leveraging the exponential convergence of the proximal gradient descent (GD) in Wasserstein space, we prove the Kullback-Leibler (KL) guarantee of data generation by a JKO flow model to be $O(varepsilon^2)$ when using $N lesssim log (1/varepsilon)$ many JKO steps ($N$ Residual Blocks in the flow) where $varepsilon $ is the error in the per-step first-order condition. The assumption on data density is merely a finite second moment, and the theory extends to data distributions without density and when there are inversion errors in the reverse process where we obtain KL-$W_2$ mixed error guarantees. The non-asymptotic convergence rate of the JKO-type $W_2$-proximal GD is proved for a general class of convex objective functionals that includes the KL divergence as a special case, which can be of independent interest. The analysis framework can extend to other first-order Wasserstein optimization schemes applied to flow-based generative models.
|
http://arxiv.org/pdf/2310.17582v3
|
[
"Xiuyuan Cheng",
"Jianfeng Lu",
"Yixin Tan",
"Yao Xie"
] |
2024-07-03T20:05:43Z
|
2023-10-26T17:06:23Z
|
2310.08419
|
Jailbreaking Black Box Large Language Models in Twenty Queries
|
There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of these vulnerabilities is therefore instrumental in understanding inherent weaknesses and preventing future misuse. To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an algorithm that generates semantic jailbreaks with only black-box access to an LLM. PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention. In this way, the attacker LLM iteratively queries the target LLM to update and refine a candidate jailbreak. Empirically, PAIR often requires fewer than twenty queries to produce a jailbreak, which is orders of magnitude more efficient than existing algorithms. PAIR also achieves competitive jailbreaking success rates and transferability on open and closed-source LLMs, including GPT-3.5/4, Vicuna, and Gemini.
|
http://arxiv.org/pdf/2310.08419v3
|
[
"Patrick Chao",
"Alexander Robey",
"Edgar Dobriban",
"Hamed Hassani",
"George J. Pappas",
"Eric Wong"
] |
2024-07-03T19:50:34Z
|
2023-10-12T15:38:28Z
|
2205.06297
|
Improving Sequential Query Recommendation with Immediate User Feedback
|
We propose an algorithm for next query recommendation in interactive data exploration settings, like knowledge discovery for information gathering. The state-of-the-art query recommendation algorithms are based on sequence-to-sequence learning approaches that exploit historical interaction data. Due to the supervision involved in the learning process, such approaches fail to adapt to immediate user feedback. We propose to augment the transformer-based causal language models for query recommendations to adapt to the immediate user feedback using multi-armed bandit (MAB) framework. We conduct a large-scale experimental study using log files from a popular online literature discovery service and demonstrate that our algorithm improves the per-round regret substantially, with respect to the state-of-the-art transformer-based query recommendation models, which do not make use of immediate user feedback. Our data model and source code are available at https://github.com/shampp/exp3_ss
|
http://arxiv.org/pdf/2205.06297v3
|
[
"Shameem A Puthiya Parambath",
"Christos Anagnostopoulos",
"Roderick Murray-Smith"
] |
2024-07-03T19:44:31Z
|
2022-05-12T18:19:24Z
|
2407.03475
|
How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self
Distillation Networks
|
Two competing paradigms exist for self-supervised learning of data representations. Joint Embedding Predictive Architecture (JEPA) is a class of architectures in which semantically similar inputs are encoded into representations that are predictive of each other. A recent successful approach that falls under the JEPA framework is self-distillation, where an online encoder is trained to predict the output of the target encoder, sometimes using a lightweight predictor network. This is contrasted with the Masked AutoEncoder (MAE) paradigm, where an encoder and decoder are trained to reconstruct missing parts of the input in the data space rather, than its latent representation. A common motivation for using the JEPA approach over MAE is that the JEPA objective prioritizes abstract features over fine-grained pixel information (which can be unpredictable and uninformative). In this work, we seek to understand the mechanism behind this empirical observation by analyzing the training dynamics of deep linear models. We uncover a surprising mechanism: in a simplified linear setting where both approaches learn similar representations, JEPAs are biased to learn high-influence features, i.e., features characterized by having high regression coefficients. Our results point to a distinct implicit bias of predicting in latent space that may shed light on its success in practice.
|
http://arxiv.org/pdf/2407.03475v1
|
[
"Etai Littwin",
"Omid Saremi",
"Madhu Advani",
"Vimal Thilak",
"Preetum Nakkiran",
"Chen Huang",
"Joshua Susskind"
] |
2024-07-03T19:43:12Z
|
2024-07-03T19:43:12Z
|
2407.03470
|
Prosody-Driven Privacy-Preserving Dementia Detection
|
Speaker embeddings extracted from voice recordings have been proven valuable for dementia detection. However, by their nature, these embeddings contain identifiable information which raises privacy concerns. In this work, we aim to anonymize embeddings while preserving the diagnostic utility for dementia detection. Previous studies rely on adversarial learning and models trained on the target attribute and struggle in limited-resource settings. We propose a novel approach that leverages domain knowledge to disentangle prosody features relevant to dementia from speaker embeddings without relying on a dementia classifier. Our experiments show the effectiveness of our approach in preserving speaker privacy (speaker recognition F1-score .01%) while maintaining high dementia detection score F1-score of 74% on the ADReSS dataset. Our results are also on par with a more constrained classifier-dependent system on ADReSSo (.01% and .66%), and have no impact on synthesized speech naturalness.
|
http://arxiv.org/pdf/2407.03470v1
|
[
"Dominika Woszczyk",
"Ranya Aloufi",
"Soteris Demetriou"
] |
2024-07-03T19:34:47Z
|
2024-07-03T19:34:47Z
|
2307.07604
|
Smooth Lower Bounds for Differentially Private Algorithms via
Padding-and-Permuting Fingerprinting Codes
|
Fingerprinting arguments, first introduced by Bun, Ullman, and Vadhan (STOC 2014), are the most widely used method for establishing lower bounds on the sample complexity or error of approximately differentially private (DP) algorithms. Still, there are many problems in differential privacy for which we don't know suitable lower bounds, and even for problems that we do, the lower bounds are not smooth, and usually become vacuous when the error is larger than some threshold. We present a new framework and tools to generate smooth lower bounds on the sample complexity of differentially private algorithms satisfying very weak accuracy. We illustrate the applicability of our method by providing new lower bounds in various settings: 1. A tight lower bound for DP averaging in the low-accuracy regime, which in particular implies a lower bound for the private 1-cluster problem introduced by Nissim, Stemmer, and Vadhan (PODS 2016). 2. A lower bound on the additive error of DP algorithms for approximate k-means clustering and general (k,z)-clustering, as a function of the multiplicative error, which is tight for a constant multiplication error. 3. A lower bound for estimating the top singular vector of a matrix under DP in low-accuracy regimes, which is a special case of DP subspace estimation studied by Singhal and Steinke (NeurIPS 2021). Our main technique is to apply a padding-and-permuting transformation to a fingerprinting code. However, rather than proving our results using a black-box access to an existing fingerprinting code (e.g., Tardos' code), we develop a new fingerprinting lemma that is stronger than those of Dwork et al. (FOCS 2015) and Bun et al. (SODA 2017), and prove our lower bounds directly from the lemma. Our lemma, in particular, gives a simpler fingerprinting code construction with optimal rate (up to polylogarithmic factors) that is of independent interest.
|
http://arxiv.org/pdf/2307.07604v4
|
[
"Naty Peter",
"Eliad Tsfadia",
"Jonathan Ullman"
] |
2024-07-03T19:22:10Z
|
2023-07-14T19:58:02Z
|
2407.03453
|
On Large Language Models in National Security Applications
|
The overwhelming success of GPT-4 in early 2023 highlighted the transformative potential of large language models (LLMs) across various sectors, including national security. This article explores the implications of LLM integration within national security contexts, analyzing their potential to revolutionize information processing, decision-making, and operational efficiency. Whereas LLMs offer substantial benefits, such as automating tasks and enhancing data analysis, they also pose significant risks, including hallucinations, data privacy concerns, and vulnerability to adversarial attacks. Through their coupling with decision-theoretic principles and Bayesian reasoning, LLMs can significantly improve decision-making processes within national security organizations. Namely, LLMs can facilitate the transition from data to actionable decisions, enabling decision-makers to quickly receive and distill available information with less manpower. Current applications within the US Department of Defense and beyond are explored, e.g., the USAF's use of LLMs for wargaming and automatic summarization, that illustrate their potential to streamline operations and support decision-making. However, these applications necessitate rigorous safeguards to ensure accuracy and reliability. The broader implications of LLM integration extend to strategic planning, international relations, and the broader geopolitical landscape, with adversarial nations leveraging LLMs for disinformation and cyber operations, emphasizing the need for robust countermeasures. Despite exhibiting "sparks" of artificial general intelligence, LLMs are best suited for supporting roles rather than leading strategic decisions. Their use in training and wargaming can provide valuable insights and personalized learning experiences for military personnel, thereby improving operational readiness.
|
http://arxiv.org/pdf/2407.03453v1
|
[
"William N. Caballero",
"Phillip R. Jenkins"
] |
2024-07-03T18:53:22Z
|
2024-07-03T18:53:22Z
|
2407.03440
|
Advanced Framework for Animal Sound Classification With Features
Optimization
|
The automatic classification of animal sounds presents an enduring challenge in bioacoustics, owing to the diverse statistical properties of sound signals, variations in recording equipment, and prevalent low Signal-to-Noise Ratio (SNR) conditions. Deep learning models like Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) have excelled in human speech recognition but have not been effectively tailored to the intricate nature of animal sounds, which exhibit substantial diversity even within the same domain. We propose an automated classification framework applicable to general animal sound classification. Our approach first optimizes audio features from Mel-frequency cepstral coefficients (MFCC) including feature rearrangement and feature reduction. It then uses the optimized features for the deep learning model, i.e., an attention-based Bidirectional LSTM (Bi-LSTM), to extract deep semantic features for sound classification. We also contribute an animal sound benchmark dataset encompassing oceanic animals and birds1. Extensive experimentation with real-world datasets demonstrates that our approach consistently outperforms baseline methods by over 25% in precision, recall, and accuracy, promising advancements in animal sound classification.
|
http://arxiv.org/pdf/2407.03440v1
|
[
"Qiang Yang",
"Xiuying Chen",
"Changsheng Ma",
"Carlos M. Duarte",
"Xiangliang Zhang"
] |
2024-07-03T18:33:47Z
|
2024-07-03T18:33:47Z
|
2407.03436
|
A Role of Environmental Complexity on Representation Learning in Deep
Reinforcement Learning Agents
|
The environments where individuals live can present diverse navigation challenges, resulting in varying navigation abilities and strategies. Inspired by differing urban layouts and the Dual Solutions Paradigm test used for human navigators, we developed a simulated navigation environment to train deep reinforcement learning agents in a shortcut usage task. We modulated the frequency of exposure to a shortcut and navigation cue, leading to the development of artificial agents with differing abilities. We examined the encoded representations in artificial neural networks driving these agents, revealing intricate dynamics in representation learning, and correlated them with shortcut use preferences. Furthermore, we demonstrated methods to analyze representations across a population of nodes, which proved effective in finding patterns in what would otherwise be noisy single-node data. These techniques may also have broader applications in studying neural activity. From our observations in representation learning dynamics, we propose insights for human navigation learning, emphasizing the importance of navigation challenges in developing strong landmark knowledge over repeated exposures to landmarks alone.
|
http://arxiv.org/pdf/2407.03436v1
|
[
"Andrew Liu",
"Alla Borisyuk"
] |
2024-07-03T18:27:26Z
|
2024-07-03T18:27:26Z
|
2407.03428
|
NEBULA: Neural Empirical Bayes Under Latent Representations for
Efficient and Controllable Design of Molecular Libraries
|
We present NEBULA, the first latent 3D generative model for scalable generation of large molecular libraries around a seed compound of interest. Such libraries are crucial for scientific discovery, but it remains challenging to generate large numbers of high quality samples efficiently. 3D-voxel-based methods have recently shown great promise for generating high quality samples de novo from random noise (Pinheiro et al., 2023). However, sampling in 3D-voxel space is computationally expensive and use in library generation is prohibitively slow. Here, we instead perform neural empirical Bayes sampling (Saremi & Hyvarinen, 2019) in the learned latent space of a vector-quantized variational autoencoder. NEBULA generates large molecular libraries nearly an order of magnitude faster than existing methods without sacrificing sample quality. Moreover, NEBULA generalizes better to unseen drug-like molecules, as demonstrated on two public datasets and multiple recently released drugs. We expect the approach herein to be highly enabling for machine learning-based drug discovery. The code is available at https://github.com/prescient-design/nebula
|
http://arxiv.org/pdf/2407.03428v1
|
[
"Ewa M. Nowara",
"Pedro O. Pinheiro",
"Sai Pooja Mahajan",
"Omar Mahmood",
"Andrew Martin Watkins",
"Saeed Saremi",
"Michael Maser"
] |
2024-07-03T18:10:43Z
|
2024-07-03T18:10:43Z
|
2407.03426
|
Multi-Task Decision-Making for Multi-User 360 Video Processing over
Wireless Networks
|
We study a multi-task decision-making problem for 360 video processing in a wireless multi-user virtual reality (VR) system that includes an edge computing unit (ECU) to deliver 360 videos to VR users and offer computing assistance for decoding/rendering of video frames. However, this comes at the expense of increased data volume and required bandwidth. To balance this trade-off, we formulate a constrained quality of experience (QoE) maximization problem in which the rebuffering time and quality variation between video frames are bounded by user and video requirements. To solve the formulated multi-user QoE maximization, we leverage deep reinforcement learning (DRL) for multi-task rate adaptation and computation distribution (MTRC). The proposed MTRC approach does not rely on any predefined assumption about the environment and relies on video playback statistics (i.e., past throughput, decoding time, transmission time, etc.), video information, and the resulting performance to adjust the video bitrate and computation distribution. We train MTRC with real-world wireless network traces and 360 video datasets to obtain evaluation results in terms of the average QoE, peak signal-to-noise ratio (PSNR), rebuffering time, and quality variation. Our results indicate that the MTRC improves the users' QoE compared to state-of-the-art rate adaptation algorithm. Specifically, we show a 5.97 dB to 6.44 dB improvement in PSNR, a 1.66X to 4.23X improvement in rebuffering time, and a 4.21 dB to 4.35 dB improvement in quality variation.
|
http://arxiv.org/pdf/2407.03426v1
|
[
"Babak Badnava",
"Jacob Chakareski",
"Morteza Hashemi"
] |
2024-07-03T18:09:25Z
|
2024-07-03T18:09:25Z
|
2407.03418
|
HEMM: Holistic Evaluation of Multimodal Foundation Models
|
Multimodal foundation models that can holistically process text alongside images, video, audio, and other sensory modalities are increasingly used in a variety of real-world applications. However, it is challenging to characterize and study progress in multimodal foundation models, given the range of possible modeling decisions, tasks, and domains. In this paper, we introduce Holistic Evaluation of Multimodal Models (HEMM) to systematically evaluate the capabilities of multimodal foundation models across a set of 3 dimensions: basic skills, information flow, and real-world use cases. Basic multimodal skills are internal abilities required to solve problems, such as learning interactions across modalities, fine-grained alignment, multi-step reasoning, and the ability to handle external knowledge. Information flow studies how multimodal content changes during a task through querying, translation, editing, and fusion. Use cases span domain-specific challenges introduced in real-world multimedia, affective computing, natural sciences, healthcare, and human-computer interaction applications. Through comprehensive experiments across the 30 tasks in HEMM, we (1) identify key dataset dimensions (e.g., basic skills, information flows, and use cases) that pose challenges to today's models, and (2) distill performance trends regarding how different modeling dimensions (e.g., scale, pre-training data, multimodal alignment, pre-training, and instruction tuning objectives) influence performance. Our conclusions regarding challenging multimodal interactions, use cases, and tasks requiring reasoning and external knowledge, the benefits of data and model scale, and the impacts of instruction tuning yield actionable insights for future work in multimodal foundation models.
|
http://arxiv.org/pdf/2407.03418v1
|
[
"Paul Pu Liang",
"Akshay Goindani",
"Talha Chafekar",
"Leena Mathur",
"Haofei Yu",
"Ruslan Salakhutdinov",
"Louis-Philippe Morency"
] |
2024-07-03T18:00:48Z
|
2024-07-03T18:00:48Z
|
2407.03321
|
Planetarium: A Rigorous Benchmark for Translating Text to Structured
Planning Languages
|
Many recent works have explored using language models for planning problems. One line of research focuses on translating natural language descriptions of planning tasks into structured planning languages, such as the planning domain definition language (PDDL). While this approach is promising, accurately measuring the quality of generated PDDL code continues to pose significant challenges. First, generated PDDL code is typically evaluated using planning validators that check whether the problem can be solved with a planner. This method is insufficient because a language model might generate valid PDDL code that does not align with the natural language description of the task. Second, existing evaluation sets often have natural language descriptions of the planning task that closely resemble the ground truth PDDL, reducing the challenge of the task. To bridge this gap, we introduce benchmarkName, a benchmark designed to evaluate language models' ability to generate PDDL code from natural language descriptions of planning tasks. We begin by creating a PDDL equivalence algorithm that rigorously evaluates the correctness of PDDL code generated by language models by flexibly comparing it against a ground truth PDDL. Then, we present a dataset of $132,037$ text-to-PDDL pairs across 13 different tasks, with varying levels of difficulty. Finally, we evaluate several API-access and open-weight language models that reveal this task's complexity. For example, $87.6%$ of the PDDL problem descriptions generated by GPT-4o are syntactically parseable, $82.2%$ are valid, solve-able problems, but only $35.1%$ are semantically correct, highlighting the need for a more rigorous benchmark for this problem.
|
http://arxiv.org/pdf/2407.03321v1
|
[
"Max Zuo",
"Francisco Piedrahita Velez",
"Xiaochen Li",
"Michael L. Littman",
"Stephen H. Bach"
] |
2024-07-03T17:59:53Z
|
2024-07-03T17:59:53Z
|
2407.03311
|
Value-Penalized Auxiliary Control from Examples for Learning without
Rewards or Demonstrations
|
Learning from examples of success is an appealing approach to reinforcement learning that eliminates many of the disadvantages of using hand-crafted reward functions or full expert-demonstration trajectories, both of which can be difficult to acquire, biased, or suboptimal. However, learning from examples alone dramatically increases the exploration challenge, especially for complex tasks. This work introduces value-penalized auxiliary control from examples (VPACE); we significantly improve exploration in example-based control by adding scheduled auxiliary control and examples of auxiliary tasks. Furthermore, we identify a value-calibration problem, where policy value estimates can exceed their theoretical limits based on successful data. We resolve this problem, which is exacerbated by learning auxiliary tasks, through the addition of an above-success-level value penalty. Across three simulated and one real robotic manipulation environment, and 21 different main tasks, we show that our approach substantially improves learning efficiency. Videos, code, and datasets are available at https://papers.starslab.ca/vpace.
|
http://arxiv.org/pdf/2407.03311v1
|
[
"Trevor Ablett",
"Bryan Chan",
"Jayce Haoran Wang",
"Jonathan Kelly"
] |
2024-07-03T17:54:11Z
|
2024-07-03T17:54:11Z
|
2407.03310
|
Universal Length Generalization with Turing Programs
|
Length generalization refers to the ability to extrapolate from short training sequences to long test sequences and is a challenge for current large language models. While prior work has proposed some architecture or data format changes to achieve length generalization, these proposals typically apply to a limited set of tasks. Building on prior scratchpad and Chain-of-Thought (CoT) techniques, we propose Turing Programs, a novel CoT strategy that decomposes an algorithmic task into steps mimicking the computation of a Turing Machine. This framework is both universal, as it can accommodate any algorithmic task, and simple, requiring only copying text from the context with small modifications. We show that by using Turing Programs, we obtain robust length generalization on a range of algorithmic tasks: addition, multiplication and in-context SGD. We then demonstrate that transformers achieve length generalization on random Turing Programs, suggesting that length generalization is possible for any algorithmic task. Finally, we theoretically prove that transformers can implement Turing Programs, constructing a simple RASP (Weiss et al.) program that simulates an arbitrary Turing machine.
|
http://arxiv.org/pdf/2407.03310v1
|
[
"Kaiying Hou",
"David Brandfonbrener",
"Sham Kakade",
"Samy Jelassi",
"Eran Malach"
] |
2024-07-03T17:53:44Z
|
2024-07-03T17:53:44Z
|
2407.03300
|
DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents
|
Diffusion models (DMs) have revolutionized generative learning. They utilize a diffusion process to encode data into a simple Gaussian distribution. However, encoding a complex, potentially multimodal data distribution into a single continuous Gaussian distribution arguably represents an unnecessarily challenging learning problem. We propose Discrete-Continuous Latent Variable Diffusion Models (DisCo-Diff) to simplify this task by introducing complementary discrete latent variables. We augment DMs with learnable discrete latents, inferred with an encoder, and train DM and encoder end-to-end. DisCo-Diff does not rely on pre-trained networks, making the framework universally applicable. The discrete latents significantly simplify learning the DM's complex noise-to-data mapping by reducing the curvature of the DM's generative ODE. An additional autoregressive transformer models the distribution of the discrete latents, a simple step because DisCo-Diff requires only few discrete variables with small codebooks. We validate DisCo-Diff on toy data, several image synthesis tasks as well as molecular docking, and find that introducing discrete latents consistently improves model performance. For example, DisCo-Diff achieves state-of-the-art FID scores on class-conditioned ImageNet-64/128 datasets with ODE sampler.
|
http://arxiv.org/pdf/2407.03300v1
|
[
"Yilun Xu",
"Gabriele Corso",
"Tommi Jaakkola",
"Arash Vahdat",
"Karsten Kreis"
] |
2024-07-03T17:42:46Z
|
2024-07-03T17:42:46Z
|
2310.05988
|
Dual Latent State Learning: Exploiting Regional Network Similarities for
QoS Prediction
|
Individual objects, whether users or services, within a specific region often exhibit similar network states due to their shared origin from the same city or autonomous system (AS). Despite this regional network similarity, many existing techniques overlook its potential, resulting in subpar performance arising from challenges such as data sparsity and label imbalance. In this paper, we introduce the regional-based dual latent state learning network(R2SL), a novel deep learning framework designed to overcome the pitfalls of traditional individual object-based prediction techniques in Quality of Service (QoS) prediction. Unlike its predecessors, R2SL captures the nuances of regional network behavior by deriving two distinct regional network latent states: the city-network latent state and the AS-network latent state. These states are constructed utilizing aggregated data from common regions rather than individual object data. Furthermore, R2SL adopts an enhanced Huber loss function that adjusts its linear loss component, providing a remedy for prevalent label imbalance issues. To cap off the prediction process, a multi-scale perception network is leveraged to interpret the integrated feature map, a fusion of regional network latent features and other pertinent information, ultimately accomplishing the QoS prediction. Through rigorous testing on real-world QoS datasets, R2SL demonstrates superior performance compared to prevailing state-of-the-art methods. Our R2SL approach ushers in an innovative avenue for precise QoS predictions by fully harnessing the regional network similarities inherent in objects.
|
http://arxiv.org/pdf/2310.05988v2
|
[
"Ziliang Wang",
"Xiaohong Zhang",
"Kechi Zhang",
"Ze Shi Li",
"Meng Yan"
] |
2024-07-03T17:42:25Z
|
2023-10-07T19:35:07Z
|
2406.16008
|
Found in the Middle: Calibrating Positional Attention Bias Improves Long
Context Utilization
|
Large language models (LLMs), even when specifically trained to process long input contexts, struggle to capture relevant information located in the middle of their input. This phenomenon has been known as the lost-in-the-middle problem. In this work, we make three contributions. First, we set out to understand the factors that cause this phenomenon. In doing so, we establish a connection between lost-in-the-middle to LLMs' intrinsic attention bias: LLMs exhibit a U-shaped attention bias where the tokens at the beginning and at the end of its input receive higher attention, regardless of their relevance. Second, we mitigate this positional bias through a calibration mechanism, found-in-the-middle, that allows the model to attend to contexts faithfully according to their relevance, even though when they are in the middle. Third, we show found-in-the-middle not only achieves better performance in locating relevant information within a long context, but also eventually leads to improved retrieval-augmented generation (RAG) performance across various tasks, outperforming existing methods by up to 15 percentage points. These findings open up future directions in understanding LLM attention bias and its potential consequences.
|
http://arxiv.org/pdf/2406.16008v2
|
[
"Cheng-Yu Hsieh",
"Yung-Sung Chuang",
"Chun-Liang Li",
"Zifeng Wang",
"Long T. Le",
"Abhishek Kumar",
"James Glass",
"Alexander Ratner",
"Chen-Yu Lee",
"Ranjay Krishna",
"Tomas Pfister"
] |
2024-07-03T17:40:00Z
|
2024-06-23T04:35:42Z
|
2303.15975
|
Large-scale Pre-trained Models are Surprisingly Strong in Incremental
Novel Class Discovery
|
Discovering novel concepts in unlabelled datasets and in a continuous manner is an important desideratum of lifelong learners. In the literature such problems have been partially addressed under very restricted settings, where novel classes are learned by jointly accessing a related labelled set (e.g., NCD) or by leveraging only a supervisedly pre-trained model (e.g., class-iNCD). In this work we challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and truly unsupervisedly, without needing any related labelled set. In detail, we propose to exploit the richer priors from strong self-supervised pre-trained models (PTM). To this end, we propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios. We conduct extensive empirical evaluation on a multitude of benchmarks and show the effectiveness of our proposed baselines when compared with sophisticated state-of-the-art methods. The code is open source.
|
http://arxiv.org/pdf/2303.15975v3
|
[
"Mingxuan Liu",
"Subhankar Roy",
"Zhun Zhong",
"Nicu Sebe",
"Elisa Ricci"
] |
2024-07-03T17:36:19Z
|
2023-03-28T13:47:16Z
|
2405.09062
|
Naturalistic Music Decoding from EEG Data via Latent Diffusion Models
|
In this article, we explore the potential of using latent diffusion models, a family of powerful generative models, for the task of reconstructing naturalistic music from electroencephalogram (EEG) recordings. Unlike simpler music with limited timbres, such as MIDI-generated tunes or monophonic pieces, the focus here is on intricate music featuring a diverse array of instruments, voices, and effects, rich in harmonics and timbre. This study represents an initial foray into achieving general music reconstruction of high-quality using non-invasive EEG data, employing an end-to-end training approach directly on raw data without the need for manual pre-processing and channel selection. We train our models on the public NMED-T dataset and perform quantitative evaluation proposing neural embedding-based metrics. We additionally perform song classification based on the generated tracks. Our work contributes to the ongoing research in neural decoding and brain-computer interfaces, offering insights into the feasibility of using EEG data for complex auditory information reconstruction.
|
http://arxiv.org/pdf/2405.09062v4
|
[
"Emilian Postolache",
"Natalia Polouliakh",
"Hiroaki Kitano",
"Akima Connelly",
"Emanuele Rodolà",
"Luca Cosmo",
"Taketo Akama"
] |
2024-07-03T17:33:58Z
|
2024-05-15T03:26:01Z
|
2407.03294
|
Vertex Exchange Method for a Class of Quadratic Programming Problems
|
A vertex exchange method is proposed for solving the strongly convex quadratic program subject to the generalized simplex constraint. We conduct rigorous convergence analysis for the proposed algorithm and demonstrate its essential roles in solving some important classes of constrained convex optimization. To get a feasible initial point to execute the algorithm, we also present and analyze a highly efficient semismooth Newton method for computing the projection onto the generalized simplex. The excellent practical performance of the proposed algorithms is demonstrated by a set of extensive numerical experiments. Our theoretical and numerical results further motivate the potential applications of the considered model and the proposed algorithms.
|
http://arxiv.org/pdf/2407.03294v1
|
[
"Ling Liang",
"Kim-Chuan Toh",
"Haizhao Yang"
] |
2024-07-03T17:28:17Z
|
2024-07-03T17:28:17Z
|
2407.03289
|
Correlated Privacy Mechanisms for Differentially Private Distributed
Mean Estimation
|
Differentially private distributed mean estimation (DP-DME) is a fundamental building block in privacy-preserving federated learning, where a central server estimates the mean of $d$-dimensional vectors held by $n$ users while ensuring $(epsilon,delta)$-DP. Local differential privacy (LDP) and distributed DP with secure aggregation (SecAgg) are the most common notions of DP used in DP-DME settings with an untrusted server. LDP provides strong resilience to dropouts, colluding users, and malicious server attacks, but suffers from poor utility. In contrast, SecAgg-based DP-DME achieves an $O(n)$ utility gain over LDP in DME, but requires increased communication and computation overheads and complex multi-round protocols to handle dropouts and malicious attacks. In this work, we propose CorDP-DME, a novel DP-DME mechanism that spans the gap between DME with LDP and distributed DP, offering a favorable balance between utility and resilience to dropout and collusion. CorDP-DME is based on correlated Gaussian noise, ensuring DP without the perfect conditional privacy guarantees of SecAgg-based approaches. We provide an information-theoretic analysis of CorDP-DME, and derive theoretical guarantees for utility under any given privacy parameters and dropout/colluding user thresholds. Our results demonstrate that (anti) correlated Gaussian DP mechanisms can significantly improve utility in mean estimation tasks compared to LDP -- even in adversarial settings -- while maintaining better resilience to dropouts and attacks compared to distributed DP.
|
http://arxiv.org/pdf/2407.03289v1
|
[
"Sajani Vithana",
"Viveck R. Cadambe",
"Flavio P. Calmon",
"Haewon Jeong"
] |
2024-07-03T17:22:33Z
|
2024-07-03T17:22:33Z
|
2407.06211
|
Synthetic data: How could it be used for infectious disease research?
|
Over the last three to five years, it has become possible to generate machine learning synthetic data for healthcare-related uses. However, concerns have been raised about potential negative factors associated with the possibilities of artificial dataset generation. These include the potential misuse of generative artificial intelligence (AI) in fields such as cybercrime, the use of deepfakes and fake news to deceive or manipulate, and displacement of human jobs across various market sectors. Here, we consider both current and future positive advances and possibilities with synthetic datasets. Synthetic data offers significant benefits, particularly in data privacy, research, in balancing datasets and reducing bias in machine learning models. Generative AI is an artificial intelligence genre capable of creating text, images, video or other data using generative models. The recent explosion of interest in GenAI was heralded by the invention and speedy move to use of large language models (LLM). These computational models are able to achieve general-purpose language generation and other natural language processing tasks and are based on transformer architectures, which made an evolutionary leap from previous neural network architectures. Fuelled by the advent of improved GenAI techniques and wide scale usage, this is surely the time to consider how synthetic data can be used to advance infectious disease research. In this commentary we aim to create an overview of the current and future position of synthetic data in infectious disease research.
|
http://arxiv.org/pdf/2407.06211v1
|
[
"Styliani-Christina Fragkouli",
"Dhwani Solanki",
"Leyla J Castro",
"Fotis E Psomopoulos",
"Núria Queralt-Rosinach",
"Davide Cirillo",
"Lisa C Crossman"
] |
2024-07-03T17:13:04Z
|
2024-07-03T17:13:04Z
|
2407.03266
|
Do Quantum Neural Networks have Simplicity Bias?
|
One hypothesis for the success of deep neural networks (DNNs) is that they are highly expressive, which enables them to be applied to many problems, and they have a strong inductive bias towards solutions that are simple, known as simplicity bias, which allows them to generalise well on unseen data because most real-world data is structured (i.e. simple). In this work, we explore the inductive bias and expressivity of quantum neural networks (QNNs), which gives us a way to compare their performance to those of DNNs. Our results show that it is possible to have simplicity bias with certain QNNs, but we prove that this type of QNN limits the expressivity of the QNN. We also show that it is possible to have QNNs with high expressivity, but they either have no inductive bias or a poor inductive bias and result in a worse generalisation performance compared to DNNs. We demonstrate that an artificial (restricted) inductive bias can be produced by intentionally restricting the expressivity of a QNN. Our results suggest a bias-expressivity tradeoff. Our conclusion is that the QNNs we studied can not generally offer an advantage over DNNs, because these QNNs either have a poor inductive bias or poor expressivity compared to DNNs.
|
http://arxiv.org/pdf/2407.03266v1
|
[
"Jessica Pointing"
] |
2024-07-03T16:56:08Z
|
2024-07-03T16:56:08Z
|
2406.08740
|
An AI Architecture with the Capability to Explain Recognition Results
|
Explainability is needed to establish confidence in machine learning results. Some explainable methods take a post hoc approach to explain the weights of machine learning models, others highlight areas of the input contributing to decisions. These methods do not adequately explain decisions, in plain terms. Explainable property-based systems have been shown to provide explanations in plain terms, however, they have not performed as well as leading unexplainable machine learning methods. This research focuses on the importance of metrics to explainability and contributes two methods yielding performance gains. The first method introduces a combination of explainable and unexplainable flows, proposing a metric to characterize explainability of a decision. The second method compares classic metrics for estimating the effectiveness of neural networks in the system, posing a new metric as the leading performer. Results from the new methods and examples from handwritten datasets are presented.
|
http://arxiv.org/abs/2406.08740v2
|
[
"Paul Whitten",
"Francis Wolff",
"Chris Papachristou"
] |
2024-07-03T16:54:44Z
|
2024-06-13T02:00:13Z
|
2407.03262
|
Nearly Linear Sparsification of $\ell_p$ Subspace Approximation
|
The $ell_p$ subspace approximation problem is an NP-hard low rank approximation problem that generalizes the median hyperplane problem ($p = 1$), principal component analysis ($p = 2$), and the center hyperplane problem ($p = infty$). A popular approach to cope with the NP-hardness of this problem is to compute a strong coreset, which is a small weighted subset of the input points which simultaneously approximates the cost of every $k$-dimensional subspace, typically to $(1+varepsilon)$ relative error for a small constant $varepsilon$. We obtain the first algorithm for constructing a strong coreset for $ell_p$ subspace approximation with a nearly optimal dependence on the rank parameter $k$, obtaining a nearly linear bound of $tilde O(k)mathrm{poly}(varepsilon^{-1})$ for $p<2$ and $tilde O(k^{p/2})mathrm{poly}(varepsilon^{-1})$ for $p>2$. Prior constructions either achieved a similar size bound but produced a coreset with a modification of the original points [SW18, FKW21], or produced a coreset of the original points but lost $mathrm{poly}(k)$ factors in the coreset size [HV20, WY23]. Our techniques also lead to the first nearly optimal online strong coresets for $ell_p$ subspace approximation with similar bounds as the offline setting, resolving a problem of [WY23]. All prior approaches lose $mathrm{poly}(k)$ factors in this setting, even when allowed to modify the original points.
|
http://arxiv.org/pdf/2407.03262v1
|
[
"David P. Woodruff",
"Taisuke Yasuda"
] |
2024-07-03T16:49:28Z
|
2024-07-03T16:49:28Z
|
2407.03261
|
Magnetic Hysteresis Modeling with Neural Operators
|
Hysteresis modeling is crucial to comprehend the behavior of magnetic devices, facilitating optimal designs. Hitherto, deep learning-based methods employed to model hysteresis, face challenges in generalizing to novel input magnetic fields. This paper addresses the generalization challenge by proposing neural operators for modeling constitutive laws that exhibit magnetic hysteresis by learning a mapping between magnetic fields. In particular, two prominent neural operators -- deep operator network and Fourier neural operator -- are employed to predict novel first-order reversal curves and minor loops, where novel means they are not used to train the model. In addition, a rate-independent Fourier neural operator is proposed to predict material responses at sampling rates different from those used during training to incorporate the rate-independent characteristics of magnetic hysteresis. The presented numerical experiments demonstrate that neural operators efficiently model magnetic hysteresis, outperforming the traditional neural recurrent methods on various metrics and generalizing to novel magnetic fields. The findings emphasize the advantages of using neural operators for modeling hysteresis under varying magnetic conditions, underscoring their importance in characterizing magnetic material based devices.
|
http://arxiv.org/pdf/2407.03261v1
|
[
"Abhishek Chandra",
"Bram Daniels",
"Mitrofan Curti",
"Koen Tiels",
"Elena A. Lomonova"
] |
2024-07-03T16:45:45Z
|
2024-07-03T16:45:45Z
|
2407.06209
|
Self-supervised Pretraining for Partial Differential Equations
|
In this work, we describe a novel approach to building a neural PDE solver leveraging recent advances in transformer based neural network architectures. Our model can provide solutions for different values of PDE parameters without any need for retraining the network. The training is carried out in a self-supervised manner, similar to pretraining approaches applied in language and vision tasks. We hypothesize that the model is in effect learning a family of operators (for multiple parameters) mapping the initial condition to the solution of the PDE at any future time step t. We compare this approach with the Fourier Neural Operator (FNO), and demonstrate that it can generalize over the space of PDE parameters, despite having a higher prediction error for individual parameter values compared to the FNO. We show that performance on a specific parameter can be improved by finetuning the model with very small amounts of data. We also demonstrate that the model scales with data as well as model size.
|
http://arxiv.org/pdf/2407.06209v1
|
[
"Varun Madhavan",
"Amal S Sebastian",
"Bharath Ramsundar",
"Venkatasubramanian Viswanathan"
] |
2024-07-03T16:39:32Z
|
2024-07-03T16:39:32Z
|
2407.03257
|
Modern Neighborhood Components Analysis: A Deep Tabular Baseline Two
Decades Later
|
The growing success of deep learning in various domains has prompted investigations into its application to tabular data, where deep models have shown promising results compared to traditional tree-based methods. In this paper, we revisit Neighborhood Component Analysis (NCA), a classic tabular prediction method introduced in 2004, designed to learn a linear projection that captures semantic similarities between instances. We find that minor modifications, such as adjustments to the learning objectives and the integration of deep learning architectures, significantly enhance NCA's performance, enabling it to surpass most modern deep tabular models. Additionally, we introduce a stochastic neighbor sampling strategy that improves both the efficiency and predictive accuracy of our proposed ModernNCA -- sampling only a subset of neighbors during training, while utilizing the entire neighborhood during inference. Extensive experiments demonstrate that our ModernNCA achieves state-of-the-art results in both classification and regression tasks across various tabular datasets, outperforming both tree-based and other deep tabular models, while also reducing training time and model size.
|
http://arxiv.org/pdf/2407.03257v1
|
[
"Han-Jia Ye",
"Huai-Hong Yin",
"De-Chuan Zhan"
] |
2024-07-03T16:38:57Z
|
2024-07-03T16:38:57Z
|
2406.16793
|
Adam-mini: Use Fewer Learning Rates To Gain More
|
We propose Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini reduces memory by cutting down the learning rate resources in Adam (i.e., $1/sqrt{v}$). We find that $geq$ 90% of these learning rates in $v$ could be harmlessly removed if we (1) carefully partition the parameters into blocks following our proposed principle on Hessian structure; (2) assign a single but good learning rate to each parameter block. We further find that, for each of these parameter blocks, there exists a single high-quality learning rate that can outperform Adam, provided that sufficient resources are available to search it out. We then provide one cost-effective way to find good learning rates and propose Adam-mini. Empirically, we verify that Adam-mini performs on par or better than AdamW on various language models sized from 125M to 7B for pre-training, supervised fine-tuning, and RLHF. The reduced memory footprint of Adam-mini also alleviates communication overheads among GPUs and CPUs, thereby increasing throughput. For instance, Adam-mini achieves 49.6% higher throughput than AdamW when pre-training Llama2-7B on $2times$ A800-80GB GPUs, which saves 33% wall-clock time for pre-training.
|
http://arxiv.org/pdf/2406.16793v5
|
[
"Yushun Zhang",
"Congliang Chen",
"Ziniu Li",
"Tian Ding",
"Chenwei Wu",
"Yinyu Ye",
"Zhi-Quan Luo",
"Ruoyu Sun"
] |
2024-07-03T16:38:17Z
|
2024-06-24T16:56:41Z
|
2406.17761
|
CaLMQA: Exploring culturally specific long-form question answering
across 23 languages
|
Large language models (LLMs) are used for long-form question answering (LFQA), which requires them to generate paragraph-length answers to complex questions. While LFQA has been well-studied in English, this research has not been extended to other languages. To bridge this gap, we introduce CaLMQA, a collection of 1.5K complex culturally specific questions spanning 23 languages and 51 culturally agnostic questions translated from English into 22 other languages. We define culturally specific questions as those uniquely or more likely to be asked by people from cultures associated with the question's language. We collect naturally-occurring questions from community web forums and hire native speakers to write questions to cover under-resourced, rarely-studied languages such as Fijian and Kirundi. Our dataset contains diverse, complex questions that reflect cultural topics (e.g. traditions, laws, news) and the language usage of native speakers. We automatically evaluate a suite of open- and closed-source models on CaLMQA by detecting incorrect language and token repetitions in answers, and observe that the quality of LLM-generated answers degrades significantly for some low-resource languages. Lastly, we perform human evaluation on a subset of models and languages. Manual evaluation reveals that model performance is significantly worse for culturally specific questions than for culturally agnostic questions. Our findings highlight the need for further research in non-English LFQA and provide an evaluation framework.
|
http://arxiv.org/pdf/2406.17761v2
|
[
"Shane Arora",
"Marzena Karpinska",
"Hung-Ting Chen",
"Ipsita Bhattacharjee",
"Mohit Iyyer",
"Eunsol Choi"
] |
2024-07-03T16:33:55Z
|
2024-06-25T17:45:26Z
|
2407.00741
|
Diffusion Models for Offline Multi-agent Reinforcement Learning with
Safety Constraints
|
In recent advancements in Multi-agent Reinforcement Learning (MARL), its application has extended to various safety-critical scenarios. However, most methods focus on online learning, which presents substantial risks when deployed in real-world settings. Addressing this challenge, we introduce an innovative framework integrating diffusion models within the MARL paradigm. This approach notably enhances the safety of actions taken by multiple agents through risk mitigation while modeling coordinated action. Our framework is grounded in the Centralized Training with Decentralized Execution (CTDE) architecture, augmented by a Diffusion Model for prediction trajectory generation. Additionally, we incorporate a specialized algorithm to further ensure operational safety. We evaluate our model against baselines on the DSRL benchmark. Experiment results demonstrate that our model not only adheres to stringent safety constraints but also achieves superior performance compared to existing methodologies. This underscores the potential of our approach in advancing the safety and efficacy of MARL in real-world applications.
|
http://arxiv.org/pdf/2407.00741v2
|
[
"Jianuo Huang"
] |
2024-07-03T16:22:15Z
|
2024-06-30T16:05:31Z
|
2407.01601
|
Unveiling and Controlling Anomalous Attention Distribution in
Transformers
|
With the advent of large models based on the Transformer architecture, researchers have observed an anomalous phenomenon in the Attention mechanism--there is a very high attention on the first element, which is prevalent across Transformer-based models. It is crucial to understand it for the development of techniques focusing on attention distribution, such as Key-Value (KV) Cache compression and infinite extrapolation; however, the latent cause leaves to be unknown. In this paper, we analyze such a phenomenon from the perspective of waiver phenomenon, which involves reducing the internal values of certain elements in the sequence, allowing them to absorb excess attention without affecting their contribution to information. In specific models, due to differences in positional encoding and attention patterns, we have found that the selection of waiver elements by the model can be categorized into two methods: positional-encoding-based and feature-distribution-within-elements-based.
|
http://arxiv.org/pdf/2407.01601v2
|
[
"Ruiqing Yan",
"Xingbo Du",
"Haoyu Deng",
"Linghan Zheng",
"Qiuzhuang Sun",
"Jifang Hu",
"Yuhang Shao",
"Penghao Jiang",
"Jinrong Jiang",
"Lian Zhao"
] |
2024-07-03T16:19:59Z
|
2024-06-26T11:53:35Z
|
2407.03241
|
Terrain Classification Enhanced with Uncertainty for Space Exploration
Robots from Proprioceptive Data
|
Terrain Classification is an essential task in space exploration, where unpredictable environments are difficult to observe using only exteroceptive sensors such as vision. Implementing Neural Network classifiers can have high performance but can be deemed untrustworthy as they lack transparency, which makes them unreliable for taking high-stakes decisions during mission planning. We address this by proposing Neural Networks with Uncertainty Quantification in Terrain Classification. We enable our Neural Networks with Monte Carlo Dropout, DropConnect, and Flipout in time series-capable architectures using only proprioceptive data as input. We use Bayesian Optimization with Hyperband for efficient hyperparameter optimization to find optimal models for trustworthy terrain classification.
|
http://arxiv.org/pdf/2407.03241v1
|
[
"Mariela De Lucas Álvarez",
"Jichen Guo",
"Raul Domínguez",
"Matias Valdenegro-Toro"
] |
2024-07-03T16:10:50Z
|
2024-07-03T16:10:50Z
|
2312.16762
|
Backstepping Neural Operators for $2\times 2$ Hyperbolic PDEs
|
Deep neural network approximation of nonlinear operators, commonly referred to as DeepONet, has proven capable of approximating PDE backstepping designs in which a single Goursat-form PDE governs a single feedback gain function. In boundary control of coupled PDEs, coupled Goursat-form PDEs govern two or more gain kernels-a PDE structure unaddressed thus far with DeepONet. In this paper, we explore the subject of approximating systems of gain kernel PDEs for hyperbolic PDE plants by considering a simple counter-convecting $2times 2$ coupled system in whose control a $2times 2$ kernel PDE system in Goursat form arises. Engineering applications include oil drilling, the Saint-Venant model of shallow water waves, and the Aw-Rascle-Zhang model of stop-and-go instability in congested traffic flow. We establish the continuity of the mapping from a total of five plant PDE functional coefficients to the kernel PDE solutions, prove the existence of an arbitrarily close DeepONet approximation to the kernel PDEs, and ensure that the DeepONet-approximated gains guarantee stabilization when replacing the exact backstepping gain kernels. Taking into account anti-collocated boundary actuation and sensing, our $L^2$-Globally-exponentially stabilizing (GES) approximate gain kernel-based output feedback design implies the deep learning of both the controller's and the observer's gains. Moreover, the encoding of the output-feedback law into DeepONet ensures semi-global practical exponential stability (SG-PES). The DeepONet operator speeds up the computation of the controller gains by multiple orders of magnitude. Its theoretically proven stabilizing capability is demonstrated through simulations.
|
http://arxiv.org/pdf/2312.16762v3
|
[
"Shanshan Wang",
"Mamadou Diagne",
"Miroslav Krstić"
] |
2024-07-03T16:04:07Z
|
2023-12-28T00:49:41Z
|
2407.03232
|
Single Character Perturbations Break LLM Alignment
|
When LLMs are deployed in sensitive, human-facing settings, it is crucial that they do not output unsafe, biased, or privacy-violating outputs. For this reason, models are both trained and instructed to refuse to answer unsafe prompts such as "Tell me how to build a bomb." We find that, despite these safeguards, it is possible to break model defenses simply by appending a space to the end of a model's input. In a study of eight open-source models, we demonstrate that this acts as a strong enough attack to cause the majority of models to generate harmful outputs with very high success rates. We examine the causes of this behavior, finding that the contexts in which single spaces occur in tokenized training data encourage models to generate lists when prompted, overriding training signals to refuse to answer unsafe requests. Our findings underscore the fragile state of current model alignment and promote the importance of developing more robust alignment methods. Code and data will be available at https://github.com/hannah-aught/space_attack.
|
http://arxiv.org/pdf/2407.03232v1
|
[
"Leon Lin",
"Hannah Brown",
"Kenji Kawaguchi",
"Michael Shieh"
] |
2024-07-03T16:03:10Z
|
2024-07-03T16:03:10Z
|
2307.10167
|
VITS : Variational Inference Thompson Sampling for contextual bandits
|
In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits. At each round, traditional TS requires samples from the current posterior distribution, which is usually intractable. To circumvent this issue, approximate inference techniques can be used and provide samples with distribution close to the posteriors. However, current approximate techniques yield to either poor estimation (Laplace approximation) or can be computationally expensive (MCMC methods, Ensemble sampling...). In this paper, we propose a new algorithm, Varational Inference Thompson sampling VITS, based on Gaussian Variational Inference. This scheme provides powerful posterior approximations which are easy to sample from, and is computationally efficient, making it an ideal choice for TS. In addition, we show that VITS achieves a sub-linear regret bound of the same order in the dimension and number of round as traditional TS for linear contextual bandit. Finally, we demonstrate experimentally the effectiveness of VITS on both synthetic and real world datasets.
|
http://arxiv.org/pdf/2307.10167v3
|
[
"Pierre Clavier",
"Tom Huix",
"Alain Durmus"
] |
2024-07-03T15:53:13Z
|
2023-07-19T17:53:22Z
|
2305.01397
|
Are demographically invariant models and representations in medical
imaging fair?
|
Medical imaging models have been shown to encode information about patient demographics such as age, race, and sex in their latent representation, raising concerns about their potential for discrimination. Here, we ask whether requiring models not to encode demographic attributes is desirable. We point out that marginal and class-conditional representation invariance imply the standard group fairness notions of demographic parity and equalized odds, respectively. In addition, however, they require matching the risk distributions, thus potentially equalizing away important group differences. Enforcing the traditional fairness notions directly instead does not entail these strong constraints. Moreover, representationally invariant models may still take demographic attributes into account for deriving predictions, implying unequal treatment - in fact, achieving representation invariance may require doing so. In theory, this can be prevented using counterfactual notions of (individual) fairness or invariance. We caution, however, that properly defining medical image counterfactuals with respect to demographic attributes is fraught with challenges. Finally, we posit that encoding demographic attributes may even be advantageous if it enables learning a task-specific encoding of demographic features that does not rely on social constructs such as 'race' and 'gender.' We conclude that demographically invariant representations are neither necessary nor sufficient for fairness in medical imaging. Models may need to encode demographic attributes, lending further urgency to calls for comprehensive model fairness assessments in terms of predictive performance across diverse patient groups.
|
http://arxiv.org/pdf/2305.01397v3
|
[
"Eike Petersen",
"Enzo Ferrante",
"Melanie Ganz",
"Aasa Feragen"
] |
2024-07-03T15:52:58Z
|
2023-05-02T13:16:04Z
|
2211.15724
|
Malign Overfitting: Interpolation Can Provably Preclude Invariance
|
Learned classifiers should often possess certain invariance properties meant to encourage fairness, robustness, or out-of-distribution generalization. However, multiple recent works empirically demonstrate that common invariance-inducing regularizers are ineffective in the over-parameterized regime, in which classifiers perfectly fit (i.e. interpolate) the training data. This suggests that the phenomenon of "benign overfitting", in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable. In this work we provide a theoretical justification for these observations. We prove that -- even in the simplest of settings -- any interpolating learning rule (with arbitrarily small margin) will not satisfy these invariance properties. We then propose and analyze an algorithm that -- in the same setting -- successfully learns a non-interpolating classifier that is provably invariant. We validate our theoretical observations on simulated data and the Waterbirds dataset.
|
http://arxiv.org/pdf/2211.15724v2
|
[
"Yoav Wald",
"Gal Yona",
"Uri Shalit",
"Yair Carmon"
] |
2024-07-03T15:40:39Z
|
2022-11-28T19:17:31Z
|
2407.03211
|
How Does Quantization Affect Multilingual LLMs?
|
Quantization techniques are widely used to improve inference speed and deployment of large language models. While a wide body of work examines the impact of quantized LLMs on English tasks, none have examined the effect of quantization across languages. We conduct a thorough analysis of quantized multilingual LLMs, focusing on their performance across languages and at varying scales. We use automatic benchmarks, LLM-as-a-Judge methods, and human evaluation, finding that (1) harmful effects of quantization are apparent in human evaluation, and automatic metrics severely underestimate the detriment: a 1.7% average drop in Japanese across automatic tasks corresponds to a 16.0% drop reported by human evaluators on realistic prompts; (2) languages are disparately affected by quantization, with non-Latin script languages impacted worst; and (3) challenging tasks such as mathematical reasoning degrade fastest. As the ability to serve low-compute models is critical for wide global adoption of NLP technologies, our results urge consideration of multilingual performance as a key evaluation criterion for efficient models.
|
http://arxiv.org/pdf/2407.03211v1
|
[
"Kelly Marchisio",
"Saurabh Dash",
"Hongyu Chen",
"Dennis Aumiller",
"Ahmet Üstün",
"Sara Hooker",
"Sebastian Ruder"
] |
2024-07-03T15:39:40Z
|
2024-07-03T15:39:40Z
|
2407.03210
|
Combining AI Control Systems and Human Decision Support via Robustness
and Criticality
|
AI-enabled capabilities are reaching the requisite level of maturity to be deployed in the real world, yet do not always make correct or safe decisions. One way of addressing these concerns is to leverage AI control systems alongside and in support of human decisions, relying on the AI control system in safe situations while calling on a human co-decider for critical situations. We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks, including MuZero. Multiple improvements to the base agent architecture are proposed. We demonstrate how this technology has two applications: for intelligent decision tools and to enhance training / learning frameworks. In a decision support context, adversarial explanations help a user make the correct decision by highlighting those contextual factors that would need to change for a different AI-recommended decision. As another benefit of adversarial explanations, we show that the learned AI control system demonstrates robustness against adversarial tampering. Additionally, we supplement AE by introducing strategically similar autoencoders (SSAs) to help users identify and understand all salient factors being considered by the AI system. In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction. Finally, to identify when AI decisions would most benefit from human oversight, we tie this combined system to our prior art on statistically verified analyses of the criticality of decisions at any point in time.
|
http://arxiv.org/abs/2407.03210v1
|
[
"Walt Woods",
"Alexander Grushin",
"Simon Khan",
"Alvaro Velasquez"
] |
2024-07-03T15:38:57Z
|
2024-07-03T15:38:57Z
|
2407.00201
|
Deconvolving Complex Neuronal Networks into Interpretable Task-Specific
Connectomes
|
Task-specific functional MRI (fMRI) images provide excellent modalities for studying the neuronal basis of cognitive processes. We use fMRI data to formulate and solve the problem of deconvolving task-specific aggregate neuronal networks into a set of basic building blocks called canonical networks, to use these networks for functional characterization, and to characterize the physiological basis of these responses by mapping them to regions of the brain. Our results show excellent task-specificity of canonical networks, i.e., the expression of a small number of canonical networks can be used to accurately predict tasks; generalizability across cohorts, i.e., canonical networks are conserved across diverse populations, studies, and acquisition protocols; and that canonical networks have strong anatomical and physiological basis. From a methods perspective, the problem of identifying these canonical networks poses challenges rooted in the high dimensionality, small sample size, acquisition variability, and noise. Our deconvolution technique is based on non-negative matrix factorization (NMF) that identifies canonical networks as factors of a suitably constructed matrix. We demonstrate that our method scales to large datasets, yields stable and accurate factors, and is robust to noise.
|
http://arxiv.org/pdf/2407.00201v2
|
[
"Yifan Wang",
"Vikram Ravindra",
"Ananth Grama"
] |
2024-07-03T15:37:54Z
|
2024-06-28T19:13:48Z
|
2407.03392
|
M5: A Whole Genome Bacterial Encoder at Single Nucleotide Resolution
|
A linear attention mechanism is described to extend the context length of an encoder only transformer, called M5 in this report, to a multi-million single nucleotide resolution foundation model pretrained on bacterial whole genomes. The linear attention mechanism used approximates a full quadratic attention mechanism tightly and has a simple and lightweight implementation for the use case when the key-query embedding dimensionality is low. The M5-small model is entirely trained and tested on one A100 GPU with 40gb of memory up to 196K nucleotides during training and 2M nucleotides during testing. We test the performance of the M5-small model and record notable improvements in performance as whole genome bacterial sequence lengths are increased as well as demonstrating the stability of the full multi-head attention approximation used as sequence length is increased.
|
http://arxiv.org/pdf/2407.03392v1
|
[
"Agust Egilsson"
] |
2024-07-03T15:30:44Z
|
2024-07-03T15:30:44Z
|
2407.03195
|
Incremental Gauss--Newton Methods with Superlinear Convergence Rates
|
This paper addresses the challenge of solving large-scale nonlinear equations with H"older continuous Jacobians. We introduce a novel Incremental Gauss--Newton (IGN) method within explicit superlinear convergence rate, which outperforms existing methods that only achieve linear convergence rate. In particular, we formulate our problem by the nonlinear least squares with finite-sum structure, and our method incrementally iterates with the information of one component in each round. We also provide a mini-batch extension to our IGN method that obtains an even faster superlinear convergence rate. Furthermore, we conduct numerical experiments to show the advantages of the proposed methods.
|
http://arxiv.org/pdf/2407.03195v1
|
[
"Zhiling Zhou",
"Zhuanghua Liu",
"Chengchang Liu",
"Luo Luo"
] |
2024-07-03T15:26:34Z
|
2024-07-03T15:26:34Z
|
2407.03194
|
Prediction Instability in Machine Learning Ensembles
|
In machine learning ensembles predictions from multiple models are aggregated. Despite widespread use and strong performance of ensembles in applied problems little is known about the mathematical properties of aggregating models and associated consequences for safe, explainable use of such models. In this paper we prove a theorem that shows that any ensemble will exhibit at least one of the following forms of prediction instability. It will either ignore agreement among all underlying models, change its mind when none of the underlying models have done so, or be manipulable through inclusion or exclusion of options it would never actually predict. As a consequence, ensemble aggregation procedures will always need to balance the benefits of information use against the risk of these prediction instabilities. This analysis also sheds light on what specific forms of prediction instability to expect from particular ensemble algorithms; for example popular tree ensembles like random forest, or xgboost will violate basic, intuitive monotonicity and fairness properties.
|
http://arxiv.org/pdf/2407.03194v1
|
[
"Jeremy Kedziora"
] |
2024-07-03T15:26:02Z
|
2024-07-03T15:26:02Z
|
2310.10649
|
A Computational Framework for Solving Wasserstein Lagrangian Flows
|
The dynamical formulation of the optimal transport can be extended through various choices of the underlying geometry (kinetic energy), and the regularization of density paths (potential energy). These combinations yield different variational problems (Lagrangians), encompassing many variations of the optimal transport problem such as the Schr"odinger bridge, unbalanced optimal transport, and optimal transport with physical constraints, among others. In general, the optimal density path is unknown, and solving these variational problems can be computationally challenging. We propose a novel deep learning based framework approaching all of these problems from a unified perspective. Leveraging the dual formulation of the Lagrangians, our method does not require simulating or backpropagating through the trajectories of the learned dynamics, and does not need access to optimal couplings. We showcase the versatility of the proposed framework by outperforming previous approaches for the single-cell trajectory inference, where incorporating prior knowledge into the dynamics is crucial for correct predictions.
|
http://arxiv.org/pdf/2310.10649v3
|
[
"Kirill Neklyudov",
"Rob Brekelmans",
"Alexander Tong",
"Lazar Atanackovic",
"Qiang Liu",
"Alireza Makhzani"
] |
2024-07-03T15:23:42Z
|
2023-10-16T17:59:54Z
|
2407.01910
|
MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog
Generation
|
Large Language Models (LLMs) have recently shown promise in streamlining hardware design processes by encapsulating vast amounts of domain-specific data. In addition, they allow users to interact with the design processes through natural language instructions, thus making hardware design more accessible to developers. However, effectively leveraging LLMs in hardware design necessitates providing domain-specific data during inference (e.g., through in-context learning), fine-tuning, or pre-training. Unfortunately, existing publicly available hardware datasets are often limited in size, complexity, or detail, which hinders the effectiveness of LLMs in hardware design tasks. To address this issue, we first propose a set of criteria for creating high-quality hardware datasets that can effectively enhance LLM-assisted hardware design. Based on these criteria, we propose a Multi-Grained-Verilog (MG-Verilog) dataset, which encompasses descriptions at various levels of detail and corresponding code samples. To benefit the broader hardware design community, we have developed an open-source infrastructure that facilitates easy access, integration, and extension of the dataset to meet specific project needs. Furthermore, to fully exploit the potential of the MG-Verilog dataset, which varies in complexity and detail, we introduce a balanced fine-tuning scheme. This scheme serves as a unique use case to leverage the diverse levels of detail provided by the dataset. Extensive experiments demonstrate that the proposed dataset and fine-tuning scheme consistently improve the performance of LLMs in hardware design tasks.
|
http://arxiv.org/pdf/2407.01910v2
|
[
"Yongan Zhang",
"Zhongzhi Yu",
"Yonggan Fu",
"Cheng Wan",
"Yingyan Celine Lin"
] |
2024-07-03T15:15:20Z
|
2024-07-02T03:21:24Z
|
2407.03185
|
Multiple-Resolution Tokenization for Time Series Forecasting with an
Application to Pricing
|
We propose a transformer architecture for time series forecasting with a focus on time series tokenisation and apply it to a real-world prediction problem from the pricing domain. Our architecture aims to learn effective representations at many scales across all available data simultaneously. The model contains a number of novel modules: a differentiated form of time series patching which employs multiple resolutions, a multiple-resolution module for time-varying known variables, a mixer-based module for capturing cross-series information, and a novel output head with favourable scaling to account for the increased number of tokens. We present an application of this model to a real world prediction problem faced by the markdown team at a very large retailer. On the experiments conducted our model outperforms in-house models and the selected existing deep learning architectures.
|
http://arxiv.org/pdf/2407.03185v1
|
[
"Egon Peršak",
"Miguel F. Anjos",
"Sebastian Lautz",
"Aleksandar Kolev"
] |
2024-07-03T15:07:16Z
|
2024-07-03T15:07:16Z
|
2407.03179
|
Motion meets Attention: Video Motion Prompts
|
Videos contain rich spatio-temporal information. Traditional methods for extracting motion, used in tasks such as action recognition, often rely on visual contents rather than precise motion features. This phenomenon is referred to as 'blind motion extraction' behavior, which proves inefficient in capturing motions of interest due to a lack of motion-guided cues. Recently, attention mechanisms have enhanced many computer vision tasks by effectively highlighting salient visual areas. Inspired by this, we propose using a modified Sigmoid function with learnable slope and shift parameters as an attention mechanism to activate and modulate motion signals derived from frame differencing maps. This approach generates a sequence of attention maps that enhance the processing of motion-related video content. To ensure temporally continuity and smoothness of the attention maps, we apply pair-wise temporal attention variation regularization to remove unwanted motions (e.g., noise) while preserving important ones. We then perform Hadamard product between each pair of attention maps and the original video frames to highlight the evolving motions of interest over time. These highlighted motions, termed video motion prompts, are subsequently used as inputs to the model instead of the original video frames. We formalize this process as a motion prompt layer and incorporate the regularization term into the loss function to learn better motion prompts. This layer serves as an adapter between the model and the video data, bridging the gap between traditional 'blind motion extraction' and the extraction of relevant motions of interest.
|
http://arxiv.org/pdf/2407.03179v1
|
[
"Qixiang Chen",
"Lei Wang",
"Piotr Koniusz",
"Tom Gedeon"
] |
2024-07-03T14:59:46Z
|
2024-07-03T14:59:46Z
|
2407.03178
|
Relating CNN-Transformer Fusion Network for Change Detection
|
While deep learning, particularly convolutional neural networks (CNNs), has revolutionized remote sensing (RS) change detection (CD), existing approaches often miss crucial features due to neglecting global context and incomplete change learning. Additionally, transformer networks struggle with low-level details. RCTNet addresses these limitations by introducing textbf{(1)} an early fusion backbone to exploit both spatial and temporal features early on, textbf{(2)} a Cross-Stage Aggregation (CSA) module for enhanced temporal representation, textbf{(3)} a Multi-Scale Feature Fusion (MSF) module for enriched feature extraction in the decoder, and textbf{(4)} an Efficient Self-deciphering Attention (ESA) module utilizing transformers to capture global information and fine-grained details for accurate change detection. Extensive experiments demonstrate RCTNet's clear superiority over traditional RS image CD methods, showing significant improvement and an optimal balance between accuracy and computational cost.
|
http://arxiv.org/pdf/2407.03178v1
|
[
"Yuhao Gao",
"Gensheng Pei",
"Mengmeng Sheng",
"Zeren Sun",
"Tao Chen",
"Yazhou Yao"
] |
2024-07-03T14:58:40Z
|
2024-07-03T14:58:40Z
|
2404.11477
|
Discovering Nuclear Models from Symbolic Machine Learning
|
Numerous phenomenological nuclear models have been proposed to describe specific observables within different regions of the nuclear chart. However, developing a unified model that describes the complex behavior of all nuclei remains an open challenge. Here, we explore whether novel symbolic Machine Learning (ML) can rediscover traditional nuclear physics models or identify alternatives with improved simplicity, fidelity, and predictive power. To address this challenge, we developed a Multi-objective Iterated Symbolic Regression approach that handles symbolic regressions over multiple target observables, accounts for experimental uncertainties and is robust against high-dimensional problems. As a proof of principle, we applied this method to describe the nuclear binding energies and charge radii of light and medium mass nuclei. Our approach identified simple analytical relationships based on the number of protons and neutrons, providing interpretable models with precision comparable to state-of-the-art nuclear models. Additionally, we integrated this ML-discovered model with an existing complementary model to estimate the limits of nuclear stability. These results highlight the potential of symbolic ML to develop accurate nuclear models and guide our description of complex many-body problems.
|
http://arxiv.org/pdf/2404.11477v3
|
[
"Jose M. Munoz",
"Silviu M. Udrescu",
"Ronald F. Garcia Ruiz"
] |
2024-07-03T14:47:09Z
|
2024-04-17T15:32:58Z
|
2405.13022
|
LLMs can learn self-restraint through iterative self-reflection
|
In order to be deployed safely, Large Language Models (LLMs) must be capable of dynamically adapting their behavior based on their level of knowledge and uncertainty associated with specific topics. This adaptive behavior, which we refer to as self-restraint, is non-trivial to teach since it depends on the internal knowledge of an LLM. By default, LLMs are trained to maximize the next token likelihood, which does not teach the model to modulate its answer based on its level of uncertainty. In order to learn self-restraint, we devise a utility function that can encourage the model to produce responses only when it is confident in them. This utility function can be used to score generation of different length and abstention. To optimize this function, we introduce ReSearch, a process of "self-reflection" consisting of iterative self-prompting and self-evaluation. We use the ReSearch algorithm to generate synthetic data on which we finetune our models. Compared to their original versions, our resulting models generate fewer emph{hallucinations} overall at no additional inference cost, for both known and unknown topics, as the model learns to selectively restrain itself. In addition, our method elegantly incorporates the ability to abstain by augmenting the samples generated by the model during the search procedure with an answer expressing abstention.
|
http://arxiv.org/pdf/2405.13022v2
|
[
"Alexandre Piché",
"Aristides Milios",
"Dzmitry Bahdanau",
"Chris Pal"
] |
2024-07-03T14:46:52Z
|
2024-05-15T13:35:43Z
|
2407.03162
|
Bunny-VisionPro: Real-Time Bimanual Dexterous Teleoperation for
Imitation Learning
|
Teleoperation is a crucial tool for collecting human demonstrations, but controlling robots with bimanual dexterous hands remains a challenge. Existing teleoperation systems struggle to handle the complexity of coordinating two hands for intricate manipulations. We introduce Bunny-VisionPro, a real-time bimanual dexterous teleoperation system that leverages a VR headset. Unlike previous vision-based teleoperation systems, we design novel low-cost devices to provide haptic feedback to the operator, enhancing immersion. Our system prioritizes safety by incorporating collision and singularity avoidance while maintaining real-time performance through innovative designs. Bunny-VisionPro outperforms prior systems on a standard task suite, achieving higher success rates and reduced task completion times. Moreover, the high-quality teleoperation demonstrations improve downstream imitation learning performance, leading to better generalizability. Notably, Bunny-VisionPro enables imitation learning with challenging multi-stage, long-horizon dexterous manipulation tasks, which have rarely been addressed in previous work. Our system's ability to handle bimanual manipulations while prioritizing safety and real-time performance makes it a powerful tool for advancing dexterous manipulation and imitation learning.
|
http://arxiv.org/pdf/2407.03162v1
|
[
"Runyu Ding",
"Yuzhe Qin",
"Jiyue Zhu",
"Chengzhe Jia",
"Shiqi Yang",
"Ruihan Yang",
"Xiaojuan Qi",
"Xiaolong Wang"
] |
2024-07-03T14:35:35Z
|
2024-07-03T14:35:35Z
|
2407.03160
|
SOS! Soft Prompt Attack Against Open-Source Large Language Models
|
Open-source large language models (LLMs) have become increasingly popular among both the general public and industry, as they can be customized, fine-tuned, and freely used. However, some open-source LLMs require approval before usage, which has led to third parties publishing their own easily accessible versions. Similarly, third parties have been publishing fine-tuned or quantized variants of these LLMs. These versions are particularly appealing to users because of their ease of access and reduced computational resource demands. This trend has increased the risk of training time attacks, compromising the integrity and security of LLMs. In this work, we present a new training time attack, SOS, which is designed to be low in computational demand and does not require clean data or modification of the model weights, thereby maintaining the model's utility intact. The attack addresses security issues in various scenarios, including the backdoor attack, jailbreak attack, and prompt stealing attack. Our experimental findings demonstrate that the proposed attack is effective across all evaluated targets. Furthermore, we present the other side of our SOS technique, namely the copyright token -- a novel technique that enables users to mark their copyrighted content and prevent models from using it.
|
http://arxiv.org/pdf/2407.03160v1
|
[
"Ziqing Yang",
"Michael Backes",
"Yang Zhang",
"Ahmed Salem"
] |
2024-07-03T14:35:16Z
|
2024-07-03T14:35:16Z
|
2407.03157
|
Let the Code LLM Edit Itself When You Edit the Code
|
In this work, we investigate a typical scenario in code generation where a developer edits existing code in real time and requests a code assistant, e.g., a large language model, to re-predict the next token or next line on the fly. Naively, the LLM needs to re-encode the entire KV cache to provide an accurate prediction. However, this process is computationally expensive, especially when the sequence length is long. Simply encoding the edited subsequence and integrating it to the original KV cache meets the temporal confusion problem, leading to significantly worse performance. We address this efficiency and accuracy trade-off by introducing underline{textbf{Positional textbf{I}ntegrity textbf{E}ncoding} (PIE). Building upon the rotary positional encoding, PIE first removes the rotary matrices in the Key cache that introduce temporal confusion and then reapplies the correct rotary matrices. This process ensures that positional relationships between tokens are correct and requires only a single round of matrix multiplication. We validate the effectiveness of PIE through extensive experiments on the RepoBench-C-8k dataset, utilizing DeepSeek-Coder models with 1.3B, 6.7B, and 33B parameters. Our evaluation includes three real-world coding tasks: code insertion, code deletion, and multi-place code editing. Results demonstrate that PIE reduces computational overhead by over 85% compared to the standard full recomputation approach across all model sizes and tasks while well approximating the model performance.
|
http://arxiv.org/pdf/2407.03157v1
|
[
"Zhenyu He",
"Jun Zhang",
"Shengjie Luo",
"Jingjing Xu",
"Zhi Zhang",
"Di He"
] |
2024-07-03T14:34:03Z
|
2024-07-03T14:34:03Z
|
2407.03154
|
Reinforcement Learning for Sequence Design Leveraging Protein Language
Models
|
Protein sequence design, determined by amino acid sequences, are essential to protein engineering problems in drug discovery. Prior approaches have resorted to evolutionary strategies or Monte-Carlo methods for protein design, but often fail to exploit the structure of the combinatorial search space, to generalize to unseen sequences. In the context of discrete black box optimization over large search spaces, learning a mutation policy to generate novel sequences with reinforcement learning is appealing. Recent advances in protein language models (PLMs) trained on large corpora of protein sequences offer a potential solution to this problem by scoring proteins according to their biological plausibility (such as the TM-score). In this work, we propose to use PLMs as a reward function to generate new sequences. Yet the PLM can be computationally expensive to query due to its large size. To this end, we propose an alternative paradigm where optimization can be performed on scores from a smaller proxy model that is periodically finetuned, jointly while learning the mutation policy. We perform extensive experiments on various sequence lengths to benchmark RL-based approaches, and provide comprehensive evaluations along biological plausibility and diversity of the protein. Our experimental results include favorable evaluations of the proposed sequences, along with high diversity scores, demonstrating that RL is a strong candidate for biological sequence design. Finally, we provide a modular open source implementation can be easily integrated in most RL training loops, with support for replacing the reward model with other PLMs, to spur further research in this domain. The code for all experiments is provided in the supplementary material.
|
http://arxiv.org/pdf/2407.03154v1
|
[
"Jithendaraa Subramanian",
"Shivakanth Sujit",
"Niloy Irtisam",
"Umong Sain",
"Derek Nowrouzezahrai",
"Samira Ebrahimi Kahou",
"Riashat Islam"
] |
2024-07-03T14:31:36Z
|
2024-07-03T14:31:36Z
|
2407.03152
|
Stereo Risk: A Continuous Modeling Approach to Stereo Matching
|
We introduce Stereo Risk, a new deep-learning approach to solve the classical stereo-matching problem in computer vision. As it is well-known that stereo matching boils down to a per-pixel disparity estimation problem, the popular state-of-the-art stereo-matching approaches widely rely on regressing the scene disparity values, yet via discretization of scene disparity values. Such discretization often fails to capture the nuanced, continuous nature of scene depth. Stereo Risk departs from the conventional discretization approach by formulating the scene disparity as an optimal solution to a continuous risk minimization problem, hence the name "stereo risk". We demonstrate that $L^1$ minimization of the proposed continuous risk function enhances stereo-matching performance for deep networks, particularly for disparities with multi-modal probability distributions. Furthermore, to enable the end-to-end network training of the non-differentiable $L^1$ risk optimization, we exploited the implicit function theorem, ensuring a fully differentiable network. A comprehensive analysis demonstrates our method's theoretical soundness and superior performance over the state-of-the-art methods across various benchmark datasets, including KITTI 2012, KITTI 2015, ETH3D, SceneFlow, and Middlebury 2014.
|
http://arxiv.org/pdf/2407.03152v1
|
[
"Ce Liu",
"Suryansh Kumar",
"Shuhang Gu",
"Radu Timofte",
"Yao Yao",
"Luc Van Gool"
] |
2024-07-03T14:30:47Z
|
2024-07-03T14:30:47Z
|
2403.19887
|
Jamba: A Hybrid Transformer-Mamba Language Model
|
We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU. Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.
|
http://arxiv.org/pdf/2403.19887v2
|
[
"Opher Lieber",
"Barak Lenz",
"Hofit Bata",
"Gal Cohen",
"Jhonathan Osin",
"Itay Dalmedigos",
"Erez Safahi",
"Shaked Meirom",
"Yonatan Belinkov",
"Shai Shalev-Shwartz",
"Omri Abend",
"Raz Alon",
"Tomer Asida",
"Amir Bergman",
"Roman Glozman",
"Michael Gokhman",
"Avashalom Manevich",
"Nir Ratner",
"Noam Rozen",
"Erez Shwartz",
"Mor Zusman",
"Yoav Shoham"
] |
2024-07-03T14:30:33Z
|
2024-03-28T23:55:06Z
|
2402.15171
|
Towards Efficient and Optimal Covariance-Adaptive Algorithms for
Combinatorial Semi-Bandits
|
We address the problem of stochastic combinatorial semi-bandits, where a player selects among $P$ actions from the power set of a set containing $d$ base items. Adaptivity to the problem's structure is essential in order to obtain optimal regret upper bounds. As estimating the coefficients of a covariance matrix can be manageable in practice, leveraging them should improve the regret. We design ``optimistic'' covariance-adaptive algorithms relying on online estimations of the covariance structure, called OLSUCBC and COSV (only the variances for the latter). They both yields improved gap-free regret. Although COSV can be slightly suboptimal, it improves on computational complexity by taking inspiration from Thompson Sampling approaches. It is the first sampling-based algorithm satisfying a $sqrt{T}$ gap-free regret (up to poly-logs). We also show that in some cases, our approach efficiently leverages the semi-bandit feedback and outperforms bandit feedback approaches, not only in exponential regimes where $Pgg d$ but also when $Pleq d$, which is not covered by existing analyses.
|
http://arxiv.org/pdf/2402.15171v2
|
[
"Julien Zhou",
"Pierre Gaillard",
"Thibaud Rahier",
"Houssam Zenati",
"Julyan Arbel"
] |
2024-07-03T14:29:43Z
|
2024-02-23T08:07:54Z
|
2406.09009
|
Fredformer: Frequency Debiased Transformer for Time Series Forecasting
|
The Transformer model has shown leading performance in time series forecasting. Nevertheless, in some complex scenarios, it tends to learn low-frequency features in the data and overlook high-frequency features, showing a frequency bias. This bias prevents the model from accurately capturing important high-frequency data features. In this paper, we undertook empirical analyses to understand this bias and discovered that frequency bias results from the model disproportionately focusing on frequency features with higher energy. Based on our analysis, we formulate this bias and propose Fredformer, a Transformer-based framework designed to mitigate frequency bias by learning features equally across different frequency bands. This approach prevents the model from overlooking lower amplitude features important for accurate forecasting. Extensive experiments show the effectiveness of our proposed approach, which can outperform other baselines in different real-world time-series datasets. Furthermore, we introduce a lightweight variant of the Fredformer with an attention matrix approximation, which achieves comparable performance but with much fewer parameters and lower computation costs. The code is available at: https://github.com/chenzRG/Fredformer
|
http://arxiv.org/abs/2406.09009v4
|
[
"Xihao Piao",
"Zheng Chen",
"Taichi Murayama",
"Yasuko Matsubara",
"Yasushi Sakurai"
] |
2024-07-03T14:24:39Z
|
2024-06-13T11:29:21Z
|
2407.03132
|
Speaker- and Text-Independent Estimation of Articulatory Movements and
Phoneme Alignments from Speech
|
This paper introduces a novel combination of two tasks, previously treated separately: acoustic-to-articulatory speech inversion (AAI) and phoneme-to-articulatory (PTA) motion estimation. We refer to this joint task as acoustic phoneme-to-articulatory speech inversion (APTAI) and explore two different approaches, both working speaker- and text-independently during inference. We use a multi-task learning setup, with the end-to-end goal of taking raw speech as input and estimating the corresponding articulatory movements, phoneme sequence, and phoneme alignment. While both proposed approaches share these same requirements, they differ in their way of achieving phoneme-related predictions: one is based on frame classification, the other on a two-staged training procedure and forced alignment. We reach competitive performance of 0.73 mean correlation for the AAI task and achieve up to approximately 87% frame overlap compared to a state-of-the-art text-dependent phoneme force aligner.
|
http://arxiv.org/pdf/2407.03132v1
|
[
"Tobias Weise",
"Philipp Klumpp",
"Kubilay Can Demir",
"Paula Andrea Pérez-Toro",
"Maria Schuster",
"Elmar Noeth",
"Bjoern Heismann",
"Andreas Maier",
"Seung Hee Yang"
] |
2024-07-03T14:13:04Z
|
2024-07-03T14:13:04Z
|
2406.14322
|
Mind the Privacy Unit! User-Level Differential Privacy for Language
Model Fine-Tuning
|
Large language models (LLMs) have emerged as powerful tools for tackling complex tasks across diverse domains, but they also raise privacy concerns when fine-tuned on sensitive data due to potential memorization. While differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit, current evaluations on LLMs mostly treat each example (text record) as the privacy unit. This leads to uneven user privacy guarantees when contributions per user vary. We therefore study user-level DP motivated by applications where it necessary to ensure uniform privacy protection across users. We present a systematic evaluation of user-level DP for LLM fine-tuning on natural language generation tasks. Focusing on two mechanisms for achieving user-level DP guarantees, Group Privacy and User-wise DP-SGD, we investigate design choices like data selection strategies and parameter tuning for the best privacy-utility tradeoff.
|
http://arxiv.org/pdf/2406.14322v2
|
[
"Lynn Chua",
"Badih Ghazi",
"Yangsibo Huang",
"Pritish Kamath",
"Ravi Kumar",
"Daogao Liu",
"Pasin Manurangsi",
"Amer Sinha",
"Chiyuan Zhang"
] |
2024-07-03T14:05:20Z
|
2024-06-20T13:54:32Z
|
2402.05369
|
Noise Contrastive Alignment of Language Models with Explicit Rewards
|
User intentions are typically formalized as evaluation rewards to be maximized when fine-tuning language models (LMs). Existing alignment methods, such as Direct Preference Optimization (DPO), are mainly tailored for pairwise preference data where rewards are implicitly defined rather than explicitly given. In this paper, we introduce a general framework for LM alignment, leveraging Noise Contrastive Estimation (NCE) to bridge the gap in handling reward datasets explicitly annotated with scalar evaluations. Our framework comprises two parallel algorithms, NCA and InfoNCA, both enabling the direct extraction of an LM policy from reward data as well as preference data. Notably, we show that the DPO loss is a special case of our proposed InfoNCA objective under pairwise preference settings, thereby integrating and extending current alignment theories. By comparing NCA and InfoNCA, we demonstrate that the well-observed decreasing-likelihood trend of DPO/InfoNCA is caused by their focus on adjusting relative likelihood across different responses. In contrast, NCA optimizes the absolute likelihood for each response, thereby effectively preventing the chosen likelihood from decreasing. We evaluate our methods in both reward and preference settings with Mistral-8*7B and 7B models. Experiments suggest that InfoNCA/NCA surpasses various preference baselines when reward datasets are available. We also find NCA significantly outperforms DPO in complex reasoning tasks like math and coding.
|
http://arxiv.org/pdf/2402.05369v2
|
[
"Huayu Chen",
"Guande He",
"Lifan Yuan",
"Ganqu Cui",
"Hang Su",
"Jun Zhu"
] |
2024-07-03T13:53:06Z
|
2024-02-08T02:58:47Z
|
2407.03108
|
How Reliable and Stable are Explanations of XAI Methods?
|
Black box models are increasingly being used in the daily lives of human beings living in society. Along with this increase, there has been the emergence of Explainable Artificial Intelligence (XAI) methods aimed at generating additional explanations regarding how the model makes certain predictions. In this sense, methods such as Dalex, Eli5, eXirt, Lofo and Shap emerged as different proposals and methodologies for generating explanations of black box models in an agnostic way. Along with the emergence of these methods, questions arise such as "How Reliable and Stable are XAI Methods?". With the aim of shedding light on this main question, this research creates a pipeline that performs experiments using the diabetes dataset and four different machine learning models (LGBM, MLP, DT and KNN), creating different levels of perturbations of the test data and finally generates explanations from the eXirt method regarding the confidence of the models and also feature relevances ranks from all XAI methods mentioned, in order to measure their stability in the face of perturbations. As a result, it was found that eXirt was able to identify the most reliable models among all those used. It was also found that current XAI methods are sensitive to perturbations, with the exception of one specific method.
|
http://arxiv.org/pdf/2407.03108v1
|
[
"José Ribeiro",
"Lucas Cardoso",
"Vitor Santos",
"Eduardo Carvalho",
"Níkolas Carneiro",
"Ronnie Alves"
] |
2024-07-03T13:47:41Z
|
2024-07-03T13:47:41Z
|
2402.13228
|
Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive
|
Direct Preference Optimisation (DPO) is effective at significantly improving the performance of large language models (LLMs) on downstream tasks such as reasoning, summarisation, and alignment. Using pairs of preferred and dispreferred data, DPO models the relative probability of picking one response over another. In this work, first we show theoretically that the standard DPO loss can lead to a reduction of the model's likelihood of the preferred examples, as long as the relative probability between the preferred and dispreferred classes increases. We then show empirically that this phenomenon occurs when fine-tuning LLMs on common datasets, especially datasets in which the edit distance between pairs of completions is low. Using these insights, we design DPO-Positive (DPOP), a new loss function and training procedure which avoids this failure mode. Surprisingly, we find that DPOP outperforms DPO and other fine-tuning procedures across a wide variety of datasets and downstream tasks, including datasets with high edit distances between completions. Furthermore, we find that the DPOP-tuned model outperforms the DPO-tuned model (all else equal) on benchmarks independent of the fine-tuning data, such as MT-Bench. Finally, using DPOP, we create and open-source Smaug-34B and Smaug-72B, with the latter becoming the first open-source LLM to surpass an average accuracy of 80% on the HuggingFace Open LLM Leaderboard.
|
http://arxiv.org/pdf/2402.13228v2
|
[
"Arka Pal",
"Deep Karkhanis",
"Samuel Dooley",
"Manley Roberts",
"Siddartha Naidu",
"Colin White"
] |
2024-07-03T13:46:33Z
|
2024-02-20T18:42:34Z
|
2403.00849
|
NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable
Functions
|
Field-Programmable Gate Array (FPGA) accelerators have proven successful in handling latency- and resource-critical deep neural network (DNN) inference tasks. Among the most computationally intensive operations in a neural network (NN) is the dot product between the feature and weight vectors. Thus, some previous FPGA acceleration works have proposed mapping neurons with quantized inputs and outputs directly to lookup tables (LUTs) for hardware implementation. In these works, the boundaries of the neurons coincide with the boundaries of the LUTs. We propose relaxing these boundaries and mapping entire sub-networks to a single LUT. As the sub-networks are absorbed within the LUT, the NN topology and precision within a partition do not affect the size of the lookup tables generated. Therefore, we utilize fully connected layers with floating-point precision inside each partition, which benefit from being universal function approximators, but with rigid sparsity and quantization enforced between partitions, where the NN topology becomes exposed to the circuit topology. Although cheap to implement, this approach can lead to very deep NNs, and so to tackle challenges like vanishing gradients, we also introduce skip connections inside the partitions. The resulting methodology can be seen as training DNNs with a specific FPGA hardware-inspired sparsity pattern that allows them to be mapped to much shallower circuit-level networks, thereby significantly improving latency. We validate our proposed method on a known latency-critical task, jet substructure tagging, and on the classical computer vision task, digit classification using MNIST. Our approach allows for greater function expressivity within the LUTs compared to existing work, leading to up to $4.3times$ lower latency NNs for the same accuracy.
|
http://arxiv.org/pdf/2403.00849v2
|
[
"Marta Andronic",
"George A. Constantinides"
] |
2024-07-03T13:43:56Z
|
2024-02-29T16:10:21Z
|
2407.03105
|
On Generalization for Generative Flow Networks
|
Generative Flow Networks (GFlowNets) have emerged as an innovative learning paradigm designed to address the challenge of sampling from an unnormalized probability distribution, called the reward function. This framework learns a policy on a constructed graph, which enables sampling from an approximation of the target probability distribution through successive steps of sampling from the learned policy. To achieve this, GFlowNets can be trained with various objectives, each of which can lead to the model s ultimate goal. The aspirational strength of GFlowNets lies in their potential to discern intricate patterns within the reward function and their capacity to generalize effectively to novel, unseen parts of the reward function. This paper attempts to formalize generalization in the context of GFlowNets, to link generalization with stability, and also to design experiments that assess the capacity of these models to uncover unseen parts of the reward function. The experiments will focus on length generalization meaning generalization to states that can be constructed only by longer trajectories than those seen in training.
|
http://arxiv.org/pdf/2407.03105v1
|
[
"Anas Krichel",
"Nikolay Malkin",
"Salem Lahlou",
"Yoshua Bengio"
] |
2024-07-03T13:42:21Z
|
2024-07-03T13:42:21Z
|
2407.07912
|
ITEM: Improving Training and Evaluation of Message-Passing based GNNs
for top-k recommendation
|
Graph Neural Networks (GNNs), especially message-passing-based models, have become prominent in top-k recommendation tasks, outperforming matrix factorization models due to their ability to efficiently aggregate information from a broader context. Although GNNs are evaluated with ranking-based metrics, e.g NDCG@k and Recall@k, they remain largely trained with proxy losses, e.g the BPR loss. In this work we explore the use of ranking loss functions to directly optimize the evaluation metrics, an area not extensively investigated in the GNN community for collaborative filtering. We take advantage of smooth approximations of the rank to facilitate end-to-end training of GNNs and propose a Personalized PageRank-based negative sampling strategy tailored for ranking loss functions. Moreover, we extend the evaluation of GNN models for top-k recommendation tasks with an inductive user-centric protocol, providing a more accurate reflection of real-world applications. Our proposed method significantly outperforms the standard BPR loss and more advanced losses across four datasets and four recent GNN architectures while also exhibiting faster training. Demonstrating the potential of ranking loss functions in improving GNN training for collaborative filtering tasks.
|
http://arxiv.org/pdf/2407.07912v1
|
[
"Yannis Karmim",
"Elias Ramzi",
"Raphaël Fournier-S'niehotta",
"Nicolas Thome"
] |
2024-07-03T13:35:23Z
|
2024-07-03T13:35:23Z
|
2407.03094
|
Conformal Prediction for Causal Effects of Continuous Treatments
|
Uncertainty quantification of causal effects is crucial for safety-critical applications such as personalized medicine. A powerful approach for this is conformal prediction, which has several practical benefits due to model-agnostic finite-sample guarantees. Yet, existing methods for conformal prediction of causal effects are limited to binary/discrete treatments and make highly restrictive assumptions such as known propensity scores. In this work, we provide a novel conformal prediction method for potential outcomes of continuous treatments. We account for the additional uncertainty introduced through propensity estimation so that our conformal prediction intervals are valid even if the propensity score is unknown. Our contributions are three-fold: (1) We derive finite-sample prediction intervals for potential outcomes of continuous treatments. (2) We provide an algorithm for calculating the derived intervals. (3) We demonstrate the effectiveness of the conformal prediction intervals in experiments on synthetic and real-world datasets. To the best of our knowledge, we are the first to propose conformal prediction for continuous treatments when the propensity score is unknown and must be estimated from data.
|
http://arxiv.org/pdf/2407.03094v1
|
[
"Maresa Schröder",
"Dennis Frauen",
"Jonas Schweisthal",
"Konstantin Heß",
"Valentyn Melnychuk",
"Stefan Feuerriegel"
] |
2024-07-03T13:34:33Z
|
2024-07-03T13:34:33Z
|
2407.03093
|
Revisiting the Performance of Deep Learning-Based Vulnerability
Detection on Realistic Datasets
|
The impact of software vulnerabilities on everyday software systems is significant. Despite deep learning models being proposed for vulnerability detection, their reliability is questionable. Prior evaluations show high recall/F1 scores of up to 99%, but these models underperform in practical scenarios, particularly when assessed on entire codebases rather than just the fixing commit. This paper introduces Real-Vul, a comprehensive dataset representing real-world scenarios for evaluating vulnerability detection models. Evaluating DeepWukong, LineVul, ReVeal, and IVDetect shows a significant drop in performance, with precision decreasing by up to 95 percentage points and F1 scores by up to 91 points. Furthermore, Model performance fluctuates based on vulnerability characteristics, with better F1 scores for information leaks or code injection than for path resolution or predictable return values. The results highlight a significant performance gap that needs addressing before deploying deep learning-based vulnerability detection in practical settings. Overfitting is identified as a key issue, and an augmentation technique is proposed, potentially improving performance by up to 30%. Contributions include a dataset creation approach for better model evaluation, Real-Vul dataset, and empirical evidence of deep learning models struggling in real-world settings.
|
http://arxiv.org/pdf/2407.03093v1
|
[
"Partha Chakraborty",
"Krishna Kanth Arumugam",
"Mahmoud Alfadel",
"Meiyappan Nagappan",
"Shane McIntosh"
] |
2024-07-03T13:34:30Z
|
2024-07-03T13:34:30Z
|
2406.18470
|
UFRec: Integrating Uniformity and Frequency to Enhance Sequential
Recommendations
|
Effective representation learning in sequential recommendation systems is pivotal for precisely capturing user interaction patterns and enhancing recommendation accuracy. Nonetheless, current methodologies largely focus on item-to-item transitions, frequently overlooking the time intervals between interactions, which are integral to understanding behavior pattern shifts. Moreover, critical interaction attributes like item frequency are often neglected. Our research indicates that sequences with more consistent time intervals and items with higher interaction frequency result in superior predictive performance. In contrast, sequences with non-uniform intervals contribute to user interest drift, and infrequently interacted items are challenging to model due to sparse data, posing unique challenges that existing methods fail to adequately address. In this study, we introduce UFRec, an innovative bidirectional enhancement method for sequential recommendations. UFRec harnesses sequence uniformity and item frequency to boost performance, particularly improving the representation of non-uniform sequences and less-frequent items. These two components synergistically enhance each other, driving holistic performance optimization in intricate sequential recommendation scenarios. Additionally, we introduce a multidimensional time module to further augment adaptability. To the best of our knowledge, UFRec is the pioneering method to exploit the properties of uniformity and frequency for feature augmentation. Through comparisons with eleven state-of-the-art models across four datasets, we demonstrate that UFRec significantly surpasses current leading models.
|
http://arxiv.org/pdf/2406.18470v2
|
[
"Yang Liu",
"Yitong Wang",
"Chenyue Feng"
] |
2024-07-03T13:32:34Z
|
2024-06-26T16:28:24Z
|
2407.02466
|
PWM: Policy Learning with Large World Models
|
Reinforcement Learning (RL) has achieved impressive results on complex tasks but struggles in multi-task settings with different embodiments. World models offer scalability by learning a simulation of the environment, yet they often rely on inefficient gradient-free optimization methods. We introduce Policy learning with large World Models (PWM), a novel model-based RL algorithm that learns continuous control policies from large multi-task world models. By pre-training the world model on offline data and using it for first-order gradient policy learning, PWM effectively solves tasks with up to 152 action dimensions and outperforms methods using ground-truth dynamics. Additionally, PWM scales to an 80-task setting, achieving up to 27% higher rewards than existing baselines without the need for expensive online planning. Visualizations and code available at https://www.imgeorgiev.com/pwm
|
http://arxiv.org/pdf/2407.02466v2
|
[
"Ignat Georgiev",
"Varun Giridhar",
"Nicklas Hansen",
"Animesh Garg"
] |
2024-07-03T13:24:02Z
|
2024-07-02T17:47:03Z
|
2407.03086
|
Effective Heterogeneous Federated Learning via Efficient
Hypernetwork-based Weight Generation
|
While federated learning leverages distributed client resources, it faces challenges due to heterogeneous client capabilities. This necessitates allocating models suited to clients' resources and careful parameter aggregation to accommodate this heterogeneity. We propose HypeMeFed, a novel federated learning framework for supporting client heterogeneity by combining a multi-exit network architecture with hypernetwork-based model weight generation. This approach aligns the feature spaces of heterogeneous model layers and resolves per-layer information disparity during weight aggregation. To practically realize HypeMeFed, we also propose a low-rank factorization approach to minimize computation and memory overhead associated with hypernetworks. Our evaluations on a real-world heterogeneous device testbed indicate that HypeMeFed enhances accuracy by 5.12% over FedAvg, reduces the hypernetwork memory requirements by 98.22%, and accelerates its operations by 1.86 times compared to a naive hypernetwork approach. These results demonstrate HypeMeFed's effectiveness in leveraging and engaging heterogeneous clients for federated learning.
|
http://arxiv.org/pdf/2407.03086v1
|
[
"Yujin Shin",
"Kichang Lee",
"Sungmin Lee",
"You Rim Choi",
"Hyung-Sin Kim",
"JeongGil Ko"
] |
2024-07-03T13:15:12Z
|
2024-07-03T13:15:12Z
|
2210.09933
|
Explanations Based on Item Response Theory (eXirt): A Model-Specific
Method to Explain Tree-Ensemble Model in Trust Perspective
|
In recent years, XAI researchers have been formalizing proposals and developing new methods to explain black box models, with no general consensus in the community on which method to use to explain these models, with this choice being almost directly linked to the popularity of a specific method. Methods such as Ciu, Dalex, Eli5, Lofo, Shap and Skater emerged with the proposal to explain black box models through global rankings of feature relevance, which based on different methodologies, generate global explanations that indicate how the model's inputs explain its predictions. In this context, 41 datasets, 4 tree-ensemble algorithms (Light Gradient Boosting, CatBoost, Random Forest, and Gradient Boosting), and 6 XAI methods were used to support the launch of a new XAI method, called eXirt, based on Item Response Theory - IRT and aimed at tree-ensemble black box models that use tabular data referring to binary classification problems. In the first set of analyses, the 164 global feature relevance ranks of the eXirt were compared with 984 ranks of the other XAI methods present in the literature, seeking to highlight their similarities and differences. In a second analysis, exclusive explanations of the eXirt based on Explanation-by-example were presented that help in understanding the model trust. Thus, it was verified that eXirt is able to generate global explanations of tree-ensemble models and also local explanations of instances of models through IRT, showing how this consolidated theory can be used in machine learning in order to obtain explainable and reliable models.
|
http://arxiv.org/abs/2210.09933v3
|
[
"José Ribeiro",
"Lucas Cardoso",
"Raíssa Silva",
"Vitor Cirilo",
"Níkolas Carneiro",
"Ronnie Alves"
] |
2024-07-03T13:08:04Z
|
2022-10-18T15:30:14Z
|
2407.03082
|
Stable Heterogeneous Treatment Effect Estimation across
Out-of-Distribution Populations
|
Heterogeneous treatment effect (HTE) estimation is vital for understanding the change of treatment effect across individuals or subgroups. Most existing HTE estimation methods focus on addressing selection bias induced by imbalanced distributions of confounders between treated and control units, but ignore distribution shifts across populations. Thereby, their applicability has been limited to the in-distribution (ID) population, which shares a similar distribution with the training dataset. In real-world applications, where population distributions are subject to continuous changes, there is an urgent need for stable HTE estimation across out-of-distribution (OOD) populations, which, however, remains an open problem. As pioneers in resolving this problem, we propose a novel Stable Balanced Representation Learning with Hierarchical-Attention Paradigm (SBRL-HAP) framework, which consists of 1) Balancing Regularizer for eliminating selection bias, 2) Independence Regularizer for addressing the distribution shift issue, 3) Hierarchical-Attention Paradigm for coordination between balance and independence. In this way, SBRL-HAP regresses counterfactual outcomes using ID data, while ensuring the resulting HTE estimation can be successfully generalized to out-of-distribution scenarios, thereby enhancing the model's applicability in real-world settings. Extensive experiments conducted on synthetic and real-world datasets demonstrate the effectiveness of our SBRL-HAP in achieving stable HTE estimation across OOD populations, with an average 10% reduction in the error metric PEHE and 11% decrease in the ATE bias, compared to the SOTA methods.
|
http://arxiv.org/pdf/2407.03082v1
|
[
"Yuling Zhang",
"Anpeng Wu",
"Kun Kuang",
"Liang Du",
"Zixun Sun",
"Zhi Wang"
] |
2024-07-03T13:03:51Z
|
2024-07-03T13:03:51Z
|
2407.03080
|
Artificial Inductive Bias for Synthetic Tabular Data Generation in
Data-Scarce Scenarios
|
While synthetic tabular data generation using Deep Generative Models (DGMs) offers a compelling solution to data scarcity and privacy concerns, their effectiveness relies on substantial training data, often unavailable in real-world applications. This paper addresses this challenge by proposing a novel methodology for generating realistic and reliable synthetic tabular data with DGMs in limited real-data environments. Our approach proposes several ways to generate an artificial inductive bias in a DGM through transfer learning and meta-learning techniques. We explore and compare four different methods within this framework, demonstrating that transfer learning strategies like pre-training and model averaging outperform meta-learning approaches, like Model-Agnostic Meta-Learning, and Domain Randomized Search. We validate our approach using two state-of-the-art DGMs, namely, a Variational Autoencoder and a Generative Adversarial Network, to show that our artificial inductive bias fuels superior synthetic data quality, as measured by Jensen-Shannon divergence, achieving relative gains of up to 50% when using our proposed approach. This methodology has broad applicability in various DGMs and machine learning tasks, particularly in areas like healthcare and finance, where data scarcity is often a critical issue.
|
http://arxiv.org/pdf/2407.03080v1
|
[
"Patricia A. Apellániz",
"Ana Jiménez",
"Borja Arroyo Galende",
"Juan Parras",
"Santiago Zazo"
] |
2024-07-03T12:53:42Z
|
2024-07-03T12:53:42Z
|
2405.14527
|
ArchesWeather: An efficient AI weather forecasting model at 1.5°
resolution
|
One of the guiding principles for designing AI-based weather forecasting systems is to embed physical constraints as inductive priors in the neural network architecture. A popular prior is locality, where the atmospheric data is processed with local neural interactions, like 3D convolutions or 3D local attention windows as in Pangu-Weather. On the other hand, some works have shown great success in weather forecasting without this locality principle, at the cost of a much higher parameter count. In this paper, we show that the 3D local processing in Pangu-Weather is computationally sub-optimal. We design ArchesWeather, a transformer model that combines 2D attention with a column-wise attention-based feature interaction module, and demonstrate that this design improves forecasting skill. ArchesWeather is trained at 1.5{deg} resolution and 24h lead time, with a training budget of a few GPU-days and a lower inference cost than competing methods. An ensemble of four of our models shows better RMSE scores than the IFS HRES and is competitive with the 1.4{deg} 50-members NeuralGCM ensemble for one to three days ahead forecasting. Our code and models are publicly available at https://github.com/gcouairon/ArchesWeather.
|
http://arxiv.org/pdf/2405.14527v2
|
[
"Guillaume Couairon",
"Christian Lessig",
"Anastase Charantonis",
"Claire Monteleoni"
] |
2024-07-03T12:39:58Z
|
2024-05-23T13:11:49Z
|
2407.03065
|
Warm-up Free Policy Optimization: Improved Regret in Linear Markov
Decision Processes
|
Policy Optimization (PO) methods are among the most popular Reinforcement Learning (RL) algorithms in practice. Recently, Sherman et al. [2023a] proposed a PO-based algorithm with rate-optimal regret guarantees under the linear Markov Decision Process (MDP) model. However, their algorithm relies on a costly pure exploration warm-up phase that is hard to implement in practice. This paper eliminates this undesired warm-up phase, replacing it with a simple and efficient contraction mechanism. Our PO algorithm achieves rate-optimal regret with improved dependence on the other parameters of the problem (horizon and function approximation dimension) in two fundamental settings: adversarial losses with full-information feedback and stochastic losses with bandit feedback.
|
http://arxiv.org/pdf/2407.03065v1
|
[
"Asaf Cassel",
"Aviv Rosenberg"
] |
2024-07-03T12:36:24Z
|
2024-07-03T12:36:24Z
|
2407.03059
|
FairJob: A Real-World Dataset for Fairness in Online Systems
|
We introduce a fairness-aware dataset for job recommendation in advertising, designed to foster research in algorithmic fairness within real-world scenarios. It was collected and prepared to comply with privacy standards and business confidentiality. An additional challenge is the lack of access to protected user attributes such as gender, for which we propose a solution to obtain a proxy estimate. Despite being anonymized and including a proxy for a sensitive attribute, our dataset preserves predictive power and maintains a realistic and challenging benchmark. This dataset addresses a significant gap in the availability of fairness-focused resources for high-impact domains like advertising -- the actual impact being having access or not to precious employment opportunities, where balancing fairness and utility is a common industrial challenge. We also explore various stages in the advertising process where unfairness can occur and introduce a method to compute a fair utility metric for the job recommendations in online systems case from a biased dataset. Experimental evaluations of bias mitigation techniques on the released dataset demonstrate potential improvements in fairness and the associated trade-offs with utility.
|
http://arxiv.org/pdf/2407.03059v1
|
[
"Mariia Vladimirova",
"Federico Pavone",
"Eustache Diemert"
] |
2024-07-03T12:30:39Z
|
2024-07-03T12:30:39Z
|
2407.03056
|
Improving Zero-shot Generalization of Learned Prompts via Unsupervised
Knowledge Distillation
|
Vision-Language Models (VLMs) demonstrate remarkable zero-shot generalization to unseen tasks, but fall short of the performance of supervised methods in generalizing to downstream tasks with limited data. Prompt learning is emerging as a parameter-efficient method for adapting VLMs, but state-of-the-art approaches require annotated samples. In this paper we propose a novel approach to prompt learning based on unsupervised knowledge distillation from more powerful models. Our approach, which we call Knowledge Distillation Prompt Learning (KDPL), can be integrated into existing prompt learning techniques and eliminates the need for labeled examples during adaptation. Our experiments on more than ten standard benchmark datasets demonstrate that KDPL is very effective at improving generalization of learned prompts for zero-shot domain generalization, zero-shot cross-dataset generalization, and zero-shot base-to-novel class generalization problems. KDPL requires no ground-truth labels for adaptation, and moreover we show that even in the absence of any knowledge of training class names it can be used to effectively transfer knowledge. The code is publicly available at https://github.com/miccunifi/KDPL.
|
http://arxiv.org/pdf/2407.03056v1
|
[
"Marco Mistretta",
"Alberto Baldrati",
"Marco Bertini",
"Andrew D. Bagdanov"
] |
2024-07-03T12:24:40Z
|
2024-07-03T12:24:40Z
|
2402.17506
|
Thermodynamics-informed super-resolution of scarce temporal dynamics
data
|
We present a method to increase the resolution of measurements of a physical system and subsequently predict its time evolution using thermodynamics-aware neural networks. Our method uses adversarial autoencoders, which reduce the dimensionality of the full order model to a set of latent variables that are enforced to match a prior, for example a normal distribution. Adversarial autoencoders are seen as generative models, and they can be trained to generate high-resolution samples from low-resoution inputs, meaning they can address the so-called super-resolution problem. Then, a second neural network is trained to learn the physical structure of the latent variables and predict their temporal evolution. This neural network is known as an structure-preserving neural network. It learns the metriplectic-structure of the system and applies a physical bias to ensure that the first and second principles of thermodynamics are fulfilled. The integrated trajectories are decoded to their original dimensionality, as well as to the higher dimensionality space produced by the adversarial autoencoder and they are compared to the ground truth solution. The method is tested with two examples of flow over a cylinder, where the fluid properties are varied between both examples.
|
http://arxiv.org/pdf/2402.17506v2
|
[
"Carlos Bermejo-Barbanoj",
"Beatriz Moya",
"Alberto Badías",
"Francisco Chinesta",
"Elías Cueto"
] |
2024-07-03T12:13:21Z
|
2024-02-27T13:46:45Z
|
2407.03045
|
JailbreakHunter: A Visual Analytics Approach for Jailbreak Prompts
Discovery from Large-Scale Human-LLM Conversational Datasets
|
Large Language Models (LLMs) have gained significant attention but also raised concerns due to the risk of misuse. Jailbreak prompts, a popular type of adversarial attack towards LLMs, have appeared and constantly evolved to breach the safety protocols of LLMs. To address this issue, LLMs are regularly updated with safety patches based on reported jailbreak prompts. However, malicious users often keep their successful jailbreak prompts private to exploit LLMs. To uncover these private jailbreak prompts, extensive analysis of large-scale conversational datasets is necessary to identify prompts that still manage to bypass the system's defenses. This task is highly challenging due to the immense volume of conversation data, diverse characteristics of jailbreak prompts, and their presence in complex multi-turn conversations. To tackle these challenges, we introduce JailbreakHunter, a visual analytics approach for identifying jailbreak prompts in large-scale human-LLM conversational datasets. We have designed a workflow with three analysis levels: group-level, conversation-level, and turn-level. Group-level analysis enables users to grasp the distribution of conversations and identify suspicious conversations using multiple criteria, such as similarity with reported jailbreak prompts in previous research and attack success rates. Conversation-level analysis facilitates the understanding of the progress of conversations and helps discover jailbreak prompts within their conversation contexts. Turn-level analysis allows users to explore the semantic similarity and token overlap between a singleturn prompt and the reported jailbreak prompts, aiding in the identification of new jailbreak strategies. The effectiveness and usability of the system were verified through multiple case studies and expert interviews.
|
http://arxiv.org/pdf/2407.03045v1
|
[
"Zhihua Jin",
"Shiyi Liu",
"Haotian Li",
"Xun Zhao",
"Huamin Qu"
] |
2024-07-03T12:10:41Z
|
2024-07-03T12:10:41Z
|
2407.03038
|
On the Client Preference of LLM Fine-tuning in Federated Learning
|
Reinforcement learning with human feedback (RLHF) fine-tunes a pretrained large language model (LLM) using preference datasets, enabling the LLM to generate outputs that align with human preferences. Given the sensitive nature of these preference datasets held by various clients, there is a need to implement RLHF within a federated learning (FL) framework, where clients are reluctant to share their data due to privacy concerns. To address this, we introduce a feasible framework in which clients collaboratively train a binary selector with their preference datasets using our proposed FedBis. With a well-trained selector, we can further enhance the LLM that generates human-preferred completions. Meanwhile, we propose a novel algorithm, FedBiscuit, that trains multiple selectors by organizing clients into balanced and disjoint clusters based on their preferences. Compared to the FedBis, FedBiscuit demonstrates superior performance in simulating human preferences for pairwise completions. Our extensive experiments on federated human preference datasets -- marking the first benchmark to address heterogeneous data partitioning among clients -- demonstrate that FedBiscuit outperforms FedBis and even surpasses traditional centralized training.
|
http://arxiv.org/pdf/2407.03038v1
|
[
"Feijie Wu",
"Xiaoze Liu",
"Haoyu Wang",
"Xingchen Wang",
"Jing Gao"
] |
2024-07-03T12:02:24Z
|
2024-07-03T12:02:24Z
|
2308.13900
|
Semi-Supervised Semantic Segmentation via Marginal Contextual
Information
|
We present a novel confidence refinement scheme that enhances pseudo labels in semi-supervised semantic segmentation. Unlike existing methods, which filter pixels with low-confidence predictions in isolation, our approach leverages the spatial correlation of labels in segmentation maps by grouping neighboring pixels and considering their pseudo labels collectively. With this contextual information, our method, named S4MC, increases the amount of unlabeled data used during training while maintaining the quality of the pseudo labels, all with negligible computational overhead. Through extensive experiments on standard benchmarks, we demonstrate that S4MC outperforms existing state-of-the-art semi-supervised learning approaches, offering a promising solution for reducing the cost of acquiring dense annotations. For example, S4MC achieves a 1.39 mIoU improvement over the prior art on PASCAL VOC 12 with 366 annotated images. The code to reproduce our experiments is available at https://s4mcontext.github.io/
|
http://arxiv.org/pdf/2308.13900v2
|
[
"Moshe Kimhi",
"Shai Kimhi",
"Evgenii Zheltonozhskii",
"Or Litany",
"Chaim Baskin"
] |
2024-07-03T11:58:22Z
|
2023-08-26T15:02:00Z
|
2312.12616
|
Online Variational Sequential Monte Carlo
|
Being the most classical generative model for serial data, state-space models (SSM) are fundamental in AI and statistical machine learning. In SSM, any form of parameter learning or latent state inference typically involves the computation of complex latent-state posteriors. In this work, we build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference by combining particle methods and variational inference. While standard VSMC operates in the offline mode, by re-processing repeatedly a given batch of data, we distribute the approximation of the gradient of the VSMC surrogate ELBO in time using stochastic approximation, allowing for online learning in the presence of streams of data. This results in an algorithm, online VSMC, that is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation. In addition, we provide rigorous theoretical results describing the algorithm's convergence properties as the number of data tends to infinity as well as numerical illustrations of its excellent convergence properties and usefulness also in batch-processing settings.
|
http://arxiv.org/pdf/2312.12616v3
|
[
"Alessandro Mastrototaro",
"Jimmy Olsson"
] |
2024-07-03T11:47:36Z
|
2023-12-19T21:45:38Z
|
2308.14507
|
Spectral Estimators for Structured Generalized Linear Models via
Approximate Message Passing
|
We consider the problem of parameter estimation in a high-dimensional generalized linear model. Spectral methods obtained via the principal eigenvector of a suitable data-dependent matrix provide a simple yet surprisingly effective solution. However, despite their wide use, a rigorous performance characterization, as well as a principled way to preprocess the data, are available only for unstructured (i.i.d. Gaussian and Haar orthogonal) designs. In contrast, real-world data matrices are highly structured and exhibit non-trivial correlations. To address the problem, we consider correlated Gaussian designs capturing the anisotropic nature of the features via a covariance matrix $Sigma$. Our main result is a precise asymptotic characterization of the performance of spectral estimators. This allows us to identify the optimal preprocessing that minimizes the number of samples needed for parameter estimation. Surprisingly, such preprocessing is universal across a broad set of designs, which partly addresses a conjecture on optimal spectral estimators for rotationally invariant models. Our principled approach vastly improves upon previous heuristic methods, including for designs common in computational imaging and genetics. The proposed methodology, based on approximate message passing, is broadly applicable and opens the way to the precise characterization of spiked matrices and of the corresponding spectral methods in a variety of settings.
|
http://arxiv.org/pdf/2308.14507v3
|
[
"Yihan Zhang",
"Hong Chang Ji",
"Ramji Venkataramanan",
"Marco Mondelli"
] |
2024-07-03T11:43:58Z
|
2023-08-28T11:49:23Z
|
2402.01781
|
When Benchmarks are Targets: Revealing the Sensitivity of Large Language
Model Leaderboards
|
Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection. Often, the published leaderboard rankings are taken at face value - we show this is a (potentially costly) mistake. Under existing leaderboards, the relative performance of LLMs is highly sensitive to (often minute) details. We show that for popular multiple-choice question benchmarks (e.g., MMLU), minor perturbations to the benchmark, such as changing the order of choices or the method of answer selection, result in changes in rankings up to 8 positions. We explain this phenomenon by conducting systematic experiments over three broad categories of benchmark perturbations and identifying the sources of this behavior. Our analysis results in several best-practice recommendations, including the advantage of a hybrid scoring method for answer selection. Our study highlights the dangers of relying on simple benchmark evaluations and charts the path for more robust evaluation schemes on the existing benchmarks. The code for this paper is available at https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness.
|
http://arxiv.org/pdf/2402.01781v2
|
[
"Norah Alzahrani",
"Hisham Abdullah Alyahya",
"Yazeed Alnumay",
"Sultan Alrashed",
"Shaykhah Alsubaie",
"Yusef Almushaykeh",
"Faisal Mirza",
"Nouf Alotaibi",
"Nora Altwairesh",
"Areeb Alowisheq",
"M Saiful Bari",
"Haidar Khan"
] |
2024-07-03T11:20:43Z
|
2024-02-01T19:12:25Z
|
2407.01823
|
Meta-Learning Based Optimization for Large Scale Wireless Systems
|
Optimization algorithms for wireless systems play a fundamental role in improving their performance and efficiency. However, it is known that the complexity of conventional optimization algorithms in the literature often exponentially increases with the number of transmit antennas and communication users in the wireless system. Therefore, in the large scale regime, the astronomically large complexity of these optimization algorithms prohibits their use and prevents assessing large scale wireless systems performance under optimized conditions. To overcome this limitation, this work proposes instead the use of an unsupervised meta-learning based approach to directly perform non-convex optimization at significantly reduced complexity. To demonstrate the effectiveness of the proposed meta-learning based solution, the sum-rate (SR) maximization problem for the following three emerging 6G technologies is contemplated: hierarchical rate-splitting multiple access (H-RSMA), integrated sensing and communication (ISAC), and beyond-diagonal reconfigurable intelligent surfaces (BD-RIS). Through numerical results, it is demonstrated that the proposed meta-learning based optimization framework is able to successfully optimize the performance and also reveal unknown aspects of the operation in the large scale regime for the considered three 6G technologies.
|
http://arxiv.org/pdf/2407.01823v2
|
[
"Rafael Cerna Loli",
"Bruno Clerckx"
] |
2024-07-03T11:09:00Z
|
2024-07-01T21:45:27Z
|
2311.00285
|
Mixture-of-Experts for Open Set Domain Adaptation: A Dual-Space
Detection Approach
|
Open Set Domain Adaptation (OSDA) aims to cope with the distribution and label shifts between the source and target domains simultaneously, performing accurate classification for known classes while identifying unknown class samples in the target domain. Most existing OSDA approaches, depending on the final image feature space of deep models, require manually-tuned thresholds, and may easily misclassify unknown samples as known classes. Mixture-of-Experts (MoE) could be a remedy. Within a MoE, different experts handle distinct input features, producing unique expert routing patterns for various classes in a routing feature space. As a result, unknown class samples may display different expert routing patterns to known classes. In this paper, we propose Dual-Space Detection, which exploits the inconsistencies between the image feature space and the routing feature space to detect unknown class samples without any threshold. Graph Router is further introduced to better make use of the spatial information among image patches. Experiments on three different datasets validated the effectiveness and superiority of our approach.
|
http://arxiv.org/pdf/2311.00285v2
|
[
"Zhenbang Du",
"Jiayu An",
"Yunlu Tu",
"Jiahao Hong",
"Dongrui Wu"
] |
2024-07-03T10:51:56Z
|
2023-11-01T04:36:18Z
|
2407.02987
|
LoRA-Guard: Parameter-Efficient Guardrail Adaptation for Content
Moderation of Large Language Models
|
Guardrails have emerged as an alternative to safety alignment for content moderation of large language models (LLMs). Existing model-based guardrails have not been designed for resource-constrained computational portable devices, such as mobile phones, more and more of which are running LLM-based applications locally. We introduce LoRA-Guard, a parameter-efficient guardrail adaptation method that relies on knowledge sharing between LLMs and guardrail models. LoRA-Guard extracts language features from the LLMs and adapts them for the content moderation task using low-rank adapters, while a dual-path design prevents any performance degradation on the generative task. We show that LoRA-Guard outperforms existing approaches with 100-1000x lower parameter overhead while maintaining accuracy, enabling on-device content moderation.
|
http://arxiv.org/pdf/2407.02987v1
|
[
"Hayder Elesedy",
"Pedro M. Esperança",
"Silviu Vlad Oprea",
"Mete Ozay"
] |
2024-07-03T10:38:40Z
|
2024-07-03T10:38:40Z
|
2406.09684
|
Explainable AI for Comparative Analysis of Intrusion Detection Models
|
Explainable Artificial Intelligence (XAI) has become a widely discussed topic, the related technologies facilitate better understanding of conventional black-box models like Random Forest, Neural Networks and etc. However, domain-specific applications of XAI are still insufficient. To fill this gap, this research analyzes various machine learning models to the tasks of binary and multi-class classification for intrusion detection from network traffic on the same dataset using occlusion sensitivity. The models evaluated include Linear Regression, Logistic Regression, Linear Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest, Decision Trees, and Multi-Layer Perceptrons (MLP). We trained all models to the accuracy of 90% on the UNSW-NB15 Dataset. We found that most classifiers leverage only less than three critical features to achieve such accuracies, indicating that effective feature engineering could actually be far more important for intrusion detection than applying complicated models. We also discover that Random Forest provides the best performance in terms of accuracy, time efficiency and robustness. Data and code available at https://github.com/pcwhy/XML-IntrusionDetection.git
|
http://arxiv.org/pdf/2406.09684v2
|
[
"Pap M. Corea",
"Yongxin Liu",
"Jian Wang",
"Shuteng Niu",
"Houbing Song"
] |
2024-07-03T10:32:51Z
|
2024-06-14T03:11:01Z
|
2405.15512
|
ChatGPT Code Detection: Techniques for Uncovering the Source of Code
|
In recent times, large language models (LLMs) have made significant strides in generating computer code, blurring the lines between code created by humans and code produced by artificial intelligence (AI). As these technologies evolve rapidly, it is crucial to explore how they influence code generation, especially given the risk of misuse in areas like higher education. This paper explores this issue by using advanced classification techniques to differentiate between code written by humans and that generated by ChatGPT, a type of LLM. We employ a new approach that combines powerful embedding features (black-box) with supervised learning algorithms - including Deep Neural Networks, Random Forests, and Extreme Gradient Boosting - to achieve this differentiation with an impressive accuracy of 98%. For the successful combinations, we also examine their model calibration, showing that some of the models are extremely well calibrated. Additionally, we present white-box features and an interpretable Bayes classifier to elucidate critical differences between the code sources, enhancing the explainability and transparency of our approach. Both approaches work well but provide at most 85-88% accuracy. We also show that untrained humans solve the same task not better than random guessing. This study is crucial in understanding and mitigating the potential risks associated with using AI in code generation, particularly in the context of higher education, software development, and competitive programming.
|
http://arxiv.org/abs/2405.15512v2
|
[
"Marc Oedingen",
"Raphael C. Engelhardt",
"Robin Denz",
"Maximilian Hammer",
"Wolfgang Konen"
] |
2024-07-03T10:23:01Z
|
2024-05-24T12:56:18Z
|
2405.18925
|
Federated Continual Learning Goes Online: Leveraging Uncertainty for
Modality-Agnostic Class-Incremental Learning
|
Given the ability to model more realistic and dynamic problems, Federated Continual Learning (FCL) has been increasingly investigated recently. A well-known problem encountered in this setting is the so-called catastrophic forgetting, for which the learning model is inclined to focus on more recent tasks while forgetting the previously learned knowledge. The majority of the current approaches in FCL propose generative-based solutions to solve said problem. However, this setting requires multiple training epochs over the data, implying an offline setting where datasets are stored locally and remain unchanged over time. Furthermore, the proposed solutions are tailored for vision tasks solely. To overcome these limitations, we propose a new modality-agnostic approach to deal with the online scenario where new data arrive in streams of mini-batches that can only be processed once. To solve catastrophic forgetting, we propose an uncertainty-aware memory-based approach. In particular, we suggest using an estimator based on the Bregman Information (BI) to compute the model's variance at the sample level. Through measures of predictive uncertainty, we retrieve samples with specific characteristics, and - by retraining the model on such samples - we demonstrate the potential of this approach to reduce the forgetting effect in realistic settings.
|
http://arxiv.org/pdf/2405.18925v2
|
[
"Giuseppe Serra",
"Florian Buettner"
] |
2024-07-03T10:22:04Z
|
2024-05-29T09:29:39Z
|
2407.02974
|
IM-MoCo: Self-supervised MRI Motion Correction using Motion-Guided
Implicit Neural Representations
|
Motion artifacts in Magnetic Resonance Imaging (MRI) arise due to relatively long acquisition times and can compromise the clinical utility of acquired images. Traditional motion correction methods often fail to address severe motion, leading to distorted and unreliable results. Deep Learning (DL) alleviated such pitfalls through generalization with the cost of vanishing structures and hallucinations, making it challenging to apply in the medical field where hallucinated structures can tremendously impact the diagnostic outcome. In this work, we present an instance-wise motion correction pipeline that leverages motion-guided Implicit Neural Representations (INRs) to mitigate the impact of motion artifacts while retaining anatomical structure. Our method is evaluated using the NYU fastMRI dataset with different degrees of simulated motion severity. For the correction alone, we can improve over state-of-the-art image reconstruction methods by $+5%$ SSIM, $+5:db$ PSNR, and $+14%$ HaarPSI. Clinical relevance is demonstrated by a subsequent experiment, where our method improves classification outcomes by at least $+1.5$ accuracy percentage points compared to motion-corrupted images.
|
http://arxiv.org/pdf/2407.02974v1
|
[
"Ziad Al-Haj Hemidi",
"Christian Weihsbach",
"Mattias P. Heinrich"
] |
2024-07-03T10:14:33Z
|
2024-07-03T10:14:33Z
|
2403.03777
|
ENOT: Expectile Regularization for Fast and Accurate Training of Neural
Optimal Transport
|
We present a new approach for Neural Optimal Transport (NOT) training procedure, capable of accurately and efficiently estimating optimal transportation plan via specific regularization on dual Kantorovich potentials. The main bottleneck of existing NOT solvers is associated with the procedure of finding a near-exact approximation of the conjugate operator (i.e., the c-transform), which is done either by optimizing over non-convex max-min objectives or by the computationally intensive fine-tuning of the initial approximated prediction. We resolve both issues by proposing a new, theoretically justified loss in the form of expectile regularisation which enforces binding conditions on the learning process of dual potentials. Such a regularization provides the upper bound estimation over the distribution of possible conjugate potentials and makes the learning stable, completely eliminating the need for additional extensive fine-tuning. Proposed method, called Expectile-Regularised Neural Optimal Transport (ENOT), outperforms previous state-of-the-art approaches on the established Wasserstein-2 benchmark tasks by a large margin (up to a 3-fold improvement in quality and up to a 10-fold improvement in runtime). Moreover, we showcase performance of ENOT for varying cost functions on different tasks such as image generation, showing robustness of proposed algorithm. OTT-JAX library includes our implementation of ENOT algorithm https://ott-jax.readthedocs.io/en/latest/tutorials/ENOT.html
|
http://arxiv.org/pdf/2403.03777v3
|
[
"Nazar Buzun",
"Maksim Bobrin",
"Dmitry V. Dylov"
] |
2024-07-03T10:02:39Z
|
2024-03-06T15:15:42Z
|
2407.06085
|
LLMcap: Large Language Model for Unsupervised PCAP Failure Detection
|
The integration of advanced technologies into telecommunication networks complicates troubleshooting, posing challenges for manual error identification in Packet Capture (PCAP) data. This manual approach, requiring substantial resources, becomes impractical at larger scales. Machine learning (ML) methods offer alternatives, but the scarcity of labeled data limits accuracy. In this study, we propose a self-supervised, large language model-based (LLMcap) method for PCAP failure detection. LLMcap leverages language-learning abilities and employs masked language modeling to learn grammar, context, and structure. Tested rigorously on various PCAPs, it demonstrates high accuracy despite the absence of labeled data during training, presenting a promising solution for efficient network analysis. Index Terms: Network troubleshooting, Packet Capture Analysis, Self-Supervised Learning, Large Language Model, Network Quality of Service, Network Performance.
|
http://arxiv.org/pdf/2407.06085v1
|
[
"Lukasz Tulczyjew",
"Kinan Jarrah",
"Charles Abondo",
"Dina Bennett",
"Nathanael Weill"
] |
2024-07-03T09:59:27Z
|
2024-07-03T09:59:27Z
|
2407.02961
|
Towards a Scalable Reference-Free Evaluation of Generative Models
|
While standard evaluation scores for generative models are mostly reference-based, a reference-dependent assessment of generative models could be generally difficult due to the unavailability of applicable reference datasets. Recently, the reference-free entropy scores, VENDI and RKE, have been proposed to evaluate the diversity of generated data. However, estimating these scores from data leads to significant computational costs for large-scale generative models. In this work, we leverage the random Fourier features framework to reduce the computational price and propose the Fourier-based Kernel Entropy Approximation (FKEA) method. We utilize FKEA's approximated eigenspectrum of the kernel matrix to efficiently estimate the mentioned entropy scores. Furthermore, we show the application of FKEA's proxy eigenvectors to reveal the method's identified modes in evaluating the diversity of produced samples. We provide a stochastic implementation of the FKEA assessment algorithm with a complexity $O(n)$ linearly growing with sample size $n$. We extensively evaluate FKEA's numerical performance in application to standard image, text, and video datasets. Our empirical results indicate the method's scalability and interpretability applied to large-scale generative models. The codebase is available at https://github.com/aziksh-ospanov/FKEA.
|
http://arxiv.org/pdf/2407.02961v1
|
[
"Azim Ospanov",
"Jingwei Zhang",
"Mohammad Jalali",
"Xuenan Cao",
"Andrej Bogdanov",
"Farzan Farnia"
] |
2024-07-03T09:54:58Z
|
2024-07-03T09:54:58Z
|
2407.02960
|
ObfuscaTune: Obfuscated Offsite Fine-tuning and Inference of Proprietary
LLMs on Private Datasets
|
This work addresses the timely yet underexplored problem of performing inference and finetuning of a proprietary LLM owned by a model provider entity on the confidential/private data of another data owner entity, in a way that ensures the confidentiality of both the model and the data. Hereby, the finetuning is conducted offsite, i.e., on the computation infrastructure of a third-party cloud provider. We tackle this problem by proposing ObfuscaTune, a novel, efficient and fully utility-preserving approach that combines a simple yet effective obfuscation technique with an efficient usage of confidential computing (only 5% of the model parameters are placed on TEE). We empirically demonstrate the effectiveness of ObfuscaTune by validating it on GPT-2 models with different sizes on four NLP benchmark datasets. Finally, we compare to a na"ive version of our approach to highlight the necessity of using random matrices with low condition numbers in our approach to reduce errors induced by the obfuscation.
|
http://arxiv.org/pdf/2407.02960v1
|
[
"Ahmed Frikha",
"Nassim Walha",
"Ricardo Mendes",
"Krishna Kanth Nakka",
"Xue Jiang",
"Xuebing Zhou"
] |
2024-07-03T09:54:08Z
|
2024-07-03T09:54:08Z
|
2407.02956
|
IncogniText: Privacy-enhancing Conditional Text Anonymization via
LLM-based Private Attribute Randomization
|
In this work, we address the problem of text anonymization where the goal is to prevent adversaries from correctly inferring private attributes of the author, while keeping the text utility, i.e., meaning and semantics. We propose IncogniText, a technique that anonymizes the text to mislead a potential adversary into predicting a wrong private attribute value. Our empirical evaluation shows a reduction of private attribute leakage by more than 90%. Finally, we demonstrate the maturity of IncogniText for real-world applications by distilling its anonymization capability into a set of LoRA parameters associated with an on-device model.
|
http://arxiv.org/pdf/2407.02956v1
|
[
"Ahmed Frikha",
"Nassim Walha",
"Krishna Kanth Nakka",
"Ricardo Mendes",
"Xue Jiang",
"Xuebing Zhou"
] |
2024-07-03T09:49:03Z
|
2024-07-03T09:49:03Z
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.