Dataset fields: arxiv_id (string, 7-11 chars), title (string, 7-243 chars), abstract (string, 3-2.79k chars), link (string, 21-49 chars), authors (list, 1-451 items), updated (string, 20 chars), published (string, 20 chars)
2403.11901
Larimar: Large Language Models with Episodic Memory Control
Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar - a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar not only attains accuracy comparable to that of the most competitive baselines, even in the challenging sequential editing setup, but also excels in speed - yielding speed-ups of 8-10x depending on the base LLM - as well as flexibility, as the proposed architecture is simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar and show their effectiveness. Our code is available at https://github.com/IBM/larimar
http://arxiv.org/pdf/2403.11901v3
[ "Payel Das", "Subhajit Chaudhury", "Elliot Nelson", "Igor Melnyk", "Sarath Swaminathan", "Sihui Dai", "Aurélie Lozano", "Georgios Kollias", "Vijil Chenthamarakshan", "Jiří", "Navrátil", "Soham Dan", "Pin-Yu Chen" ]
2024-07-07T00:51:44Z
2024-03-18T16:01:42Z
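Where the Larimar entry above describes one-shot memory writes without retraining, the following is a minimal, illustrative sketch of a generic key-value episodic memory with gradient-free writes; the class name, dimensions, and softmax addressing are assumptions for exposition, not Larimar's actual architecture.

```python
import numpy as np

# Hypothetical episodic key-value memory: one-shot rank-1 writes, no training.
class EpisodicMemory:
    def __init__(self, num_slots=64, dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.K = rng.standard_normal((num_slots, dim))  # fixed key matrix
        self.V = np.zeros((num_slots, dim))             # writable value slots

    def _address(self, z_key):
        w = self.K @ z_key                     # slot addressing logits
        w = np.exp(w - w.max())
        return w / w.sum()                     # softmax weights

    def write(self, z_key, z_value):
        # One-shot update: distribute the value over slots by key similarity.
        self.V += np.outer(self._address(z_key), z_value)

    def read(self, z_key):
        return self._address(z_key) @ self.V   # convex combination of slots

mem = EpisodicMemory()
fact = np.random.default_rng(1).standard_normal(32)
mem.write(fact, fact)
r = mem.read(fact)
cos = r @ fact / (np.linalg.norm(r) * np.linalg.norm(fact))
print(f"cosine similarity of recall: {cos:.3f}")  # 1.000: direction recalled up to scale
```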
2404.05980
Tackling Structural Hallucination in Image Translation with Local Diffusion
Recent developments in diffusion models have advanced conditioned image generation, yet they struggle with reconstructing out-of-distribution (OOD) images, such as unseen tumors in medical images, causing "image hallucination" and risking misdiagnosis. We hypothesize that such hallucinations result from local OOD regions in the conditional images. We verify that partitioning the OOD region and conducting separate image generations alleviates hallucinations in several applications. From this, we propose a training-free diffusion framework that reduces hallucination with multiple Local Diffusion processes. Our approach involves OOD estimation followed by two modules: a "branching" module that generates locally both within and outside OOD regions, and a "fusion" module that integrates these predictions into one. Our evaluation shows that our method mitigates hallucination over baseline models quantitatively and qualitatively, reducing misdiagnosis by 40% and 25% on real-world medical and natural image datasets, respectively. It also demonstrates compatibility with various pre-trained diffusion models.
http://arxiv.org/pdf/2404.05980v4
[ "Seunghoi Kim", "Chen Jin", "Tom Diethe", "Matteo Figini", "Henry F. J. Tregidgo", "Asher Mullokandov", "Philip Teare", "Daniel C. Alexander" ]
2024-07-06T23:15:17Z
2024-04-09T03:24:10Z
2407.05194
LLMCloudHunter: Harnessing LLMs for Automated Extraction of Detection Rules from Cloud-Based CTI
As the number and sophistication of cyber attacks have increased, threat hunting has become a critical aspect of active security, enabling proactive detection and mitigation of threats before they cause significant harm. Open-source cyber threat intelligence (OSCTI) is a valuable resource for threat hunters; however, it often comes in unstructured formats that require further manual analysis. Previous studies aimed at automating OSCTI analysis are limited since (1) they failed to provide actionable outputs, (2) they did not take advantage of images present in OSCTI sources, and (3) they focused on on-premises environments, overlooking the growing importance of cloud environments. To address these gaps, we propose LLMCloudHunter, a novel framework that leverages large language models (LLMs) to automatically generate generic-signature detection rule candidates from textual and visual OSCTI data. We evaluated the quality of the rules generated by the proposed framework using 12 annotated real-world cloud threat reports. The results show that our framework achieved a precision of 92% and recall of 98% for the task of accurately extracting API calls made by the threat actor, and a precision of 99% with a recall of 98% for IoCs. Additionally, 99.18% of the generated detection rule candidates were successfully compiled and converted into Splunk queries.
http://arxiv.org/pdf/2407.05194v1
[ "Yuval Schwartz", "Lavi Benshimol", "Dudu Mimran", "Yuval Elovici", "Asaf Shabtai" ]
2024-07-06T21:43:35Z
2024-07-06T21:43:35Z
2101.03735
Biomanufacturing Harvest Optimization with Small Data
In biopharmaceutical manufacturing, fermentation processes play a critical role in productivity and profit. A fermentation process uses living cells with complex biological mechanisms, leading to high variability in the process outputs, namely, the protein and impurity levels. By building on the biological mechanisms of protein and impurity growth, we introduce a stochastic model to characterize the accumulation of the protein and impurity levels in the fermentation process. However, a common challenge in the industry is the availability of only a very limited amount of data, especially in the development and early stages of production. This adds an additional layer of uncertainty, referred to as model risk, due to the difficulty of estimating the model parameters with limited data. In this paper, we study the harvesting decision for a fermentation process (i.e., when to stop the fermentation and collect the production reward) under model risk. We adopt a Bayesian approach to update the unknown parameters of the growth-rate distributions, and use the resulting posterior distributions to characterize the impact of model risk on fermentation output variability. The harvesting problem is formulated as a Markov decision process model with knowledge states that summarize the posterior distributions and hence incorporate the model risk in decision-making. Our case studies at MSD Animal Health demonstrate that the proposed model and solution approach improve the harvesting decisions in real life by achieving substantially higher average output from a fermentation batch along with lower batch-to-batch variability.
http://arxiv.org/pdf/2101.03735v5
[ "Bo Wang", "Wei Xie", "Tugce Martagan", "Alp Akcay", "Bram van Ravenstein" ]
2024-07-06T21:16:59Z
2021-01-11T07:47:25Z
2407.05182
A Novel Bifurcation Method for Observation Perturbation Attacks on Reinforcement Learning Agents: Load Altering Attacks on a Cyber Physical Power System
Components of cyber physical systems, which affect real-world processes, are often exposed to the internet. Replacing conventional control methods with Deep Reinforcement Learning (DRL) in energy systems is an active area of research, as these systems become increasingly complex with the advent of renewable energy sources and the desire to improve their efficiency. Artificial Neural Networks (ANNs) are vulnerable to specific perturbations of their inputs or features, called adversarial examples. These perturbations are difficult to detect when properly regularized, but have significant effects on the ANN's output. Because DRL uses ANNs to map observations to optimal actions, it is similarly vulnerable to adversarial examples. This work proposes a novel attack technique for continuous control using Group Difference Logits loss with a bifurcation layer. By combining aspects of targeted and untargeted attacks, the attack significantly increases the impact compared to an untargeted attack, with drastically smaller distortions than an optimally targeted attack. We demonstrate the impacts of powerful gradient-based attacks in a realistic smart energy environment, show how the impacts change with different DRL agents and training procedures, and use statistical and time-series analysis to evaluate attacks' stealth. The results show that adversarial attacks can have significant impacts on DRL controllers, and constraining an attack's perturbations makes it difficult to detect. However, certain DRL architectures are far more robust, and robust training methods can further reduce the impact.
http://arxiv.org/pdf/2407.05182v1
[ "Kiernan Broda-Milian", "Ranwa Al-Mallah", "Hanane Dagdougui" ]
2024-07-06T20:55:24Z
2024-07-06T20:55:24Z
2407.05174
Synthetic Data Aided Federated Learning Using Foundation Models
In heterogeneous scenarios where the data distribution amongst the Federated Learning (FL) participants is Non-Independent and Identically Distributed (Non-IID), FL suffers from the well-known problem of data heterogeneity. This significantly degrades FL performance, as the global model struggles to converge. To solve this problem, we propose Differentially Private Synthetic Data Aided Federated Learning Using Foundation Models (DPSDA-FL), a novel data augmentation strategy that helps homogenize the local data present on the clients' side. DPSDA-FL improves the training of the local models by leveraging differentially private synthetic data generated from foundation models. We demonstrate the effectiveness of our approach by evaluating it on the benchmark image dataset CIFAR-10. Our experimental results show that DPSDA-FL can improve the class recall and classification accuracy of the global model by up to 26% and 9%, respectively, in FL with Non-IID issues.
http://arxiv.org/pdf/2407.05174v1
[ "Fatima Abacha", "Sin G. Teo", "Lucas C. Cordeiro", "Mustafa A. Mustafa" ]
2024-07-06T20:31:43Z
2024-07-06T20:31:43Z
2406.10445
Optimal Reward Labeling: Bridging Offline Preference and Reward-Based Reinforcement Learning
Offline reinforcement learning has become one of the most practical RL settings. A recent success story has been RLHF, offline preference-based RL (PBRL) with preferences from humans. However, most existing works on offline RL focus on the standard setting with scalar reward feedback. It remains unknown how to universally transfer the existing rich understanding of offline RL from the reward-based to the preference-based setting. In this work, we propose a general framework to bridge this gap. Our key insight is transforming preference feedback to scalar rewards via optimal reward labeling (ORL), after which any reward-based offline RL algorithm can be applied to the dataset with the reward labels. We theoretically show the connection between several recent PBRL techniques and our framework combined with specific offline RL algorithms in terms of how they utilize the preference signals. By combining reward labeling with different algorithms, our framework can lead to new and potentially more efficient offline PBRL algorithms. We empirically test our framework on preference datasets based on the standard D4RL benchmark. When combined with a variety of efficient reward-based offline RL algorithms, the learning result achieved under our framework is comparable to training the same algorithm on the dataset with actual rewards in many cases and better than the recent PBRL baselines in most cases.
http://arxiv.org/pdf/2406.10445v2
[ "Yinglun Xu", "David Zhu", "Rohan Gumaste", "Gagandeep Singh" ]
2024-07-06T20:03:16Z
2024-06-14T23:40:42Z
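As a rough illustration of the core move the ORL abstract above describes - converting pairwise preferences into scalar reward labels - the sketch below fits a Bradley-Terry model by gradient ascent. The function, hyperparameters, and data are hypothetical stand-ins, not the paper's optimal reward labeling procedure.

```python
import numpy as np

# Illustrative only: turn pairwise preferences over trajectory segments into
# scalar reward labels by maximizing Bradley-Terry log-likelihood.
def label_rewards(n_segments, prefs, steps=500, lr=0.1):
    r = np.zeros(n_segments)                  # one scalar label per segment
    for _ in range(steps):
        grad = np.zeros_like(r)
        for winner, loser in prefs:           # P(winner > loser) = sigmoid(r_w - r_l)
            p = 1.0 / (1.0 + np.exp(-(r[winner] - r[loser])))
            grad[winner] += 1.0 - p           # log-likelihood gradient
            grad[loser]  -= 1.0 - p
        r += lr * grad
    return r

prefs = [(0, 1), (0, 2), (1, 2)]              # segment 0 preferred over 1, etc.
print(label_rewards(3, prefs))                # labels ordered r0 > r1 > r2
```

Any reward-based offline RL algorithm could then consume the labeled dataset, which is the bridging step the framework formalizes.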
2102.11905
Grounded Relational Inference: Domain Knowledge Driven Explainable Autonomous Driving
Explainability is essential for autonomous vehicles and other robotic systems interacting with humans and other objects during operation. Humans need to understand and anticipate the actions taken by machines for trustworthy and safe cooperation. In this work, we aim to develop an explainable model that generates explanations consistent with both human domain knowledge and the model's inherent causal relation. In particular, we focus on an essential building block of autonomous driving, multi-agent interaction modeling. We propose Grounded Relational Inference (GRI). It models an interactive system's underlying dynamics by inferring an interaction graph representing the agents' relations. We ensure a semantically meaningful interaction graph by grounding the relational latent space into semantic interactive behaviors defined with expert domain knowledge. We demonstrate that it can model interactive traffic scenarios under both simulation and real-world settings, and generate semantic graphs explaining the vehicle's behavior by their interactions.
http://arxiv.org/pdf/2102.11905v3
[ "Chen Tang", "Nishan Srishankar", "Sujitha Martin", "Masayoshi Tomizuka" ]
2024-07-06T19:40:13Z
2021-02-23T19:34:32Z
2407.05145
On high-dimensional modifications of the nearest neighbor classifier
The nearest neighbor classifier is arguably the simplest and most popular nonparametric classifier in the literature. However, due to the concentration of pairwise distances and the violation of the neighborhood structure, this classifier often suffers in high-dimension, low-sample size (HDLSS) situations, especially when the scale difference between the competing classes dominates their location difference. Several attempts have been made in the literature to address this problem. In this article, we discuss some of these existing methods and propose some new ones. We carry out some theoretical investigations in this regard and analyze several simulated and benchmark datasets to compare the empirical performance of the proposed methods with that of some existing ones.
http://arxiv.org/pdf/2407.05145v1
[ "Annesha Ghosh", "Bilol Banerjee", "Anil K. Ghosh" ]
2024-07-06T17:53:53Z
2024-07-06T17:53:53Z
2407.05141
Impact of Network Topology on Byzantine Resilience in Decentralized Federated Learning
Federated learning (FL) enables a collaborative environment for training machine learning models without sharing training data between users. This is typically achieved by aggregating model gradients on a central server. Decentralized federated learning is a rising paradigm that enables users to collaboratively train machine learning models in a peer-to-peer manner, without the need for a central aggregation server. However, before applying decentralized FL in real-world training environments, nodes that deviate from the FL process (Byzantine nodes) must be considered when selecting an aggregation function. Recent research has focused on Byzantine-robust aggregation for client-server or fully connected networks, but has not yet evaluated such aggregation schemes for complex topologies possible with decentralized FL. Thus, the need for empirical evidence of Byzantine robustness in differing network topologies is evident. This work investigates the effects of state-of-the-art Byzantine-robust aggregation methods in complex, large-scale network structures. We find that state-of-the-art Byzantine-robust aggregation strategies are not resilient within large non-fully connected networks. As such, our findings point the field towards the development of topology-aware aggregation schemes, which are especially necessary in the context of large-scale real-world deployment.
http://arxiv.org/pdf/2407.05141v1
[ "Siddhartha Bhattacharya", "Daniel Helo", "Joshua Siegel" ]
2024-07-06T17:47:44Z
2024-07-06T17:47:44Z
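For intuition on the aggregation rules the study above evaluates, here is a minimal sketch of one classic Byzantine-robust aggregator, the coordinate-wise median, applied per node over its graph neighborhood; the toy updates, adjacency, and parameters are assumptions, not the paper's experimental setup.

```python
import numpy as np

# Coordinate-wise median over a node's neighborhood: robust to a minority
# of outlier (Byzantine) updates, unlike plain averaging.
def neighborhood_median(updates, adjacency, node):
    neighbors = [i for i, a in enumerate(adjacency[node]) if a] + [node]
    stacked = np.stack([updates[i] for i in neighbors])  # (k, dim)
    return np.median(stacked, axis=0)

updates = [np.ones(4), np.ones(4), 100 * np.ones(4)]  # node 2 is Byzantine
adjacency = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]         # fully connected here
print(neighborhood_median(updates, adjacency, 0))     # -> [1. 1. 1. 1.]
```

The paper's finding is precisely that rules like this, tuned for client-server or fully connected settings, can break down when the neighborhood structure is sparse or irregular.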
2407.05134
Solving for X and Beyond: Can Large Language Models Solve Complex Math Problems with More-Than-Two Unknowns?
Large Language Models (LLMs) have demonstrated remarkable performance in solving math problems, a hallmark of human intelligence. However, despite high success rates on current benchmarks, these benchmarks often feature simple problems with only one or two unknowns, which do not sufficiently challenge the models' reasoning capacities. This paper introduces a novel benchmark, BeyondX, designed to address these limitations by incorporating problems with multiple unknowns. Recognizing the challenges in proposing multi-unknown problems from scratch, we developed BeyondX using an innovative automated pipeline that progressively increases complexity by expanding the number of unknowns in simpler problems. An empirical study on BeyondX reveals that the performance of existing LLMs, even those fine-tuned specifically on math tasks, significantly decreases as the number of unknowns increases - with a performance drop of up to 70% observed in GPT-4. To tackle these challenges, we propose the Formulate-and-Solve strategy, a generalized prompting approach that effectively handles problems with an arbitrary number of unknowns. Our findings reveal that this strategy not only enhances LLM performance on the BeyondX benchmark but also provides deeper insights into the computational limits of LLMs when faced with more complex mathematical challenges.
http://arxiv.org/pdf/2407.05134v1
[ "Kuei-Chun Kao", "Ruochen Wang", "Cho-Jui Hsieh" ]
2024-07-06T17:01:04Z
2024-07-06T17:01:04Z
2407.05131
RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models
The recent emergence of Medical Large Vision Language Models (Med-LVLMs) has enhanced medical diagnosis. However, current Med-LVLMs frequently encounter factual issues, often generating responses that do not align with established medical facts. Retrieval-Augmented Generation (RAG), which utilizes external knowledge, can improve the factual accuracy of these models but introduces two major challenges. First, limited retrieved contexts might not cover all necessary information, while excessive retrieval can introduce irrelevant and inaccurate references, interfering with the model's generation. Second, in cases where the model originally responds correctly, applying RAG can lead to an over-reliance on retrieved contexts, resulting in incorrect answers. To address these issues, we propose RULE, which consists of two components. First, we introduce a provably effective strategy for controlling factuality risk through the calibrated selection of the number of retrieved contexts. Second, based on samples where over-reliance on retrieved contexts led to errors, we curate a preference dataset to fine-tune the model, balancing its dependence on inherent knowledge and retrieved contexts for generation. We demonstrate the effectiveness of RULE on three medical VQA datasets, achieving an average improvement of 20.8% in factual accuracy. We publicly release our benchmark and code at https://github.com/richard-peng-xia/RULE.
http://arxiv.org/pdf/2407.05131v1
[ "Peng Xia", "Kangyu Zhu", "Haoran Li", "Hongtu Zhu", "Yun Li", "Gang Li", "Linjun Zhang", "Huaxiu Yao" ]
2024-07-06T16:45:07Z
2024-07-06T16:45:07Z
2407.05125
A Joint Approach to Local Updating and Gradient Compression for Efficient Asynchronous Federated Learning
Asynchronous Federated Learning (AFL) confronts inherent challenges arising from the heterogeneity of devices (e.g., their computation capacities) and low-bandwidth environments, both potentially causing stale model updates (e.g., local gradients) for global aggregation. Traditional approaches mitigating the staleness of updates typically focus on either adjusting the local updating or gradient compression, but not both. Recognizing this gap, we introduce a novel approach that synergizes local updating with gradient compression. Our research begins by examining the interplay between local updating frequency and gradient compression rate, and their collective impact on convergence speed. The theoretical upper bound shows that the local updating frequency and gradient compression rate of each device are jointly determined by its computing power, communication capabilities and other factors. Building on this foundation, we propose an AFL framework called FedLuck that adaptively optimizes both local update frequency and gradient compression rates. Experiments on image classification and speech recognition show that FedLuck reduces communication consumption by 56% and training time by 55% on average, achieving competitive performance in heterogeneous and low-bandwidth scenarios compared to the baselines.
http://arxiv.org/pdf/2407.05125v1
[ "Jiajun Song", "Jiajun Luo", "Rongwei Lu", "Shuzhao Xie", "Bin Chen", "Zhi Wang" ]
2024-07-06T16:19:06Z
2024-07-06T16:19:06Z
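The FedLuck abstract above couples local update frequency with gradient compression; the snippet below sketches only the compression half using top-k sparsification, a common operator, with assumed names and rates - it does not reproduce FedLuck's adaptive joint optimization.

```python
import numpy as np

# Top-k sparsification: transmit only the k largest-magnitude gradient
# entries (values plus indices), zeroing the rest.
def topk_compress(grad, rate=0.1):
    k = max(1, int(rate * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # k largest-magnitude entries
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.random.default_rng(0).standard_normal(10)
print(topk_compress(g, rate=0.3))  # 3 of 10 entries survive
```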
2402.14730
Clifford-Steerable Convolutional Neural Networks
We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of $\mathrm{E}(p, q)$-equivariant CNNs. CS-CNNs process multivector fields on pseudo-Euclidean spaces $\mathbb{R}^{p,q}$. They cover, for instance, $\mathrm{E}(3)$-equivariance on $\mathbb{R}^3$ and Poincaré-equivariance on Minkowski spacetime $\mathbb{R}^{1,3}$. Our approach is based on an implicit parametrization of $\mathrm{O}(p,q)$-steerable kernels via Clifford group equivariant neural networks. We significantly and consistently outperform baseline methods on fluid dynamics as well as relativistic electrodynamics forecasting tasks.
http://arxiv.org/pdf/2402.14730v3
[ "Maksim Zhdanov", "David Ruhe", "Maurice Weiler", "Ana Lucic", "Johannes Brandstetter", "Patrick Forré" ]
2024-07-06T16:10:29Z
2024-02-22T17:42:15Z
2407.05108
The Role of Depth, Width, and Tree Size in Expressiveness of Deep Forest
Random forests are classical ensemble algorithms that construct multiple randomized decision trees and aggregate their predictions using naive averaging. \citet{zhou2019deep} further propose a deep forest algorithm with multi-layer forests, which outperforms random forests in various tasks. The performance of deep forests is related to three hyperparameters in practice: depth, width, and tree size, but little is known about their theoretical explanation. This work provides the first upper and lower bounds on the approximation complexity of deep forests with respect to the three hyperparameters. Our results confirm the distinctive role of depth, which can exponentially enhance the expressiveness of deep forests compared with width and tree size. Experiments confirm the theoretical findings.
http://arxiv.org/pdf/2407.05108v1
[ "Shen-Huan Lyu", "Jin-Hui Wu", "Qin-Cheng Zheng", "Baoliu Ye" ]
2024-07-06T15:32:54Z
2024-07-06T15:32:54Z
2406.12921
WindowMixer: Intra-Window and Inter-Window Modeling for Time Series Forecasting
Time series forecasting (TSF) is crucial in fields like economic forecasting, weather prediction, traffic flow analysis, and public health surveillance. Real-world time series data often include noise, outliers, and missing values, making accurate forecasting challenging. Traditional methods model point-to-point relationships, which limits their ability to capture complex temporal patterns and increases their susceptibility to noise. To address these issues, we introduce the WindowMixer model, built on an all-MLP framework. WindowMixer leverages the continuous nature of time series by examining temporal variations from a window-based perspective. It decomposes time series into trend and seasonal components, handling them individually. For trends, a fully connected (FC) layer makes predictions. For seasonal components, time windows are projected to produce window tokens, processed by Intra-Window-Mixer and Inter-Window-Mixer modules. The Intra-Window-Mixer models relationships within each window, while the Inter-Window-Mixer models relationships between windows. This approach captures intricate patterns and long-range dependencies in the data. Experiments show WindowMixer consistently outperforms existing methods in both long-term and short-term forecasting tasks.
http://arxiv.org/pdf/2406.12921v2
[ "Quangao Liu", "Ruiqi Li", "Maowei Jiang", "Wei Yang", "Chen Liang", "LongLong Pang", "Zhuozhang Zou" ]
2024-07-06T15:14:20Z
2024-06-14T08:09:39Z
2311.04131
Towards Interpretable Sequence Continuation: Analyzing Shared Circuits in Large Language Models
While transformer models exhibit strong capabilities on linguistic tasks, their complex architectures make them difficult to interpret. Recent work has aimed to reverse engineer transformer models into human-readable representations called circuits that implement algorithmic functions. We extend this research by analyzing and comparing circuits for similar sequence continuation tasks, which include increasing sequences of Arabic numerals, number words, and months. By applying circuit interpretability analysis, we identify a key sub-circuit in both GPT-2 Small and Llama-2-7B responsible for detecting sequence members and for predicting the next member in a sequence. Our analysis reveals that semantically related sequences rely on shared circuit subgraphs with analogous roles. Additionally, we show that this sub-circuit has effects on various math-related prompts, such as on intervaled circuits, Spanish number word and months continuation, and natural language word problems. Overall, documenting shared computational structures enables better model behavior predictions, identification of errors, and safer editing procedures. This mechanistic understanding of transformers is a critical step towards building more robust, aligned, and interpretable language models.
http://arxiv.org/pdf/2311.04131v4
[ "Michael Lan", "Phillip Torr", "Fazl Barez" ]
2024-07-06T15:14:03Z
2023-11-07T16:58:51Z
2402.08280
Pix2Code: Learning to Compose Neural Visual Concepts as Programs
The challenge in learning abstract concepts from images in an unsupervised fashion lies in the required integration of visual perception and generalizable relational reasoning. Moreover, the unsupervised nature of this task makes it necessary for human users to be able to understand a model's learnt concepts and potentially revise false behaviours. To tackle both the generalizability and interpretability constraints of visual concept learning, we propose Pix2Code, a framework that extends program synthesis to visual relational reasoning by utilizing the abilities of both explicit, compositional symbolic and implicit neural representations. This is achieved by retrieving object representations from images and synthesizing relational concepts as lambda-calculus programs. We evaluate the diverse properties of Pix2Code on the challenging reasoning domains, Kandinsky Patterns and CURI, thereby testing its ability to identify compositional visual concepts that generalize to novel data and concept configurations. Particularly, in stark contrast to neural approaches, we show that Pix2Code's representations remain human interpretable and can be easily revised for improved performance.
http://arxiv.org/pdf/2402.08280v2
[ "Antonia Wüst", "Wolfgang Stammer", "Quentin Delfosse", "Devendra Singh Dhami", "Kristian Kersting" ]
2024-07-06T15:07:57Z
2024-02-13T08:14:10Z
2406.03736
Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data
Discrete diffusion models with absorbing processes have shown promise in language modeling. The key quantities to be estimated are the ratios between the marginal probabilities of two transitive states at all timesteps, called the concrete score. In this paper, we reveal that the concrete score in absorbing diffusion can be expressed as conditional probabilities of clean data, multiplied by a time-dependent scalar in an analytic form. Motivated by this finding, we propose reparameterized absorbing discrete diffusion (RADD), a dedicated diffusion model without time conditioning that characterizes the time-independent conditional probabilities. Besides its simplicity, RADD can reduce the number of function evaluations (NFEs) by caching the output of the time-independent network when the noisy sample remains unchanged in a sampling interval. Empirically, RADD is up to 3.5 times faster while achieving performance similar to that of the strongest baseline. Built upon the new perspective of conditional distributions, we further unify absorbing discrete diffusion and any-order autoregressive models (AO-ARMs), showing that the upper bound on the negative log-likelihood for the diffusion model can be interpreted as an expected negative log-likelihood for AO-ARMs. Further, our RADD models achieve SOTA performance among diffusion models on 5 zero-shot language modeling benchmarks (measured by perplexity) at the GPT-2 scale. Our code is available at https://github.com/ML-GSAI/RADD.
http://arxiv.org/pdf/2406.03736v2
[ "Jingyang Ou", "Shen Nie", "Kaiwen Xue", "Fengqi Zhu", "Jiacheng Sun", "Zhenguo Li", "Chongxuan Li" ]
2024-07-06T14:40:08Z
2024-06-06T04:22:11Z
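To make the NFE-caching idea in the RADD abstract concrete, here is a schematic sampler that reuses the time-independent network output whenever the noisy sequence is unchanged between steps; all function names and the toy update rule are assumptions for illustration, not the released code.

```python
import random

# Cache the time-independent network output across sampling steps: if the
# noisy sequence did not change, skip the function evaluation entirely.
def sample_with_cache(network, x, timesteps, update_rule):
    cache_key, cache_out = None, None
    nfes = 0
    for t in timesteps:
        key = tuple(x)                      # sequence state as cache key
        if key != cache_key:
            cache_out = network(x)          # time-independent conditionals
            cache_key = key
            nfes += 1                       # count a function evaluation
        x = update_rule(x, cache_out, t)    # may leave x unchanged
    return x, nfes

def network(x):              # dummy stand-in for the learned conditionals
    return list(x)

def update_rule(x, out, t):  # toy rule: often keeps the state unchanged
    return x if random.random() < 0.7 else [v + 1 for v in x]

x, nfes = sample_with_cache(network, [0, 0, 0], range(20), update_rule)
print(nfes, "network calls for 20 steps")  # typically well below 20
```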
2008.07902
Bayesian geoacoustic inversion using mixture density network
Bayesian geoacoustic inversion problems are conventionally solved by Markov chain Monte Carlo methods or their variants, which are computationally expensive. This paper extends the classic Bayesian geoacoustic inversion framework by deriving important geoacoustic statistics of Bayesian geoacoustic inversion from the multidimensional posterior probability density (PPD) using the mixture density network (MDN) theory. These statistics make it convenient to train the network directly on the whole parameter space and obtain the multidimensional PPD of model parameters. The present approach provides a much more efficient way to solve geoacoustic inversion problems in the Bayesian inference framework. The network is trained on a simulated dataset of surface-wave dispersion curves with shear-wave velocities as labels and tested on both synthetic and real data cases. The results show that the network gives reliable predictions and has good generalization performance on unseen data. Once trained, the network can rapidly (within seconds) give a fully probabilistic solution which is comparable to Monte Carlo methods. It provides a promising approach for real-time inversion.
http://arxiv.org/pdf/2008.07902v4
[ "Guoli Wu", "Jingya Zhang", "Junqiang Song" ]
2024-07-06T14:29:20Z
2020-08-18T13:02:40Z
2302.06555
Do Vision and Language Models Share Concepts? A Vector Space Alignment Study
Large-scale pretrained language models (LMs) are said to "lack the ability to connect utterances to the world" (Bender and Koller, 2020), because they do not have "mental models of the world" (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).
http://arxiv.org/pdf/2302.06555v2
[ "Jiaang Li", "Yova Kementchedjhieva", "Constanza Fierro", "Anders Søgaard" ]
2024-07-06T14:27:55Z
2023-02-13T17:55:54Z
2407.06226
Quantum Machine Learning with Application to Progressive Supranuclear Palsy Network Classification
Machine learning and quantum computing are being progressively explored to shed light on possible computational approaches to deal with hitherto unsolvable problems. Classical methods for machine learning are ubiquitous in pattern recognition, with support vector machines (SVMs) being a prominent technique for network classification. However, there are limitations to the successful resolution of such classification instances when the input feature space becomes large, and the successive evaluation of so-called kernel functions becomes computationally exorbitant. The use of principal component analysis (PCA) substantially reduces the dimensionality of the feature space, thereby enabling computational speed-ups of supervised learning: the creation of a classifier. Further, the application of quantum-based learning to the PCA-reduced input feature space might offer an exponential speedup with fewer parameters. The present learning model is evaluated on a real clinical application: the diagnosis of Progressive Supranuclear Palsy (PSP) disorder. The results suggest that quantum machine learning has led to noticeable advancement and outperforms classical frameworks. The optimized variational quantum classifier classifies the PSP dataset with 86% accuracy as compared to conventional SVM. The other technique, a quantum kernel estimator, approximates the kernel function on the quantum machine and optimizes a classical SVM. In particular, we have demonstrated the successful application of the present model on both a quantum simulator and real chips of the IBM quantum platform.
http://arxiv.org/pdf/2407.06226v1
[ "Papri Saha" ]
2024-07-06T14:16:31Z
2024-07-06T14:16:31Z
2407.05082
DMTG: One-Shot Differentiable Multi-Task Grouping
We aim to address Multi-Task Learning (MTL) with a large number of tasks by Multi-Task Grouping (MTG). Given N tasks, we propose to simultaneously identify the best task groups from 2^N candidates and train the model weights in one shot, with the high-order task affinity fully exploited. This is distinct from the pioneering methods which sequentially identify the groups and train the model weights, where the group identification often relies on heuristics. As a result, our method not only improves the training efficiency, but also mitigates the objective bias introduced by the sequential procedures that potentially lead to a suboptimal solution. Specifically, we formulate MTG as a fully differentiable pruning problem on an adaptive network architecture determined by an underlying Categorical distribution. To categorize N tasks into K groups (represented by K encoder branches), we initially set up KN task heads, where each branch connects to all N task heads to exploit the high-order task affinity. Then, we gradually prune the KN heads down to N by learning a relaxed differentiable Categorical distribution, ensuring that each task is exclusively and uniquely categorized into only one branch. Extensive experiments on CelebA and Taskonomy datasets with detailed ablations show the promising performance and efficiency of our method. The codes are available at https://github.com/ethanygao/DMTG.
http://arxiv.org/pdf/2407.05082v1
[ "Yuan Gao", "Shuguo Jiang", "Moran Li", "Jin-Gang Yu", "Gui-Song Xia" ]
2024-07-06T13:54:00Z
2024-07-06T13:54:00Z
2405.19752
Understanding Memory-Regret Trade-Off for Streaming Stochastic Multi-Armed Bandits
We study the stochastic multi-armed bandit problem in the $P$-pass streaming model. In this problem, the $n$ arms are present in a stream and at most $m<n$ arms and their statistics can be stored in the memory. We give a complete characterization of the optimal regret in terms of $m, n$ and $P$. Specifically, we design an algorithm with $\tilde O\left((n-m)^{1+\frac{2^{P}-2}{2^{P+1}-1}} n^{\frac{2-2^{P+1}}{2^{P+1}-1}} T^{\frac{2^P}{2^{P+1}-1}}\right)$ regret and complement it with an $\tilde \Omega\left((n-m)^{1+\frac{2^{P}-2}{2^{P+1}-1}} n^{\frac{2-2^{P+1}}{2^{P+1}-1}} T^{\frac{2^P}{2^{P+1}-1}}\right)$ lower bound when the number of rounds $T$ is sufficiently large. Our results are tight up to a logarithmic factor in $n$ and $P$.
http://arxiv.org/pdf/2405.19752v2
[ "Yuchen He", "Zichun Ye", "Chihao Zhang" ]
2024-07-06T13:43:21Z
2024-05-30T06:56:48Z
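As a sanity check on the stated bound, the single-pass case below works out the exponents for P = 1; this is only a reading of the formula above, not an additional result from the paper.

```latex
% For P = 1, the exponents \frac{2^{P}-2}{2^{P+1}-1}, \frac{2-2^{P+1}}{2^{P+1}-1},
% and \frac{2^{P}}{2^{P+1}-1} evaluate to 0, -\tfrac{2}{3}, and \tfrac{2}{3},
% so the characterization specializes to
\[
  \tilde\Theta\!\left((n-m)\, n^{-2/3}\, T^{2/3}\right),
\]
% i.e., a T^{2/3}-type rate for single-pass, memory-bounded bandits.
```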
2407.05051
BrainMetDetect: Predicting Primary Tumor from Brain Metastasis MRI Data Using Radiomic Features and Machine Learning Algorithms
Objective: Brain metastases (BMs) are common in cancer patients and determining the primary tumor site is crucial for effective treatment. This study aims to predict the primary tumor site from BM MRI data using radiomic features and advanced machine learning algorithms. Methods: We utilized a comprehensive dataset from Ocana-Tienda et al. (2023) comprising MRI and clinical data from 75 patients with BMs. Radiomic features were extracted from post-contrast T1-weighted MRI sequences. Feature selection was performed using the GINI index, and data normalization was applied to ensure consistent scaling. We developed and evaluated Random Forest and XGBoost classifiers, both with and without hyperparameter optimization using the FOX (Fox optimizer) algorithm. Model interpretability was enhanced using SHAP (SHapley Additive exPlanations) values. Results: The baseline Random Forest model achieved an accuracy of 0.85, which improved to 0.93 with FOX optimization. The XGBoost model showed an initial accuracy of 0.96, increasing to 0.99 after optimization. SHAP analysis revealed the most influential radiomic features contributing to the models' predictions. The FOX-optimized XGBoost model exhibited the best performance with a precision, recall, and F1-score of 0.99. Conclusion: This study demonstrates the effectiveness of using radiomic features and machine learning to predict primary tumor sites from BM MRI data. The FOX optimization algorithm significantly enhanced model performance, and SHAP provided valuable insights into feature importance. These findings highlight the potential of integrating radiomics and machine learning into clinical practice for improved diagnostic accuracy and personalized treatment planning.
http://arxiv.org/pdf/2407.05051v1
[ "Hamidreza Sadeghsalehi" ]
2024-07-06T11:34:00Z
2024-07-06T11:34:00Z
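The modeling recipe in the BrainMetDetect abstract (radiomic feature matrix, normalization, tree ensembles, cross-validated accuracy) can be sketched as below with placeholder random data; the FOX hyperparameter search and SHAP analysis are omitted, and nothing here reproduces the study's numbers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data shaped like the study: 75 patients, radiomic features,
# a small number of primary-tumor classes. Values are random, not clinical.
rng = np.random.default_rng(0)
X = rng.standard_normal((75, 100))       # 75 patients x 100 radiomic features
y = rng.integers(0, 3, size=75)          # placeholder primary-tumor labels

X = (X - X.mean(axis=0)) / X.std(axis=0) # consistent feature scaling
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # baseline CV accuracy
```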
2407.05040
Code Less, Align More: Efficient LLM Fine-tuning for Code Generation with Data Pruning
Recent work targeting large language models (LLMs) for code generation demonstrated that increasing the amount of training data through synthetic code generation often leads to exceptional performance. In this paper, we explore data pruning methods aimed at enhancing the efficiency of model training specifically for code LLMs. We present techniques that integrate various clustering and pruning metrics to selectively reduce training data without compromising the accuracy and functionality of the generated code. We observe significant redundancies in synthetic training data generation, where our experiments demonstrate that benchmark performance can be largely preserved by training on only 10% of the data. Moreover, we observe consistent improvements in benchmark results through moderate pruning of the training data. Our experiments show that these pruning strategies not only reduce the computational resources needed but also enhance the overall quality of the generated code.
http://arxiv.org/pdf/2407.05040v1
[ "Yun-Da Tsai", "Mingjie Liu", "Haoxing Ren" ]
2024-07-06T10:30:43Z
2024-07-06T10:30:43Z
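One plausible instantiation of the clustering-plus-pruning idea described above: cluster training-sample embeddings and keep roughly 10% of examples nearest the centroids. The paper combines several clustering and pruning metrics; this toy shows a single, assumed variant.

```python
import numpy as np
from sklearn.cluster import KMeans

# Keep the most prototypical ~10% of samples: cluster embeddings, then rank
# samples by distance to their assigned centroid.
def prune_by_clusters(embeddings, keep_frac=0.10, n_clusters=8, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(embeddings)
    dists = np.linalg.norm(embeddings - km.cluster_centers_[km.labels_], axis=1)
    k = max(1, int(keep_frac * len(embeddings)))
    return np.argsort(dists)[:k]  # indices of samples nearest their centroids

emb = np.random.default_rng(0).standard_normal((1000, 64))
print(len(prune_by_clusters(emb)), "of 1000 samples kept")
```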
2407.05036
Enhance the Robustness of Text-Centric Multimodal Alignments
Converting different modalities into general text, serving as input prompts for large language models (LLMs), is a common method to align multimodal models when there is limited pairwise data. This text-centric approach leverages the unique properties of text as a modality space, transforming diverse inputs into a unified textual representation. This enables downstream models to effectively interpret various modal inputs. This study assesses the quality and robustness of multimodal representations in the presence of missing entries, noise, or absent modalities, revealing that current text-centric alignment methods compromise downstream robustness. To address this issue, we propose a new text-centric approach that achieves superior robustness compared to previous methods across various modalities in different settings. Our findings highlight the potential of this approach to enhance the robustness and adaptability of multimodal representations, offering a promising solution for dynamic and real-world applications.
http://arxiv.org/pdf/2407.05036v1
[ "Ting-Yu Yen", "Yun-Da Tsai", "Keng-Te Liao", "Shou-De Lin" ]
2024-07-06T10:12:29Z
2024-07-06T10:12:29Z
2402.08424
Conditional Neural Expert Processes for Learning Movement Primitives from Demonstration
Learning from Demonstration (LfD) is a widely used technique for skill acquisition in robotics. However, demonstrations of the same skill may exhibit significant variances, or learning systems may attempt to acquire different means of the same skill simultaneously, making it challenging to encode these motions into movement primitives. To address these challenges, we propose an LfD framework, namely the Conditional Neural Expert Processes (CNEP), that learns to assign demonstrations from different modes to distinct expert networks utilizing the inherent information within the latent space to match experts with the encoded representations. CNEP does not require supervision on which mode the trajectories belong to. We compare the performance of CNEP against widely used and powerful LfD methods such as Gaussian Mixture Models, Probabilistic Movement Primitives, and Stable Movement Primitives and show that our method outperforms these baselines on multimodal trajectory datasets. The results reveal enhanced modeling performance for movement primitives, leading to the synthesis of trajectories that more accurately reflect those demonstrated by experts, particularly when the skill demonstrations include intersection points from various trajectories. We evaluated the CNEP model on two real-robot tasks, namely obstacle avoidance and pick-and-place tasks, that require the robot to learn multi-modal motion trajectories and execute the correct primitives given target environment conditions. We also showed that our system is capable of on-the-fly adaptation to environmental changes via an online conditioning mechanism. Lastly, we believe that CNEP offers improved explainability and interpretability by autonomously finding discrete behavior primitives and providing probability values about its expert selection decisions.
http://arxiv.org/pdf/2402.08424v2
[ "Yigit Yildirim", "Emre Ugur" ]
2024-07-06T09:40:54Z
2024-02-13T12:52:02Z
2406.09046
ExioML: Eco-economic dataset for Machine Learning in Global Sectoral Sustainability
The Environmental Extended Multi-Regional Input-Output analysis is the predominant framework in Ecological Economics for assessing the environmental impact of economic activities. This paper introduces ExioML, the first Machine Learning benchmark dataset designed for sustainability analysis, aimed at lowering barriers and fostering collaboration between Machine Learning and Ecological Economics research. A crucial greenhouse gas emission regression task was conducted to evaluate sectoral sustainability and demonstrate the usability of the dataset. We compared the performance of traditional shallow models with deep learning models, utilizing a diverse Factor Accounting table and incorporating various categorical and numerical features. Our findings reveal that ExioML, with its high usability, enables deep and ensemble models to achieve low mean square errors, establishing a baseline for future Machine Learning research. Through ExioML, we aim to build a foundational dataset supporting various Machine Learning applications and promote climate actions and sustainable investment decisions.
http://arxiv.org/pdf/2406.09046v2
[ "Yanming Guo", "Charles Guan", "Jin Ma" ]
2024-07-06T09:25:10Z
2024-06-11T17:06:34Z
2406.10552
Large Language Model Enhanced Clustering for News Event Detection
The news landscape is continuously evolving, with an ever-increasing volume of information from around the world. Automated event detection within this vast data repository is essential for monitoring, identifying, and categorizing significant news occurrences across diverse platforms. This paper presents an event detection framework that leverages Large Language Models (LLMs) combined with clustering analysis to detect news events from the Global Database of Events, Language, and Tone (GDELT). The framework enhances event clustering through both pre-event detection tasks (keyword extraction and text embedding) and post-event detection tasks (event summarization and topic labelling). We also evaluate the impact of various textual embeddings on the quality of clustering outcomes, ensuring robust news categorization. Additionally, we introduce a novel Cluster Stability Assessment Index (CSAI) to assess the validity and robustness of clustering results. CSAI utilizes multiple feature vectors to provide a new way of measuring clustering quality. Our experiments indicate that the use of LLM embedding in the event detection framework has significantly improved the results, demonstrating greater robustness in terms of CSAI scores. Moreover, post-event detection tasks generate meaningful insights, facilitating effective interpretation of event clustering results. Overall, our experimental results indicate that the proposed framework offers valuable insights and could enhance the accuracy in news analysis and reporting.
http://arxiv.org/pdf/2406.10552v4
[ "Adane Nega Tarekegn" ]
2024-07-06T09:19:08Z
2024-06-15T08:13:47Z
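A skeleton of the embed-then-cluster stage the framework above builds on, with TF-IDF standing in for LLM embeddings; the keyword extraction, summarization, topic labelling, and CSAI stability index described in the abstract are all omitted from this toy.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Embed short news texts and cluster them into candidate events.
docs = ["flood hits coastal city", "city flooding displaces residents",
        "election results announced", "votes counted in national election"]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # expected: the flood stories share a label, as do the election ones
```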
2407.05005
Personalized Federated Domain-Incremental Learning based on Adaptive Knowledge Matching
This paper focuses on Federated Domain-Incremental Learning (FDIL), where each client continues to learn incremental tasks whose domains shift across clients. We propose pFedDIL, a novel adaptive knowledge-matching-based personalized FDIL approach, which allows each client to adopt an appropriate incremental-task learning strategy based on the correlation with knowledge from previous tasks. More specifically, when a new task arrives, each client first calculates its local correlations with previous tasks. Then, the client can choose to adopt a new initial model or a previous model with similar knowledge to train the new task, and simultaneously migrate knowledge from previous tasks based on these correlations. Furthermore, to identify the correlations between the new task and previous tasks for each client, we separately employ an auxiliary classifier for each target classification model and propose sharing partial parameters between the target classification model and the auxiliary classifier to condense model parameters. We conduct extensive experiments on several datasets, the results of which demonstrate that pFedDIL outperforms state-of-the-art methods by up to 14.35% in terms of average accuracy over all tasks.
http://arxiv.org/pdf/2407.05005v1
[ "Yichen Li", "Wenchao Xu", "Haozhao Wang", "Ruixuan Li", "Yining Qi", "Jingcai Guo" ]
2024-07-06T08:57:22Z
2024-07-06T08:57:22Z
2407.05000
LoRA-GA: Low-Rank Adaptation with Gradient Approximation
Fine-tuning large-scale pretrained models is prohibitively expensive in terms of computational and memory costs. LoRA, as one of the most popular Parameter-Efficient Fine-Tuning (PEFT) methods, offers a cost-effective alternative by fine-tuning an auxiliary low-rank model that has significantly fewer parameters. Although LoRA reduces the computational and memory requirements significantly at each iteration, extensive empirical evidence indicates that it converges at a considerably slower rate compared to full fine-tuning, ultimately leading to increased overall compute and often worse test performance. In our paper, we perform an in-depth investigation of the initialization method of LoRA and show that careful initialization (without any change of the architecture and the training algorithm) can significantly enhance both efficiency and performance. In particular, we introduce a novel initialization method, LoRA-GA (Low Rank Adaptation with Gradient Approximation), which aligns the gradients of the low-rank matrix product with those of full fine-tuning at the first step. Our extensive experiments demonstrate that LoRA-GA achieves a convergence rate comparable to that of full fine-tuning (hence being significantly faster than vanilla LoRA as well as various recent improvements) while simultaneously attaining comparable or even better performance. For example, on the subset of the GLUE dataset with T5-Base, LoRA-GA outperforms LoRA by 5.69% on average. On larger models such as Llama 2-7B, LoRA-GA shows performance improvements of 0.34, 11.52%, and 5.05% on MT-bench, GSM8K, and Human-eval, respectively. Additionally, we observe up to 2-4 times convergence speed improvement compared to vanilla LoRA, validating its effectiveness in accelerating convergence and enhancing model performance. Code is available at https://github.com/Outsider565/LoRA-GA.
http://arxiv.org/pdf/2407.05000v1
[ "Shaowen Wang", "Linxi Yu", "Jian Li" ]
2024-07-06T08:37:21Z
2024-07-06T08:37:21Z
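A conceptual sketch of gradient-aligned low-rank initialization: factor the first-step full weight gradient by SVD and initialize the adapter factors from its leading singular directions, so the low-rank product's first update approximates the full fine-tuning direction. The exact scaling and factor assignment in LoRA-GA differ; treat this as intuition, not the paper's implementation.

```python
import numpy as np

# Initialize LoRA factors B (out x r) and A (r x in) from the top-r singular
# directions of the full gradient G, splitting the singular values evenly.
def lora_ga_init(full_grad, rank):
    U, S, Vt = np.linalg.svd(full_grad, full_matrices=False)
    B = U[:, :rank] * np.sqrt(S[:rank])            # (out, r)
    A = np.sqrt(S[:rank])[:, None] * Vt[:rank]     # (r, in)
    return B, A

G = np.random.default_rng(0).standard_normal((32, 16))
B, A = lora_ga_init(G, rank=4)
print(np.linalg.norm(G - B @ A) / np.linalg.norm(G))  # rank-4 approximation error
```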
2407.04999
Rethinking the Effectiveness of Graph Classification Datasets in Benchmarks for Assessing GNNs
Graph classification benchmarks, vital for assessing and developing graph neural networks (GNNs), have recently been scrutinized, as simple methods like MLPs have demonstrated comparable performance. This leads to an important question: Do these benchmarks effectively distinguish the advancements of GNNs over other methodologies? If so, how do we quantitatively measure this effectiveness? In response, we first propose an empirical protocol based on a fair benchmarking framework to investigate the performance discrepancy between simple methods and GNNs. We further propose a novel metric to quantify the dataset effectiveness by considering both dataset complexity and model performance. To the best of our knowledge, our work is the first to thoroughly study and provide an explicit definition for dataset effectiveness in the graph learning area. Through testing across 16 real-world datasets, we found our metric to align with existing studies and intuitive assumptions. Finally, we explore the causes behind the low effectiveness of certain datasets by investigating the correlation between intrinsic graph properties and class labels, and we developed a novel technique supporting the correlation-controllable synthetic dataset generation. Our findings shed light on the current understanding of benchmark datasets, and our new platform could fuel the future evolution of graph classification benchmarks.
http://arxiv.org/pdf/2407.04999v1
[ "Zhengdao Li", "Yong Cao", "Kefan Shuai", "Yiming Miao", "Kai Hwang" ]
2024-07-06T08:33:23Z
2024-07-06T08:33:23Z
2407.04998
The Solution for the 5th GCAIAC Zero-shot Referring Expression Comprehension Challenge
This report presents a solution for the zero-shot referring expression comprehension task. Visual-language multimodal base models (such as CLIP, SAM) have gained significant attention in recent years as a cornerstone of mainstream research. One of the key applications of multimodal base models lies in their ability to generalize to zero-shot downstream tasks. Unlike traditional referring expression comprehension, zero-shot referring expression comprehension aims to apply pre-trained visual-language models directly to the task without specific training. Recent studies have enhanced the zero-shot performance of multimodal base models in referring expression comprehension tasks by introducing visual prompts. To address the zero-shot referring expression comprehension challenge, we introduced a combination of visual prompts and considered the influence of textual prompts, employing joint prediction tailored to the data characteristics. Ultimately, our approach achieved accuracy rates of 84.825 on the A leaderboard and 71.460 on the B leaderboard, securing the first position.
http://arxiv.org/pdf/2407.04998v1
[ "Longfei Huang", "Feng Yu", "Zhihao Guan", "Zhonghua Wan", "Yang Yang" ]
2024-07-06T08:31:33Z
2024-07-06T08:31:33Z
2407.04996
The Solution for the sequential task continual learning track of the 2nd Greater Bay Area International Algorithm Competition
This paper presents a data-free, parameter-isolation-based continual learning algorithm we developed for the sequential task continual learning track of the 2nd Greater Bay Area International Algorithm Competition. The method learns an independent parameter subspace for each task within the network's convolutional and linear layers and freezes the batch normalization layers after the first task. Specifically, for the domain-incremental setting where all domains share a classification head, we freeze the shared classification head after the first task is completed, effectively solving the issue of catastrophic forgetting. Additionally, facing the challenge of domain-incremental settings where no task identity is provided, we designed an inference-time task-identity strategy, selecting an appropriate mask matrix for each sample. Furthermore, we introduced a gradient supplementation strategy to enhance the importance of unselected parameters for the current task, facilitating learning for new tasks. We also implemented an adaptive importance scoring strategy that dynamically adjusts the amount of parameters to optimize single-task performance while reducing parameter usage. Moreover, considering the limitations of storage space and inference time, we designed a mask matrix compression strategy to save storage space and improve the speed of encryption and decryption of the mask matrix. Our approach does not require expanding the core network or using external auxiliary networks or data, and performs well under both task incremental and domain incremental settings. This solution ultimately won a second-place prize in the competition.
http://arxiv.org/pdf/2407.04996v1
[ "Sishun Pan", "Xixian Wu", "Tingmin Li", "Longfei Huang", "Mingxu Feng", "Zhonghua Wan", "Yang Yang" ]
2024-07-06T08:21:29Z
2024-07-06T08:21:29Z
2407.04994
The Solution for Language-Enhanced Image New Category Discovery
Treating texts as images, combining prompts with textual labels for prompt tuning, and leveraging the alignment properties of CLIP have been successfully applied in zero-shot multi-label image recognition. Nonetheless, relying solely on textual labels to store visual information is insufficient for representing the diversity of visual objects. In this paper, we propose reversing the training process of CLIP and introducing the concept of Pseudo Visual Prompts. These prompts are initialized for each object category and pre-trained on large-scale, low-cost sentence data generated by large language models. This process mines the aligned visual information in CLIP and stores it in class-specific visual prompts. We then employ contrastive learning to transfer the stored visual information to the textual labels, enhancing their visual representation capacity. Additionally, we introduce a dual-adapter module that simultaneously leverages knowledge from the original CLIP and new learning knowledge derived from downstream datasets. Benefiting from the pseudo visual prompts, our method surpasses the state-of-the-art not only on clean annotated text data but also on pseudo text data generated by large language models.
http://arxiv.org/pdf/2407.04994v1
[ "Haonan Xu", "Dian Chao", "Xiangyu Wu", "Zhonghua Wan", "Yang Yang" ]
2024-07-06T08:09:29Z
2024-07-06T08:09:29Z
2407.04992
Scalable Variational Causal Discovery Unconstrained by Acyclicity
Bayesian causal discovery offers the power to quantify epistemic uncertainties among a broad range of structurally diverse causal theories potentially explaining the data, represented in forms of directed acyclic graphs (DAGs). However, existing methods struggle with efficient DAG sampling due to the complex acyclicity constraint. In this study, we propose a scalable Bayesian approach to effectively learn the posterior distribution over causal graphs given observational data thanks to the ability to generate DAGs without explicitly enforcing acyclicity. Specifically, we introduce a novel differentiable DAG sampling method that can generate a valid acyclic causal graph by mapping an unconstrained distribution of implicit topological orders to a distribution over DAGs. Given this efficient DAG sampling scheme, we are able to model the posterior distribution over causal graphs using a simple variational distribution over a continuous domain, which can be learned via the variational inference framework. Extensive empirical experiments on both simulated and real datasets demonstrate the superior performance of the proposed model compared to several state-of-the-art baselines.
http://arxiv.org/pdf/2407.04992v1
[ "Nu Hoang", "Bao Duong", "Thin Nguyen" ]
2024-07-06T07:56:23Z
2024-07-06T07:56:23Z
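The acyclicity-by-construction idea described above can be illustrated with a hard (non-differentiable) version: sample an implicit topological order from unconstrained scores and keep only order-respecting edges, so every sample is a valid DAG with no acyclicity penalty. The paper learns this with a differentiable relaxation and variational inference; the random scores and edge probabilities below are stand-ins for learned distributions.

```python
import numpy as np

# Every sampled graph is acyclic by construction: edges only point "forward"
# in the sampled topological order.
def sample_dag(d, rng):
    order = np.argsort(rng.standard_normal(d))  # implicit topological order
    edge_probs = rng.uniform(size=(d, d))       # stand-in for learned probs
    rank = np.empty(d, dtype=int)
    rank[order] = np.arange(d)                  # position of each node in order
    adj = np.zeros((d, d), dtype=int)
    for i in range(d):
        for j in range(d):
            if rank[i] < rank[j] and edge_probs[i, j] > 0.5:
                adj[i, j] = 1
    return adj

rng = np.random.default_rng(0)
print(sample_dag(5, rng))  # a valid DAG, no acyclicity check needed
```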
2407.04991
The Solution for the AIGC Inference Performance Optimization Competition
In recent years, the rapid advancement of large-scale pre-trained language models based on transformer architectures has revolutionized natural language processing tasks. Among these, ChatGPT has gained widespread popularity, demonstrating human-level conversational abilities and attracting over 100 million monthly users by late 2022. Concurrently, Baidu's commercial deployment of the Ernie Wenxin model has significantly enhanced marketing effectiveness through AI-driven technologies. This paper focuses on optimizing high-performance inference for Ernie models, emphasizing GPU acceleration and leveraging the Paddle inference framework. We employ techniques such as Faster Transformer for efficient model processing, embedding layer pruning to reduce computational overhead, and FP16 half-precision inference for enhanced computational efficiency. Additionally, our approach integrates efficient data handling strategies using multi-process parallel processing to minimize latency. Experimental results demonstrate that our optimized solution achieves up to an 8.96x improvement in inference speed compared to standard methods, while maintaining competitive performance.
http://arxiv.org/pdf/2407.04991v1
[ "Sishun Pan", "Haonan Xu", "Zhonghua Wan", "Yang Yang" ]
2024-07-06T07:54:45Z
2024-07-06T07:54:45Z
2407.04988
The Reachability Problem for Neural-Network Control Systems
A control system consists of a plant component and a controller which periodically computes a control input for the plant. We consider systems where the controller is implemented by a feedforward neural network with ReLU activations. The reachability problem asks, given a set of initial states, whether a set of target states can be reached. We show that this problem is undecidable even for trivial plants and fixed-depth neural networks with three inputs and outputs. We also show that the problem becomes semi-decidable when the plant as well as the input and target sets are given by automata over infinite words.
http://arxiv.org/pdf/2407.04988v1
[ "Christian Schilling", "Martin Zimmermann" ]
2024-07-06T07:46:26Z
2024-07-06T07:46:26Z
2407.04986
Calorie Burn Estimation in Community Parks Through DLICP: A Mathematical Modelling Approach
Community parks play a crucial role in promoting physical activity and overall well-being. This study introduces DLICP (Deep Learning Integrated Community Parks), an innovative approach that combines deep learning techniques, specifically face recognition technology, with a novel walking activity measurement algorithm to enhance user experience in community parks. DLICP uses a camera with face recognition software to accurately identify and track park users. Simultaneously, a walking activity measurement algorithm calculates parameters such as the average pace and calories burned, tailored to individual attributes. Extensive evaluations confirm the precision of DLICP, with a Mean Absolute Error (MAE) of 5.64 calories and a Mean Percentage Error (MPE) of 1.96%, benchmarked against widely available fitness measurement devices such as the Apple Watch Series 6. This study contributes significantly to the development of intelligent smart park systems, enabling real-time updates on burned calories and personalized fitness tracking.
http://arxiv.org/pdf/2407.04986v1
[ "Abhishek Sebastian", "Annis Fathima A", "Pragna R", "Madhan Kumar S", "Jesher Joshua M" ]
2024-07-06T07:45:05Z
2024-07-06T07:45:05Z
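The abstract above does not spell out its pace-to-calories formula, so as a hedged illustration the sketch below uses the standard MET-based estimate (kcal/min = MET x 3.5 x weight_kg / 200, with an assumed pace-to-MET lookup) and computes MAE/MPE against hypothetical reference readings; none of the numbers are from the paper.

```python
import numpy as np

def walking_met(pace_kmh: float) -> float:
    # Rough MET lookup by walking pace (an assumption, not from the paper).
    if pace_kmh < 4.0:
        return 2.8
    if pace_kmh < 5.5:
        return 3.5
    return 4.3

def calories_burned(pace_kmh, minutes, weight_kg):
    # Standard MET formula: kcal/min = MET * 3.5 * weight_kg / 200.
    return walking_met(pace_kmh) * 3.5 * weight_kg / 200.0 * minutes

pred = np.array([calories_burned(5.0, 30, w) for w in (60, 75, 90)])
ref = np.array([105.0, 135.0, 168.0])           # hypothetical device readings
mae = np.mean(np.abs(pred - ref))
mpe = np.mean(np.abs(pred - ref) / ref) * 100   # percent error, as reported
print(f"MAE={mae:.2f} kcal, MPE={mpe:.2f}%")
```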
2407.04985
Combining Neuroevolution with the Search for Novelty to Improve the Generation of Test Inputs for Games
As games challenge traditional automated white-box test generators, the Neatest approach generates test suites consisting of neural networks that exercise the source code by playing the games. Neatest generates these neural networks using an evolutionary algorithm that is guided by an objective function targeting individual source code statements. This approach works well if the objective function provides sufficient guidance, but deceiving or complex fitness landscapes may inhibit the search. In this paper, we investigate whether the issue of challenging fitness landscapes can be addressed by promoting novel behaviours during the search. Our case study on two Scratch games demonstrates that rewarding novel behaviours is a promising approach for overcoming challenging fitness landscapes, thus enabling future research on how to adapt the search algorithms to best use this information.
http://arxiv.org/abs/2407.04985v1
[ "Patric Feldmeier", "Gordon Fraser" ]
2024-07-06T07:36:44Z
2024-07-06T07:36:44Z
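The novelty reward at the heart of the approach above is typically computed as the mean distance to the k nearest neighbours in an archive of previously seen behaviours. A minimal sketch follows; the Euclidean metric and the value of k are generic assumptions, not taken from the paper.

```python
import numpy as np

def novelty(behaviour, archive, k=5):
    """Mean distance to the k nearest neighbours in the behaviour archive.

    A high score means the candidate behaves unlike anything seen before,
    which is the signal used to escape deceptive fitness landscapes.
    """
    if len(archive) == 0:
        return float("inf")
    dists = np.linalg.norm(np.asarray(archive) - behaviour, axis=1)
    k = min(k, len(dists))
    return float(np.mean(np.sort(dists)[:k]))

rng = np.random.default_rng(0)
archive = [rng.normal(size=2) for _ in range(20)]   # past behaviours
candidate = np.array([3.0, 3.0])                    # lies in a novel region
print(novelty(candidate, archive))                  # large value -> rewarded
```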
2407.04981
TRACE: TRansformer-based Attribution using Contrastive Embeddings in LLMs
The rapid evolution of large language models (LLMs) represents a substantial leap forward in natural language understanding and generation. However, alongside these advancements come significant challenges related to the accountability and transparency of LLM responses. Reliable source attribution is essential to adhering to stringent legal and regulatory standards, including those set forth by the General Data Protection Regulation. Despite the well-established methods in source attribution within the computer vision domain, the application of robust attribution frameworks to natural language processing remains underexplored. To bridge this gap, we propose a novel and versatile TRansformer-based Attribution framework using Contrastive Embeddings called TRACE that, in particular, exploits contrastive learning for source attribution. We perform an extensive empirical evaluation to demonstrate the performance and efficiency of TRACE in various settings and show that TRACE significantly improves the ability to attribute sources accurately, making it a valuable tool for enhancing the reliability and trustworthiness of LLMs.
http://arxiv.org/pdf/2407.04981v1
[ "Cheng Wang", "Xinyang Lu", "See-Kiong Ng", "Bryan Kian Hsiang Low" ]
2024-07-06T07:19:30Z
2024-07-06T07:19:30Z
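The framework above is built on contrastive learning; a common instantiation is the InfoNCE loss, where embeddings of text from the same source form positive pairs. The sketch below is a generic numpy version of that loss under those assumptions, not TRACE's actual architecture or training recipe.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE) loss: pull an embedding towards the embedding
    of text from the same source, push it away from other sources."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature            # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))     # diagonal = matching pairs

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 32))                  # embeddings, one per passage
p = a + 0.1 * rng.normal(size=(8, 32))        # same-source counterparts
print(info_nce(a, p))                         # low loss for aligned pairs
```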
2407.04980
Enabling Causal Discovery in Post-Nonlinear Models with Normalizing Flows
Post-nonlinear (PNL) causal models stand out as a versatile and adaptable framework for modeling intricate causal relationships. However, accurately capturing the invertibility constraint required in PNL models remains challenging in existing studies. To address this problem, we introduce CAF-PoNo (Causal discovery via Normalizing Flows for Post-Nonlinear models), harnessing the power of the normalizing flows architecture to enforce the crucial invertibility constraint in PNL models. Through normalizing flows, our method precisely reconstructs the hidden noise, which plays a vital role in cause-effect identification through statistical independence testing. Furthermore, the proposed approach exhibits remarkable extensibility, as it can be seamlessly expanded to facilitate multivariate causal discovery via causal order identification, empowering us to efficiently unravel complex causal relationships. Extensive experimental evaluations on both simulated and real datasets consistently demonstrate that the proposed method outperforms several state-of-the-art approaches in both bivariate and multivariate causal discovery tasks.
http://arxiv.org/pdf/2407.04980v1
[ "Nu Hoang", "Bao Duong", "Thin Nguyen" ]
2024-07-06T07:19:21Z
2024-07-06T07:19:21Z
2407.04974
Multi-agent Off-policy Actor-Critic Reinforcement Learning for Partially Observable Environments
This study proposes the use of a social learning method to estimate a global state within a multi-agent off-policy actor-critic algorithm for reinforcement learning (RL) operating in a partially observable environment. We assume that the network of agents operates in a fully-decentralized manner, possessing the capability to exchange variables with their immediate neighbors. The proposed design methodology is supported by an analysis demonstrating that the difference between final outcomes, obtained when the global state is fully observed versus estimated through the social learning method, is $\varepsilon$-bounded when an appropriate number of iterations of social learning updates are implemented. Unlike many existing dec-POMDP-based RL approaches, the proposed algorithm is suitable for model-free multi-agent reinforcement learning as it does not require knowledge of a transition model. Furthermore, experimental results illustrate the efficacy of the algorithm and demonstrate its superiority over the current state-of-the-art methods.
http://arxiv.org/pdf/2407.04974v1
[ "Ainur Zhaikhan", "Ali H. Sayed" ]
2024-07-06T06:51:14Z
2024-07-06T06:51:14Z
2407.04973
LogicVista: Multimodal LLM Logical Reasoning Benchmark in Visual Contexts
We propose LogicVista, an evaluation benchmark that assesses the integrated logical reasoning capabilities of multimodal large language models (MLLMs) in Visual contexts. Recent advancements in MLLMs have demonstrated various fascinating abilities, from crafting poetry based on an image to performing mathematical reasoning. However, there is still a lack of systematic evaluation of MLLMs' proficiency in logical reasoning tasks, which are essential for activities like navigation and puzzle-solving. Thus we evaluate general logical cognition abilities across 5 logical reasoning tasks encompassing 9 different capabilities, using a sample of 448 multiple-choice questions. Each question is annotated with the correct answer and the human-written reasoning behind the selection, enabling both open-ended and multiple-choice evaluation. A total of 8 MLLMs are comprehensively evaluated using LogicVista. Code and Data Available at https://github.com/Yijia-Xiao/LogicVista.
http://arxiv.org/pdf/2407.04973v1
[ "Yijia Xiao", "Edward Sun", "Tianyu Liu", "Wei Wang" ]
2024-07-06T06:48:16Z
2024-07-06T06:48:16Z
2407.04970
Idiographic Personality Gaussian Process for Psychological Assessment
We develop a novel measurement framework based on a Gaussian process coregionalization model to address a long-lasting debate in psychometrics: whether psychological features like personality share a common structure across the population, vary uniquely for individuals, or some combination. We propose the idiographic personality Gaussian process (IPGP) framework, an intermediate model that accommodates both shared trait structure across a population and "idiographic" deviations for individuals. IPGP leverages the Gaussian process coregionalization model to handle the grouped nature of battery responses, but adjusted to non-Gaussian ordinal data. We further exploit stochastic variational inference for efficient latent factor estimation required for idiographic modeling at scale. Using synthetic and real data, we show that IPGP improves both prediction of actual responses and estimation of individualized factor structures relative to existing benchmarks. In a third study, we show that IPGP also identifies unique clusters of personality taxonomies in real-world data, displaying great potential in advancing individualized approaches to psychological diagnosis and treatment.
http://arxiv.org/pdf/2407.04970v1
[ "Yehu Chen", "Muchen Xi", "Jacob Montgomery", "Joshua Jackson", "Roman Garnett" ]
2024-07-06T06:09:04Z
2024-07-06T06:09:04Z
2407.04966
A Layer-Anchoring Strategy for Enhancing Cross-Lingual Speech Emotion Recognition
Cross-lingual speech emotion recognition (SER) is important for a wide range of everyday applications. While recent SER research relies heavily on large pretrained models for emotion training, existing studies often concentrate solely on the final transformer layer of these models. However, given the task-specific nature and hierarchical architecture of these models, each transformer layer encapsulates different levels of information. Leveraging this hierarchical structure, our study focuses on the information embedded across different layers. Through an examination of layer feature similarity across different languages, we propose a novel strategy called a layer-anchoring mechanism to facilitate emotion transfer in cross-lingual SER tasks. Our approach is evaluated using two distinct language affective corpora (MSP-Podcast and BIIC-Podcast), achieving a best UAR performance of 60.21% on the BIIC-podcast corpus. The analysis uncovers interesting insights into the behavior of popular pretrained models.
http://arxiv.org/pdf/2407.04966v1
[ "Shreya G. Upadhyay", "Carlos Busso", "Chi-Chun Lee" ]
2024-07-06T05:56:55Z
2024-07-06T05:56:55Z
2402.04038
PAC-Bayesian Adversarially Robust Generalization Bounds for Graph Neural Network
Graph neural networks (GNNs) have gained popularity for various graph-related tasks. However, similar to deep neural networks, GNNs are also vulnerable to adversarial attacks. Empirical studies have shown that adversarially robust generalization has a pivotal role in establishing effective defense algorithms against adversarial attacks. In this paper, we contribute by providing adversarially robust generalization bounds for two kinds of popular GNNs, graph convolutional network (GCN) and message passing graph neural network, using the PAC-Bayesian framework. Our result reveals that the spectral norm of the diffusion matrix on the graph, the spectral norm of the weights, and the perturbation factor govern the robust generalization bounds of both models. Our bounds are nontrivial generalizations of the results developed in (Liao et al., 2020) from the standard setting to the adversarial setting while avoiding exponential dependence on the maximum node degree. As corollaries, we derive better PAC-Bayesian robust generalization bounds for GCN in the standard setting, which improve the bounds in (Liao et al., 2020) by avoiding exponential dependence on the maximum node degree.
http://arxiv.org/pdf/2402.04038v2
[ "Tan Sun", "Junhong Lin" ]
2024-07-06T05:42:19Z
2024-02-06T14:34:17Z
2209.11737
Visual representations in the human brain are aligned with large language models
The human brain extracts complex information from visual inputs, including objects, their spatial and semantic interrelations, and their interactions with the environment. However, a quantitative approach for studying this information remains elusive. Here, we test whether the contextual information encoded in large language models (LLMs) is beneficial for modelling the complex visual information extracted by the brain from natural scenes. We show that LLM embeddings of scene captions successfully characterise brain activity evoked by viewing the natural scenes. This mapping captures selectivities of different brain areas, and is sufficiently robust that accurate scene captions can be reconstructed from brain activity. Using carefully controlled model comparisons, we then proceed to show that the accuracy with which LLM representations match brain representations derives from the ability of LLMs to integrate complex information contained in scene captions beyond that conveyed by individual words. Finally, we train deep neural network models to transform image inputs into LLM representations. Remarkably, these networks learn representations that are better aligned with brain representations than a large number of state-of-the-art alternative models, despite being trained on orders-of-magnitude less data. Overall, our results suggest that LLM embeddings of scene captions provide a representational format that accounts for complex information extracted by the brain from visual inputs.
http://arxiv.org/pdf/2209.11737v2
[ "Adrien Doerig", "Tim C Kietzmann", "Emily Allen", "Yihan Wu", "Thomas Naselaris", "Kendrick Kay", "Ian Charest" ]
2024-07-06T05:26:33Z
2022-09-23T17:34:33Z
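A standard encoding-model pipeline of the kind the abstract above describes maps caption embeddings to voxel responses with ridge regression and scores held-out correlation. This sketch uses synthetic data, and the dimensions, regularization strength, and train/test split are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_embed, n_voxels = 200, 64, 50

X = rng.normal(size=(n, d_embed))            # LLM embeddings of scene captions
W_true = rng.normal(size=(d_embed, n_voxels))
Y = X @ W_true + 0.5 * rng.normal(size=(n, n_voxels))  # simulated voxel data

# Ridge regression from embeddings to voxel responses (a common encoding model).
lam = 10.0
W = np.linalg.solve(X[:150].T @ X[:150] + lam * np.eye(d_embed),
                    X[:150].T @ Y[:150])

pred = X[150:] @ W
# Per-voxel Pearson correlation between predicted and held-out responses.
pc = [np.corrcoef(pred[:, v], Y[150:, v])[0, 1] for v in range(n_voxels)]
print(f"mean encoding accuracy r = {np.mean(pc):.3f}")
```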
2407.04958
Entropy-Informed Weighting Channel Normalizing Flow
Normalizing Flows (NFs) have gained popularity among deep generative models due to their ability to provide exact likelihood estimation and efficient sampling. However, a crucial limitation of NFs is their substantial memory requirements, arising from maintaining the dimension of the latent space equal to that of the input space. Multi-scale architectures bypass this limitation by progressively reducing the dimension of latent variables while ensuring reversibility. Existing multi-scale architectures split the latent variables in a simple, static manner at the channel level, compromising NFs' expressive power. To address this issue, we propose a regularized and feature-dependent $\mathtt{Shuffle}$ operation and integrate it into the vanilla multi-scale architecture. This operation heuristically generates channel-wise weights and adaptively shuffles latent variables before splitting them with these weights. We observe that such an operation guides the variables to evolve in the direction of entropy increase, hence we refer to NFs with the $\mathtt{Shuffle}$ operation as the Entropy-Informed Weighting Channel Normalizing Flow (EIW-Flow). Experimental results indicate that EIW-Flow achieves state-of-the-art density estimation results and comparable sample quality on the CIFAR-10, CelebA and ImageNet datasets, with negligible additional computational overhead.
http://arxiv.org/pdf/2407.04958v1
[ "Wei Chen", "Shian Du", "Shigui Li", "Delu Zeng", "John Paisley" ]
2024-07-06T04:46:41Z
2024-07-06T04:46:41Z
2302.01191
Noncommutative $C^*$-algebra Net: Learning Neural Networks with Powerful Product Structure in $C^*$-algebra
We propose a new generalization of neural network parameter spaces with noncommutative $C^*$-algebra, which possesses a rich noncommutative structure of products. We show that this noncommutative structure induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning equivariant features with respect to group actions. Numerical experiments illustrate the validity of our framework and its potential power.
http://arxiv.org/pdf/2302.01191v2
[ "Ryuichiro Hataya", "Yuka Hashimoto" ]
2024-07-06T04:40:14Z
2023-01-26T14:35:37Z
2407.04949
Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients
Federated Learning is widely employed to tackle distributed sensitive data. Existing methods primarily focus on addressing in-federation data heterogeneity. However, we observed that they suffer from significant performance degradation when applied to unseen clients for out-of-federation (OOF) generalization. The recent attempts to address generalization to unseen clients generally struggle to scale up to large-scale distributed settings due to high communication or computation costs. Moreover, methods that scale well often demonstrate poor generalization capability. To achieve OOF-resiliency in a scalable manner, we propose Topology-aware Federated Learning (TFL) that leverages client topology - a graph representing client relationships - to effectively train robust models against OOF data. We formulate a novel optimization problem for TFL, consisting of two key modules: Client Topology Learning, which infers the client relationships in a privacy-preserving manner, and Learning on Client Topology, which leverages the learned topology to identify influential clients and harness this information into the FL optimization process to efficiently build robust models. Empirical evaluation on a variety of real-world datasets verifies TFL's superior OOF robustness and scalability.
http://arxiv.org/pdf/2407.04949v1
[ "Mengmeng Ma", "Tang Li", "Xi Peng" ]
2024-07-06T03:57:05Z
2024-07-06T03:57:05Z
2405.07791
Decentralized Kernel Ridge Regression Based on Data-Dependent Random Feature
Random feature (RF) has been widely used for node consistency in decentralized kernel ridge regression (KRR). Currently, the consistency is guaranteed by imposing constraints on coefficients of features, necessitating that the random features on different nodes are identical. However, in many applications, data on different nodes varies significantly in number or distribution, which calls for adaptive and data-dependent methods that generate different RFs. To tackle the essential difficulty, we propose a new decentralized KRR algorithm that pursues consensus on decision functions, which allows great flexibility and adapts well to the data on each node. The convergence is rigorously established and the effectiveness is numerically verified: by capturing the characteristics of the data on each node, while maintaining the same communication costs as other methods, we achieved an average regression accuracy improvement of 25.5% across six real-world data sets.
http://arxiv.org/abs/2405.07791v2
[ "Ruikai Yang", "Fan He", "Mingzhen He", "Jie Yang", "Xiaolin Huang" ]
2024-07-06T03:51:42Z
2024-05-13T14:37:03Z
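Random features in decentralized KRR are typically random Fourier features approximating an RBF kernel; the data-dependent variants argued for above would adapt the feature distribution to each node's data. Below is a minimal single-node sketch with all hyperparameters assumed, not the paper's decentralized algorithm.

```python
import numpy as np

def rff(X, n_features=200, gamma=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel. Data-dependent
    variants would fit the distribution of W to each node's local data."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

Z = rff(X)
lam = 1e-3
# KRR becomes ordinary ridge regression in the random feature space.
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
print("train RMSE:", np.sqrt(np.mean((Z @ w - y) ** 2)))
```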
2407.04945
On Differentially Private U Statistics
We consider the problem of privately estimating a parameter $\mathbb{E}[h(X_1,\dots,X_k)]$, where $X_1, X_2, \dots, X_k$ are i.i.d. data from some distribution and $h$ is a permutation-invariant function. Without privacy constraints, standard estimators are U-statistics, which commonly arise in a wide range of problems, including nonparametric signed rank tests, symmetry testing, uniformity testing, and subgraph counts in random networks, and can be shown to be minimum variance unbiased estimators under mild conditions. Despite the recent outpouring of interest in private mean estimation, privatizing U-statistics has received little attention. While existing private mean estimation algorithms can be applied to obtain confidence intervals, we show that they can lead to suboptimal private error, e.g., constant-factor inflation in the leading term, or even $\Theta(1/n)$ rather than $O(1/n^2)$ in degenerate settings. To remedy this, we propose a new thresholding-based approach using local Hájek projections to reweight different subsets of the data. This leads to nearly optimal private error for non-degenerate U-statistics and a strong indication of near-optimality for degenerate U-statistics.
http://arxiv.org/pdf/2407.04945v1
[ "Kamalika Chaudhuri", "Po-Ling Loh", "Shourya Pandey", "Purnamrita Sarkar" ]
2024-07-06T03:27:14Z
2024-07-06T03:27:14Z
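For readers unfamiliar with U-statistics, the non-private estimator is simply the average of a permutation-invariant kernel $h$ over all size-$k$ subsets of the data; for example, with $h(x_1, x_2) = (x_1 - x_2)^2/2$ it recovers the unbiased sample variance. A minimal sketch (the privatization step is not shown):

```python
import numpy as np
from itertools import combinations

def u_statistic(x, h, k=2):
    """Order-k U-statistic: average of h over all size-k subsets of x."""
    vals = [h(*sub) for sub in combinations(x, k)]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, size=50)

# h(x1, x2) = (x1 - x2)^2 / 2 is the kernel whose U-statistic equals the
# unbiased sample variance (a minimum variance unbiased estimator).
var_u = u_statistic(x, lambda a, b: 0.5 * (a - b) ** 2)
print(var_u, x.var(ddof=1))   # the two values agree
```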
2407.04943
Quantizing YOLOv7: A Comprehensive Study
YOLO is a deep neural network (DNN) model presented for robust real-time object detection following the one-stage inference approach. It outperforms other real-time object detectors in terms of speed and accuracy by a wide margin. Nevertheless, since YOLO is developed upon a DNN backbone with numerous parameters, it imposes an excessive memory load, making deployment on memory-constrained devices a severe challenge in practice. To overcome this limitation, model compression techniques, such as quantizing parameters to lower-precision values, can be adopted. As the most recent version of YOLO, YOLOv7 achieves such state-of-the-art performance in speed and accuracy in the range of 5 FPS to 160 FPS that it surpasses all former versions of YOLO and other existing models in this regard. So far, the robustness of several quantization schemes has been evaluated on older versions of YOLO. These methods may not necessarily yield similar results for YOLOv7 as it utilizes a different architecture. In this paper, we conduct in-depth research on the effectiveness of a variety of quantization schemes on the pre-trained weights of the state-of-the-art YOLOv7 model. Experimental results demonstrate that using 4-bit quantization coupled with the combination of different granularities results in ~3.92x and ~3.86x memory-saving for uniform and non-uniform quantization, respectively, with only 2.5% and 1% accuracy loss compared to the full-precision baseline model.
http://arxiv.org/abs/2407.04943v1
[ "Mohammadamin Baghbanbashi", "Mohsen Raji", "Behnam Ghavami" ]
2024-07-06T03:23:04Z
2024-07-06T03:23:04Z
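A minimal sketch of the kind of scheme evaluated above: symmetric uniform quantization at a given bit-width, with per-channel granularity (one scale per output channel). The exact schemes and granularities studied in the paper differ; this is only illustrative.

```python
import numpy as np

def quantize_uniform(w, bits=4, per_channel=True):
    """Symmetric uniform quantization; per-channel granularity keeps one
    scale per output channel, which typically loses less accuracy."""
    axis = tuple(range(1, w.ndim)) if per_channel else None
    max_abs = np.max(np.abs(w), axis=axis, keepdims=True)
    qmax = 2 ** (bits - 1) - 1
    scale = max_abs / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                       # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 3, 3, 3))         # a conv layer's weight tensor
for bits in (8, 4):
    err = np.mean((w - quantize_uniform(w, bits)) ** 2)
    print(f"{bits}-bit MSE: {err:.2e}")
```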
2407.04942
FOSP: Fine-tuning Offline Safe Policy through World Models
Model-based Reinforcement Learning (RL) has shown its high training efficiency and capability of handling high-dimensional tasks. Regarding safety issues, safe model-based RL can achieve nearly zero-cost performance and effectively manage the trade-off between performance and safety. Nevertheless, prior works still pose safety challenges due to the online exploration in real-world deployment. To address this, some offline RL methods have emerged as solutions, which learn from a static dataset in a safe way by avoiding interactions with the environment. In this paper, we aim to further enhance safety during the deployment stage for vision-based robotic tasks by fine-tuning an offline-trained policy. We incorporate in-sample optimization, model-based policy expansion, and reachability guidance to construct a safe offline-to-online framework. Moreover, our method proves to improve the generalization of offline policy in unseen safety-constrained scenarios. Finally, the efficiency of our method is validated on simulation benchmarks with five vision-only tasks and a real robot by solving some deployment problems using limited data.
http://arxiv.org/pdf/2407.04942v1
[ "Chenyang Cao", "Yucheng Xin", "Silang Wu", "Longxiang He", "Zichen Yan", "Junbo Tan", "Xueqian Wang" ]
2024-07-06T03:22:57Z
2024-07-06T03:22:57Z
2407.04940
Resource Constrained U-Net for Extraction of Retinal Vascular Trees
This paper demonstrates the efficacy of a modified U-Net structure for the extraction of vascular tree masks for human fundus photographs. On limited compute resources and training data, the proposed model only slightly underperforms when compared to state of the art methods.
http://arxiv.org/pdf/2407.04940v1
[ "Georgiy Kiselev" ]
2024-07-06T03:15:00Z
2024-07-06T03:15:00Z
2407.04939
Balance of Number of Embedding and their Dimensions in Vector Quantization
The dimensionality of the embedding and the number of available embeddings (also called codebook size) are critical factors influencing the performance of Vector Quantization (VQ), a discretization process used in many models such as the Vector Quantized Variational Autoencoder (VQ-VAE) architecture. This study examines the balance between the codebook size and the dimensions of embeddings in VQ, while maintaining their product constant. Traditionally, these hyperparameters are static during training; however, our findings indicate that augmenting the codebook size while simultaneously reducing the embedding dimension can significantly boost the effectiveness of the VQ-VAE. As a result, the strategic selection of codebook size and embedding dimensions, while preserving the capacity of the discrete codebook space, is critically important. To address this, we propose a novel adaptive dynamic quantization approach, underpinned by the Gumbel-Softmax mechanism, which allows the model to autonomously determine the optimal codebook configuration for each data instance. This dynamic discretizer gives the VQ-VAE remarkable flexibility. Thorough empirical evaluations across multiple benchmark datasets validate the notable performance enhancements achieved by our approach, highlighting the significant potential of adaptive dynamic quantization to improve model performance.
http://arxiv.org/pdf/2407.04939v1
[ "Hang Chen", "Sankepally Sainath Reddy", "Ziwei Chen", "Dianbo Liu" ]
2024-07-06T03:07:31Z
2024-07-06T03:07:31Z
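The trade-off studied above, growing the codebook while shrinking the embedding dimension with their product fixed, can be probed with a toy nearest-codeword quantizer. Random codebooks and block-wise quantization are simplifying assumptions; the paper's learned, Gumbel-Softmax-driven selection is not reproduced here.

```python
import numpy as np

def vq_error(data, codebook_size, dim, seed=0):
    """Reconstruction error of nearest-codeword VQ when each feature vector
    is chopped into blocks of length `dim` and quantized block-wise."""
    rng = np.random.default_rng(seed)
    blocks = data.reshape(-1, dim)                 # more blocks if dim is small
    codebook = rng.normal(size=(codebook_size, dim))
    d2 = ((blocks[:, None, :] - codebook[None]) ** 2).sum(-1)
    recon = codebook[d2.argmin(1)]
    return float(np.mean((blocks - recon) ** 2))

rng = np.random.default_rng(1)
x = rng.normal(size=(512, 16))   # feature vectors of total dimension 16

# Keep codebook_size * dim constant (capacity of the discrete space fixed).
for K, D in [(16, 16), (64, 4), (256, 1)]:
    print(f"K={K:3d}, dim={D:2d}: MSE={vq_error(x, K, D):.3f}")
```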
2406.02654
kNN Classification of Malware Data Dependency Graph Features
Explainability in classification results is dependent upon the features used for classification. Data dependency graph features representing data movement are directly correlated with operational semantics, and subject to fine-grained analysis. This study obtains accurate classification from the use of features tied to structure and semantics. By training an accurate model using labeled data, this feature representation of semantics is shown to be correlated with ground truth labels. This was performed using non-parametric learning with a novel feature representation on a large-scale dataset, the Kaggle 2015 Malware dataset. The features used enable fine-grained analysis, an increase in resolution, and explainable inferences. This allows the body of the term frequency distribution to be further analyzed and provides an increase in feature resolution over term frequency features. This method obtains high accuracy from analysis of a single instruction, a method that can be repeated for additional instructions to obtain further increases in accuracy. This study evaluates the hypothesis that the semantic representation and analysis of structure are able to make accurate predictions and are also correlated to ground truth labels. Additionally, similarity in the metric space can be calculated directly without prior training. Our results provide evidence that data dependency graphs accurately capture both semantic and structural information for increased explainability in classification results.
http://arxiv.org/pdf/2406.02654v2
[ "John Musgrave", "Anca Ralescu" ]
2024-07-06T02:06:49Z
2024-06-04T16:39:02Z
2403.01361
Bandit Profit-maximization for Targeted Marketing
We study a sequential profit-maximization problem, optimizing for both price and ancillary variables like marketing expenditures. Specifically, we aim to maximize profit over an arbitrary sequence of multiple demand curves, each dependent on a distinct ancillary variable, but sharing the same price. A prototypical example is targeted marketing, where a firm (seller) wishes to sell a product over multiple markets. The firm may invest different marketing expenditures for different markets to optimize customer acquisition, but must maintain the same price across all markets. Moreover, markets may have heterogeneous demand curves, each responding to prices and marketing expenditures differently. The firm's objective is to maximize its gross profit, the total revenue minus marketing costs. Our results are near-optimal algorithms for this class of problems in an adversarial bandit setting, where demand curves are arbitrary non-adaptive sequences, and the firm observes only noisy evaluations of chosen points on the demand curves. For $n$ demand curves (markets), we prove a regret upper bound of $\tilde{O}(nT^{3/4})$ and a lower bound of $\Omega((nT)^{3/4})$ for monotonic demand curves, and a regret bound of $\tilde{\Theta}(nT^{2/3})$ for demand curves that are monotonic in price and concave in the ancillary variables.
http://arxiv.org/pdf/2403.01361v2
[ "Joon Suk Huh", "Ellen Vitercik", "Kirthevasan Kandasamy" ]
2024-07-06T00:44:23Z
2024-03-03T01:33:47Z
2407.04900
Closing the Gaps: Optimality of Sample Average Approximation for Data-Driven Newsvendor Problems
We study the regret performance of Sample Average Approximation (SAA) for data-driven newsvendor problems with general convex inventory costs. In the literature, the optimality of SAA has not been fully established under both $\alpha$-global strong convexity and $(\alpha,\beta)$-local strong convexity ($\alpha$-strongly convex within the $\beta$-neighborhood of the optimal quantity) conditions. This paper closes the gaps between regret upper and lower bounds for both conditions. Under the $(\alpha,\beta)$-local strong convexity condition, we prove the optimal regret bound of $\Theta(\log T/\alpha + 1/(\alpha\beta))$ for SAA. This upper bound result demonstrates that the regret performance of SAA is only influenced by $\alpha$ and not by $\beta$ in the long run, enhancing our understanding about how local properties affect the long-term regret performance of decision-making strategies. Under the $\alpha$-global strong convexity condition, we demonstrate that the worst-case regret of any data-driven method is lower bounded by $\Omega(\log T/\alpha)$, which is the first lower bound result that matches the existing upper bound with respect to both the parameter $\alpha$ and the time horizon $T$. Along the way, we propose to analyze the SAA regret via a new gradient approximation technique, as well as a new class of smooth inverted-hat-shaped hard problem instances that might be of independent interest for the lower bounds of broader data-driven problems.
http://arxiv.org/pdf/2407.04900v1
[ "Jiameng Lyu", "Shilin Yuan", "Bingkun Zhou", "Yuan Zhou" ]
2024-07-06T00:30:06Z
2024-07-06T00:30:06Z
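In the classical linear-cost newsvendor special case, the SAA solution has a closed form: the empirical demand quantile at the critical ratio (p - c)/p. A minimal sketch follows; note the paper above treats general convex inventory costs, which this simple form does not capture.

```python
import numpy as np

def saa_newsvendor(demand_samples, price, cost):
    """Sample Average Approximation for the linear-cost newsvendor: the SAA
    order quantity is the empirical demand quantile at (price - cost) / price."""
    critical_ratio = (price - cost) / price
    return np.quantile(demand_samples, critical_ratio)

rng = np.random.default_rng(0)
demand = rng.gamma(shape=4.0, scale=25.0, size=1000)  # historical demand data

q = saa_newsvendor(demand, price=10.0, cost=6.0)      # critical ratio 0.4
print(f"SAA order quantity: {q:.1f}")
```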
2407.04898
Nash Incentive-compatible Online Mechanism Learning via Weakly Differentially Private Online Learning
We study a multi-round mechanism design problem, where we interact with a set of agents over a sequence of rounds. We wish to design an incentive-compatible (IC) online learning scheme to maximize an application-specific objective within a given class of mechanisms, without prior knowledge of the agents' type distributions. Even if each mechanism in this class is IC in a single round, if an algorithm naively chooses from this class on each round, the entire learning process may not be IC against non-myopic buyers who appear over multiple rounds. On each round, our method randomly chooses between the recommendation of a weakly differentially private online learning algorithm (e.g., Hedge), and a commitment mechanism which penalizes non-truthful behavior. Our method is IC and achieves $O(T^{\frac{1+h}{2}})$ regret for the application-specific objective in an adversarial setting, where $h$ quantifies the long-sightedness of the agents. When compared to prior work, our approach is conceptually simpler, it applies to general mechanism design problems (beyond auctions), and its regret scales gracefully with the size of the mechanism class.
http://arxiv.org/pdf/2407.04898v1
[ "Joon Suk Huh", "Kirthevasan Kandasamy" ]
2024-07-06T00:02:25Z
2024-07-06T00:02:25Z
2407.04889
Maximizing utility in multi-agent environments by anticipating the behavior of other learners
Learning algorithms are often used to make decisions in sequential decision-making environments. In multi-agent settings, the decisions of each agent can affect the utilities/losses of the other agents. Therefore, if an agent is good at anticipating the behavior of the other agents, in particular how they will make decisions in each round as a function of their experience thus far, it could try to judiciously make its own decisions over the rounds of the interaction so as to influence the other agents to behave in a way that ultimately benefits its own utility. In this paper, we study repeated two-player games involving two types of agents: a learner, which employs an online learning algorithm to choose its strategy in each round; and an optimizer, which knows the learner's utility function and the learner's online learning algorithm. The optimizer wants to plan ahead to maximize its own utility, while taking into account the learner's behavior. We provide two results: a positive result for repeated zero-sum games and a negative result for repeated general-sum games. Our positive result is an algorithm for the optimizer, which exactly maximizes its utility against a learner that plays the Replicator Dynamics -- the continuous-time analogue of Multiplicative Weights Update (MWU). Additionally, we use this result to provide an algorithm for the optimizer against MWU, i.e. for the discrete-time setting, which guarantees an average utility for the optimizer that is higher than the value of the one-shot game. Our negative result shows that, unless P=NP, there is no Fully Polynomial Time Approximation Scheme (FPTAS) for maximizing the utility of an optimizer against a learner that best-responds to the history in each round. Yet, this still leaves open the question of whether there exists a polynomial-time algorithm that optimizes the utility up to $o(T)$.
http://arxiv.org/pdf/2407.04889v1
[ "Angelos Assos", "Yuval Dagan", "Constantinos Daskalakis" ]
2024-07-05T23:16:18Z
2024-07-05T23:16:18Z
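Multiplicative Weights Update, the discrete-time learner in the positive result above, is a few lines of numpy. The sketch plays MWU against a fixed (suboptimal) opponent in matching pennies purely for illustration; the paper's optimizer plans against the learner's dynamics rather than playing a fixed strategy.

```python
import numpy as np

def mwu_average_loss(A, y, T=2000, eta=0.05):
    """Learner runs Multiplicative Weights Update on loss matrix A while the
    opponent plays the fixed mixed strategy y; returns the learner's
    time-averaged expected loss x_t^T A y."""
    n = A.shape[0]
    w = np.ones(n)
    total = 0.0
    for _ in range(T):
        x = w / w.sum()
        losses = A @ y                  # per-action expected loss this round
        total += x @ losses
        w *= np.exp(-eta * losses)      # MWU: downweight lossy actions
    return total / T

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies (game value 0)
y = np.array([0.7, 0.3])                    # suboptimal, non-adaptive opponent
print(mwu_average_loss(A, y))               # near -0.4: MWU exploits y
```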
2407.04884
Differentially Private Convex Approximation of Two-Layer ReLU Networks
We show that it is possible to privately train convex problems that give models with a similar privacy-utility trade-off as one hidden-layer ReLU networks trained with differentially private stochastic gradient descent (DP-SGD). As we show, this is possible via a certain dual formulation of the ReLU minimization problem. We derive a stochastic approximation of the dual problem that leads to a strongly convex problem which allows applying, for example, the privacy amplification by iteration type of analysis for gradient-based private optimizers, and in particular allows giving accurate privacy bounds for the noisy cyclic mini-batch gradient descent with fixed disjoint mini-batches. For the noisy cyclic mini-batch gradient descent, we obtain the first empirical results on the MNIST and FashionMNIST problems showing privacy-utility trade-offs similar to those of DP-SGD applied to a ReLU network. We outline theoretical utility bounds that illustrate the speed-ups of the private convex approximation of ReLU networks.
http://arxiv.org/pdf/2407.04884v1
[ "Antti Koskela" ]
2024-07-05T22:43:32Z
2024-07-05T22:43:32Z
2407.04877
Leveraging Data Mining, Active Learning, and Domain Adaptation in a Multi-Stage, Machine Learning-Driven Approach for the Efficient Discovery of Advanced Acidic Oxygen Evolution Electrocatalysts
Developing advanced catalysts for acidic oxygen evolution reaction (OER) is crucial for sustainable hydrogen production. This study introduces a novel, multi-stage machine learning (ML) approach to streamline the discovery and optimization of complex multi-metallic catalysts. Our method integrates data mining, active learning, and domain adaptation throughout the materials discovery process. Unlike traditional trial-and-error methods, this approach systematically narrows the exploration space using domain knowledge with minimized reliance on subjective intuition. Then the active learning module efficiently refines element composition and synthesis conditions through iterative experimental feedback. The process culminated in the discovery of a promising Ru-Mn-Ca-Pr oxide catalyst. Our workflow also enhances theoretical simulations with domain adaptation strategy, providing deeper mechanistic insights aligned with experimental findings. By leveraging diverse data sources and multiple ML strategies, we establish an efficient pathway for electrocatalyst discovery and optimization. This comprehensive, data-driven approach represents a paradigm shift and potentially new benchmark in electrocatalysts research.
http://arxiv.org/pdf/2407.04877v1
[ "Rui Ding", "Jianguo Liu", "Kang Hua", "Xuebin Wang", "Xiaoben Zhang", "Minhua Shao", "Yuxin Chen", "Junhong Chen" ]
2024-07-05T22:14:55Z
2024-07-05T22:14:55Z
2407.07917
Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape
Despite the promise of Federated Learning (FL) for privacy-preserving model training on distributed data, it remains susceptible to backdoor attacks. These attacks manipulate models by embedding triggers (specific input patterns) in the training data, forcing misclassification as predefined classes during deployment. Traditional single-trigger attacks and recent work on cooperative multiple-trigger attacks, where clients collaborate, highlight limitations in attack realism due to coordination requirements. We investigate a more alarming scenario: non-cooperative multiple-trigger attacks. Here, independent adversaries introduce distinct triggers targeting unique classes. These parallel attacks exploit FL's decentralized nature, making detection difficult. Our experiments demonstrate the alarming vulnerability of FL to such attacks, where individual backdoors can be successfully learned without impacting the main task. This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape. While our focus is on empirical analysis, we believe it can guide backdoor research toward more realistic settings, highlighting the crucial role of FL in building robust defenses against diverse backdoor threats. The code is available at https://anonymous.4open.science/r/nba-980F/.
http://arxiv.org/pdf/2407.07917v1
[ "Tuan Nguyen", "Dung Thuy Nguyen", "Khoa D Doan", "Kok-Seng Wong" ]
2024-07-05T22:03:13Z
2024-07-05T22:03:13Z
2406.16829
Understanding and Mitigating Tokenization Bias in Language Models
State-of-the-art language models are autoregressive and operate on subword units known as tokens. Specifically, one must encode the conditioning string into a list of tokens before passing to the language models for next-token prediction. We show that popular encoding schemes, such as maximum prefix encoding (MPE) and byte-pair-encoding (BPE), induce a sampling bias that cannot be mitigated with more training or data. To counter this universal problem, for each encoding scheme above, we propose a novel algorithm to obtain unbiased estimates from any language model trained on tokenized data. Our methods do not require finetuning the model, and the complexity, defined as the number of model runs, scales linearly with the sequence length in the case of MPE. As a result, we show that one can simulate token-free behavior from a tokenized language model. We empirically verify the correctness of our method through a Markov-chain setup, where it accurately recovers the transition probabilities, as opposed to the conventional method of directly prompting tokens into the language model.
http://arxiv.org/pdf/2406.16829v2
[ "Buu Phan", "Marton Havasi", "Matthew Muckley", "Karen Ullrich" ]
2024-07-05T21:49:08Z
2024-06-24T17:38:02Z
2406.04827
Black Box Differential Privacy Auditing Using Total Variation Distance
We present a practical method to audit the differential privacy (DP) guarantees of a machine learning model using a small hold-out dataset that is not exposed to the model during the training. Having a score function such as the loss function employed during the training, our method estimates the total variation (TV) distance between scores obtained with a subset of the training data and the hold-out dataset. With some meta information about the underlying DP training algorithm, these TV distance values can be converted to $(\varepsilon,\delta)$-guarantees for any $\delta$. We show that these score distributions asymptotically give lower bounds for the DP guarantees of the underlying training algorithm; however, we perform a one-shot estimation for practicality reasons. We specify conditions that lead to lower bounds for the DP guarantees with high probability. To estimate the TV distance between the score distributions, we use a simple density estimation method based on histograms. We show that the TV distance yields a near-optimally robust estimator and has an error rate of $\mathcal{O}(k^{-1/3})$, where $k$ is the total number of samples. Numerical experiments on benchmark datasets illustrate the effectiveness of our approach and show improvements over baseline methods for black-box auditing.
http://arxiv.org/pdf/2406.04827v2
[ "Antti Koskela", "Jafar Mohammadi" ]
2024-07-05T21:38:38Z
2024-06-07T10:52:15Z
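The core estimator above is easy to reproduce: histogram both score distributions on a common grid and take half the L1 distance between the normalized counts. A sketch with synthetic loss scores; the bin count and the score model are assumptions.

```python
import numpy as np

def tv_distance_hist(scores_train, scores_holdout, bins=30):
    """Histogram estimate of the total variation distance between two score
    distributions; larger TV means more leakage about training membership."""
    lo = min(scores_train.min(), scores_holdout.min())
    hi = max(scores_train.max(), scores_holdout.max())
    p, _ = np.histogram(scores_train, bins=bins, range=(lo, hi))
    q, _ = np.histogram(scores_holdout, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(0)
train_losses = rng.normal(0.8, 0.3, size=5000)    # losses on training members
holdout_losses = rng.normal(1.0, 0.3, size=5000)  # losses on unseen points
print(f"estimated TV: {tv_distance_hist(train_losses, holdout_losses):.3f}")
```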
2407.04871
Improving Knowledge Distillation in Transfer Learning with Layer-wise Learning Rates
Transfer learning methods start performing poorly when the complexity of the learning task is increased. Most of these methods calculate the cumulative differences of all the matched features and then use them to back-propagate that loss through all the layers. Contrary to these methods, in this work, we propose a novel layer-wise learning scheme that adjusts learning parameters per layer as a function of the differences in the Jacobian/Attention/Hessian of the output activations w.r.t. the network parameters. We applied this novel scheme to attention map-based and derivative-based (first and second order) transfer learning methods. We observed improved learning performance and stability across a wide range of datasets. From extensive experimental evaluation, we observed that the performance boost achieved by our method becomes more significant with the increasing difficulty of the learning task.
http://arxiv.org/pdf/2407.04871v1
[ "Shirley Kokane", "Mostofa Rafid Uddin", "Min Xu" ]
2024-07-05T21:35:17Z
2024-07-05T21:35:17Z
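As a hedged illustration of per-layer learning rates, the sketch below scales each layer's step by an inverse gradient-norm factor. This is a simple stand-in for the Jacobian/Attention/Hessian-difference signals the abstract describes, which are not reproduced here.

```python
import numpy as np

def layerwise_sgd_step(params, grads, base_lr=0.1):
    """One SGD step where each layer's learning rate is scaled by the
    inverse norm of its gradient, so steep layers take calmer steps."""
    new_params = []
    for W, G in zip(params, grads):
        scale = 1.0 / (1.0 + np.linalg.norm(G))   # per-layer adjustment
        new_params.append(W - base_lr * scale * G)
    return new_params

rng = np.random.default_rng(0)
params = [rng.normal(size=(8, 8)), rng.normal(size=(8, 2))]
grads = [rng.normal(size=(8, 8)), 10 * rng.normal(size=(8, 2))]  # steep layer
params = layerwise_sgd_step(params, grads)
print([np.linalg.norm(p) for p in params])
```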
2206.05248
Accelerated Algorithms for Constrained Nonconvex-Nonconcave Min-Max Optimization and Comonotone Inclusion
We study constrained comonotone min-max optimization, a structured class of nonconvex-nonconcave min-max optimization problems, and their generalization to comonotone inclusion. In our first contribution, we extend the Extra Anchored Gradient (EAG) algorithm, originally proposed by Yoon and Ryu (2021) for unconstrained min-max optimization, to constrained comonotone min-max optimization and comonotone inclusion, achieving an optimal convergence rate of $O\left(\frac{1}{T}\right)$ among all first-order methods. Additionally, we prove that the algorithm's iterations converge to a point in the solution set. In our second contribution, we extend the Fast Extra Gradient (FEG) algorithm, as developed by Lee and Kim (2021), to constrained comonotone min-max optimization and comonotone inclusion, achieving the same $O\left(\frac{1}{T}\right)$ convergence rate. This rate is applicable to the broadest set of comonotone inclusion problems yet studied in the literature. Our analyses are based on simple potential function arguments, which might be useful for analyzing other accelerated algorithms.
http://arxiv.org/pdf/2206.05248v5
[ "Yang Cai", "Argyris Oikonomou", "Weiqiang Zheng" ]
2024-07-05T21:11:50Z
2022-06-10T17:44:06Z
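The anchoring idea in EAG is concrete enough to sketch: each update mixes an extragradient step for the operator $F$ with a pull towards the initial anchor point, with a weight on the order of $1/(k+2)$. The step size, anchor schedule, and the bilinear test problem below are illustrative assumptions rather than the paper's constrained setting.

```python
import numpy as np

def eag(F, z0, eta=0.1, T=200):
    """Anchored extragradient sketch: an extragradient step for F plus a
    pull towards the anchor z0 with diminishing weight ~ 1/(k+2)."""
    z = z0.copy()
    for k in range(T):
        beta = 1.0 / (k + 2)
        z_half = z + beta * (z0 - z) - eta * F(z)       # extrapolation step
        z = z + beta * (z0 - z) - eta * F(z_half)       # anchored update
    return z

# Bilinear saddle problem min_x max_y x*y: F(x, y) = (y, -x), solution (0, 0).
F = lambda z: np.array([z[1], -z[0]])
print(eag(F, np.array([1.0, 1.0])))   # converges towards the saddle point
```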
2407.04866
Explainable Metric Learning for Deflating Data Bias
Image classification is an essential part of computer vision which assigns a given input image to a specific category based on the similarity evaluation within given criteria. While promising classifiers can be obtained through deep learning models, these approaches lack explainability, where the classification results are hard to interpret in a human-understandable way. In this paper, we present an explainable metric learning framework, which constructs hierarchical levels of semantic segments of an image for better interpretability. The key methodology involves a bottom-up learning strategy, starting by training the local metric learning model for the individual segments and then combining segments to compose comprehensive metrics in a tree. Specifically, our approach enables a more human-understandable similarity measurement between two images based on the semantic segments within it, which can be utilized to generate new samples to reduce bias in a training dataset. Extensive experimental evaluation demonstrates that the proposed approach can drastically improve model accuracy compared with state-of-the-art methods.
http://arxiv.org/pdf/2407.04866v1
[ "Emma Andrews", "Prabhat Mishra" ]
2024-07-05T21:07:27Z
2024-07-05T21:07:27Z
2407.04864
Augmented Bayesian Policy Search
Deterministic policies are often preferred over stochastic ones when implemented on physical systems. They can prevent erratic and harmful behaviors while being easier to implement and interpret. However, in practice, exploration is largely performed by stochastic policies. First-order Bayesian Optimization (BO) methods offer a principled way of performing exploration using deterministic policies. This is done through a learned probabilistic model of the objective function and its gradient. Nonetheless, such approaches treat policy search as a black-box problem, and thus, neglect the reinforcement learning nature of the problem. In this work, we leverage the performance difference lemma to introduce a novel mean function for the probabilistic model. This results in augmenting BO methods with the action-value function. Hence, we call our method Augmented Bayesian Search (ABS). Interestingly, this new mean function enhances the posterior gradient with the deterministic policy gradient, effectively bridging the gap between BO and policy gradient methods. The resulting algorithm combines the convenience of the direct policy search with the scalability of reinforcement learning. We validate ABS on high-dimensional locomotion problems and demonstrate competitive performance compared to existing direct policy search schemes.
http://arxiv.org/pdf/2407.04864v1
[ "Mahdi Kallel", "Debabrota Basu", "Riad Akrour", "Carlo D'Eramo" ]
2024-07-05T20:56:45Z
2024-07-05T20:56:45Z
2403.06963
The pitfalls of next-token prediction
Can a mere next-token predictor faithfully model human intelligence? We crystallize this emerging concern and correct popular misconceptions surrounding it, and advocate a simple multi-token objective. As a starting point, we argue that the two often-conflated phases of next-token prediction -- autoregressive inference and teacher-forced training -- must be treated distinctly. The popular criticism that errors can compound during autoregressive inference, crucially assumes that teacher-forcing has learned an accurate next-token predictor. This assumption sidesteps a more deep-rooted problem we expose: in certain classes of tasks, teacher-forcing can simply fail to learn an accurate next-token predictor in the first place. We describe a general mechanism of how teacher-forcing can fail, and design a minimal planning task where both the Transformer and the Mamba architecture empirically fail in that manner -- remarkably, despite the task being straightforward to learn. Finally, we provide preliminary evidence that this failure can be resolved using a simple modification that predicts multiple tokens in advance. We hope this finding can ground future debates and inspire explorations beyond the next-token prediction paradigm. We make our code available under https://github.com/gregorbachmann/Next-Token-Failures
http://arxiv.org/pdf/2403.06963v2
[ "Gregor Bachmann", "Vaishnavh Nagarajan" ]
2024-07-05T20:48:04Z
2024-03-11T17:47:30Z
2407.07916
Benchmarking GNNs Using Lightning Network Data
The Bitcoin Lightning Network is a layer 2 protocol designed to facilitate fast and inexpensive Bitcoin transactions. It operates by establishing channels between users, where Bitcoin is locked and transactions are conducted off-chain until the channels are closed, with only the initial and final transactions recorded on the blockchain. Routing transactions through intermediary nodes is crucial for users without direct channels, allowing these routing nodes to collect fees for their services. Nodes announce their channels to the network, forming a graph with channels as edges. In this paper, we analyze the graph structure of the Lightning Network and investigate the statistical relationships between node properties using machine learning, particularly Graph Neural Networks (GNNs). We formulate a series of tasks to explore these relationships and provide benchmarks for GNN architectures, demonstrating how topological and neighbor information enhances performance. Our evaluation of several models reveals the effectiveness of GNNs in these tasks and highlights the insights gained from their application.
http://arxiv.org/pdf/2407.07916v1
[ "Rainer Feichtinger", "Florian Grötschla", "Lioba Heimbach", "Roger Wattenhofer" ]
2024-07-05T20:35:57Z
2024-07-05T20:35:57Z
2407.04856
Explorative Imitation Learning: A Path Signature Approach for Continuous Environments
Some imitation learning methods combine behavioural cloning with self-supervision to infer actions from state pairs. However, most rely on a large number of expert trajectories to increase generalisation and human intervention to capture key aspects of the problem, such as domain constraints. In this paper, we propose Continuous Imitation Learning from Observation (CILO), a new method augmenting imitation learning with two important features: (i) exploration, allowing for more diverse state transitions, requiring fewer expert trajectories and resulting in fewer training iterations; and (ii) path signatures, allowing for automatic encoding of constraints, through the creation of non-parametric representations of agents and expert trajectories. We compared CILO with a baseline and two leading imitation learning methods in five environments. It had the best overall performance of all methods in all environments, outperforming the expert in two of them.
http://arxiv.org/pdf/2407.04856v1
[ "Nathan Gavenski", "Juarez Monteiro", "Felipe Meneguzzi", "Michael Luck", "Odinaldo Rodrigues" ]
2024-07-05T20:25:39Z
2024-07-05T20:25:39Z
2310.18606
Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation
As location-based services (LBS) have grown in popularity, more human mobility data has been collected. The collected data can be used to build machine learning (ML) models for LBS to enhance their performance and improve overall experience for users. However, the convenience comes with the risk of privacy leakage since this type of data might contain sensitive information related to user identities, such as home/work locations. Prior work focuses on protecting mobility data privacy during transmission or prior to release, lacking the privacy risk evaluation of mobility data-based ML models. To better understand and quantify the privacy leakage in mobility data-based ML models, we design a privacy attack suite containing data extraction and membership inference attacks tailored for point-of-interest (POI) recommendation models, one of the most widely used mobility data-based ML models. These attacks in our attack suite assume different adversary knowledge and aim to extract different types of sensitive information from mobility data, providing a holistic privacy risk assessment for POI recommendation models. Our experimental evaluation using two real-world mobility datasets demonstrates that current POI recommendation models are vulnerable to our attacks. We also present unique findings to understand what types of mobility data are more susceptible to privacy attacks. Finally, we evaluate defenses against these attacks and highlight future directions and challenges. Our attack suite is released at https://github.com/KunlinChoi/POIPrivacy.
http://arxiv.org/abs/2310.18606v2
[ "Kunlin Cai", "Jinghuai Zhang", "Zhiqing Hong", "Will Shand", "Guang Wang", "Desheng Zhang", "Jianfeng Chi", "Yuan Tian" ]
2024-07-05T20:17:38Z
2023-10-28T06:17:52Z
2407.04842
MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?
While text-to-image models like DALLE-3 and Stable Diffusion are rapidly proliferating, they often encounter challenges such as hallucination, bias, and the production of unsafe, low-quality output. To effectively address these issues, it is crucial to align these models with desired behaviors based on feedback from a multimodal judge. Despite their significance, current multimodal judges frequently undergo inadequate evaluation of their capabilities and limitations, potentially leading to misalignment and unsafe fine-tuning outcomes. To address this issue, we introduce MJ-Bench, a novel benchmark which incorporates a comprehensive preference dataset to evaluate multimodal judges in providing feedback for image generation models across four key perspectives: alignment, safety, image quality, and bias. Specifically, we evaluate a large variety of multimodal judges including smaller-sized CLIP-based scoring models, open-source VLMs (e.g. LLaVA family), and closed-source VLMs (e.g. GPT-4o, Claude 3) on each decomposed subcategory of our preference dataset. Experiments reveal that closed-source VLMs generally provide better feedback, with GPT-4o outperforming other judges on average. Compared with open-source VLMs, smaller-sized scoring models can provide better feedback regarding text-image alignment and image quality, while VLMs provide more accurate feedback regarding safety and generation bias due to their stronger reasoning capabilities. Further studies in feedback scale reveal that VLM judges can generally provide more accurate and stable feedback in natural language (Likert-scale) than numerical scales. Notably, human evaluations on end-to-end fine-tuned models using separate feedback from these multimodal judges provide similar conclusions, further confirming the effectiveness of MJ-Bench. All data, code, and models are available at https://huggingface.co/MJ-Bench.
http://arxiv.org/pdf/2407.04842v1
[ "Zhaorun Chen", "Yichao Du", "Zichen Wen", "Yiyang Zhou", "Chenhang Cui", "Zhenzhen Weng", "Haoqin Tu", "Chaoqi Wang", "Zhengwei Tong", "Qinglan Huang", "Canyu Chen", "Qinghao Ye", "Zhihong Zhu", "Yuqing Zhang", "Jiawei Zhou", "Zhuokai Zhao", "Rafael Rafailov", "Chelsea Finn", "Huaxiu Yao" ]
2024-07-05T20:03:16Z
2024-07-05T20:03:16Z
2310.15097
A Canonical Data Transformation for Achieving Inter- and Within-group Fairness
Increases in the deployment of machine learning algorithms for applications that deal with sensitive data have brought attention to the issue of fairness in machine learning. Many works have been devoted to applications that require different demographic groups to be treated fairly. However, algorithms that aim to satisfy inter-group fairness (also called group fairness) may inadvertently treat individuals within the same demographic group unfairly. To address this issue, we introduce a formal definition of within-group fairness that maintains fairness among individuals from within the same group. We propose a pre-processing framework to meet both inter- and within-group fairness criteria with little compromise in accuracy. The framework maps the feature vectors of members from different groups to an inter-group-fair canonical domain before feeding them into a scoring function. The mapping is constructed to preserve the relative relationship between the scores obtained from the unprocessed feature vectors of individuals from the same demographic group, guaranteeing within-group fairness. We apply this framework to the COMPAS risk assessment and Law School datasets and compare its performance in achieving inter-group and within-group fairness to two regularization-based methods.
http://arxiv.org/abs/2310.15097v2
[ "Zachary McBride Lazri", "Ivan Brugere", "Xin Tian", "Dana Dachman-Soled", "Antigoni Polychroniadou", "Danial Dervovic", "Min Wu" ]
2024-07-05T19:58:12Z
2023-10-23T17:00:20Z
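As a toy illustration of the canonical-domain idea in the entry above, the sketch below maps each group's scores through that group's empirical CDF: the map is monotone within each group (preserving within-group ranking) and sends every group to the same uniform canonical domain. This is one simple instantiation under our own assumptions, not the paper's exact construction.

```python
import numpy as np

def canonical_map(scores: np.ndarray, groups: np.ndarray) -> np.ndarray:
    """Map each group's scores through its own empirical CDF.

    Monotone per group (within-group fairness: relative ranking is
    preserved) and every group lands in the same uniform canonical
    domain (inter-group fairness: transformed scores are identically
    distributed across groups)."""
    out = np.empty_like(scores, dtype=float)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = scores[idx].argsort().argsort()     # 0 .. n_g - 1 within group
        out[idx] = (ranks + 1) / (len(idx) + 1)     # empirical CDF in (0, 1)
    return out

scores = np.array([2.0, 5.0, 3.0, 10.0, 7.0, 1.0])
groups = np.array([0, 0, 0, 1, 1, 1])
print(canonical_map(scores, groups))   # same set of values in each group
```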
2407.04841
Associative Recurrent Memory Transformer
This paper addresses the challenge of creating a neural architecture for very long sequences that requires constant time for processing new information at each time step. Our approach, Associative Recurrent Memory Transformer (ARMT), is based on transformer self-attention for local context and segment-level recurrence for storage of task-specific information distributed over a long context. We demonstrate that ARMT outperforms existing alternatives in associative retrieval tasks and sets a new performance record in the recent BABILong multi-task long-context benchmark by answering single-fact questions over 50 million tokens with an accuracy of 79.9%. The source code for training and evaluation is available on GitHub.
http://arxiv.org/pdf/2407.04841v1
[ "Ivan Rodkin", "Yuri Kuratov", "Aydar Bulatov", "Mikhail Burtsev" ]
2024-07-05T19:57:49Z
2024-07-05T19:57:49Z
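A minimal sketch of the segment-level recurrence described above: a fixed-size linear associative memory is read from and written to once per segment, so the cost of processing a new step stays constant regardless of total sequence length. The outer-product memory and the identity key/value projections are simplifying assumptions for illustration, not ARMT's actual parameterization.

```python
import numpy as np

def process_long_sequence(segments, d=64):
    """Segment-level recurrence with a linear associative memory.

    Local self-attention is stubbed out; the point is that each segment
    reads from and writes to a fixed-size memory, keeping per-step cost
    independent of how many segments came before."""
    memory = np.zeros((d, d))                   # associative store
    outputs = []
    for seg in segments:                        # seg: (seg_len, d)
        keys, values = seg, seg                 # stand-in for learned projections
        retrieved = keys @ memory.T             # read by key similarity
        outputs.append(seg + retrieved)         # fuse retrieval with local context
        memory += values.T @ keys / len(seg)    # write via outer products
    return np.concatenate(outputs)

rng = np.random.default_rng(0)
segments = [rng.normal(size=(8, 64)) for _ in range(4)]
print(process_long_sequence(segments).shape)    # (32, 64)
```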
2407.04822
YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation
Multi-instrument music transcription aims to convert polyphonic music recordings into musical scores assigned to each instrument. This task is challenging to model as it requires simultaneously identifying multiple instruments and transcribing their pitch and precise timing, and the lack of fully annotated data adds to the training difficulties. This paper introduces YourMT3+, a suite of models for enhanced multi-instrument music transcription based on the recent language token decoding approach of MT3. We strengthen its encoder by adopting a hierarchical attention transformer in the time-frequency domain and integrating a mixture of experts (MoE). To address data limitations, we introduce a new multi-channel decoding method for training with incomplete annotations and propose intra- and cross-stem augmentation for dataset mixing. Our experiments demonstrate direct vocal transcription capabilities, eliminating the need for voice separation pre-processors. Benchmarks across ten public datasets show our models' competitiveness with, or superiority to, existing transcription models. Further testing on pop music recordings highlights the limitations of current models. Fully reproducible code and datasets are available at https://github.com/mimbres/YourMT3
http://arxiv.org/pdf/2407.04822v1
[ "Sungkyun Chang", "Emmanouil Benetos", "Holger Kirchhoff", "Simon Dixon" ]
2024-07-05T19:18:33Z
2024-07-05T19:18:33Z
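The cross-stem augmentation mentioned above lends itself to a compact sketch: draw instrument stems from two different recordings, apply random per-stem gains, and sum a random subset into a new training mixture whose annotation is the set of chosen stems. The function below is an illustrative recipe under our own assumptions, not the paper's exact pipeline.

```python
import numpy as np

def cross_stem_mix(stems_a, stems_b, rng=np.random.default_rng(0)):
    """Build a new training mixture from stems of two recordings.

    `stems_*` map instrument name -> mono waveform (np.ndarray).
    Returns the mixed audio and the instrument labels it contains."""
    pool = list(stems_a.items()) + list(stems_b.items())
    k = rng.integers(2, len(pool) + 1)              # how many stems to keep
    chosen = rng.choice(len(pool), size=k, replace=False)
    n = min(len(w) for _, w in pool)                # crudely align lengths
    mix, labels = np.zeros(n), []
    for i in chosen:
        name, wave = pool[i]
        mix += rng.uniform(0.5, 1.0) * wave[:n]     # random gain per stem
        labels.append(name)
    return mix, labels

a = {"piano": np.ones(100), "drums": np.ones(100)}
b = {"vocals": np.ones(120)}
audio, labels = cross_stem_mix(a, b)
print(audio.shape, labels)
```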
2407.04819
RPN: Reconciled Polynomial Network Towards Unifying PGMs, Kernel SVMs, MLP and KAN
In this paper, we introduce a novel deep model named Reconciled Polynomial Network (RPN) for deep function learning. RPN has a very general architecture and can be used to build models with various complexities, capacities, and levels of completeness, all of which contribute to the correctness of these models. As indicated in the subtitle, RPN can also serve as the backbone to unify different base models into one canonical representation. This includes non-deep models, like probabilistic graphical models (PGMs) - such as Bayesian networks and Markov networks - and kernel support vector machines (kernel SVMs), as well as deep models like the classic multi-layer perceptron (MLP) and the recent Kolmogorov-Arnold network (KAN). Technically, RPN proposes to disentangle the underlying function to be inferred into the inner product of a data expansion function and a parameter reconciliation function. Together with the remainder function, RPN accurately approximates the underlying functions that govern data distributions. The data expansion functions in RPN project data vectors from the input space to a high-dimensional intermediate space, as specified by their definitions. Meanwhile, RPN also introduces the parameter reconciliation functions to fabricate a small number of parameters into a higher-order parameter matrix to address the "curse of dimensionality" problem caused by the data expansions. Moreover, the remainder functions provide RPN with additional complementary information to reduce potential approximation errors. We conducted extensive empirical experiments on numerous benchmark datasets across multiple modalities, including continuous function datasets, discrete vision and language datasets, and classic tabular datasets, to investigate the effectiveness of RPN.
http://arxiv.org/pdf/2407.04819v1
[ "Jiawei Zhang" ]
2024-07-05T19:00:18Z
2024-07-05T19:00:18Z
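The disentanglement at the heart of the entry above can be written as g(x) ≈ ⟨κ(x), ψ(w)⟩ + π(x), with κ the data expansion, ψ the parameter reconciliation, and π the remainder. Below is a minimal numeric sketch assuming a degree-2 Taylor-style expansion and a rank-1 outer-product reconciliation; each is just one choice from the families the paper unifies.

```python
import numpy as np

d = 3                        # input dimension
D = d + d * d                # expanded dimension: [x, x_i * x_j]

def kappa(x):
    """Data expansion: degree-2 Taylor-style features (dim d -> D)."""
    return np.concatenate([x, np.outer(x, x).ravel()])

def psi(u, v):
    """Parameter reconciliation: fabricate D = len(u)*len(v) effective
    parameters from the small vectors u, v via a rank-1 outer product,
    countering the dimensionality blow-up of the expansion."""
    return np.outer(u, v).ravel()

def pi(x):
    """Remainder: a simple complementary term (zero in this sketch)."""
    return 0.0

def rpn(x, u, v):
    """RPN's disentangled form: <kappa(x), psi(w)> + pi(x)."""
    return kappa(x) @ psi(u, v) + pi(x)

rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=3)   # 7 raw params stand in for 12
print(rpn(rng.normal(size=d), u, v))
```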
2407.04811
Simplifying Deep Temporal Difference Learning
Q-learning played a foundational role in the field of reinforcement learning (RL). However, TD algorithms that use off-policy data, such as Q-learning, or nonlinear function approximation, like deep neural networks, require several additional tricks to stabilise training, primarily a replay buffer and target networks. Unfortunately, the delayed updating of frozen network parameters in the target network harms sample efficiency and, similarly, the replay buffer introduces memory and implementation overheads. In this paper, we investigate whether it is possible to accelerate and simplify TD training while maintaining its stability. Our key theoretical result demonstrates for the first time that regularisation techniques such as LayerNorm can yield provably convergent TD algorithms without the need for a target network, even with off-policy data. Empirically, we find that online, parallelised sampling enabled by vectorised environments stabilises training without the need for a replay buffer. Motivated by these findings, we propose PQN, our simplified deep online Q-learning algorithm. Surprisingly, this simple algorithm is competitive with more complex methods like Rainbow in Atari, R2D2 in Hanabi, QMix in Smax, and PPO-RNN in Craftax, and can be up to 50x faster than traditional DQN without sacrificing sample efficiency. In an era where PPO has become the go-to RL algorithm, PQN reestablishes Q-learning as a viable alternative. We make our code available at: https://github.com/mttga/purejaxql.
http://arxiv.org/pdf/2407.04811v1
[ "Matteo Gallici", "Mattie Fellows", "Benjamin Ellis", "Bartomeu Pou", "Ivan Masmitja", "Jakob Nicolaus Foerster", "Mario Martin" ]
2024-07-05T18:49:07Z
2024-07-05T18:49:07Z
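A minimal sketch of the recipe the entry above describes: a Q-network with LayerNorm after each hidden layer, trained with a semi-gradient TD(0) loss that bootstraps from the same network (no target network) on batches that would come straight from vectorised environments (no replay buffer). Layer sizes and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LayerNormQNet(nn.Module):
    """Q-network with LayerNorm after each hidden layer; the paper's
    theory says this regularisation can make TD provably convergent
    without a separate target network."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def td_loss(q_net, obs, actions, rewards, next_obs, dones, gamma=0.99):
    """One online TD(0) step: no target network, no replay buffer."""
    q = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():   # semi-gradient target from the same network
        target = rewards + gamma * (1 - dones) * q_net(next_obs).max(1).values
    return nn.functional.mse_loss(q, target)

q_net = LayerNormQNet(obs_dim=4, n_actions=2)
opt = torch.optim.Adam(q_net.parameters(), lr=3e-4)
obs, next_obs = torch.randn(32, 4), torch.randn(32, 4)   # from parallel envs
actions, rewards = torch.randint(0, 2, (32,)), torch.randn(32)
loss = td_loss(q_net, obs, actions, rewards, next_obs, torch.zeros(32))
opt.zero_grad(); loss.backward(); opt.step()
```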
2407.04804
Fair Submodular Cover
Submodular optimization is a fundamental problem with many applications in machine learning, often involving decision-making over datasets with sensitive attributes such as gender or age. In such settings, it is often desirable to produce a diverse solution set that is fairly distributed with respect to these attributes. Motivated by this, we initiate the study of Fair Submodular Cover (FSC): given a ground set $U$, a monotone submodular function $f:2^U\to\mathbb{R}_{\ge 0}$, and a threshold $\tau$, the goal is to find a balanced subset $S$ of minimum cardinality such that $f(S)\ge\tau$. We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$. We then present a continuous algorithm that achieves a $(\ln\frac{1}{\epsilon}, 1-O(\epsilon))$-bicriteria approximation ratio, which matches the best approximation guarantee of submodular cover without a fairness constraint. Finally, we complement our theoretical results with a number of empirical evaluations that demonstrate the effectiveness of our algorithms on instances of maximum coverage.
http://arxiv.org/pdf/2407.04804v1
[ "Wenjing Chen", "Shuo Xing", "Samson Zhou", "Victoria G. Crawford" ]
2024-07-05T18:37:09Z
2024-07-05T18:37:09Z
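As a toy illustration of the FSC setting above, the sketch below runs a balance-aware greedy on a maximum-coverage instance: at each step it picks, from the currently least-represented group, the element with the largest marginal coverage gain, stopping once $f(S)\ge(1-\epsilon)\tau$. This is a heuristic in the spirit of the bicriteria guarantee, not the paper's algorithm.

```python
def fair_greedy_cover(cover_sets, group, tau, eps=0.1):
    """Balance-aware greedy on a max-coverage instance of FSC, where
    f(S) = |union of cover_sets[e] for e in S|. Toy heuristic only."""
    S, covered = [], set()
    counts = {g: 0 for g in set(group.values())}
    while len(covered) < (1 - eps) * tau:
        g_min = min(counts, key=counts.get)     # most underrepresented group
        cand = [e for e in cover_sets if e not in S and group[e] == g_min]
        if not cand:                            # group exhausted: give up
            break
        best = max(cand, key=lambda e: len(cover_sets[e] - covered))
        S.append(best)
        covered |= cover_sets[best]
        counts[g_min] += 1
    return S, covered

cover_sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5, 6}, "d": {1, 6, 7}}
group = {"a": 0, "b": 0, "c": 1, "d": 1}
print(fair_greedy_cover(cover_sets, group, tau=6))
```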
2311.14101
Neural Subnetwork Ensembles
Neural network ensembles have been effectively used to improve generalization by combining the predictions of multiple independently trained models. However, the growing scale and complexity of deep neural networks have led to these methods becoming prohibitively expensive and time-consuming to implement. Low-cost ensemble methods have become increasingly important as they can alleviate the need to train multiple models from scratch while retaining the generalization benefits that traditional ensemble learning methods afford. This dissertation introduces and formalizes a low-cost framework for constructing Subnetwork Ensembles, where a collection of child networks is formed by sampling, perturbing, and optimizing subnetworks from a trained parent model. We explore several distinct methodologies for generating child networks and evaluate their efficacy through a variety of ablation studies and established benchmarks. Our findings reveal that this approach can greatly improve training efficiency, parametric utilization, and generalization performance while minimizing computational cost. Subnetwork Ensembles offer a compelling framework for exploring how we can build better systems by leveraging the unrealized potential of deep neural networks.
http://arxiv.org/pdf/2311.14101v2
[ "Tim Whitaker" ]
2024-07-05T18:29:34Z
2023-11-23T17:01:16Z
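The parent-to-child construction described above reduces, in its simplest form, to masking and perturbing a trained model's weights. The sketch below forms each child by keeping a random fraction of every weight tensor and adding small Gaussian noise to the survivors; in the dissertation's framework the children would then be briefly optimized before ensembling. Mask ratio and noise scale are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

def spawn_child(parent: nn.Module, keep=0.5, noise_std=0.01):
    """Form a child network by (i) sampling a random binary mask that
    keeps `keep` of each weight tensor and (ii) perturbing the
    surviving weights with small Gaussian noise."""
    child = copy.deepcopy(parent)
    with torch.no_grad():
        for p in child.parameters():
            mask = (torch.rand(p.shape) < keep).float()
            p.mul_(mask)                                  # sample a subnetwork
            p.add_(noise_std * torch.randn(p.shape) * mask)   # perturb survivors
    return child

parent = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
ensemble = [spawn_child(parent) for _ in range(4)]
x = torch.randn(1, 10)
preds = torch.stack([child(x) for child in ensemble]).mean(0)  # ensemble average
print(preds)
```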
2407.04803
The Impact of Quantization and Pruning on Deep Reinforcement Learning Models
Deep reinforcement learning (DRL) has achieved remarkable success across various domains, such as video games, robotics, and, recently, large language models. However, the computational costs and memory requirements of DRL models often limit their deployment in resource-constrained environments. This challenge underscores the urgent need to explore neural network compression methods to make DRL models more practical and broadly applicable. Our study investigates the impact of two prominent compression methods, quantization and pruning, on DRL models. We examine how these techniques influence four performance factors: average return, memory, inference time, and battery utilization across various DRL algorithms and environments. We find that, despite reducing model size, these compression techniques generally do not improve the energy efficiency of DRL models. We provide insights into the trade-offs between model compression and DRL performance, offering guidelines for deploying efficient DRL models in resource-constrained settings.
http://arxiv.org/pdf/2407.04803v1
[ "Heng Lu", "Mehdi Alemi", "Reza Rawassizadeh" ]
2024-07-05T18:21:17Z
2024-07-05T18:21:17Z
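For concreteness, here is how the two compression methods studied above are typically applied to a small Q-network in PyTorch: unstructured L1 magnitude pruning followed by post-training dynamic int8 quantization of the linear layers. The network and compression ratios are illustrative stand-ins, not the paper's experimental configuration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small illustrative policy/Q-network.
net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))

# (1) Magnitude pruning: zero out the 50% smallest weights per layer.
for module in net:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")      # bake the mask into the tensor

# (2) Post-training dynamic quantization: int8 weights for Linear layers.
quantized = torch.quantization.quantize_dynamic(
    net, {nn.Linear}, dtype=torch.qint8
)

obs = torch.randn(1, 8)
print(quantized(obs))                       # inference with compressed model
```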
2403.00793
Ads Recommendation in a Collapsed and Entangled World
We present Tencent's ads recommendation system and examine the challenges and practices of learning appropriate recommendation representations. Our study begins by showcasing our approaches to preserving prior knowledge when encoding features of diverse types into embedding representations. We specifically address sequence features, numeric features, and pre-trained embedding features. Subsequently, we delve into two crucial challenges related to feature representation: the dimensional collapse of embeddings and the interest entanglement across different tasks or scenarios. We propose several practical approaches to address these challenges that result in robust and disentangled recommendation representations. We then explore several training techniques to facilitate model optimization, reduce bias, and enhance exploration. Additionally, we introduce three analysis tools that enable us to study feature correlation, dimensional collapse, and interest entanglement. This work builds upon the continuous efforts of Tencent's ads recommendation team over the past decade. It summarizes general design principles and presents a series of readily applicable solutions and analysis tools. The reported performance is based on our online advertising platform, which handles hundreds of billions of requests daily and serves millions of ads to billions of users.
http://arxiv.org/abs/2403.00793v2
[ "Junwei Pan", "Wei Xue", "Ximei Wang", "Haibin Yu", "Xun Liu", "Shijie Quan", "Xueming Qiu", "Dapeng Liu", "Lei Xiao", "Jie Jiang" ]
2024-07-05T18:20:15Z
2024-02-22T22:47:08Z
2405.11622
Continuous Predictive Modeling of Clinical Notes and ICD Codes in Patient Health Records
Electronic Health Records (EHR) serve as a valuable source of patient information, offering insights into medical histories, treatments, and outcomes. Previous research has developed systems for detecting applicable ICD codes that should be assigned while writing a given EHR document, mainly focusing on discharge summaries written at the end of a hospital stay. In this work, we investigate the potential of predicting these codes for the whole patient stay at different time points during their stay, even before they are officially assigned by clinicians. The development of methods to predict diagnoses and treatments in advance could open opportunities for predictive medicine, such as identifying disease risks sooner, suggesting treatments, and optimizing resource allocation. Our experiments show that predictions regarding final ICD codes can be made as early as two days after admission, and we propose a custom model that improves performance on this early prediction task.
http://arxiv.org/pdf/2405.11622v2
[ "Mireia Hernandez Caralt", "Clarence Boon Liang Ng", "Marek Rei" ]
2024-07-05T18:14:48Z
2024-05-19T17:23:04Z
2407.04797
Revealing the Utilized Rank of Subspaces of Learning in Neural Networks
In this work, we study how well the learned weights of a neural network utilize the space available to them. This notion is related to capacity, but additionally incorporates the interaction of the network architecture with the dataset. Most learned weights appear to be full rank, and are therefore not amenable to low rank decomposition. This deceptively implies that the weights are utilizing the entire space available to them. We propose a simple data-driven transformation that projects the weights onto the subspace where the data and the weight interact. This preserves the functional mapping of the layer and reveals its low rank structure. In our findings, we conclude that most models utilize a fraction of the available space. For instance, for ViTB-16 and ViTL-16 trained on ImageNet, the mean layer utilization is 35% and 20% respectively. Our transformation results in reducing the parameters to 50% and 25% respectively, while resulting in less than 0.2% accuracy drop after fine-tuning. We also show that self-supervised pre-training drives this utilization up to 70%, justifying its suitability for downstream tasks.
http://arxiv.org/pdf/2407.04797v1
[ "Isha Garg", "Christian Koguchi", "Eshan Verma", "Daniel Ulbricht" ]
2024-07-05T18:14:39Z
2024-07-05T18:14:39Z
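A sketch of the data-driven transformation described above: estimate the covariance of a layer's inputs, keep the top eigendirections carrying most of the variance, and project the weight matrix onto that subspace. For inputs lying in the retained subspace the layer's mapping is approximately preserved, while the projected weight exposes its low utilized rank. The energy threshold and shapes below are illustrative assumptions.

```python
import numpy as np

def project_to_data_subspace(W, X, energy=0.99):
    """Project weight W (out x in) onto the input subspace of X (n x in)
    that carries `energy` of the variance. For x in that subspace,
    W_proj @ x ~= W @ x, so the functional mapping is preserved while
    the projected weight reveals its low-rank structure."""
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratios, energy)) + 1      # directions to keep
    P = eigvecs[:, :k]
    return W @ P @ P.T, k                             # k ~ "utilized rank"

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8)) @ np.diag([5, 4, 3, 0.1, 0.1, 0, 0, 0])
W = rng.normal(size=(16, 8))
W_proj, k = project_to_data_subspace(W, X)
print(k, np.linalg.matrix_rank(W_proj))               # rank(W_proj) <= k
```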
2407.04794
On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks
Large Language Models (LLMs) excel in various applications, including text generation and complex tasks. However, the misuse of LLMs raises concerns about the authenticity and ethical implications of the content they produce, such as deepfake news, academic fraud, and copyright infringement. Watermarking techniques, which embed identifiable markers in machine-generated text, offer a promising solution to these issues by allowing for content verification and origin tracing. Unfortunately, the robustness of current LLM watermarking schemes under potential watermark removal attacks has not been comprehensively explored. In this paper, to fill this gap, we first systematically survey the mainstream watermarking schemes and removal attacks on machine-generated texts, and then we categorize them into pre-text (before text generation) and post-text (after text generation) classes so that we can conduct diversified analyses. In our experiments, we evaluate eight watermarks (five pre-text, three post-text) and twelve attacks (two pre-text, ten post-text) across 87 scenarios. Evaluation results indicate that (1) KGW and Exponential watermarks offer high text quality and watermark retention but remain vulnerable to most attacks; (2) Post-text attacks are found to be more efficient and practical than pre-text attacks; (3) Pre-text watermarks are generally more imperceptible, as they do not alter text fluency, unlike post-text watermarks; (4) Additionally, combined attack methods can significantly increase effectiveness, highlighting the need for more robust watermarking solutions. Our study underscores the vulnerabilities of current techniques and the necessity for developing more resilient schemes.
http://arxiv.org/pdf/2407.04794v1
[ "Zesen Liu", "Tianshuo Cong", "Xinlei He", "Qi Li" ]
2024-07-05T18:09:06Z
2024-07-05T18:09:06Z
2407.04787
Re-Tuning: Overcoming the Compositionality Limits of Large Language Models with Recursive Tuning
We present a new method for large language models to solve compositional tasks. Although they have shown strong performance on traditional language understanding tasks, large language models struggle to solve compositional tasks, where the solution depends on solving smaller instances of the same problem. We propose a natural approach to solve compositional tasks recursively. Our method, Re-Tuning, tunes models to break down a problem into subproblems, solve those subproblems, and combine the results. We show that our method significantly improves model performance on three representative compositional tasks: integer addition, dynamic programming, and parity. Compared to state-of-the-art methods that keep intermediate steps towards solving the problems, Re-Tuning achieves significantly higher accuracy and is more GPU memory efficient.
http://arxiv.org/pdf/2407.04787v1
[ "Eric Pasewark", "Kyle Montgomery", "Kefei Duan", "Dawn Song", "Chenguang Wang" ]
2024-07-05T18:02:28Z
2024-07-05T18:02:28Z
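The recursive decomposition that Re-Tuning trains models to perform can be illustrated on integer addition, one of the paper's three tasks: peel off the last digits, recurse on the shorter prefixes, then combine the sub-result with the carry. In the sketch below, plain Python plays the role of the tuned model.

```python
def recursive_add(a: str, b: str) -> str:
    """Add two digit strings by solving a smaller instance of the same
    problem (the prefixes) and combining with the last-digit sum."""
    if not a and not b:
        return ""
    a, b = a or "0", b or "0"
    digit = int(a[-1]) + int(b[-1])
    prefix = recursive_add(a[:-1], b[:-1])      # smaller subproblem
    if digit >= 10:                             # propagate the carry
        prefix = recursive_add(prefix, "1") if prefix else "1"
        digit -= 10
    return prefix + str(digit)

print(recursive_add("987", "145"))   # 1132
```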
2407.00866
Silver Linings in the Shadows: Harnessing Membership Inference for Machine Unlearning
With the continued advancement and widespread adoption of machine learning (ML) models across various domains, ensuring user privacy and data security has become a paramount concern. In compliance with data privacy regulations, such as GDPR, a secure machine learning framework should not only grant users the right to request the removal of their contributed data used for model training but also facilitate the elimination of sensitive data fingerprints within machine learning models to mitigate potential attacks - a process referred to as machine unlearning. In this study, we present a novel unlearning mechanism designed to effectively remove the impact of specific data samples from a neural network while considering the performance of the unlearned model on the primary task. In achieving this goal, we crafted a novel loss function tailored to eliminate privacy-sensitive information from the weights and activation values of the target model by combining the target classification loss and membership inference loss. Our adaptable framework can easily incorporate various privacy leakage approximation mechanisms to guide the unlearning process. We provide empirical evidence of the effectiveness of our unlearning approach with a theoretical upper-bound analysis through a membership inference mechanism as a proof of concept. Our results showcase the superior performance of our approach in terms of unlearning efficacy and latency as well as the fidelity of the primary task, across four datasets and four deep learning architectures.
http://arxiv.org/pdf/2407.00866v2
[ "Nexhi Sula", "Abhinav Kumar", "Jie Hou", "Han Wang", "Reza Tourani" ]
2024-07-05T18:01:16Z
2024-07-01T00:20:26Z
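A minimal sketch of the loss combination the entry above describes: a standard task loss on retained data plus a membership-inference term that pushes the forget samples' outputs toward looking like non-members. The `attack_model` mapping softmax outputs to a membership probability, and the weight `lam`, are hypothetical stand-ins for the paper's leakage-approximation mechanisms.

```python
import torch
import torch.nn as nn

def unlearning_loss(model, attack_model, forget_x, retain_x, retain_y, lam=1.0):
    """Combine the primary task loss on retained data with a
    membership-inference (MI) penalty on the forget samples."""
    task_loss = nn.functional.cross_entropy(model(retain_x), retain_y)
    probs = torch.softmax(model(forget_x), dim=1)
    member_p = attack_model(probs).squeeze(1)      # hypothetical P(member)
    mi_loss = member_p.mean()                      # drive toward "non-member"
    return task_loss + lam * mi_loss

model = nn.Linear(10, 3)
attack_model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(),
                             nn.Linear(8, 1), nn.Sigmoid())
loss = unlearning_loss(model, attack_model,
                       forget_x=torch.randn(4, 10),
                       retain_x=torch.randn(4, 10),
                       retain_y=torch.randint(0, 3, (4,)))
loss.backward()
```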
2407.04783
Agnostic Private Density Estimation via Stable List Decoding
We introduce a new notion of stability--which we call stable list decoding--and demonstrate its applicability in designing differentially private density estimators. This definition is weaker than global stability [ABLMM22] and is related to the notions of replicability [ILPS22] and list replicability [CMY23]. We show that if a class of distributions is stable list decodable, then it can be learned privately in the agnostic setting. As the main application of our framework, we prove the first upper bound on the sample complexity of private density estimation for Gaussian Mixture Models in the agnostic setting, extending the realizable result of Afzali et al. [AAL24].
http://arxiv.org/pdf/2407.04783v1
[ "Mohammad Afzali", "Hassan Ashtiani", "Christopher Liaw" ]
2024-07-05T18:00:22Z
2024-07-05T18:00:22Z
2405.14868
Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis
Accurate reconstruction of complex dynamic scenes from just a single viewpoint continues to be a challenging task in computer vision. Current dynamic novel view synthesis methods typically require videos from many different camera viewpoints, necessitating careful recording setups, and significantly restricting their utility in the wild as well as in terms of embodied AI applications. In this paper, we propose $\textbf{GCD}$, a controllable monocular dynamic view synthesis pipeline that leverages large-scale diffusion priors to, given a video of any scene, generate a synchronous video from any other chosen perspective, conditioned on a set of relative camera pose parameters. Our model does not require depth as input, and does not explicitly model 3D scene geometry, instead performing end-to-end video-to-video translation in order to achieve its goal efficiently. Despite being trained on synthetic multi-view video data only, zero-shot real-world generalization experiments show promising results in multiple domains, including robotics, object permanence, and driving environments. We believe our framework can potentially unlock powerful applications in rich dynamic scene understanding, perception for robotics, and interactive 3D video viewing experiences for virtual reality.
http://arxiv.org/pdf/2405.14868v2
[ "Basile Van Hoorick", "Rundi Wu", "Ege Ozguroglu", "Kyle Sargent", "Ruoshi Liu", "Pavel Tokmakov", "Achal Dave", "Changxi Zheng", "Carl Vondrick" ]
2024-07-05T17:59:57Z
2024-05-23T17:59:52Z
2206.05245
List-Decodable Sparse Mean Estimation via Difference-of-Pairs Filtering
We study the problem of list-decodable sparse mean estimation. Specifically, for a parameter $\alpha \in (0, 1/2)$, we are given $m$ points in $\mathbb{R}^n$, $\lfloor \alpha m \rfloor$ of which are i.i.d. samples from a distribution $D$ with unknown $k$-sparse mean $\mu$. No assumptions are made on the remaining points, which form the majority of the dataset. The goal is to return a small list of candidates containing a vector $\widehat{\mu}$ such that $\|\widehat{\mu} - \mu\|_2$ is small. Prior work had studied the problem of list-decodable mean estimation in the dense setting. In this work, we develop a novel, conceptually simpler technique for list-decodable mean estimation. As the main application of our approach, we provide the first sample and computationally efficient algorithm for list-decodable sparse mean estimation. In particular, for distributions with "certifiably bounded" $t$-th moments in $k$-sparse directions and sufficiently light tails, our algorithm achieves error of $(1/\alpha)^{O(1/t)}$ with sample complexity $m = (k\log(n))^{O(t)}/\alpha$ and running time $\mathrm{poly}(mn^t)$. For the special case of Gaussian inliers, our algorithm achieves the optimal error guarantee of $\Theta(\sqrt{\log(1/\alpha)})$ with quasi-polynomial sample and computational complexity. We complement our upper bounds with nearly-matching statistical query and low-degree polynomial testing lower bounds.
http://arxiv.org/pdf/2206.05245v2
[ "Ilias Diakonikolas", "Daniel M. Kane", "Sushrut Karmalkar", "Ankit Pensia", "Thanasis Pittas" ]
2024-07-05T17:57:31Z
2022-06-10T17:38:18Z
2407.04694
Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs
AI assistants such as ChatGPT are trained to respond to users by saying, "I am a large language model". This raises questions. Do such models know that they are LLMs and reliably act on this knowledge? Are they aware of their current circumstances, such as being deployed to the public? We refer to a model's knowledge of itself and its circumstances as situational awareness. To quantify situational awareness in LLMs, we introduce a range of behavioral tests, based on question answering and instruction following. These tests form the $\textbf{Situational Awareness Dataset (SAD)}$, a benchmark comprising 7 task categories and over 13,000 questions. The benchmark tests numerous abilities, including the capacity of LLMs to (i) recognize their own generated text, (ii) predict their own behavior, (iii) determine whether a prompt is from internal evaluation or real-world deployment, and (iv) follow instructions that depend on self-knowledge. We evaluate 16 LLMs on SAD, including both base (pretrained) and chat models. While all models perform better than chance, even the highest-scoring model (Claude 3 Opus) is far from a human baseline on certain tasks. We also observe that performance on SAD is only partially predicted by metrics of general knowledge (e.g. MMLU). Chat models, which are finetuned to serve as AI assistants, outperform their corresponding base models on SAD but not on general knowledge tasks. The purpose of SAD is to facilitate scientific understanding of situational awareness in LLMs by breaking it down into quantitative abilities. Situational awareness is important because it enhances a model's capacity for autonomous planning and action. While this has potential benefits for automation, it also introduces novel risks related to AI safety and control. Code and latest results available at https://situational-awareness-dataset.org.
http://arxiv.org/pdf/2407.04694v1
[ "Rudolf Laine", "Bilal Chughtai", "Jan Betley", "Kaivalya Hariharan", "Jeremy Scheurer", "Mikita Balesni", "Marius Hobbhahn", "Alexander Meinke", "Owain Evans" ]
2024-07-05T17:57:02Z
2024-07-05T17:57:02Z
2407.04690
Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks
Interpretability research takes counterfactual theories of causality for granted. Most causal methods rely on counterfactual interventions to inputs or the activations of particular model components, followed by observations of the change in models' output logits or behaviors. While this yields more faithful evidence than correlational methods, counterfactuals nonetheless have key problems that bias our findings in specific and predictable ways. Specifically, (i) counterfactual theories do not effectively capture multiple independently sufficient causes of the same effect, which leads us to miss certain causes entirely; and (ii) counterfactual dependencies in neural networks are generally not transitive, which complicates methods for extracting and interpreting causal graphs from neural networks. We discuss the implications of these challenges for interpretability researchers and propose concrete suggestions for future work.
http://arxiv.org/pdf/2407.04690v1
[ "Aaron Mueller" ]
2024-07-05T17:53:03Z
2024-07-05T17:53:03Z
2407.04681
Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge
In recent years, multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets, enabling them to generally understand images well. However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs, limiting their ability to answer questions requiring an understanding of detailed or localized visual elements. Drawing inspiration from the Retrieval-Augmented Generation (RAG) concept, this paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models (e.g., instance segmentation/OCR models), into MLLMs. This is a promising yet underexplored direction for enhancing MLLMs' performance. Our approach diverges from concurrent works, which transform external knowledge into additional text prompts, necessitating the model to indirectly learn the correspondence between visual content and text coordinates. Instead, we propose embedding fine-grained knowledge information directly into a spatial embedding map as a visual prompt. This design can be effortlessly incorporated into various MLLMs, such as LLaVA and Mipha, considerably improving their visual understanding performance. Through rigorous experiments, we demonstrate that our method can enhance MLLM performance across nine benchmarks, amplifying their fine-grained context-aware capabilities.
http://arxiv.org/pdf/2407.04681v1
[ "Yuanze Lin", "Yunsheng Li", "Dongdong Chen", "Weijian Xu", "Ronald Clark", "Philip Torr", "Lu Yuan" ]
2024-07-05T17:43:30Z
2024-07-05T17:43:30Z
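The spatial-embedding idea above admits a compact sketch: look up an embedding vector for each patch's external label (e.g., an instance-segmentation or OCR class) and add it to the vision tower's patch features, so the fine-grained knowledge arrives as a visual prompt rather than as extra text. Module names and shapes below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpatialKnowledgePrompt(nn.Module):
    """Embed per-patch external labels (from a segmentation/OCR model)
    directly into a spatial map added to the visual features."""
    def __init__(self, n_labels=32, dim=256):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, dim)   # one vector per label

    def forward(self, vis_feats, seg_map):
        # vis_feats: (B, H, W, dim) patch features from the vision tower
        # seg_map:   (B, H, W) integer instance/OCR label per patch
        return vis_feats + self.label_emb(seg_map)     # spatial visual prompt

prompt = SpatialKnowledgePrompt()
feats = torch.randn(1, 24, 24, 256)
seg = torch.randint(0, 32, (1, 24, 24))
print(prompt(feats, seg).shape)    # torch.Size([1, 24, 24, 256])
```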
2407.04678
XQSV: A Structurally Variable Network to Imitate Human Play in Xiangqi
In this paper, we introduce an innovative deep learning architecture, termed Xiangqi Structurally Variable (XQSV), designed to emulate the behavioral patterns of human players in Xiangqi, or Chinese Chess. The unique attribute of XQSV is its capacity to alter its structural configuration dynamically, optimizing performance for the task based on the particular subset of data on which it is trained. We have incorporated several design improvements to significantly enhance the network's predictive accuracy, including a local illegal move filter, an Elo range partitioning, a sequential one-dimensional input, and a simulation of imperfect memory capacity. Empirical evaluations reveal that XQSV attains a predictive accuracy of approximately 40%, with its performance peaking within the trained Elo range. This indicates the model's success in mimicking the play behavior of individuals within that specific range. A three-terminal Turing Test was employed to demonstrate that the XQSV model imitates human behavior more accurately than conventional Xiangqi engines, rendering it indistinguishable from actual human opponents. Given the inherent nondeterminism in human gameplay, we propose two supplementary relaxed evaluation metrics. To our knowledge, XQSV represents the first model to mimic Xiangqi players.
http://arxiv.org/pdf/2407.04678v1
[ "Chenliang Zhou" ]
2024-07-05T17:43:05Z
2024-07-05T17:43:05Z
2407.04760
SPINEX: Similarity-based Predictions with Explainable Neighbors Exploration for Anomaly and Outlier Detection
This paper presents a novel anomaly and outlier detection algorithm from the SPINEX (Similarity-based Predictions with Explainable Neighbors Exploration) family. This algorithm leverages the concept of similarity and higher-order interactions across multiple subspaces to identify outliers. A comprehensive set of experiments was conducted to evaluate the performance of SPINEX. This algorithm was examined against 21 commonly used anomaly detection algorithms, namely, Angle-Based Outlier Detection (ABOD), Connectivity-Based Outlier Factor (COF), Copula-Based Outlier Detection (COPOD), ECOD, Elliptic Envelope (EE), Feature Bagging with KNN, Gaussian Mixture Models (GMM), Histogram-based Outlier Score (HBOS), Isolation Forest (IF), Isolation Neural Network Ensemble (INNE), Kernel Density Estimation (KDE), K-Nearest Neighbors (KNN), Lightweight Online Detector of Anomalies (LODA), Linear Model Deviation-based Detector (LMDD), Local Outlier Factor (LOF), Minimum Covariance Determinant (MCD), One-Class SVM (OCSVM), Quadratic MCD (QMCD), Robust Covariance (RC), Stochastic Outlier Selection (SOS), and Subspace Outlier Detection (SOD), across 39 synthetic and real datasets from various domains and of a variety of dimensions and complexities. Furthermore, a complexity analysis was carried out to examine the complexity of the proposed algorithm. Our results demonstrate that SPINEX achieves superior performance, outperforms commonly used anomaly detection algorithms, and has moderate complexity (e.g., O(n log n d)). More specifically, SPINEX was found to rank at the top of algorithms on the synthetic datasets and 7th on the real datasets. Finally, a demonstration of the explainability capabilities of SPINEX, along with future research needs, is presented.
http://arxiv.org/pdf/2407.04760v1
[ "MZ Naser", "Ahmed Z Naser" ]
2024-07-05T17:42:09Z
2024-07-05T17:42:09Z
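A toy score in the spirit of the similarity-across-subspaces idea above: measure each point's average distance to its k nearest neighbours in many random feature subspaces and aggregate, so points that are far from their neighbours in most subspaces score as outliers. This illustrates the concept only, not SPINEX's actual scoring.

```python
import numpy as np

def subspace_knn_outlier_score(X, n_subspaces=10, subspace_dim=2, k=5,
                               rng=np.random.default_rng(0)):
    """Average k-NN distance across random feature subspaces; higher
    scores indicate points that are dissimilar to their neighbours in
    most subspaces (outliers)."""
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_subspaces):
        cols = rng.choice(d, size=min(subspace_dim, d), replace=False)
        Z = X[:, cols]
        dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)             # ignore self-distance
        knn = np.sort(dists, axis=1)[:, :k]         # k nearest per point
        scores += knn.mean(axis=1)
    return scores / n_subspaces

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 6)),            # inliers
               rng.normal(6.0, 1.0, size=(2, 6))])  # two planted outliers
s = subspace_knn_outlier_score(X)
print(np.argsort(s)[-2:])                           # indices of the outliers
```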
2206.03441
Robust Sparse Mean Estimation via Sum of Squares
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers. Prior work obtained sample and computationally efficient algorithms for this task for identity-covariance subgaussian distributions. In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance. For distributions on $\mathbb{R}^d$ with "certifiably bounded" $t$-th moments and sufficiently light tails, our algorithm achieves error of $O(\epsilon^{1-1/t})$ with sample complexity $m = (k\log(d))^{O(t)}/\epsilon^{2-2/t}$. For the special case of the Gaussian distribution, our algorithm achieves near-optimal error of $\tilde{O}(\epsilon)$ with sample complexity $m = O(k^4 \mathrm{polylog}(d))/\epsilon^2$. Our algorithms follow the Sum-of-Squares-based proofs-to-algorithms approach. We complement our upper bounds with Statistical Query and low-degree polynomial testing lower bounds, providing evidence that the sample-time-error tradeoffs achieved by our algorithms are qualitatively the best possible.
http://arxiv.org/pdf/2206.03441v2
[ "Ilias Diakonikolas", "Daniel M. Kane", "Sushrut Karmalkar", "Ankit Pensia", "Thanasis Pittas" ]
2024-07-05T17:40:00Z
2022-06-07T16:49:54Z