arxiv_id | title | abstract | link | authors | updated | published
---|---|---|---|---|---|---|
2407.00495
|
A Bayesian Solution To The Imitation Gap
|
In many real-world settings, an agent must learn to act in environments where no reward signal can be specified, but a set of expert demonstrations is available. Imitation learning (IL) is a popular framework for learning policies from such demonstrations. However, in some cases, differences in observability between the expert and the agent can give rise to an imitation gap such that the expert's policy is not optimal for the agent and a naive application of IL can fail catastrophically. In particular, if the expert observes the Markov state and the agent does not, then the expert will not demonstrate the information-gathering behavior needed by the agent but not the expert. In this paper, we propose a Bayesian solution to the Imitation Gap (BIG), first using the expert demonstrations, together with a prior specifying the cost of exploratory behavior that is not demonstrated, to infer a posterior over rewards with Bayesian inverse reinforcement learning (IRL). BIG then uses the reward posterior to learn a Bayes-optimal policy. Our experiments show that BIG, unlike IL, allows the agent to explore at test time when presented with an imitation gap, whilst still learning to behave optimally using expert demonstrations when no such gap exists.
|
http://arxiv.org/pdf/2407.00495v1
|
[
"Risto Vuorio",
"Mattie Fellows",
"Cong Lu",
"Clémence Grislain",
"Shimon Whiteson"
] |
2024-06-29T17:13:37Z
|
2024-06-29T17:13:37Z
|
2407.00494
|
Graph Neural Networks Gone Hogwild
|
Message passing graph neural networks (GNNs) would appear to be powerful tools to learn distributed algorithms via gradient descent, but generate catastrophically incorrect predictions when nodes update asynchronously during inference. This failure under asynchrony effectively excludes these architectures from many potential applications, such as learning local communication policies between resource-constrained agents in, e.g., robotic swarms or sensor networks. In this work we explore why this failure occurs in common GNN architectures, and identify "implicitly-defined" GNNs as a class of architectures which is provably robust to partially asynchronous "hogwild" inference, adapting convergence guarantees from work in asynchronous and distributed optimization, e.g., Bertsekas (1982); Niu et al. (2011). We then propose a novel implicitly-defined GNN architecture, which we call an energy GNN. We show that this architecture outperforms other GNNs from this class on a variety of synthetic tasks inspired by multi-agent systems, and achieves competitive performance on real-world datasets.
|
http://arxiv.org/pdf/2407.00494v1
|
[
"Olga Solodova",
"Nick Richardson",
"Deniz Oktay",
"Ryan P. Adams"
] |
2024-06-29T17:11:09Z
|
2024-06-29T17:11:09Z
|
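The Hogwild GNN entry above hinges on fixed-point ("implicitly-defined") architectures whose node updates form a contraction, so inference converges to the same answer whether nodes update synchronously or in an arbitrary asynchronous order. A minimal sketch of that intuition, with an illustrative random graph and a spectral-norm bound standing in for the paper's actual energy-GNN construction:

```python
# Hedged sketch (not the paper's energy GNN): if the per-node update is a
# contraction, the fixed point is the same under synchronous and random
# asynchronous ("hogwild") schedules. All names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4                                   # nodes, state dimension
A = (rng.random((n, n)) < 0.3).astype(float)  # random adjacency
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
deg = np.maximum(A.sum(1, keepdims=True), 1)
W = rng.standard_normal((d, d))
W /= max(1.0, np.linalg.norm(W, 2) / 0.9)     # enforce spectral norm < 1
x_in = rng.standard_normal((n, d))            # node input features

def update(h, i):
    """One node's fixed-point update: h_i = tanh(mean of neighbors @ W + x_i)."""
    msg = (A[i] @ h) / deg[i]
    return np.tanh(msg @ W + x_in[i])

def run(async_order: bool, sweeps: int = 200):
    h = np.zeros((n, d))
    for _ in range(sweeps):
        order = rng.permutation(n) if async_order else range(n)
        for i in order:
            h[i] = update(h, i)               # in-place: stale/fresh mix is fine
    return h

h_sync, h_async = run(False), run(True)
print("max deviation between schedules:", np.abs(h_sync - h_async).max())
```

The printed deviation sits at numerical-noise level, mirroring the Bertsekas-style asynchronous convergence guarantees the abstract cites.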
2407.00492
|
Fast Gibbs sampling for the local and global trend Bayesian exponential
smoothing model
|
In Smyl et al. [Local and global trend Bayesian exponential smoothing models. International Journal of Forecasting, 2024.], a generalised exponential smoothing model was proposed that is able to capture strong trends and volatility in time series. This method achieved state-of-the-art performance in many forecasting tasks, but its fitting procedure, which is based on the NUTS sampler, is very computationally expensive. In this work, we propose several modifications to the original model, as well as a bespoke Gibbs sampler for posterior exploration; these changes improve sampling time by an order of magnitude, thus rendering the model much more practically relevant. The new model, and sampler, are evaluated on the M3 dataset and are shown to be competitive, or superior, in terms of accuracy to the original method, while being substantially faster to run.
|
http://arxiv.org/pdf/2407.00492v1
|
[
"Xueying Long",
"Daniel F. Schmidt",
"Christoph Bergmeir",
"Slawek Smyl"
] |
2024-06-29T16:49:28Z
|
2024-06-29T16:49:28Z
|
2407.00490
|
Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian
Mixture Models
|
We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with $n>1$ components learns from data that are generated by a single ground truth Gaussian distribution. While results for the special case of 2-Gaussian mixtures are well-known, a general global convergence analysis for arbitrary $n$ remains unresolved and faces several new technical barriers since the convergence becomes sub-linear and non-monotonic. To address these challenges, we construct a novel likelihood-based convergence analysis framework and rigorously prove that gradient EM converges globally with a sublinear rate $O(1/\sqrt{t})$. This is the first global convergence result for Gaussian mixtures with more than $2$ components. The sublinear convergence rate is due to the algorithmic nature of learning over-parameterized GMM with gradient EM. We also identify a new emerging technical challenge for learning general over-parameterized GMM: the existence of bad local regions that can trap gradient EM for an exponential number of steps.
|
http://arxiv.org/pdf/2407.00490v1
|
[
"Weihang Xu",
"Maryam Fazel",
"Simon S. Du"
] |
2024-06-29T16:44:29Z
|
2024-06-29T16:44:29Z
|
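A toy illustration of the algorithm analyzed above: gradient EM for a 1-D over-parameterized GMM (uniform weights, unit variances) fit to data from a single Gaussian. The step size, initialization, and iteration count are assumptions for illustration only:

```python
# Hedged sketch of gradient EM: E-step responsibilities, then a gradient
# ascent step on the log-likelihood instead of the closed-form M-step.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(5000)       # ground truth: one Gaussian N(0, 1)
K, lr = 3, 0.5                      # over-parameterized: K > 1 components
mu = rng.uniform(-2, 2, size=K)     # component means (the learned parameters)

def log_pdf(x, m):                  # log N(x | m, 1), up to shared constants
    return -0.5 * (x[:, None] - m[None, :]) ** 2

for t in range(500):
    # E-step: responsibilities r[i, k] under uniform mixing weights 1/K.
    logp = log_pdf(X, mu)
    logp -= logp.max(axis=1, keepdims=True)          # numerical stability
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # Gradient of the mean log-likelihood: (1/N) sum_i r_ik (x_i - mu_k).
    grad = (r * (X[:, None] - mu[None, :])).mean(axis=0)
    mu += lr * grad

print("learned means:", np.round(mu, 3))  # all drift toward 0, sub-linearly
```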
2407.00482
|
Quantifying Spuriousness of Biased Datasets Using Partial Information
Decomposition
|
Spurious patterns refer to a mathematical association between two or more variables in a dataset that are not causally related. However, this notion of spuriousness, which is usually introduced due to sampling biases in the dataset, has classically lacked a formal definition. To address this gap, this work presents the first information-theoretic formalization of spuriousness in a dataset (given a split of spurious and core features) using a mathematical framework called Partial Information Decomposition (PID). Specifically, we disentangle the joint information content that the spurious and core features share about another target variable (e.g., the prediction label) into distinct components, namely unique, redundant, and synergistic information. We propose the use of unique information, with roots in Blackwell Sufficiency, as a novel metric to formally quantify dataset spuriousness and derive its desirable properties. We empirically demonstrate how higher unique information in the spurious features in a dataset could lead a model into choosing the spurious features over the core features for inference, often resulting in low worst-group accuracy. We also propose a novel autoencoder-based estimator for computing unique information that is able to handle high-dimensional image data. Finally, we also show how this unique information in the spurious feature is reduced across several dataset-based spurious-pattern-mitigation techniques such as data reweighting and varying levels of background mixing, demonstrating a novel tradeoff between unique information (spuriousness) and worst-group accuracy.
|
http://arxiv.org/pdf/2407.00482v1
|
[
"Barproda Halder",
"Faisal Hamman",
"Pasan Dissanayake",
"Qiuyi Zhang",
"Ilia Sucholutsky",
"Sanghamitra Dutta"
] |
2024-06-29T16:05:47Z
|
2024-06-29T16:05:47Z
|
2407.00478
|
Knowledge-Aware Parsimony Learning: A Perspective from Relational Graphs
|
The scaling law, a strategy that involves the brute-force scaling of the training dataset and learnable parameters, has become a prevalent approach for developing stronger learning models. In this paper, we examine its rationale in terms of learning from relational graphs. We demonstrate that directly adhering to such a scaling law does not necessarily yield stronger models due to architectural incompatibility and representation bottlenecks. To tackle this challenge, we propose a novel framework for learning from relational graphs via knowledge-aware parsimony learning. Our method draws inspiration from the duality between data and knowledge inherent in these graphs. Specifically, we first extract knowledge (like symbolic logic and physical laws) during the learning process, and then apply combinatorial generalization to the task at hand. This extracted knowledge serves as the "building blocks" for achieving parsimony learning. By applying this philosophy to architecture, parameters, and inference, we can effectively achieve versatile, sample-efficient, and interpretable learning. Experimental results show that our proposed framework surpasses methods that strictly follow the traditional scaling-up roadmap. This highlights the importance of incorporating knowledge in the development of next-generation learning technologies.
|
http://arxiv.org/pdf/2407.00478v1
|
[
"Quanming Yao",
"Yongqi Zhang",
"Yaqing Wang",
"Nan Yin",
"James Kwok",
"Qiang Yang"
] |
2024-06-29T15:52:37Z
|
2024-06-29T15:52:37Z
|
2407.00474
|
MH-pFLGB: Model Heterogeneous personalized Federated Learning via Global
Bypass for Medical Image Analysis
|
In the evolving application of medical artificial intelligence, federated learning is notable for its ability to protect training data privacy. Federated learning facilitates collaborative model development without the need to share local data from healthcare institutions. Yet, the statistical and system heterogeneity among these institutions poses substantial challenges, which affects the effectiveness of federated learning and hampers the exchange of information between clients. To address these issues, we introduce a novel approach, MH-pFLGB, which employs a global bypass strategy to mitigate the reliance on public datasets and navigate the complexities of non-IID data distributions. Our method enhances traditional federated learning by integrating a global bypass model, which not only shares information among the clients but also serves as part of each client's network to enhance performance. Additionally, MH-pFLGB provides a feature fusion module to better combine the local and global features. We validate the model's effectiveness and adaptability through extensive testing on different medical tasks, demonstrating superior performance compared to existing state-of-the-art methods.
|
http://arxiv.org/pdf/2407.00474v1
|
[
"Luyuan Xie",
"Manqing Lin",
"ChenMing Xu",
"Tianyu Luan",
"Zhipeng Zeng",
"Wenjun Qian",
"Cong Li",
"Yuejian Fang",
"Qingni Shen",
"Zhonghai Wu"
] |
2024-06-29T15:38:37Z
|
2024-06-29T15:38:37Z
|
2406.13791
|
IoT-Based Preventive Mental Health Using Knowledge Graphs and Standards
for Better Well-Being
|
Sustainable Development Goals (SDGs) give the UN a road map for development with Agenda 2030 as a target. SDG3 "Good Health and Well-Being" ensures healthy lives and promotes well-being for all ages. Digital technologies can support SDG3. Burnout and even depression could be reduced by encouraging better preventive health. Due to the lack of patient knowledge and focus to take care of their health, it is necessary to help patients before it is too late. New trends such as positive psychology and mindfulness are highly encouraged in the USA. A Digital Twin (DT) can help with the continuous monitoring of emotion using physiological signals (e.g., collected via wearables). Digital twins facilitate monitoring and provide constant health insight to improve quality of life and well-being with better personalization. Healthcare DT challenges include standardizing data formats, communication protocols, and data exchange mechanisms. To address those data integration and knowledge challenges, we designed the Mental Health Knowledge Graph (ontology and dataset) to boost mental health. The Knowledge Graph (KG) acquires knowledge from ontology-based mental health projects classified within the LOV4IoT ontology catalog (Emotion, Depression, and Mental Health). Furthermore, the KG is mapped to standards (e.g., ontologies) when possible. Standards from ETSI SmartM2M, ITU/WHO, ISO, W3C, NIST, and IEEE are relevant to mental health.
|
http://arxiv.org/pdf/2406.13791v2
|
[
"Amelie Gyrard",
"Seyedali Mohammadi",
"Manas Gaur",
"Antonio Kung"
] |
2024-06-29T15:29:56Z
|
2024-06-19T19:35:14Z
|
2407.00467
|
VcLLM: Video Codecs are Secretly Tensor Codecs
|
As the parameter size of large language models (LLMs) continues to expand, the need for a large memory footprint and high communication bandwidth has become a significant bottleneck for the training and inference of LLMs. To mitigate these bottlenecks, various tensor compression techniques have been proposed to reduce the data size, thereby alleviating memory requirements and communication pressure. Our research found that video codecs, despite being originally designed for compressing videos, show excellent efficiency when compressing various types of tensors. We demonstrate that video codecs can be versatile and general-purpose tensor codecs while achieving state-of-the-art compression efficiency in various tasks. We further make use of the hardware video encoding and decoding module available on GPUs to create a framework capable of both inference and training with video codecs repurposed as tensor codecs. This greatly reduces the requirement for memory capacity and communication bandwidth, enabling training and inference of large models on consumer-grade GPUs.
|
http://arxiv.org/pdf/2407.00467v1
|
[
"Ceyu Xu",
"Yongji Wu",
"Xinyu Yang",
"Beidi Chen",
"Matthew Lentz",
"Danyang Zhuo",
"Lisa Wu Wills"
] |
2024-06-29T15:24:33Z
|
2024-06-29T15:24:33Z
|
2407.00465
|
Characterizing Continual Learning Scenarios and Strategies for Audio
Analysis
|
Audio analysis is useful in many application scenarios. The state-of-the-art audio analysis approaches assume that the data distribution at training and deployment time will be the same. However, due to various real-life environmental factors, the data may encounter drift in its distribution or may encounter new classes in the future. Thus, a one-time trained model might not perform adequately. In this paper, we characterize continual learning (CL) approaches in audio analysis, intended to tackle catastrophic forgetting arising due to such drifts. As there is no CL dataset for audio analysis, we use DCASE 2020 to 2023 datasets to create various CL scenarios for audio-based monitoring tasks. We have investigated the following CL and non-CL approaches: EWC, LwF, SI, GEM, A-GEM, GDumb, Replay, Naive, cumulative, and joint training. This study benefits researchers and practitioners working in the area of audio analysis by informing the development of adaptive models. We observed that Replay achieved better results than other methods in the DCASE challenge data. It achieved an accuracy of 70.12% for the domain incremental scenario and an accuracy of 96.98% for the class incremental scenario.
|
http://arxiv.org/pdf/2407.00465v1
|
[
"Ruchi Bhatt",
"Pratibha Kumari",
"Dwarikanath Mahapatra",
"Abdulmotaleb El Saddik",
"Mukesh Saini"
] |
2024-06-29T15:21:20Z
|
2024-06-29T15:21:20Z
|
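A minimal sketch of the Replay strategy that the study above found strongest, assuming a reservoir-sampled buffer mixed into each training batch; `train_step` and the toy task stream are hypothetical placeholders:

```python
# Hedged sketch of Replay for continual learning: keep a small uniform sample
# of past (x, y) pairs and mix it into every batch of the current task. The
# buffer size and sampling scheme are assumptions, not the paper's exact setup.
import random

class ReplayBuffer:
    def __init__(self, capacity: int = 500):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        """Reservoir sampling keeps a uniform sample over everything seen."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))

buffer = ReplayBuffer()
for task_stream in [range(100), range(100, 200)]:   # two toy "tasks"
    for x in task_stream:
        y = x % 10                                   # dummy label
        batch = [(x, y)] + buffer.sample(3)          # current + replayed items
        # train_step(model, batch)  # hypothetical training call
        buffer.add(x, y)
print("buffer holds", len(buffer.data), "examples from", buffer.seen, "seen")
```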
2305.11322
|
Knowing When to Stop: Delay-Adaptive Spiking Neural Network Classifiers
with Reliability Guarantees
|
Spiking neural networks (SNNs) process time-series data via internal event-driven neural dynamics. The energy consumption of an SNN depends on the number of spikes exchanged between neurons over the course of the input presentation. Typically, decisions are produced after the entire input sequence has been processed. This results in latency and energy consumption levels that are fairly uniform across inputs. However, as explored in recent work, SNNs can produce an early decision when the SNN model is sufficiently "confident", adapting delay and energy consumption to the difficulty of each example. Existing techniques are based on heuristic measures of confidence that do not provide reliability guarantees, potentially exiting too early. In this paper, we introduce a novel delay-adaptive SNN-based inference methodology that, wrapping around any pre-trained SNN classifier, provides guaranteed reliability for the decisions produced at input-dependent stopping times. The approach, dubbed SpikeCP, leverages tools from conformal prediction (CP). It entails minimal complexity increase as compared to the underlying SNN, requiring only additional thresholding and counting operations at run time. SpikeCP is also extended to integrate a CP-aware training phase that targets delay performance. Variants of CP based on alternative confidence correction schemes, from Bonferroni to Simes, are explored, and extensive experiments are described using the MNIST-DVS data set, DVS128 Gesture dataset, and CIFAR-10 dataset.
|
http://arxiv.org/pdf/2305.11322v4
|
[
"Jiechen Chen",
"Sangwoo Park",
"Osvaldo Simeone"
] |
2024-06-29T15:11:10Z
|
2023-05-18T22:11:04Z
|
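A rough sketch of the conformal-prediction machinery that SpikeCP builds on, not the paper's SNN pipeline itself: calibration scores fix a threshold, per-class spike counts induce a prediction set, and inference halts once the set collapses to a single class. All scores, spike statistics, and the minimum-evidence guard below are synthetic assumptions:

```python
# Hedged sketch: conformal prediction set over spike counts with early exit.
import numpy as np

rng = np.random.default_rng(2)
alpha, n_cal, C = 0.1, 200, 4      # target miscoverage, calibration size, classes

# Synthetic calibration: nonconformity = fraction of spikes not for true class.
cal_scores = rng.beta(2, 8, size=n_cal)
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(cal_scores, min(level, 1.0))

def predict_set(spike_counts):
    """Classes whose nonconformity falls below the conformal threshold q."""
    scores = 1.0 - spike_counts / max(spike_counts.sum(), 1)
    return np.where(scores <= q)[0]

counts = np.zeros(C)
for t in range(1, 51):             # up to 50 timesteps of spikes
    counts[rng.choice(C, p=[0.7, 0.1, 0.1, 0.1])] += 1
    pred = predict_set(counts)
    # Stop once enough evidence has accumulated and the set is a singleton.
    if counts.sum() >= 10 and len(pred) == 1:
        print(f"stopped at t={t}, predicted class set: {pred}")
        break
else:
    print("processed full input; set:", predict_set(counts))
```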
2401.16251
|
Cross-silo Federated Learning with Record-level Personalized
Differential Privacy
|
Federated learning (FL) enhanced by differential privacy has emerged as a popular approach to better safeguard the privacy of client-side data by protecting clients' contributions during the training process. Existing solutions typically assume a uniform privacy budget for all records and provide one-size-fits-all solutions that may not be adequate to meet each record's privacy requirement. In this paper, we explore the uncharted territory of cross-silo FL with record-level personalized differential privacy. We devise a novel framework named rPDP-FL, employing a two-stage hybrid sampling scheme with both uniform client-level sampling and non-uniform record-level sampling to accommodate varying privacy requirements. A critical and non-trivial problem is how to determine the ideal per-record sampling probability $q$ given the personalized privacy budget $\varepsilon$. We introduce a versatile solution named Simulation-CurveFitting, allowing us to uncover a significant insight into the nonlinear correlation between $q$ and $\varepsilon$ and derive an elegant mathematical model to tackle the problem. Our evaluation demonstrates that our solution can provide significant performance gains over the baselines that do not consider personalized privacy preservation.
|
http://arxiv.org/pdf/2401.16251v3
|
[
"Junxu Liu",
"Jian Lou",
"Li Xiong",
"Jinfei Liu",
"Xiaofeng Meng"
] |
2024-06-29T14:58:30Z
|
2024-01-29T16:01:46Z
|
2407.00453
|
PerSEval: Assessing Personalization in Text Summarizers
|
Personalized summarization models cater to individuals' subjective understanding of saliency, as represented by their reading history and current topics of attention. Existing personalized text summarizers are primarily evaluated based on accuracy measures such as BLEU, ROUGE, and METEOR. However, a recent study argued that accuracy measures are inadequate for evaluating the degree of personalization of these models and proposed EGISES, the first metric to evaluate personalized text summaries. It was suggested that accuracy is a separate aspect and should be evaluated standalone. In this paper, we challenge the necessity of an accuracy leaderboard, suggesting that relying on accuracy-based aggregated results might lead to misleading conclusions. To support this, we delve deeper into EGISES, demonstrating both theoretically and empirically that it measures the degree of responsiveness, a necessary but not sufficient condition for degree-of-personalization. We subsequently propose PerSEval, a novel measure that satisfies the required sufficiency condition. Based on the benchmarking of ten SOTA summarization models on the PENS dataset, we empirically establish that -- (i) PerSEval is reliable w.r.t human-judgment correlation (Pearson's r = 0.73; Spearman's $\rho$ = 0.62; Kendall's $\tau$ = 0.42), (ii) PerSEval has high rank-stability, (iii) PerSEval as a rank-measure is not entailed by EGISES-based ranking, and (iv) PerSEval can be a standalone rank-measure without the need of any aggregated ranking.
|
http://arxiv.org/pdf/2407.00453v1
|
[
"Sourish Dasgupta",
"Ankush Chander",
"Parth Borad",
"Isha Motiyani",
"Tanmoy Chakraborty"
] |
2024-06-29T14:37:36Z
|
2024-06-29T14:37:36Z
|
2407.00452
|
KHNNs: hypercomplex neural networks computations via Keras using
TensorFlow and PyTorch
|
Neural networks used in computations with more advanced algebras than real numbers perform better in some applications. However, there is no general framework for constructing hypercomplex neural networks. We propose a library integrated with Keras that can do computations within TensorFlow and PyTorch. It provides Dense and Convolutional 1D, 2D, and 3D layer architectures.
|
http://arxiv.org/pdf/2407.00452v1
|
[
"Agnieszka Niemczynowicz",
"Radosław Antoni Kycia"
] |
2024-06-29T14:36:37Z
|
2024-06-29T14:36:37Z
|
2407.00449
|
Fully tensorial approach to hypercomplex neural networks
|
A fully tensorial theory of hypercomplex neural networks is given. The key point is to observe that the algebra multiplication can be represented as a rank-three tensor. This approach is attractive for neural network libraries that support effective tensorial operations.
|
http://arxiv.org/pdf/2407.00449v1
|
[
"Agnieszka Niemczynowicz",
"Radosław Antoni Kycia"
] |
2024-06-29T14:19:40Z
|
2024-06-29T14:19:40Z
|
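The key observation of the two hypercomplex entries above, that algebra multiplication is a rank-three tensor, can be made concrete with quaternions. Below, T[a, b, c] stores the coefficient of basis element e_c in the product e_a * e_b, so both scalar products and a dense hypercomplex layer become einsum contractions; the layer shapes are illustrative assumptions, not the KHNNs library API:

```python
# Hedged sketch: quaternion multiplication via a rank-3 structure tensor.
import numpy as np

T = np.zeros((4, 4, 4))                     # basis: e0=1, e1=i, e2=j, e3=k
for a in range(4):
    T[0, a, a] = T[a, 0, a] = 1             # 1 is the identity
for a in (1, 2, 3):
    T[a, a, 0] = -1                         # i^2 = j^2 = k^2 = -1
for a, b, c in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
    T[a, b, c], T[b, a, c] = 1, -1          # ij=k, jk=i, ki=j (anticommuting)

def qmul(p, q):
    """Quaternion product via the structure tensor."""
    return np.einsum("a,b,abc->c", p, q, T)

i, j = np.eye(4)[1], np.eye(4)[2]
print(qmul(i, j))                            # -> [0, 0, 0, 1], i.e. k

# A dense hypercomplex layer is one more contraction over the feature axis:
rng = np.random.default_rng(3)
x = rng.standard_normal((5, 4, 6))           # (batch, algebra dim, in features)
W = rng.standard_normal((4, 6, 2))           # (algebra dim, in, out features)
y = np.einsum("Bai,bio,abc->Bco", x, W, T)   # (batch, algebra dim, out)
print(y.shape)                               # (5, 4, 2)
```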
2309.13635
|
PanopticNDT: Efficient and Robust Panoptic Mapping
|
As the application scenarios of mobile robots are getting more complex and challenging, scene understanding becomes increasingly crucial. A mobile robot that is supposed to operate autonomously in indoor environments must have precise knowledge about what objects are present, where they are, what their spatial extent is, and how they can be reached; i.e., information about free space is also crucial. Panoptic mapping is a powerful instrument providing such information. However, building 3D panoptic maps with high spatial resolution is challenging on mobile robots, given their limited computing capabilities. In this paper, we propose PanopticNDT - an efficient and robust panoptic mapping approach based on occupancy normal distribution transform (NDT) mapping. We evaluate our approach on the publicly available datasets Hypersim and ScanNetV2. The results reveal that our approach can represent panoptic information at a higher level of detail than other state-of-the-art approaches while enabling real-time panoptic mapping on mobile robots. Finally, we prove the real-world applicability of PanopticNDT with qualitative results in a domestic application.
|
http://arxiv.org/pdf/2309.13635v2
|
[
"Daniel Seichter",
"Benedict Stephan",
"Söhnke Benedikt Fischedick",
"Steffen Müller",
"Leonard Rabes",
"Horst-Michael Gross"
] |
2024-06-29T14:18:59Z
|
2023-09-24T13:21:33Z
|
2406.00535
|
Causal Contrastive Learning for Counterfactual Regression Over Time
|
Estimating treatment effects over time holds significance in various domains, including precision medicine, epidemiology, economics, and marketing. This paper introduces a unique approach to counterfactual regression over time, emphasizing long-term predictions. Distinguishing itself from existing models like Causal Transformer, our approach highlights the efficacy of employing RNNs for long-term forecasting, complemented by Contrastive Predictive Coding (CPC) and Information Maximization (InfoMax). Emphasizing efficiency, we avoid the need for computationally expensive transformers. Leveraging CPC, our method captures long-term dependencies in the presence of time-varying confounders. Notably, recent models have disregarded the importance of invertible representation, compromising identification assumptions. To remedy this, we employ the InfoMax principle, maximizing a lower bound of mutual information between sequence data and its representation. Our method achieves state-of-the-art counterfactual estimation results using both synthetic and real-world data, marking the pioneering incorporation of Contrastive Predictive Coding in causal inference.
|
http://arxiv.org/pdf/2406.00535v2
|
[
"Mouad El Bouchattaoui",
"Myriam Tami",
"Benoit Lepetit",
"Paul-Henry Cournède"
] |
2024-06-29T14:14:04Z
|
2024-06-01T19:07:25Z
|
2406.10521
|
MALLM-GAN: Multi-Agent Large Language Model as Generative Adversarial
Network for Synthesizing Tabular Data
|
In the era of big data, access to abundant data is crucial for driving research forward. However, such data is often inaccessible due to privacy concerns or high costs, particularly in the healthcare domain. Generating synthetic (tabular) data can address this, but existing models typically require substantial amounts of data to train effectively, contradicting our objective to solve data scarcity. To address this challenge, we propose a novel framework to generate synthetic tabular data, powered by large language models (LLMs), that emulates the architecture of a Generative Adversarial Network (GAN). By incorporating the data generation process as contextual information and utilizing an LLM as the optimizer, our approach significantly enhances the quality of synthetic data generation in common scenarios with small sample sizes. Our experimental results on public and private datasets demonstrate that our model outperforms several state-of-the-art models in generating higher-quality synthetic data for downstream tasks while preserving the privacy of the real data.
|
http://arxiv.org/pdf/2406.10521v2
|
[
"Yaobin Ling",
"Xiaoqian Jiang",
"Yejin Kim"
] |
2024-06-29T13:48:12Z
|
2024-06-15T06:26:17Z
|
2404.09134
|
Generative AI Agents with Large Language Model for Satellite Networks
via a Mixture of Experts Transmission
|
In response to the needs of 6G global communications, satellite communication networks have emerged as a key solution. However, the large-scale development of satellite communication networks is constrained by complex system models, whose modeling is challenging for massive users. Moreover, transmission interference between satellites and users seriously affects communication performance. To solve these problems, this paper develops generative artificial intelligence (AI) agents for model formulation and then applies a mixture of experts (MoE) approach to design transmission strategies. Specifically, we leverage large language models (LLMs) to build an interactive modeling paradigm and utilize retrieval-augmented generation (RAG) to extract satellite expert knowledge that supports mathematical modeling. Afterward, by integrating the expertise of multiple specialized components, we propose an MoE-proximal policy optimization (PPO) approach to solve the formulated problem. Each expert optimizes, through specialized training of its own network, the optimization variables at which it excels; a gating network then aggregates the experts to perform joint optimization. The simulation results validate the accuracy and effectiveness of employing a generative agent for problem formulation. Furthermore, the superiority of the proposed MoE-PPO approach over other benchmarks is confirmed in solving the formulated problem. The adaptability of MoE-PPO to various customized modeling problems has also been demonstrated.
|
http://arxiv.org/pdf/2404.09134v2
|
[
"Ruichen Zhang",
"Hongyang Du",
"Yinqiu Liu",
"Dusit Niyato",
"Jiawen Kang",
"Zehui Xiong",
"Abbas Jamalipour",
"Dong In Kim"
] |
2024-06-29T13:41:36Z
|
2024-04-14T03:44:54Z
|
2402.03358
|
A Comprehensive Survey on Graph Reduction: Sparsification, Coarsening,
and Condensation
|
Many real-world datasets can be naturally represented as graphs, spanning a wide range of domains. However, the increasing complexity and size of graph datasets present significant challenges for analysis and computation. In response, graph reduction, or graph summarization, has gained prominence for simplifying large graphs while preserving essential properties. In this survey, we aim to provide a comprehensive understanding of graph reduction methods, including graph sparsification, graph coarsening, and graph condensation. Specifically, we establish a unified definition for these methods and introduce a hierarchical taxonomy to categorize the challenges they address. Our survey then systematically reviews the technical details of these methods and emphasizes their practical applications across diverse scenarios. Furthermore, we outline critical research directions to ensure the continued effectiveness of graph reduction techniques, as well as provide a comprehensive paper list at https://github.com/Emory-Melody/awesome-graph-reduction. We hope this survey will bridge literature gaps and propel the advancement of this promising field.
|
http://arxiv.org/pdf/2402.03358v4
|
[
"Mohammad Hashemi",
"Shengbo Gong",
"Juntong Ni",
"Wenqi Fan",
"B. Aditya Prakash",
"Wei Jin"
] |
2024-06-29T13:07:00Z
|
2024-01-29T01:19:09Z
|
2402.02361
|
Pruner: A Speculative Exploration Mechanism to Accelerate Tensor Program
Tuning
|
Tensor program tuning is essential for the efficient deployment of deep neural networks. Search-based approaches have demonstrated scalability and effectiveness in automatically finding high-performance programs for specific hardware. However, the search process is often inefficient, taking hours or even days to discover optimal programs due to the exploration mechanisms guided by an accurate but slow learned cost model. Meanwhile, the learned cost model trained on one platform cannot seamlessly adapt online to another, which we call cross-platform online unawareness. In this work, we propose Pruner and MoA-Pruner. Pruner is a speculative exploration mechanism that accelerates the search process using a "Draft-then-Verify" paradigm. Instead of applying the complex learned cost model to all explored candidates, Pruner drafts small-scale speculative candidates by introducing a naive symbol analyzer (draft model), then identifies the best candidates by the learned cost model. MoA-Pruner introduces Momentum online Adaptation to address the cross-platform online unawareness. We incorporate these techniques into Ansor and conduct extensive experiments on three GPU-based platforms. Results show that in online cost model tuning scenarios, Pruner and MoA-Pruner can achieve an average speedup of $2.6\times$ and $4.82\times$ compared to Ansor. In offline tuning scenarios, Pruner can achieve an average speedup of $4.75\times$ and $4.05\times$ compared to TenSet and TLP, respectively. The code is available at https://github.com/qiaolian9/Pruner.
|
http://arxiv.org/pdf/2402.02361v2
|
[
"Liang Qiao",
"Jun Shi",
"Xiaoyu Hao",
"Xi Fang",
"Minfan Zhao",
"Ziqi Zhu",
"Junshi Chen",
"Hong An",
"Bing Li",
"Honghui Yuan",
"Xinyang Wang",
"Xulong Tang"
] |
2024-06-29T12:57:39Z
|
2024-02-04T06:11:12Z
|
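A schematic of Pruner's "Draft-then-Verify" loop under stated assumptions: a cheap heuristic (standing in for the paper's symbol analyzer) drafts a small candidate set from a large pool, and only those drafts reach the expensive learned cost model. Both scoring functions here are hypothetical stand-ins:

```python
# Hedged sketch of a Draft-then-Verify search step; not Pruner's actual models.
import random

def draft_score(candidate):           # cheap heuristic (the "draft model")
    return -abs(candidate["tile"] - 64) - candidate["unroll"]

def learned_cost_model(candidate):    # expensive but accurate (stand-in)
    return -abs(candidate["tile"] - 48) * 1.5 - abs(candidate["unroll"] - 4)

pool = [{"tile": random.choice([16, 32, 48, 64, 128]),
         "unroll": random.choice([1, 2, 4, 8])} for _ in range(1000)]

# Draft: rank the whole pool cheaply, keep the top 5% as speculative candidates.
drafted = sorted(pool, key=draft_score, reverse=True)[: len(pool) // 20]
# Verify: run the slow model only on the drafted candidates.
best = max(drafted, key=learned_cost_model)
print("verified best:", best, "cost-model calls:", len(drafted))
```

The saving comes from replacing 1000 slow evaluations with 50, at the cost of trusting the draft ranking to keep good candidates in the pool.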
2407.00429
|
Time Series Clustering with General State Space Models via Stochastic
Variational Inference
|
In this paper, we propose a novel method of model-based time series clustering with mixtures of general state space models (MSSMs). Each component of MSSMs is associated with each cluster. An advantage of the proposed method is that it enables the use of time series models appropriate to the specific time series. This not only improves clustering and prediction accuracy but also enhances the interpretability of the estimated parameters. The parameters of the MSSMs are estimated using stochastic variational inference, a subtype of variational inference. The proposed method estimates the latent variables of an arbitrary state space model by using neural networks with a normalizing flow as a variational estimator. The number of clusters can be estimated using the Bayesian information criterion. In addition, to prevent MSSMs from converging to the local optimum, we propose several optimization tricks, including an additional penalty term called entropy annealing. Experiments on simulated datasets show that the proposed method is effective for clustering, parameter estimation, and estimating the number of clusters.
|
http://arxiv.org/pdf/2407.00429v1
|
[
"Ryoichi Ishizuka",
"Takashi Imai",
"Kaoru Kawamoto"
] |
2024-06-29T12:48:53Z
|
2024-06-29T12:48:53Z
|
2406.05612
|
Which Backbone to Use: A Resource-efficient Domain Specific Comparison
for Computer Vision
|
In contemporary computer vision applications, particularly image classification, architectural backbones pre-trained on large datasets like ImageNet are commonly employed as feature extractors. Despite the widespread use of these pre-trained convolutional neural networks (CNNs), there remains a gap in understanding the performance of various resource-efficient backbones across diverse domains and dataset sizes. Our study systematically evaluates multiple lightweight, pre-trained CNN backbones under consistent training settings across a variety of datasets, including natural images, medical images, galaxy images, and remote sensing images. This comprehensive analysis aims to aid machine learning practitioners in selecting the most suitable backbone for their specific problem, especially in scenarios involving small datasets where fine-tuning a pre-trained network is crucial. Even though attention-based architectures are gaining popularity, we observed that they tend to perform poorly in low-data fine-tuning tasks compared to CNNs. We also observed that some CNN architectures, such as ConvNeXt, RegNet, and EfficientNet, consistently perform well compared to others across a diverse set of domains. Our findings provide actionable insights into the performance trade-offs and effectiveness of different backbones, facilitating informed decision-making in model selection for a broad spectrum of computer vision domains. Our code is available here: https://github.com/pranavphoenix/Backbones
|
http://arxiv.org/pdf/2406.05612v2
|
[
"Pranav Jeevan",
"Amit Sethi"
] |
2024-06-29T12:26:42Z
|
2024-06-09T02:01:25Z
|
2407.00419
|
On the Complexity of Learning to Cooperate with Populations of Socially
Rational Agents
|
Artificially intelligent agents deployed in the real world will require the ability to reliably cooperate with humans (as well as other, heterogeneous AI agents). To provide formal guarantees of successful cooperation, we must make some assumptions about how partner agents could plausibly behave. Any realistic set of assumptions must account for the fact that other agents may be just as adaptable as our agent is. In this work, we consider the problem of cooperating with a population of agents in a finitely-repeated, two-player general-sum matrix game with private utilities. Two natural assumptions in such settings are that: 1) all agents in the population are individually rational learners, and 2) when any two members of the population are paired together, with high probability they will achieve at least the same utility as they would under some Pareto efficient equilibrium strategy. Our results first show that these assumptions alone are insufficient to ensure zero-shot cooperation with members of the target population. We therefore consider the problem of learning a strategy for cooperating with such a population using prior observations of its members interacting with one another. We provide upper and lower bounds on the number of samples needed to learn an effective cooperation strategy. Most importantly, we show that these bounds can be much stronger than those arising from a "naive" reduction of the problem to one of imitation learning.
|
http://arxiv.org/pdf/2407.00419v1
|
[
"Robert Loftin",
"Saptarashmi Bandyopadhyay",
"Mustafa Mert Çelikok"
] |
2024-06-29T11:59:52Z
|
2024-06-29T11:59:52Z
|
2407.00418
|
eFontes. Part of Speech Tagging and Lemmatization of Medieval Latin
Texts. A Cross-Genre Survey
|
This study introduces the eFontes models for automatic linguistic annotation of Medieval Latin texts, focusing on lemmatization, part-of-speech tagging, and morphological feature determination. Using the Transformers library, these models were trained on Universal Dependencies (UD) corpora and the newly developed eFontes corpus of Polish Medieval Latin. The research evaluates the models' performance, addressing challenges such as orthographic variations and the integration of Latinized vernacular terms. The models achieved high accuracy rates: lemmatization at 92.60%, part-of-speech tagging at 83.29%, and morphological feature determination at 88.57%. The findings underscore the importance of high-quality annotated corpora and propose future enhancements, including extending the models to Named Entity Recognition.
|
http://arxiv.org/pdf/2407.00418v1
|
[
"Krzysztof Nowak",
"Jędrzej Ziębura",
"Krzysztof Wróbel",
"Aleksander Smywiński-Pohl"
] |
2024-06-29T11:59:20Z
|
2024-06-29T11:59:20Z
|
2407.00411
|
Explainability of Machine Learning Models under Missing Data
|
Missing data is a prevalent issue that can significantly impair model performance and interpretability. This paper briefly summarizes the development of the field of missing data with respect to Explainable Artificial Intelligence and experimentally investigates the effects of various imputation methods on the calculation of Shapley values, a popular technique for interpreting complex machine learning models. We compare different imputation strategies and assess their impact on feature importance and interaction as determined by Shapley values. Moreover, we also theoretically analyze the effects of missing values on Shapley values. Importantly, our findings reveal that the choice of imputation method can introduce biases that could lead to changes in the Shapley values, thereby affecting the interpretability of the model. Moreover, a lower test prediction mean squared error (MSE) does not necessarily imply a lower MSE in Shapley values, and vice versa. Also, while XGBoost can handle missing data directly, doing so can seriously affect interpretability compared to imputing the data before training XGBoost. This study provides a comprehensive evaluation of imputation methods in the context of model interpretation, offering practical guidance for selecting appropriate techniques based on dataset characteristics and analysis objectives. The results underscore the importance of considering imputation effects to ensure robust and reliable insights from machine learning models.
|
http://arxiv.org/pdf/2407.00411v1
|
[
"Tuan L. Vo",
"Thu Nguyen",
"Hugo L. Hammer",
"Michael A. Riegler",
"Pal Halvorsen"
] |
2024-06-29T11:31:09Z
|
2024-06-29T11:31:09Z
|
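A small worked example of the experimental question above: impute the same incomplete data two ways, fit a model, and compare Shapley values. The data, model, and Monte Carlo estimator are illustrative assumptions (the estimator is exact for the linear model used here), not the paper's setup:

```python
# Hedged sketch: how the imputation choice shifts Shapley values.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 3))
y = 2 * X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(300)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan        # knock out 20% of the values

def shapley(model, x, background, n_perm=200):
    """Permutation estimate of Shapley values at one point x. Unrevealed
    features are held at the background mean; for a linear model this equals
    averaging predictions over the background, so the estimate is exact."""
    d, phi = len(x), np.zeros(len(x))
    for _ in range(n_perm):
        z = background.mean(0).copy()
        prev = model.predict(z[None, :])[0]
        for j in rng.permutation(d):
            z[j] = x[j]                           # reveal feature j
            cur = model.predict(z[None, :])[0]
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

for strategy in ("mean", "median"):
    Xi = SimpleImputer(strategy=strategy).fit_transform(X_miss)
    model = LinearRegression().fit(Xi, y)
    print(strategy, "imputation -> Shapley at row 0:",
          np.round(shapley(model, Xi[0], Xi), 3))
```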
2407.04730
|
The OPS-SAT benchmark for detecting anomalies in satellite telemetry
|
Detecting anomalous events in satellite telemetry is a critical task in space operations. This task, however, is extremely time-consuming, error-prone and human-dependent, thus automated data-driven anomaly detection algorithms have been emerging at a steady pace. However, there are no publicly available datasets of real satellite telemetry accompanied by ground-truth annotations that could be used to train and verify anomaly detection supervised models. In this article, we address this research gap and introduce the AI-ready benchmark dataset (OPSSAT-AD) containing the telemetry data acquired on board OPS-SAT -- a CubeSat mission operated by the European Space Agency that came to an end during the night of 22--23 May 2024 (CEST). The dataset is accompanied by the baseline results obtained using 30 supervised and unsupervised classic and deep machine learning algorithms for anomaly detection. They were trained and validated using the training-test dataset split introduced in this work, and we present a suggested set of quality metrics which should always be calculated when confronting new anomaly detection algorithms with OPSSAT-AD. We believe that this work may become an important step toward building a fair, reproducible and objective validation procedure that can be used to quantify the capabilities of emerging anomaly detection techniques in an unbiased and fully transparent way.
|
http://arxiv.org/pdf/2407.04730v1
|
[
"Bogdan Ruszczak",
"Krzysztof Kotowski",
"David Evans",
"Jakub Nalepa"
] |
2024-06-29T11:12:22Z
|
2024-06-29T11:12:22Z
|
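A hedged sketch of the kind of unsupervised baseline benchmarked above, using scikit-learn's IsolationForest on windowed telemetry with a chronological split; the synthetic channel, window size, and flagging threshold are assumptions, since the real OPSSAT-AD benchmark ships its own split and annotations:

```python
# Hedged sketch: windowed IsolationForest baseline for telemetry anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(5)
t = np.arange(2000)
telemetry = np.sin(t / 50) + 0.05 * rng.standard_normal(2000)
telemetry[1500:1510] += 1.5                      # injected anomaly in test part

# Windowed features, split chronologically (train on the past only).
win, split = 16, 1000
Xw = np.lib.stride_tricks.sliding_window_view(telemetry, win)
model = IsolationForest(random_state=0).fit(Xw[:split])
scores = -model.score_samples(Xw[split:])        # higher = more anomalous
flag = scores > np.quantile(scores, 0.99)
print("flagged windows start near t =", split + np.where(flag)[0][:5])
```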
2407.00401
|
PUZZLES: A Benchmark for Neural Algorithmic Reasoning
|
Algorithmic reasoning is a fundamental cognitive ability that plays a pivotal role in problem-solving and decision-making processes. Reinforcement Learning (RL) has demonstrated remarkable proficiency in tasks such as motor control, handling perceptual input, and managing stochastic environments. These advancements have been enabled in part by the availability of benchmarks. In this work we introduce PUZZLES, a benchmark based on Simon Tatham's Portable Puzzle Collection, aimed at fostering progress in algorithmic and logical reasoning in RL. PUZZLES contains 40 diverse logic puzzles of adjustable sizes and varying levels of complexity; many puzzles also feature a diverse set of additional configuration parameters. The 40 puzzles provide detailed information on the strengths and generalization capabilities of RL agents. Furthermore, we evaluate various RL algorithms on PUZZLES, providing baseline comparisons and demonstrating the potential for future research. All the software, including the environment, is available at https://github.com/ETH-DISCO/rlp.
|
http://arxiv.org/pdf/2407.00401v1
|
[
"Benjamin Estermann",
"Luca A. Lanzendörfer",
"Yannick Niedermayr",
"Roger Wattenhofer"
] |
2024-06-29T11:02:05Z
|
2024-06-29T11:02:05Z
|
2407.00397
|
Markovian Gaussian Process: A Universal State-Space Representation for
Stationary Temporal Gaussian Process
|
Gaussian Processes (GPs) and Linear Dynamical Systems (LDSs) are essential time series and dynamic system modeling tools. GPs can handle complex, nonlinear dynamics but are computationally demanding, while LDSs offer efficient computation but lack the expressive power of GPs. To combine their benefits, we introduce a universal method that allows an LDS to mirror stationary temporal GPs. This state-space representation, known as the Markovian Gaussian Process (Markovian GP), leverages the flexibility of kernel functions while maintaining efficient linear computation. Unlike existing GP-LDS conversion methods, which require separability for most multi-output kernels, our approach works universally for single- and multi-output stationary temporal kernels. We evaluate our method by computing covariance, performing regression tasks, and applying it to a neuroscience application, demonstrating that our method provides an accurate state-space representation for stationary temporal GPs.
|
http://arxiv.org/pdf/2407.00397v1
|
[
"Weihan Li",
"Yule Wang",
"Chengrui Li",
"Anqi Wu"
] |
2024-06-29T10:50:23Z
|
2024-06-29T10:50:23Z
|
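A concrete instance of the GP-to-LDS correspondence discussed above is the classical Matérn-3/2 state-space model (Hartikainen & Särkkä, 2010), shown here as a sanity check rather than the paper's more general construction; the hyperparameters are arbitrary:

```python
# Hedged sketch: an LDS that mirrors a stationary Matern-3/2 temporal GP.
import numpy as np
from scipy.linalg import expm

sigma2, ell = 1.0, 0.7
lam = np.sqrt(3.0) / ell

# SDE view: dx = F x dt + noise, observed through H; P_inf is the
# stationary state covariance of the two-dimensional state (f, f').
F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
H = np.array([[1.0, 0.0]])
P_inf = np.diag([sigma2, lam**2 * sigma2])

delta = 0.3
A = expm(F * delta)                        # discrete transition over lag delta
Q = P_inf - A @ P_inf @ A.T                # process noise keeping the LDS stationary

lds_cov = (H @ A @ P_inf @ H.T).item()     # cov(f(t + delta), f(t)) via the LDS
kernel = sigma2 * (1 + lam * delta) * np.exp(-lam * delta)
print(lds_cov, kernel)                     # the two values agree
```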
2407.00388
|
Weighted mesh algorithms for general Markov decision processes:
Convergence and tractability
|
We introduce a mesh-type approach for tackling discrete-time, finite-horizon Markov Decision Processes (MDPs) characterized by state and action spaces that are general, encompassing both finite and infinite (yet suitably regular) subsets of Euclidean space. In particular, for bounded state and action spaces, our algorithm achieves a computational complexity that is tractable in the sense of Novak and Wozniakowski, and is polynomial in the time horizon. For unbounded state spaces, the algorithm is "semi-tractable" in the sense that the complexity is proportional to $\epsilon^{-c}$ with some dimension-independent $c \geq 2$, for achieving an accuracy $\epsilon$, and polynomial in the time horizon with degree linear in the underlying dimension. As such the proposed approach has some flavor of the randomization method by Rust, which deals with infinite-horizon MDPs and uniform sampling in a compact state space. However, the present approach is essentially different due to the finite horizon and a simulation procedure suited to general transition distributions, and is more general in the sense that it encompasses unbounded state spaces. To demonstrate the effectiveness of our algorithm, we provide illustrations based on Linear-Quadratic Gaussian (LQG) control problems.
|
http://arxiv.org/pdf/2407.00388v1
|
[
"Denis Belomestny",
"John Schoenmakers"
] |
2024-06-29T10:08:23Z
|
2024-06-29T10:08:23Z
|
2407.00383
|
FANFOLD: Graph Normalizing Flows-driven Asymmetric Network for
Unsupervised Graph-Level Anomaly Detection
|
Unsupervised graph-level anomaly detection (UGAD) has attracted increasing interest due to its widespread application. In recent studies, knowledge distillation-based methods have been widely used in unsupervised anomaly detection to improve model efficiency and generalization. However, the inherent symmetry between the source (teacher) and target (student) networks typically results in consistent outputs across both architectures, making it difficult to distinguish abnormal graphs from normal graphs. Also, existing methods mainly rely on graph features to distinguish anomalies, which may be unstable with complex and diverse data and fail to capture the essence that differentiates normal graphs from abnormal ones. In this work, we propose a Graph Normalizing Flows-driven Asymmetric Network for Unsupervised Graph-Level Anomaly Detection (FANFOLD for short). We introduce normalizing flows to unsupervised graph-level anomaly detection due to their successful application and superior quality in learning the underlying distribution of samples. Specifically, we adopt the knowledge distillation technique and apply normalizing flows on the source network, achieving the asymmetric network. In the training stage, FANFOLD transforms the original distribution of normal graphs to a standard normal distribution. During inference, FANFOLD computes the anomaly score using the source-target loss to discriminate between normal and anomalous graphs. We conduct extensive experiments on 15 datasets of different fields with 9 baseline methods to validate the superiority of FANFOLD.
|
http://arxiv.org/pdf/2407.00383v1
|
[
"Rui Cao",
"Shijie Xue",
"Jindong Li",
"Qi Wang",
"Yi Chang"
] |
2024-06-29T09:49:16Z
|
2024-06-29T09:49:16Z
|
2401.16594
|
Consistent algorithms for multi-label classification with macro-at-$k$
metrics
|
We consider the optimization of complex performance metrics in multi-label classification under the population utility framework. We mainly focus on metrics linearly decomposable into a sum of binary classification utilities applied separately to each label with an additional requirement of exactly $k$ labels predicted for each instance. These "macro-at-$k$" metrics possess desired properties for extreme classification problems with long tail labels. Unfortunately, the at-$k$ constraint couples the otherwise independent binary classification tasks, leading to a much more challenging optimization problem than standard macro-averages. We provide a statistical framework to study this problem, prove the existence and the form of the optimal classifier, and propose a statistically consistent and practical learning algorithm based on the Frank-Wolfe method. Interestingly, our main results concern even more general metrics being non-linear functions of label-wise confusion matrices. Empirical results provide evidence for the competitive performance of the proposed approach.
|
http://arxiv.org/pdf/2401.16594v3
|
[
"Erik Schultheis",
"Wojciech Kotłowski",
"Marek Wydmuch",
"Rohit Babbar",
"Strom Borman",
"Krzysztof Dembczyński"
] |
2024-06-29T09:44:20Z
|
2024-01-29T21:51:27Z
|
2006.16202
|
Partitioned Least Squares
|
In this paper we propose a variant of the linear least squares model allowing practitioners to partition the input features into groups of variables that they require to contribute similarly to the final result. The output allows practitioners to assess the importance of each group and of each variable in the group. We formally show that the new formulation is not convex and provide two alternative methods to deal with the problem: one non-exact method based on an alternating least squares approach; and one exact method based on a reformulation of the problem using an exponential number of sub-problems whose minimum is guaranteed to be the optimal solution. We formally show the correctness of the exact method and also compare the two solutions showing that the exact solution provides better results in a fraction of the time required by the alternating least squares solution (assuming that the number of partitions is small). For the sake of completeness, we also provide an alternative branch and bound algorithm that can be used in place of the exact method when the number of partitions is too large, and a proof of NP-completeness of the optimization problem introduced in this paper.
|
http://arxiv.org/pdf/2006.16202v2
|
[
"Roberto Esposito",
"Mattia Cerrato",
"Marco Locatelli"
] |
2024-06-29T09:40:27Z
|
2020-06-29T17:10:32Z
|
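A simplified, unconstrained sketch of the alternating least squares method described above for the partitioned model y ~ sum_k beta_k * (X_k @ alpha_k): solve for the group weights beta with the within-group weights alpha fixed, then swap. The paper's actual formulation additionally normalizes and constrains the within-group weights; the data and partition here are illustrative:

```python
# Hedged sketch: alternating least squares over a feature partition.
import numpy as np

rng = np.random.default_rng(6)
groups = [slice(0, 3), slice(3, 7), slice(7, 10)]    # feature partition
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.05 * rng.standard_normal(200)

alpha = [np.ones(g.stop - g.start) for g in groups]
beta = np.ones(len(groups))
for _ in range(50):
    # Solve for beta with alpha fixed: ordinary LS on the group scores.
    Z = np.column_stack([X[:, g] @ a for g, a in zip(groups, alpha)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    # Solve for alpha with beta fixed: LS on beta-scaled raw features.
    Xs = np.column_stack([beta[k] * X[:, g] for k, g in enumerate(groups)])
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    alpha = [w[g] for g in groups]           # slices align with column order

resid = y - Z @ beta
print("group weights:", np.round(beta, 3),
      "RMSE:", np.sqrt((resid ** 2).mean()).round(4))
```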
2407.00371
|
Axiomatization of Gradient Smoothing in Neural Networks
|
Gradients play a pivotal role in neural network explanation. The inherent high dimensionality and structural complexity of neural networks result in the original gradients containing a significant amount of noise. While several approaches have been proposed to reduce noise via smoothing, there is little discussion of the rationale behind smoothing gradients in neural networks. In this work, we propose a theoretical framework for gradient smoothing in neural networks based on function mollification and Monte Carlo integration. The framework intrinsically axiomatizes gradient smoothing and reveals the rationale of existing methods. Furthermore, we provide an approach to design new smoothing methods derived from the framework. By experimentally evaluating several newly designed smoothing methods, we demonstrate the research potential of our framework.
|
http://arxiv.org/pdf/2407.00371v1
|
[
"Linjiang Zhou",
"Xiaochuan Shi",
"Chao Ma",
"Zepeng Wang"
] |
2024-06-29T08:43:38Z
|
2024-06-29T08:43:38Z
|
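The mollification-plus-Monte-Carlo view above reduces, in its simplest form, to averaging gradients over Gaussian perturbations (the SmoothGrad special case such a framework recovers). A toy sketch with an assumed one-dimensional "network":

```python
# Hedged sketch: gradient smoothing as Monte Carlo mollification.
import numpy as np

rng = np.random.default_rng(7)

def f_grad(x):
    """Gradient of a noisy toy 'network' f(x) = 0.05*sin(10x) + x**2."""
    return 0.5 * np.cos(10 * x) + 2 * x   # high-frequency term acts as noise

def smoothed_grad(x, sigma=0.3, n_samples=500):
    """Monte Carlo estimate of the mollified gradient (f * gaussian_sigma)'(x)."""
    eps = rng.normal(0.0, sigma, size=n_samples)
    return f_grad(x + eps).mean()

x0 = 0.8
print("raw gradient:     ", f_grad(x0))
print("smoothed gradient:", smoothed_grad(x0))   # close to 2*x0, noise damped
```

Analytically, the oscillatory term is damped by exp(-50 * sigma**2) under Gaussian mollification, which is why the smoothed estimate hugs the informative 2*x0 component.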
2211.07484
|
Contextual Bandits with Packing and Covering Constraints: A Modular
Lagrangian Approach via Regression
|
We consider contextual bandits with linear constraints (CBwLC), a variant of contextual bandits in which the algorithm consumes multiple resources subject to linear constraints on total consumption. This problem generalizes contextual bandits with knapsacks (CBwK), allowing for packing and covering constraints, as well as positive and negative resource consumption. We provide the first algorithm for CBwLC (or CBwK) that is based on regression oracles. The algorithm is simple, computationally efficient, and statistically optimal under mild assumptions. Further, we provide the first vanishing-regret guarantees for CBwLC (or CBwK) that extend beyond the stochastic environment. We side-step strong impossibility results from prior work by identifying a weaker (and, arguably, fairer) benchmark to compare against. Our algorithm builds on LagrangeBwK (Immorlica et al., FOCS 2019), a Lagrangian-based technique for CBwK, and SquareCB (Foster and Rakhlin, ICML 2020), a regression-based technique for contextual bandits. Our analysis leverages the inherent modularity of both techniques.
|
http://arxiv.org/pdf/2211.07484v5
|
[
"Aleksandrs Slivkins",
"Xingyu Zhou",
"Karthik Abinav Sankararaman",
"Dylan J. Foster"
] |
2024-06-29T08:39:56Z
|
2022-11-14T16:08:44Z
|
2407.00356
|
Enhancing Accuracy and Parameter-Efficiency of Neural Representations
for Network Parameterization
|
In this work, we investigate the fundamental trade-off regarding accuracy and parameter efficiency in the parameterization of neural network weights using predictor networks. We present a surprising finding that, when recovering the original model accuracy is the sole objective, it can be achieved effectively through the weight reconstruction objective alone. Additionally, we explore the underlying factors for improving weight reconstruction under parameter-efficiency constraints, and propose a novel training scheme that decouples the reconstruction objective from auxiliary objectives such as knowledge distillation that leads to significant improvements compared to state-of-the-art approaches. Finally, these results pave way for more practical scenarios, where one needs to achieve improvements on both model accuracy and predictor network parameter-efficiency simultaneously.
|
http://arxiv.org/pdf/2407.00356v1
|
[
"Hongjun Choi",
"Jayaraman J. Thiagarajan",
"Ruben Glatt",
"Shusen Liu"
] |
2024-06-29T08:07:39Z
|
2024-06-29T08:07:39Z
|
2202.08465
|
End-to-End Training for Back-Translation with Categorical
Reparameterization Trick
|
Back-translation (BT) is an effective semi-supervised learning framework in neural machine translation (NMT). A pre-trained NMT model translates monolingual sentences and makes synthetic bilingual sentence pairs for the training of the other NMT model, and vice versa. Understanding the two NMT models as inference and generation models, respectively, the training method of variational auto-encoder (VAE) was applied in previous works, which is a mainstream framework of generative models. However, the discrete property of translated sentences prevents gradient information from flowing between the two NMT models. In this paper, we propose the categorical reparameterization trick (CRT) that makes NMT models generate differentiable sentences so that the VAE's training framework can work in an end-to-end fashion. Our BT experiment conducted on a WMT benchmark dataset demonstrates the superiority of our proposed CRT compared to the Gumbel-softmax trick, which is a popular reparameterization method for categorical variables. Moreover, our experiments conducted on multiple WMT benchmark datasets demonstrate that our proposed end-to-end training framework is effective in terms of BLEU scores not only compared to its counterpart baseline which is not trained in an end-to-end fashion, but also compared to other previous BT works. The code is available online.
|
http://arxiv.org/pdf/2202.08465v4
|
[
"DongNyeong Heo",
"Heeyoul Choi"
] |
2024-06-29T08:00:04Z
|
2022-02-17T06:31:03Z
|
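For orientation, a sketch of the Gumbel-softmax baseline that the abstract compares against (not the paper's proposed categorical reparameterization trick). Relaxed one-hot samples remain differentiable with respect to the logits, so gradients can cross the discrete sampling step:

```python
# Hedged sketch of the Gumbel-softmax trick, the baseline named above.
import torch

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 0.5):
    """Draw a relaxed one-hot sample from Categorical(softmax(logits))."""
    u = torch.rand_like(logits).clamp_min(1e-9)
    gumbel = -torch.log(-torch.log(u))         # standard Gumbel noise
    return torch.softmax((logits + gumbel) / tau, dim=-1)

logits = torch.tensor([1.0, 0.2, -0.5], requires_grad=True)
sample = gumbel_softmax_sample(logits)
loss = (sample * torch.tensor([0.0, 1.0, 2.0])).sum()  # toy downstream loss
loss.backward()
print("relaxed sample:", sample.detach(), "\ngrad wrt logits:", logits.grad)
```

Lower temperatures tau push the sample toward a hard one-hot vector at the cost of higher-variance gradients, which is the trade-off end-to-end BT training has to manage.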
2305.02449
|
Bayesian Safety Validation for Failure Probability Estimation of
Black-Box Systems
|
Estimating the probability of failure is an important step in the certification of safety-critical systems. Efficient estimation methods are often needed due to the challenges posed by high-dimensional input spaces, risky test scenarios, and computationally expensive simulators. This work frames the problem of black-box safety validation as a Bayesian optimization problem and introduces a method that iteratively fits a probabilistic surrogate model to efficiently predict failures. The algorithm is designed to search for failures, compute the most-likely failure, and estimate the failure probability over an operating domain using importance sampling. We introduce three acquisition functions that aim to reduce uncertainty by covering the design space, optimize the analytically derived failure boundaries, and sample the predicted failure regions. Results show this Bayesian safety validation approach provides a more accurate estimate of failure probability with orders of magnitude fewer samples and performs well across various safety validation metrics. We demonstrate this approach on three test problems, a stochastic decision making system, and a neural network-based runway detection system. This work is open sourced (https://github.com/sisl/BayesianSafetyValidation.jl) and currently being used to supplement the FAA certification process of the machine learning components for an autonomous cargo aircraft.
|
http://arxiv.org/abs/2305.02449v2
|
[
"Robert J. Moss",
"Mykel J. Kochenderfer",
"Maxime Gariel",
"Arthur Dubois"
] |
2024-06-29T07:43:38Z
|
2023-05-03T22:22:48Z
|
2405.17902
|
Boosting Protein Language Models with Negative Sample Mining
|
We introduce a pioneering methodology for boosting large language models in the domain of protein representation learning. Our primary contribution lies in the refinement process for correlating the over-reliance on co-evolution knowledge, in a way that networks are trained to distill invaluable insights from negative samples, constituted by protein pairs sourced from disparate categories. By capitalizing on this novel approach, our technique steers the training of transformer-based models within the attention score space. This advanced strategy not only amplifies performance but also reflects the nuanced biological behaviors exhibited by proteins, offering aligned evidence with traditional biological mechanisms such as protein-protein interaction. We experimentally observed improved performance on various tasks over datasets, on top of several well-established large protein models. This innovative paradigm opens up promising horizons for further progress in the realms of protein research and computational biology.
|
http://arxiv.org/pdf/2405.17902v2
|
[
"Yaoyao Xu",
"Xinjian Zhao",
"Xiaozhuang Song",
"Benyou Wang",
"Tianshu Yu"
] |
2024-06-29T07:07:49Z
|
2024-05-28T07:24:20Z
|
2407.00337
|
WgLaSDI: Weak-Form Greedy Latent Space Dynamics Identification
|
The parametric greedy latent space dynamics identification (gLaSDI) framework has demonstrated promising potential for accurate and efficient modeling of high-dimensional nonlinear physical systems. However, it remains challenging to handle noisy data. To enhance robustness against noise, we incorporate the weak-form estimation of nonlinear dynamics (WENDy) into gLaSDI. In the proposed weak-form gLaSDI (WgLaSDI) framework, an autoencoder and WENDy are trained simultaneously to discover intrinsic nonlinear latent-space dynamics of high-dimensional data. Compared to the standard sparse identification of nonlinear dynamics (SINDy) employed in gLaSDI, WENDy enables variance reduction and robust latent space discovery, thereby leading to more accurate and efficient reduced-order modeling. Furthermore, the greedy physics-informed active learning in WgLaSDI enables adaptive sampling of optimal training data on the fly for enhanced modeling accuracy. The effectiveness of the proposed framework is demonstrated by modeling various nonlinear dynamical problems, including viscous and inviscid Burgers' equations, time-dependent radial advection, and the Vlasov equation for plasma physics. With data containing 5-10% Gaussian white noise, WgLaSDI outperforms gLaSDI by orders of magnitude, achieving 1-7% relative errors. Compared with the high-fidelity models, WgLaSDI achieves a 121 to 1,779x speed-up.
|
http://arxiv.org/pdf/2407.00337v1
|
[
"Xiaolong He",
"April Tran",
"David M. Bortz",
"Youngsoo Choi"
] |
2024-06-29T06:52:59Z
|
2024-06-29T06:52:59Z
|
2407.00336
|
Dual-view Aware Smart Contract Vulnerability Detection for Ethereum
|
The wide application of Ethereum technology has brought technological innovation to traditional industries. As one of Ethereum's core applications, smart contracts utilize diverse contract codes to meet various functional needs and have gained widespread use. However, the non-tamperability of smart contracts, coupled with vulnerabilities caused by natural flaws or human errors, has brought unprecedented challenges to blockchain security. Therefore, to ensure the healthy development of blockchain technology and the stability of the blockchain community, it is particularly important to study vulnerability detection techniques for smart contracts. In this paper, we propose a Dual-view Aware Smart Contract Vulnerability Detection Framework named DVDet. The framework initially converts the source code and bytecode of smart contracts into weighted graphs and control flow sequences, capturing potential risk features from these two perspectives and integrating them for analysis, ultimately achieving effective contract vulnerability detection. Comprehensive experiments on the Ethereum dataset show that our method outperforms others in detecting vulnerabilities.
|
http://arxiv.org/pdf/2407.00336v1
|
[
"Jiacheng Yao",
"Maolin Wang",
"Wanqi Chen",
"Chengxiang Jin",
"Jiajun Zhou",
"Shanqing Yu",
"Qi Xuan"
] |
2024-06-29T06:47:51Z
|
2024-06-29T06:47:51Z
|
2407.00332
|
Machine Learning Models for Dengue Forecasting in Singapore
|
With emerging prevalence beyond traditionally endemic regions, the global burden of dengue disease is forecasted to be one of the fastest growing. With limited direct treatment or vaccination currently available, prevention through vector control is widely believed to be the most effective form of managing outbreaks. This study examines traditional state space models (moving average, autoregressive, ARIMA, SARIMA), supervised learning techniques (XGBoost, SVM, KNN) and deep networks (LSTM, CNN, ConvLSTM) for forecasting weekly dengue cases in Singapore. Meteorological data and search engine trends were included as features for the ML techniques. Forecasts using CNNs yielded the lowest RMSE for weekly cases in 2019. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2407.00332v1
|
[
"Zi Iun Lai",
"Wai Kit Fung",
"Enquan Chew"
] |
2024-06-29T06:27:52Z
|
2024-06-29T06:27:52Z
|
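For the dengue forecasting entry above (2407.00332), here is a minimal sketch of one classical baseline from the study, a SARIMA model for weekly counts via statsmodels; the series, order, and seasonal order are synthetic and illustrative, not the paper's fitted configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
weeks = pd.date_range("2014-01-05", periods=312, freq="W")   # ~6 years of weekly data
cases = 120 + 60 * np.sin(2 * np.pi * np.arange(312) / 52) + rng.normal(0, 15, 312)
series = pd.Series(cases, index=weeks)

train, test = series[:-52], series[-52:]                     # hold out the final year
model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 52)).fit(disp=False)
forecast = model.forecast(steps=52)
rmse = float(np.sqrt(np.mean((forecast.values - test.values) ** 2)))
print(f"SARIMA 52-week RMSE: {rmse:.1f}")
```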
2407.01624
|
Guided Trajectory Generation with Diffusion Models for Offline
Model-based Optimization
|
Optimizing complex and high-dimensional black-box functions is ubiquitous in science and engineering fields. Unfortunately, the online evaluation of these functions is restricted due to time and safety constraints in most cases. In offline model-based optimization (MBO), we aim to find a design that maximizes the target function using only a pre-existing offline dataset. While prior methods consider forward or inverse approaches to address the problem, these approaches are limited by conservatism and the difficulty of learning highly multi-modal mappings. Recently, there has been an emerging paradigm of learning to improve solutions with synthetic trajectories constructed from the offline dataset. In this paper, we introduce a novel conditional generative modeling approach to produce trajectories toward high-scoring regions. First, we construct synthetic trajectories toward high-scoring regions using the dataset while injecting locality bias for consistent improvement directions. Then, we train a conditional diffusion model to generate trajectories conditioned on their scores. Lastly, we sample multiple trajectories from the trained model with guidance to explore high-scoring regions beyond the dataset and select high-fidelity designs among generated trajectories with the proxy function. Extensive experiment results demonstrate that our method outperforms competitive baselines on Design-Bench and its practical variants. The code is publicly available at https://github.com/dbsxodud-11/GTG.
|
http://arxiv.org/pdf/2407.01624v1
|
[
"Taeyoung Yun",
"Sujin Yun",
"Jaewoo Lee",
"Jinkyoo Park"
] |
2024-06-29T06:12:36Z
|
2024-06-29T06:12:36Z
|
2407.01623
|
Uncertainty estimation in satellite precipitation spatial prediction by
combining distributional regression algorithms
|
To facilitate effective decision-making, gridded satellite precipitation products should include uncertainty estimates. Machine learning has been proposed for issuing such estimates. However, most existing algorithms for this purpose rely on quantile regression. Distributional regression offers distinct advantages over quantile regression, including the ability to model intermittency as well as a stronger ability to extrapolate beyond the training data, which is critical for predicting extreme precipitation. In this work, we introduce the concept of distributional regression for the engineering task of creating precipitation datasets through data merging. Building upon this concept, we propose new ensemble learning methods that can be valuable not only for spatial prediction but also for prediction problems in general. These methods exploit conditional zero-adjusted probability distributions estimated with generalized additive models for location, scale, and shape (GAMLSS), spline-based GAMLSS and distributional regression forests as well as their ensembles (stacking based on quantile regression, and equal-weight averaging). To identify the most effective methods for our specific problem, we compared them to benchmarks using a large, multi-source precipitation dataset. Stacking emerged as the most successful strategy. Three specific stacking methods achieved the best performance based on the quantile scoring rule, although the ranking of these methods varied across quantile levels. This suggests that a task-specific combination of multiple algorithms could yield significant benefits.
|
http://arxiv.org/pdf/2407.01623v1
|
[
"Georgia Papacharalampous",
"Hristos Tyralis",
"Nikolaos Doulamis",
"Anastasios Doulamis"
] |
2024-06-29T05:58:00Z
|
2024-06-29T05:58:00Z
|
2407.01622
|
Addressing Prediction Delays in Time Series Forecasting: A Continuous
GRU Approach with Derivative Regularization
|
Time series forecasting has been an essential field in many different application areas, including economic analysis, meteorology, and so forth. The majority of time series forecasting models are trained using the mean squared error (MSE). However, this training based on MSE causes a limitation known as prediction delay. The prediction delay, which implies that the ground truth precedes the prediction, can cause serious problems in a variety of fields, e.g., finance and weather forecasting -- as a matter of fact, predictions that lag behind the ground-truth observations are not practically meaningful although their MSEs can be low. This paper proposes a new perspective on traditional time series forecasting tasks and introduces a new solution to mitigate the prediction delay. We introduce a continuous-time gated recurrent unit (GRU) based on the neural ordinary differential equation (NODE) which can supervise explicit time-derivatives. We generalize the GRU architecture in a continuous-time manner and minimize the prediction delay through our time-derivative regularization. Our method outperforms existing approaches on metrics such as MSE, Dynamic Time Warping (DTW), and Time Distortion Index (TDI). In addition, we demonstrate the low prediction delay of our method on a variety of datasets. (See the code sketch following this record.)
|
http://arxiv.org/abs/2407.01622v1
|
[
"Sheo Yon Jhin",
"Seojin Kim",
"Noseong Park"
] |
2024-06-29T05:36:04Z
|
2024-06-29T05:36:04Z
|
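For the prediction-delay entry above (2407.01622), here is an assumption-laden PyTorch sketch of the two ingredients the abstract names: a GRU treated as a continuous-time update integrated with Euler steps, and a regularizer that supervises the output's time derivative. This is one plausible reading of the method, not the authors' code.

```python
import torch
import torch.nn as nn

class ContinuousGRU(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hid_dim)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x, dt=0.1):                # x: (batch, time, in_dim)
        h = x.new_zeros(x.size(0), self.cell.hidden_size)
        outs = []
        for t in range(x.size(1)):
            dh = self.cell(x[:, t], h) - h       # read the GRU update as dh/dt
            h = h + dt * dh                      # explicit Euler integration step
            outs.append(self.head(h))
        return torch.stack(outs, dim=1).squeeze(-1)

def delay_aware_loss(pred, target, lam=0.5):
    mse = ((pred - target) ** 2).mean()
    # time-derivative regularization: match finite differences of the trajectory
    d_pred = pred[:, 1:] - pred[:, :-1]
    d_true = target[:, 1:] - target[:, :-1]
    return mse + lam * ((d_pred - d_true) ** 2).mean()

model = ContinuousGRU(3, 16)
x, y = torch.randn(8, 50, 3), torch.randn(8, 50)
delay_aware_loss(model(x), y).backward()
```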
2308.09790
|
A Two-Part Machine Learning Approach to Characterizing Network
Interference in A/B Testing
|
The reliability of controlled experiments, commonly referred to as "A/B tests," is often compromised by network interference, where the outcomes of individual units are influenced by interactions with others. Significant challenges in this domain include the lack of accounting for complex social network structures and the difficulty in suitably characterizing network interference. To address these challenges, we propose a machine learning-based method. We introduce "causal network motifs" and utilize transparent machine learning models to characterize network interference patterns underlying an A/B test on networks. Our method's performance has been demonstrated through simulations on both a synthetic experiment and a large-scale test on Instagram. Our experiments show that our approach outperforms conventional methods such as design-based cluster randomization and conventional analysis-based neighborhood exposure mapping. Our approach provides a comprehensive and automated solution to address network interference for A/B testing practitioners. This aids in informing strategic business decisions in areas such as marketing effectiveness and product customization.
|
http://arxiv.org/pdf/2308.09790v2
|
[
"Yuan Yuan",
"Kristen M. Altenburger"
] |
2024-06-29T05:28:23Z
|
2023-08-18T19:37:55Z
|
2401.01218
|
Self-Supervised Position Debiasing for Large Language Models
|
Fine-tuning has been demonstrated to be an effective method to improve the domain performance of large language models (LLMs). However, LLMs might fit dataset biases and shortcuts for prediction, leading to poor generation performance. Previous works have shown that LLMs are prone to exhibit position bias, i.e., leveraging information positioned at the beginning or end, or specific positional cues within the input. Existing debiasing methods for LLMs require external bias knowledge or annotated non-biased samples, which are lacking for position debiasing and impractical to obtain in reality. In this work, we propose a self-supervised position debiasing (SOD) framework to mitigate position bias for LLMs. SOD leverages unsupervised responses from pre-trained LLMs for debiasing without relying on any external knowledge. To improve the quality of unsupervised responses, we propose an objective alignment (OAM) module to prune these responses. Experiments on eight datasets and five tasks show that SOD consistently outperforms existing methods in mitigating three types of position biases. Moreover, SOD achieves this while sacrificing only a small amount of performance on biased samples, demonstrating that it is both general and effective. To facilitate the reproducibility of the results, we share the code of all methods and datasets at https://github.com/LZKSKY/SOD.
|
http://arxiv.org/pdf/2401.01218v3
|
[
"Zhongkun Liu",
"Zheng Chen",
"Mengqi Zhang",
"Zhaochun Ren",
"Pengjie Ren",
"Zhumin Chen"
] |
2024-06-29T05:20:09Z
|
2024-01-02T14:12:41Z
|
2401.10510
|
When large language models meet evolutionary algorithms
|
Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text. Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems. Motivated by the shared collective and directional nature of text generation and evolution, this paper illustrates the parallels between LLMs and EAs, which include multiple one-to-one key characteristics: token representation and individual representation, position encoding and fitness shaping, position embedding and selection, Transformer blocks and reproduction, and model training and parameter adaptation. By examining these parallels, we analyze existing interdisciplinary research, with a specific focus on evolutionary fine-tuning and LLM-enhanced EAs. Drawing from these insights, valuable future directions are presented for advancing the integration of LLMs and EAs, while highlighting key challenges along the way. These parallels not only reveal the evolution mechanism behind LLMs but also facilitate the development of evolved artificial agents that approach or surpass biological organisms.
|
http://arxiv.org/pdf/2401.10510v2
|
[
"Wang Chao",
"Jiaxuan Zhao",
"Licheng Jiao",
"Lingling Li",
"Fang Liu",
"Shuyuan Yang"
] |
2024-06-29T05:16:33Z
|
2024-01-19T05:58:30Z
|
2407.00320
|
LiteSearch: Efficacious Tree Search for LLM
|
Recent research suggests that tree search algorithms (e.g., Monte Carlo Tree Search) can dramatically boost LLM performance on complex mathematical reasoning tasks. However, they often require more than 10 times the computational resources of greedy decoding due to wasteful search strategies, making them difficult to deploy in practical applications. This study introduces a novel guided tree search algorithm with dynamic node selection and node-level exploration budget (maximum number of children) calculation to tackle this issue. By considering the search progress towards the final answer (history) and the guidance from a value network (future) trained without any step-wise annotations, our algorithm iteratively selects the most promising tree node before expanding it within the boundaries of the allocated computational budget. Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach not only offers competitive performance but also enjoys significantly lower computational costs compared to baseline methods. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2407.00320v1
|
[
"Ante Wang",
"Linfeng Song",
"Ye Tian",
"Baolin Peng",
"Dian Yu",
"Haitao Mi",
"Jinsong Su",
"Dong Yu"
] |
2024-06-29T05:14:04Z
|
2024-06-29T05:14:04Z
|
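For the LiteSearch entry above (2407.00320), here is a schematic pure-Python sketch of value-guided tree search with a per-node expansion budget. The budget rule below (more confident nodes expand fewer children) is one plausible choice, not the paper's exact calculation; `expand` and `value_fn` stand in for the LLM and the value network.

```python
import heapq

def guided_tree_search(root, expand, value_fn, max_nodes=64, max_children=5):
    frontier = [(-value_fn(root), 0, root)]          # max-heap via negated values
    best, tick = root, 0
    while frontier and tick < max_nodes:
        neg_v, _, node = heapq.heappop(frontier)     # most promising node first
        budget = max(1, round(max_children * (1 + neg_v)))  # value assumed in [0, 1]
        children = sorted(expand(node), key=value_fn, reverse=True)[:budget]
        for child in children:
            tick += 1
            heapq.heappush(frontier, (-value_fn(child), tick, child))
            if value_fn(child) > value_fn(best):
                best = child
    return best

# toy usage: search bit strings of length 8 for the highest fraction of ones
expand = lambda s: [s + "0", s + "1"] if len(s) < 8 else []
value = lambda s: s.count("1") / 8 if s else 0.0
print(guided_tree_search("", expand, value))         # -> "11111111"
```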
2209.08907
|
Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning
|
In this paper, we build upon the emerging topic of loss function learning, which aims to learn loss functions that significantly improve the performance of the models trained under them. Specifically, we propose a new meta-learning framework for learning model-agnostic loss functions via a hybrid neuro-symbolic search approach. The framework first uses evolution-based methods to search the space of primitive mathematical operations to find a set of symbolic loss functions. Second, the learned loss functions are subsequently parameterized and optimized via an end-to-end gradient-based training procedure. The versatility of the proposed framework is empirically validated on a diverse set of supervised learning tasks. Results show that the meta-learned loss functions discovered by the newly proposed method outperform both the cross-entropy loss and state-of-the-art loss function learning methods on a diverse range of neural network architectures and datasets.
|
http://arxiv.org/pdf/2209.08907v3
|
[
"Christian Raymond",
"Qi Chen",
"Bing Xue",
"Mengjie Zhang"
] |
2024-06-29T04:57:47Z
|
2022-09-19T10:29:01Z
|
2405.19730
|
Research on Foundation Model for Spatial Data Intelligence: China's 2024
White Paper on Strategic Development of Spatial Data Intelligence
|
This report focuses on spatial data intelligent large models, delving into the principles, methods, and cutting-edge applications of these models. It provides an in-depth discussion on the definition, development history, current status, and trends of spatial data intelligent large models, as well as the challenges they face. The report systematically elucidates the key technologies of spatial data intelligent large models and their applications in urban environments, aerospace remote sensing, geography, transportation, and other scenarios. Additionally, it summarizes the latest application cases of spatial data intelligent large models in themes such as urban development, multimodal systems, remote sensing, smart transportation, and resource environments. Finally, the report concludes with an overview and outlook on the development prospects of spatial data intelligent large models.
|
http://arxiv.org/pdf/2405.19730v2
|
[
"Shaohua Wang",
"Xing Xie",
"Yong Li",
"Danhuai Guo",
"Zhi Cai",
"Yu Liu",
"Yang Yue",
"Xiao Pan",
"Feng Lu",
"Huayi Wu",
"Zhipeng Gui",
"Zhiming Ding",
"Bolong Zheng",
"Fuzheng Zhang",
"Tao Qin",
"Jingyuan Wang",
"Chuang Tao",
"Zhengchao Chen",
"Hao Lu",
"Jiayi Li",
"Hongyang Chen",
"Peng Yue",
"Wenhao Yu",
"Yao Yao",
"Leilei Sun",
"Yong Zhang",
"Longbiao Chen",
"Xiaoping Du",
"Xiang Li",
"Xueying Zhang",
"Kun Qin",
"Zhaoya Gong",
"Weihua Dong",
"Xiaofeng Meng"
] |
2024-06-29T04:31:52Z
|
2024-05-30T06:21:34Z
|
2407.00294
|
Deep Neural Networks with Symplectic Preservation Properties
|
We propose a deep neural network architecture designed such that its output forms an invertible symplectomorphism of the input. This design draws an analogy to the real-valued non-volume-preserving (real NVP) method used in normalizing flow techniques. Utilizing this type of neural network allows for learning tasks on unknown Hamiltonian systems without breaking the inherent symplectic structure of the phase space. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2407.00294v1
|
[
"Qing He",
"Wei Cai"
] |
2024-06-29T03:25:54Z
|
2024-06-29T03:25:54Z
|
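For the symplectic-preserving entry above (2407.00294), here is a PyTorch sketch of the real NVP analogy the abstract draws: alternating "shear" coupling layers that update only p using the gradient of a scalar function of q (and vice versa) are exactly symplectic, and each is invertible in closed form by flipping the sign of the gradient. Layer sizes and names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SymplecticShear(nn.Module):
    """(q, p) -> (q, p - grad V(q)) or (q + grad T(p), p): both maps are symplectic
    because their Jacobians are unit-triangular with a symmetric off-diagonal block."""
    def __init__(self, dim, update_p=True):
        super().__init__()
        self.scalar = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.update_p = update_p

    def forward(self, q, p):
        frozen = q if self.update_p else p            # inputs must require grad
        grad = torch.autograd.grad(self.scalar(frozen).sum(), frozen,
                                   create_graph=True)[0]
        return (q, p - grad) if self.update_p else (q + grad, p)

layers = [SymplecticShear(2, update_p=(i % 2 == 0)) for i in range(4)]
q = torch.randn(16, 2, requires_grad=True)            # phase-space coordinates
p = torch.randn(16, 2, requires_grad=True)
for layer in layers:
    q, p = layer(q, p)                                # composed symplectomorphism
```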
2407.01621
|
Deciphering interventional dynamical causality from non-intervention
systems
|
Detecting and quantifying causality is a focal topic in the fields of science, engineering, and interdisciplinary studies. However, causal studies on non-intervention systems attract much attention but remain extremely challenging. To address this challenge, we propose a framework named Interventional Dynamical Causality (IntDC) for such non-intervention systems, along with its computational criterion, Interventional Embedding Entropy (IEE), to quantify causality. The IEE criterion theoretically and numerically enables the deciphering of IntDC solely from observational (non-interventional) time-series data, without requiring any knowledge of dynamical models or real interventions in the considered system. Demonstrations of performance showed the accuracy and robustness of IEE on benchmark simulated systems as well as real-world systems, including the neural connectomes of C. elegans, COVID-19 transmission networks in Japan, and regulatory networks surrounding key circadian genes.
|
http://arxiv.org/pdf/2407.01621v1
|
[
"Jifan Shi",
"Yang Li",
"Juan Zhao",
"Siyang Leng",
"Kazuyuki Aihara",
"Luonan Chen",
"Wei Lin"
] |
2024-06-29T03:17:53Z
|
2024-06-29T03:17:53Z
|
2406.04609
|
Diverse Intra- and Inter-Domain Activity Style Fusion for Cross-Person
Generalization in Activity Recognition
|
Existing domain generalization (DG) methods for cross-person generalization tasks often face challenges in capturing intra- and inter-domain style diversity, resulting in domain gaps with the target domain. In this study, we explore a novel perspective to tackle this problem, a process conceptualized as domain padding. This proposal aims to enrich the domain diversity by synthesizing intra- and inter-domain style data while maintaining robustness to class labels. We instantiate this concept using a conditional diffusion model and introduce a style-fused sampling strategy to enhance data generation diversity. In contrast to traditional condition-guided sampling, our style-fused sampling strategy allows for the flexible use of one or more random styles to guide data synthesis. This feature presents a notable advancement: it allows for the maximum utilization of possible permutations and combinations among existing styles to generate a broad spectrum of new style instances. Empirical evaluations on a broad range of datasets demonstrate that our generated data achieves remarkable diversity within the domain space. Both intra- and inter-domain generated data have proven to be significant and valuable, contributing to varying degrees of performance enhancements. Notably, our approach outperforms state-of-the-art DG methods in all human activity recognition tasks.
|
http://arxiv.org/pdf/2406.04609v2
|
[
"Junru Zhang",
"Lang Feng",
"Zhidan Liu",
"Yuhan Wu",
"Yang He",
"Yabo Dong",
"Duanqing Xu"
] |
2024-06-29T03:15:51Z
|
2024-06-07T03:37:30Z
|
2404.15993
|
Uncertainty Estimation and Quantification for LLMs: A Simple Supervised
Approach
|
In this paper, we study the problem of uncertainty estimation and calibration for LLMs. We first formulate the uncertainty estimation problem for LLMs and then propose a supervised approach that takes advantage of labeled datasets to estimate the uncertainty in LLMs' responses. Based on the formulation, we illustrate the difference between the uncertainty estimation for LLMs and that for standard ML models and explain why the hidden neurons of LLMs may contain uncertainty information. Our designed approach demonstrates the benefits of utilizing hidden activations to enhance uncertainty estimation across various tasks and shows robust transferability in out-of-distribution settings. We distinguish the uncertainty estimation task from the uncertainty calibration task and show that a better uncertainty estimation model leads to better calibration performance. Furthermore, our method is easy to implement and adaptable to different levels of model accessibility, including black box, grey box, and white box. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2404.15993v3
|
[
"Linyu Liu",
"Yu Pan",
"Xiaocheng Li",
"Guanting Chen"
] |
2024-06-29T02:58:21Z
|
2024-04-24T17:10:35Z
|
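For the LLM uncertainty entry above (2404.15993), here is a minimal sketch of the supervised recipe: fit a small probe on hidden activations to predict response correctness, then read its output as an uncertainty score. The random features below are stand-ins for real LLM hidden states, and the probe choice is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden = rng.normal(size=(2000, 256))      # last-layer activations (stand-in)
w = rng.normal(size=256)                   # synthetic "truth" direction
correct = (hidden @ w + rng.normal(0, 3, 2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(hidden, correct, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
uncertainty = 1.0 - probe.predict_proba(X_te)[:, 1]   # P(incorrect) as uncertainty
print("probe accuracy:", probe.score(X_te, y_te))
```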
2407.00286
|
Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching
in Wireless Networks
|
Optimizing edge caching is crucial for the advancement of next-generation (nextG) wireless networks, ensuring high-speed and low-latency services for mobile users. Existing data-driven optimization approaches often lack awareness of the distribution of random data variables and focus solely on optimizing cache hit rates, neglecting potential reliability concerns, such as base station overload and unbalanced cache issues. This oversight can result in system crashes and degraded user experience. To bridge this gap, we introduce a novel digital twin-assisted optimization framework, called D-REC, which integrates reinforcement learning (RL) with diverse intervention modules to ensure reliable caching in nextG wireless networks. We first develop a joint vertical and horizontal twinning approach to efficiently create network digital twins, which are then employed by D-REC as RL optimizers and safeguards, providing ample datasets for training and predictive evaluation of our cache replacement policy. By incorporating reliability modules into a constrained Markov decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints, minimizing the risk of network failures. Theoretical analysis demonstrates comparable convergence rates between D-REC and vanilla data-driven methods without compromising caching performance. Extensive experiments validate that D-REC outperforms conventional approaches in cache hit rate and load balancing while effectively enforcing predetermined reliability intervention modules.
|
http://arxiv.org/pdf/2407.00286v1
|
[
"Zifan Zhang",
"Yuchen Liu",
"Zhiyuan Peng",
"Mingzhe Chen",
"Dongkuan Xu",
"Shuguang Cui"
] |
2024-06-29T02:40:28Z
|
2024-06-29T02:40:28Z
|
2407.07742
|
Science-Informed Deep Learning (ScIDL) With Applications to Wireless
Communications
|
Given the extensive and growing capabilities offered by deep learning (DL), more researchers are turning to DL to address complex challenges in next-generation (xG) communications. However, despite its progress, DL also reveals several limitations that are becoming increasingly evident. One significant issue is its lack of interpretability, which is especially critical for safety-sensitive applications. Another significant consideration is that DL may not comply with the constraints set by physics laws or given security standards, which are essential for reliable DL. Additionally, DL models often struggle outside their training data distributions, which is known as poor generalization. Moreover, there is a scarcity of theoretical guidance on designing DL algorithms. These challenges have prompted the emergence of a burgeoning field known as science-informed DL (ScIDL). ScIDL aims to integrate existing scientific knowledge with DL techniques to develop more powerful algorithms. The core objective of this article is to provide a brief tutorial on ScIDL that illustrates its building blocks and distinguishes it from conventional DL. Furthermore, we discuss both recent applications of ScIDL and potential future research directions in the field of wireless communications.
|
http://arxiv.org/pdf/2407.07742v1
|
[
"Atefeh Termehchi",
"Ekram Hossain",
"Isaac Woungang"
] |
2024-06-29T02:35:39Z
|
2024-06-29T02:35:39Z
|
2406.18853
|
Decoding-Time Language Model Alignment with Multiple Objectives
|
Aligning language models (LMs) to human preferences has emerged as a critical pursuit, enabling these models to better serve diverse user needs. Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives. Here, we propose $\textbf{multi-objective decoding (MOD)}$, a decoding-time algorithm that outputs the next token from a linear combination of predictions of all base models, for any given weightings over different objectives. We exploit a common form among a family of $f$-divergence regularized alignment approaches (such as PPO, DPO, and their variants) to identify a closed-form solution by Legendre transform, and derive an efficient decoding strategy. Theoretically, we show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method. Empirical results demonstrate the effectiveness of the algorithm. For example, compared to a parameter-merging baseline, MOD achieves 12.8% overall reward improvement when equally optimizing towards $3$ objectives. Moreover, we experiment with MOD on combining three fully-finetuned LLMs of different model sizes, each aimed at different objectives such as safety, coding, and general user preference. Unlike traditional methods that require careful curation of a mixture of datasets to achieve comprehensive improvement, we can quickly experiment with preference weightings using MOD to find the best combination of models. Our best combination reduces toxicity on Toxigen to nearly 0% and achieves 7.9--33.3% improvement across other three metrics (\textit{i.e.}, Codex@1, GSM-COT, BBH-COT). (See the code sketch following this record.)
|
http://arxiv.org/pdf/2406.18853v2
|
[
"Ruizhe Shi",
"Yifang Chen",
"Yushi Hu",
"Alisa Liu",
"Hannaneh Hajishirzi",
"Noah A. Smith",
"Simon Du"
] |
2024-06-29T02:29:38Z
|
2024-06-27T02:46:30Z
|
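For the multi-objective decoding entry above (2406.18853), here is a sketch of the core decoding-time operation: the next token comes from a weighted linear combination of the base models' next-token predictions. The closed-form weighting the paper derives under f-divergence regularization is not reproduced; the logits here are random stand-ins.

```python
import torch

def mod_next_token(logit_list, weights):
    """logit_list: per-objective next-token logits, each (vocab,); weights sum to 1."""
    combined = sum(w * l for w, l in zip(weights, logit_list))
    return int(torch.argmax(combined))

vocab = 100
logits_safety = torch.randn(vocab)     # stand-ins for three aligned base models
logits_coding = torch.randn(vocab)
logits_general = torch.randn(vocab)
tok = mod_next_token([logits_safety, logits_coding, logits_general],
                     weights=[0.5, 0.25, 0.25])
print("chosen token id:", tok)
```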
2407.00278
|
PerAct2: A Perceiver Actor Framework for Bimanual Manipulation Tasks
|
Bimanual manipulation is challenging due to the precise spatial and temporal coordination required between two arms. While there exist several real-world bimanual systems, there is a lack of simulated benchmarks with a large task diversity for systematically studying bimanual capabilities across a wide range of tabletop tasks. This paper addresses the gap by extending RLBench to bimanual manipulation. We open-source our code and benchmark comprising 13 new tasks with 23 unique task variations, each requiring a high degree of coordination and adaptability. To kickstart the benchmark, we extended several state-of-the-art methods to bimanual manipulation and also present a language-conditioned behavioral cloning agent -- PerAct2, which enables the learning and execution of bimanual 6-DoF manipulation tasks. Our novel network architecture efficiently integrates language processing with action prediction, allowing robots to understand and perform complex bimanual tasks in response to user-specified goals. Project website with code is available at: http://bimanual.github.io
|
http://arxiv.org/pdf/2407.00278v1
|
[
"Markus Grotz",
"Mohit Shridhar",
"Tamim Asfour",
"Dieter Fox"
] |
2024-06-29T02:06:01Z
|
2024-06-29T02:06:01Z
|
2405.12502
|
EntropyStop: Unsupervised Deep Outlier Detection with Loss Entropy
|
Unsupervised Outlier Detection (UOD) is an important data mining task. With the advance of deep learning, deep outlier detection (OD) has received broad interest. Most deep UOD models are trained exclusively on clean datasets to learn the distribution of the normal data, which requires huge manual effort to clean real-world data when that is possible at all. Instead of relying on clean datasets, some approaches directly train and detect on unlabeled contaminated datasets, leading to the need for methods that are robust to such conditions. Ensemble methods have emerged as a superior solution to enhance model robustness against contaminated training sets, but ensembling greatly increases training time. In this study, we investigate the impact of outliers on the training phase, aiming to halt training on unlabeled contaminated datasets before performance degradation. Initially, we noted that blending normal and anomalous data causes fluctuations in AUC, a label-dependent measure of detection accuracy. To circumvent the need for labels, we propose a zero-label entropy metric named loss entropy for the loss distribution, enabling us to infer optimal stopping points for training without labels. Meanwhile, we theoretically demonstrate a negative correlation between the entropy metric and the label-based AUC. Based on this, we develop an automated early-stopping algorithm, EntropyStop, which halts training when loss entropy suggests the maximum model detection capability. We conduct extensive experiments on ADBench (including 47 real datasets), and the overall results indicate that an AutoEncoder (AE) enhanced by our approach not only achieves better performance than ensemble AEs but also requires less than 2% of the training time. Lastly, our proposed metric and early-stopping approach are evaluated on other deep OD models, exhibiting their broad potential applicability. (See the code sketch following this record.)
|
http://arxiv.org/abs/2405.12502v3
|
[
"Yihong Huang",
"Yuang Zhang",
"Liping Wang",
"Fan Zhang",
"Xuemin Lin"
] |
2024-06-29T01:40:46Z
|
2024-05-21T05:17:43Z
|
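For the EntropyStop entry above (2405.12502), here is an illustrative numpy sketch of a zero-label stopping signal: track the entropy of the per-sample loss distribution each epoch and halt when it stops decreasing. The histogram entropy and patience rule are plausible choices, not the paper's exact formulation, and the loss samples are synthetic.

```python
import numpy as np

def loss_entropy(losses, bins=50):
    hist, _ = np.histogram(losses, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())      # entropy of the loss distribution

def should_stop(history, patience=3):
    """Stop once entropy has not improved for `patience` consecutive epochs."""
    if len(history) <= patience:
        return False
    return min(history[-patience:]) >= min(history[:-patience])

rng = np.random.default_rng(0)
history = []
for epoch in range(100):
    losses = rng.gamma(2.0, 1.0 / (epoch + 1), size=1024)  # stand-in per-sample losses
    history.append(loss_entropy(losses))
    if should_stop(history):
        print(f"early stop at epoch {epoch}")
        break
```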
2404.01273
|
TWIN-GPT: Digital Twins for Clinical Trials via Large Language Model
|
Clinical trials are indispensable for medical research and the development of new treatments. However, clinical trials often involve thousands of participants and can span several years to complete, with a high probability of failure during the process. Recently, there has been a burgeoning interest in virtual clinical trials, which simulate real-world scenarios and hold the potential to significantly enhance patient safety, expedite development, reduce costs, and contribute to the broader scientific knowledge in healthcare. Existing research often focuses on leveraging electronic health records (EHRs) to support clinical trial outcome prediction. Yet, trained with limited clinical trial outcome data, existing approaches frequently struggle to perform accurate predictions. Some research has attempted to generate EHRs to augment model development but has fallen short in personalizing the generation for individual patient profiles. Recently, the emergence of large language models has illuminated new possibilities, as their embedded comprehensive clinical knowledge has proven beneficial in addressing medical issues. In this paper, we propose a large language model-based digital twin creation approach, called TWIN-GPT. TWIN-GPT can establish cross-dataset associations of medical information given limited data, generating unique personalized digital twins for different patients, thereby preserving individual patient characteristics. Comprehensive experiments show that using digital twins created by TWIN-GPT can boost the clinical trial outcome prediction, exceeding various previous prediction approaches.
|
http://arxiv.org/pdf/2404.01273v2
|
[
"Yue Wang",
"Tianfan Fu",
"Yinlong Xu",
"Zihan Ma",
"Hongxia Xu",
"Yingzhou Lu",
"Bang Du",
"Honghao Gao",
"Jian Wu"
] |
2024-06-29T01:28:02Z
|
2024-04-01T17:48:55Z
|
2405.12489
|
Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks
|
Exploring the loss landscape offers insights into the inherent principles of deep neural networks (DNNs). Recent work suggests an additional asymmetry of the valley beyond the flat and sharp ones, yet without thoroughly examining its causes or implications. Our study methodically explores the factors affecting the symmetry of DNN valleys, encompassing (1) the dataset, network architecture, initialization, and hyperparameters that influence the convergence point; and (2) the magnitude and direction of the noise for 1D visualization. Our major observation shows that the \textit{degree of sign consistency} between the noise and the convergence point is a critical indicator of valley symmetry. Theoretical insights from the ReLU activation and softmax function can explain this interesting phenomenon. Our discovery propels novel understanding and applications in the scenario of model fusion: (1) the efficacy of interpolating separate models significantly correlates with their sign consistency ratio, and (2) imposing sign alignment during federated learning emerges as an innovative approach for model parameter alignment. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2405.12489v3
|
[
"Xin-Chun Li",
"Jin-Lin Tang",
"Bo Zhang",
"Lan Li",
"De-Chuan Zhan"
] |
2024-06-29T00:46:04Z
|
2024-05-21T04:18:57Z
|
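For the asymmetric-valley entry above (2405.12489), here is a one-function sketch of the quantity the abstract highlights: the sign-consistency ratio between a 1D-visualization noise direction and the converged parameters. For a real checkpoint one would flatten the model's weights; random tensors stand in here.

```python
import torch

def sign_consistency(theta: torch.Tensor, noise: torch.Tensor) -> float:
    """Fraction of coordinates where the probe direction agrees in sign
    with the convergence point."""
    return (torch.sign(noise) == torch.sign(theta)).float().mean().item()

theta = torch.randn(10_000)              # flattened converged weights (stand-in)
direction = torch.randn_like(theta)      # random 1D probe direction
print(f"sign consistency: {sign_consistency(theta, direction):.3f}")   # ~0.5 here
```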
2407.00267
|
Learning a Clinically-Relevant Concept Bottleneck for Lesion Detection
in Breast Ultrasound
|
Detecting and classifying lesions in breast ultrasound images is a promising application of artificial intelligence (AI) for reducing the burden of cancer in regions with limited access to mammography. Such AI systems are more likely to be useful in a clinical setting if their predictions can be explained to a radiologist. This work proposes an explainable AI model that provides interpretable predictions using a standard lexicon from the American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS). The model is a deep neural network featuring a concept bottleneck layer in which known BI-RADS features are predicted before making a final cancer classification. This enables radiologists to easily review the predictions of the AI system and potentially fix errors in real time by modifying the concept predictions. In experiments, a model is developed on 8,854 images from 994 women with expert annotations and histological cancer labels. The model outperforms state-of-the-art lesion detection frameworks with 48.9 average precision on the held-out testing set, and for cancer classification, concept intervention is shown to increase performance from 0.876 to 0.885 area under the receiver operating characteristic curve. Training and evaluation code is available at https://github.com/hawaii-ai/bus-cbm. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2407.00267v1
|
[
"Arianna Bunnell",
"Yannik Glaser",
"Dustin Valdez",
"Thomas Wolfgruber",
"Aleen Altamirano",
"Carol Zamora González",
"Brenda Y. Hernandez",
"Peter Sadowski",
"John A. Shepherd"
] |
2024-06-29T00:44:33Z
|
2024-06-29T00:44:33Z
|
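For the breast-ultrasound entry above (2407.00267), here is a PyTorch sketch of a concept bottleneck head of the kind described: BI-RADS-style concepts are predicted first, the cancer classifier sees only those concepts, and a reader can overwrite them at test time. The backbone, concept count, and sizes are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    def __init__(self, feat_dim=512, n_concepts=8):
        super().__init__()
        self.concepts = nn.Linear(feat_dim, n_concepts)   # e.g. margin, shape, ...
        self.classifier = nn.Linear(n_concepts, 1)        # cancer vs. benign

    def forward(self, features, intervention=None):
        c = torch.sigmoid(self.concepts(features))
        if intervention is not None:                      # expert fixes some concepts
            mask, values = intervention
            c = torch.where(mask, values, c)
        return c, torch.sigmoid(self.classifier(c))

head = ConceptBottleneckHead()
feats = torch.randn(4, 512)                               # backbone output (stand-in)
concepts, risk = head(feats)

# test-time intervention: the reader corrects concept 0 for every case
mask = torch.zeros(4, 8, dtype=torch.bool)
mask[:, 0] = True
_, corrected_risk = head(feats, intervention=(mask, torch.ones(4, 8)))
```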
2407.03365
|
ML Updates for OpenStreetMap: Analysis of Research Gaps and Future
Directions
|
Maintaining accurate, up-to-date maps is important in any dynamic urban landscape, supporting various aspects of modern society, such as urban planning, navigation, and emergency response. However, traditional (i.e. largely manual) map production and crowdsourced mapping methods still struggle to keep pace with rapid changes in the built environment. Such manual mapping workflows are time-consuming and prone to human errors, leading to early obsolescence and/or the need for extensive auditing. The current map updating process in OpenStreetMap provides an example of this limitation, relying on numerous manual steps in its online map updating workflow. To address this, there is a need to explore automating the entire end-to-end map updating process. Tech giants such as Google and Microsoft have already started investigating Machine Learning (ML) techniques to tackle this contemporary mapping problem. This paper offers an analysis of these ML approaches, focusing on their application to updating OpenStreetMap in particular. By analysing the current state-of-the-art in this field, this study identifies some key research gaps and introduces DeepMapper as a practical solution for advancing the automatic online map updating process in the future.
|
http://arxiv.org/pdf/2407.03365v1
|
[
"Lasith Niroshan",
"James D. Carswell"
] |
2024-06-28T23:51:04Z
|
2024-06-28T23:51:04Z
|
2402.14601
|
Bringing Generative AI to Adaptive Learning in Education
|
The recent surge in generative AI technologies, such as large language models and diffusion models, has boosted the development of AI applications in various domains, including science, finance, and education. Concurrently, adaptive learning, a concept that has gained substantial interest in the educational sphere, has proven its efficacy in enhancing students' learning efficiency. In this position paper, we aim to shed light on the intersectional studies of these two methods, which combine generative AI with adaptive learning concepts. By presenting discussions about the benefits, challenges, and potentials in this field, we argue that this union will contribute significantly to the development of the next-stage learning format in education.
|
http://arxiv.org/pdf/2402.14601v3
|
[
"Hang Li",
"Tianlong Xu",
"Chaoli Zhang",
"Eason Chen",
"Jing Liang",
"Xing Fan",
"Haoyang Li",
"Jiliang Tang",
"Qingsong Wen"
] |
2024-06-28T23:43:07Z
|
2024-02-02T23:54:51Z
|
2407.00264
|
External Model Motivated Agents: Reinforcement Learning for Enhanced
Environment Sampling
|
Unlike reinforcement learning (RL) agents, humans remain capable multitaskers in changing environments. In spite of only experiencing the world through their own observations and interactions, people know how to balance focusing on tasks with learning about how changes may affect their understanding of the world. This is possible by choosing to solve tasks in ways that are interesting and generally informative beyond just the current task. Motivated by this, we propose an agent influence framework for RL agents to improve the adaptation efficiency of external models in changing environments without any changes to the agent's rewards. Our formulation is composed of two self-contained modules: interest fields and behavior shaping via interest fields. We implement an uncertainty-based interest field algorithm as well as a skill-sampling-based behavior-shaping algorithm to use in testing this framework. Our results show that our method outperforms the baselines in terms of external model adaptation on metrics that measure both efficiency and performance.
|
http://arxiv.org/pdf/2407.00264v1
|
[
"Rishav Bhagat",
"Jonathan Balloch",
"Zhiyu Lin",
"Julia Kim",
"Mark Riedl"
] |
2024-06-28T23:31:22Z
|
2024-06-28T23:31:22Z
|
2405.02783
|
Linear Noise Approximation Assisted Bayesian Inference on Mechanistic
Model of Partially Observed Stochastic Reaction Network
|
To support online learning of mechanisms and facilitate digital twin development for biomanufacturing processes, this paper develops an efficient Bayesian inference approach for a partially observed enzymatic stochastic reaction network (SRN), a fundamental building block of multi-scale bioprocess mechanistic models. To tackle the critical challenges posed by a nonlinear stochastic differential equation (SDE)-based mechanistic model with partially observed states and measurement errors, an interpretable Bayesian updating linear noise approximation (LNA) metamodel, incorporating the structure information of the mechanistic model, is proposed to approximate the likelihood of observations. Then, an efficient posterior sampling approach is developed by utilizing the gradients of the derived likelihood to speed up the convergence of Markov chain Monte Carlo (MCMC). The empirical study demonstrates that the proposed approach has promising performance.
|
http://arxiv.org/pdf/2405.02783v2
|
[
"Wandi Xu",
"Wei Xie"
] |
2024-06-28T23:30:36Z
|
2024-05-05T01:54:21Z
|
2407.00256
|
One Prompt is not Enough: Automated Construction of a Mixture-of-Expert
Prompts
|
Large Language Models (LLMs) exhibit strong generalization capabilities to novel tasks when prompted with language instructions and in-context demos. Since this ability sensitively depends on the quality of prompts, various methods have been explored to automate the instruction design. While these methods demonstrated promising results, they also restricted the searched prompt to one instruction. Such simplification significantly limits their capacity, as a single demo-free instruction might not be able to cover the entire complex problem space of the targeted task. To alleviate this issue, we adopt the Mixture-of-Expert paradigm and divide the problem space into a set of sub-regions; each sub-region is governed by a specialized expert, equipped with both an instruction and a set of demos. A two-phase process is developed to construct the specialized expert for each region: (1) demo assignment: inspired by the theoretical connection between in-context learning and kernel regression, we group demos into experts based on their semantic similarity; (2) instruction assignment: a region-based joint search of an instruction per expert complements the demos assigned to it, yielding a synergistic effect. The resulting method, codenamed Mixture-of-Prompts (MoP), achieves an average win rate of 81% against prior art across several major benchmarks. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2407.00256v1
|
[
"Ruochen Wang",
"Sohyun An",
"Minhao Cheng",
"Tianyi Zhou",
"Sung Ju Hwang",
"Cho-Jui Hsieh"
] |
2024-06-28T23:05:08Z
|
2024-06-28T23:05:08Z
|
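For the Mixture-of-Prompts entry above (2407.00256), here is a sketch of the first phase, demo assignment: group demos into experts by the semantic similarity of their embeddings. The embeddings below are random stand-ins; any sentence encoder could supply real vectors.

```python
import numpy as np
from sklearn.cluster import KMeans

demos = ["add two numbers", "sum a list", "translate to French",
         "translate to German", "sort an array", "reverse a string"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(demos), 384))   # stand-in sentence embeddings

n_experts = 3
labels = KMeans(n_clusters=n_experts, n_init=10, random_state=0).fit_predict(embeddings)
experts = {k: [d for d, a in zip(demos, labels) if a == k] for k in range(n_experts)}
for k, members in experts.items():
    print(f"expert {k}: {members}")   # each expert then gets its own instruction
```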
2402.11656
|
Integrating Pre-Trained Language Model with Physical Layer
Communications
|
The burgeoning field of on-device AI communication, where devices exchange information directly through embedded foundation models, such as language models (LMs), requires robust, efficient, and generalizable communication frameworks. However, integrating these frameworks with existing wireless systems and effectively managing noise and bit errors pose significant challenges. In this work, we introduce a practical on-device AI communication framework, integrated with physical layer (PHY) communication functions, demonstrated through its performance on a link-level simulator. Our framework incorporates end-to-end training with channel noise to enhance resilience, incorporates vector quantized variational autoencoders (VQ-VAE) for efficient and robust communication, and utilizes pre-trained encoder-decoder transformers for improved generalization capabilities. Simulations across various communication scenarios reveal that our framework achieves a 50% reduction in transmission size while demonstrating substantial generalization ability and noise robustness under standardized 3GPP channel models. (See the code sketch following this record.)
|
http://arxiv.org/pdf/2402.11656v2
|
[
"Ju-Hyung Lee",
"Dong-Ho Lee",
"Joohan Lee",
"Jay Pujara"
] |
2024-06-28T23:00:45Z
|
2024-02-18T17:27:51Z
|
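For the LM-plus-physical-layer entry above (2402.11656), here is a sketch of the vector-quantization step a VQ-VAE uses to make transmitted representations discrete: each latent is snapped to its nearest codebook entry, with a straight-through gradient. Codebook size and dimensions are illustrative.

```python
import torch

def vector_quantize(z: torch.Tensor, codebook: torch.Tensor):
    """z: (batch, dim) latents; codebook: (K, dim). Returns quantized latents and
    the chosen indices -- the discrete symbols actually sent over the channel."""
    dists = torch.cdist(z, codebook)          # (batch, K) pairwise distances
    idx = dists.argmin(dim=1)
    z_q = codebook[idx]
    return z + (z_q - z).detach(), idx        # straight-through estimator

codebook = torch.randn(256, 64)               # 256 entries -> 8 bits per latent vector
z = torch.randn(32, 64, requires_grad=True)
z_q, indices = vector_quantize(z, codebook)
z_q.sum().backward()                          # gradients still reach the encoder side
```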
2312.02027
|
Stochastic Optimal Control Matching
|
Stochastic optimal control, which has the goal of driving the behavior of noisy systems, is broadly applicable in science, engineering and artificial intelligence. Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models. That is, the control is learned via a least squares problem by trying to fit a matching vector field. The training loss, which is closely connected to the cross-entropy loss, is optimized with respect to both the control function and a family of reparameterization matrices which appear in the matching vector field. The optimization with respect to the reparameterization matrices aims at minimizing the variance of the matching vector field. Experimentally, our algorithm achieves lower error than all the existing IDO techniques for stochastic optimal control for three out of four control problems, in some cases by an order of magnitude. The key idea underlying SOCM is the path-wise reparameterization trick, a novel technique that may be of independent interest. Code at https://github.com/facebookresearch/SOC-matching
|
http://arxiv.org/pdf/2312.02027v4
|
[
"Carles Domingo-Enrich",
"Jiequn Han",
"Brandon Amos",
"Joan Bruna",
"Ricky T. Q. Chen"
] |
2024-06-28T22:37:36Z
|
2023-12-04T16:49:43Z
|
2403.18631
|
First Experiences with the Identification of People at Risk for Diabetes
in Argentina using Machine Learning Techniques
|
Detecting Type 2 Diabetes (T2D) and Prediabetes (PD) is a real challenge for medicine due to the absence of pathogenic symptoms and the lack of known associated risk factors. Even though some machine learning models enable the identification of people at risk, the nature of the condition makes it so that a model suitable for one population may not necessarily be suitable for another. In this article, the development and assessment of predictive models to identify people at risk for T2D and PD specifically in Argentina are discussed. First, the database was thoroughly preprocessed and three specific datasets were generated, considering a compromise between the number of records and the number of available variables. After applying 5 different classification models, the results show very good performance on two of the datasets with some of these models. In particular, RF, DT, and ANN demonstrated great classification power, with good values for the metrics under consideration. Given the lack of this type of tool in Argentina, this work represents the first step towards the development of more sophisticated models.
|
http://arxiv.org/abs/2403.18631v2
|
[
"Enzo Rucci",
"Gonzalo Tittarelli",
"Franco Ronchetti",
"Jorge F. Elgart",
"Laura Lanzarini",
"Juan José Gagliardino"
] |
2024-06-28T22:21:45Z
|
2024-03-27T14:38:02Z
|
2407.00245
|
Learning Closed Signal Flow Graphs
|
We develop a learning algorithm for closed signal flow graphs - a graphical model of signal transducers. The algorithm relies on the correspondence between closed signal flow graphs and weighted finite automata on a singleton alphabet. We demonstrate that this procedure results in a genuine reduction of complexity: our algorithm fares better than existing learning algorithms for weighted automata restricted to the case of a singleton alphabet.
|
http://arxiv.org/pdf/2407.00245v1
|
[
"Ekaterina Piotrovskaya",
"Leo Lobski",
"Fabio Zanasi"
] |
2024-06-28T22:04:36Z
|
2024-06-28T22:04:36Z
|
2405.13390
|
Convergence analysis of kernel learning FBSDE filter
|
Kernel learning forward backward SDE filter is an iterative and adaptive mesh-free approach to solving the nonlinear filtering problem. It builds on the forward backward SDE for the Fokker-Planck equation, which defines the evolving density of the state variable, and employs kernel density estimation (KDE) to approximate that density. This algorithm has shown superior performance to the mainstream particle filter method in both convergence speed and efficiency in solving high-dimensional problems. However, this method has only been shown to converge empirically. In this paper, we present a rigorous analysis to demonstrate its local and global convergence, and provide theoretical support for its empirical results.
|
http://arxiv.org/pdf/2405.13390v3
|
[
"Yunzheng Lyu",
"Feng Bao"
] |
2024-06-28T21:45:11Z
|
2024-05-22T07:02:35Z
|
2406.18783
|
Psychological Profiling in Cybersecurity: A Look at LLMs and
Psycholinguistic Features
|
The increasing sophistication of cyber threats necessitates innovative approaches to cybersecurity. In this paper, we explore the potential of psychological profiling techniques, particularly focusing on the utilization of Large Language Models (LLMs) and psycholinguistic features. We investigate the intersection of psychology and cybersecurity, discussing how LLMs can be employed to analyze textual data for identifying psychological traits of threat actors. We explore the incorporation of psycholinguistic features, such as linguistic patterns and emotional cues, into cybersecurity frameworks. Our research underscores the importance of integrating psychological perspectives into cybersecurity practices to bolster defense mechanisms against evolving threats.
|
http://arxiv.org/pdf/2406.18783v2
|
[
"Jean Marie Tshimula",
"D'Jeff K. Nkashama",
"Jean Tshibangu Muabila",
"René Manassé Galekwa",
"Hugues Kanda",
"Maximilien V. Dialufuma",
"Mbuyi Mukendi Didier",
"Kalala Kalonji",
"Serge Mundele",
"Patience Kinshie Lenye",
"Tighana Wenge Basele",
"Aristarque Ilunga",
"Christian N. Mayemba",
"Nathanaël M. Kasoro",
"Selain K. Kasereka",
"Hardy Mikese",
"Pierre-Martin Tardif",
"Marc Frappier",
"Froduald Kabanza",
"Belkacem Chikhaoui",
"Shengrui Wang",
"Ali Mulenda Sumbu",
"Xavier Ndona",
"Raoul Kienge-Kienge Intudi"
] |
2024-06-28T21:22:56Z
|
2024-06-26T23:04:52Z
|
2407.00236
|
Closed-Form Test Functions for Biophysical Sequence Optimization
Algorithms
|
There is a growing body of work seeking to replicate the success of machine learning (ML) on domains like computer vision (CV) and natural language processing (NLP) to applications involving biophysical data. One of the key ingredients of prior successes in CV and NLP was the broad acceptance of difficult benchmarks that distilled key subproblems into approachable tasks that any junior researcher could investigate, but good benchmarks for biophysical domains are rare. This scarcity is partially due to a narrow focus on benchmarks which simulate biophysical data; we propose instead to carefully abstract biophysical problems into simpler ones with key geometric similarities. In particular we propose a new class of closed-form test functions for biophysical sequence optimization, which we call Ehrlich functions. We provide empirical results demonstrating these functions are interesting objects of study and can be non-trivial to solve with a standard genetic optimization baseline.
|
http://arxiv.org/pdf/2407.00236v1
|
[
"Samuel Stanton",
"Robert Alberstein",
"Nathan Frey",
"Andrew Watkins",
"Kyunghyun Cho"
] |
2024-06-28T21:13:57Z
|
2024-06-28T21:13:57Z
|
2407.00233
|
Methodology to Deploy CNN-Based Computer Vision Models on Immersive
Wearable Devices
|
Convolutional Neural Network (CNN) models often lack the ability to incorporate human input, which can be addressed by Augmented Reality (AR) headsets. However, current AR headsets face limitations in processing power, which has prevented researchers from performing real-time, complex image recognition tasks using CNNs in AR headsets. This paper presents a method to deploy CNN models on AR headsets by training them on computers and transferring the optimized weight matrices to the headset. The approach transforms the image data and CNN layers into a one-dimensional format suitable for the AR platform. We demonstrate this method by training the LeNet-5 CNN model on the MNIST dataset using PyTorch and deploying it on a HoloLens AR headset. The results show that the model maintains an accuracy of approximately 98%, similar to its performance on a computer. This integration of CNN and AR enables real-time image processing on AR headsets, allowing for the incorporation of human input into AI models.
|
http://arxiv.org/pdf/2407.00233v1
|
[
"Kaveh Malek",
"Fernando Moreu"
] |
2024-06-28T21:08:10Z
|
2024-06-28T21:08:10Z
|
2407.00215
|
LLM Critics Help Catch LLM Bugs
|
Reinforcement learning from human feedback (RLHF) is fundamentally limited by the capacity of humans to correctly evaluate model output. To improve human evaluation ability and overcome that limitation this work trains "critic" models that help humans to more accurately evaluate model-written code. These critics are themselves LLMs trained with RLHF to write natural language feedback highlighting problems in code from real-world assistant tasks. On code containing naturally occurring LLM errors model-written critiques are preferred over human critiques in 63% of cases, and human evaluation finds that models catch more bugs than human contractors paid for code review. We further confirm that our fine-tuned LLM critics can successfully identify hundreds of errors in ChatGPT training data rated as "flawless", even though the majority of those tasks are non-code tasks and thus out-of-distribution for the critic model. Critics can have limitations of their own, including hallucinated bugs that could mislead humans into making mistakes they might have otherwise avoided, but human-machine teams of critics and contractors catch similar numbers of bugs to LLM critics while hallucinating less than LLMs alone.
|
http://arxiv.org/pdf/2407.00215v1
|
[
"Nat McAleese",
"Rai Michael Pokorny",
"Juan Felipe Ceron Uribe",
"Evgenia Nitishinskaya",
"Maja Trebacz",
"Jan Leike"
] |
2024-06-28T19:53:17Z
|
2024-06-28T19:53:17Z
|
2406.16955
|
SRViT: Vision Transformers for Estimating Radar Reflectivity from
Satellite Observations at Scale
|
We introduce a transformer-based neural network to generate high-resolution (3km) synthetic radar reflectivity fields at scale from geostationary satellite imagery. This work aims to enhance short-term convective-scale forecasts of high-impact weather events and aid in data assimilation for numerical weather prediction over the United States. Compared to convolutional approaches, which have limited receptive fields, our results show improved sharpness and higher accuracy across various composite reflectivity thresholds. Additional case studies over specific atmospheric phenomena support our quantitative findings, while a novel attribution method is introduced to guide domain experts in understanding model outputs.
|
http://arxiv.org/pdf/2406.16955v2
|
[
"Jason Stock",
"Kyle Hilburn",
"Imme Ebert-Uphoff",
"Charles Anderson"
] |
2024-06-28T19:51:25Z
|
2024-06-20T20:40:50Z
|
2406.17831
|
Empirical Bayes for Dynamic Bayesian Networks Using Generalized
Variational Inference
|
In this work, we demonstrate the Empirical Bayes approach to learning a Dynamic Bayesian Network. By starting with several point estimates of structure and weights, we can use a data-driven prior to subsequently obtain a model to quantify uncertainty. This approach uses a recent development of Generalized Variational Inference, and indicates the potential of sampling the uncertainty of a mixture of DAG structures as well as a parameter posterior.
|
http://arxiv.org/pdf/2406.17831v2
|
[
"Vyacheslav Kungurtsev",
"Apaar",
"Aarya Khandelwal",
"Parth Sandeep Rastogi",
"Bapi Chatterjee",
"Jakub Mareček"
] |
2024-06-28T19:40:19Z
|
2024-06-25T14:34:51Z
|
2405.15052
|
Revisiting MoE and Dense Speed-Accuracy Comparisons for LLM Training
|
Mixture-of-Experts (MoE) enjoys performance gains by increasing model capacity while keeping computation cost constant. When comparing MoE to dense models, prior work typically adopts the following setting: 1) use FLOPs or activated parameters as a measure of model complexity; 2) train all models to the same number of tokens. We argue that this setting favors MoE, as FLOPs and activated parameters do not accurately measure the communication overhead in sparse layers, leading to a larger actual training budget for MoE. In this work, we revisit the settings by adopting step time as a more accurate measure of model complexity, and by determining the total compute budget under the Chinchilla compute-optimal settings. To efficiently run MoE on modern accelerators, we adopt a 3D sharding method that keeps the dense-to-MoE step time increase within a healthy range. We evaluate MoE and dense LLMs on a set of nine 0-shot and two 1-shot English tasks, as well as MMLU 5-shot and GSM8K 8-shot, across three model scales at 6.4B, 12.6B, and 29.6B. Experimental results show that even under these settings, MoE consistently outperforms dense LLMs on the speed-accuracy trade-off curve with meaningful gaps. Our full model implementation and sharding strategy have been released at https://github.com/apple/axlearn
|
http://arxiv.org/pdf/2405.15052v2
|
[
"Xianzhi Du",
"Tom Gunter",
"Xiang Kong",
"Mark Lee",
"Zirui Wang",
"Aonan Zhang",
"Nan Du",
"Ruoming Pang"
] |
2024-06-28T19:39:45Z
|
2024-05-23T21:00:53Z
|
2402.16795
|
If in a Crowdsourced Data Annotation Pipeline, a GPT-4
|
Recent studies indicated that GPT-4 outperforms online crowd workers in data labeling accuracy, notably workers from Amazon Mechanical Turk (MTurk). However, these studies were criticized for deviating from standard crowdsourcing practices and for emphasizing individual workers' performance over the whole data-annotation process. This paper compared GPT-4 with an ethical and well-executed MTurk pipeline, in which 415 workers labeled 3,177 sentence segments from 200 scholarly articles using the CODA-19 scheme. Two worker interfaces yielded 127,080 labels, which were then used to infer the final labels through eight label-aggregation algorithms. Our evaluation showed that despite best practices, the MTurk pipeline's highest accuracy was 81.5%, whereas GPT-4 achieved 83.6%. Interestingly, when combining GPT-4's labels with crowd labels collected via an advanced worker interface for aggregation, 2 of the 8 algorithms achieved an even higher accuracy (87.5% and 87.0%). Further analysis suggested that when the crowd's and GPT-4's labeling strengths are complementary, aggregating them can increase labeling accuracy.
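As a point of reference for the label-aggregation step, the simplest of the eight algorithms compared here is plain majority voting; the sketch below is a minimal illustration (the function name and data layout are hypothetical, not taken from the paper):

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate crowd labels per item by majority vote.

    labels_per_item: dict mapping item id -> list of worker labels.
    Ties are broken by whichever label Counter encounters first.
    """
    return {
        item: Counter(labels).most_common(1)[0][0]
        for item, labels in labels_per_item.items()
    }

# Hypothetical CODA-19-style labels from three workers per segment.
crowd = {
    "seg-001": ["finding", "finding", "method"],
    "seg-002": ["background", "purpose", "background"],
}
print(majority_vote(crowd))  # {'seg-001': 'finding', 'seg-002': 'background'}
```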
|
http://arxiv.org/abs/2402.16795v2
|
[
"Zeyu He",
"Chieh-Yang Huang",
"Chien-Kuang Cornelia Ding",
"Shaurya Rohatgi",
"Ting-Hao 'Kenneth' Huang"
] |
2024-06-28T19:33:48Z
|
2024-02-26T18:08:52Z
|
2401.13054
|
Frustrated Random Walks: A Fast Method to Compute Node Distances on
Hypergraphs
|
A hypergraph is a generalization of a graph that arises naturally when attribute-sharing among entities is considered. Compared to graphs, hypergraphs have the distinct advantage that they contain explicit communities and are more convenient to manipulate. An open problem in hypergraph research is how to accurately and efficiently calculate node distances on hypergraphs. Estimating node distances enables us to find a node's nearest neighbors, which has important applications in areas such as recommender systems and targeted ads. In this paper, we propose using expected hitting times of random walks to compute hypergraph node distances. We note that simple random walks (SRW) cannot accurately compute node distances on highly complex real-world hypergraphs, which motivates us to introduce frustrated random walks (FRW) for this task. We further benchmark our method against DeepWalk, and show that while the latter can achieve comparable results, FRW has a distinct computational advantage in cases where the number of targets is fairly small. For such cases, we show that FRW runs in significantly less time than DeepWalk. Finally, we analyze the time complexity of our method, and show that for large and sparse hypergraphs, the complexity is approximately linear, rendering it superior to the DeepWalk alternative.
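For intuition about the hitting-time machinery, the expected hitting time of a simple random walk to a target node can be computed exactly on a small graph via the standard absorbing-chain identity $t = (I - Q)^{-1}\mathbf{1}$; the sketch below implements only this SRW baseline, not the frustrated-walk modification:

```python
import numpy as np

def expected_hitting_times(P, target):
    """Expected steps for a simple random walk to first hit `target`.

    P: (n, n) row-stochastic transition matrix.
    Uses t = (I - Q)^{-1} 1, where Q is P restricted to non-target states.
    """
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    t_free = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    t = np.zeros(n)
    t[keep] = t_free
    return t

# Path graph 0 - 1 - 2: from node 0, the walk hits node 2 in 4 expected steps.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(expected_hitting_times(P, target=2))  # [4. 3. 0.]
```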
|
http://arxiv.org/pdf/2401.13054v2
|
[
"Enzhi Li",
"Scott Nickleach",
"Bilal Fadlallah"
] |
2024-06-28T19:18:24Z
|
2024-01-23T19:26:24Z
|
2307.10438
|
Uncertainty Quantification for Molecular Property Predictions with Graph
Neural Architecture Search
|
Graph Neural Networks (GNNs) have emerged as a prominent class of data-driven methods for molecular property prediction. However, a key limitation of typical GNN models is their inability to quantify uncertainties in the predictions. This capability is crucial for ensuring the trustworthy use and deployment of models in downstream tasks. To that end, we introduce AutoGNNUQ, an automated uncertainty quantification (UQ) approach for molecular property prediction. AutoGNNUQ leverages architecture search to generate an ensemble of high-performing GNNs, enabling the estimation of predictive uncertainties. Our approach employs variance decomposition to separate data (aleatoric) and model (epistemic) uncertainties, providing valuable insights for reducing them. In our computational experiments, we demonstrate that AutoGNNUQ outperforms existing UQ methods in terms of both prediction accuracy and UQ performance on multiple benchmark datasets. Additionally, we utilize t-SNE visualization to explore correlations between molecular features and uncertainty, offering insight for dataset improvement. AutoGNNUQ has broad applicability in domains such as drug discovery and materials science, where accurate uncertainty quantification is crucial for decision-making.
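The aleatoric/epistemic split referenced here follows the usual law-of-total-variance decomposition over an ensemble; below is a minimal sketch with placeholder arrays standing in for the ensemble's per-model predictions:

```python
import numpy as np

# Hypothetical outputs of an ensemble of M GNNs on N molecules:
# each model predicts a mean and a variance per molecule.
rng = np.random.default_rng(0)
M, N = 8, 5
means = rng.normal(size=(M, N))             # per-model predicted means
variances = rng.uniform(0.1, 0.5, (M, N))   # per-model predicted variances

aleatoric = variances.mean(axis=0)  # E_m[sigma_m^2]: irreducible data noise
epistemic = means.var(axis=0)       # Var_m[mu_m]: model disagreement
total = aleatoric + epistemic       # law of total variance

print(total)
```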
|
http://arxiv.org/abs/2307.10438v3
|
[
"Shengli Jiang",
"Shiyi Qin",
"Reid C. Van Lehn",
"Prasanna Balaprakash",
"Victor M. Zavala"
] |
2024-06-28T19:10:16Z
|
2023-07-19T20:03:42Z
|
2407.00197
|
Tradeoffs When Considering Deep Reinforcement Learning for Contingency
Management in Advanced Air Mobility
|
Air transportation is undergoing a rapid evolution globally with the introduction of Advanced Air Mobility (AAM), and with it come novel challenges and opportunities for transforming aviation. As AAM operations introduce increasing heterogeneity in vehicle capabilities and density, increased levels of automation are likely necessary to achieve operational safety and efficiency goals. This paper focuses on one example where increased automation has been suggested. Autonomous operations will need contingency management systems that can monitor evolving risk across a span of interrelated (or interdependent) hazards and, if necessary, execute appropriate control interventions via supervised or automated decision making. Accommodating this complex environment may require automated functions (autonomy) that apply artificial intelligence (AI) techniques capable of adapting and responding to a quickly changing environment. This paper explores the use of Deep Reinforcement Learning (DRL), which has shown promising performance in complex and high-dimensional environments where the objective can be constructed as a sequential decision-making problem. An extension of a prior formulation of the contingency management problem as a Markov Decision Process (MDP) is presented and uses a DRL framework to train agents that mitigate hazards present in the simulation environment. A comparison of these learning-based agents and classical techniques is presented in terms of their performance, verification difficulties, and development process.
|
http://arxiv.org/pdf/2407.00197v1
|
[
"Luis E. Alvarez",
"Marc W. Brittain",
"Steven D. Young"
] |
2024-06-28T19:09:55Z
|
2024-06-28T19:09:55Z
|
2406.13023
|
Stackelberg Games with $k$-Submodular Function under Distributional
Risk-Receptiveness and Robustness
|
We study submodular optimization in an adversarial context, applicable to machine learning problems such as feature selection using data susceptible to uncertainties and attacks. We focus on Stackelberg games between an attacker (or interdictor) and a defender, where the attacker aims to minimize the defender's objective of maximizing a $k$-submodular function. We allow uncertainties arising from the success of attacks and inherent data noise, and address challenges due to incomplete knowledge of the probability distribution of random parameters. Specifically, we introduce the Distributionally Risk-Averse $k$-Submodular Interdiction Problem (DRA $k$-SIP) and the Distributionally Risk-Receptive $k$-Submodular Interdiction Problem (DRR $k$-SIP), along with finitely convergent exact algorithms for solving them. The DRA $k$-SIP solution allows a risk-averse interdictor to develop robust strategies for real-world uncertainties. Conversely, the DRR $k$-SIP solution suggests aggressive tactics for attackers willing to embrace (distributional) risk to inflict maximum damage, identifying critical vulnerable components, which can be used for the defender's defensive strategies. The optimal values derived from both DRA $k$-SIP and DRR $k$-SIP offer a confidence-interval-like range for the expected value of the defender's objective function, capturing distributional ambiguity. We conduct computational experiments using instances of feature selection and sensor placement problems, with Wisconsin breast cancer data and synthetic data, respectively.
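Schematically (a simplified rendering for orientation, not the paper's exact formulation), the risk-averse variant is a min-max problem over an ambiguity set $\mathcal{P}$ of distributions:

```latex
% Simplified schematic of DRA k-SIP: the interdictor picks an attack x,
% nature picks a worst-case distribution P from the ambiguity set \mathcal{P},
% and the defender then maximizes the k-submodular objective f over sets S.
\min_{x \in X} \; \max_{P \in \mathcal{P}} \;
  \mathbb{E}_{\xi \sim P}\!\left[ \, \max_{S \in \mathcal{S}} f(S; \xi, x) \right]
```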
|
http://arxiv.org/pdf/2406.13023v3
|
[
"Seonghun Park",
"Manish Bansal"
] |
2024-06-28T19:08:35Z
|
2024-06-18T19:30:46Z
|
2407.00186
|
DCSM 2.0: Deep Conditional Shape Models for Data Efficient Segmentation
|
Segmentation is often the first step in many medical image analysis workflows. Deep learning approaches, while giving state-of-the-art accuracies, are data intensive and do not scale well to low data regimes. We introduce Deep Conditional Shape Models 2.0, which uses an edge detector, along with an implicit shape function conditioned on edge maps, to leverage cross-modality shape information. The shape function is trained exclusively on a source domain (contrasted CT) and applied to the target domain of interest (3D echocardiography). We demonstrate data efficiency in the target domain by varying the amount of training data used in the edge detection stage. We observe that DCSM 2.0 outperforms the baseline at all data levels in terms of Hausdorff distance, while using 50% or less of the training data in terms of average mesh distance, and while using 10% or less of the data in terms of Dice coefficient. The method scales well to low data regimes, with gains of up to 5% in Dice coefficient, 2.58 mm in average surface distance, and 21.02 mm in Hausdorff distance when using just 2% (22 volumes) of the training data.
|
http://arxiv.org/pdf/2407.00186v1
|
[
"Athira J Jacob",
"Puneet Sharma",
"Daniel Rueckert"
] |
2024-06-28T18:52:11Z
|
2024-06-28T18:52:11Z
|
2403.17983
|
Is Watermarking LLM-Generated Code Robust?
|
We present the first study of the robustness of existing watermarking techniques on Python code generated by large language models. Although existing works showed that watermarking can be robust for natural language, we show that it is easy to remove these watermarks on code by semantic-preserving transformations.
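One example of the semantic-preserving transformations that defeat such watermarks is consistent identifier renaming, which changes the token sequence without changing behavior; a minimal sketch using Python's ast module (the input snippet is hypothetical):

```python
import ast

class RenameVars(ast.NodeTransformer):
    """Consistently rename variable names while preserving semantics."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

src = "total = 0\nfor x in items:\n    total = total + x\n"
tree = ast.parse(src)
tree = RenameVars({"total": "acc", "x": "item"}).visit(tree)
# Same program behavior, different token sequence -- a watermark keyed to
# the original token statistics may no longer be detectable.
print(ast.unparse(tree))
```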
|
http://arxiv.org/pdf/2403.17983v2
|
[
"Tarun Suresh",
"Shubham Ugare",
"Gagandeep Singh",
"Sasa Misailovic"
] |
2024-06-28T18:35:22Z
|
2024-03-24T21:41:29Z
|
2407.00176
|
The impact of model size on catastrophic forgetting in Online Continual
Learning
|
This study investigates the impact of model size on Online Continual Learning performance, with a focus on catastrophic forgetting. Employing ResNet architectures of varying sizes, the research examines how network depth and width affect model performance in class-incremental learning using the SplitCIFAR-10 dataset. Key findings reveal that larger models do not guarantee better Continual Learning performance; in fact, they often struggle more in adapting to new tasks, particularly in online settings. These results challenge the notion that larger models inherently mitigate catastrophic forgetting, highlighting the nuanced relationship between model size and Continual Learning efficacy. This study contributes to a deeper understanding of model scalability and its practical implications in Continual Learning scenarios.
|
http://arxiv.org/pdf/2407.00176v1
|
[
"Eunhae Lee"
] |
2024-06-28T18:29:51Z
|
2024-06-28T18:29:51Z
|
2407.00175
|
Permutation invariant multi-output Gaussian Processes for drug
combination prediction in cancer
|
Dose-response prediction in cancer is an active application field in machine learning. Using large libraries of in-vitro drug sensitivity screens, the goal is to develop accurate predictive models that can be used to guide experimental design or inform treatment decisions. Building on previous work that makes use of permutation invariant multi-output Gaussian Processes in the context of dose-response prediction for drug combinations, we develop a variational approximation to these models. The variational approximation enables a more scalable model that provides uncertainty quantification and naturally handles missing data. Furthermore, we propose using a deep generative model to encode the chemical space in a continuous manner, enabling prediction for new drugs and new combinations. We demonstrate the performance of our model in a simple setting using a high-throughput dataset and show that the model is able to efficiently borrow information across outputs.
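The permutation invariance at the heart of these models can be imposed directly on a kernel over drug pairs by averaging the base kernel over the two argument orders; a toy sketch with an RBF base kernel and made-up feature vectors:

```python
import numpy as np

def rbf(u, v, ell=1.0):
    return np.exp(-np.sum((u - v) ** 2) / (2 * ell ** 2))

def pair_kernel(a, b, a2, b2):
    """Base kernel on ordered drug pairs: RBF on concatenated features."""
    return rbf(np.concatenate([a, b]), np.concatenate([a2, b2]))

def sym_pair_kernel(a, b, a2, b2):
    """Permutation-invariant kernel: average over swapping drug order,
    so predictions cannot depend on which drug is listed first."""
    return 0.5 * (pair_kernel(a, b, a2, b2) + pair_kernel(a, b, b2, a2))

drugA, drugB = np.array([0.1, 0.9]), np.array([0.7, 0.2])
print(np.isclose(sym_pair_kernel(drugA, drugB, drugB, drugA),
                 sym_pair_kernel(drugA, drugB, drugA, drugB)))  # True
```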
|
http://arxiv.org/pdf/2407.00175v1
|
[
"Leiv Rønneberg",
"Vidhi Lalchand",
"Paul D. W. Kirk"
] |
2024-06-28T18:28:38Z
|
2024-06-28T18:28:38Z
|
2407.00170
|
Dataset Representativeness and Downstream Task Fairness
|
Our society collects data on people for a wide range of applications, from building a census for policy evaluation to running meaningful clinical trials. To collect data, we typically sample individuals with the goal of accurately representing a population of interest. However, current sampling processes often collect data opportunistically from data sources, which can lead to datasets that are biased and not representative, i.e., the collected dataset does not accurately reflect the distribution of demographics of the true population. This is a concern because subgroups within the population can be under- or over-represented in a dataset, which may harm generalizability and lead to an unequal distribution of benefits and harms from downstream tasks that use such datasets (e.g., algorithmic bias in medical decision-making algorithms). In this paper, we assess the relationship between dataset representativeness and group-fairness of classifiers trained on that dataset. We demonstrate that there is a natural tension between dataset representativeness and classifier fairness; empirically we observe that training datasets with better representativeness can frequently result in classifiers with higher rates of unfairness. We provide some intuition as to why this occurs via a set of theoretical results in the case of univariate classifiers. We also find that over-sampling underrepresented groups can result in classifiers which exhibit greater bias to those groups. Lastly, we observe that fairness-aware sampling strategies (i.e., those which are specifically designed to select data with high downstream fairness) will often over-sample members of majority groups. These results demonstrate that the relationship between dataset representativeness and downstream classifier fairness is complex; balancing these two quantities requires special care from both model- and dataset-designers.
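One simple way to quantify representativeness in the sense used here is the total-variation distance between the sample's and the population's subgroup proportions; a toy sketch with made-up numbers:

```python
import numpy as np

def tv_distance(p, q):
    """Total-variation distance between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()

population = [0.50, 0.30, 0.20]  # true subgroup proportions
dataset    = [0.70, 0.20, 0.10]  # opportunistically collected sample
print(tv_distance(population, dataset))  # 0.2 -> smaller is more representative
```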
|
http://arxiv.org/pdf/2407.00170v1
|
[
"Victor Borza",
"Andrew Estornell",
"Chien-Ju Ho",
"Bradley Malin",
"Yevgeniy Vorobeychik"
] |
2024-06-28T18:11:16Z
|
2024-06-28T18:11:16Z
|
2311.09735
|
GEO: Generative Engine Optimization
|
The advent of large language models (LLMs) has ushered in a new paradigm of search engines that use generative models to gather and summarize information to answer user queries. This emerging technology, which we formalize under the unified framework of generative engines (GEs), can generate accurate and personalized responses, rapidly replacing traditional search engines like Google and Bing. Generative engines typically satisfy queries by synthesizing information from multiple sources and summarizing it using LLMs. While this shift significantly improves user utility and generative search engine traffic, it poses a huge challenge for the third stakeholder -- website and content creators. Given the black-box and fast-moving nature of generative engines, content creators have little to no control over when and how their content is displayed. With generative engines here to stay, we must ensure the creator economy is not disadvantaged. To address this, we introduce Generative Engine Optimization (GEO), the first novel paradigm to aid content creators in improving their content visibility in generative engine responses through a flexible black-box optimization framework for optimizing and defining visibility metrics. We facilitate systematic evaluation by introducing GEO-bench, a large-scale benchmark of diverse user queries across multiple domains, along with relevant web sources to answer these queries. Through rigorous evaluation, we demonstrate that GEO can boost visibility by up to 40% in generative engine responses. Moreover, we show the efficacy of these strategies varies across domains, underscoring the need for domain-specific optimization methods. Our work opens a new frontier in information discovery systems, with profound implications for both developers of generative engines and content creators.
|
http://arxiv.org/pdf/2311.09735v3
|
[
"Pranjal Aggarwal",
"Vishvak Murahari",
"Tanmay Rajpurohit",
"Ashwin Kalyan",
"Karthik Narasimhan",
"Ameet Deshpande"
] |
2024-06-28T17:59:26Z
|
2023-11-16T10:06:09Z
|
2310.03812
|
Fishnets: Information-Optimal, Scalable Aggregation for Sets and Graphs
|
Set-based learning is an essential component of modern deep learning and network science. Graph Neural Networks (GNNs) and their edge-free counterparts, Deepsets, have proven remarkably useful on ragged and topologically challenging datasets. The key to learning informative embeddings for set members is a specified aggregation function, usually a sum, max, or mean. We propose Fishnets, an aggregation strategy for learning information-optimal embeddings for sets of data for both Bayesian inference and graph aggregation. We demonstrate that i) Fishnets neural summaries can be scaled optimally to an arbitrary number of data objects, ii) Fishnets aggregations are robust to changes in data distribution, unlike standard Deepsets, iii) Fishnets saturate Bayesian information content and extend to regimes where MCMC techniques fail, and iv) Fishnets can be used as a drop-in aggregation scheme within GNNs. We show that by adopting a Fishnets aggregation scheme for message passing, GNNs can achieve state-of-the-art performance versus architecture size on ogbn-protein data over existing benchmarks, with a fraction of the learnable parameters and faster training time.
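For intuition about information-optimal aggregation, the classical analogue is inverse-variance weighting: summing per-member score and Fisher-information contributions yields the minimum-variance estimate, and a plain sum extends to any set size. A toy Gaussian sketch (not the Fishnets architecture itself):

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true = 2.0
sigma = rng.uniform(0.5, 3.0, size=100)  # per-measurement noise levels
x = rng.normal(mu_true, sigma)           # one noisy measurement each

# Per-member sufficient statistics: score contribution and Fisher information.
fisher = 1.0 / sigma**2
score_stat = x / sigma**2

# Information-optimal aggregation is a sum over set members, so it scales
# to arbitrary set sizes without retraining.
mu_hat = score_stat.sum() / fisher.sum()
print(mu_hat)  # close to 2.0, with variance 1 / fisher.sum()
```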
|
http://arxiv.org/pdf/2310.03812v2
|
[
"T. Lucas Makinen",
"Justin Alsing",
"Benjamin D. Wandelt"
] |
2024-06-28T17:59:14Z
|
2023-10-05T18:01:04Z
|
2406.20095
|
LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
|
Large Language Models (LLMs) equipped with extensive world knowledge and strong reasoning skills can tackle diverse tasks across domains, often by posing them as conversation-style instruction-response pairs. In this paper, we propose LLaRA: Large Language and Robotics Assistant, a framework that formulates robot action policy as conversations and provides improved responses when trained with auxiliary data that complements policy learning. LLMs with visual inputs, i.e., Vision Language Models (VLMs), have the capacity to process state information as visual-textual prompts and generate optimal policy decisions in text. To train such action policy VLMs, we first introduce an automated pipeline to generate diverse high-quality robotics instruction data from existing behavior cloning data. A VLM finetuned on the resulting collection of datasets, based on a conversation-style formulation tailored for robotics tasks, can generate meaningful robot action policy decisions. Our experiments across multiple simulated and real-world environments demonstrate the state-of-the-art performance of the proposed LLaRA framework. The code, datasets, and pretrained models are available at https://github.com/LostXine/LLaRA.
|
http://arxiv.org/pdf/2406.20095v1
|
[
"Xiang Li",
"Cristina Mata",
"Jongwoo Park",
"Kumara Kahatapitiya",
"Yoo Sung Jang",
"Jinghuan Shang",
"Kanchana Ranasinghe",
"Ryan Burgert",
"Mu Cai",
"Yong Jae Lee",
"Michael S. Ryoo"
] |
2024-06-28T17:59:12Z
|
2024-06-28T17:59:12Z
|
2406.20094
|
Scaling Synthetic Data Creation with 1,000,000,000 Personas
|
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub -- a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (~13% of the world's total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub's use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs and tools (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development.
|
http://arxiv.org/pdf/2406.20094v1
|
[
"Xin Chan",
"Xiaoyang Wang",
"Dian Yu",
"Haitao Mi",
"Dong Yu"
] |
2024-06-28T17:59:01Z
|
2024-06-28T17:59:01Z
|
2406.12909
|
Scalable Training of Graph Foundation Models for Atomistic Materials
Modeling: A Case Study with HydraGNN
|
We present our work on developing and training scalable graph foundation models (GFM) using HydraGNN, a multi-headed graph convolutional neural network architecture. HydraGNN expands the boundaries of graph neural networks (GNNs) in both training scale and data diversity. It abstracts over message passing algorithms, allowing both reproduction of and comparison across algorithmic innovations that define convolution in GNNs. This work discusses a series of optimizations that have allowed scaling up the GFM training to tens of thousands of GPUs on datasets that consist of hundreds of millions of graphs. Our GFMs use multi-task learning (MTL) to simultaneously learn graph-level and node-level properties of atomistic structures, such as the total energy and atomic forces. Using over 150 million atomistic structures for training, we illustrate the performance of our approach along with the lessons learned on two United States Department of Energy (US-DOE) supercomputers, namely the Perlmutter petascale system at the National Energy Research Scientific Computing Center and the Frontier exascale system at Oak Ridge National Laboratory. The HydraGNN architecture enables the GFM to achieve near-linear strong scaling performance using more than 2,000 GPUs on Perlmutter and 16,000 GPUs on Frontier. Hyperparameter optimization (HPO) was performed on over 64,000 GPUs on Frontier to select GFM architectures with high accuracy. Early stopping was applied on each GFM architecture for energy awareness in performing such an extreme-scale task. The training of an ensemble of highest-ranked GFM architectures continued until convergence to establish uncertainty quantification (UQ) capabilities with ensemble learning. Our contribution opens the door for rapidly developing, training, and deploying GFMs using large-scale computational resources to enable AI-accelerated materials discovery and design.
|
http://arxiv.org/pdf/2406.12909v2
|
[
"Massimiliano Lupo Pasini",
"Jong Youl Choi",
"Kshitij Mehta",
"Pei Zhang",
"David Rogers",
"Jonghyun Bae",
"Khaled Z. Ibrahim",
"Ashwin M. Aji",
"Karl W. Schulz",
"Jorda Polo",
"Prasanna Balaprakash"
] |
2024-06-28T17:58:27Z
|
2024-06-12T21:21:42Z
|
2407.00148
|
Localizing Anomalies via Multiscale Score Matching Analysis
|
Anomaly detection and localization in medical imaging remain critical challenges in healthcare. This paper introduces Spatial-MSMA (Multiscale Score Matching Analysis), a novel unsupervised method for anomaly localization in volumetric brain MRIs. Building upon the MSMA framework, our approach incorporates spatial information and conditional likelihoods to enhance anomaly detection capabilities. We employ a flexible normalizing flow model conditioned on patch positions and global image features to estimate patch-wise anomaly scores. The method is evaluated on a dataset of 1,650 T1- and T2-weighted brain MRIs from typically developing children, with simulated lesions added to the test set. Spatial-MSMA significantly outperforms existing methods, including reconstruction-based, generative-based, and interpretation-based approaches, in lesion detection and segmentation tasks. Our model achieves superior performance in both distance-based metrics (99th percentile Hausdorff Distance: $7.05 \pm 0.61$, Mean Surface Distance: $2.10 \pm 0.43$) and component-wise metrics (True Positive Rate: $0.83 \pm 0.01$, Positive Predictive Value: $0.96 \pm 0.01$). These results demonstrate Spatial-MSMA's potential for accurate and interpretable anomaly localization in medical imaging, with implications for improved diagnosis and treatment planning in clinical settings. Our code is available at https://github.com/ahsanMah/sade/.
|
http://arxiv.org/pdf/2407.00148v1
|
[
"Ahsan Mahmood",
"Junier Oliva",
"Martin Styner"
] |
2024-06-28T17:57:12Z
|
2024-06-28T17:57:12Z
|
2406.20087
|
ProgressGym: Alignment with a Millennium of Moral Progress
|
Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale. We introduce progress alignment as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots. To empower research in progress alignment, we introduce ProgressGym, an experimental framework allowing the learning of moral progress mechanics from history, in order to facilitate future progress in real-world moral decisions. Leveraging 9 centuries of historical text and 18 historical LLMs, ProgressGym enables codification of real-world progress alignment challenges into concrete benchmarks. Specifically, we introduce three core challenges: tracking evolving values (PG-Follow), preemptively anticipating moral progress (PG-Predict), and regulating the feedback loop between human and AI value shifts (PG-Coevolve). Alignment methods without a temporal dimension are inapplicable to these tasks. In response, we present lifelong and extrapolative algorithms as baseline methods of progress alignment, and build an open leaderboard soliciting novel algorithms and challenges. The framework and the leaderboard are available at https://github.com/PKU-Alignment/ProgressGym and https://huggingface.co/spaces/PKU-Alignment/ProgressGym-LeaderBoard respectively.
|
http://arxiv.org/pdf/2406.20087v1
|
[
"Tianyi Qiu",
"Yang Zhang",
"Xuchuan Huang",
"Jasmine Xinze Li",
"Jiaming Ji",
"Yaodong Yang"
] |
2024-06-28T17:55:24Z
|
2024-06-28T17:55:24Z
|
2406.20086
|
Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs
|
LLMs process text as sequences of tokens that roughly correspond to words, where less common words are represented by multiple tokens. However, individual tokens are often semantically unrelated to the meanings of the words/concepts they comprise. For example, Llama-2-7b's tokenizer splits the word "northeastern" into the tokens ['_n', 'ort', 'he', 'astern'], none of which correspond to semantically meaningful units like "north" or "east." Similarly, the overall meanings of named entities like "Neil Young" and multi-word expressions like "break a leg" cannot be directly inferred from their constituent tokens. Mechanistically, how do LLMs convert such arbitrary groups of tokens into useful higher-level representations? In this work, we find that last token representations of named entities and multi-token words exhibit a pronounced "erasure" effect, where information about previous and current tokens is rapidly forgotten in early layers. Using this observation, we propose a method to "read out" the implicit vocabulary of an autoregressive LLM by examining differences in token representations across layers, and present results of this method for Llama-2-7b and Llama-3-8B. To our knowledge, this is the first attempt to probe the implicit vocabulary of an LLM.
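The erasure effect can be probed with a simple layer-wise similarity profile on the last-token hidden states; the sketch below runs on a placeholder hidden-state array rather than an actual model, and a sharp early-layer drop would be the signature described here:

```python
import numpy as np

def erasure_profile(hidden_states):
    """Cosine similarity between each layer's last-token representation
    and the layer-0 (embedding) representation of the same token.

    hidden_states: (num_layers + 1, hidden_dim) array for one token position.
    A rapid drop in early layers is the "erasure" signature.
    """
    h0 = hidden_states[0]
    return [
        float(h @ h0 / (np.linalg.norm(h) * np.linalg.norm(h0)))
        for h in hidden_states
    ]

# Hypothetical 4-layer trace for the last token of a multi-token entity.
rng = np.random.default_rng(0)
states = rng.normal(size=(5, 16))
print(erasure_profile(states))
```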
|
http://arxiv.org/pdf/2406.20086v1
|
[
"Sheridan Feucht",
"David Atkinson",
"Byron Wallace",
"David Bau"
] |
2024-06-28T17:54:47Z
|
2024-06-28T17:54:47Z
|
2406.20081
|
Segment Anything without Supervision
|
The Segment Anything Model (SAM) requires labor-intensive data labeling. We present Unsupervised SAM (UnSAM) for promptable and automatic whole-image segmentation that does not require human annotations. UnSAM utilizes a divide-and-conquer strategy to "discover" the hierarchical structure of visual scenes. We first leverage top-down clustering methods to partition an unlabeled image into instance/semantic level segments. For all pixels within a segment, a bottom-up clustering method is employed to iteratively merge them into larger groups, thereby forming a hierarchical structure. These unsupervised multi-granular masks are then utilized to supervise model training. Evaluated across seven popular datasets, UnSAM achieves competitive results with its supervised counterpart SAM, and surpasses the previous state-of-the-art in unsupervised segmentation by 11% in terms of AR. Moreover, we show that supervised SAM can also benefit from our self-supervised labels. By integrating our unsupervised pseudo masks into SA-1B's ground-truth masks and training UnSAM with only 1% of SA-1B, a lightly semi-supervised UnSAM can often segment entities overlooked by supervised SAM, exceeding SAM's AR by over 6.7% and AP by 3.9% on SA-1B.
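The divide-and-conquer strategy can be miniaturized on raw pixel features: a top-down partition followed by bottom-up merging of the resulting segments yields multi-granular masks. The sketch below, using scikit-learn on a random toy image, is only a loose analogue of UnSAM's pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))  # toy RGB image
pixels = image.reshape(-1, 3)

# Top-down: partition the image into coarse instance/semantic-level segments.
coarse = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pixels)

# Bottom-up: merge segments (represented by their mean color) into a hierarchy.
centers = np.stack([pixels[coarse == c].mean(axis=0) for c in range(8)])
merged = AgglomerativeClustering(n_clusters=3).fit_predict(centers)

# Multi-granular masks: each pixel now has a fine and a coarse group label.
fine_masks = coarse.reshape(32, 32)
coarse_masks = merged[coarse].reshape(32, 32)
print(fine_masks.shape, coarse_masks.shape)
```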
|
http://arxiv.org/pdf/2406.20081v1
|
[
"XuDong Wang",
"Jingfeng Yang",
"Trevor Darrell"
] |
2024-06-28T17:47:32Z
|
2024-06-28T17:47:32Z
|
2402.00093
|
ChIRAAG: ChatGPT Informed Rapid and Automated Assertion Generation
|
System Verilog Assertion (SVA) formulation -- a critical yet complex task -- is a prerequisite in the Assertion Based Verification (ABV) process. Traditionally, SVA formulation involves expert-driven interpretation of specifications, which is time-consuming and prone to human error. Recently, LLM-informed automatic assertion generation has been gaining interest. We designed a novel framework called ChIRAAG, based on OpenAI GPT-4, to generate SVAs from natural language specifications of a design. ChIRAAG constitutes the systematic breakdown of design specifications into a standardized format, and then generates assertions from the formatted specifications using an LLM. Furthermore, we used a few test cases to validate the LLM-generated assertions. Automatic feedback of log messages from the simulation tool to the LLM ensures that the framework can generate correct SVAs. In our experiments, only 27% of LLM-generated raw assertions had errors, which were rectified in a few iterations based on the simulation log. Our results on OpenTitan designs show that LLMs can streamline and assist engineers in the assertion generation process, reshaping verification workflows.
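The simulation-feedback loop described here has a simple control structure; in the sketch below, query_llm and run_simulation are hypothetical stand-ins for the GPT-4 call and the simulation tool, not APIs from the paper:

```python
def generate_assertion(spec, query_llm, run_simulation, max_iters=5):
    """Iteratively refine an LLM-generated SVA using simulator feedback.

    query_llm(prompt) -> str and run_simulation(sva) -> (bool, str) are
    hypothetical stand-ins for the LLM API and the simulation tool.
    """
    prompt = f"Write a SystemVerilog assertion for this spec:\n{spec}"
    sva = query_llm(prompt)
    for _ in range(max_iters):
        passed, log = run_simulation(sva)
        if passed:
            return sva
        # Feed the simulator's log back to the LLM to rectify the assertion.
        prompt = (f"The assertion below failed simulation.\n"
                  f"Assertion:\n{sva}\nSimulator log:\n{log}\nFix it.")
        sva = query_llm(prompt)
    return sva
```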
|
http://arxiv.org/pdf/2402.00093v3
|
[
"Bhabesh Mali",
"Karthik Maddala",
"Vatsal Gupta",
"Sweeya Reddy",
"Chandan Karfa",
"Ramesh Karri"
] |
2024-06-28T17:46:19Z
|
2024-01-31T12:41:27Z
|