arxiv_id | title | abstract | link | authors | updated | published
---|---|---|---|---|---|---
2407.02060
|
Terminating Differentiable Tree Experts
|
We advance the recently proposed neuro-symbolic Differentiable Tree Machine, which learns tree operations using a combination of transformers and Tensor Product Representations. We investigate the architecture and propose two key components. First, we replace the series of distinct transformer layers used at every step with a mixture of experts. The resulting Differentiable Tree Experts model has a constant number of parameters for any number of computation steps, whereas the parameter count of the Differentiable Tree Machine grows linearly with the number of steps. Given this flexibility in the number of steps, we additionally propose a new termination algorithm that lets the model choose automatically how many steps to take. The resulting Terminating Differentiable Tree Experts model learns, albeit slowly, to predict the number of steps without an oracle. It can do so while maintaining the learning capabilities of the model, converging to the optimal number of steps.
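The combination of a shared expert pool with a learned termination signal can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration of the general idea — shared experts reused at every step plus a halting head in the spirit of adaptive computation time — not the authors' implementation; all names and thresholds are placeholders.

```python
import torch
import torch.nn as nn

class SteppedMixtureOfExperts(nn.Module):
    """Toy sketch: the same pool of experts is reused at every step,
    so the parameter count is constant in the number of steps."""
    def __init__(self, dim=64, num_experts=4, max_steps=10):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts)   # soft expert weights
        self.halt = nn.Linear(dim, 1)               # termination probability
        self.max_steps = max_steps

    def forward(self, x):
        halted = torch.zeros(x.shape[0])
        for step in range(self.max_steps):
            weights = torch.softmax(self.router(x), dim=-1)             # (B, E)
            expert_out = torch.stack([e(x) for e in self.experts], -1)  # (B, D, E)
            x = torch.einsum('bde,be->bd', expert_out, weights)
            p_halt = torch.sigmoid(self.halt(x)).squeeze(-1)
            halted = torch.maximum(halted, p_halt)
            if bool((halted > 0.5).all()):  # stop once every item votes to halt
                break
        return x

model = SteppedMixtureOfExperts()
out = model(torch.randn(8, 64))
print(out.shape)  # torch.Size([8, 64])
```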
|
http://arxiv.org/pdf/2407.02060v1
|
[
"Jonathan Thomm",
"Michael Hersche",
"Giacomo Camposampiero",
"Aleksandar Terzić",
"Bernhard Schölkopf",
"Abbas Rahimi"
] |
2024-07-02T08:45:38Z
|
2024-07-02T08:45:38Z
|
2407.02057
|
HC-GLAD: Dual Hyperbolic Contrastive Learning for Unsupervised
Graph-Level Anomaly Detection
|
Unsupervised graph-level anomaly detection (UGAD) has garnered increasing attention in recent years due to its significance. However, most existing methods rely only on traditional graph neural networks that explore pairwise relationships, and such pairwise edges are insufficient to describe the multifaceted relationships involved in anomalies. There is an urgent need to exploit node group information, which plays a crucial role in UGAD. In addition, most previous works ignore global underlying properties (e.g., hierarchy and power-law structure) that are common in real-world graph datasets and are therefore indispensable for the UGAD task. In this paper, we propose a novel Dual Hyperbolic Contrastive Learning method for Unsupervised Graph-Level Anomaly Detection (HC-GLAD for short). To exploit node group connections, we construct hypergraphs based on gold motifs and subsequently perform hypergraph convolution. Furthermore, to preserve the hierarchy of real-world graphs, we introduce hyperbolic geometry into this field and conduct both graph and hypergraph embedding learning in hyperbolic space with the hyperboloid model. To the best of our knowledge, this is the first work to simultaneously apply hypergraphs with node group connections and hyperbolic geometry in this field. Extensive experiments on several real-world datasets from different fields demonstrate the superiority of HC-GLAD on the UGAD task. The code is available at https://github.com/Yali-F/HC-GLAD.
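As a small illustration of the hyperboloid model mentioned above — our own hedged example, not the HC-GLAD code — the sketch below lifts Euclidean feature vectors onto the hyperboloid via the exponential map at the origin and computes the Lorentzian geodesic distance.

```python
import numpy as np

def exp_map_origin(v, c=1.0):
    """Exponential map at the hyperboloid origin (1/sqrt(c), 0, ..., 0).
    v: (n, d) tangent vectors (zero time component assumed)."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v, axis=-1, keepdims=True).clip(min=1e-9)
    x_time = np.cosh(sqrt_c * norm) / sqrt_c                # time coordinate
    x_space = np.sinh(sqrt_c * norm) * v / (sqrt_c * norm)  # space coordinates
    return np.concatenate([x_time, x_space], axis=-1)

def lorentz_distance(x, y, c=1.0):
    """Geodesic distance from the Lorentz inner product <x,y>_L = -x0*y0 + <xs,ys>."""
    inner = -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)
    return np.arccosh(np.clip(-c * inner, 1.0, None)) / np.sqrt(c)

z = np.random.randn(4, 8) * 0.1   # Euclidean embeddings (e.g., GNN output)
h = exp_map_origin(z)             # lifted onto the hyperboloid
print(lorentz_distance(h[0], h[1]))
```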
|
http://arxiv.org/pdf/2407.02057v1
|
[
"Yali Fu",
"Jindong Li",
"Jiahong Liu",
"Qianli Xing",
"Qi Wang",
"Irwin King"
] |
2024-07-02T08:38:32Z
|
2024-07-02T08:38:32Z
|
2406.19765
|
Systematic Literature Review on Application of Learning-based Approaches
in Continuous Integration
|
Context: Machine learning (ML) and deep learning (DL) analyze raw data to extract valuable insights in specific phases. The rise of continuous practices in software projects emphasizes automating Continuous Integration (CI) with these learning-based methods, while the growing adoption of such approaches underscores the need for systematizing knowledge. Objective: Our objective is to comprehensively review and analyze existing literature concerning learning-based methods within the CI domain. We endeavor to identify and analyze various techniques documented in the literature, emphasizing the fundamental attributes of training phases within learning-based solutions in the context of CI. Method: We conducted a Systematic Literature Review (SLR) involving 52 primary studies. Through statistical and thematic analyses, we explored the correlations between CI tasks and the training phases of learning-based methodologies across the selected studies, encompassing a spectrum from data engineering techniques to evaluation metrics. Results: This paper presents an analysis of the automation of CI tasks utilizing learning-based methods. We identify and analyze nine types of data sources, four steps in data preparation, four feature types, nine subsets of data features, five approaches for hyperparameter selection and tuning, and fifteen evaluation metrics. Furthermore, we discuss the latest techniques employed, existing gaps in CI task automation, and the characteristics of the utilized learning-based techniques. Conclusion: This study provides a comprehensive overview of learning-based methods in CI, offering valuable insights for researchers and practitioners developing CI task automation. It also highlights the need for further research to advance these methods in CI.
|
http://arxiv.org/pdf/2406.19765v2
|
[
"Ali Kazemi Arani",
"Triet Huynh Minh Le",
"Mansooreh Zahedi",
"M. Ali Babar"
] |
2024-07-02T08:29:56Z
|
2024-06-28T09:10:23Z
|
2407.04738
|
A Contrastive Learning Based Convolutional Neural Network for ERP
Brain-Computer Interfaces
|
ERP-based EEG detection is gaining increasing attention in the field of brain-computer interfaces. However, due to the complexity of ERP signal components, their low signal-to-noise ratio, and significant inter-subject variability, cross-subject ERP signal detection has been challenging. The continuous advancement of deep learning has greatly contributed to addressing this issue. This brief proposes a contrastive learning training framework and an Inception module to extract multi-scale temporal and spatial features that represent the subject-invariant components of ERP signals. Specifically, a base encoder integrated with a linear Inception module and a nonlinear projector is used to project the raw data into latent space. By maximizing signal similarity under different targets, the framework minimizes inter-subject EEG signal differences in latent space. The extracted spatiotemporal features are then used for ERP target detection. The proposed algorithm achieved the best AUC performance in single-trial binary classification tasks on the P300 dataset and showed significant improvement in speller decoding tasks compared to existing algorithms.
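To make the contrastive pretraining step concrete, here is a generic, hedged sketch of an NT-Xent-style loss over paired views; the encoder, projector, and batch construction are placeholders, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss: z1[i] and z2[i] are embeddings of two views of sample i."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z.shape[0]
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    targets = torch.arange(n, device=z.device)
    targets = (targets + n // 2) % n                     # positive = the other view
    return F.cross_entropy(sim, targets)

b, d = 16, 128
loss = nt_xent(torch.randn(b, d), torch.randn(b, d))
print(float(loss))
```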
|
http://arxiv.org/pdf/2407.04738v1
|
[
"Yuntian Cui",
"Xinke Shen",
"Dan Zhang",
"Chen Yang"
] |
2024-07-02T08:20:52Z
|
2024-07-02T08:20:52Z
|
2407.02031
|
SwiftDiffusion: Efficient Diffusion Model Serving with Add-on Modules
|
This paper documents our characterization study and practices for serving text-to-image requests with stable diffusion models in production. We first comprehensively analyze inference request traces for commercial text-to-image applications. We begin with the observation that add-on modules, i.e., ControlNets and LoRAs, which augment the base stable diffusion models, are ubiquitous in generating images for commercial applications. Despite their efficacy, these add-on modules incur high loading overhead, prolong serving latency, and consume expensive GPU resources. Driven by our characterization study, we present SwiftDiffusion, a system that efficiently generates high-quality images using stable diffusion models and add-on modules. To achieve this, SwiftDiffusion reconstructs the existing text-to-image serving workflow by identifying opportunities for parallel computation and distributing ControlNet computations across multiple GPUs. Further, SwiftDiffusion thoroughly analyzes the dynamics of image generation and develops techniques to eliminate the overhead associated with LoRA loading and patching while preserving image quality. Finally, SwiftDiffusion proposes specialized optimizations in the backbone architecture of the stable diffusion models that are also compatible with the efficient serving of add-on modules. Compared to state-of-the-art text-to-image serving systems, SwiftDiffusion reduces serving latency by up to 5x and improves serving throughput by up to 2x without compromising image quality.
|
http://arxiv.org/pdf/2407.02031v1
|
[
"Suyi Li",
"Lingyun Yang",
"Xiaoxiao Jiang",
"Hanfeng Lu",
"Zhipeng Di",
"Weiyi Lu",
"Jiawei Chen",
"Kan Liu",
"Yinghao Yu",
"Tao Lan",
"Guodong Yang",
"Lin Qu",
"Liping Zhang",
"Wei Wang"
] |
2024-07-02T07:59:08Z
|
2024-07-02T07:59:08Z
|
2407.02028
|
Why does in-context learning fail sometimes? Evaluating in-context
learning on open and closed questions
|
We measure the performance of in-context learning as a function of task novelty and difficulty for open and closed questions. For that purpose, we created a novel benchmark consisting of hard scientific questions, each paired with contexts of varying relevance. We show that, counter-intuitively, a context that is more aligned with the topic does not always help more than a less relevant context. This effect is especially visible for open questions and for questions of high difficulty or novelty. This result reveals a fundamental difference between the treatment of closed-form and open-form questions by large language models and shows the need for a more robust evaluation of in-context learning across different types of questions. It also poses a new question of how to optimally select a context for large language models, especially in the context of Retrieval-Augmented Generation (RAG) systems. Our results suggest that the answer can be highly application-dependent and may be contingent on factors including the format of the question, the perceived difficulty level of the questions, and the novelty or popularity of the information we seek.
|
http://arxiv.org/pdf/2407.02028v1
|
[
"Xiang Li",
"Haoran Tang",
"Siyu Chen",
"Ziwei Wang",
"Ryan Chen",
"Marcin Abram"
] |
2024-07-02T07:52:30Z
|
2024-07-02T07:52:30Z
|
2407.02025
|
On the Expressive Power of Sparse Geometric MPNNs
|
Motivated by applications in chemistry and other sciences, we study the expressive power of message-passing neural networks for geometric graphs, whose node features correspond to 3-dimensional positions. Recent work has shown that such models can separate generic pairs of non-equivalent geometric graphs, though they may fail to separate some rare and complicated instances. However, these results assume a fully connected graph, where each node possesses complete knowledge of all other nodes. In contrast, in applications, each node often only has knowledge of a small number of nearest neighbors. This paper shows that generic pairs of non-equivalent geometric graphs can be separated by message-passing networks with rotation-equivariant features as long as the underlying graph is connected. When only invariant intermediate features are allowed, generic separation is guaranteed for generically globally rigid graphs. We introduce a simple architecture, EGENNET, which achieves our theoretical guarantees and compares favorably with alternative architectures on synthetic and chemical benchmarks.
|
http://arxiv.org/pdf/2407.02025v1
|
[
"Yonatan Sverdlov",
"Nadav Dym"
] |
2024-07-02T07:48:22Z
|
2024-07-02T07:48:22Z
|
2407.02013
|
DiGRAF: Diffeomorphic Graph-Adaptive Activation Function
|
In this paper, we propose a novel activation function tailored specifically for graph data in Graph Neural Networks (GNNs). Motivated by the need for graph-adaptive and flexible activation functions, we introduce DiGRAF, leveraging Continuous Piecewise-Affine Based (CPAB) transformations, which we augment with an additional GNN to learn a graph-adaptive diffeomorphic activation function in an end-to-end manner. In addition to its graph-adaptivity and flexibility, DiGRAF also possesses properties that are widely recognized as desirable for activation functions, such as differentiability, boundedness within the domain, and computational efficiency. We conduct an extensive set of experiments across diverse datasets and tasks, demonstrating consistent and superior performance of DiGRAF compared to traditional and graph-specific activation functions, highlighting its effectiveness as an activation function for GNNs.
|
http://arxiv.org/pdf/2407.02013v1
|
[
"Krishna Sri Ipsit Mantri",
"Xinzhi Wang",
"Carola-Bibiane Schönlieb",
"Bruno Ribeiro",
"Beatrice Bevilacqua",
"Moshe Eliasof"
] |
2024-07-02T07:33:40Z
|
2024-07-02T07:33:40Z
|
2405.10020
|
Natural Language Can Help Bridge the Sim2Real Gap
|
The main challenge in learning image-conditioned robotic policies is acquiring a visual representation conducive to low-level control. Due to the high dimensionality of the image space, learning a good visual representation requires a considerable amount of visual data. However, when learning in the real world, data is expensive. Sim2Real is a promising paradigm for overcoming data scarcity in the real-world target domain by using a simulator to collect large amounts of cheap data closely related to the target task. However, it is difficult to transfer an image-conditioned policy from sim to real when the domains are very visually dissimilar. To bridge the sim2real visual gap, we propose using natural language descriptions of images as a unifying signal across domains that captures the underlying task-relevant semantics. Our key insight is that if two image observations from different domains are labeled with similar language, the policy should predict similar action distributions for both images. We demonstrate that training the image encoder to predict the language description or the distance between descriptions of a sim or real image serves as a useful, data-efficient pretraining step that helps learn a domain-invariant image representation. We can then use this image encoder as the backbone of an imitation learning (IL) policy trained simultaneously on a large amount of simulated demonstrations and a handful of real ones. Our approach outperforms widely used prior sim2real methods and strong vision-language pretraining baselines like CLIP and R3M by 25 to 40%. See additional videos and materials at https://robin-lab.cs.utexas.edu/lang4sim2real/.
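The pretraining objective described above — making image-embedding distances track language-description distances across domains — can be sketched roughly as follows. This is a hedged toy version; the encoders and the distance-matching loss are our own placeholders, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))

# Stand-in for a frozen text encoder producing description embeddings.
def lang_embed(batch_size, dim=256):
    return torch.randn(batch_size, dim)

imgs = torch.randn(32, 3, 64, 64)   # a mix of sim and real images
lang = lang_embed(32)               # embeddings of their language descriptions

z = F.normalize(image_encoder(imgs), dim=1)
t = F.normalize(lang, dim=1)

# Supervise pairwise image distances with pairwise language distances,
# pushing the encoder toward a domain-invariant representation.
img_dist = torch.cdist(z, z)
lang_dist = torch.cdist(t, t)
loss = F.mse_loss(img_dist, lang_dist)
loss.backward()
print(float(loss))
```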
|
http://arxiv.org/pdf/2405.10020v2
|
[
"Albert Yu",
"Adeline Foote",
"Raymond Mooney",
"Roberto Martín-Martín"
] |
2024-07-02T07:29:04Z
|
2024-05-16T12:02:02Z
|
2407.02010
|
Feynman-Kac Operator Expectation Estimator
|
The Feynman-Kac Operator Expectation Estimator (FKEE) is an innovative method for estimating the target mathematical expectation $\mathbb{E}_{X\sim P}[f(X)]$ without relying on a large number of samples, in contrast to the commonly used Markov Chain Monte Carlo (MCMC) expectation estimator. FKEE comprises diffusion bridge models and an approximation of the Feynman-Kac operator. The key idea is to use the solution of the Feynman-Kac equation at the initial time, $u(x_0,0)=\mathbb{E}[f(X_T)|X_0=x_0]$. We use Physics-Informed Neural Networks (PINNs) to approximate the Feynman-Kac operator, which enables the incorporation of diffusion bridge models into the expectation estimator, significantly improves data efficiency, and substantially reduces the variance. The diffusion bridge model is a more general MCMC method. In order to encompass a wide range of MCMC algorithms, we propose a new diffusion bridge model based on the minimum Wasserstein distance. This diffusion bridge model is universal and reduces the training time of the PINN. FKEE also mitigates the adverse impact of the curse of dimensionality and weakens the assumptions on the distribution of $X$ and the performance function $f$ required by the general MCMC expectation estimator. The theoretical properties of this universal diffusion bridge model are also shown. Finally, we demonstrate the advantages and potential applications of this method through various concrete experiments, including the challenging task of approximating the partition function in random graph models such as the Ising model.
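To illustrate the Feynman-Kac idea concretely: for an SDE $dX_t = b\,dt + \sigma\,dW_t$, the function $u(x,t)=\mathbb{E}[f(X_T)\mid X_t=x]$ solves the backward equation $u_t + b\,u_x + \tfrac{1}{2}\sigma^2 u_{xx} = 0$ with terminal condition $u(x,T)=f(x)$. The sketch below trains a tiny network on this PDE residual for constant $b,\sigma$ — a hedged toy, not the paper's FKEE pipeline.

```python
import torch
import torch.nn as nn

b_drift, sigma, T = 0.0, 1.0, 1.0
f = lambda x: torch.sin(x)  # target: E[f(X_T) | X_0 = x0]

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64),
                    nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(2000):
    x = (torch.rand(256, 1) * 4 - 2).requires_grad_(True)
    t = (torch.rand(256, 1) * T).requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = u_t + b_drift * u_x + 0.5 * sigma**2 * u_xx  # PDE residual
    x_T = torch.rand(256, 1) * 4 - 2
    terminal = net(torch.cat([x_T, torch.full_like(x_T, T)], dim=1))
    loss = (residual**2).mean() + ((terminal - f(x_T))**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

x0 = torch.tensor([[0.5, 0.0]])   # query u(x0, t=0)
print(float(net(x0)))             # approx E[sin(X_T) | X_0 = 0.5]
```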
|
http://arxiv.org/pdf/2407.02010v1
|
[
"Jingyuan Li",
"Wei Liu"
] |
2024-07-02T07:29:02Z
|
2024-07-02T07:29:02Z
|
2308.05564
|
Large Skew-t Copula Models and Asymmetric Dependence in Intraday Equity
Returns
|
Skew-t copula models are attractive for the modeling of financial data because they allow for asymmetric and extreme tail dependence. We show that the copula implicit in the skew-t distribution of Azzalini and Capitanio (2003) allows for a higher level of pairwise asymmetric dependence than two popular alternative skew-t copulas. Estimation of this copula in high dimensions is challenging, and we propose a fast and accurate Bayesian variational inference (VI) approach to do so. The method uses a generative representation of the skew-t distribution to define an augmented posterior that can be approximated accurately. A stochastic gradient ascent algorithm is used to solve the variational optimization. The methodology is used to estimate skew-t factor copula models with up to 15 factors for intraday returns from 2017 to 2021 on 93 U.S. equities. The copula captures substantial heterogeneity in asymmetric dependence over equity pairs, in addition to the variability in pairwise correlations. In a moving window study we show that the asymmetric dependencies also vary over time, and that intraday predictive densities from the skew-t copula are more accurate than those from benchmark copula models. Portfolio selection strategies based on the estimated pairwise asymmetric dependencies improve performance relative to the index.
|
http://arxiv.org/pdf/2308.05564v4
|
[
"Lin Deng",
"Michael Stanley Smith",
"Worapree Maneesoonthorn"
] |
2024-07-02T07:27:27Z
|
2023-08-10T13:24:45Z
|
2407.02543
|
Towards the Next Frontier in Speech Representation Learning Using
Disentanglement
|
The popular frameworks for self-supervised learning of speech representations have largely focused on frame-level masked prediction of speech regions. While this has shown promising downstream performance for speech recognition and related tasks, it has largely ignored factors of speech that are encoded at a coarser level, such as characteristics of the speaker or channel that remain consistent throughout a speech utterance. In this work, we propose a framework for Learning Disentangled Self-Supervised representations of speech (termed Learn2Diss), which consists of frame-level and utterance-level encoder modules. The two encoders are initially learned independently: the frame-level model is largely inspired by existing self-supervision techniques, thereby learning pseudo-phonemic representations, while the utterance-level encoder is inspired by contrastive learning of pooled embeddings, thereby learning pseudo-speaker representations. The joint learning of these two modules disentangles the two encoders using a mutual-information-based criterion. With several downstream evaluation experiments, we show that the proposed Learn2Diss achieves state-of-the-art results on a variety of tasks, with the frame-level encoder representations improving semantic tasks and the utterance-level representations improving non-semantic tasks.
|
http://arxiv.org/pdf/2407.02543v1
|
[
"Varun Krishna",
"Sriram Ganapathy"
] |
2024-07-02T07:13:35Z
|
2024-07-02T07:13:35Z
|
2407.01991
|
Generation of Geodesics with Actor-Critic Reinforcement Learning to
Predict Midpoints
|
To find shortest paths for all pairs on continuous manifolds with infinitesimally defined metrics, we propose generating them by recursively predicting midpoints, and we introduce an actor-critic method to learn midpoint prediction. We prove the soundness of our approach and show experimentally that the proposed method outperforms existing methods on both local and global path planning tasks.
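The recursive construction is easy to picture: a learned midpoint predictor bisects each segment until the desired resolution. Below is a hedged toy sketch in which the "learned" predictor is stubbed out with a Euclidean average; in the paper's setting it would be the trained actor network.

```python
from typing import Callable, List
import numpy as np

def build_geodesic(a: np.ndarray, b: np.ndarray,
                   midpoint: Callable[[np.ndarray, np.ndarray], np.ndarray],
                   depth: int) -> List[np.ndarray]:
    """Recursively bisect [a, b] with a (learned) midpoint predictor."""
    if depth == 0:
        return [a, b]
    m = midpoint(a, b)
    left = build_geodesic(a, m, midpoint, depth - 1)
    right = build_geodesic(m, b, midpoint, depth - 1)
    return left + right[1:]  # drop the duplicated midpoint

# Stub predictor; a trained actor would output the true geodesic midpoint.
euclidean_mid = lambda a, b: (a + b) / 2.0

path = build_geodesic(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                      euclidean_mid, depth=3)
print(len(path))  # 2**3 + 1 = 9 points
```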
|
http://arxiv.org/pdf/2407.01991v1
|
[
"Kazumi Kasaura"
] |
2024-07-02T07:06:49Z
|
2024-07-02T07:06:49Z
|
2405.05714
|
Estimating Noisy Class Posterior with Part-level Labels for Noisy Label
Learning
|
In noisy label learning, estimating noisy class posteriors plays a fundamental role in developing consistent classifiers, as it forms the basis for estimating clean class posteriors and the transition matrix. Existing methods typically learn noisy class posteriors by training a classification model with noisy labels. However, when labels are incorrect, these models may be misled to overemphasize feature parts that do not reflect the instance characteristics, resulting in significant errors in estimating noisy class posteriors. To address this issue, this paper proposes to augment the supervised information with part-level labels, encouraging the model to focus on and integrate richer information from various parts. Specifically, our method first partitions features into distinct parts by cropping instances, yielding part-level labels associated with these various parts. Subsequently, we introduce a novel single-to-multiple transition matrix to model the relationship between the noisy and part-level labels, which incorporates part-level labels into a classifier-consistent framework. Utilizing this framework with part-level labels, we can learn the noisy class posteriors more precisely by guiding the model to integrate information from various parts, ultimately improving classification performance. Our method is theoretically sound, and experiments show that it is empirically effective on synthetic and real-world noisy benchmarks.
|
http://arxiv.org/pdf/2405.05714v2
|
[
"Rui Zhao",
"Bin Shi",
"Jianfei Ruan",
"Tianze Pan",
"Bo Dong"
] |
2024-07-02T07:06:15Z
|
2024-05-08T12:13:40Z
|
2407.02542
|
ECAT: An Entire Space Continual and Adaptive Transfer Learning Framework
for Cross-Domain Recommendation
|
In industrial recommendation systems, there are several mini-apps designed to meet the diverse interests and needs of users. Their sample space is merely a small subset of the entire space, making it challenging to train an efficient model. In recent years, there have been many excellent studies on cross-domain recommendation aimed at mitigating the problem of data sparsity. However, few of them have simultaneously considered the adaptability of both sample and representation transfer, in a continual setting, to the target task. To overcome these issues, we propose an Entire space Continual and Adaptive Transfer learning framework called ECAT, which includes two core components. First, for sample transfer, we propose a two-stage method that realizes a coarse-to-fine process: an initial selection through a graph-guided method, followed by a fine-grained selection using a domain adaptation method. Second, we propose an adaptive knowledge distillation method for continually transferring representations from a model that is well trained on the entire-space dataset. ECAT enables full utilization of entire-space samples and representations under the supervision of the target task, while avoiding negative transfer. Comprehensive experiments on real-world industrial datasets from Taobao show that ECAT advances state-of-the-art performance on offline metrics, and brings +13.6% CVR and +8.6% orders for Baiyibutie, a well-known Taobao mini-app.
|
http://arxiv.org/pdf/2407.02542v1
|
[
"Chaoqun Hou",
"Yuanhang Zhou",
"Yi Cao",
"Tong Liu"
] |
2024-07-02T07:02:39Z
|
2024-07-02T07:02:39Z
|
2205.11168
|
Logarithmic regret bounds for continuous-time average-reward Markov
decision processes
|
We consider reinforcement learning for continuous-time Markov decision processes (MDPs) in the infinite-horizon, average-reward setting. In contrast to discrete-time MDPs, a continuous-time process moves to a state and stays there for a random holding time after an action is taken. With unknown transition probabilities and rates of exponential holding times, we derive instance-dependent regret lower bounds that are logarithmic in the time horizon. Moreover, we design a learning algorithm and establish a finite-time regret bound that achieves the logarithmic growth rate. Our analysis builds upon upper confidence reinforcement learning, a delicate estimation of the mean holding times, and stochastic comparison of point processes.
|
http://arxiv.org/pdf/2205.11168v4
|
[
"Xuefeng Gao",
"Xun Yu Zhou"
] |
2024-07-02T06:57:21Z
|
2022-05-23T10:15:00Z
|
2311.03780
|
DynaSemble: Dynamic Ensembling of Textual and Structure-Based Models for
Knowledge Graph Completion
|
We consider two popular approaches to Knowledge Graph Completion (KGC): textual models that rely on textual entity descriptions, and structure-based models that exploit the connectivity structure of the Knowledge Graph (KG). Preliminary experiments show that these approaches have complementary strengths: structure-based models perform exceptionally well when the gold answer is easily reachable from the query head in the KG, while textual models exploit descriptions to give good performance even when the gold answer is not easily reachable. In response, we propose DynaSemble, a novel method for learning query-dependent ensemble weights to combine these approaches by using the distributions of scores assigned by the models in the ensemble to all candidate entities. DynaSemble achieves state-of-the-art results on three standard KGC datasets, with up to 6.8 pt MRR and 8.3 pt Hits@1 gains over the best baseline model for the WN18RR dataset.
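A minimal sketch of the core idea — learning query-dependent ensemble weights from the models' score distributions — follows. This is our own hedged illustration, not the DynaSemble code: summary statistics of each model's scores over all candidate entities feed a small network that outputs the mixing weight.

```python
import torch
import torch.nn as nn

def dist_features(scores: torch.Tensor) -> torch.Tensor:
    """Summary statistics of one model's scores over all candidates."""
    return torch.stack([scores.mean(-1), scores.std(-1),
                        scores.max(-1).values, scores.min(-1).values], dim=-1)

weight_net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                           nn.Linear(32, 1), nn.Sigmoid())

textual = torch.randn(4, 1000)     # scores from a textual KGC model
structural = torch.randn(4, 1000)  # scores from a structure-based model

alpha = weight_net(torch.cat([dist_features(textual),
                              dist_features(structural)], dim=-1))  # (4, 1)
combined = alpha * textual + (1 - alpha) * structural
print(combined.shape)  # per-query blended candidate scores
```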
|
http://arxiv.org/pdf/2311.03780v2
|
[
"Ananjan Nandi",
"Navdeep Kaur",
"Parag Singla",
"Mausam"
] |
2024-07-02T06:54:56Z
|
2023-11-07T07:53:06Z
|
2407.01985
|
The Epistemic Uncertainty Hole: an issue of Bayesian Neural Networks
|
Bayesian Deep Learning (BDL) gives access not only to aleatoric uncertainty, as standard neural networks already do, but also to epistemic uncertainty, a measure of the confidence a model has in its own predictions. In this article, we show through experiments that the evolution of epistemic uncertainty metrics with respect to model size and training-set size goes against theoretical expectations. More precisely, we observe that epistemic uncertainty literally collapses in the presence of large models, and sometimes also of little training data, while we would expect the exact opposite behaviour. This phenomenon, which we call the "epistemic uncertainty hole", is all the more problematic as it undermines the entire applicative potential of BDL, which is based precisely on the use of epistemic uncertainty. As an example, we evaluate the practical consequences of this uncertainty hole for one of the main applications of BDL, namely the detection of out-of-distribution samples.
|
http://arxiv.org/pdf/2407.01985v1
|
[
"Mohammed Fellaji",
"Frédéric Pennerath"
] |
2024-07-02T06:54:46Z
|
2024-07-02T06:54:46Z
|
2407.00116
|
Generative AI for Synthetic Data Across Multiple Medical Modalities: A
Systematic Review of Recent Developments and Challenges
|
This paper presents a comprehensive systematic review of generative models (GANs, VAEs, DMs, and LLMs) used to synthesize various medical data types, including imaging (dermoscopic, mammographic, ultrasound, CT, MRI, and X-ray), text, time-series, and tabular data (EHR). Unlike previous narrowly focused reviews, our study encompasses a broad array of medical data modalities and explores various generative models. Our search strategy queries databases such as Scopus, PubMed, and ArXiv, focusing on recent works from January 2021 to November 2023, excluding reviews and perspectives. This period emphasizes recent advancements beyond GANs, which have been extensively covered previously. The survey reveals insights from three key aspects: (1) Synthesis applications and purpose of synthesis, (2) generation techniques, and (3) evaluation methods. It highlights clinically valid synthesis applications, demonstrating the potential of synthetic data to tackle diverse clinical requirements. While conditional models incorporating class labels, segmentation masks and image translations are prevalent, there is a gap in utilizing prior clinical knowledge and patient-specific context, suggesting a need for more personalized synthesis approaches and emphasizing the importance of tailoring generative approaches to the unique characteristics of medical data. Additionally, there is a significant gap in using synthetic data beyond augmentation, such as for validation and evaluation of downstream medical AI models. The survey uncovers that the lack of standardized evaluation methodologies tailored to medical images is a barrier to clinical application, underscoring the need for in-depth evaluation approaches, benchmarking, and comparative studies to promote openness and collaboration.
|
http://arxiv.org/pdf/2407.00116v2
|
[
"Mahmoud Ibrahim",
"Yasmina Al Khalil",
"Sina Amirrajab",
"Chang Sun",
"Marcel Breeuwer",
"Josien Pluim",
"Bart Elen",
"Gokhan Ertaylan",
"Michel Dumontier"
] |
2024-07-02T06:51:09Z
|
2024-06-27T14:00:11Z
|
2407.01979
|
Unveiling Global Interactive Patterns across Graphs: Towards
Interpretable Graph Neural Networks
|
Graph Neural Networks (GNNs) have emerged as a prominent framework for graph mining, leading to significant advances across various domains. Stemming from the node-wise representations of GNNs, existing explanation studies have embraced the subgraph-specific viewpoint that attributes decision results to the salient features and local structures of nodes. However, graph-level tasks necessitate long-range dependencies and global interactions for advanced GNNs, deviating significantly from subgraph-specific explanations. To bridge this gap, this paper proposes a novel intrinsically interpretable scheme for graph classification, termed Global Interactive Pattern (GIP) learning, which introduces learnable global interactive patterns to explicitly interpret decisions. GIP first tackles the complexity of interpretation by clustering numerous nodes using a constrained graph clustering module. Then, it matches the coarsened global interactive instance with a batch of self-interpretable graph prototypes, thereby facilitating a transparent graph-level reasoning process. Extensive experiments conducted on both synthetic and real-world benchmarks demonstrate that the proposed GIP yields significantly superior interpretability and competitive performance compared to the state-of-the-art counterparts. Our code will be made publicly available.
|
http://arxiv.org/abs/2407.01979v1
|
[
"Yuwen Wang",
"Shunyu Liu",
"Tongya Zheng",
"Kaixuan Chen",
"Mingli Song"
] |
2024-07-02T06:31:13Z
|
2024-07-02T06:31:13Z
|
2407.00943
|
FedEx: Expediting Federated Learning over Heterogeneous Mobile Devices
by Overlapping and Participant Selection
|
Training latency is critical for the success of numerous intriguing applications ignited by federated learning (FL) over heterogeneous mobile devices. By revolutionarily overlapping local gradient transmission with continuous local computing, FL can remarkably reduce its training latency over homogeneous clients, yet it encounters severe model staleness, model drift, memory cost, and straggler issues in heterogeneous environments. To unleash the full potential of overlapping, we propose FedEx, a novel federated learning approach to expedite FL training over mobile devices under data, computing, and wireless heterogeneity. FedEx redefines the overlapping procedure with staleness ceilings to constrain memory consumption and make overlapping compatible with participant selection (PS) designs. FedEx then characterizes the PS utility function by considering the latency reduced by overlapping, and provides a holistic PS solution to address the straggler issue. FedEx also introduces a simple but effective metric to trigger overlapping, in order to avoid model drift. Experimental results show that, compared with its peer designs, FedEx demonstrates substantial reductions in FL training latency over heterogeneous mobile devices with limited memory cost.
|
http://arxiv.org/pdf/2407.00943v2
|
[
"Jiaxiang Geng",
"Boyu Li",
"Xiaoqi Qin",
"Yixuan Li",
"Liang Li",
"Yanzhao Hou",
"Miao Pan"
] |
2024-07-02T06:16:36Z
|
2024-07-01T03:52:22Z
|
2402.15350
|
Farsight: Fostering Responsible AI Awareness During AI Application
Prototyping
|
Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly during prompt-based prototyping. To address this, we present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping. Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms. We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using Farsight, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that Farsight encourages them to focus on end-users and think beyond immediate harms. We discuss these findings and reflect on their implications for designing AI prototyping experiences that meaningfully engage with AI harms. Farsight is publicly accessible at: https://PAIR-code.github.io/farsight.
|
http://arxiv.org/abs/2402.15350v2
|
[
"Zijie J. Wang",
"Chinmay Kulkarni",
"Lauren Wilcox",
"Michael Terry",
"Michael Madaio"
] |
2024-07-02T06:12:05Z
|
2024-02-23T14:38:05Z
|
2407.01972
|
MeMemo: On-device Retrieval Augmentation for Private and Personalized
Text Generation
|
Retrieval-augmented text generation (RAG) addresses the common limitations of large language models (LLMs), such as hallucination, by retrieving information from an updatable external knowledge base. However, existing approaches often require dedicated backend servers for data storage and retrieval, thereby limiting their applicability in use cases that require strict data privacy, such as personal finance, education, and medicine. To address the pressing need for client-side dense retrieval, we introduce MeMemo, the first open-source JavaScript toolkit that adapts the state-of-the-art approximate nearest neighbor search technique HNSW to browser environments. Developed with modern and native Web technologies, such as IndexedDB and Web Workers, our toolkit leverages client-side hardware capabilities to enable researchers and developers to efficiently search through millions of high-dimensional vectors in the browser. MeMemo enables exciting new design and research opportunities, such as private and personalized content creation and interactive prototyping, as demonstrated in our example application RAG Playground. Reflecting on our work, we discuss the opportunities and challenges for on-device dense retrieval. MeMemo is available at https://github.com/poloclub/mememo.
|
http://arxiv.org/abs/2407.01972v1
|
[
"Zijie J. Wang",
"Duen Horng Chau"
] |
2024-07-02T06:08:55Z
|
2024-07-02T06:08:55Z
|
2404.02180
|
Remote sensing framework for geological mapping via stacked autoencoders
and clustering
|
Supervised machine learning methods for geological mapping via remote sensing face limitations due to the scarcity of accurately labelled training data that can be addressed by unsupervised learning, such as dimensionality reduction and clustering. Dimensionality reduction methods have the potential to play a crucial role in improving the accuracy of geological maps. Although conventional dimensionality reduction methods may struggle with nonlinear data, unsupervised deep learning models such as autoencoders can model non-linear relationships. Stacked autoencoders feature multiple interconnected layers to capture hierarchical data representations useful for remote sensing data. This study presents an unsupervised machine learning-based framework for processing remote sensing data using stacked autoencoders for dimensionality reduction and k-means clustering for mapping geological units. We use Landsat 8, ASTER, and Sentinel-2 datasets to evaluate the framework for geological mapping of the Mutawintji region in Western New South Wales, Australia. We also compare stacked autoencoders with principal component analysis and canonical autoencoders. Our results reveal that the framework produces accurate and interpretable geological maps, efficiently discriminating rock units. We find that the accuracy of stacked autoencoders ranges from 86.6 % to 90 %, depending on the remote sensing data type, which is superior to their counterparts. We also find that the generated maps align with prior geological knowledge of the study area while providing novel insights into geological structures.
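A compact, hedged sketch of the described pipeline — autoencoder-based dimensionality reduction followed by k-means over the latent codes — follows, using random stand-in pixels rather than Landsat/ASTER/Sentinel-2 bands. A deep autoencoder trained end-to-end stands in here for the layer-wise-pretrained stacked autoencoder.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

X = torch.rand(5000, 12)  # stand-in: 12 spectral bands per pixel

encoder = nn.Sequential(nn.Linear(12, 32), nn.ReLU(),
                        nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 3))
decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(),
                        nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 12))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for epoch in range(200):  # reconstruction training
    z = encoder(X)
    loss = nn.functional.mse_loss(decoder(z), X)
    opt.zero_grad(); loss.backward(); opt.step()

codes = encoder(X).detach().numpy()  # low-dimensional representation
units = KMeans(n_clusters=6, n_init=10).fit_predict(codes)
print(units[:10])  # cluster ids, interpreted as geological units
```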
|
http://arxiv.org/pdf/2404.02180v3
|
[
"Sandeep Nagar",
"Ehsan Farahbakhsh",
"Joseph Awange",
"Rohitash Chandra"
] |
2024-07-02T05:52:15Z
|
2024-04-02T09:15:32Z
|
2407.01960
|
Zero-shot Video Restoration and Enhancement Using Pre-Trained Image
Diffusion Model
|
Diffusion-based zero-shot image restoration and enhancement models have achieved great success in various image restoration and enhancement tasks without training. However, directly applying them to video restoration and enhancement results in severe temporal flickering artifacts. In this paper, we propose the first framework for zero-shot video restoration and enhancement based on a pre-trained image diffusion model. By replacing the self-attention layer with the proposed cross-previous-frame attention layer, the pre-trained image diffusion model can take advantage of the temporal correlation between neighboring frames. We further propose temporal consistency guidance, spatial-temporal noise sharing, and an early stopping sampling strategy for better temporally consistent sampling. Our method is a plug-and-play module that can be inserted into any diffusion-based zero-shot image restoration or enhancement methods to further improve their performance. Experimental results demonstrate the superiority of our proposed method in producing temporally consistent videos with better fidelity.
|
http://arxiv.org/pdf/2407.01960v1
|
[
"Cong Cao",
"Huanjing Yue",
"Xin Liu",
"Jingyu Yang"
] |
2024-07-02T05:31:59Z
|
2024-07-02T05:31:59Z
|
2407.01953
|
CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models
using Data Fusion in Financial Applications
|
The integration of Large Language Models (LLMs) into financial analysis has garnered significant attention in the NLP community. This paper presents our solution to the IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs within three critical areas of financial tasks: financial classification, financial text summarization, and single stock trading. We adopted Llama3-8B and Mistral-7B as base models, fine-tuning them through Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA). To enhance model performance, we combine the datasets from Task 1 and Task 2 for data fusion. Our approach aims to tackle these diverse tasks in a comprehensive and integrated manner, showcasing LLMs' capacity to address diverse and complex financial tasks with improved accuracy and decision-making capabilities.
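For readers unfamiliar with the PEFT/LoRA setup named above, here is a minimal hedged sketch using the Hugging Face peft library; the base checkpoint, rank, and target modules are illustrative choices, not the team's exact configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                  # low-rank update dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()        # only adapter weights train
```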
|
http://arxiv.org/pdf/2407.01953v1
|
[
"Yupeng Cao",
"Zhiyuan Yao",
"Zhi Chen",
"Zhiyang Deng"
] |
2024-07-02T05:04:13Z
|
2024-07-02T05:04:13Z
|
2407.01948
|
Extracting and Encoding: Leveraging Large Language Models and Medical
Knowledge to Enhance Radiological Text Representation
|
Advancing representation learning in specialized fields like medicine remains challenging due to the scarcity of expert annotations for text and images. To tackle this issue, we present a novel two-stage framework designed to extract high-quality factual statements from free-text radiology reports in order to improve the representations of text encoders and, consequently, their performance on various downstream tasks. In the first stage, we propose a Fact Extractor that leverages large language models (LLMs) to identify factual statements from well-curated domain-specific datasets. In the second stage, we introduce a Fact Encoder (CXRFE) based on a BERT model fine-tuned with objective functions designed to improve its representations using the extracted factual data. Our framework also includes a new embedding-based metric (CXRFEScore) for evaluating chest X-ray text generation systems, leveraging both stages of our approach. Extensive evaluations show that our fact extractor and encoder outperform current state-of-the-art methods in tasks such as sentence ranking, natural language inference, and label extraction from radiology reports. Additionally, our metric proves to be more robust and effective than existing metrics commonly used in the radiology report generation literature. The code of this project is available at https://github.com/PabloMessina/CXR-Fact-Encoder.
|
http://arxiv.org/pdf/2407.01948v1
|
[
"Pablo Messina",
"René Vidal",
"Denis Parra",
"Álvaro Soto",
"Vladimir Araujo"
] |
2024-07-02T04:39:19Z
|
2024-07-02T04:39:19Z
|
2202.08433
|
ADD 2022: the First Audio Deep Synthesis Detection Challenge
|
Audio deepfake detection is an emerging topic, which was included in ASVspoof 2021. However, recent shared tasks have not covered many real-life and challenging scenarios. The first Audio Deep synthesis Detection challenge (ADD) was motivated to fill in this gap. ADD 2022 includes three tracks: low-quality fake audio detection (LF), partially fake audio detection (PF), and audio fake game (FG). The LF track focuses on dealing with bona fide and fully fake utterances with various real-world noises, etc. The PF track aims to distinguish partially fake audio from real audio. The FG track is a rivalry game comprising two tasks: an audio generation task and an audio fake detection task. In this paper, we describe the datasets, evaluation metrics, and protocols. We also report major findings that reflect recent advances in audio deepfake detection tasks.
|
http://arxiv.org/pdf/2202.08433v3
|
[
"Jiangyan Yi",
"Ruibo Fu",
"Jianhua Tao",
"Shuai Nie",
"Haoxin Ma",
"Chenglong Wang",
"Tao Wang",
"Zhengkun Tian",
"Xiaohui Zhang",
"Ye Bai",
"Cunhang Fan",
"Shan Liang",
"Shiming Wang",
"Shuai Zhang",
"Xinrui Yan",
"Le Xu",
"Zhengqi Wen",
"Haizhou Li",
"Zheng Lian",
"Bin Liu"
] |
2024-07-02T04:06:53Z
|
2022-02-17T03:29:20Z
|
2407.01258
|
Introducing a Physics-informed Deep Learning Framework for Bridge Scour
Prediction
|
This paper introduces scour physics-informed neural networks (SPINNs), a hybrid physics-data-driven framework for bridge scour prediction using deep learning. SPINNs are developed based on historical scour monitoring data and integrate physics-based empirical equations into neural networks as supplementary loss components. We incorporated three architectures, LSTM, CNN, and NLinear, as base data-driven models. Despite varying performance across different base models and bridges, SPINNs overall outperformed pure data-driven models. In some bridge cases, SPINNs reduced forecasting errors by up to 50 percent. We also explored general models for bridge clusters, trained by aggregating datasets across multiple bridges in a region. The pure data-driven models mostly benefited from this approach, in particular for bridges with limited data. However, bridge-specific SPINNs provided more accurate predictions than general SPINNs for almost all case studies. Also, the time-dependent empirical equations derived from SPINNs showed reasonable accuracy in estimating maximum scour depth, providing more accurate predictions than HEC-18. Comparing both SPINNs and pure deep learning models with the traditional HEC-18 equation indicates substantial improvements in scour prediction accuracy. This study can pave the way for hybrid physics-machine learning methodologies to be implemented for bridge scour design and maintenance.
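The "empirical equation as supplementary loss" idea can be sketched in a few lines. Below is a hedged toy: a network predicts scour depth from flow features, and a simplified HEC-18-style power-law relation (coefficients hypothetical) is added as a soft physics penalty. The synthetic targets here are generated from the same relation purely so the example runs end to end.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def empirical_scour(velocity, depth, pier_width):
    """Hypothetical HEC-18-style power law: y_s ~ K * b^0.65 * Fr^0.43 * y^0.35."""
    froude = velocity / torch.sqrt(9.81 * depth)
    return 2.0 * pier_width**0.65 * froude**0.43 * depth**0.35

for step in range(1000):
    x = torch.rand(64, 3) + 0.5  # [velocity, flow depth, pier width]
    y_obs = empirical_scour(*x.T).unsqueeze(1) + 0.1 * torch.randn(64, 1)
    y_hat = net(x)
    data_loss = nn.functional.mse_loss(y_hat, y_obs)
    physics_loss = nn.functional.mse_loss(y_hat, empirical_scour(*x.T).unsqueeze(1))
    loss = data_loss + 0.1 * physics_loss  # empirical equation as soft constraint
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```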
|
http://arxiv.org/pdf/2407.01258v2
|
[
"Negin Yousefpour",
"Bo Wang"
] |
2024-07-02T03:55:11Z
|
2024-07-01T13:08:09Z
|
2403.13107
|
Towards Unsupervised Question Answering System with Multi-level
Summarization for Legal Text
|
This paper summarizes Team SCaLAR's work on SemEval-2024 Task 5: Legal Argument Reasoning in Civil Procedure. To address this Binary Classification task, which was daunting due to the complexity of the Legal Texts involved, we propose a simple yet novel similarity and distance-based unsupervised approach to generate labels. Further, we explore the Multi-level fusion of Legal-Bert embeddings using ensemble features, including CNN, GRU, and LSTM. To address the lengthy nature of Legal explanation in the dataset, we introduce T5-based segment-wise summarization, which successfully retained crucial information, enhancing the model's performance. Our unsupervised system witnessed a 20-point increase in macro F1-score on the development set and a 10-point increase on the test set, which is promising given its uncomplicated architecture.
|
http://arxiv.org/pdf/2403.13107v2
|
[
"M Manvith Prabhu",
"Haricharana Srinivasa",
"Anand Kumar M"
] |
2024-07-02T03:35:53Z
|
2024-03-19T19:15:13Z
|
2407.01920
|
To Forget or Not? Towards Practical Knowledge Unlearning for Large
Language Models
|
Large Language Models (LLMs) trained on extensive corpora inevitably retain sensitive data, such as personal privacy information and copyrighted material. Recent advancements in knowledge unlearning involve updating LLM parameters to erase specific knowledge. However, current unlearning paradigms are mired in vague forgetting boundaries, often erasing knowledge indiscriminately. In this work, we introduce KnowUnDo, a benchmark containing copyrighted content and user privacy domains to evaluate if the unlearning process inadvertently erases essential knowledge. Our findings indicate that existing unlearning methods often suffer from excessive unlearning. To address this, we propose a simple yet effective method, MemFlex, which utilizes gradient information to precisely target and unlearn sensitive parameters. Experimental results show that MemFlex is superior to existing methods in both precise knowledge unlearning and general knowledge retaining of LLMs. Code and dataset will be released at https://github.com/zjunlp/KnowUnDo.
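A rough, hedged sketch of gradient-guided selective unlearning in the spirit described above — not the MemFlex implementation: accumulate gradient magnitudes on the forget set, then restrict the unlearning update to the most sensitive parameters.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
loss_fn = nn.CrossEntropyLoss()

forget_x, forget_y = torch.randn(32, 16), torch.randint(0, 4, (32,))

# 1) Localize: per-parameter gradient magnitude on the forget set.
loss_fn(model(forget_x), forget_y).backward()
masks = {}
for name, p in model.named_parameters():
    g = p.grad.abs()
    thresh = g.flatten().kthvalue(int(0.9 * g.numel())).values
    masks[name] = (g >= thresh).float()  # keep the ~10% most sensitive weights
model.zero_grad()

# 2) Unlearn: gradient *ascent* on the forget loss, applied only through the mask.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for step in range(10):
    loss = -loss_fn(model(forget_x), forget_y)  # maximize forget-set loss
    opt.zero_grad(); loss.backward()
    for name, p in model.named_parameters():
        p.grad *= masks[name]  # touch only the targeted parameters
    opt.step()
```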
|
http://arxiv.org/pdf/2407.01920v1
|
[
"Bozhong Tian",
"Xiaozhuan Liang",
"Siyuan Cheng",
"Qingbin Liu",
"Mengru Wang",
"Dianbo Sui",
"Xi Chen",
"Huajun Chen",
"Ningyu Zhang"
] |
2024-07-02T03:34:16Z
|
2024-07-02T03:34:16Z
|
2404.15274
|
Metric-guided Image Reconstruction Bounds via Conformal Prediction
|
Recent advancements in machine learning have led to the development of novel medical imaging systems and algorithms that address ill-posed problems. Assessing their trustworthiness and understanding how to deploy them safely at test time remains an important and open problem. In this work, we propose using conformal prediction to compute valid and distribution-free bounds on downstream metrics given reconstructions generated by one algorithm, and to retrieve upper/lower bounds and inlier/outlier reconstructions according to the adjusted bounds. Our work offers 1) test-time image reconstruction evaluation without ground truth, 2) downstream performance guarantees, 3) meaningful upper/lower-bound reconstructions, and 4) meaningful statistical inlier/outlier reconstructions. We demonstrate our method on post-mastectomy radiotherapy planning using 3D breast CT reconstructions, and show 1) that metric-guided bounds have valid coverage for downstream metrics while conventional pixel-wise bounds do not, and 2) anatomical differences between the upper/lower bounds of metric-guided and pixel-wise methods. Our work paves the way for more meaningful and trustworthy test-time evaluation of medical image reconstructions. Code is available at https://github.com/matthewyccheung/conformal-metric.
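The core conformal step is compact enough to sketch. Below is a generic split-conformal bound on a scalar downstream metric — a hedged illustration with synthetic numbers, not the paper's metric or data: calibration scores give a finite-sample quantile that bounds the metric for new cases with probability at least 1 − α.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1  # target miscoverage

# Calibration set: true downstream metric vs. metric estimated from the
# reconstruction alone (both synthetic here).
metric_true = rng.normal(1.0, 0.2, size=500)
metric_est = metric_true + rng.normal(0.0, 0.05, size=500)

scores = np.abs(metric_true - metric_est)      # nonconformity scores
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample correction
qhat = np.quantile(scores, q_level, method="higher")

# For a new reconstruction: distribution-free bounds on its downstream metric.
new_est = 1.05
print(f"metric in [{new_est - qhat:.3f}, {new_est + qhat:.3f}] w.p. >= {1 - alpha}")
```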
|
http://arxiv.org/pdf/2404.15274v2
|
[
"Matt Y Cheung",
"Tucker J Netherton",
"Laurence E Court",
"Ashok Veeraraghavan",
"Guha Balakrishnan"
] |
2024-07-02T03:31:16Z
|
2024-04-23T17:59:12Z
|
2402.11469
|
A Curious Case of Searching for the Correlation between Training Data
and Adversarial Robustness of Transformer Textual Models
|
Existing works have shown that fine-tuned textual transformer models achieve state-of-the-art prediction performance but are also vulnerable to adversarial text perturbations. Traditional adversarial evaluation is often done only after fine-tuning the models, ignoring the training data. In this paper, we show that there is also a strong correlation between training data and model robustness. To this end, we extract 13 different features representing a wide range of input fine-tuning corpora properties and use them to predict the adversarial robustness of the fine-tuned models. Focusing mostly on the encoder-only transformer models BERT and RoBERTa, with additional results for BART, ELECTRA, and GPT2, we provide diverse evidence to support our argument. First, empirical analyses show that (a) the extracted features can be used with a lightweight classifier such as Random Forest to effectively predict the attack success rate, and (b) the features with the most influence on model robustness have a clear correlation with robustness. Second, our framework can be used as a fast and effective additional tool for robustness evaluation since it (a) saves 30x-193x runtime compared to the traditional technique, (b) is transferable across models, (c) can be used under adversarial training, and (d) is robust to statistical randomness. Our code is publicly available at https://github.com/CaptainCuong/RobustText_ACL2024.
|
http://arxiv.org/pdf/2402.11469v2
|
[
"Cuong Dang",
"Dung D. Le",
"Thai Le"
] |
2024-07-02T03:29:11Z
|
2024-02-18T05:58:25Z
|
2407.00382
|
Towards Universal Mesh Movement Networks
|
Solving complex Partial Differential Equations (PDEs) accurately and efficiently is an essential and challenging problem in all scientific and engineering disciplines. Mesh movement methods provide the capability to improve the accuracy of the numerical solution without increasing the overall mesh degree of freedom count. Conventional sophisticated mesh movement methods are extremely expensive and struggle to handle scenarios with complex boundary geometries. However, existing learning-based methods require re-training from scratch given a different PDE type or boundary geometry, which limits their applicability, and they also often suffer from robustness issues in the form of inverted elements. In this paper, we introduce the Universal Mesh Movement Network (UM2N), which -- once trained -- can be applied in a non-intrusive, zero-shot manner to move meshes with different size distributions and structures, for solvers applicable to different PDE types and boundary geometries. UM2N consists of a Graph Transformer (GT) encoder for extracting features and a Graph Attention Network (GAT) based decoder for moving the mesh. We evaluate our method on advection and Navier-Stokes based examples, as well as a real-world tsunami simulation case. Our method outperforms existing learning-based mesh movement methods in terms of the benchmarks described above. In comparison to the conventional sophisticated Monge-Ampère PDE-solver based method, our approach not only significantly accelerates mesh movement, but also proves effective in scenarios where the conventional method fails. Our project page is at https://erizmr.github.io/UM2N/.
|
http://arxiv.org/pdf/2407.00382v2
|
[
"Mingrui Zhang",
"Chunyang Wang",
"Stephan Kramer",
"Joseph G. Wallwork",
"Siyi Li",
"Jiancheng Liu",
"Xiang Chen",
"Matthew D. Piggott"
] |
2024-07-02T03:22:04Z
|
2024-06-29T09:35:12Z
|
2407.01907
|
The Solution for the ICCV 2023 Perception Test Challenge 2023 -- Task 6
-- Grounded videoQA
|
In this paper, we introduce a grounded video question-answering solution. Our research reveals that the fixed official baseline method for video question answering involves two main steps: visual grounding and object tracking. However, a significant challenge emerges during the initial step, where selected frames may lack clearly identifiable target objects. Furthermore, single images cannot address questions like "Track the container from which the person pours the first time." To tackle this issue, we propose an alternative two-stage approach: (1) first, we leverage the VALOR model to answer questions based on video information; (2) we then concatenate the questions with their respective answers. Finally, we employ TubeDETR to generate bounding boxes for the targets.
|
http://arxiv.org/pdf/2407.01907v1
|
[
"Hailiang Zhang",
"Dian Chao",
"Zhihao Guan",
"Yang Yang"
] |
2024-07-02T03:13:27Z
|
2024-07-02T03:13:27Z
|
2407.01903
|
Text-Aware Diffusion for Policy Learning
|
Training an agent to achieve particular goals or perform desired behaviors is often accomplished through reinforcement learning, especially in the absence of expert demonstrations. However, supporting novel goals or behaviors through reinforcement learning requires the ad-hoc design of appropriate reward functions, which quickly becomes intractable. To address this challenge, we propose Text-Aware Diffusion for Policy Learning (TADPoLe), which uses a pretrained, frozen text-conditioned diffusion model to compute dense zero-shot reward signals for text-aligned policy learning. We hypothesize that large-scale pretrained generative models encode rich priors that can supervise a policy to behave not only in a text-aligned manner, but also in alignment with a notion of naturalness summarized from internet-scale training data. In our experiments, we demonstrate that TADPoLe is able to learn policies for novel goal-achievement and continuous locomotion behaviors specified by natural language, in both Humanoid and Dog environments. The behaviors are learned zero-shot without ground-truth rewards or expert demonstrations, and are qualitatively more natural according to human evaluation. We further show that TADPoLe performs competitively when applied to robotic manipulation tasks in the Meta-World environment.
|
http://arxiv.org/pdf/2407.01903v1
|
[
"Calvin Luo",
"Mandy He",
"Zilai Zeng",
"Chen Sun"
] |
2024-07-02T03:08:20Z
|
2024-07-02T03:08:20Z
|
2407.00568
|
Divide And Conquer: Learning Chaotic Dynamical Systems With Multistep
Penalty Neural Ordinary Differential Equations
|
Forecasting high-dimensional dynamical systems is a fundamental challenge in various fields, such as the geosciences and engineering. Neural Ordinary Differential Equations (NODEs), which combine the power of neural networks and numerical solvers, have emerged as a promising algorithm for forecasting complex nonlinear dynamical systems. However, classical techniques used for NODE training are ineffective for learning chaotic dynamical systems. In this work, we propose a novel NODE-training approach that allows for robust learning of chaotic dynamical systems. Our method addresses the challenges of non-convexity and exploding gradients associated with the underlying chaotic dynamics. Training data trajectories from such systems are split into multiple, non-overlapping time windows. In addition to the deviation from the training data, the optimization loss term further penalizes discontinuities of the predicted trajectory between the time windows. The window size is selected based on the fastest Lyapunov time scale of the system. The multistep penalty (MP) method is first demonstrated on the Lorenz equations to illustrate how it improves the loss landscape and thereby accelerates optimization convergence. The MP method can optimize chaotic systems in a manner similar to least-squares shadowing with significantly lower computational costs. Our proposed algorithm, denoted the Multistep Penalty NODE (MP-NODE), is applied to chaotic systems such as the Kuramoto-Sivashinsky equation and two-dimensional Kolmogorov flow. It is observed that MP-NODE provides viable performance for such chaotic systems, not only for short-term trajectory predictions but also for invariant statistics that are hallmarks of the chaotic nature of these dynamics.
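The windowed loss is easy to write down. Here is a hedged toy version: a neural ODE (integrated with a simple fixed-step RK4 of our own choosing) is fitted per window, and a penalty term charges the jump between the end of one window's prediction and the start of the next. The trajectory, window layout, and penalty weight are placeholders.

```python
import torch
import torch.nn as nn

def rk4_rollout(f, y0, dt, steps):
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        k1 = f(y); k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
        ys.append(y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return torch.stack(ys)

node = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))

# Stand-in for a Lorenz trajectory, split into 4 windows sharing only endpoints.
traj = torch.randn(101, 3)
windows = [traj[i:i + 26] for i in range(0, 100, 25)]

def mp_loss(penalty=10.0, dt=0.01):
    total, preds = 0.0, []
    for w in windows:
        pred = rk4_rollout(node, w[0], dt, steps=len(w) - 1)
        total = total + ((pred - w) ** 2).mean()  # per-window data fit
        preds.append(pred)
    for prev, nxt in zip(preds[:-1], preds[1:]):
        total = total + penalty * ((prev[-1] - nxt[0]) ** 2).mean()  # jump penalty
    return total

print(float(mp_loss()))
```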
|
http://arxiv.org/pdf/2407.00568v2
|
[
"Dibyajyoti Chakraborty",
"Seung Whan Chung",
"Romit Maulik"
] |
2024-07-02T02:54:49Z
|
2024-06-30T02:50:28Z
|
2406.10787
|
Evidential Uncertainty Sets in Deep Classifiers Using Conformal
Prediction
|
In this paper, we propose the Evidential Conformal Prediction (ECP) method for image classifiers to generate conformal prediction sets. Our method is based on a non-conformity score function that has its roots in Evidential Deep Learning (EDL), a method of quantifying model (epistemic) uncertainty in DNN classifiers. We use evidence derived from the logit values of target labels to compute the components of our non-conformity score function: the heuristic notion of uncertainty in CP, uncertainty surprisal, and expected utility. Our extensive experimental evaluation demonstrates that ECP outperforms three state-of-the-art methods for generating CP sets, in terms of set size and adaptivity, while maintaining coverage of true labels.
|
http://arxiv.org/pdf/2406.10787v2
|
[
"Hamed Karimi",
"Reza Samavi"
] |
2024-07-02T02:38:45Z
|
2024-06-16T03:00:16Z
|
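ECP's non-conformity score is built from evidential uncertainty; the split-conformal set construction it plugs into looks roughly like this generic sketch (any per-class non-conformity score works here, and the array shapes are assumptions):

```python
import numpy as np

def conformal_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    # cal_scores / test_scores: (n, K) per-class non-conformity scores;
    # cal_labels: integer true labels of the calibration points.
    n = len(cal_labels)
    s = cal_scores[np.arange(n), cal_labels]
    # Finite-sample-corrected conformal quantile.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(s, level, method="higher")
    # A test point's prediction set: every class scoring at or below q.
    return [np.where(row <= q)[0] for row in test_scores]
```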
2407.01887
|
Beyond Numeric Awards: In-Context Dueling Bandits with LLM Agents
|
In-context decision-making is an important capability of artificial general intelligence, which Large Language Models (LLMs) have effectively demonstrated in various scenarios. However, LLMs often face challenges when dealing with numerical contexts, and limited attention has been paid to evaluating their performance through preference feedback generated by the environment. This paper investigates the performance of LLMs as decision-makers in the context of Dueling Bandits (DB). We first evaluate the performance of LLMs by comparing GPT-3.5-Turbo, GPT-4, and GPT-4-Turbo against established DB algorithms. Our results reveal that LLMs, particularly GPT-4-Turbo, quickly identify the Condorcet winner, thus outperforming existing state-of-the-art algorithms in terms of weak regret. Nevertheless, LLMs struggle to converge even when explicitly prompted to do so, and are sensitive to prompt variations. To overcome these issues, we introduce an LLM-augmented algorithm, IF-Enhanced LLM, which takes advantage of both the in-context decision-making capabilities of LLMs and the theoretical guarantees inherited from classic DB algorithms. The design of such an algorithm sheds light on how to enhance trustworthiness for LLMs used in decision-making tasks where performance robustness matters. We show that IF-Enhanced LLM has theoretical guarantees on both weak and strong regret. Our experimental results validate that IF-Enhanced LLM is robust even with noisy and adversarial prompts.
|
http://arxiv.org/pdf/2407.01887v1
|
[
"Fanzeng Xia",
"Hao Liu",
"Yisong Yue",
"Tongxin Li"
] |
2024-07-02T02:18:14Z
|
2024-07-02T02:18:14Z
|
2407.01886
|
Core Knowledge Learning Framework for Graph Adaptation and Scalability
Learning
|
Graph classification is a pivotal challenge in machine learning, especially within the realm of graph-based data, given its importance in numerous real-world applications such as social network analysis, recommendation systems, and bioinformatics. Despite its significance, graph classification faces several hurdles, including adapting to diverse prediction tasks, training across multiple target domains, and handling small-sample prediction scenarios. Current methods often tackle these challenges individually, leading to fragmented solutions that lack a holistic approach to the overarching problem. In this paper, we propose an algorithm aimed at addressing the aforementioned challenges. By incorporating insights from various types of tasks, our method aims to enhance adaptability, scalability, and generalizability in graph classification. Motivated by the recognition that the underlying subgraph plays a crucial role in GNN prediction, while the remainder is task-irrelevant, we introduce the Core Knowledge Learning (CKL) framework for graph adaptation and scalability learning. CKL comprises several key modules, including the core subgraph knowledge submodule, graph domain adaptation module, and few-shot learning module for downstream tasks. Each module is tailored to tackle specific challenges in graph classification, such as domain shift, label inconsistencies, and data scarcity. By learning the core subgraph of the entire graph, we focus on the most pertinent features for task relevance. Consequently, our method offers benefits such as improved model performance, increased domain adaptability, and enhanced robustness to domain variations. Experimental results demonstrate significant performance enhancements achieved by our method compared to state-of-the-art approaches.
|
http://arxiv.org/pdf/2407.01886v1
|
[
"Bowen Zhang",
"Zhichao Huang",
"Genan Dai",
"Guangning Xu",
"Xiaomao Fan",
"Hu Huang"
] |
2024-07-02T02:16:43Z
|
2024-07-02T02:16:43Z
|
2402.15638
|
Fair Resource Allocation in Multi-Task Learning
|
By jointly learning multiple tasks, multi-task learning (MTL) can leverage the shared knowledge across tasks, resulting in improved data efficiency and generalization performance. However, a major challenge in MTL lies in the presence of conflicting gradients, which can hinder the fair optimization of some tasks and subsequently impede MTL's ability to achieve better overall performance. Inspired by fair resource allocation in communication networks, we formulate the optimization of MTL as a utility maximization problem, where the loss decreases across tasks are maximized under different fairness measurements. To solve this problem, we propose FairGrad, a novel MTL optimization method. FairGrad not only enables flexible emphasis on certain tasks but also achieves a theoretical convergence guarantee. Extensive experiments demonstrate that our method can achieve state-of-the-art performance among gradient manipulation methods on a suite of multi-task benchmarks in supervised learning and reinforcement learning. Furthermore, we incorporate the idea of $\alpha$-fairness into loss functions of various MTL methods. Extensive empirical studies demonstrate that their performance can be significantly enhanced. Code is provided at \url{https://github.com/OptMN-Lab/fairgrad}.
|
http://arxiv.org/pdf/2402.15638v2
|
[
"Hao Ban",
"Kaiyi Ji"
] |
2024-07-02T02:09:32Z
|
2024-02-23T22:46:14Z
|
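The $\alpha$-fairness idea mentioned at the end of the FairGrad abstract has a compact closed form; a sketch of the utility applied to positive per-task improvement signals (the choice of signal and of `alpha` are illustrative, and this is not FairGrad's full gradient-manipulation scheme):

```python
import torch

def alpha_fair_utility(x, alpha=2.0, eps=1e-8):
    # Classic alpha-fair utility: log(x) at alpha = 1, otherwise
    # x^(1 - alpha) / (1 - alpha); larger alpha emphasizes worse-off tasks.
    x = torch.clamp(x, min=eps)
    if abs(alpha - 1.0) < 1e-12:
        return torch.log(x).sum()
    return (x.pow(1.0 - alpha) / (1.0 - alpha)).sum()

# Example: aggregate positive per-task loss decreases into one objective.
improvements = torch.tensor([0.5, 0.1, 0.9])
objective = alpha_fair_utility(improvements, alpha=2.0)
```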
2407.00077
|
Differentially Private Graph Diffusion with Applications in Personalized
PageRanks
|
Graph diffusion, which iteratively propagates real-valued substances over the graph, is used in numerous graph/network-involved applications. However, releasing diffusion vectors may reveal sensitive linking information in the data, such as transaction information in financial network data. Moreover, protecting the privacy of graph data is challenging due to its interconnected nature. This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees by using noisy diffusion iterates. The algorithm injects Laplace noise per diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), which, to the best of our knowledge, is the first effort that analyzes PABI with Laplace noise and provides relevant applications. We also introduce a novel Infinity-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI more applicable in practice. We evaluate this framework by applying it to Personalized PageRank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions.
|
http://arxiv.org/pdf/2407.00077v2
|
[
"Rongzhe Wei",
"Eli Chien",
"Pan Li"
] |
2024-07-02T02:06:17Z
|
2024-06-22T15:32:53Z
|
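A toy NumPy sketch of the noisy-iterate mechanism the abstract above describes, i.e., per-iteration Laplace noise plus degree-based thresholding; the noise scale and threshold are placeholders, not the paper's sensitivity calibration or PABI accounting:

```python
import numpy as np

def noisy_diffusion(A, x0, steps, noise_scale, deg_threshold):
    # Row-normalized random-walk transition matrix from adjacency A.
    deg = A.sum(axis=1)
    P = A / np.maximum(deg[:, None], 1)
    x = x0.astype(float)
    rng = np.random.default_rng(0)
    for _ in range(steps):
        x = P.T @ x                                # one diffusion step
        x = rng.laplace(loc=x, scale=noise_scale)  # per-iteration Laplace noise
        x[deg < deg_threshold] = 0.0               # degree-based thresholding
    return x
```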
2407.02540
|
Analytical Solution of a Three-layer Network with a Matrix Exponential
Activation Function
|
In practice, deeper networks tend to be more powerful than shallow ones, but this has not been understood theoretically. In this paper, we find the analytical solution of a three-layer network with a matrix exponential activation function, i.e., $$f(X)=W_3\exp(W_2\exp(W_1X)), \quad X\in \mathbb{C}^{d\times d},$$ which admits analytical solutions for the equations $$Y_1=f(X_1),\quad Y_2=f(X_2)$$ for $X_1,X_2,Y_1,Y_2$ under only invertibility assumptions. Our proof shows the power of depth and of a non-linear activation function, since a one-layer network can only solve one equation, i.e., $Y=WX$.
|
http://arxiv.org/pdf/2407.02540v1
|
[
"Kuo Gai",
"Shihua Zhang"
] |
2024-07-02T01:59:34Z
|
2024-07-02T01:59:34Z
|
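The forward map in the abstract above is directly computable; a small sketch using SciPy's matrix exponential (real random matrices for simplicity, whereas the paper works over $\mathbb{C}^{d\times d}$):

```python
import numpy as np
from scipy.linalg import expm

def f(X, W1, W2, W3):
    # Three-layer network with the matrix exponential as activation:
    # f(X) = W3 @ expm(W2 @ expm(W1 @ X)), all matrices d x d.
    return W3 @ expm(W2 @ expm(W1 @ X))

d = 3
rng = np.random.default_rng(0)
W1, W2, W3, X = (rng.standard_normal((d, d)) for _ in range(4))
Y = f(X, W1, W2, W3)  # a d x d output
```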
2401.12963
|
AutoRT: Embodied Foundation Models for Large Scale Orchestration of
Robotic Agents
|
Foundation models that incorporate language, vision, and more recently actions have revolutionized the ability to harness internet scale data to reason about useful tasks. However, one of the key challenges of training embodied foundation models is the lack of data grounded in the physical world. In this paper, we propose AutoRT, a system that leverages existing foundation models to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision. AutoRT leverages vision-language models (VLMs) for scene understanding and grounding, and further uses large language models (LLMs) for proposing diverse and novel instructions to be performed by a fleet of robots. Guiding data collection by tapping into the knowledge of foundation models enables AutoRT to effectively reason about autonomy tradeoffs and safety while significantly scaling up data collection for robot learning. We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies. We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction following data collection robots that can align to human preferences.
|
http://arxiv.org/pdf/2401.12963v2
|
[
"Michael Ahn",
"Debidatta Dwibedi",
"Chelsea Finn",
"Montse Gonzalez Arenas",
"Keerthana Gopalakrishnan",
"Karol Hausman",
"Brian Ichter",
"Alex Irpan",
"Nikhil Joshi",
"Ryan Julian",
"Sean Kirmani",
"Isabel Leal",
"Edward Lee",
"Sergey Levine",
"Yao Lu",
"Isabel Leal",
"Sharath Maddineni",
"Kanishka Rao",
"Dorsa Sadigh",
"Pannag Sanketi",
"Pierre Sermanet",
"Quan Vuong",
"Stefan Welker",
"Fei Xia",
"Ted Xiao",
"Peng Xu",
"Steve Xu",
"Zhuo Xu"
] |
2024-07-02T01:52:26Z
|
2024-01-23T18:45:54Z
|
2312.02327
|
FLea: Addressing Data Scarcity and Label Skew in Federated Learning via
Privacy-preserving Feature Augmentation
|
Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server. However, existing FL methods still face challenges when dealing with scarce and label-skewed data across devices, resulting in local model overfitting and drift, consequently hindering the performance of the global model. In response to these challenges, we propose a pioneering framework called \textit{FLea}, incorporating the following key components: \textit{i)} A global feature buffer that stores activation-target pairs shared from multiple clients to support local training. This design mitigates local model drift caused by the absence of certain classes; \textit{ii)} A feature augmentation approach based on local and global activation mix-ups for local training. This strategy enlarges the training samples, thereby reducing the risk of local overfitting; \textit{iii)} An obfuscation method to minimize the correlation between intermediate activations and the source data, enhancing the privacy of shared features. To verify the superiority of \textit{FLea}, we conduct extensive experiments using a wide range of data modalities, simulating different levels of local data scarcity and label skew. The results demonstrate that \textit{FLea} consistently outperforms state-of-the-art FL counterparts (in 13 of the 18 experimented settings, the improvement is over $5\%$) while concurrently mitigating the privacy vulnerabilities associated with shared features. Code is available at https://github.com/XTxiatong/FLea.git
|
http://arxiv.org/abs/2312.02327v2
|
[
"Tong Xia",
"Abhirup Ghosh",
"Xinchi Qiu",
"Cecilia Mascolo"
] |
2024-07-02T01:37:34Z
|
2023-12-04T20:24:09Z
|
2407.01873
|
Automated Text Scoring in the Age of Generative AI for the GPU-poor
|
Current research on generative language models (GLMs) for automated text scoring (ATS) has focused almost exclusively on querying proprietary models via Application Programming Interfaces (APIs). Yet such practices raise issues around transparency and security, and these methods offer little in the way of efficiency or customizability. With the recent proliferation of smaller, open-source models, there is the option to explore GLMs with computers equipped with modest, consumer-grade hardware, that is, for the "GPU poor." In this study, we analyze the performance and efficiency of open-source, small-scale GLMs for ATS. Results show that GLMs can be fine-tuned to achieve adequate, though not state-of-the-art, performance. In addition to ATS, we take small steps towards analyzing models' capacity for generating feedback by prompting GLMs to explain their scores. Model-generated feedback shows promise, but requires more rigorous evaluation focused on targeted use cases.
|
http://arxiv.org/pdf/2407.01873v1
|
[
"Christopher Michael Ormerod",
"Alexander Kwako"
] |
2024-07-02T01:17:01Z
|
2024-07-02T01:17:01Z
|
2407.01869
|
Let it shine: Autofluorescence of Papanicolaou-stain improves AI-based
cytological oral cancer detection
|
Oral cancer is a global health challenge. It is treatable if detected early, but it is often fatal in late stages. There is a shift from the invasive and time-consuming tissue sampling and histological examination, toward non-invasive brush biopsies and cytological examination. Reliable computer-assisted methods are essential for cost-effective and accurate cytological analysis, but the lack of detailed cell-level annotations impairs model effectiveness. This study aims to improve AI-based oral cancer detection using multimodal imaging and deep fusion. We combine brightfield and fluorescence whole slide microscopy imaging to analyze Papanicolaou-stained liquid-based cytology slides of brush biopsies collected from both healthy and cancer patients. Due to limited cytological annotations, we utilize a weakly supervised deep learning approach using only patient-level labels. We evaluate various multimodal fusion strategies, including early, late, and three recent intermediate fusion methods. Our results show: (i) fluorescence imaging of Papanicolaou-stained samples provides substantial diagnostic information; (ii) multimodal fusion enhances classification and cancer detection accuracy over single-modality methods. Intermediate fusion is the leading method among the studied approaches. Specifically, the Co-Attention Fusion Network (CAFNet) model excels with an F1 score of 83.34% and accuracy of 91.79%, surpassing human performance on the task. Additional tests highlight the need for precise image registration to optimize multimodal analysis benefits. This study advances cytopathology by combining deep learning and multimodal imaging to enhance early, non-invasive detection of oral cancer, improving diagnostic accuracy and streamlining clinical workflows. The developed pipeline is also applicable in other cytological settings. Our codes and dataset are available online for further research.
|
http://arxiv.org/pdf/2407.01869v1
|
[
"Wenyi Lian",
"Joakim Lindblad",
"Christina Runow Stark",
"Jan-Michaél Hirsch",
"Nataša Sladoje"
] |
2024-07-02T01:05:35Z
|
2024-07-02T01:05:35Z
|
2311.18237
|
Knowledge Transfer from Vision Foundation Models for Efficient Training
of Small Task-specific Models
|
Vision Foundation Models (VFMs) pretrained on massive datasets exhibit impressive performance on various downstream tasks, especially with limited labeled target data. However, due to their high inference compute cost, these models cannot be deployed for many real-world applications. Motivated by this, we ask the following important question, "How can we leverage the knowledge from a large VFM to train a small task-specific model for a new target task with limited labeled training data?", and propose a simple task-oriented knowledge transfer approach as a highly effective solution to this problem. Our experimental results on five target tasks show that the proposed approach outperforms task-agnostic VFM distillation, web-scale CLIP pretraining, supervised ImageNet pretraining, and self-supervised DINO pretraining by up to 11.6%, 22.1%, 13.7%, and 29.8%, respectively. Furthermore, the proposed approach also demonstrates up to 9x, 4x and 15x reduction in pretraining compute cost when compared to task-agnostic VFM distillation, ImageNet pretraining and DINO pretraining, respectively, while outperforming them. We also show that the dataset used for transferring knowledge has a significant effect on the final target task performance, and introduce a retrieval-augmented knowledge transfer strategy that uses web-scale image retrieval to curate effective transfer sets.
|
http://arxiv.org/pdf/2311.18237v3
|
[
"Raviteja Vemulapalli",
"Hadi Pouransari",
"Fartash Faghri",
"Sachin Mehta",
"Mehrdad Farajtabar",
"Mohammad Rastegari",
"Oncel Tuzel"
] |
2024-07-02T00:22:16Z
|
2023-11-30T04:07:44Z
|
2407.01856
|
Adaptive RKHS Fourier Features for Compositional Gaussian Process Models
|
Deep Gaussian Processes (DGPs) leverage a compositional structure to model non-stationary processes. DGPs typically rely on local inducing point approximations across intermediate GP layers. Recent advances in DGP inference have shown that incorporating global Fourier features from Reproducing Kernel Hilbert Space (RKHS) can enhance the DGPs' capability to capture complex non-stationary patterns. This paper extends the use of these features to compositional GPs involving linear transformations. In particular, we introduce Ordinary Differential Equation (ODE)-based RKHS Fourier features that allow for adaptive amplitude and phase modulation through convolution operations. This convolutional formulation relates our work to recently proposed deep latent force models, a multi-layer structure designed for modelling nonlinear dynamical systems. By embedding these adjustable RKHS Fourier features within a doubly stochastic variational inference framework, our model exhibits improved predictive performance across various regression tasks.
|
http://arxiv.org/pdf/2407.01856v1
|
[
"Xinxing Shi",
"Thomas Baldwin-McDonald",
"Mauricio A. Álvarez"
] |
2024-07-01T23:56:56Z
|
2024-07-01T23:56:56Z
|
2407.01853
|
Improving Multilingual Instruction Finetuning via Linguistically Natural
and Diverse Datasets
|
Advancements in Large Language Models (LLMs) have significantly enhanced instruction-following capabilities. However, most Instruction Fine-Tuning (IFT) datasets are predominantly in English, limiting model performance in other languages. Traditional methods for creating multilingual IFT datasets, such as translating existing English IFT datasets or converting existing NLP datasets into IFT datasets by templating, struggle to capture linguistic nuances and ensure prompt (instruction) diversity. To address this issue, we propose a novel method for collecting multilingual IFT datasets that preserves linguistic naturalness and ensures prompt diversity. This approach leverages English-focused LLMs, monolingual corpora, and a scoring function to create high-quality, diversified IFT datasets in multiple languages. Experiments demonstrate that LLMs finetuned using these IFT datasets show notable improvements in both generative and discriminative tasks, indicating enhanced language comprehension by LLMs in non-English contexts. Specifically, on the multilingual summarization task, LLMs using our IFT dataset achieved 17.57% and 15.23% improvements over LLMs fine-tuned with translation-based and template-based datasets, respectively.
|
http://arxiv.org/pdf/2407.01853v1
|
[
"Sathish Reddy Indurthi",
"Wenxuan Zhou",
"Shamil Chollampatt",
"Ravi Agrawal",
"Kaiqiang Song",
"Lingxiao Zhao",
"Chenguang Zhu"
] |
2024-07-01T23:47:09Z
|
2024-07-01T23:47:09Z
|
2211.15771
|
Approximate Gibbs Sampler for Efficient Inference of Hierarchical
Bayesian Models for Grouped Count Data
|
Hierarchical Bayesian Poisson regression models (HBPRMs) provide a flexible modeling approach of the relationship between predictors and count response variables. The applications of HBPRMs to large-scale datasets require efficient inference algorithms due to the high computational cost of inferring many model parameters based on random sampling. Although Markov Chain Monte Carlo (MCMC) algorithms have been widely used for Bayesian inference, sampling using this class of algorithms is time-consuming for applications with large-scale data and time-sensitive decision-making, partially due to the non-conjugacy of many models. To overcome this limitation, this research develops an approximate Gibbs sampler (AGS) to efficiently learn the HBPRMs while maintaining the inference accuracy. In the proposed sampler, the data likelihood is approximated with Gaussian distribution such that the conditional posterior of the coefficients has a closed-form solution. Numerical experiments using real and synthetic datasets with small and large counts demonstrate the superior performance of AGS in comparison to the state-of-the-art sampling algorithm, especially for large datasets.
|
http://arxiv.org/abs/2211.15771v2
|
[
"Jin-Zhu Yu",
"Hiba Baroud"
] |
2024-07-01T23:29:26Z
|
2022-11-28T21:00:55Z
|
2407.02538
|
CGRclust: Chaos Game Representation for Twin Contrastive Clustering of
Unlabelled DNA Sequences
|
This study proposes CGRclust, a novel combination of unsupervised twin contrastive clustering of Chaos Game Representations (CGR) of DNA sequences, with convolutional neural networks (CNNs). To the best of our knowledge, CGRclust is the first method to use unsupervised learning for image classification (herein applied to two-dimensional CGR images) for clustering datasets of DNA sequences. CGRclust overcomes the limitations of traditional sequence classification methods by leveraging unsupervised twin contrastive learning to detect distinctive sequence patterns, without requiring DNA sequence alignment or biological/taxonomic labels. CGRclust accurately clustered twenty-five diverse datasets, with sequence lengths ranging from 664 bp to 100 kbp, including mitochondrial genomes of fish, fungi, and protists, as well as viral whole genome assemblies and synthetic DNA sequences. Compared with three recent clustering methods for DNA sequences (DeLUCS, iDeLUCS, and MeShClust v3.0), CGRclust is the only method that surpasses 81.70% accuracy across all four taxonomic levels tested for mitochondrial DNA genomes of fish. Moreover, CGRclust also consistently demonstrates superior performance across all the viral genomic datasets. The high clustering accuracy of CGRclust on these twenty-five datasets, which vary significantly in terms of sequence length, number of genomes, number of clusters, and level of taxonomy, demonstrates its robustness, scalability, and versatility.
|
http://arxiv.org/pdf/2407.02538v1
|
[
"Fatemeh Alipour",
"Kathleen A. Hill",
"Lila Kari"
] |
2024-07-01T23:24:05Z
|
2024-07-01T23:24:05Z
|
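For readers unfamiliar with Chaos Game Representation, a minimal sketch of turning a DNA sequence into the kind of 2D CGR image that CGRclust clusters; the corner assignment and resolution `k` follow one common convention and are not necessarily CGRclust's exact choices:

```python
import numpy as np

CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr_image(seq, k=6):
    # 2^k x 2^k CGR: start at the unit square's center and repeatedly move
    # halfway toward the corner of the next nucleotide, counting visits.
    n = 2 ** k
    img = np.zeros((n, n))
    x, y = 0.5, 0.5
    for base in seq:
        if base not in CORNERS:
            continue  # skip ambiguous bases such as 'N'
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        img[min(int(y * n), n - 1), min(int(x * n), n - 1)] += 1
    return img

patch = cgr_image("ACGTACGTGGGTTTAACCG")
```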
2405.11277
|
Action Controlled Paraphrasing
|
Recent studies have demonstrated the potential to control paraphrase generation, such as through syntax, which has broad applications in various downstream tasks. However, these methods often require detailed parse trees or syntactic exemplars, countering human-like paraphrasing behavior in language use. Furthermore, an inference gap exists, as control specifications are only available during training but not during inference. In this work, we propose a new setup for controlled paraphrase generation. Specifically, we represent user intent as action tokens, embedding and concatenating them with text embeddings, thus flowing together into a self-attention encoder for representation fusion. To address the inference gap, we introduce an optional action token as a placeholder that encourages the model to determine the appropriate action independently when users' intended actions are not provided. Experimental results show that our method successfully enables precise action-controlled paraphrasing and preserves or even enhances performance compared to conventional uncontrolled methods when actions are not given. Our findings promote the concept of action-controlled paraphrasing for a more user-centered design.
|
http://arxiv.org/pdf/2405.11277v2
|
[
"Ning Shi",
"Zijun Wu"
] |
2024-07-01T23:23:41Z
|
2024-05-18T12:26:31Z
|
2406.19380
|
TabReD: A Benchmark of Tabular Machine Learning in-the-Wild
|
Benchmarks that closely reflect downstream application scenarios are essential for the streamlined adoption of new research in tabular machine learning (ML). In this work, we examine existing tabular benchmarks and find two common characteristics of industry-grade tabular data that are underrepresented in the datasets available to the academic community. First, tabular data often changes over time in real-world deployment scenarios. This impacts model performance and requires time-based train and test splits for correct model evaluation. Yet, existing academic tabular datasets often lack timestamp metadata to enable such evaluation. Second, a considerable portion of datasets in production settings stem from extensive data acquisition and feature engineering pipelines. For each specific dataset, this can have a different impact on the absolute and relative number of predictive, uninformative, and correlated features, which in turn can affect model selection. To fill the aforementioned gaps in academic benchmarks, we introduce TabReD -- a collection of eight industry-grade tabular datasets covering a wide range of domains from finance to food delivery services. We assess a large number of tabular ML models in the feature-rich, temporally-evolving data setting facilitated by TabReD. We demonstrate that evaluation on time-based data splits leads to different methods ranking, compared to evaluation on random splits more common in academic benchmarks. Furthermore, on the TabReD datasets, MLP-like architectures and GBDT show the best results, while more sophisticated DL models are yet to prove their effectiveness.
|
http://arxiv.org/pdf/2406.19380v2
|
[
"Ivan Rubachev",
"Nikolay Kartashev",
"Yury Gorishniy",
"Artem Babenko"
] |
2024-07-01T23:01:33Z
|
2024-06-27T17:55:31Z
|
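The time-based evaluation TabReD advocates is simple to set up; a sketch with pandas (the column name and test fraction are placeholders):

```python
import pandas as pd

def time_based_split(df, time_col, test_frac=0.2):
    # Train on the past, test on the most recent fraction of rows,
    # in contrast to the random splits common in academic benchmarks.
    df = df.sort_values(time_col)
    cut = int(len(df) * (1 - test_frac))
    return df.iloc[:cut], df.iloc[cut:]
```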
2210.09475
|
FIMP: Foundation Model-Informed Message Passing for Graph Neural
Networks
|
Foundation models have achieved remarkable success across many domains, relying on pretraining over vast amounts of data. Graph-structured data often lacks the same scale as unstructured data, making the development of graph foundation models challenging. In this work, we propose Foundation-Informed Message Passing (FIMP), a Graph Neural Network (GNN) message-passing framework that leverages pretrained non-textual foundation models in graph-based tasks. We show that the self-attention layers of foundation models can effectively be repurposed on graphs to perform cross-node attention-based message-passing. Our model is evaluated on a real-world image network dataset and two biological applications (single-cell RNA sequencing data and fMRI brain activity recordings) in both finetuned and zero-shot settings. FIMP outperforms strong baselines, demonstrating that it can effectively leverage state-of-the-art foundation models in graph tasks.
|
http://arxiv.org/pdf/2210.09475v5
|
[
"Syed Asad Rizvi",
"Nazreen Pallikkavaliyaveetil",
"David Zhang",
"Zhuoyang Lyu",
"Nhi Nguyen",
"Haoran Lyu",
"Benjamin Christensen",
"Josue Ortega Caro",
"Antonio H. O. Fonseca",
"Emanuele Zappala",
"Maryam Bagherian",
"Christopher Averill",
"Chadi G. Abdallah",
"Amin Karbasi",
"Rex Ying",
"Maria Brbic",
"Rahul Madhav Dhodapkar",
"David van Dijk"
] |
2024-07-01T22:54:01Z
|
2022-10-17T23:44:30Z
|
2310.11324
|
Quantifying Language Models' Sensitivity to Spurious Features in Prompt
Design or: How I learned to start worrying about prompt formatting
|
As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance. Because choices in prompt design can strongly influence model behavior, this design process is critical in effectively using any modern pre-trained generative language model. In this work, we focus on LLM sensitivity to a quintessential class of meaning-preserving design choices: prompt formatting. We find that several widely used open-source LLMs are extremely sensitive to subtle changes in prompt formatting in few-shot settings, with performance differences of up to 76 accuracy points when evaluated using LLaMA-2-13B. Sensitivity remains even when increasing model size, the number of few-shot examples, or performing instruction tuning. Our analysis suggests that work evaluating LLMs with prompting-based methods would benefit from reporting a range of performance across plausible prompt formats, instead of the currently-standard practice of reporting performance on a single format. We also show that format performance only weakly correlates between models, which puts into question the methodological validity of comparing models with an arbitrarily chosen, fixed prompt format. To facilitate systematic analysis we propose FormatSpread, an algorithm that rapidly evaluates a sampled set of plausible prompt formats for a given task, and reports the interval of expected performance without accessing model weights. Furthermore, we present a suite of analyses that characterize the nature of this sensitivity, including exploring the influence of particular atomic perturbations and the internal representation of particular formats.
|
http://arxiv.org/pdf/2310.11324v2
|
[
"Melanie Sclar",
"Yejin Choi",
"Yulia Tsvetkov",
"Alane Suhr"
] |
2024-07-01T22:28:01Z
|
2023-10-17T15:03:30Z
|
2407.01837
|
To Switch or Not to Switch? Balanced Policy Switching in Offline
Reinforcement Learning
|
Reinforcement learning (RL) -- finding the optimal behaviour (also referred to as policy) maximizing the collected long-term cumulative reward -- is among the most influential approaches in machine learning with a large number of successful applications. In several decision problems, however, one faces the possibility of policy switching -- changing from the current policy to a new one -- which incurs a non-negligible cost (examples include the shifting of the currently applied educational technology, modernization of a computing cluster, and the introduction of a new webpage design), and in the decision one is limited to using historical data, without the possibility of further online interaction. Despite the inevitable importance of this offline learning scenario, to the best of our knowledge, very little effort has been made to tackle the key problem of balancing between the gain and the cost of switching in a flexible and principled way. Leveraging ideas from the area of optimal transport, we initiate the systematic study of policy switching in offline RL. We establish fundamental properties and design a Net Actor-Critic algorithm for the proposed novel switching formulation. Numerical experiments demonstrate the efficiency of our approach on multiple benchmarks from Gymnasium.
|
http://arxiv.org/pdf/2407.01837v1
|
[
"Tao Ma",
"Xuzhi Yang",
"Zoltan Szabo"
] |
2024-07-01T22:24:31Z
|
2024-07-01T22:24:31Z
|
2407.01825
|
Empirical Tests of Optimization Assumptions in Deep Learning
|
There is a significant gap between our theoretical understanding of optimization algorithms used in deep learning and their practical performance. Theoretical development usually focuses on proving convergence guarantees under a variety of different assumptions, which are themselves often chosen based on a rough combination of intuitive match to practice and analytical convenience. The theory/practice gap may then arise because of the failure to prove a theorem under such assumptions, or because the assumptions do not reflect reality. In this paper, we carefully measure the degree to which these assumptions are capable of explaining modern optimization algorithms by developing new empirical metrics that closely track the key quantities that must be controlled in theoretical analysis. All of our tested assumptions (including typical modern assumptions based on bounds on the Hessian) fail to reliably capture optimization performance. This highlights a need for new empirical verification of analytical assumptions used in theoretical analysis.
|
http://arxiv.org/pdf/2407.01825v1
|
[
"Hoang Tran",
"Qinzi Zhang",
"Ashok Cutkosky"
] |
2024-07-01T21:56:54Z
|
2024-07-01T21:56:54Z
|
2402.03232
|
Explicit Flow Matching: On The Theory of Flow Matching Algorithms with
Applications
|
This paper proposes a novel method, Explicit Flow Matching (ExFM), for training and analyzing flow-based generative models. ExFM leverages a theoretically grounded loss function, the ExFM loss (a tractable form of the Flow Matching (FM) loss), to demonstrably reduce variance during training, leading to faster convergence and more stable learning. Based on a theoretical analysis of these formulations, we derive exact expressions for the vector field (and the score in stochastic cases) for model examples (in particular, for separating multiple exponents), and in some simple cases, exact solutions for trajectories. In addition, we investigate simple cases of diffusion generative models by adding a stochastic term and obtain an explicit expression for the score. While the paper emphasizes the theoretical underpinnings of ExFM, it also showcases its effectiveness through numerical experiments on various datasets, including high-dimensional ones. Compared to traditional FM methods, ExFM achieves superior performance in terms of both learning speed and final outcomes.
|
http://arxiv.org/pdf/2402.03232v2
|
[
"Gleb Ryzhakov",
"Svetlana Pavlova",
"Egor Sevriugov",
"Ivan Oseledets"
] |
2024-07-01T21:28:19Z
|
2024-02-05T17:45:12Z
|
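For context, the baseline Flow Matching objective that the ExFM loss is a tractable, variance-reduced form of can be sketched as follows; `v_theta` is any velocity-field network, linear interpolant paths are assumed, and this shows only the standard Monte Carlo FM target rather than ExFM's exact conditional expectation:

```python
import torch

def fm_loss(v_theta, x0, x1):
    # Linear interpolant x_t = (1 - t) x0 + t x1 with target velocity x1 - x0.
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    # Regress the network's velocity field onto the interpolant's velocity.
    return ((v_theta(xt, t) - target) ** 2).mean()
```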
2301.02819
|
ExcelFormer: A neural network surpassing GBDTs on tabular data
|
Data organized in tabular format is ubiquitous in real-world applications, and users often craft tables with biased feature definitions and flexibly set prediction targets of their interests. Thus, a rapid development of a robust, effective, dataset-versatile, user-friendly tabular prediction approach is highly desired. While Gradient Boosting Decision Trees (GBDTs) and existing deep neural networks (DNNs) have been extensively utilized by professional users, they present several challenges for casual users, particularly: (i) the dilemma of model selection due to their different dataset preferences, and (ii) the need for heavy hyperparameter searching, failing which their performances are deemed inadequate. In this paper, we delve into this question: Can we develop a deep learning model that serves as a "sure bet" solution for a wide range of tabular prediction tasks, while also being user-friendly for casual users? We delve into three key drawbacks of deep tabular models, encompassing: (P1) lack of rotational variance property, (P2) large data demand, and (P3) over-smooth solution. We propose ExcelFormer, addressing these challenges through a semi-permeable attention module that effectively constrains the influence of less informative features to break the DNNs' rotational invariance property (for P1), data augmentation approaches tailored for tabular data (for P2), and attentive feedforward network to boost the model fitting capability (for P3). These designs collectively make ExcelFormer a "sure bet" solution for diverse tabular datasets. Extensive and stratified experiments conducted on real-world datasets demonstrate that our model outperforms previous approaches across diverse tabular data prediction tasks, and this framework can be friendly to casual users, offering ease of use without the heavy hyperparameter tuning.
|
http://arxiv.org/pdf/2301.02819v7
|
[
"Jintai Chen",
"Jiahuan Yan",
"Qiyuan Chen",
"Danny Ziyi Chen",
"Jian Wu",
"Jimeng Sun"
] |
2024-07-01T21:26:51Z
|
2023-01-07T09:42:03Z
|
2407.01812
|
Equivariant Diffusion Policy
|
Recent work has shown diffusion models are an effective approach to learning the multimodal distributions arising from demonstration data in behavior cloning. However, a drawback of this approach is the need to learn a denoising function, which is significantly more complex than learning an explicit policy. In this work, we propose Equivariant Diffusion Policy, a novel diffusion policy learning method that leverages domain symmetries to obtain better sample efficiency and generalization in the denoising function. We theoretically analyze the $\mathrm{SO}(2)$ symmetry of full 6-DoF control and characterize when a diffusion model is $\mathrm{SO}(2)$-equivariant. We furthermore evaluate the method empirically on a set of 12 simulation tasks in MimicGen, and show that it obtains a success rate that is, on average, 21.9% higher than the baseline Diffusion Policy. We also evaluate the method on a real-world system to show that effective policies can be learned with relatively few training samples, whereas the baseline Diffusion Policy cannot.
|
http://arxiv.org/pdf/2407.01812v1
|
[
"Dian Wang",
"Stephen Hart",
"David Surovik",
"Tarik Kelestemur",
"Haojie Huang",
"Haibo Zhao",
"Mark Yeatman",
"Jiuguang Wang",
"Robin Walters",
"Robert Platt"
] |
2024-07-01T21:23:26Z
|
2024-07-01T21:23:26Z
|
2407.06206
|
The Impact of an XAI-Augmented Approach on Binary Classification with
Scarce Data
|
Point-of-Care Ultrasound (POCUS) is the practice of clinicians conducting and interpreting ultrasound scans right at the patient's bedside. However, the expertise needed to interpret these images is considerable and may not always be present in emergency situations. This reality makes algorithms such as machine learning classifiers extremely valuable to augment human decisions. POCUS devices are becoming available at a reasonable cost in the size of a mobile phone. The challenge of turning POCUS devices into life-saving tools is that interpretation of ultrasound images requires specialist training and experience. Unfortunately, the difficulty to obtain positive training images represents an important obstacle to building efficient and accurate classifiers. Hence, the problem we try to investigate is how to explore strategies to increase accuracy of classifiers trained with scarce data. We hypothesize that training with a few data instances may not suffice for classifiers to generalize causing them to overfit. Our approach uses an Explainable AI-Augmented approach to help the algorithm learn more from less and potentially help the classifier better generalize.
|
http://arxiv.org/pdf/2407.06206v1
|
[
"Ximing Wen",
"Rosina O. Weber",
"Anik Sen",
"Darryl Hannan",
"Steven C. Nesbit",
"Vincent Chan",
"Alberto Goffi",
"Michael Morris",
"John C. Hunninghake",
"Nicholas E. Villalobos",
"Edward Kim",
"Christopher J. MacLellan"
] |
2024-07-01T21:09:31Z
|
2024-07-01T21:09:31Z
|
2407.01804
|
DCoM: Active Learning for All Learners
|
Deep Active Learning (AL) techniques can be effective in reducing annotation costs for training deep models. However, their effectiveness in low- and high-budget scenarios seems to require different strategies, and achieving optimal results across varying budget scenarios remains a challenge. In this study, we introduce Dynamic Coverage & Margin mix (DCoM), a novel active learning approach designed to bridge this gap. Unlike existing strategies, DCoM dynamically adjusts its strategy, considering the competence of the current model. Through theoretical analysis and empirical evaluations on diverse datasets, including challenging computer vision tasks, we demonstrate DCoM's ability to overcome the cold start problem and consistently improve results across different budgetary constraints. Thus DCoM achieves state-of-the-art performance in both low- and high-budget regimes.
|
http://arxiv.org/pdf/2407.01804v1
|
[
"Inbal Mishal",
"Daphna Weinshall"
] |
2024-07-01T21:06:34Z
|
2024-07-01T21:06:34Z
|
2407.02536
|
Reducing False Discoveries in Statistically-Significant
Regional-Colocation Mining: A Summary of Results
|
Given a set \emph{S} of spatial feature types, its feature instances, a study area, and a neighbor relationship, the goal is to find pairs $<$a region ($r_{g}$), a subset \emph{C} of \emph{S}$>$ such that \emph{C} is a statistically significant regional-colocation pattern in $r_{g}$. This problem is important for applications in various domains including ecology, economics, and sociology. The problem is computationally challenging due to the exponential number of regional colocation patterns and candidate regions. Previously, we proposed a miner \cite{10.1145/3557989.3566158} that finds statistically significant regional colocation patterns. However, the numerous simultaneous statistical inferences raise the risk of false discoveries (also known as the multiple comparisons problem) and carry a high computational cost. We propose a novel algorithm, namely, the multiple comparisons regional colocation miner (MultComp-RCM), which uses a Bonferroni correction. Theoretical analysis, experimental evaluation, and case study results show that the proposed method reduces both the false discovery rate and computational cost.
|
http://arxiv.org/abs/2407.02536v1
|
[
"Subhankar Ghosh",
"Jayant Gupta",
"Arun Sharma",
"Shuai An",
"Shashi Shekhar"
] |
2024-07-01T21:03:04Z
|
2024-07-01T21:03:04Z
|
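The Bonferroni correction at the heart of MultComp-RCM is a one-liner; a minimal sketch:

```python
def bonferroni_significant(p_values, alpha=0.05):
    # With m simultaneous tests, a pattern is declared significant only if
    # its p-value falls below alpha / m, bounding the family-wise error rate.
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```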
2407.01800
|
Normalization and effective learning rates in reinforcement learning
|
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature, with several works highlighting diverse benefits such as improving loss landscape conditioning and combatting overestimation bias. However, normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate. This becomes problematic in continual learning settings, where the resulting effective learning rate schedule may decay to near zero too quickly relative to the timescale of the learning problem. We propose to make the learning rate schedule explicit with a simple re-parameterization which we call Normalize-and-Project (NaP), which couples the insertion of normalization layers with weight projection, ensuring that the effective learning rate remains constant throughout training. This technique reveals itself as a powerful analytical tool to better understand learning rate schedules in deep reinforcement learning, and as a means of improving robustness to nonstationarity in synthetic plasticity loss benchmarks along with both the single-task and sequential variants of the Arcade Learning Environment. We also show that our approach can be easily applied to popular architectures such as ResNets and transformers while recovering and in some cases even slightly improving the performance of the base model in common stationary benchmarks.
|
http://arxiv.org/pdf/2407.01800v1
|
[
"Clare Lyle",
"Zeyu Zheng",
"Khimya Khetarpal",
"James Martens",
"Hado van Hasselt",
"Razvan Pascanu",
"Will Dabney"
] |
2024-07-01T20:58:01Z
|
2024-07-01T20:58:01Z
|
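A minimal sketch of the weight-projection half of Normalize-and-Project described above; the normalization layers are assumed to be already in place, and freezing the initial weight norm as the projection target is one illustrative choice, not necessarily the paper's:

```python
import torch

@torch.no_grad()
def project_weights(module, target_norm):
    # After each optimizer step, rescale the weight back to a fixed norm so
    # that norm growth cannot silently decay the effective learning rate.
    w = module.weight
    w.mul_(target_norm / (w.norm() + 1e-12))

layer = torch.nn.Linear(8, 8)
target = layer.weight.norm().item()  # e.g., freeze the initial norm
# ... optimizer.step() ...
project_weights(layer, target)
```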
2311.15876
|
End-to-End Breast Cancer Radiotherapy Planning via LMMs with Consistency
Embedding
|
Recent advances in AI foundation models have significant potential for lightening the clinical workload by mimicking the comprehensive and multi-faceted approaches used by medical professionals. In the field of radiation oncology, the integration of multiple modalities holds great importance, so the opportunity of foundational model is abundant. Inspired by this, here we present RO-LMM, a multi-purpose, comprehensive large multimodal model (LMM) tailored for the field of radiation oncology. This model effectively manages a series of tasks within the clinical workflow, including clinical context summarization, radiation treatment plan suggestion, and plan-guided target volume segmentation by leveraging the capabilities of LMM. In particular, to perform consecutive clinical tasks without error accumulation, we present a novel Consistency Embedding Fine-Tuning (CEFTune) technique, which boosts LMM's robustness to noisy inputs while preserving the consistency of handling clean inputs. We further extend this concept to LMM-driven segmentation framework, leading to a novel Consistency Embedding Segmentation~(CESEG) techniques. Experimental results including multi-centre validation confirm that our RO-LMM with CEFTune and CESEG results in promising performance for multiple clinical tasks with generalization capabilities.
|
http://arxiv.org/pdf/2311.15876v3
|
[
"Kwanyoung Kim",
"Yujin Oh",
"Sangjoon Park",
"Hwa Kyung Byun",
"Joongyo Lee",
"Jin Sung Kim",
"Yong Bae Kim",
"Jong Chul Ye"
] |
2024-07-01T20:51:59Z
|
2023-11-27T14:49:06Z
|
2407.01795
|
Honor Among Bandits: No-Regret Learning for Online Fair Division
|
We consider the problem of online fair division of indivisible goods to players when there are a finite number of types of goods and player values are drawn from distributions with unknown means. Our goal is to maximize social welfare subject to allocating the goods fairly in expectation. When a player's value for an item is unknown at the time of allocation, we show that this problem reduces to a variant of (stochastic) multi-armed bandits, where there exists an arm for each player's value for each type of good. At each time step, we choose a distribution over arms which determines how the next item is allocated. We consider two sets of fairness constraints for this problem: envy-freeness in expectation and proportionality in expectation. Our main result is the design of an explore-then-commit algorithm that achieves $\tilde{O}(T^{2/3})$ regret while maintaining either fairness constraint. This result relies on unique properties fundamental to fair-division constraints that allow faster rates of learning, despite the restricted action space.
|
http://arxiv.org/pdf/2407.01795v1
|
[
"Ariel D. Procaccia",
"Benjamin Schiffer",
"Shirley Zhang"
] |
2024-07-01T20:44:52Z
|
2024-07-01T20:44:52Z
|
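A heavily simplified sketch of the explore-then-commit pattern the abstract above builds on, with an exploration phase of roughly $T^{2/3}$ rounds; the fairness constraints central to the paper are omitted here, and `sample_value` is a hypothetical value oracle:

```python
import numpy as np

def explore_then_commit(sample_value, n_players, n_types, T):
    # Explore: uniformly sample player-type values for about T^(2/3) rounds.
    explore = int(T ** (2 / 3))
    sums = np.zeros((n_players, n_types))
    counts = np.zeros((n_players, n_types))
    rng = np.random.default_rng(0)
    for t in range(explore):
        g = t % n_types                 # type of the arriving good
        p = rng.integers(n_players)     # explore players uniformly
        sums[p, g] += sample_value(p, g)
        counts[p, g] += 1
    means = sums / np.maximum(counts, 1)
    # Commit: allocate each remaining good welfare-greedily under the
    # estimated means (no fairness constraint in this toy version).
    return lambda good_type: int(np.argmax(means[:, good_type]))
```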
2407.01794
|
Conditionally valid Probabilistic Conformal Prediction
|
We develop a new method for creating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution $P_{Y \mid X}$. Most existing methods, such as conformalized quantile regression and probabilistic conformal prediction, only offer marginal coverage guarantees. Our approach extends these methods to achieve conditional coverage, which is essential for many practical applications. While exact conditional guarantees are impossible without assumptions on the data distribution, we provide non-asymptotic bounds that explicitly depend on the quality of the available estimate of the conditional distribution. Our confidence sets are highly adaptive to the local structure of the data, making them particularly useful in high heteroskedasticity situations. We demonstrate the effectiveness of our approach through extensive simulations, showing that it outperforms existing methods in terms of conditional coverage and improves the reliability of statistical inference in a wide range of applications.
|
http://arxiv.org/pdf/2407.01794v1
|
[
"Vincent Plassier",
"Alexander Fishkov",
"Maxim Panov",
"Eric Moulines"
] |
2024-07-01T20:44:48Z
|
2024-07-01T20:44:48Z
|
2407.01790
|
Label-free Neural Semantic Image Synthesis
|
Recent work has shown great progress in integrating spatial conditioning to control large, pre-trained text-to-image diffusion models. Despite these advances, existing methods describe the spatial image content using hand-crafted conditioning inputs, which are either semantically ambiguous (e.g., edges) or require expensive manual annotations (e.g., semantic segmentation). To address these limitations, we propose a new label-free way of conditioning diffusion models to enable fine-grained spatial control. We introduce the concept of neural semantic image synthesis, which uses neural layouts extracted from pre-trained foundation models as conditioning. Neural layouts are advantageous as they provide rich descriptions of the desired image, containing both semantics and detailed geometry of the scene. We experimentally show that images synthesized via neural semantic image synthesis achieve similar or superior pixel-level alignment of semantic classes compared to those created using expensive semantic label maps. At the same time, they capture better semantics, instance separation, and object orientation than other label-free conditioning options, such as edges or depth. Moreover, we show that images generated by neural layout conditioning can effectively augment real data for training various perception tasks.
|
http://arxiv.org/pdf/2407.01790v1
|
[
"Jiayi Wang",
"Kevin Alexander Laube",
"Yumeng Li",
"Jan Hendrik Metzen",
"Shin-I Cheng",
"Julio Borges",
"Anna Khoreva"
] |
2024-07-01T20:30:23Z
|
2024-07-01T20:30:23Z
|
2407.01784
|
Analyzing Persuasive Strategies in Meme Texts: A Fusion of Language
Models with Paraphrase Enrichment
|
This paper describes our approach to hierarchical multi-label detection of persuasion techniques in meme texts. Our model, developed as a part of the recent SemEval task, is based on fine-tuning individual language models (BERT, XLM-RoBERTa, and mBERT) and leveraging a mean-based ensemble model in addition to dataset augmentation through paraphrase generation from ChatGPT. The scope of the study encompasses enhancing model performance through innovative training techniques and data augmentation strategies. The problem addressed is the effective identification and classification of multiple persuasive techniques in meme texts, a task complicated by the diversity and complexity of such content. The objective of the paper is to improve detection accuracy by refining model training methods and examining the impact of balanced versus unbalanced training datasets. Novelty in the results and discussion lies in the finding that training with paraphrases enhances model performance, yet a balanced training set proves more advantageous than a larger unbalanced one. Additionally, the analysis reveals the potential pitfalls of indiscriminate incorporation of paraphrases from diverse distributions, which can introduce substantial noise. Results with the SemEval 2024 data confirm these insights, demonstrating improved model efficacy with the proposed methods.
|
http://arxiv.org/pdf/2407.01784v1
|
[
"Kota Shamanth Ramanath Nayak",
"Leila Kosseim"
] |
2024-07-01T20:25:20Z
|
2024-07-01T20:25:20Z
|
2407.01781
|
fVDB: A Deep-Learning Framework for Sparse, Large-Scale, and
High-Performance Spatial Intelligence
|
We present fVDB, a novel GPU-optimized framework for deep learning on large-scale 3D data. fVDB provides a complete set of differentiable primitives to build deep learning architectures for common tasks in 3D learning such as convolution, pooling, attention, ray-tracing, meshing, etc. fVDB simultaneously provides a much larger feature set (primitives and operators) than established frameworks with no loss in efficiency: our operators match or exceed the performance of other frameworks with narrower scope. Furthermore, fVDB can process datasets with much larger footprint and spatial resolution than prior works, while providing a competitive memory footprint on small inputs. To achieve this combination of versatility and performance, fVDB relies on a single novel VDB index grid acceleration structure paired with several key innovations including GPU accelerated sparse grid construction, convolution using tensorcores, fast ray tracing kernels using a Hierarchical Digital Differential Analyzer algorithm (HDDA), and jagged tensors. Our framework is fully integrated with PyTorch enabling interoperability with existing pipelines, and we demonstrate its effectiveness on a number of representative tasks such as large-scale point-cloud segmentation, high resolution 3D generative modeling, unbounded scale Neural Radiance Fields, and large-scale point cloud reconstruction.
|
http://arxiv.org/abs/2407.01781v1
|
[
"Francis Williams",
"Jiahui Huang",
"Jonathan Swartz",
"Gergely Klár",
"Vijay Thakkar",
"Matthew Cong",
"Xuanchi Ren",
"Ruilong Li",
"Clement Fuji-Tsang",
"Sanja Fidler",
"Eftychios Sifakis",
"Ken Museth"
] |
2024-07-01T20:20:33Z
|
2024-07-01T20:20:33Z
|
2403.08627
|
Multifidelity linear regression for scientific machine learning from
scarce data
|
Machine learning (ML) methods, which fit to data the parameters of a given parameterized model class, have garnered significant interest as potential methods for learning surrogate models for complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, generating high-fidelity data on which to train ML models is expensive, and the available budget for generating training data is limited, so that high-fidelity training data are scarce. ML models trained on scarce data have high variance, resulting in poor expected generalization performance. We propose a new multifidelity training approach for scientific machine learning via linear regression that exploits the scientific context where data of varying fidelities and costs are available: for example, high-fidelity data may be generated by an expensive fully resolved physics simulation whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data within an approximate control variate framework to define new multifidelity Monte Carlo estimators for linear regression models. We provide bias and variance analysis of our new estimators that guarantee the approach's accuracy and improved robustness to scarce high-fidelity data. Numerical results demonstrate that our multifidelity training approach achieves similar accuracy to the standard high-fidelity only approach with orders-of-magnitude reduced high-fidelity data requirements.
|
http://arxiv.org/pdf/2403.08627v2
|
[
"Elizabeth Qian",
"Dayoung Kang",
"Vignesh Sella",
"Anirban Chaudhuri"
] |
2024-07-01T20:11:32Z
|
2024-03-13T15:40:17Z
|
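The approximate-control-variate idea underlying the estimators in the abstract above can be sketched for a simple mean; the paper's estimators target regression coefficients, so this shows only the core mechanism, not their method:

```python
import numpy as np

def mf_mean_estimate(hi, lo_paired, lo_extra):
    # Control-variate estimator: correct the scarce high-fidelity mean with
    # the cheap low-fidelity mean estimated on many more samples.
    # `hi` and `lo_paired` are evaluated at the same inputs.
    hi, lo_paired, lo_extra = map(np.asarray, (hi, lo_paired, lo_extra))
    beta = np.cov(hi, lo_paired)[0, 1] / np.var(lo_paired, ddof=1)
    lo_all = np.concatenate([lo_paired, lo_extra])
    return hi.mean() + beta * (lo_all.mean() - lo_paired.mean())
```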
2407.01776
|
Federated Binary Matrix Factorization using Proximal Optimization
|
Identifying informative components in binary data is an essential task in many research areas, including life sciences, social sciences, and recommendation systems. Boolean matrix factorization (BMF) is a family of methods that performs this task by efficiently factorizing the data. In real-world settings, the data is often distributed across stakeholders and required to stay private, prohibiting the straightforward application of BMF. To adapt BMF to this context, we approach the problem from a federated-learning perspective, while building on a state-of-the-art continuous binary matrix factorization relaxation to BMF that enables efficient gradient-based optimization. We propose to only share the relaxed component matrices, which are aggregated centrally using a proximal operator that regularizes for binary outcomes. We show the convergence of our federated proximal gradient descent algorithm and provide differential privacy guarantees. Our extensive empirical evaluation demonstrates that our algorithm outperforms, in terms of quality and efficacy, federation schemes of state-of-the-art BMF methods on a diverse set of real-world and synthetic data.
|
http://arxiv.org/pdf/2407.01776v1
|
[
"Sebastian Dalleiger",
"Jilles Vreeken",
"Michael Kamp"
] |
2024-07-01T20:10:24Z
|
2024-07-01T20:10:24Z
|
2407.01769
|
Improving Trip Mode Choice Modeling Using Ensemble Synthesizer (ENSY)
|
Accurate classification of mode choice datasets is crucial for transportation planning and decision-making processes. However, conventional classification models often struggle to adequately capture the nuanced patterns of minority classes within these datasets, leading to sub-optimal accuracy. In response to this challenge, we present the Ensemble Synthesizer (ENSY), a novel data model tailored specifically for enhancing classification accuracy in mode choice datasets, which leverages probability distributions for data augmentation. In our study, ENSY demonstrates remarkable efficacy by nearly quadrupling the F1 score of minority classes and improving overall classification accuracy by nearly 3%. To assess its performance comprehensively, we compare ENSY against various augmentation techniques including Random Oversampling, SMOTE-NC, and CTGAN. Through experimentation, ENSY consistently outperforms these methods across various scenarios, underscoring its robustness and effectiveness.
|
http://arxiv.org/pdf/2407.01769v1
|
[
"Amirhossein Parsi",
"Melina Jafari",
"Sina Sabzekar",
"Zahra Amini"
] |
2024-07-01T19:59:29Z
|
2024-07-01T19:59:29Z
|
2406.14595
|
Adversaries Can Misuse Combinations of Safe Models
|
Developers try to evaluate whether an AI system can be misused by adversaries before releasing it; for example, they might test whether a model enables cyberoffense, user manipulation, or bioterrorism. In this work, we show that individually testing models for misuse is inadequate; adversaries can misuse combinations of models even when each individual model is safe. The adversary accomplishes this by first decomposing tasks into subtasks, then solving each subtask with the best-suited model. For example, an adversary might solve challenging-but-benign subtasks with an aligned frontier model, and easy-but-malicious subtasks with a weaker misaligned model. We study two decomposition methods: manual decomposition where a human identifies a natural decomposition of a task, and automated decomposition where a weak model generates benign tasks for a frontier model to solve, then uses the solutions in-context to solve the original task. Using these decompositions, we empirically show that adversaries can create vulnerable code, explicit images, python scripts for hacking, and manipulative tweets at much higher rates with combinations of models than either individual model. Our work suggests that even perfectly-aligned frontier systems can enable misuse without ever producing malicious outputs, and that red-teaming efforts should extend beyond single models in isolation.
|
http://arxiv.org/pdf/2406.14595v2
|
[
"Erik Jones",
"Anca Dragan",
"Jacob Steinhardt"
] |
2024-07-01T19:58:00Z
|
2024-06-20T17:43:18Z
|
2305.16791
|
On the Generalization and Approximation Capacities of Neural Controlled
Differential Equations
|
Neural Controlled Differential Equations (NCDEs) are a state-of-the-art tool for supervised learning with irregularly sampled time series (Kidger, 2020). However, no theoretical analysis of their performance has been provided yet, and it remains unclear in particular how the irregularity of the time series affects their predictions. By merging the rich theory of controlled differential equations (CDE) and Lipschitz-based measures of the complexity of deep neural nets, we take a first step towards the theoretical understanding of NCDEs. Our first result is a generalization bound for this class of predictors that depends on the regularity of the time series data. Second, we leverage the continuity of the flow of CDEs to provide a detailed analysis of both the sampling-induced bias and the approximation bias. Regarding this last result, we show how classical approximation results on neural nets may transfer to NCDEs. Our theoretical results are validated through a series of experiments.
|
http://arxiv.org/pdf/2305.16791v4
|
[
"Linus Bleistein",
"Agathe Guilloux"
] |
2024-07-01T19:29:57Z
|
2023-05-26T10:02:32Z
|
2407.01749
|
Invariant Correlation of Representation with Label
|
The Invariant Risk Minimization (IRM) approach aims to address the challenge of domain generalization by training a feature representation that remains invariant across multiple environments. However, in noisy environments, IRM-related techniques such as IRMv1 and VREx may be unable to achieve the optimal IRM solution, primarily due to erroneous optimization directions. To address this issue, we introduce ICorr (an abbreviation for \textbf{I}nvariant \textbf{Corr}elation), a novel approach designed to surmount the above challenge in noisy settings. Additionally, we dig into a case study to analyze why previous methods may lose ground while ICorr can succeed. Through a theoretical lens, particularly from a causality perspective, we illustrate that the invariant correlation of representation with label is a necessary condition for the optimal invariant predictor in noisy environments, whereas the optimization motivations for other methods may not be. Furthermore, we empirically demonstrate the effectiveness of ICorr by comparing it with other domain generalization methods on various noisy datasets.
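A hedged sketch of a penalty in the spirit of ICorr as summarized above: it encourages the correlation between a (here scalar) representation and the label to agree across training environments by penalizing the variance of per-environment correlations. The paper's actual objective may differ; every name and the variance form below are illustrative assumptions.

```python
# Illustrative penalty: make representation-label correlation invariant
# across environments. This is a sketch, not the paper's exact loss.
import torch

def corr(z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Pearson correlation between a scalar representation and the label."""
    z, y = z - z.mean(), y - y.mean()
    return (z * y).mean() / (z.std() * y.std() + 1e-8)

def icorr_penalty(envs: list[tuple[torch.Tensor, torch.Tensor]]) -> torch.Tensor:
    """Variance of per-environment correlations; zero when all agree."""
    cs = torch.stack([corr(z, y) for z, y in envs])
    return ((cs - cs.mean()) ** 2).mean()
```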
|
http://arxiv.org/pdf/2407.01749v1
|
[
"Gaojie Jin",
"Ronghui Mu",
"Xinping Yi",
"Xiaowei Huang",
"Lijun Zhang"
] |
2024-07-01T19:27:28Z
|
2024-07-01T19:27:28Z
|
2407.01745
|
Adaptive control of reaction-diffusion PDEs via neural
operator-approximated gain kernels
|
Neural operator approximations of the gain kernels in PDE backstepping have emerged as a viable method for implementing controllers in real time. With such an approach, one approximates the gain kernel, which maps the plant coefficient into the solution of a PDE, with a neural operator. It is in adaptive control that the benefit of the neural operator is realized, as the kernel PDE solution needs to be computed online, for every updated estimate of the plant coefficient. We extend the neural operator methodology from adaptive control of a hyperbolic PDE to adaptive control of a benchmark parabolic PDE (a reaction-diffusion equation with a spatially-varying and unknown reaction coefficient). We prove global stability and asymptotic regulation of the plant state for a Lyapunov design of parameter adaptation. The key technical challenge of the result is handling the 2D nature of the gain kernels and proving that the target system with two distinct sources of perturbation terms, due to the parameter estimation error and due to the neural approximation error, is Lyapunov stable. To verify our theoretical result, we present simulations achieving calculation speedups up to 45x relative to traditional finite difference solvers for every timestep in the simulation trajectory.
|
http://arxiv.org/pdf/2407.01745v1
|
[
"Luke Bhan",
"Yuanyuan Shi",
"Miroslav Krstic"
] |
2024-07-01T19:24:36Z
|
2024-07-01T19:24:36Z
|
2404.05748
|
Analyzing heterogeneity in Alzheimer Disease using multimodal normative
modeling on imaging-based ATN biomarkers
|
INTRODUCTION: Previous studies have applied normative modeling on a single neuroimaging modality to investigate Alzheimer Disease (AD) heterogeneity. We employed a deep learning-based multimodal normative framework to analyze individual-level variation across ATN (amyloid-tau-neurodegeneration) imaging biomarkers. METHODS: We selected cross-sectional discovery (n = 665) and replication cohorts (n = 430) with available T1-weighted MRI, amyloid and tau PET. Normative modeling estimated individual-level abnormal deviations in amyloid-positive individuals compared to amyloid-negative controls. Regional abnormality patterns were mapped at different clinical group levels to assess intra-group heterogeneity. An individual-level disease severity index (DSI) was calculated using both the spatial extent and magnitude of abnormal deviations across ATN. RESULTS: Greater intra-group heterogeneity in ATN abnormality patterns was observed in more severe clinical stages of AD. Higher DSI was associated with worse cognitive function and increased risk of disease progression. DISCUSSION: Subject-specific abnormality maps across ATN reveal the heterogeneous impact of AD on the brain.
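One way to read the DSI construction above, as a hedged sketch: z-score a subject's regional measures against the amyloid-negative controls, then combine the spatial extent and magnitude of abnormal deviations. The |z| > 1.96 threshold and the product form of the index are assumptions for illustration, not the study's exact definition.

```python
# Hedged sketch of a deviation-based disease severity index (DSI).
import numpy as np

def severity_index(subject: np.ndarray, controls: np.ndarray,
                   thresh: float = 1.96) -> float:
    """Combine extent and magnitude of abnormal regional deviations."""
    z = (subject - controls.mean(axis=0)) / (controls.std(axis=0) + 1e-8)
    abnormal = np.abs(z) > thresh
    extent = abnormal.mean()                                 # fraction of regions
    magnitude = np.abs(z[abnormal]).mean() if abnormal.any() else 0.0
    return float(extent * magnitude)
```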
|
http://arxiv.org/pdf/2404.05748v2
|
[
"Sayantan Kumar",
"Tom Earnest",
"Braden Yang",
"Deydeep Kothapalli",
"Andrew J. Aschenbrenner",
"Jason Hassenstab",
"Chengie Xiong",
"Beau Ances",
"John Morris",
"Tammie L. S. Benzinger",
"Brian A. Gordon",
"Philip Payne",
"Aristeidis Sotiras"
] |
2024-07-01T19:13:36Z
|
2024-04-04T06:06:26Z
|
2302.06564
|
A Domain Decomposition-Based CNN-DNN Architecture for Model Parallel
Training Applied to Image Recognition Problems
|
Deep neural networks (DNNs) and, in particular, convolutional neural networks (CNNs) have brought significant advances in a wide range of modern computer application problems. However, the increasing availability of large datasets as well as the increasing computational power of modern computers lead to a steady growth in the complexity and size of DNN and CNN models, and thus, to longer training times. Hence, various methods and attempts have been developed to accelerate and parallelize the training of complex network architectures. In this work, a novel CNN-DNN architecture is proposed that naturally supports a model parallel training strategy and that is loosely inspired by two-level domain decomposition methods (DDM). First, local CNN models, that is, subnetworks, are defined that operate on overlapping or nonoverlapping parts of the input data, for example, sub-images. The subnetworks can be trained completely in parallel and independently of each other. Each subnetwork then outputs a local decision for the given machine learning problem which is exclusively based on the respective local input data. Subsequently, in a second step, an additional DNN model is trained which evaluates the local decisions of the local subnetworks and generates a final, global decision. In this paper, we apply the proposed architecture to image classification problems using CNNs. Experimental results are provided for different 2D image classification problems, a face recognition problem, and a classification problem for 3D computed tomography (CT) scans, using classical ResNet and VGG architectures. The results show that the proposed approach can significantly accelerate the required training time compared to the global model and, additionally, can also help to improve the accuracy of the underlying classification problem.
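A minimal sketch of the two-level architecture just described, with assumed layer sizes: independent local CNNs classify sub-images, and a small DNN combines their local decisions into a global one. All channel counts and the pooling size are illustrative assumptions.

```python
# Sketch of a two-level CNN-DNN: parallel local subnetworks + global head.
import torch
import torch.nn as nn

class LocalCNN(nn.Module):
    """Small CNN producing a local class decision for one sub-image."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, n_classes),
        )
    def forward(self, x):
        return self.net(x)

class CNNDNN(nn.Module):
    """Global DNN that evaluates the local decisions of the subnetworks."""
    def __init__(self, n_parts: int = 4, n_classes: int = 10):
        super().__init__()
        self.subnets = nn.ModuleList(LocalCNN(n_classes) for _ in range(n_parts))
        self.head = nn.Sequential(
            nn.Linear(n_parts * n_classes, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )
    def forward(self, patches):            # patches: list of sub-image batches
        local_logits = [m(p) for m, p in zip(self.subnets, patches)]
        return self.head(torch.cat(local_logits, dim=1))
```

In this sketch each `LocalCNN` can be trained on its sub-image independently (and hence in parallel) before the head is fit on their outputs, which is the model-parallel property the abstract emphasizes.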
|
http://arxiv.org/pdf/2302.06564v2
|
[
"Axel Klawonn",
"Martin Lanser",
"Janine Weber"
] |
2024-07-01T19:12:49Z
|
2023-02-13T18:06:59Z
|
2407.01725
|
DiscoveryBench: Towards Data-Driven Discovery with Large Language Models
|
Can the rapid advances in code generation, function calling, and data analysis using large language models (LLMs) help automate the search and verification of hypotheses purely from a set of provided datasets? To evaluate this question, we present DiscoveryBench, the first comprehensive benchmark that formalizes the multi-step process of data-driven discovery. The benchmark is designed to systematically assess current model capabilities in discovery tasks and provide a useful resource for improving them. Our benchmark contains 264 tasks collected across 6 diverse domains, such as sociology and engineering, by manually deriving discovery workflows from published papers to approximate the real-world challenges faced by researchers, where each task is defined by a dataset, its metadata, and a discovery goal in natural language. We additionally provide 903 synthetic tasks to conduct controlled evaluations across task complexity. Furthermore, our structured formalism of data-driven discovery enables a facet-based evaluation that provides useful insights into different failure modes. We evaluate several popular LLM-based reasoning frameworks using both open and closed LLMs as baselines on DiscoveryBench and find that even the best system scores only 25%. Our benchmark, thus, illustrates the challenges in autonomous data-driven discovery and serves as a valuable resource for the community to make progress.
|
http://arxiv.org/pdf/2407.01725v1
|
[
"Bodhisattwa Prasad Majumder",
"Harshit Surana",
"Dhruv Agarwal",
"Bhavana Dalvi Mishra",
"Abhijeetsingh Meena",
"Aryan Prakhar",
"Tirth Vora",
"Tushar Khot",
"Ashish Sabharwal",
"Peter Clark"
] |
2024-07-01T18:58:22Z
|
2024-07-01T18:58:22Z
|
2407.01718
|
Entropic Optimal Transport Eigenmaps for Nonlinear Alignment and Joint
Embedding of High-Dimensional Datasets
|
Embedding high-dimensional data into a low-dimensional space is an indispensable component of data analysis. In numerous applications, it is necessary to align and jointly embed multiple datasets from different studies or experimental conditions. Such datasets may share underlying structures of interest but exhibit individual distortions, resulting in misaligned embeddings using traditional techniques. In this work, we propose \textit{Entropic Optimal Transport (EOT) eigenmaps}, a principled approach for aligning and jointly embedding a pair of datasets with theoretical guarantees. Our approach leverages the leading singular vectors of the EOT plan matrix between two datasets to extract their shared underlying structure and align the datasets accordingly in a common embedding space. We interpret our approach as an inter-data variant of the classical Laplacian eigenmaps and diffusion maps embeddings, showing that it enjoys many favorable analogous properties. We then analyze a data-generative model where two observed high-dimensional datasets share latent variables on a common low-dimensional manifold, but each dataset is subject to data-specific translation, scaling, nuisance structures, and noise. We show that in a high-dimensional asymptotic regime, the EOT plan recovers the shared manifold structure by approximating a kernel function evaluated at the locations of the latent variables. Subsequently, we provide a geometric interpretation of our embedding by relating it to the eigenfunctions of population-level operators encoding the density and geometry of the shared manifold. Finally, we showcase the performance of our approach for data integration and embedding through simulations and analyses of real-world biological data, demonstrating its advantages over alternative methods in challenging scenarios.
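The pipeline described above can be sketched as follows, under assumptions not stated in the abstract (uniform marginals, squared-Euclidean cost, a fixed number of Sinkhorn iterations): compute the EOT plan between the two datasets, then embed each via the leading non-trivial singular vectors of the plan.

```python
# Illustrative EOT-eigenmaps pipeline: Sinkhorn plan + SVD embedding.
import numpy as np

def eot_plan(X: np.ndarray, Y: np.ndarray, eps: float = 1.0,
             iters: int = 200) -> np.ndarray:
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-C / eps)
    a = np.full(len(X), 1.0 / len(X))                   # uniform source marginal
    b = np.full(len(Y), 1.0 / len(Y))                   # uniform target marginal
    v = np.ones(len(Y))
    for _ in range(iters):                              # Sinkhorn scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                  # the EOT plan

def joint_embedding(X: np.ndarray, Y: np.ndarray, k: int = 2):
    P = eot_plan(X, Y)
    U, s, Vt = np.linalg.svd(P)
    # Skip the trivial leading singular pair, as in diffusion-maps conventions.
    return U[:, 1:k + 1] * s[1:k + 1], Vt[1:k + 1].T * s[1:k + 1]
```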
|
http://arxiv.org/pdf/2407.01718v1
|
[
"Boris Landa",
"Yuval Kluger",
"Rong Ma"
] |
2024-07-01T18:48:55Z
|
2024-07-01T18:48:55Z
|
2407.01704
|
Weight Clipping for Deep Continual and Reinforcement Learning
|
Many failures in deep continual and reinforcement learning are associated with increasing magnitudes of the weights, making them hard to change and potentially causing overfitting. While many methods address these learning failures, they often change the optimizer or the architecture, a complexity that hinders widespread adoption in various systems. In this paper, we focus on learning failures that are associated with increasing weight norm and we propose a simple technique that can be easily added on top of existing learning systems: clipping neural network weights to limit them to a specific range. We study the effectiveness of weight clipping in a series of supervised and reinforcement learning experiments. Our empirical results highlight the benefits of weight clipping for generalization, addressing loss of plasticity and policy collapse, and facilitating learning with a large replay ratio.
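The core technique this abstract proposes is simple enough to sketch directly; the clipping bound `kappa` and its default value below are assumed hyperparameters, not the paper's settings.

```python
# Weight clipping as an add-on to an existing training loop: after each
# optimizer step, project every parameter back into a fixed range.
import torch

def clip_weights(model: torch.nn.Module, kappa: float = 2.0) -> None:
    """Clamp all parameters of `model` into [-kappa, kappa] in place."""
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-kappa, kappa)

# Usage inside a standard loop:
#   loss.backward(); optimizer.step(); clip_weights(model, kappa=2.0)
```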
|
http://arxiv.org/pdf/2407.01704v1
|
[
"Mohamed Elsayed",
"Qingfeng Lan",
"Clare Lyle",
"A. Rupam Mahmood"
] |
2024-07-01T18:29:29Z
|
2024-07-01T18:29:29Z
|
2406.00053
|
Dual Process Learning: Controlling Use of In-Context vs. In-Weights
Strategies with Weight Forgetting
|
Language models have the ability to perform in-context learning (ICL), allowing them to flexibly adapt their behavior based on context. This contrasts with in-weights learning, where information is statically encoded in model parameters from iterated observations of the data. Despite this apparent ability to learn in-context, language models are known to struggle when faced with unseen or rarely seen tokens. Hence, we study $\textbf{structural in-context learning}$, which we define as the ability of a model to execute in-context learning on arbitrary tokens -- so called because the model must generalize on the basis of e.g. sentence structure or task structure, rather than semantic content encoded in token embeddings. An ideal model would be able to do both: flexibly deploy in-weights operations (in order to robustly accommodate ambiguous or unknown contexts using encoded semantic information) and structural in-context operations (in order to accommodate novel tokens). We study structural in-context algorithms in a simple part-of-speech setting using both practical and toy models. We find that active forgetting, a technique that was recently introduced to help models generalize to new languages, forces models to adopt structural in-context learning solutions. Finally, we introduce $\textbf{temporary forgetting}$, a straightforward extension of active forgetting that enables one to control how much a model relies on in-weights vs. in-context solutions. Importantly, temporary forgetting allows us to induce a $\textit{dual process strategy}$ where in-context and in-weights solutions coexist within a single model.
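A hedged sketch of the forgetting mechanisms described above: active forgetting periodically re-initializes the token-embedding matrix during training so the model cannot rely on semantics stored in embeddings, and temporary forgetting switches the resets off after a chosen step. The reset interval, initialization scale, and stopping step below are assumptions.

```python
# Sketch of active / temporary forgetting via periodic embedding resets.
import torch

def maybe_forget(embedding: torch.nn.Embedding, step: int,
                 interval: int = 1000, stop_step: int | None = None) -> None:
    """Re-initialize token embeddings every `interval` steps until `stop_step`."""
    if stop_step is not None and step >= stop_step:
        return  # temporary forgetting: resets are switched off from here on
    if step > 0 and step % interval == 0:
        with torch.no_grad():
            embedding.weight.normal_(mean=0.0, std=0.02)  # fresh init
```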
|
http://arxiv.org/pdf/2406.00053v2
|
[
"Suraj Anand",
"Michael A. Lepori",
"Jack Merullo",
"Ellie Pavlick"
] |
2024-07-01T18:23:43Z
|
2024-05-28T21:38:20Z
|
2405.12930
|
Pytorch-Wildlife: A Collaborative Deep Learning Framework for
Conservation
|
The alarming decline in global biodiversity, driven by various factors, underscores the urgent need for large-scale wildlife monitoring. In response, scientists have turned to automated deep learning methods for data processing in wildlife monitoring. However, applying these advanced methods in real-world scenarios is challenging due to their complexity and the need for specialized knowledge, primarily because of technical challenges and interdisciplinary barriers. To address these challenges, we introduce Pytorch-Wildlife, an open-source deep learning platform built on PyTorch. It is designed for creating, modifying, and sharing powerful AI models. This platform emphasizes usability and accessibility, making it approachable for individuals with limited or no technical background. It also offers a modular codebase to simplify feature expansion and further development. Pytorch-Wildlife offers an intuitive, user-friendly interface, accessible through local installation or Hugging Face, for animal detection and classification in images and videos. As two real-world applications, Pytorch-Wildlife has been utilized to train animal classification models for species recognition in the Amazon Rainforest and for invasive opossum recognition in the Galapagos Islands. The Opossum model achieves 98% accuracy, and the Amazon model has 92% recognition accuracy for 36 animals in 90% of the data. As Pytorch-Wildlife evolves, we aim to integrate more conservation tasks, addressing various environmental challenges. Pytorch-Wildlife is available at https://github.com/microsoft/CameraTraps.
|
http://arxiv.org/pdf/2405.12930v3
|
[
"Andres Hernandez",
"Zhongqi Miao",
"Luisa Vargas",
"Rahul Dodhia",
"Pablo Arbelaez",
"Juan M. Lavista Ferres"
] |
2024-07-01T18:22:38Z
|
2024-05-21T16:58:35Z
|
2312.03633
|
Exploring the Reversal Curse and Other Deductive Logical Reasoning in
BERT and GPT-Based Large Language Models
|
The term "Reversal Curse" refers to the scenario where auto-regressive decoder large language models (LLMs), such as ChatGPT, trained on "A is B" fail to learn "B is A," assuming that B and A are distinct and can be uniquely identified from each other, demonstrating a basic failure of logical deduction. This raises a red flag in the use of GPT models for certain general tasks such as constructing knowledge graphs, considering their adherence to this symmetric principle. In our study, we examined a bidirectional LLM, BERT, and found that it is immune to the reversal curse. Driven by ongoing efforts to construct biomedical knowledge graphs with LLMs, we also embarked on evaluating more complex but essential deductive reasoning capabilities. This process included first training encoder and decoder language models to master the intersection and union operations on two sets and then moving on to assess their capability to infer different combinations of union and intersection operations on three newly created sets. The findings showed that while both encoder and decoder language models, trained for tasks involving two sets (union/intersection), were proficient in such scenarios, they encountered difficulties when dealing with operations that included three sets (various combinations of union and intersection). Our research highlights the distinct characteristics of encoder and decoder models in simple and complex logical reasoning. In practice, the choice between BERT and GPT should be guided by the specific requirements and nature of the task at hand, leveraging their respective strengths in bidirectional context comprehension and sequence prediction.
|
http://arxiv.org/pdf/2312.03633v3
|
[
"Da Wu",
"Jingye Yang",
"Kai Wang"
] |
2024-07-01T18:13:24Z
|
2023-12-06T17:29:45Z
|
2407.01686
|
Everything that can be learned about a causal structure with latent
variables by observational and interventional probing schemes
|
What types of differences among causal structures with latent variables are impossible to distinguish by statistical data obtained by probing each visible variable? If the probing scheme is simply passive observation, then it is well-known that many different causal structures can realize the same joint probability distributions. Even for the simplest case of two visible variables, for instance, one cannot distinguish between one variable being a causal parent of the other and the two variables sharing a latent common cause. However, it is possible to distinguish between these two causal structures if we have recourse to more powerful probing schemes, such as the possibility of intervening on one of the variables and observing the other. Herein, we address the question of which causal structures remain indistinguishable even given the most informative types of probing schemes on the visible variables. We find that two causal structures remain indistinguishable if and only if they are both associated with the same mDAG structure (as defined by Evans (2016)). We also consider the question of when one causal structure dominates another in the sense that it can realize all of the joint probability distributions that can be realized by the other using a given probing scheme. (Equivalence of causal structures is the special case of mutual dominance.) Finally, we investigate to what extent one can weaken the probing schemes implemented on the visible variables and still have the same discrimination power as a maximally informative probing scheme.
|
http://arxiv.org/pdf/2407.01686v1
|
[
"Marina Maciel Ansanelli",
"Elie Wolfe",
"Robert W. Spekkens"
] |
2024-07-01T18:01:07Z
|
2024-07-01T18:01:07Z
|
2407.01531
|
Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for
Robot Learning
|
The increasing complexity of tasks in robotics demands efficient strategies for multitask and continual learning. Traditional models typically rely on a universal policy for all tasks, facing challenges such as high computational costs and catastrophic forgetting when learning new tasks. To address these issues, we introduce a sparse, reusable, and flexible policy, Sparse Diffusion Policy (SDP). By adopting Mixture of Experts (MoE) within a transformer-based diffusion policy, SDP selectively activates experts and skills, enabling efficient and task-specific learning without retraining the entire model. SDP not only reduces the burden of active parameters but also facilitates the seamless integration and reuse of experts across various tasks. Extensive experiments on diverse tasks in both simulation and the real world show that SDP 1) excels in multitask scenarios with negligible increases in active parameters, 2) prevents forgetting in continual learning of new tasks, and 3) enables efficient task transfer, offering a promising solution for advanced robotic applications. Demos and code can be found at https://forrest-110.github.io/sparse_diffusion_policy/.
|
http://arxiv.org/pdf/2407.01531v1
|
[
"Yixiao Wang",
"Yifei Zhang",
"Mingxiao Huo",
"Ran Tian",
"Xiang Zhang",
"Yichen Xie",
"Chenfeng Xu",
"Pengliang Ji",
"Wei Zhan",
"Mingyu Ding",
"Masayoshi Tomizuka"
] |
2024-07-01T17:59:56Z
|
2024-07-01T17:59:56Z
|
2407.01529
|
On the Abuse and Detection of Polyglot Files
|
A polyglot is a file that is valid in two or more formats. Polyglot files pose a problem for malware detection systems that route files to format-specific detectors/signatures, as well as file upload and sanitization tools. In this work we found that existing file-format and embedded-file detection tools, even those developed specifically for polyglot files, fail to reliably detect polyglot files used in the wild, leaving organizations vulnerable to attack. To address this issue, we studied the use of polyglot files by malicious actors in the wild, finding $30$ polyglot samples and $15$ attack chains that leveraged polyglot files. In this report, we highlight two well-known APTs whose cyber attack chains relied on polyglot files to bypass detection mechanisms. Using knowledge from our survey of polyglot usage in the wild -- the first of its kind -- we created a novel data set based on adversary techniques. We then trained a machine learning detection solution, PolyConv, using this data set. PolyConv achieves a precision-recall area-under-curve score of $0.999$ with an F1 score of $99.20$% for polyglot detection and $99.47$% for file-format identification, significantly outperforming all other tools tested. We developed a content disarmament and reconstruction tool, ImSan, that successfully sanitized $100$% of the tested image-based polyglots, which were the most common type found via the survey. Our work provides concrete tools and suggestions to enable defenders to better defend themselves against polyglot files, as well as directions for future work to create more robust file specifications and methods of disarmament.
|
http://arxiv.org/pdf/2407.01529v1
|
[
"Luke Koch",
"Sean Oesch",
"Amul Chaulagain",
"Jared Dixon",
"Matthew Dixon",
"Mike Huettal",
"Amir Sadovnik",
"Cory Watson",
"Brian Weber",
"Jacob Hartman",
"Richard Patulski"
] |
2024-07-01T17:59:54Z
|
2024-07-01T17:59:54Z
|
2407.01526
|
Scalable Nested Optimization for Deep Learning
|
Gradient-based optimization has been critical to the success of machine learning, updating a single set of parameters to minimize a single loss. A growing number of applications rely on a generalization of this, where we have a bilevel or nested optimization of which subsets of parameters update on different objectives nested inside each other. We focus on motivating examples of hyperparameter optimization and generative adversarial networks. However, naively applying classical methods often fails when we look at solving these nested problems on a large scale. In this thesis, we build tools for nested optimization that scale to deep learning setups.
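To make the bilevel setup concrete, here is a minimal, illustrative hypergradient computation that differentiates an outer (validation-style) loss through one unrolled inner gradient step with respect to a regularization hyperparameter. It is a toy sketch, not the scalable estimators the thesis develops; all losses and values are assumptions.

```python
# Toy bilevel step: hypergradient of an outer loss w.r.t. a hyperparameter,
# obtained by differentiating through one unrolled inner update.
import torch

w = torch.randn(5, requires_grad=True)        # inner (model) parameters
log_lam = torch.zeros(1, requires_grad=True)  # outer hyperparameter (log reg.)

def inner_loss(w):
    return (w ** 2).sum() + log_lam.exp() * w.abs().sum()  # regularized fit

def outer_loss(w):
    return ((w - 1.0) ** 2).sum()                          # validation-style loss

g = torch.autograd.grad(inner_loss(w), w, create_graph=True)[0]
w_new = w - 0.1 * g                            # one unrolled inner step
hypergrad = torch.autograd.grad(outer_loss(w_new), log_lam)[0]
```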
|
http://arxiv.org/pdf/2407.01526v1
|
[
"Jonathan Lorraine"
] |
2024-07-01T17:59:41Z
|
2024-07-01T17:59:41Z
|
2407.01521
|
Improving Diffusion Inverse Problem Solving with Decoupled Noise
Annealing
|
Diffusion models have recently achieved success in solving Bayesian inverse problems with learned data priors. Current methods build on top of the diffusion sampling process, where each denoising step makes small modifications to samples from the previous step. However, this process struggles to correct errors from earlier sampling steps, leading to worse performance in complicated nonlinear inverse problems, such as phase retrieval. To address this challenge, we propose a new method called Decoupled Annealing Posterior Sampling (DAPS) that relies on a novel noise annealing process. Specifically, we decouple consecutive steps in a diffusion sampling trajectory, allowing them to vary considerably from one another while ensuring their time-marginals anneal to the true posterior as we reduce noise levels. This approach enables the exploration of a larger solution space, improving the success rate for accurate reconstructions. We demonstrate that DAPS significantly improves sample quality and stability across multiple image restoration tasks, particularly in complicated nonlinear inverse problems. For example, we achieve a PSNR of 30.72dB on the FFHQ 256 dataset for phase retrieval, which is an improvement of 9.12dB compared to existing methods.
|
http://arxiv.org/pdf/2407.01521v1
|
[
"Bingliang Zhang",
"Wenda Chu",
"Julius Berner",
"Chenlin Meng",
"Anima Anandkumar",
"Yang Song"
] |
2024-07-01T17:59:23Z
|
2024-07-01T17:59:23Z
|
2407.01518
|
Towards Multimodal Open-Set Domain Generalization and Adaptation through
Self-supervision
|
The task of open-set domain generalization (OSDG) involves recognizing novel classes within unseen domains, which becomes more challenging with multiple modalities as input. Existing works have only addressed unimodal OSDG within the meta-learning framework, without considering multimodal scenarios. In this work, we introduce a novel approach to address Multimodal Open-Set Domain Generalization (MM-OSDG) for the first time, utilizing self-supervision. To this end, we introduce two innovative multimodal self-supervised pretext tasks: Masked Cross-modal Translation and Multimodal Jigsaw Puzzles. These tasks facilitate the learning of multimodal representative features, thereby enhancing generalization and open-class detection capabilities. Additionally, we propose a novel entropy weighting mechanism to balance the loss across different modalities. Furthermore, we extend our approach to also tackle the Multimodal Open-Set Domain Adaptation (MM-OSDA) problem, especially in scenarios where unlabeled data from the target domain is available. Extensive experiments conducted under MM-OSDG, MM-OSDA, and Multimodal Closed-Set DG settings on the EPIC-Kitchens and HAC datasets demonstrate the efficacy and versatility of the proposed approach. Our source code is available at https://github.com/donghao51/MOOSA.
|
http://arxiv.org/pdf/2407.01518v1
|
[
"Hao Dong",
"Eleni Chatzi",
"Olga Fink"
] |
2024-07-01T17:59:09Z
|
2024-07-01T17:59:09Z
|
2407.01517
|
Centerline Boundary Dice Loss for Vascular Segmentation
|
Vascular segmentation in medical imaging plays a crucial role in morphological and functional assessment. Traditional methods, like the centerline Dice (clDice) loss, ensure topology preservation but falter in capturing geometric details, especially under translation and deformation. The combination of clDice with traditional Dice loss can lead to diameter imbalance, favoring larger vessels. Addressing these challenges, we introduce the centerline boundary Dice (cbDice) loss function, which harmonizes topological integrity and geometric nuances, ensuring consistent segmentation across various vessel sizes. cbDice enriches the clDice approach by including boundary-aware aspects, thereby improving geometric detail recognition. It matches the performance of the boundary difference over union (B-DoU) loss through a mask-distance-based approach, enhancing translation sensitivity. Crucially, cbDice incorporates radius information from vascular skeletons, enabling uniform adaptation to vascular diameter changes and maintaining balance in branch growth and fracture impacts. Furthermore, we conducted a theoretical analysis of clDice variants (cl-X-Dice). We validated cbDice's efficacy on three diverse vascular segmentation datasets, encompassing both 2D and 3D, and binary and multi-class segmentation. Particularly, the method integrated with cbDice demonstrated outstanding performance on the MICCAI 2023 TopCoW Challenge dataset. Our code is made publicly available at: https://github.com/PengchengShi1220/cbDice.
|
http://arxiv.org/pdf/2407.01517v1
|
[
"Pengcheng Shi",
"Jiesi Hu",
"Yanwu Yang",
"Zilve Gao",
"Wei Liu",
"Ting Ma"
] |
2024-07-01T17:58:44Z
|
2024-07-01T17:58:44Z
|
2106.02797
|
Neural Distributed Source Coding
|
Distributed source coding (DSC) is the task of encoding an input in the absence of correlated side information that is only available to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as when the side information is available to it. While there is vast prior work on this topic, practical DSC has been limited to synthetic datasets and specific correlation structures. Here we present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions. Rather than relying on hand-crafted source modeling, our method utilizes a conditional Vector-Quantized Variational Autoencoder (VQ-VAE) to learn the distributed encoder and decoder. We evaluate our method on multiple datasets and show that our method can handle complex correlations and achieves state-of-the-art PSNR. Our code is made available at https://github.com/acnagle/neural-dsc.
|
http://arxiv.org/pdf/2106.02797v4
|
[
"Jay Whang",
"Alliot Nagle",
"Anish Acharya",
"Hyeji Kim",
"Alexandros G. Dimakis"
] |
2024-07-01T17:56:00Z
|
2021-06-05T04:50:43Z
|
2407.01502
|
AI Agents That Matter
|
AI agents are an exciting new research direction, and agent development is driven by benchmarks. Our analysis of current agent benchmarks and evaluation practices reveals several shortcomings that hinder their usefulness in real-world applications. First, there is a narrow focus on accuracy without attention to other metrics. As a result, SOTA agents are needlessly complex and costly, and the community has reached mistaken conclusions about the sources of accuracy gains. Our focus on cost in addition to accuracy motivates the new goal of jointly optimizing the two metrics. We design and implement one such optimization, showing its potential to greatly reduce cost while maintaining accuracy. Second, the benchmarking needs of model and downstream developers have been conflated, making it hard to identify which agent would be best suited for a particular application. Third, many agent benchmarks have inadequate holdout sets, and sometimes none at all. This has led to agents that are fragile because they take shortcuts and overfit to the benchmark in various ways. We prescribe a principled framework for avoiding overfitting. Finally, there is a lack of standardization in evaluation practices, leading to a pervasive lack of reproducibility. We hope that the steps we introduce for addressing these shortcomings will spur the development of agents that are useful in the real world and not just accurate on benchmarks.
|
http://arxiv.org/pdf/2407.01502v1
|
[
"Sayash Kapoor",
"Benedikt Stroebl",
"Zachary S. Siegel",
"Nitya Nadgir",
"Arvind Narayanan"
] |
2024-07-01T17:48:14Z
|
2024-07-01T17:48:14Z
|
2407.01501
|
Online Learning of Temporal Dependencies for Sustainable Foraging
Problem
|
The sustainable foraging problem is a dynamic environment testbed for exploring the forms of agent cognition in dealing with social dilemmas in a multi-agent setting. The agents need to resist the temptation of individual rewards through foraging and choose the collective long-term goal of sustainability. We investigate methods of online learning in Neuro-Evolution and Deep Recurrent Q-Networks to enable agents to attempt the problem one-shot as is often required by wicked social problems. We further explore whether learning temporal dependencies with Long Short-Term Memory may aid the agents in developing sustainable foraging strategies in the long term. We found that integrating Long Short-Term Memory helped a single agent develop sustainable strategies; however, it failed to help agents manage the social dilemma that arises in the multi-agent scenario.
|
http://arxiv.org/pdf/2407.01501v1
|
[
"John Payne",
"Aishwaryaprajna",
"Peter R. Lewis"
] |
2024-07-01T17:47:31Z
|
2024-07-01T17:47:31Z
|
2407.01499
|
Pictures Of MIDI: Controlled Music Generation via Graphical Prompts for
Image-Based Diffusion Inpainting
|
Recent years have witnessed significant progress in generative models for music, featuring diverse architectures that balance output quality, diversity, speed, and user control. This study explores a user-friendly graphical interface enabling the drawing of masked regions for inpainting by an Hourglass Diffusion Transformer (HDiT) model trained on MIDI piano roll images. To enhance note generation in specified areas, masked regions can be "repainted" with extra noise. The non-latent HDiT's linear scaling with pixel count allows efficient generation in pixel space, providing intuitive and interpretable controls such as masking throughout the network and removing the need to operate in compressed latent spaces such as those provided by pretrained autoencoders. We demonstrate that, in addition to inpainting of melodies, accompaniment, and continuations, the use of repainting can help increase note density, yielding musical structures closely matching user specifications such as rising, falling, or diverging melody and/or accompaniment, even when these lie outside the typical training data distribution. We achieve performance on par with prior results while operating at longer context windows, with no autoencoder, and can enable complex geometries for inpainting masks, increasing the options for machine-assisted composers to control the generated music.
|
http://arxiv.org/pdf/2407.01499v1
|
[
"Scott H. Hawley"
] |
2024-07-01T17:43:45Z
|
2024-07-01T17:43:45Z
|
2407.05941
|
Reducing Vision Transformer Latency on Edge Devices via GPU Tail Effect
and Training-free Token Pruning
|
This paper investigates how to efficiently deploy transformer-based neural networks on edge devices. Recent methods reduce the latency of transformer neural networks by removing or merging tokens, with small accuracy degradation. However, these methods are not designed with edge device deployment in mind, and do not leverage information about the hardware characteristics to improve efficiency. First, we show that the relationship between latency and workload size is governed by the GPU tail-effect. This relationship is used to create a token pruning schedule tailored for a pre-trained model and device pair. Second, we demonstrate a training-free token pruning method utilizing this relationship. This method achieves accuracy-latency trade-offs in a hardware-aware manner. We show that for single-batch inference, other methods may actually increase latency by 18.6-30.3% with respect to the baseline, while we can reduce it by 9%. For similar latency (within 5.2%) across devices we achieve 78.6%-84.5% ImageNet1K accuracy, while the state-of-the-art, Token Merging, achieves 45.8%-85.4%.
|
http://arxiv.org/pdf/2407.05941v1
|
[
"Nick John Eliopoulos",
"Purvish Jajal",
"James Davis",
"Gaowen Liu",
"George K. Thiravathukal",
"Yung-Hsiang Lu"
] |
2024-07-01T17:42:40Z
|
2024-07-01T17:42:40Z
|
2407.01496
|
Fast Iterative Solver For Neural Network Method: II. 1D
Diffusion-Reaction Problems And Data Fitting
|
This paper expands the damped block Newton (dBN) method introduced recently in [4] for 1D diffusion-reaction equations and least-squares data fitting problems. To determine the linear parameters (the weights and bias of the output layer) of the neural network (NN), the dBN method requires solving systems of linear equations involving the mass matrix. While the mass matrix for local hat basis functions is tri-diagonal and well-conditioned, the mass matrix for NNs is dense and ill-conditioned. For example, the condition number of the NN mass matrix for quasi-uniform meshes is at least ${\cal O}(n^4)$. We present a factorization of the mass matrix that enables solving the systems of linear equations in ${\cal O}(n)$ operations. To determine the non-linear parameters (the weights and bias of the hidden layer), one step of a damped Newton method is employed at each iteration. A Gauss-Newton method is used in place of Newton for the instances in which the Hessian matrices are singular. This modified dBN is referred to as dBGN. For both methods, the computational cost per iteration is ${\cal O}(n)$. Numerical results demonstrate the ability of dBN and dBGN to efficiently achieve accurate results and outperform BFGS for select examples.
|
http://arxiv.org/pdf/2407.01496v1
|
[
"Zhiqiang Cai",
"Anastassia Doktorova",
"Robert D. Falgout",
"César Herrera"
] |
2024-07-01T17:42:29Z
|
2024-07-01T17:42:29Z
|
2406.17055
|
Large Language Models Assume People are More Rational than We Really are
|
In order for AI systems to communicate effectively with people, they must understand how we make decisions. However, people's decisions are not always rational, so the implicit internal models of human decision-making in Large Language Models (LLMs) must account for this. Previous empirical evidence seems to suggest that these implicit models are accurate -- LLMs offer believable proxies of human behavior, acting how we expect humans would in everyday interactions. However, by comparing LLM behavior and predictions to a large dataset of human decisions, we find that this is actually not the case: when both simulating and predicting people's choices, a suite of cutting-edge LLMs (GPT-4o & 4-Turbo, Llama-3-8B & 70B, Claude 3 Opus) assume that people are more rational than we really are. Specifically, these models deviate from human behavior and align more closely with a classic model of rational choice -- expected value theory. Interestingly, people also tend to assume that other people are rational when interpreting their behavior. As a consequence, when we compare the inferences that LLMs and people draw from the decisions of others using another psychological dataset, we find that these inferences are highly correlated. Thus, the implicit decision-making models of LLMs appear to be aligned with the human expectation that other people will act rationally, rather than with how people actually act.
|
http://arxiv.org/pdf/2406.17055v2
|
[
"Ryan Liu",
"Jiayi Geng",
"Joshua C. Peterson",
"Ilia Sucholutsky",
"Thomas L. Griffiths"
] |
2024-07-01T17:29:54Z
|
2024-06-24T18:15:27Z
|