id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2504.00750 | Wenxuan Wu | Wenxuan Wu, Xueyuan Chen, Shuai Wang, Jiadong Wang, Lingwei Meng,
Xixin Wu, Helen Meng, Haizhou Li | $C^2$AV-TSE: Context and Confidence-aware Audio Visual Target Speaker
Extraction | Accepted by IEEE Journal of Selected Topics in Signal Processing
(JSTSP) | null | null | null | cs.SD cs.LG cs.MM eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-Visual Target Speaker Extraction (AV-TSE) aims to mimic the human
ability to enhance auditory perception using visual cues. Although numerous
models have been proposed recently, most of them estimate target signals by
primarily relying on local dependencies within acoustic features,
underutilizing the human-like capacity to infer unclear parts of speech through
contextual information. This limitation results in not only suboptimal
performance but also inconsistent extraction quality across the utterance, with
some segments exhibiting poor quality or inadequate suppression of interfering
speakers. To close this gap, we propose a model-agnostic strategy called the
Mask-And-Recover (MAR). It integrates both inter- and intra-modality contextual
correlations to enable global inference within extraction modules.
Additionally, to better target challenging parts within each sample, we
introduce a Fine-grained Confidence Score (FCS) model to assess extraction
quality and guide extraction modules to emphasize improvement on low-quality
segments. To validate the effectiveness of our proposed model-agnostic training
paradigm, six popular AV-TSE backbones were adopted for evaluation on the
VoxCeleb2 dataset, demonstrating consistent performance improvements across
various metrics.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:01:30 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wu",
"Wenxuan",
""
],
[
"Chen",
"Xueyuan",
""
],
[
"Wang",
"Shuai",
""
],
[
"Wang",
"Jiadong",
""
],
[
"Meng",
"Lingwei",
""
],
[
"Wu",
"Xixin",
""
],
[
"Meng",
"Helen",
""
],
[
"Li",
"Haizhou",
""
]
] | TITLE: $C^2$AV-TSE: Context and Confidence-aware Audio Visual Target Speaker
Extraction
ABSTRACT: Audio-Visual Target Speaker Extraction (AV-TSE) aims to mimic the human
ability to enhance auditory perception using visual cues. Although numerous
models have been proposed recently, most of them estimate target signals by
primarily relying on local dependencies within acoustic features,
underutilizing the human-like capacity to infer unclear parts of speech through
contextual information. This limitation results in not only suboptimal
performance but also inconsistent extraction quality across the utterance, with
some segments exhibiting poor quality or inadequate suppression of interfering
speakers. To close this gap, we propose a model-agnostic strategy called the
Mask-And-Recover (MAR). It integrates both inter- and intra-modality contextual
correlations to enable global inference within extraction modules.
Additionally, to better target challenging parts within each sample, we
introduce a Fine-grained Confidence Score (FCS) model to assess extraction
quality and guide extraction modules to emphasize improvement on low-quality
segments. To validate the effectiveness of our proposed model-agnostic training
paradigm, six popular AV-TSE backbones were adopted for evaluation on the
VoxCeleb2 dataset, demonstrating consistent performance improvements across
various metrics.
|
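The record above describes the Mask-And-Recover (MAR) strategy only at a high level. As a rough, hypothetical sketch of the general mask-and-recover idea — illustrative mask ratio and segment length, NumPy only, not the paper's implementation:

```python
import numpy as np

def mask_segments(features, mask_ratio=0.3, segment_len=10, rng=None):
    """Zero out random contiguous segments and return (masked, mask)."""
    rng = rng or np.random.default_rng(0)
    T, D = features.shape
    mask = np.zeros(T, dtype=bool)
    n_masked = int(T * mask_ratio)
    while mask.sum() < n_masked:
        start = rng.integers(0, max(1, T - segment_len))
        mask[start:start + segment_len] = True
    masked = features.copy()
    masked[mask] = 0.0
    return masked, mask

def recover_loss(predicted, target, mask):
    """Mean-squared error restricted to the masked frames."""
    diff = predicted[mask] - target[mask]
    return float(np.mean(diff ** 2))

# Toy usage: a 100-frame, 80-dim feature sequence and an identity "model".
feats = np.random.default_rng(1).normal(size=(100, 80))
masked, mask = mask_segments(feats)
print(recover_loss(masked, feats, mask))  # nonzero: masked frames were zeroed
```

A real MAR-style model would predict the masked frames from surrounding acoustic and visual context; the identity pass-through here only exercises the masking and loss plumbing.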
2504.00753 | Doruk Oner | Elyar Esmaeilzadeh, Ehsan Garaaghaji, Farzad Hallaji Azad, Doruk Oner | CAPE: Connectivity-Aware Path Enforcement Loss for Curvilinear Structure
Delineation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Promoting the connectivity of curvilinear structures, such as neuronal
processes in biomedical scans and blood vessels in CT images, remains a key
challenge in semantic segmentation. Traditional pixel-wise loss functions,
including cross-entropy and Dice losses, often fail to capture high-level
topological connectivity, resulting in topological mistakes in graphs obtained
from prediction maps. In this paper, we propose CAPE (Connectivity-Aware Path
Enforcement), a novel loss function designed to enforce connectivity in graphs
obtained from segmentation maps by optimizing a graph connectivity metric. CAPE
uses the graph representation of the ground truth to select node pairs and
determine their corresponding paths within the predicted segmentation through a
shortest-path algorithm. Using this, we penalize both disconnections and false
positive connections, effectively encouraging the model to preserve topological
correctness. Experiments on 2D and 3D datasets, including neuron and blood
vessel tracing, demonstrate that CAPE significantly improves topology-aware
metrics and outperforms state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:03:52 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Esmaeilzadeh",
"Elyar",
""
],
[
"Garaaghaji",
"Ehsan",
""
],
[
"Azad",
"Farzad Hallaji",
""
],
[
"Oner",
"Doruk",
""
]
] | TITLE: CAPE: Connectivity-Aware Path Enforcement Loss for Curvilinear Structure
Delineation
ABSTRACT: Promoting the connectivity of curvilinear structures, such as neuronal
processes in biomedical scans and blood vessels in CT images, remains a key
challenge in semantic segmentation. Traditional pixel-wise loss functions,
including cross-entropy and Dice losses, often fail to capture high-level
topological connectivity, resulting in topological mistakes in graphs obtained
from prediction maps. In this paper, we propose CAPE (Connectivity-Aware Path
Enforcement), a novel loss function designed to enforce connectivity in graphs
obtained from segmentation maps by optimizing a graph connectivity metric. CAPE
uses the graph representation of the ground truth to select node pairs and
determine their corresponding paths within the predicted segmentation through a
shortest-path algorithm. Using this, we penalize both disconnections and false
positive connections, effectively encouraging the model to preserve topological
correctness. Experiments on 2D and 3D datasets, including neuron and blood
vessel tracing, demonstrate that CAPE significantly improves topology-aware
metrics and outperforms state-of-the-art methods.
|
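The CAPE record above hinges on scoring shortest paths between ground-truth-connected points over the predicted map. The snippet below is a minimal sketch of that intuition, assuming a 4-connected grid where stepping into a pixel costs one minus its predicted foreground probability; it omits the differentiable formulation and node-pair selection of the actual loss.

```python
import heapq
import numpy as np

def cheapest_path_cost(prob, start, goal):
    """Dijkstra over a 4-connected grid; entering a pixel costs (1 - prob)."""
    H, W = prob.shape
    dist = np.full((H, W), np.inf)
    dist[start] = 1.0 - prob[start]
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + (1.0 - prob[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float(dist[goal])

def connectivity_penalty(prob, gt_pairs):
    """Average cheapest-path cost between pixel pairs that the ground-truth
    graph says are connected: small if the prediction keeps them linked by
    high-probability pixels, large if the path is broken."""
    return float(np.mean([cheapest_path_cost(prob, a, b) for a, b in gt_pairs]))

# Toy usage: a vertical line that the prediction interrupts in the middle.
pred = np.zeros((5, 5)); pred[:, 2] = 0.9; pred[2, 2] = 0.1
print(connectivity_penalty(pred, [((0, 2), (4, 2))]))
```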
2504.00756 | Zhouhong Gu | Lin Zhang, Zhouhong Gu, Xiaoran Shi, Hongwei Feng, Yanghua Xiao | RECKON: Large-scale Reference-based Efficient Knowledge Evaluation for
Large Language Model | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) advance, efficient knowledge evaluation
becomes crucial to verifying their capabilities. Traditional methods, relying
on benchmarks, face limitations such as high resource costs and information
loss. We propose the Large-scale Reference-based Efficient Knowledge Evaluation
for Large Language Model (RECKON), which directly uses reference data to
evaluate models. RECKON organizes unstructured data into manageable units and
generates targeted questions for each cluster, improving evaluation accuracy
and efficiency. Experimental results show that RECKON reduces resource
consumption by 56.5% compared to traditional methods while achieving over 97%
accuracy across various domains, including world knowledge, code, legal, and
biomedical datasets. Code is available at https://github.com/MikeGu721/reckon
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:08:04 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Lin",
""
],
[
"Gu",
"Zhouhong",
""
],
[
"Shi",
"Xiaoran",
""
],
[
"Feng",
"Hongwei",
""
],
[
"Xiao",
"Yanghua",
""
]
] | TITLE: RECKON: Large-scale Reference-based Efficient Knowledge Evaluation for
Large Language Model
ABSTRACT: As large language models (LLMs) advance, efficient knowledge evaluation
becomes crucial to verifying their capabilities. Traditional methods, relying
on benchmarks, face limitations such as high resource costs and information
loss. We propose the Large-scale Reference-based Efficient Knowledge Evaluation
for Large Language Model (RECKON), which directly uses reference data to
evaluate models. RECKON organizes unstructured data into manageable units and
generates targeted questions for each cluster, improving evaluation accuracy
and efficiency. Experimental results show that RECKON reduces resource
consumption by 56.5% compared to traditional methods while achieving over 97%
accuracy across various domains, including world knowledge, code, legal, and
biomedical datasets. Code is available at https://github.com/MikeGu721/reckon
|
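As a loose, hypothetical illustration of the reference-based evaluation pattern the RECKON record describes — organize unstructured reference texts into manageable clusters, then derive one targeted question per cluster — the sketch below uses TF-IDF plus k-means; the documents, cluster count, and question template are made up, and the real system generates questions with an LLM and verifies answers against the references.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

references = [
    "The Treaty of Westphalia was signed in 1648.",
    "The Peace of Westphalia ended the Thirty Years' War.",
    "Python lists are mutable sequences.",
    "Tuples in Python are immutable.",
]

# Cluster the reference snippets into manageable units.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(references)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in sorted(set(labels)):
    docs = [r for r, lbl in zip(references, labels) if lbl == cluster]
    # Placeholder "targeted question": in practice an LLM would write this.
    print(f"Cluster {cluster}: ask a question answerable from -> {docs}")
```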
2504.00758 | Paul Andrey | Paul Andrey and Batiste Le Bars and Marc Tommasi | TAMIS: Tailored Membership Inference Attacks on Synthetic Data | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Membership Inference Attacks (MIA) make it possible to empirically assess the privacy
of a machine learning algorithm. In this paper, we propose TAMIS, a novel MIA
against differentially-private synthetic data generation methods that rely on
graphical models. This attack builds upon MAMA-MIA, a recently published
state-of-the-art method, while lowering its computational cost and requiring less
attacker knowledge. Our attack is the product of a two-fold improvement. First,
we recover the graphical model having generated a synthetic dataset by using
solely that dataset, rather than shadow-modeling over an auxiliary one. This
proves less costly and more performant. Second, we introduce a more
mathematically grounded attack score that provides a natural threshold for
binary predictions. In our experiments, TAMIS achieves better or similar
performance as MAMA-MIA on replicas of the SNAKE challenge.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:08:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Andrey",
"Paul",
""
],
[
"Bars",
"Batiste Le",
""
],
[
"Tommasi",
"Marc",
""
]
] | TITLE: TAMIS: Tailored Membership Inference Attacks on Synthetic Data
ABSTRACT: Membership Inference Attacks (MIA) make it possible to empirically assess the privacy
of a machine learning algorithm. In this paper, we propose TAMIS, a novel MIA
against differentially-private synthetic data generation methods that rely on
graphical models. This attack builds upon MAMA-MIA, a recently published
state-of-the-art method, while lowering its computational cost and requiring less
attacker knowledge. Our attack is the product of a two-fold improvement. First,
we recover the graphical model having generated a synthetic dataset by using
solely that dataset, rather than shadow-modeling over an auxiliary one. This
proves less costly and more performant. Second, we introduce a more
mathematically grounded attack score that provides a natural threshold for
binary predictions. In our experiments, TAMIS achieves better or similar
performance as MAMA-MIA on replicas of the SNAKE challenge.
|
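The TAMIS record emphasizes an attack score with a natural threshold for binary membership decisions. The snippet below is a very rough, hypothetical sketch of a generic score-and-threshold decision: it fits independent categorical marginals to the synthetic data as a crude stand-in for a recovered graphical model and flags a candidate as a member when its log-likelihood under the synthetic data exceeds that under a reference population. This is not the TAMIS score itself; it only illustrates the "threshold at zero" idea.

```python
import numpy as np

def fit_marginals(data, n_values):
    counts = np.ones((data.shape[1], n_values))        # Laplace smoothing
    for col in range(data.shape[1]):
        np.add.at(counts[col], data[:, col], 1.0)
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(record, marginals):
    return float(sum(np.log(marginals[i, v]) for i, v in enumerate(record)))

rng = np.random.default_rng(0)
synthetic = rng.integers(0, 3, size=(500, 4))          # released synthetic data
reference = rng.integers(0, 3, size=(500, 4))          # public reference data
p_syn, p_ref = fit_marginals(synthetic, 3), fit_marginals(reference, 3)

candidate = synthetic[0]
score = log_likelihood(candidate, p_syn) - log_likelihood(candidate, p_ref)
print("predicted member" if score > 0 else "predicted non-member", score)
```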
2504.00759 | Dehua Huo | Dehua Huo, Weida Zhan, Jinxin Guo, Depeng Zhu, Yu Chen, YiChun Jiang,
Yueyi Han, Deng Han, and Jin Li | MSSFC-Net: Enhancing Building Interpretation with Multi-Scale
Spatial-Spectral Feature Collaboration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building interpretation from remote sensing imagery primarily involves two
fundamental tasks: building extraction and change detection. However, most
existing methods address these tasks independently, overlooking their inherent
correlation and failing to exploit shared feature representations for mutual
enhancement. Furthermore, the diverse spectral, spatial, and scale
characteristics of buildings pose additional challenges in jointly modeling
spatial-spectral multi-scale features and effectively balancing precision and
recall. The limited synergy between spatial and spectral representations often
results in reduced detection accuracy and incomplete change localization. To
address these challenges, we propose a Multi-Scale Spatial-Spectral Feature
Cooperative Dual-Task Network (MSSFC-Net) for joint building extraction and
change detection in remote sensing images. The framework integrates both tasks
within a unified architecture, leveraging their complementary nature to
simultaneously extract building and change features. Specifically, a Dual-branch
Multi-scale Feature Extraction module (DMFE) with Spatial-Spectral Feature
Collaboration (SSFC) is designed to enhance multi-scale representation
learning, effectively capturing shallow texture details and deep semantic
information, thus improving building extraction performance. For temporal
feature aggregation, we introduce a Multi-scale Differential Fusion Module
(MDFM) that explicitly models the interaction between differential and
dual-temporal features. This module refines the network's capability to detect
large-area changes and subtle structural variations in buildings. Extensive
experiments conducted on three benchmark datasets demonstrate that MSSFC-Net
achieves superior performance in both building extraction and change detection
tasks, effectively improving detection accuracy while maintaining completeness.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:10:23 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Huo",
"Dehua",
""
],
[
"Zhan",
"Weida",
""
],
[
"Guo",
"Jinxin",
""
],
[
"Zhu",
"Depeng",
""
],
[
"Chen",
"Yu",
""
],
[
"Jiang",
"YiChun",
""
],
[
"Han",
"Yueyi",
""
],
[
"Han",
"Deng",
""
],
[
"Li",
"Jin",
""
]
] | TITLE: MSSFC-Net: Enhancing Building Interpretation with Multi-Scale
Spatial-Spectral Feature Collaboration
ABSTRACT: Building interpretation from remote sensing imagery primarily involves two
fundamental tasks: building extraction and change detection. However, most
existing methods address these tasks independently, overlooking their inherent
correlation and failing to exploit shared feature representations for mutual
enhancement. Furthermore, the diverse spectral, spatial, and scale
characteristics of buildings pose additional challenges in jointly modeling
spatial-spectral multi-scale features and effectively balancing precision and
recall. The limited synergy between spatial and spectral representations often
results in reduced detection accuracy and incomplete change localization. To
address these challenges, we propose a Multi-Scale Spatial-Spectral Feature
Cooperative Dual-Task Network (MSSFC-Net) for joint building extraction and
change detection in remote sensing images. The framework integrates both tasks
within a unified architecture, leveraging their complementary nature to
simultaneously extract building and change features. Specifically, a Dual-branch
Multi-scale Feature Extraction module (DMFE) with Spatial-Spectral Feature
Collaboration (SSFC) is designed to enhance multi-scale representation
learning, effectively capturing shallow texture details and deep semantic
information, thus improving building extraction performance. For temporal
feature aggregation, we introduce a Multi-scale Differential Fusion Module
(MDFM) that explicitly models the interaction between differential and
dual-temporal features. This module refines the network's capability to detect
large-area changes and subtle structural variations in buildings. Extensive
experiments conducted on three benchmark datasets demonstrate that MSSFC-Net
achieves superior performance in both building extraction and change detection
tasks, effectively improving detection accuracy while maintaining completeness.
|
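The MSSFC-Net record sketches a dual-branch, multi-scale extractor with spatial-spectral collaboration. Below is a schematic, hypothetical PyTorch block loosely inspired by that description: one branch favors spatial detail (small kernels), the other a wider contextual view (larger kernels plus channel attention as a stand-in for spectral collaboration), and the two are fused. Channel sizes and kernel choices are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualBranchBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 7, padding=3), nn.ReLU(inplace=True))
        # Simple channel attention standing in for spectral collaboration.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(out_ch, out_ch, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, x):
        s = self.spatial(x)
        c = self.context(x)
        c = c * self.attn(c)                     # reweight channels
        return self.fuse(torch.cat([s, c], dim=1))

# Toy usage on a fake 4-band remote sensing patch.
block = DualBranchBlock(in_ch=4, out_ch=16)
print(block(torch.randn(1, 4, 64, 64)).shape)    # torch.Size([1, 16, 64, 64])
```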
2504.00763 | Yunxuan Mao | Yunxuan Mao, Rong Xiong, Yue Wang, Yiyi Liao | UnIRe: Unsupervised Instance Decomposition for Dynamic Urban Scene
Reconstruction | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Reconstructing and decomposing dynamic urban scenes is crucial for autonomous
driving, urban planning, and scene editing. However, existing methods fail to
perform instance-aware decomposition without manual annotations, which is
crucial for instance-level scene editing. We propose UnIRe, a 3D Gaussian
Splatting (3DGS) based approach that decomposes a scene into a static
background and individual dynamic instances using only RGB images and LiDAR
point clouds. At its core, we introduce 4D superpoints, a novel representation
that clusters multi-frame LiDAR points in 4D space, enabling unsupervised
instance separation based on spatiotemporal correlations. These 4D superpoints
serve as the foundation for our decomposed 4D initialization, i.e., providing
spatial and temporal initialization to train a dynamic 3DGS for arbitrary
dynamic classes without requiring bounding boxes or object
templates. Furthermore, we introduce a smoothness regularization strategy in
both 2D and 3D space, further improving the temporal stability. Experiments on
benchmark datasets show that our method outperforms existing methods in
decomposed dynamic scene reconstruction while enabling accurate and flexible
instance-level editing, making it a practical solution for real-world
applications.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:15:58 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Mao",
"Yunxuan",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
],
[
"Liao",
"Yiyi",
""
]
] | TITLE: UnIRe: Unsupervised Instance Decomposition for Dynamic Urban Scene
Reconstruction
ABSTRACT: Reconstructing and decomposing dynamic urban scenes is crucial for autonomous
driving, urban planning, and scene editing. However, existing methods fail to
perform instance-aware decomposition without manual annotations, which is
crucial for instance-level scene editing. We propose UnIRe, a 3D Gaussian
Splatting (3DGS) based approach that decomposes a scene into a static
background and individual dynamic instances using only RGB images and LiDAR
point clouds. At its core, we introduce 4D superpoints, a novel representation
that clusters multi-frame LiDAR points in 4D space, enabling unsupervised
instance separation based on spatiotemporal correlations. These 4D superpoints
serve as the foundation for our decomposed 4D initialization, i.e., providing
spatial and temporal initialization to train a dynamic 3DGS for arbitrary
dynamic classes without requiring bounding boxes or object
templates. Furthermore, we introduce a smoothness regularization strategy in
both 2D and 3D space, further improving the temporal stability. Experiments on
benchmark datasets show that our method outperforms existing methods in
decomposed dynamic scene reconstruction while enabling accurate and flexible
instance-level editing, making it a practical solution for real-world
applications.
|
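A minimal sketch of the 4D-clustering intuition behind the "4D superpoints" in the record above: stack multi-frame LiDAR points as (x, y, z, scaled t) and cluster them so points that stay close in space and time end up in the same group. DBSCAN, the eps/min_samples values, and the time scale are assumptions for illustration, not the paper's construction.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def superpoints_4d(points_xyz, frame_ids, time_scale=0.5, eps=0.8, min_samples=5):
    t = frame_ids.astype(float)[:, None] * time_scale
    feats = np.hstack([points_xyz, t])          # (N, 4) spatio-temporal features
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

# Toy usage: two small point blobs observed over three frames.
rng = np.random.default_rng(0)
blob_a = rng.normal([0, 0, 0], 0.1, size=(30, 3))
blob_b = rng.normal([5, 0, 0], 0.1, size=(30, 3))
xyz = np.vstack([blob_a, blob_b])
frames = np.tile(np.repeat(np.arange(3), 10), 2)
print(np.unique(superpoints_4d(xyz, frames)))   # two cluster labels expected
```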
2504.00773 | Hyunwoo Park | Hyunwoo Park, Gun Ryu, and Wonjun Kim | DropGaussian: Structural Regularization for Sparse-view Gaussian
Splatting | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, 3D Gaussian splatting (3DGS) has gained considerable attention in
the field of novel view synthesis due to its fast performance while yielding
excellent image quality. However, 3DGS in sparse-view settings (e.g.,
three-view inputs) often faces the problem of overfitting to training
views, which significantly drops the visual quality of novel view images. Many
existing approaches have tackled this issue by using strong priors, such as 2D
generative contextual information and external depth signals. In contrast, this
paper introduces a prior-free method, called DropGaussian, with simple
changes in 3D Gaussian splatting. Specifically, we randomly remove Gaussians
during the training process in a manner similar to dropout, which allows
non-excluded Gaussians to have larger gradients while improving their
visibility. This makes the remaining Gaussians contribute more to the
optimization process for rendering with sparse input views. This simple
operation effectively alleviates the overfitting problem and enhances the
quality of novel view synthesis. By simply applying DropGaussian to the
original 3DGS framework, we can achieve competitive performance with
existing prior-based 3DGS methods in sparse-view settings of benchmark datasets
without any additional complexity. The code and model are publicly available
at: https://github.com/DCVL-3D/DropGaussian release.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:23:34 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Park",
"Hyunwoo",
""
],
[
"Ryu",
"Gun",
""
],
[
"Kim",
"Wonjun",
""
]
] | TITLE: DropGaussian: Structural Regularization for Sparse-view Gaussian
Splatting
ABSTRACT: Recently, 3D Gaussian splatting (3DGS) has gained considerable attention in
the field of novel view synthesis due to its fast performance while yielding
excellent image quality. However, 3DGS in sparse-view settings (e.g.,
three-view inputs) often faces the problem of overfitting to training
views, which significantly drops the visual quality of novel view images. Many
existing approaches have tackled this issue by using strong priors, such as 2D
generative contextual information and external depth signals. In contrast, this
paper introduces a prior-free method, called DropGaussian, with simple
changes in 3D Gaussian splatting. Specifically, we randomly remove Gaussians
during the training process in a manner similar to dropout, which allows
non-excluded Gaussians to have larger gradients while improving their
visibility. This makes the remaining Gaussians contribute more to the
optimization process for rendering with sparse input views. This simple
operation effectively alleviates the overfitting problem and enhances the
quality of novel view synthesis. By simply applying DropGaussian to the
original 3DGS framework, we can achieve competitive performance with
existing prior-based 3DGS methods in sparse-view settings of benchmark datasets
without any additional complexity. The code and model are publicly available
at: https://github.com/DCVL-3D/DropGaussian release.
|
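A compact, hypothetical rendition of the DropGaussian idea from the record above: during each training iteration, randomly exclude a fraction of Gaussians so the survivors receive larger gradients and increased visibility. The inverted-dropout-style rescaling of opacity below is an illustrative choice, not necessarily how the paper compensates for the removed primitives.

```python
import torch

def drop_gaussians(opacity, drop_rate=0.2, training=True):
    """opacity: (N,) per-Gaussian opacities; returns opacities for this step."""
    if not training or drop_rate == 0.0:
        return opacity
    keep = (torch.rand_like(opacity) > drop_rate).float()
    return opacity * keep / (1.0 - drop_rate)     # survivors contribute more

# Toy usage inside a training step (gradients flow only to kept Gaussians).
opacity = torch.full((10,), 0.5, requires_grad=True)
step_opacity = drop_gaussians(opacity, drop_rate=0.3)
loss = step_opacity.sum()
loss.backward()
print(opacity.grad)   # zeros for dropped Gaussians, scaled values for kept ones
```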
2504.00775 | Ning Lan | Ning Lan, Baoshan Ou, Xuemei Xie, Guangming Shi | Visual Environment-Interactive Planning for Embodied Complex-Question
Answering | null | null | 10.1109/TCSVT.2025.3538860 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study focuses on the Embodied Complex-Question Answering task, in which
the embodied robot needs to understand human questions with intricate structures
and abstract semantics. The core of this task lies in making appropriate plans
based on the perception of the visual environment. Existing methods often
generate plans in a once-for-all manner, i.e., one-step planning. Such approaches
rely on large models without a sufficient understanding of the environment.
Considering multi-step planning, this paper proposes a framework for formulating
plans in a sequential manner. To ensure the ability of our
framework to tackle complex questions, we create a structured semantic space,
where hierarchical visual perception and chain expression of the question
essence can achieve iterative interaction. This space makes sequential task
planning possible. Within the framework, we first parse human natural language
based on a visual hierarchical scene graph, which can clarify the intention of
the question. Then, we incorporate external rules to make a plan for the current
step, weakening the reliance on large models. Every plan is generated based on
feedback from visual perception, with multiple rounds of interaction until an
answer is obtained. This approach enables continuous feedback and adjustment,
allowing the robot to optimize its action strategy. To test our framework, we
contribute a new dataset with more complex questions. Experimental results
demonstrate that our approach performs excellently and stably on complex tasks.
In addition, the feasibility of our approach in real-world scenarios has been
established, indicating its practical applicability.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:26:28 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lan",
"Ning",
""
],
[
"Ou",
"Baoshan",
""
],
[
"Xie",
"Xuemei",
""
],
[
"Shi",
"Guangming",
""
]
] | TITLE: Visual Environment-Interactive Planning for Embodied Complex-Question
Answering
ABSTRACT: This study focuses on the Embodied Complex-Question Answering task, in which
the embodied robot needs to understand human questions with intricate structures
and abstract semantics. The core of this task lies in making appropriate plans
based on the perception of the visual environment. Existing methods often
generate plans in a once-for-all manner, i.e., one-step planning. Such approaches
rely on large models without a sufficient understanding of the environment.
Considering multi-step planning, this paper proposes a framework for formulating
plans in a sequential manner. To ensure the ability of our
framework to tackle complex questions, we create a structured semantic space,
where hierarchical visual perception and chain expression of the question
essence can achieve iterative interaction. This space makes sequential task
planning possible. Within the framework, we first parse human natural language
based on a visual hierarchical scene graph, which can clarify the intention of
the question. Then, we incorporate external rules to make a plan for the current
step, weakening the reliance on large models. Every plan is generated based on
feedback from visual perception, with multiple rounds of interaction until an
answer is obtained. This approach enables continuous feedback and adjustment,
allowing the robot to optimize its action strategy. To test our framework, we
contribute a new dataset with more complex questions. Experimental results
demonstrate that our approach performs excellently and stably on complex tasks.
In addition, the feasibility of our approach in real-world scenarios has been
established, indicating its practical applicability.
|
2504.00784 | Yang Yang | Yang Yang, Xijie Xu, Yixun Zhou, Jie Zheng | CellVTA: Enhancing Vision Foundation Models for Accurate Cell
Segmentation and Classification | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell instance segmentation is a fundamental task in digital pathology with
broad clinical applications. Recently, vision foundation models, which are
predominantly based on Vision Transformers (ViTs), have achieved remarkable
success in pathology image analysis. However, their improvements in cell
instance segmentation remain limited. A key challenge arises from the
tokenization process in ViTs, which substantially reduces the spatial
resolution of input images, leading to suboptimal segmentation quality,
especially for small and densely packed cells. To address this problem, we
propose CellVTA (Cell Vision Transformer with Adapter), a novel method that
improves the performance of vision foundation models for cell instance
segmentation by incorporating a CNN-based adapter module. This adapter extracts
high-resolution spatial information from input images and injects it into the
ViT through a cross-attention mechanism. Our method preserves the core
architecture of ViT, ensuring seamless integration with pretrained foundation
models. Extensive experiments show that CellVTA achieves 0.538 mPQ on the CoNIC
dataset and 0.506 mPQ on the PanNuke dataset, which significantly outperforms
the state-of-the-art cell segmentation methods. Ablation studies confirm the
superiority of our approach over other fine-tuning strategies, including
decoder-only fine-tuning and full fine-tuning. Our code and models are publicly
available at https://github.com/JieZheng-ShanghaiTech/CellVTA.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:36:46 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yang",
"Yang",
""
],
[
"Xu",
"Xijie",
""
],
[
"Zhou",
"Yixun",
""
],
[
"Zheng",
"Jie",
""
]
] | TITLE: CellVTA: Enhancing Vision Foundation Models for Accurate Cell
Segmentation and Classification
ABSTRACT: Cell instance segmentation is a fundamental task in digital pathology with
broad clinical applications. Recently, vision foundation models, which are
predominantly based on Vision Transformers (ViTs), have achieved remarkable
success in pathology image analysis. However, their improvements in cell
instance segmentation remain limited. A key challenge arises from the
tokenization process in ViTs, which substantially reduces the spatial
resolution of input images, leading to suboptimal segmentation quality,
especially for small and densely packed cells. To address this problem, we
propose CellVTA (Cell Vision Transformer with Adapter), a novel method that
improves the performance of vision foundation models for cell instance
segmentation by incorporating a CNN-based adapter module. This adapter extracts
high-resolution spatial information from input images and injects it into the
ViT through a cross-attention mechanism. Our method preserves the core
architecture of ViT, ensuring seamless integration with pretrained foundation
models. Extensive experiments show that CellVTA achieves 0.538 mPQ on the CoNIC
dataset and 0.506 mPQ on the PanNuke dataset, which significantly outperforms
the state-of-the-art cell segmentation methods. Ablation studies confirm the
superiority of our approach over other fine-tuning strategies, including
decoder-only fine-tuning and full fine-tuning. Our code and models are publicly
available at https://github.com/JieZheng-ShanghaiTech/CellVTA.
|
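The CellVTA record describes injecting high-resolution CNN features into a ViT via cross-attention without altering the pretrained backbone. Below is a bare-bones, hypothetical sketch of that adapter idea: CNN features are flattened into tokens and attended to by the ViT tokens, with a residual connection preserving the ViT semantics. Dimensions, the CNN, and the single attention layer are placeholders, not the CellVTA architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionAdapter(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(                     # cheap high-res feature extractor
            nn.Conv2d(3, dim, 3, stride=4, padding=1), nn.ReLU(inplace=True))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vit_tokens, image):
        feats = self.cnn(image)                       # (B, dim, H/4, W/4)
        kv = feats.flatten(2).transpose(1, 2)         # (B, HW/16, dim) key/value tokens
        injected, _ = self.attn(vit_tokens, kv, kv)   # queries are the ViT tokens
        return self.norm(vit_tokens + injected)       # residual keeps ViT semantics

# Toy usage: 196 ViT patch tokens attending to CNN features of a 224x224 image.
adapter = CrossAttentionAdapter()
tokens = torch.randn(1, 196, 256)
print(adapter(tokens, torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 196, 256])
```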
2504.00786 | Xin Tong | Xin Tong, Xuanhe Zhou, Bingsheng He, Guoliang Li, Zirui Tang, Wei
Zhou, Fan Wu, Mian Lu, Yuqiang Chen | FeatInsight: An Online ML Feature Management System on 4Paradigm
Sage-Studio Platform | null | null | null | null | cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature management is essential for many online machine learning applications
and can often become the performance bottleneck (e.g., taking up to 70% of the
overall latency in a sales prediction service). Improper feature configurations
(e.g., introducing too many irrelevant features) can severely undermine the
model's generalization capabilities. However, managing online ML features is
challenging due to (1) large-scale, complex raw data (e.g., the 2018 PHM
dataset contains 17 tables and dozens to hundreds of columns), (2) the need for
high-performance, consistent computation of interdependent features with
complex patterns, and (3) the requirement for rapid updates and deployments to
accommodate real-time data changes. In this demo, we present FeatInsight, a
system that supports the entire feature lifecycle, including feature design,
storage, visualization, computation, verification, and lineage management.
FeatInsight (with OpenMLDB as the execution engine) has been deployed in over
100 real-world scenarios on 4Paradigm's Sage Studio platform, handling up to a
trillion-dimensional feature space and enabling millisecond-level feature
updates. We demonstrate how FeatInsight enhances feature design efficiency
(e.g., for online product recommendation) and improves feature computation
performance (e.g., for online fraud detection). The code is available at
https://github.com/4paradigm/FeatInsight.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:39:45 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Tong",
"Xin",
""
],
[
"Zhou",
"Xuanhe",
""
],
[
"He",
"Bingsheng",
""
],
[
"Li",
"Guoliang",
""
],
[
"Tang",
"Zirui",
""
],
[
"Zhou",
"Wei",
""
],
[
"Wu",
"Fan",
""
],
[
"Lu",
"Mian",
""
],
[
"Chen",
"Yuqiang",
""
]
] | TITLE: FeatInsight: An Online ML Feature Management System on 4Paradigm
Sage-Studio Platform
ABSTRACT: Feature management is essential for many online machine learning applications
and can often become the performance bottleneck (e.g., taking up to 70% of the
overall latency in a sales prediction service). Improper feature configurations
(e.g., introducing too many irrelevant features) can severely undermine the
model's generalization capabilities. However, managing online ML features is
challenging due to (1) large-scale, complex raw data (e.g., the 2018 PHM
dataset contains 17 tables and dozens to hundreds of columns), (2) the need for
high-performance, consistent computation of interdependent features with
complex patterns, and (3) the requirement for rapid updates and deployments to
accommodate real-time data changes. In this demo, we present FeatInsight, a
system that supports the entire feature lifecycle, including feature design,
storage, visualization, computation, verification, and lineage management.
FeatInsight (with OpenMLDB as the execution engine) has been deployed in over
100 real-world scenarios on 4Paradigm's Sage Studio platform, handling up to a
trillion-dimensional feature space and enabling millisecond-level feature
updates. We demonstrate how FeatInsight enhances feature design efficiency
(e.g., for online product recommendation) and improves feature computation
performance (e.g., for online fraud detection). The code is available at
https://github.com/4paradigm/FeatInsight.
|
2504.00794 | Soyeon Kim | Boseon Yoo, Jiwoo Lee, Janghoon Ju, Seijun Chung, Soyeon Kim, Jaesik
Choi | Conditional Temporal Neural Processes with Covariance Loss | 11 pages, 18 figures | Proceedings of the 38th International Conference on Machine
Learning, PMLR 139:12051-12061, 2021 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce a novel loss function, Covariance Loss, which is conceptually
equivalent to conditional neural processes and takes the form of a regularization
term, so that it is applicable to many kinds of neural networks. With the proposed loss,
mappings from input variables to target variables are highly affected by
dependencies of target variables as well as mean activation and mean
dependencies of input and target variables. This nature enables the resulting
neural networks to become more robust to noisy observations and recapture
missing dependencies from prior information. In order to show the validity of
the proposed loss, we conduct extensive sets of experiments on real-world
datasets with state-of-the-art models and discuss the benefits and drawbacks of
the proposed Covariance Loss.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:51:44 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yoo",
"Boseon",
""
],
[
"Lee",
"Jiwoo",
""
],
[
"Ju",
"Janghoon",
""
],
[
"Chung",
"Seijun",
""
],
[
"Kim",
"Soyeon",
""
],
[
"Choi",
"Jaesik",
""
]
] | TITLE: Conditional Temporal Neural Processes with Covariance Loss
ABSTRACT: We introduce a novel loss function, Covariance Loss, which is conceptually
equivalent to conditional neural processes and takes the form of a regularization
term, so that it is applicable to many kinds of neural networks. With the proposed loss,
mappings from input variables to target variables are highly affected by
dependencies of target variables as well as mean activation and mean
dependencies of input and target variables. This nature enables the resulting
neural networks to become more robust to noisy observations and recapture
missing dependencies from prior information. In order to show the validity of
the proposed loss, we conduct extensive sets of experiments on real-world
datasets with state-of-the-art models and discuss the benefits and drawbacks of
the proposed Covariance Loss.
|
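The record above does not give the Covariance Loss formula, so the snippet below is only one plausible reading of "regularizing mappings to respect the dependencies of the target variables": a standard MSE term plus a penalty on the mismatch between the covariance of the predictions and that of the targets. Treat it as an illustration of covariance-aware regularization, not as the paper's definition.

```python
import numpy as np

def covariance_regularized_loss(pred, target, lam=0.1):
    mse = np.mean((pred - target) ** 2)
    cov_gap = np.cov(pred, rowvar=False) - np.cov(target, rowvar=False)
    return float(mse + lam * np.linalg.norm(cov_gap, ord="fro"))

# Toy usage: predictions that ignore the targets' correlation structure are
# penalized more than predictions that roughly preserve it.
rng = np.random.default_rng(0)
target = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]], size=200)
independent = rng.normal(size=(200, 2))
print(covariance_regularized_loss(target + 0.1 * independent, target))
print(covariance_regularized_loss(independent, target))
```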
2504.00810 | Zhaojian Yu | Zhaojian Yu, Yinghao Wu, Yilun Zhao, Arman Cohan, Xiao-Ping Zhang | Z1: Efficient Test-time Scaling with Code | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) can achieve enhanced complex problem-solving
through test-time computing scaling, yet this often entails longer contexts and
numerous reasoning token costs. In this paper, we propose an efficient
test-time scaling method that trains LLMs on code-related reasoning
trajectories, facilitating their reduction of excess thinking tokens while
maintaining performance. First, we create Z1-Code-Reasoning-107K, a curated
dataset of simple and complex coding problems paired with their short and long
solution trajectories. Second, we present a novel Shifted Thinking Window to
mitigate overthinking overhead by removing context-delimiting tags (e.g.,
<think>...</think>) and capping reasoning tokens. Trained with long and
short trajectory data and equipped with Shifted Thinking Window, our model,
Z1-7B, demonstrates the ability to adjust its reasoning level with the complexity
of problems and exhibits efficient test-time scaling across different reasoning
tasks that matches R1-Distill-Qwen-7B performance with about 30% of its average
thinking tokens. Notably, fine-tuned with only code trajectories, Z1-7B
demonstrates generalization to broader reasoning tasks (47.5% on GPQA Diamond).
Our analysis of efficient reasoning elicitation also provides valuable insights
for future research.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:01:50 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yu",
"Zhaojian",
""
],
[
"Wu",
"Yinghao",
""
],
[
"Zhao",
"Yilun",
""
],
[
"Cohan",
"Arman",
""
],
[
"Zhang",
"Xiao-Ping",
""
]
] | TITLE: Z1: Efficient Test-time Scaling with Code
ABSTRACT: Large Language Models (LLMs) can achieve enhanced complex problem-solving
through test-time computing scaling, yet this often entails longer contexts and
numerous reasoning token costs. In this paper, we propose an efficient
test-time scaling method that trains LLMs on code-related reasoning
trajectories, facilitating their reduction of excess thinking tokens while
maintaining performance. First, we create Z1-Code-Reasoning-107K, a curated
dataset of simple and complex coding problems paired with their short and long
solution trajectories. Second, we present a novel Shifted Thinking Window to
mitigate overthinking overhead by removing context-delimiting tags (e.g.,
<think>...</think>) and capping reasoning tokens. Trained with long and
short trajectory data and equipped with Shifted Thinking Window, our model,
Z1-7B, demonstrates the ability to adjust its reasoning level with the complexity
of problems and exhibits efficient test-time scaling across different reasoning
tasks that matches R1-Distill-Qwen-7B performance with about 30% of its average
thinking tokens. Notably, fine-tuned with only code trajectories, Z1-7B
demonstrates generalization to broader reasoning tasks (47.5% on GPQA Diamond).
Our analysis of efficient reasoning elicitation also provides valuable insights
for future research.
|
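A toy, hypothetical illustration of the two mechanics named in the Z1 record: strip the context-delimiting <think>...</think> tags so the reasoning is not boxed off, and cap the number of reasoning tokens before the final answer. Whitespace tokenization, the cap value, and the answer marker are simplifications; the actual Shifted Thinking Window operates on model token streams.

```python
import re

def shifted_thinking_window(generation, max_reasoning_tokens=64,
                            answer_marker="Final answer:"):
    text = re.sub(r"</?think>", "", generation)              # remove delimiter tags
    head, sep, answer = text.partition(answer_marker)
    reasoning_tokens = head.split()[:max_reasoning_tokens]   # cap the reasoning
    return " ".join(reasoning_tokens) + (" " + sep + answer if sep else "")

sample = "<think>Two apples plus three apples gives five apples.</think> Final answer: 5"
print(shifted_thinking_window(sample, max_reasoning_tokens=8))
```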
2504.00812 | Yiqun Duan | Yiqun Duan, Sameera Ramasinghe, Stephen Gould, Ajanthan Thalaiyasingam | Scaling Prompt Instructed Zero Shot Composed Image Retrieval with
Image-Only Data | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composed Image Retrieval (CIR) is the task of retrieving images matching a
reference image augmented with a text, where the text describes changes to the
reference image in natural language. Traditionally, models designed for CIR
have relied on triplet data containing a reference image, reformulation text,
and a target image. However, curating such triplet data often necessitates
human intervention, leading to prohibitive costs. This challenge has hindered
the scalability of CIR model training even with the availability of abundant
unlabeled data. With the recent advances in foundational models, we advocate a
shift in the CIR training paradigm where human annotations can be efficiently
replaced by large language models (LLMs). Specifically, we demonstrate the
capability of large captioning and language models in efficiently generating
data for CIR only relying on unannotated image collections. Additionally, we
introduce an embedding reformulation architecture that effectively combines
image and text modalities. Our model, named InstructCIR, outperforms
state-of-the-art methods in zero-shot composed image retrieval on CIRR and
FashionIQ datasets. Furthermore, we demonstrate that by increasing the amount
of generated data, our zero-shot model gets closer to the performance of
supervised baselines.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:03:46 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Duan",
"Yiqun",
""
],
[
"Ramasinghe",
"Sameera",
""
],
[
"Gould",
"Stephen",
""
],
[
"Thalaiyasingam",
"Ajanthan",
""
]
] | TITLE: Scaling Prompt Instructed Zero Shot Composed Image Retrieval with
Image-Only Data
ABSTRACT: Composed Image Retrieval (CIR) is the task of retrieving images matching a
reference image augmented with a text, where the text describes changes to the
reference image in natural language. Traditionally, models designed for CIR
have relied on triplet data containing a reference image, reformulation text,
and a target image. However, curating such triplet data often necessitates
human intervention, leading to prohibitive costs. This challenge has hindered
the scalability of CIR model training even with the availability of abundant
unlabeled data. With the recent advances in foundational models, we advocate a
shift in the CIR training paradigm where human annotations can be efficiently
replaced by large language models (LLMs). Specifically, we demonstrate the
capability of large captioning and language models in efficiently generating
data for CIR only relying on unannotated image collections. Additionally, we
introduce an embedding reformulation architecture that effectively combines
image and text modalities. Our model, named InstructCIR, outperforms
state-of-the-art methods in zero-shot composed image retrieval on CIRR and
FashionIQ datasets. Furthermore, we demonstrate that by increasing the amount
of generated data, our zero-shot model gets closer to the performance of
supervised baselines.
|
2504.00816 | Yeqi Fang | Yeqi Fang, Rong Zhou | The study of non-complete-ring positron emission tomography (PET)
detection method | 18 pages, 14 pages | null | null | null | cs.CV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Positron Emission Tomography (PET) is a vital molecular imaging tool widely
used in medical diagnosis and treatment evaluation. Traditional PET systems
typically rely on complete detector rings to achieve full angular coverage for
uniform and statistically robust sampling of coincidence events. However,
incomplete-ring PET scanners have emerged in various scenarios due to hardware
failures, cost constraints, or specific clinical needs. In such cases,
conventional reconstruction algorithms often suffer from performance
degradation due to reduced data completeness and geometric inconsistencies.
This thesis proposes a coarse-to-fine reconstruction framework for
incomplete-ring PET scanners. The framework first employs an Attention U-Net
model to recover complete sinograms from incomplete ones, then uses the OSEM
algorithm for preliminary reconstruction, and finally applies a two-stage
architecture comprising a Coarse Prediction Module (CPM) and an Iterative
Refinement Module (IRM) for fine reconstruction. Our approach utilizes
neighboring axial slices and spectral transform features as auxiliary guidance
at the input level to ensure spatial and frequency domain consistency, and
integrates a contrastive diffusion strategy at the output level to improve
correspondence between low-quality PET inputs and refined PET outputs.
Experimental results on public and in-house brain PET datasets demonstrate that
the proposed method significantly outperforms existing approaches in metrics
such as PSNR (35.6421 dB) and SSIM (0.9588), successfully preserving key
anatomical structures and tracer distribution features, thus providing an
effective solution for incomplete-ring PET imaging.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:05:32 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Fang",
"Yeqi",
""
],
[
"Zhou",
"Rong",
""
]
] | TITLE: The study of non-complete-ring positron emission tomography (PET)
detection method
ABSTRACT: Positron Emission Tomography (PET) is a vital molecular imaging tool widely
used in medical diagnosis and treatment evaluation. Traditional PET systems
typically rely on complete detector rings to achieve full angular coverage for
uniform and statistically robust sampling of coincidence events. However,
incomplete-ring PET scanners have emerged in various scenarios due to hardware
failures, cost constraints, or specific clinical needs. In such cases,
conventional reconstruction algorithms often suffer from performance
degradation due to reduced data completeness and geometric inconsistencies.
This thesis proposes a coarse-to-fine reconstruction framework for
incomplete-ring PET scanners. The framework first employs an Attention U-Net
model to recover complete sinograms from incomplete ones, then uses the OSEM
algorithm for preliminary reconstruction, and finally applies a two-stage
architecture comprising a Coarse Prediction Module (CPM) and an Iterative
Refinement Module (IRM) for fine reconstruction. Our approach utilizes
neighboring axial slices and spectral transform features as auxiliary guidance
at the input level to ensure spatial and frequency domain consistency, and
integrates a contrastive diffusion strategy at the output level to improve
correspondence between low-quality PET inputs and refined PET outputs.
Experimental results on public and in-house brain PET datasets demonstrate that
the proposed method significantly outperforms existing approaches in metrics
such as PSNR (35.6421 dB) and SSIM (0.9588), successfully preserving key
anatomical structures and tracer distribution features, thus providing an
effective solution for incomplete-ring PET imaging.
|
2504.00820 | Didong Li | Kevin Wang, Hongqian Niu, Yixin Wang, Didong Li | Deep Generative Models: Complexity, Dimensionality, and Approximation | null | null | null | null | cs.LG math.DG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Generative networks have shown remarkable success in learning complex data
distributions, particularly in generating high-dimensional data from
lower-dimensional inputs. While this capability is well-documented empirically,
its theoretical underpinning remains unclear. One common theoretical
explanation appeals to the widely accepted manifold hypothesis, which suggests
that many real-world datasets, such as images and signals, often possess
intrinsic low-dimensional geometric structures. Under this manifold hypothesis,
it is widely believed that to approximate a distribution on a $d$-dimensional
Riemannian manifold, the latent dimension needs to be at least $d$ or $d+1$. In
this work, we show that this requirement on the latent dimension is not
necessary by demonstrating that generative networks can approximate
distributions on $d$-dimensional Riemannian manifolds from inputs of any
arbitrary dimension, even lower than $d$, taking inspiration from the concept
of space-filling curves. This approach, in turn, leads to a super-exponential
complexity bound of the deep neural networks through expanded neurons. Our
findings thus challenge the conventional belief on the relationship between
input dimensionality and the ability of generative networks to model data
distributions. This novel insight not only corroborates the practical
effectiveness of generative networks in handling complex data structures, but
also underscores a critical trade-off between approximation error,
dimensionality, and model complexity.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:07:02 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Kevin",
""
],
[
"Niu",
"Hongqian",
""
],
[
"Wang",
"Yixin",
""
],
[
"Li",
"Didong",
""
]
] | TITLE: Deep Generative Models: Complexity, Dimensionality, and Approximation
ABSTRACT: Generative networks have shown remarkable success in learning complex data
distributions, particularly in generating high-dimensional data from
lower-dimensional inputs. While this capability is well-documented empirically,
its theoretical underpinning remains unclear. One common theoretical
explanation appeals to the widely accepted manifold hypothesis, which suggests
that many real-world datasets, such as images and signals, often possess
intrinsic low-dimensional geometric structures. Under this manifold hypothesis,
it is widely believed that to approximate a distribution on a $d$-dimensional
Riemannian manifold, the latent dimension needs to be at least $d$ or $d+1$. In
this work, we show that this requirement on the latent dimension is not
necessary by demonstrating that generative networks can approximate
distributions on $d$-dimensional Riemannian manifolds from inputs of any
arbitrary dimension, even lower than $d$, taking inspiration from the concept
of space-filling curves. This approach, in turn, leads to a super-exponential
complexity bound of the deep neural networks through expanded neurons. Our
findings thus challenge the conventional belief on the relationship between
input dimensionality and the ability of generative networks to model data
distributions. This novel insight not only corroborates the practical
effectiveness of generative networks in handling complex data structures, but
also underscores a critical trade-off between approximation error,
dimensionality, and model complexity.
|
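A small worked example of the intuition invoked in the record above: a single scalar can parameterize a two-dimensional region by interleaving digits, in the spirit of space-filling curves. This is a classic measure-theoretic construction, shown here with a fixed decimal precision, and is only meant to make the "latent dimension below d" claim concrete, not to mirror the paper's proofs.

```python
def interleave_to_2d(u, digits=8):
    """Map u in [0, 1) to (x, y) in [0, 1)^2 by splitting its decimal digits."""
    s = f"{u:.{2 * digits}f}"[2:]                 # the first 2*digits decimals
    x = "0." + s[0::2]                            # odd-position digits -> x
    y = "0." + s[1::2]                            # even-position digits -> y
    return float(x), float(y)

# Toy usage: distinct 1D inputs land at distinct, well-spread 2D points.
for u in (0.12345678, 0.87654321, 0.50505050):
    print(u, "->", interleave_to_2d(u))
```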
2504.00829 | Yunjie Ji | Yunjie Ji, Sitong Zhao, Xiaoyu Tian, Haotian Wang, Shuaiting Chen,
Yiping Peng, Han Zhao, Xiangang Li | How Difficulty-Aware Staged Reinforcement Learning Enhances LLMs'
Reasoning Capabilities: A Preliminary Experimental Study | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enhancing the reasoning capabilities of Large Language Models (LLMs) with
efficiency and scalability remains a fundamental challenge in artificial
intelligence research. This paper presents a rigorous experimental
investigation into how difficulty-aware staged reinforcement learning (RL)
strategies can substantially improve LLM reasoning performance. Through
systematic analysis, we demonstrate that strategically selecting training data
according to well-defined difficulty levels markedly enhances RL optimization.
Moreover, we introduce a staged training methodology, progressively exposing
models to increasingly challenging tasks, further amplifying reasoning
capabilities. Our findings reveal significant cross-domain benefits when
simultaneously training models on mathematical reasoning and code generation
tasks. Notably, our proposed approach enables a 1.5B parameter model to achieve
an accuracy of 42.3\% on the AIME-2024 benchmark and 89.5\% on the MATH-500
benchmark. These results underscore the efficacy of our method in advancing the
reasoning proficiency of LLMs. We will open-source our datasets on GitHub and
Hugging Face.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:18:38 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ji",
"Yunjie",
""
],
[
"Zhao",
"Sitong",
""
],
[
"Tian",
"Xiaoyu",
""
],
[
"Wang",
"Haotian",
""
],
[
"Chen",
"Shuaiting",
""
],
[
"Peng",
"Yiping",
""
],
[
"Zhao",
"Han",
""
],
[
"Li",
"Xiangang",
""
]
] | TITLE: How Difficulty-Aware Staged Reinforcement Learning Enhances LLMs'
Reasoning Capabilities: A Preliminary Experimental Study
ABSTRACT: Enhancing the reasoning capabilities of Large Language Models (LLMs) with
efficiency and scalability remains a fundamental challenge in artificial
intelligence research. This paper presents a rigorous experimental
investigation into how difficulty-aware staged reinforcement learning (RL)
strategies can substantially improve LLM reasoning performance. Through
systematic analysis, we demonstrate that strategically selecting training data
according to well-defined difficulty levels markedly enhances RL optimization.
Moreover, we introduce a staged training methodology, progressively exposing
models to increasingly challenging tasks, further amplifying reasoning
capabilities. Our findings reveal significant cross-domain benefits when
simultaneously training models on mathematical reasoning and code generation
tasks. Notably, our proposed approach enables a 1.5B parameter model to achieve
an accuracy of 42.3\% on the AIME-2024 benchmark and 89.5\% on the MATH-500
benchmark. These results underscore the efficacy of our method in advancing the
reasoning proficiency of LLMs. We will open-source our datasets on GitHub and
Hugging Face.
|
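The record above describes difficulty-aware staged training only in outline. Below is a schematic, hypothetical sketch of that staging: estimate each problem's difficulty (here, one minus an assumed solve rate), bucket problems into stages from easy to hard, and hand each stage to an RL update routine in order. The solve rates, bucket edges, and the `rl_update` stub are all placeholders; the paper's pipeline trains an actual LLM with RL.

```python
def stage_by_difficulty(problems, solve_rates, edges=(0.3, 0.6)):
    """Return a list of stages (easy -> hard) based on difficulty = 1 - solve_rate."""
    stages = [[] for _ in range(len(edges) + 1)]
    for prob, rate in zip(problems, solve_rates):
        difficulty = 1.0 - rate
        idx = sum(difficulty > e for e in edges)   # which bucket the problem falls in
        stages[idx].append(prob)
    return stages

def rl_update(model_state, batch):
    """Stand-in for a real RL step; just records what was trained on."""
    return model_state + [batch]

problems = ["p1", "p2", "p3", "p4", "p5"]
solve_rates = [0.9, 0.8, 0.5, 0.2, 0.1]            # assumed pass rates
model_state = []
for stage in stage_by_difficulty(problems, solve_rates):
    if stage:                                      # progressively harder curricula
        model_state = rl_update(model_state, stage)
print(model_state)   # [['p1', 'p2'], ['p3'], ['p4', 'p5']]
```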
2504.00831 | Soyeon Kim | Soyeon Kim, Junho Choi, Subeen Lee, Jaesik Choi | Example-Based Concept Analysis Framework for Deep Weather Forecast
Models | 39 pages, 10 figures | Artificial Intelligence for the Earth System, 2025, volume 4,
Online ISSN: 2769-7525 | 10.1175/AIES-D-24-0079.1 | null | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | To improve the trustworthiness of an AI model, finding consistent,
understandable representations of its inference process is essential. This
understanding is particularly important in high-stakes operations such as
weather forecasting, where the identification of underlying meteorological
mechanisms is as critical as the accuracy of the predictions. Despite the
growing literature that addresses this issue through explainable AI, the
applicability of their solutions is often limited due to their AI-centric
development. To fill this gap, we follow a user-centric process to develop an
example-based concept analysis framework, which identifies cases that follow a
similar inference process as the target instance in a target model and presents
them in a user-comprehensible format. Our framework provides the users with
visually and conceptually analogous examples, including the probability of
concept assignment to resolve ambiguities in weather mechanisms. To bridge the
gap between vector representations identified from models and
human-understandable explanations, we compile a human-annotated concept dataset
and implement a user interface to assist domain experts involved in the
framework development.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:22:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kim",
"Soyeon",
""
],
[
"Choi",
"Junho",
""
],
[
"Lee",
"Subeen",
""
],
[
"Choi",
"Jaesik",
""
]
] | TITLE: Example-Based Concept Analysis Framework for Deep Weather Forecast
Models
ABSTRACT: To improve the trustworthiness of an AI model, finding consistent,
understandable representations of its inference process is essential. This
understanding is particularly important in high-stakes operations such as
weather forecasting, where the identification of underlying meteorological
mechanisms is as critical as the accuracy of the predictions. Despite the
growing literature that addresses this issue through explainable AI, the
applicability of their solutions is often limited due to their AI-centric
development. To fill this gap, we follow a user-centric process to develop an
example-based concept analysis framework, which identifies cases that follow a
similar inference process as the target instance in a target model and presents
them in a user-comprehensible format. Our framework provides the users with
visually and conceptually analogous examples, including the probability of
concept assignment to resolve ambiguities in weather mechanisms. To bridge the
gap between vector representations identified from models and
human-understandable explanations, we compile a human-annotated concept dataset
and implement a user interface to assist domain experts involved in the
framework development.
|
2504.00837 | Shuyu Li | Shuyu Li, Shulei Ji, Zihao Wang, Songruoyao Wu, Jiaxing Yu, Kejun
Zhang | A Survey on Music Generation from Single-Modal, Cross-Modal, and
Multi-Modal Perspectives: Data, Methods, and Challenges | null | null | null | null | cs.SD cs.AI cs.MM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-modal music generation, using multiple modalities like images, video,
and text alongside musical scores and audio as guidance, is an emerging
research area with broad applications. This paper reviews this field,
categorizing music generation systems from the perspective of modalities. It
covers modality representation, multi-modal data alignment, and their
utilization to guide music generation. We also discuss current datasets and
evaluation methods. Key challenges in this area include effective multi-modal
integration, large-scale comprehensive datasets, and systematic evaluation
methods. Finally, we provide an outlook on future research directions focusing
on multi-modal fusion, alignment, data, and evaluation.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:26:25 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Shuyu",
""
],
[
"Ji",
"Shulei",
""
],
[
"Wang",
"Zihao",
""
],
[
"Wu",
"Songruoyao",
""
],
[
"Yu",
"Jiaxing",
""
],
[
"Zhang",
"Kejun",
""
]
] | TITLE: A Survey on Music Generation from Single-Modal, Cross-Modal, and
Multi-Modal Perspectives: Data, Methods, and Challenges
ABSTRACT: Multi-modal music generation, using multiple modalities like images, video,
and text alongside musical scores and audio as guidance, is an emerging
research area with broad applications. This paper reviews this field,
categorizing music generation systems from the perspective of modalities. It
covers modality representation, multi-modal data alignment, and their
utilization to guide music generation. We also discuss current datasets and
evaluation methods. Key challenges in this area include effective multi-modal
integration, large-scale comprehensive datasets, and systematic evaluation
methods. Finally, we provide an outlook on future research directions focusing
on multi-modal fusion, alignment, data, and evaluation.
|
2504.00839 | Yuchen Liu | Yuchen Liu, Lino Lerch, Luigi Palmieri, Andrey Rudenko, Sebastian
Koch, Timo Ropinski, Marco Aiello | Context-Aware Human Behavior Prediction Using Multimodal Large Language
Models: Challenges and Insights | null | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Predicting human behavior in shared environments is crucial for safe and
efficient human-robot interaction. Traditional data-driven methods to that end
are pre-trained on domain-specific datasets, activity types, and prediction
horizons. In contrast, the recent breakthroughs in Large Language Models (LLMs)
promise open-ended cross-domain generalization to describe various human
activities and make predictions in any context. In particular, Multimodal LLMs
(MLLMs) are able to integrate information from various sources, achieving more
contextual awareness and improved scene understanding. The difficulty in
applying general-purpose MLLMs directly for prediction stems from their limited
capacity for processing large input sequences, sensitivity to prompt design,
and expensive fine-tuning. In this paper, we present a systematic analysis of
applying pre-trained MLLMs for context-aware human behavior prediction. To this
end, we introduce a modular multimodal human activity prediction framework that
allows us to benchmark various MLLMs, input variations, In-Context Learning
(ICL), and autoregressive techniques. Our evaluation indicates that the
best-performing framework configuration is able to reach 92.8% semantic
similarity and 66.1% exact label accuracy in predicting human behaviors in the
target frame.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:28:19 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Liu",
"Yuchen",
""
],
[
"Lerch",
"Lino",
""
],
[
"Palmieri",
"Luigi",
""
],
[
"Rudenko",
"Andrey",
""
],
[
"Koch",
"Sebastian",
""
],
[
"Ropinski",
"Timo",
""
],
[
"Aiello",
"Marco",
""
]
] | TITLE: Context-Aware Human Behavior Prediction Using Multimodal Large Language
Models: Challenges and Insights
ABSTRACT: Predicting human behavior in shared environments is crucial for safe and
efficient human-robot interaction. Traditional data-driven methods to that end
are pre-trained on domain-specific datasets, activity types, and prediction
horizons. In contrast, the recent breakthroughs in Large Language Models (LLMs)
promise open-ended cross-domain generalization to describe various human
activities and make predictions in any context. In particular, Multimodal LLMs
(MLLMs) are able to integrate information from various sources, achieving more
contextual awareness and improved scene understanding. The difficulty in
applying general-purpose MLLMs directly for prediction stems from their limited
capacity for processing large input sequences, sensitivity to prompt design,
and expensive fine-tuning. In this paper, we present a systematic analysis of
applying pre-trained MLLMs for context-aware human behavior prediction. To this
end, we introduce a modular multimodal human activity prediction framework that
allows us to benchmark various MLLMs, input variations, In-Context Learning
(ICL), and autoregressive techniques. Our evaluation indicates that the
best-performing framework configuration is able to reach 92.8% semantic
similarity and 66.1% exact label accuracy in predicting human behaviors in the
target frame.
|
2504.00843 | Hyoungwook Jin | Hyoungwook Jin, Yoonsu Kim, Dongyun Jung, Seungju Kim, Kiyoon Choi,
Jinho Son, Juho Kim | Investigating Large Language Models in Diagnosing Students' Cognitive
Skills in Math Problem-solving | null | null | null | null | cs.AI cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Mathematics learning entails mastery of both content knowledge and cognitive
processing of knowing, applying, and reasoning with it. Automated math
assessment primarily has focused on grading students' exhibition of content
knowledge by finding textual evidence, such as specific numbers, formulas, and
statements. Recent advancements in problem-solving, image recognition, and
reasoning capabilities of large language models (LLMs) show promise for nuanced
evaluation of students' cognitive skills. Diagnosing cognitive skills needs to
infer students' thinking processes beyond textual evidence, which is an
underexplored task in LLM-based automated assessment. In this work, we
investigate how state-of-the-art LLMs diagnose students' cognitive skills in
mathematics. We constructed MathCog, a novel benchmark dataset comprising 639
student responses to 110 expert-curated middle school math problems, each
annotated with detailed teachers' diagnoses based on cognitive skill
checklists. Using MathCog, we evaluated 16 closed and open LLMs of varying
model sizes and vendors. Our evaluation reveals that even the state-of-the-art
LLMs struggle with the task, with all F1 scores below 0.5, and tend to exhibit
strong false confidence for incorrect cases ($r_s=.617$). We also found that
model size positively correlates with the diagnosis performance ($r_s=.771$).
Finally, we discuss the implications of these findings, the overconfidence
issue, and directions for improving automated cognitive skill diagnosis.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:29:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Jin",
"Hyoungwook",
""
],
[
"Kim",
"Yoonsu",
""
],
[
"Jung",
"Dongyun",
""
],
[
"Kim",
"Seungju",
""
],
[
"Choi",
"Kiyoon",
""
],
[
"Son",
"Jinho",
""
],
[
"Kim",
"Juho",
""
]
] | TITLE: Investigating Large Language Models in Diagnosing Students' Cognitive
Skills in Math Problem-solving
ABSTRACT: Mathematics learning entails mastery of both content knowledge and cognitive
processing of knowing, applying, and reasoning with it. Automated math
assessment primarily has focused on grading students' exhibition of content
knowledge by finding textual evidence, such as specific numbers, formulas, and
statements. Recent advancements in problem-solving, image recognition, and
reasoning capabilities of large language models (LLMs) show promise for nuanced
evaluation of students' cognitive skills. Diagnosing cognitive skills needs to
infer students' thinking processes beyond textual evidence, which is an
underexplored task in LLM-based automated assessment. In this work, we
investigate how state-of-the-art LLMs diagnose students' cognitive skills in
mathematics. We constructed MathCog, a novel benchmark dataset comprising 639
student responses to 110 expert-curated middle school math problems, each
annotated with detailed teachers' diagnoses based on cognitive skill
checklists. Using MathCog, we evaluated 16 closed and open LLMs of varying
model sizes and vendors. Our evaluation reveals that even the state-of-the-art
LLMs struggle with the task, with all F1 scores below 0.5, and tend to exhibit
strong false confidence for incorrect cases ($r_s=.617$). We also found that
model size positively correlates with the diagnosis performance ($r_s=.771$).
Finally, we discuss the implications of these findings, the overconfidence
issue, and directions for improving automated cognitive skill diagnosis.
|
2504.00844 | Abdelrahman Elskhawy | Abdelrahman Elskhawy, Mengze Li, Nassir Navab, Benjamin Busam | PRISM-0: A Predicate-Rich Scene Graph Generation Framework for Zero-Shot
Open-Vocabulary Tasks | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | In Scene Graph Generation (SGG), one extracts a structured representation from
visual inputs in the form of object nodes and the predicates connecting them. This
facilitates image-based understanding and reasoning for various downstream
tasks. Although fully supervised SGG approaches showed steady performance
improvements, they suffer from a severe training bias. This is caused by the
availability of only small subsets of curated data and exhibits long-tail
predicate distribution issues with a lack of predicate diversity adversely
affecting downstream tasks. To overcome this, we introduce PRISM-0, a framework
for zero-shot open-vocabulary SGG that bootstraps foundation models in a
bottom-up approach to capture the whole spectrum of diverse, open-vocabulary
predicate prediction. Detected object pairs are filtered and passed to a Vision
Language Model (VLM) that generates descriptive captions. These are used to
prompt an LLM to generate fine- and coarse-grained predicates for the pair. The
predicates are then validated using a VQA model to provide a final SGG. With
the modular and dataset-independent PRISM-0, we can enrich existing SG datasets
such as Visual Genome (VG). Experiments illustrate that PRISM-0 generates
semantically meaningful graphs that improve downstream tasks such as Image
Captioning and Sentence-to-Graph Retrieval, with performance on par with the
best fully supervised methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:29:51 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Elskhawy",
"Abdelrahman",
""
],
[
"Li",
"Mengze",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
]
] | TITLE: PRISM-0: A Predicate-Rich Scene Graph Generation Framework for Zero-Shot
Open-Vocabulary Tasks
ABSTRACT: In Scene Graph Generation (SGG), one extracts a structured representation from
visual inputs in the form of object nodes and the predicates connecting them. This
facilitates image-based understanding and reasoning for various downstream
tasks. Although fully supervised SGG approaches showed steady performance
improvements, they suffer from a severe training bias. This is caused by the
availability of only small subsets of curated data and exhibits long-tail
predicate distribution issues with a lack of predicate diversity adversely
affecting downstream tasks. To overcome this, we introduce PRISM-0, a framework
for zero-shot open-vocabulary SGG that bootstraps foundation models in a
bottom-up approach to capture the whole spectrum of diverse, open-vocabulary
predicate prediction. Detected object pairs are filtered and passed to a Vision
Language Model (VLM) that generates descriptive captions. These are used to
prompt an LLM to generate fine- and coarse-grained predicates for the pair. The
predicates are then validated using a VQA model to provide a final SGG. With
the modular and dataset-independent PRISM-0, we can enrich existing SG datasets
such as Visual Genome (VG). Experiments illustrate that PRISM-0 generates
semantically meaningful graphs that improve downstream tasks such as Image
Captioning and Sentence-to-Graph Retrieval, with performance on par with the
best fully supervised methods.
|
2504.00848 | Yushan Zhang | Yushan Zhang, Aljo\v{s}a O\v{s}ep, Laura Leal-Taix\'e, Tim Meinhardt | Zero-Shot 4D Lidar Panoptic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot 4D segmentation and recognition of arbitrary objects in Lidar is
crucial for embodied navigation, with applications ranging from streaming
perception to semantic mapping and localization. However, the primary challenge
in advancing research and developing generalized, versatile methods for
spatio-temporal scene understanding in Lidar lies in the scarcity of datasets
that provide the necessary diversity and scale of annotations. To overcome these
challenges, we propose SAL-4D (Segment Anything in Lidar--4D), a method that
utilizes multi-modal robotic sensor setups as a bridge to distill recent
developments in Video Object Segmentation (VOS) in conjunction with
off-the-shelf Vision-Language foundation models to Lidar. We utilize VOS models
to pseudo-label tracklets in short video sequences, annotate these tracklets
with sequence-level CLIP tokens, and lift them to the 4D Lidar space using
calibrated multi-modal sensory setups to distill them to our SAL-4D model. Due
to temporal consistent predictions, we outperform prior art in 3D Zero-Shot
Lidar Panoptic Segmentation (LPS) over $5$ PQ, and unlock Zero-Shot 4D-LPS.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:36:12 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Yushan",
""
],
[
"Ošep",
"Aljoša",
""
],
[
"Leal-Taixé",
"Laura",
""
],
[
"Meinhardt",
"Tim",
""
]
] | TITLE: Zero-Shot 4D Lidar Panoptic Segmentation
ABSTRACT: Zero-shot 4D segmentation and recognition of arbitrary objects in Lidar is
crucial for embodied navigation, with applications ranging from streaming
perception to semantic mapping and localization. However, the primary challenge
in advancing research and developing generalized, versatile methods for
spatio-temporal scene understanding in Lidar lies in the scarcity of datasets
that provide the necessary diversity and scale of annotations. To overcome these
challenges, we propose SAL-4D (Segment Anything in Lidar--4D), a method that
utilizes multi-modal robotic sensor setups as a bridge to distill recent
developments in Video Object Segmentation (VOS) in conjunction with
off-the-shelf Vision-Language foundation models to Lidar. We utilize VOS models
to pseudo-label tracklets in short video sequences, annotate these tracklets
with sequence-level CLIP tokens, and lift them to the 4D Lidar space using
calibrated multi-modal sensory setups to distill them to our SAL-4D model. Due
to temporally consistent predictions, we outperform prior art in 3D Zero-Shot
Lidar Panoptic Segmentation (LPS) by over $5$ PQ, and unlock Zero-Shot 4D-LPS.
|
2504.00850 | Zhuang Qi | Zhuang Qi, Runhui Zhang, Lei Meng, Wei Wu, Yachong Zhang, and Xiangxu
Meng | Global Intervention and Distillation for Federated Out-of-Distribution
Generalization | null | ICME 2025 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attribute skew in federated learning leads local models to focus on learning
non-causal associations, guiding them towards inconsistent optimization
directions, which inevitably results in performance degradation and unstable
convergence. Existing methods typically leverage data augmentation to enhance
sample diversity or employ knowledge distillation to learn invariant
representations. However, the instability in the quality of generated data and
the lack of domain information limit their performance on unseen samples. To
address these issues, this paper presents a global intervention and
distillation method, termed FedGID, which utilizes diverse attribute features
for backdoor adjustment to break the spurious association between background
and label. It includes two main modules. The global intervention module
adaptively decouples objects and backgrounds in images and injects background
information into random samples to intervene in the sample distribution,
linking backgrounds to all categories so that the model does not treat
background-label associations as causal. The global distillation module
leverages a unified knowledge base to guide the representation learning of
client models, preventing local models from overfitting to client-specific
attributes. Experimental results on three datasets demonstrate that FedGID
enhances the model's ability to focus on the main subjects in unseen data and
outperforms existing methods in collaborative modeling.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:36:24 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Qi",
"Zhuang",
""
],
[
"Zhang",
"Runhui",
""
],
[
"Meng",
"Lei",
""
],
[
"Wu",
"Wei",
""
],
[
"Zhang",
"Yachong",
""
],
[
"Meng",
"Xiangxu",
""
]
] | TITLE: Global Intervention and Distillation for Federated Out-of-Distribution
Generalization
ABSTRACT: Attribute skew in federated learning leads local models to focus on learning
non-causal associations, guiding them towards inconsistent optimization
directions, which inevitably results in performance degradation and unstable
convergence. Existing methods typically leverage data augmentation to enhance
sample diversity or employ knowledge distillation to learn invariant
representations. However, the instability in the quality of generated data and
the lack of domain information limit their performance on unseen samples. To
address these issues, this paper presents a global intervention and
distillation method, termed FedGID, which utilizes diverse attribute features
for backdoor adjustment to break the spurious association between background
and label. It includes two main modules. The global intervention module
adaptively decouples objects and backgrounds in images and injects background
information into random samples to intervene in the sample distribution,
linking backgrounds to all categories so that the model does not treat
background-label associations as causal. The global distillation module
leverages a unified knowledge base to guide the representation learning of
client models, preventing local models from overfitting to client-specific
attributes. Experimental results on three datasets demonstrate that FedGID
enhances the model's ability to focus on the main subjects in unseen data and
outperforms existing methods in collaborative modeling.
|
2504.00857 | Siba Haidar | Mohammad Kassir and Siba Haidar and Antoun Yaacoub | Exploring Personalized Federated Learning Architectures for Violence
Detection in Surveillance Videos | 7 pages, 5 figures, 4 tables | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The challenge of detecting violent incidents in urban surveillance systems is
compounded by the voluminous and diverse nature of video data. This paper
presents a targeted approach using Personalized Federated Learning (PFL) to
address these issues, specifically employing the Federated Learning with
Personalization Layers method within the Flower framework. Our methodology
adapts learning models to the unique data characteristics of each surveillance
node, effectively managing the heterogeneous and non-IID nature of surveillance
video data. Through rigorous experiments conducted on balanced and imbalanced
datasets, our PFL models demonstrated enhanced accuracy and efficiency,
achieving up to 99.3% accuracy. This study underscores the potential of PFL to
significantly improve the scalability and effectiveness of surveillance
systems, offering a robust, privacy-preserving solution for violence detection
in complex urban environments.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:47:14 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kassir",
"Mohammad",
""
],
[
"Haidar",
"Siba",
""
],
[
"Yaacoub",
"Antoun",
""
]
] | TITLE: Exploring Personalized Federated Learning Architectures for Violence
Detection in Surveillance Videos
ABSTRACT: The challenge of detecting violent incidents in urban surveillance systems is
compounded by the voluminous and diverse nature of video data. This paper
presents a targeted approach using Personalized Federated Learning (PFL) to
address these issues, specifically employing the Federated Learning with
Personalization Layers method within the Flower framework. Our methodology
adapts learning models to the unique data characteristics of each surveillance
node, effectively managing the heterogeneous and non-IID nature of surveillance
video data. Through rigorous experiments conducted on balanced and imbalanced
datasets, our PFL models demonstrated enhanced accuracy and efficiency,
achieving up to 99.3% accuracy. This study underscores the potential of PFL to
significantly improve the scalability and effectiveness of surveillance
systems, offering a robust, privacy-preserving solution for violence detection
in complex urban environments.
|
2504.00860 | Lucy Havens | Lucy Havens, Benjamin Bach, Melissa Terras, Beatrice Alex | Investigating the Capabilities and Limitations of Machine Learning for
Identifying Bias in English Language Data with Information and Heritage
Professionals | Accepted to the 2025 CHI Conference on Human Factors in Computing
Systems (CHI '25) | null | 10.1145/3706598.3713217 | null | cs.CL cs.AI cs.CY cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite numerous efforts to mitigate their biases, ML systems continue to
harm already-marginalized people. While predominant ML approaches assume bias
can be removed and fair models can be created, we show that these are not
always possible, nor desirable, goals. We reframe the problem of ML bias by
creating models to identify biased language, drawing attention to a dataset's
biases rather than trying to remove them. Then, through a workshop, we
evaluated the models for a specific use case: workflows of information and
heritage professionals. Our findings demonstrate the limitations of ML for
identifying bias due to its contextual nature, the way in which approaches to
mitigating it can simultaneously privilege and oppress different communities,
and its inevitability. We demonstrate the need to expand ML approaches to bias
and fairness, providing a mixed-methods approach to investigating the
feasibility of removing bias or achieving fairness in a given ML use case.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:51:25 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Havens",
"Lucy",
""
],
[
"Bach",
"Benjamin",
""
],
[
"Terras",
"Melissa",
""
],
[
"Alex",
"Beatrice",
""
]
] | TITLE: Investigating the Capabilities and Limitations of Machine Learning for
Identifying Bias in English Language Data with Information and Heritage
Professionals
ABSTRACT: Despite numerous efforts to mitigate their biases, ML systems continue to
harm already-marginalized people. While predominant ML approaches assume bias
can be removed and fair models can be created, we show that these are not
always possible, nor desirable, goals. We reframe the problem of ML bias by
creating models to identify biased language, drawing attention to a dataset's
biases rather than trying to remove them. Then, through a workshop, we
evaluated the models for a specific use case: workflows of information and
heritage professionals. Our findings demonstrate the limitations of ML for
identifying bias due to its contextual nature, the way in which approaches to
mitigating it can simultaneously privilege and oppress different communities,
and its inevitability. We demonstrate the need to expand ML approaches to bias
and fairness, providing a mixed-methods approach to investigating the
feasibility of removing bias or achieving fairness in a given ML use case.
|
2504.00870 | Long Peng | Xiaohua Qi, Renda Li, Long Peng, Qiang Ling, Jun Yu, Ziyi Chen, Peng
Chang, Mei Han, Jing Xiao | Data-free Knowledge Distillation with Diffusion Models | Accepted by ICME2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recently, Data-Free Knowledge Distillation (DFKD) has garnered attention and
can transfer knowledge from a teacher neural network to a student neural
network without requiring any access to training data. Although diffusion
models are adept at synthesizing high-fidelity photorealistic images across
various domains, existing methods cannot be easily applied to DFKD. To
bridge that gap, this paper proposes a novel approach based on diffusion
models, DiffDFKD. Specifically, DiffDFKD involves targeted optimizations in two
key areas. Firstly, DiffDFKD utilizes valuable information from teacher models
to guide the pre-trained diffusion models' data synthesis, generating datasets
that mirror the training data distribution and effectively bridge domain gaps.
Secondly, to reduce computational burdens, DiffDFKD introduces Latent CutMix
Augmentation, an efficient technique, to enhance the diversity of diffusion
model-generated images for DFKD while preserving key attributes for effective
knowledge transfer. Extensive experiments validate the efficacy of DiffDFKD,
yielding state-of-the-art results exceeding existing DFKD approaches. We
release our code at https://github.com/xhqi0109/DiffDFKD.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 15:00:33 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Qi",
"Xiaohua",
""
],
[
"Li",
"Renda",
""
],
[
"Peng",
"Long",
""
],
[
"Ling",
"Qiang",
""
],
[
"Yu",
"Jun",
""
],
[
"Chen",
"Ziyi",
""
],
[
"Chang",
"Peng",
""
],
[
"Han",
"Mei",
""
],
[
"Xiao",
"Jing",
""
]
] | TITLE: Data-free Knowledge Distillation with Diffusion Models
ABSTRACT: Recently, Data-Free Knowledge Distillation (DFKD) has garnered attention and
can transfer knowledge from a teacher neural network to a student neural
network without requiring any access to training data. Although diffusion
models are adept at synthesizing high-fidelity photorealistic images across
various domains, existing methods cannot be easily applied to DFKD. To
bridge that gap, this paper proposes a novel approach based on diffusion
models, DiffDFKD. Specifically, DiffDFKD involves targeted optimizations in two
key areas. Firstly, DiffDFKD utilizes valuable information from teacher models
to guide the pre-trained diffusion models' data synthesis, generating datasets
that mirror the training data distribution and effectively bridge domain gaps.
Secondly, to reduce computational burdens, DiffDFKD introduces Latent CutMix
Augmentation, an efficient technique, to enhance the diversity of diffusion
model-generated images for DFKD while preserving key attributes for effective
knowledge transfer. Extensive experiments validate the efficacy of DiffDFKD,
yielding state-of-the-art results exceeding existing DFKD approaches. We
release our code at https://github.com/xhqi0109/DiffDFKD.
|
2504.00883 | Zhenyi Liao | Zhenyi Liao, Qingsong Xie, Yanhao Zhang, Zijian Kong, Haonan Lu,
Zhenyu Yang, Zhijie Deng | Improved Visual-Spatial Reasoning via R1-Zero-Like Training | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Increasing attention has been placed on improving the reasoning capacities of
multi-modal large language models (MLLMs). As the cornerstone for AI agents
that function in the physical realm, video-based visual-spatial intelligence
(VSI) emerges as one of the most pivotal reasoning capabilities of MLLMs. This
work conducts a first, in-depth study on improving the visual-spatial reasoning
of MLLMs via R1-Zero-like training. Technically, we first identify that the
visual-spatial reasoning capacities of small- to medium-sized Qwen2-VL models
cannot be activated via Chain of Thought (CoT) prompts. We then incorporate
GRPO training for improved visual-spatial reasoning, using the carefully
curated VSI-100k dataset, following DeepSeek-R1-Zero. During the investigation,
we identify the necessity to keep the KL penalty (even with a small value) in
GRPO. With just 120 GPU hours, our vsGRPO-2B model, fine-tuned from
Qwen2-VL-2B, can outperform the base model by 12.1% and surpass GPT-4o.
Moreover, our vsGRPO-7B model, fine-tuned from Qwen2-VL-7B, achieves
performance comparable to that of the best open-source model
LLaVA-NeXT-Video-72B. Additionally, we compare vsGRPO to supervised fine-tuning
and direct preference optimization baselines and observe strong performance
superiority. The code and dataset will be available soon.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 15:11:11 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Liao",
"Zhenyi",
""
],
[
"Xie",
"Qingsong",
""
],
[
"Zhang",
"Yanhao",
""
],
[
"Kong",
"Zijian",
""
],
[
"Lu",
"Haonan",
""
],
[
"Yang",
"Zhenyu",
""
],
[
"Deng",
"Zhijie",
""
]
] | TITLE: Improved Visual-Spatial Reasoning via R1-Zero-Like Training
ABSTRACT: Increasing attention has been placed on improving the reasoning capacities of
multi-modal large language models (MLLMs). As the cornerstone for AI agents
that function in the physical realm, video-based visual-spatial intelligence
(VSI) emerges as one of the most pivotal reasoning capabilities of MLLMs. This
work conducts a first, in-depth study on improving the visual-spatial reasoning
of MLLMs via R1-Zero-like training. Technically, we first identify that the
visual-spatial reasoning capacities of small- to medium-sized Qwen2-VL models
cannot be activated via Chain of Thought (CoT) prompts. We then incorporate
GRPO training for improved visual-spatial reasoning, using the carefully
curated VSI-100k dataset, following DeepSeek-R1-Zero. During the investigation,
we identify the necessity to keep the KL penalty (even with a small value) in
GRPO. With just 120 GPU hours, our vsGRPO-2B model, fine-tuned from
Qwen2-VL-2B, can outperform the base model by 12.1% and surpass GPT-4o.
Moreover, our vsGRPO-7B model, fine-tuned from Qwen2-VL-7B, achieves
performance comparable to that of the best open-source model
LLaVA-NeXT-Video-72B. Additionally, we compare vsGRPO to supervised fine-tuning
and direct preference optimization baselines and observe strong performance
superiority. The code and dataset will be available soon.
|
2504.00901 | Yongchuan Cui | Enzhe Sun and Yongchuan Cui and Peng Liu and Jining Yan | A Decade of Deep Learning for Remote Sensing Spatiotemporal Fusion:
Advances, Challenges, and Opportunities | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hardware limitations and satellite launch costs make direct acquisition of
high temporal-spatial resolution remote sensing imagery challenging. Remote
sensing spatiotemporal fusion (STF) technology addresses this problem by
merging high temporal but low spatial resolution imagery with high spatial but
low temporal resolution imagery to efficiently generate high spatiotemporal
resolution satellite images. STF provides unprecedented observational
capabilities for land surface change monitoring, agricultural management, and
environmental research. Deep learning (DL) methods have revolutionized the
remote sensing spatiotemporal fusion field over the past decade through
powerful automatic feature extraction and nonlinear modeling capabilities,
significantly outperforming traditional methods in handling complex
spatiotemporal data. Despite the rapid development of DL-based remote sensing
STF, the community lacks a systematic review of this quickly evolving field.
This paper comprehensively reviews DL developments in remote sensing STF over
the last decade, analyzing key research trends, method classifications,
commonly used datasets, and evaluation metrics. It discusses major challenges
in existing research and identifies promising future research directions as
references for researchers in this field to inspire new ideas. The specific
models, datasets, and other information mentioned in this article have been
collected in:
https://github.com/yc-cui/Deep-Learning-Spatiotemporal-Fusion-Survey.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 15:30:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sun",
"Enzhe",
""
],
[
"Cui",
"Yongchuan",
""
],
[
"Liu",
"Peng",
""
],
[
"Yan",
"Jining",
""
]
] | TITLE: A Decade of Deep Learning for Remote Sensing Spatiotemporal Fusion:
Advances, Challenges, and Opportunities
ABSTRACT: Hardware limitations and satellite launch costs make direct acquisition of
high temporal-spatial resolution remote sensing imagery challenging. Remote
sensing spatiotemporal fusion (STF) technology addresses this problem by
merging high temporal but low spatial resolution imagery with high spatial but
low temporal resolution imagery to efficiently generate high spatiotemporal
resolution satellite images. STF provides unprecedented observational
capabilities for land surface change monitoring, agricultural management, and
environmental research. Deep learning (DL) methods have revolutionized the
remote sensing spatiotemporal fusion field over the past decade through
powerful automatic feature extraction and nonlinear modeling capabilities,
significantly outperforming traditional methods in handling complex
spatiotemporal data. Despite the rapid development of DL-based remote sensing
STF, the community lacks a systematic review of this quickly evolving field.
This paper comprehensively reviews DL developments in remote sensing STF over
the last decade, analyzing key research trends, method classifications,
commonly used datasets, and evaluation metrics. It discusses major challenges
in existing research and identifies promising future research directions as
references for researchers in this field to inspire new ideas. The specific
models, datasets, and other information mentioned in this article have been
collected in:
https://github.com/yc-cui/Deep-Learning-Spatiotemporal-Fusion-Survey.
|
2504.00908 | Haoxuan Li | Haoxuan Li, Wei Song, Aofan Liu, Peiwu Qin | DBF-UNet: A Two-Stage Framework for Carotid Artery Segmentation with
Pseudo-Label Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image analysis faces significant challenges due to limited annotation
data, particularly in three-dimensional carotid artery segmentation tasks,
where existing datasets exhibit spatially discontinuous slice annotations with
only a small portion of expert-labeled slices in complete 3D volumetric data.
To address this challenge, we propose a two-stage segmentation framework.
First, we construct continuous vessel centerlines by interpolating between
annotated slice centroids and propagate labels along these centerlines to
generate interpolated annotations for unlabeled slices. The slices with expert
annotations are used for fine-tuning SAM-Med2D, while the interpolated labels
on unlabeled slices serve as prompts to guide segmentation during inference. In
the second stage, we propose a novel Dense Bidirectional Feature Fusion UNet
(DBF-UNet). This lightweight architecture achieves precise segmentation of
complete 3D vascular structures. The network incorporates bidirectional feature
fusion in the encoder and integrates multi-scale feature aggregation with dense
connectivity for effective feature reuse. Experimental validation on public
datasets demonstrates that our proposed method effectively addresses the sparse
annotation challenge in carotid artery segmentation while achieving superior
performance compared to existing approaches. The source code is available at
https://github.com/Haoxuanli-Thu/DBF-UNet.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 15:41:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Haoxuan",
""
],
[
"Song",
"Wei",
""
],
[
"Liu",
"Aofan",
""
],
[
"Qin",
"Peiwu",
""
]
] | TITLE: DBF-UNet: A Two-Stage Framework for Carotid Artery Segmentation with
Pseudo-Label Generation
ABSTRACT: Medical image analysis faces significant challenges due to limited annotation
data, particularly in three-dimensional carotid artery segmentation tasks,
where existing datasets exhibit spatially discontinuous slice annotations with
only a small portion of expert-labeled slices in complete 3D volumetric data.
To address this challenge, we propose a two-stage segmentation framework.
First, we construct continuous vessel centerlines by interpolating between
annotated slice centroids and propagate labels along these centerlines to
generate interpolated annotations for unlabeled slices. The slices with expert
annotations are used for fine-tuning SAM-Med2D, while the interpolated labels
on unlabeled slices serve as prompts to guide segmentation during inference. In
the second stage, we propose a novel Dense Bidirectional Feature Fusion UNet
(DBF-UNet). This lightweight architecture achieves precise segmentation of
complete 3D vascular structures. The network incorporates bidirectional feature
fusion in the encoder and integrates multi-scale feature aggregation with dense
connectivity for effective feature reuse. Experimental validation on public
datasets demonstrates that our proposed method effectively addresses the sparse
annotation challenge in carotid artery segmentation while achieving superior
performance compared to existing approaches. The source code is available at
https://github.com/Haoxuanli-Thu/DBF-UNet.
|
2504.00921 | Chenguang Xiao | Chenguang Xiao, Abhirup Ghosh, Han Wu, Shuo Wang, Diederick van Thiel | Benchmarking Federated Machine Unlearning methods for Tabular Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine unlearning, which enables a model to forget specific data upon
request, is increasingly relevant in the era of privacy-centric machine
learning, particularly within federated learning (FL) environments. This paper
presents a pioneering study on benchmarking machine unlearning methods within a
federated setting for tabular data, addressing the unique challenges posed by
cross-silo FL where data privacy and communication efficiency are paramount. We
explore unlearning at the feature and instance levels, employing both machine
learning, random forest and logistic regression models. Our methodology
benchmarks various unlearning algorithms, including fine-tuning and
gradient-based approaches, across multiple datasets, with metrics focused on
fidelity, certifiability, and computational efficiency. Experiments demonstrate
that while fidelity remains high across methods, tree-based models excel in
certifiability, ensuring exact unlearning, whereas gradient-based methods show
improved computational efficiency. This study provides critical insights into
the design and selection of unlearning algorithms tailored to the FL
environment, offering a foundation for further research in privacy-preserving
machine learning.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 15:53:36 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xiao",
"Chenguang",
""
],
[
"Ghosh",
"Abhirup",
""
],
[
"Wu",
"Han",
""
],
[
"Wang",
"Shuo",
""
],
[
"van Thiel",
"Diederick",
""
]
] | TITLE: Benchmarking Federated Machine Unlearning methods for Tabular Data
ABSTRACT: Machine unlearning, which enables a model to forget specific data upon
request, is increasingly relevant in the era of privacy-centric machine
learning, particularly within federated learning (FL) environments. This paper
presents a pioneering study on benchmarking machine unlearning methods within a
federated setting for tabular data, addressing the unique challenges posed by
cross-silo FL where data privacy and communication efficiency are paramount. We
explore unlearning at the feature and instance levels, employing both random
forest and logistic regression machine learning models. Our methodology
benchmarks various unlearning algorithms, including fine-tuning and
gradient-based approaches, across multiple datasets, with metrics focused on
fidelity, certifiability, and computational efficiency. Experiments demonstrate
that while fidelity remains high across methods, tree-based models excel in
certifiability, ensuring exact unlearning, whereas gradient-based methods show
improved computational efficiency. This study provides critical insights into
the design and selection of unlearning algorithms tailored to the FL
environment, offering a foundation for further research in privacy-preserving
machine learning.
|
2504.00930 | Sebastian M\"uller | Sebastian M\"uller, Vanessa Toborek, Tam\'as Horv\'ath, Christian
Bauckhage | CFIRE: A General Method for Combining Local Explanations | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a novel eXplainable AI algorithm to compute faithful,
easy-to-understand, and complete global decision rules from local explanations
for tabular data by combining XAI methods with closed frequent itemset mining.
Our method can be used with any local explainer that indicates which dimensions
are important for a given sample for a given black-box decision. This property
allows our algorithm to choose among different local explainers, addressing the
disagreement problem, i.e., the observation that no single explanation method
consistently outperforms others across models and datasets. Unlike usual
experimental methodology, our evaluation also accounts for the Rashomon effect
in model explainability. To this end, we demonstrate the robustness of our
approach in finding suitable rules for nearly all of the 700 black-box models
we considered across 14 benchmark datasets. The results also show that our
method exhibits improved runtime, high precision and F1-score while generating
compact and complete rules.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:04:33 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Müller",
"Sebastian",
""
],
[
"Toborek",
"Vanessa",
""
],
[
"Horváth",
"Tamás",
""
],
[
"Bauckhage",
"Christian",
""
]
] | TITLE: CFIRE: A General Method for Combining Local Explanations
ABSTRACT: We propose a novel eXplainable AI algorithm to compute faithful,
easy-to-understand, and complete global decision rules from local explanations
for tabular data by combining XAI methods with closed frequent itemset mining.
Our method can be used with any local explainer that indicates which dimensions
are important for a given sample for a given black-box decision. This property
allows our algorithm to choose among different local explainers, addressing the
disagreement problem, i.e., the observation that no single explanation method
consistently outperforms others across models and datasets. Unlike usual
experimental methodology, our evaluation also accounts for the Rashomon effect
in model explainability. To this end, we demonstrate the robustness of our
approach in finding suitable rules for nearly all of the 700 black-box models
we considered across 14 benchmark datasets. The results also show that our
method exhibits improved runtime, high precision and F1-score while generating
compact and complete rules.
|
2504.00934 | Zifeng Wang | Zifeng Wang, Junyi Gao, Benjamin Danek, Brandon Theodorou, Ruba Shaik,
Shivashankar Thati, Seunghyun Won, Jimeng Sun | InformGen: An AI Copilot for Accurate and Compliant Clinical Research
Consent Document Generation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Leveraging large language models (LLMs) to generate high-stakes documents,
such as informed consent forms (ICFs), remains a significant challenge due to
the extreme need for regulatory compliance and factual accuracy. Here, we
present InformGen, an LLM-driven copilot for accurate and compliant ICF
drafting by optimized knowledge document parsing and content generation, with
humans in the loop. We further construct a benchmark dataset comprising
protocols and ICFs from 900 clinical trials. Experimental results demonstrate
that InformGen achieves near 100% compliance with 18 core regulatory rules
derived from FDA guidelines, outperforming a vanilla GPT-4o model by up to 30%.
Additionally, a user study with five annotators shows that InformGen, when
integrated with manual intervention, attains over 90% factual accuracy,
significantly surpassing the vanilla GPT-4o model's 57%-82%. Crucially,
InformGen ensures traceability by providing inline citations to source
protocols, enabling easy verification and maintaining the highest standards of
factual integrity.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:14:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Zifeng",
""
],
[
"Gao",
"Junyi",
""
],
[
"Danek",
"Benjamin",
""
],
[
"Theodorou",
"Brandon",
""
],
[
"Shaik",
"Ruba",
""
],
[
"Thati",
"Shivashankar",
""
],
[
"Won",
"Seunghyun",
""
],
[
"Sun",
"Jimeng",
""
]
] | TITLE: InformGen: An AI Copilot for Accurate and Compliant Clinical Research
Consent Document Generation
ABSTRACT: Leveraging large language models (LLMs) to generate high-stakes documents,
such as informed consent forms (ICFs), remains a significant challenge due to
the extreme need for regulatory compliance and factual accuracy. Here, we
present InformGen, an LLM-driven copilot for accurate and compliant ICF
drafting by optimized knowledge document parsing and content generation, with
humans in the loop. We further construct a benchmark dataset comprising
protocols and ICFs from 900 clinical trials. Experimental results demonstrate
that InformGen achieves near 100% compliance with 18 core regulatory rules
derived from FDA guidelines, outperforming a vanilla GPT-4o model by up to 30%.
Additionally, a user study with five annotators shows that InformGen, when
integrated with manual intervention, attains over 90% factual accuracy,
significantly surpassing the vanilla GPT-4o model's 57%-82%. Crucially,
InformGen ensures traceability by providing inline citations to source
protocols, enabling easy verification and maintaining the highest standards of
factual integrity.
|
2504.00943 | Snigdha Agarwal | Snigdha Agarwal, Ganaraja V H, Neelam Sinha, Abhilasha Indoria,
Netravathi M, Jitender Saini | Graph Classification and Radiomics Signature for Identification of
Tuberculous Meningitis | 19 pages, 6 figures, 3 tables | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Introduction: Tuberculous meningitis (TBM) is a serious brain infection
caused by Mycobacterium tuberculosis, characterized by inflammation of the
meninges covering the brain and spinal cord. Diagnosis often requires invasive
lumbar puncture (LP) and cerebrospinal fluid (CSF) analysis. Objectives: This
study aims to classify TBM patients using T1-weighted (T1w) non-contrast
Magnetic Resonance Imaging (MRI) scans. We hypothesize that specific brain
regions, such as the interpeduncular cisterns, bone, and corpus callosum,
contain visual markers that can non-invasively distinguish TBM patients from
healthy controls. We propose a novel Pixel-array Graphs Classifier
(PAG-Classifier) that leverages spatial relationships between neighbouring 3D
pixels in a graph-based framework to extract significant features through eigen
decomposition. These features are then used to train machine learning
classifiers for effective patient classification. We validate our approach
using a radiomics-based methodology, classifying TBM patients based on relevant
radiomics features. Results: We utilized an internal dataset consisting of 52
scans, 32 from confirmed TBM patients based on mycobacteria detection in CSF,
and 20 from healthy individuals. We achieved a 5-fold cross-validated average
F1 score of 85.71% for cistern regions with our PAG-Classifier and 92.85% with
the radiomics features classifier, surpassing current state-of-the-art
benchmarks by 15% and 22%, respectively. However, bone and corpus callosum
regions showed poor classification effectiveness, with average F1 scores below
50%. Conclusion: Our study suggests that algorithms like the PAG-Classifier
serve as effective tools for non-invasive TBM analysis, particularly by
targeting the interpeduncular cistern. Findings indicate that the bone and
corpus callosum regions lack distinctive patterns for differentiation.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:28:39 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Agarwal",
"Snigdha",
""
],
[
"H",
"Ganaraja V",
""
],
[
"Sinha",
"Neelam",
""
],
[
"Indoria",
"Abhilasha",
""
],
[
"M",
"Netravathi",
""
],
[
"Saini",
"Jitender",
""
]
] | TITLE: Graph Classification and Radiomics Signature for Identification of
Tuberculous Meningitis
ABSTRACT: Introduction: Tuberculous meningitis (TBM) is a serious brain infection
caused by Mycobacterium tuberculosis, characterized by inflammation of the
meninges covering the brain and spinal cord. Diagnosis often requires invasive
lumbar puncture (LP) and cerebrospinal fluid (CSF) analysis. Objectives: This
study aims to classify TBM patients using T1-weighted (T1w) non-contrast
Magnetic Resonance Imaging (MRI) scans. We hypothesize that specific brain
regions, such as the interpeduncular cisterns, bone, and corpus callosum,
contain visual markers that can non-invasively distinguish TBM patients from
healthy controls. We propose a novel Pixel-array Graphs Classifier
(PAG-Classifier) that leverages spatial relationships between neighbouring 3D
pixels in a graph-based framework to extract significant features through eigen
decomposition. These features are then used to train machine learning
classifiers for effective patient classification. We validate our approach
using a radiomics-based methodology, classifying TBM patients based on relevant
radiomics features. Results: We utilized an internal dataset consisting of 52
scans, 32 from confirmed TBM patients based on mycobacteria detection in CSF,
and 20 from healthy individuals. We achieved a 5-fold cross-validated average
F1 score of 85.71% for cistern regions with our PAG-Classifier and 92.85% with
the radiomics features classifier, surpassing current state-of-the-art
benchmarks by 15% and 22%, respectively. However, bone and corpus callosum
regions showed poor classification effectiveness, with average F1 scores below
50%. Conclusion: Our study suggests that algorithms like the PAG-Classifier
serve as effective tools for non-invasive TBM analysis, particularly by
targeting the interpeduncular cistern. Findings indicate that the bone and
corpus callosum regions lack distinctive patterns for differentiation.
|
2504.00946 | Tianqi Ding | Tianqi Ding and Dawei Xiang and Keith E Schubert and Liang Dong | GKAN: Explainable Diagnosis of Alzheimer's Disease Using Graph Neural
Network with Kolmogorov-Arnold Networks | 12 pages, 4 figures, under review of The Southwest Data Science
Conference (SDSC 2025) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that
poses significant diagnostic challenges due to its complex etiology. Graph
Convolutional Networks (GCNs) have shown promise in modeling brain connectivity
for AD diagnosis, yet their reliance on linear transformations limits their
ability to capture intricate nonlinear patterns in neuroimaging data. To
address this, we propose GCN-KAN, a novel single-modal framework that
integrates Kolmogorov-Arnold Networks (KAN) into GCNs to enhance both
diagnostic accuracy and interpretability. Leveraging structural MRI data, our
model employs learnable spline-based transformations to better represent brain
region interactions. Evaluated on the Alzheimer's Disease Neuroimaging
Initiative (ADNI) dataset, GCN-KAN outperforms traditional GCNs by 4-8% in
classification accuracy while providing interpretable insights into key brain
regions associated with AD. This approach offers a robust and explainable tool
for early AD diagnosis.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:31:00 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ding",
"Tianqi",
""
],
[
"Xiang",
"Dawei",
""
],
[
"Schubert",
"Keith E",
""
],
[
"Dong",
"Liang",
""
]
] | TITLE: GKAN: Explainable Diagnosis of Alzheimer's Disease Using Graph Neural
Network with Kolmogorov-Arnold Networks
ABSTRACT: Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that
poses significant diagnostic challenges due to its complex etiology. Graph
Convolutional Networks (GCNs) have shown promise in modeling brain connectivity
for AD diagnosis, yet their reliance on linear transformations limits their
ability to capture intricate nonlinear patterns in neuroimaging data. To
address this, we propose GCN-KAN, a novel single-modal framework that
integrates Kolmogorov-Arnold Networks (KAN) into GCNs to enhance both
diagnostic accuracy and interpretability. Leveraging structural MRI data, our
model employs learnable spline-based transformations to better represent brain
region interactions. Evaluated on the Alzheimer's Disease Neuroimaging
Initiative (ADNI) dataset, GCN-KAN outperforms traditional GCNs by 4-8% in
classification accuracy while providing interpretable insights into key brain
regions associated with AD. This approach offers a robust and explainable tool
for early AD diagnosis.
|
2504.00948 | Rachmad Vidya Wicaksana Putra | Rachmad Vidya Wicaksana Putra, Saad Iftikhar, Muhammad Shafique | QSViT: A Methodology for Quantizing Spiking Vision Transformers | Accepted at the International Joint Conference on Neural Networks
(IJCNN) 2025 in Rome, Italy | null | null | null | cs.NE cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformer (ViT)-based models have shown state-of-the-art performance
(e.g., accuracy) in vision-based AI tasks. However, realizing their capability
in resource-constrained embedded AI systems is challenging due to their
inherent large memory footprints and complex computations, thereby incurring
high power/energy consumption. Recently, Spiking Vision Transformer
(SViT)-based models have emerged as alternate low-power ViT networks. However,
their large memory footprints still hinder their applicability for
resource-constrained embedded AI systems. Therefore, there is a need for a
methodology to compress SViT models without degrading the accuracy
significantly. To address this, we propose QSViT, a novel design methodology to
compress the SViT models through a systematic quantization strategy across
different network layers. To do this, our QSViT employs several key steps: (1)
investigating the impact of different precision levels in different network
layers, (2) identifying the appropriate base quantization settings for guiding
bit precision reduction, (3) performing a guided quantization strategy based on
the base settings to select the appropriate quantization setting, and (4)
developing an efficient quantized network based on the selected quantization
setting. The experimental results demonstrate that our QSViT methodology
achieves 22.75% memory saving and 21.33% power saving, while also maintaining
high accuracy within 2.1% of that of the original non-quantized SViT model on
the ImageNet dataset. These results highlight the potential of QSViT
methodology to pave the way toward the efficient SViT deployments on
resource-constrained embedded AI systems.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:34:46 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Putra",
"Rachmad Vidya Wicaksana",
""
],
[
"Iftikhar",
"Saad",
""
],
[
"Shafique",
"Muhammad",
""
]
] | TITLE: QSViT: A Methodology for Quantizing Spiking Vision Transformers
ABSTRACT: Vision Transformer (ViT)-based models have shown state-of-the-art performance
(e.g., accuracy) in vision-based AI tasks. However, realizing their capability
in resource-constrained embedded AI systems is challenging due to their
inherent large memory footprints and complex computations, thereby incurring
high power/energy consumption. Recently, Spiking Vision Transformer
(SViT)-based models have emerged as alternate low-power ViT networks. However,
their large memory footprints still hinder their applicability for
resource-constrained embedded AI systems. Therefore, there is a need for a
methodology to compress SViT models without degrading the accuracy
significantly. To address this, we propose QSViT, a novel design methodology to
compress the SViT models through a systematic quantization strategy across
different network layers. To do this, our QSViT employs several key steps: (1)
investigating the impact of different precision levels in different network
layers, (2) identifying the appropriate base quantization settings for guiding
bit precision reduction, (3) performing a guided quantization strategy based on
the base settings to select the appropriate quantization setting, and (4)
developing an efficient quantized network based on the selected quantization
setting. The experimental results demonstrate that our QSViT methodology
achieves 22.75% memory saving and 21.33% power saving, while also maintaining
high accuracy within 2.1% of that of the original non-quantized SViT model on
the ImageNet dataset. These results highlight the potential of QSViT
methodology to pave the way toward the efficient SViT deployments on
resource-constrained embedded AI systems.
|
2504.00952 | Lingxiao Wang | Kumar Kshitij Patel, Weitong Zhang, Lingxiao Wang | Personalized Federated Training of Diffusion Models with Privacy
Guarantees | 18 pages, 4 figures | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The scarcity of accessible, compliant, and ethically sourced data presents a
considerable challenge to the adoption of artificial intelligence (AI) in
sensitive fields like healthcare, finance, and biomedical research.
Furthermore, access to unrestricted public datasets is increasingly constrained
due to rising concerns over privacy, copyright, and competition. Synthetic data
has emerged as a promising alternative, and diffusion models -- a cutting-edge
generative AI technology -- provide an effective solution for generating
high-quality and diverse synthetic data. In this paper, we introduce a novel
federated learning framework for training diffusion models on decentralized
private datasets. Our framework leverages personalization and the inherent
noise in the forward diffusion process to produce high-quality samples while
ensuring robust differential privacy guarantees. Our experiments show that our
framework outperforms non-collaborative training methods, particularly in
settings with high data heterogeneity, and effectively reduces biases and
imbalances in synthetic data, resulting in fairer downstream models.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:45:26 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Patel",
"Kumar Kshitij",
""
],
[
"Zhang",
"Weitong",
""
],
[
"Wang",
"Lingxiao",
""
]
] | TITLE: Personalized Federated Training of Diffusion Models with Privacy
Guarantees
ABSTRACT: The scarcity of accessible, compliant, and ethically sourced data presents a
considerable challenge to the adoption of artificial intelligence (AI) in
sensitive fields like healthcare, finance, and biomedical research.
Furthermore, access to unrestricted public datasets is increasingly constrained
due to rising concerns over privacy, copyright, and competition. Synthetic data
has emerged as a promising alternative, and diffusion models -- a cutting-edge
generative AI technology -- provide an effective solution for generating
high-quality and diverse synthetic data. In this paper, we introduce a novel
federated learning framework for training diffusion models on decentralized
private datasets. Our framework leverages personalization and the inherent
noise in the forward diffusion process to produce high-quality samples while
ensuring robust differential privacy guarantees. Our experiments show that our
framework outperforms non-collaborative training methods, particularly in
settings with high data heterogeneity, and effectively reduces biases and
imbalances in synthetic data, resulting in fairer downstream models.
|
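The federated-diffusion record above leans on the noise already injected by the forward diffusion process. For readers unfamiliar with that process, here is the standard DDPM forward-noising step the abstract alludes to; the federated training and differential-privacy machinery of the paper is not reproduced, and the schedule values are generic defaults.

```python
import torch

def forward_diffuse(x0, t, alphas_cumprod):
    """Standard DDPM forward process q(x_t | x_0): progressively Gaussian-noised data.

    x0: clean samples (B, ...); t: integer timesteps (B,); alphas_cumprod: (T,) schedule.
    The paper reasons about privacy on top of this inherent noise; nothing here is
    specific to their federated framework.
    """
    a_bar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise, noise

# Toy linear beta schedule.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
xt, eps = forward_diffuse(torch.randn(8, 3, 32, 32), torch.randint(0, T, (8,)), alphas_cumprod)
print(xt.shape)  # torch.Size([8, 3, 32, 32])
```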
2504.00954 | Bangwei Liu | Bangwei Liu, Yicheng Bao, Shaohui Lin, Xuhong Wang, Xin Tan, Yingchun
Wang, Yuan Xie, Chaochao Lu | IDMR: Towards Instance-Driven Precise Visual Correspondence in
Multimodal Retrieval | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal retrieval systems are becoming increasingly vital for cutting-edge
AI technologies, such as embodied AI and AI-driven digital content industries.
However, current multimodal retrieval tasks lack sufficient complexity and
demonstrate limited practical application value. This inspires us to design
Instance-Driven Multimodal Image Retrieval (IDMR), a novel task that requires
models to retrieve images containing the same instance as a query image while
matching a text-described scenario. Unlike existing retrieval tasks focused on
global image similarity or category-level matching, IDMR demands fine-grained
instance-level consistency across diverse contexts. To benchmark this
capability, we develop IDMR-bench using real-world object tracking and
first-person video data. Addressing the scarcity of training data, we propose a
cross-domain synthesis method that creates 557K training samples by cropping
objects from standard detection datasets. Our Multimodal Large Language Model
(MLLM) based retrieval model, trained on 1.2M samples, outperforms
state-of-the-art approaches on both traditional benchmarks and our zero-shot
IDMR-bench. Experimental results demonstrate previous models' limitations in
instance-aware retrieval and highlight the potential of MLLM for advanced
retrieval applications. The full training dataset, code, and models in a wide
range of sizes are available at https://github.com/BwLiu01/IDMR.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:47:20 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Liu",
"Bangwei",
""
],
[
"Bao",
"Yicheng",
""
],
[
"Lin",
"Shaohui",
""
],
[
"Wang",
"Xuhong",
""
],
[
"Tan",
"Xin",
""
],
[
"Wang",
"Yingchun",
""
],
[
"Xie",
"Yuan",
""
],
[
"Lu",
"Chaochao",
""
]
] | TITLE: IDMR: Towards Instance-Driven Precise Visual Correspondence in
Multimodal Retrieval
ABSTRACT: Multimodal retrieval systems are becoming increasingly vital for cutting-edge
AI technologies, such as embodied AI and AI-driven digital content industries.
However, current multimodal retrieval tasks lack sufficient complexity and
demonstrate limited practical application value. This inspires us to design
Instance-Driven Multimodal Image Retrieval (IDMR), a novel task that requires
models to retrieve images containing the same instance as a query image while
matching a text-described scenario. Unlike existing retrieval tasks focused on
global image similarity or category-level matching, IDMR demands fine-grained
instance-level consistency across diverse contexts. To benchmark this
capability, we develop IDMR-bench using real-world object tracking and
first-person video data. Addressing the scarcity of training data, we propose a
cross-domain synthesis method that creates 557K training samples by cropping
objects from standard detection datasets. Our Multimodal Large Language Model
(MLLM) based retrieval model, trained on 1.2M samples, outperforms
state-of-the-art approaches on both traditional benchmarks and our zero-shot
IDMR-bench. Experimental results demonstrate previous models' limitations in
instance-aware retrieval and highlight the potential of MLLM for advanced
retrieval applications. The full training dataset, code, and models in a wide
range of sizes are available at https://github.com/BwLiu01/IDMR.
|
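The IDMR record above builds training pairs by cropping object instances out of standard detection datasets. The helper below shows what such a cropping step could look like for COCO-style annotations; the annotation format, file naming, and size filter are assumptions for illustration, not the paper's synthesis pipeline.

```python
from pathlib import Path
from PIL import Image

def crop_instances(image_path, annotations, out_dir, min_size=32):
    """Crop object instances from a detection-style image to use as instance queries.

    `annotations` is assumed to be a list of dicts with COCO-style fields
    {"bbox": [x, y, w, h], "category": str}.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    image = Image.open(image_path).convert("RGB")
    crops = []
    for i, ann in enumerate(annotations):
        x, y, w, h = ann["bbox"]
        if w < min_size or h < min_size:
            continue  # skip tiny instances that would make poor query images
        crop = image.crop((x, y, x + w, y + h))
        crop_path = out_dir / f"{Path(image_path).stem}_{i}_{ann['category']}.jpg"
        crop.save(crop_path)
        crops.append(crop_path)
    return crops
```

Each saved crop can then serve as a query image whose positive target is the original full image, which is the instance-level pairing the task requires.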
2504.00961 | David Atkinson | David Atkinson | Putting GenAI on Notice: GenAI Exceptionalism and Contract Law | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Gathering enough data to create sufficiently useful training datasets for
generative artificial intelligence requires scraping most public websites. The
scraping is conducted using pieces of code (scraping bots) that make copies of
website pages. Today, there are only a few ways for website owners to
effectively block these bots from scraping content. One method, prohibiting
scraping in the website terms of service, is loosely enforced because it is not
always clear when the terms are enforceable. This paper aims to clear up the
confusion by describing what scraping is, how entities do it, what makes
website terms of service enforceable, and what claims of damages website owners
may make as a result of being scraped. The novel argument of the paper is that
when (i) a site's terms of service or terms of use prohibit scraping or using
site content to train AI and (ii) a bot scrapes pages on the website including
those terms, the bot's deployer has actual notice of the terms and those terms
are therefore legally enforceable, meaning the site can claim a breach of
contract. This paper also details the legal and substantive arguments favoring
this position while cautioning that nonprofits with a primarily scientific
research focus should be exempt from such strict enforcement.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:58:02 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Atkinson",
"David",
""
]
] | TITLE: Putting GenAI on Notice: GenAI Exceptionalism and Contract Law
ABSTRACT: Gathering enough data to create sufficiently useful training datasets for
generative artificial intelligence requires scraping most public websites. The
scraping is conducted using pieces of code (scraping bots) that make copies of
website pages. Today, there are only a few ways for website owners to
effectively block these bots from scraping content. One method, prohibiting
scraping in the website terms of service, is loosely enforced because it is not
always clear when the terms are enforceable. This paper aims to clear up the
confusion by describing what scraping is, how entities do it, what makes
website terms of service enforceable, and what claims of damages website owners
may make as a result of being scraped. The novel argument of the paper is that
when (i) a site's terms of service or terms of use prohibit scraping or using
site content to train AI and (ii) a bot scrapes pages on the website including
those terms, the bot's deployer has actual notice of the terms and those terms
are therefore legally enforceable, meaning the site can claim a breach of
contract. This paper also details the legal and substantive arguments favoring
this position while cautioning that nonprofits with a primarily scientific
research focus should be exempt from such strict enforcement.
|
2504.00977 | Jungyeul Park | Mengyang Qiu, Qingyu Gao, Linxuan Yang, Yang Gu, Tran Minh Nguyen,
Zihao Huang, Jungyeul Park | Chinese Grammatical Error Correction: A Survey | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Chinese Grammatical Error Correction (CGEC) is a critical task in Natural
Language Processing, addressing the growing demand for automated writing
assistance in both second-language (L2) and native (L1) Chinese writing. While
L2 learners struggle with mastering complex grammatical structures, L1 users
also benefit from CGEC in academic, professional, and formal contexts where
writing precision is essential. This survey provides a comprehensive review of
CGEC research, covering datasets, annotation schemes, evaluation methodologies,
and system advancements. We examine widely used CGEC datasets, highlighting
their characteristics, limitations, and the need for improved standardization.
We also analyze error annotation frameworks, discussing challenges such as word
segmentation ambiguity and the classification of Chinese-specific error types.
Furthermore, we review evaluation metrics, focusing on their adaptation from
English GEC to Chinese, including character-level scoring and the use of
multiple references. In terms of system development, we trace the evolution
from rule-based and statistical approaches to neural architectures, including
Transformer-based models and the integration of large pre-trained language
models. By consolidating existing research and identifying key challenges, this
survey provides insights into the current state of CGEC and outlines future
directions, including refining annotation standards to address segmentation
challenges, and leveraging multilingual approaches to enhance CGEC.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:14:50 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Qiu",
"Mengyang",
""
],
[
"Gao",
"Qingyu",
""
],
[
"Yang",
"Linxuan",
""
],
[
"Gu",
"Yang",
""
],
[
"Nguyen",
"Tran Minh",
""
],
[
"Huang",
"Zihao",
""
],
[
"Park",
"Jungyeul",
""
]
] | TITLE: Chinese Grammatical Error Correction: A Survey
ABSTRACT: Chinese Grammatical Error Correction (CGEC) is a critical task in Natural
Language Processing, addressing the growing demand for automated writing
assistance in both second-language (L2) and native (L1) Chinese writing. While
L2 learners struggle with mastering complex grammatical structures, L1 users
also benefit from CGEC in academic, professional, and formal contexts where
writing precision is essential. This survey provides a comprehensive review of
CGEC research, covering datasets, annotation schemes, evaluation methodologies,
and system advancements. We examine widely used CGEC datasets, highlighting
their characteristics, limitations, and the need for improved standardization.
We also analyze error annotation frameworks, discussing challenges such as word
segmentation ambiguity and the classification of Chinese-specific error types.
Furthermore, we review evaluation metrics, focusing on their adaptation from
English GEC to Chinese, including character-level scoring and the use of
multiple references. In terms of system development, we trace the evolution
from rule-based and statistical approaches to neural architectures, including
Transformer-based models and the integration of large pre-trained language
models. By consolidating existing research and identifying key challenges, this
survey provides insights into the current state of CGEC and outlines future
directions, including refining annotation standards to address segmentation
challenges, and leveraging multilingual approaches to enhance CGEC.
|
2504.00983 | Hong-Xing Yu | Haoyi Duan, Hong-Xing Yu, Sirui Chen, Li Fei-Fei, Jiajun Wu | WorldScore: A Unified Evaluation Benchmark for World Generation | Project website: https://haoyi-duan.github.io/WorldScore/ The first
two authors contributed equally | null | null | null | cs.GR cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the WorldScore benchmark, the first unified benchmark for world
generation. We decompose world generation into a sequence of next-scene
generation tasks with explicit camera trajectory-based layout specifications,
enabling unified evaluation of diverse approaches from 3D and 4D scene
generation to video generation models. The WorldScore benchmark encompasses a
curated dataset of 3,000 test examples that span diverse worlds: static and
dynamic, indoor and outdoor, photorealistic and stylized. The WorldScore
metrics evaluate generated worlds through three key aspects: controllability,
quality, and dynamics. Through extensive evaluation of 19 representative
models, including both open-source and closed-source ones, we reveal key
insights and challenges for each category of models. Our dataset, evaluation
code, and leaderboard can be found at https://haoyi-duan.github.io/WorldScore/
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:20:23 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Duan",
"Haoyi",
""
],
[
"Yu",
"Hong-Xing",
""
],
[
"Chen",
"Sirui",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Wu",
"Jiajun",
""
]
] | TITLE: WorldScore: A Unified Evaluation Benchmark for World Generation
ABSTRACT: We introduce the WorldScore benchmark, the first unified benchmark for world
generation. We decompose world generation into a sequence of next-scene
generation tasks with explicit camera trajectory-based layout specifications,
enabling unified evaluation of diverse approaches from 3D and 4D scene
generation to video generation models. The WorldScore benchmark encompasses a
curated dataset of 3,000 test examples that span diverse worlds: static and
dynamic, indoor and outdoor, photorealistic and stylized. The WorldScore
metrics evaluate generated worlds through three key aspects: controllability,
quality, and dynamics. Through extensive evaluation of 19 representative
models, including both open-source and closed-source ones, we reveal key
insights and challenges for each category of models. Our dataset, evaluation
code, and leaderboard can be found at https://haoyi-duan.github.io/WorldScore/
|
2504.00992 | Elisabetta Fedele | Elisabetta Fedele, Boyang Sun, Leonidas Guibas, Marc Pollefeys,
Francis Engelmann | SuperDec: 3D Scene Decomposition with Superquadric Primitives | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present SuperDec, an approach for creating compact 3D scene
representations via decomposition into superquadric primitives. While most
recent works leverage geometric primitives to obtain photorealistic 3D scene
representations, we propose to leverage them to obtain a compact yet expressive
representation. We propose to solve the problem locally on individual objects
and leverage the capabilities of instance segmentation methods to scale our
solution to full 3D scenes. In doing so, we design a new architecture that
efficiently decomposes point clouds of arbitrary objects into a compact set of
superquadrics. We train our architecture on ShapeNet and demonstrate its
generalization capabilities on object instances extracted from the ScanNet++
dataset as well as on full Replica scenes. Finally, we show how a compact
representation based on superquadrics can be useful for a diverse range of
downstream applications, including robotic tasks and controllable visual
content generation and editing.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:29:35 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Fedele",
"Elisabetta",
""
],
[
"Sun",
"Boyang",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Engelmann",
"Francis",
""
]
] | TITLE: SuperDec: 3D Scene Decomposition with Superquadric Primitives
ABSTRACT: We present SuperDec, an approach for creating compact 3D scene
representations via decomposition into superquadric primitives. While most
recent works leverage geometric primitives to obtain photorealistic 3D scene
representations, we propose to leverage them to obtain a compact yet expressive
representation. We propose to solve the problem locally on individual objects
and leverage the capabilities of instance segmentation methods to scale our
solution to full 3D scenes. In doing so, we design a new architecture that
efficiently decomposes point clouds of arbitrary objects into a compact set of
superquadrics. We train our architecture on ShapeNet and demonstrate its
generalization capabilities on object instances extracted from the ScanNet++
dataset as well as on full Replica scenes. Finally, we show how a compact
representation based on superquadrics can be useful for a diverse range of
downstream applications, including robotic tasks and controllable visual
content generation and editing.
|
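The SuperDec record above fits superquadric primitives to point clouds. As background, the standard superquadric inside-outside function is shown below, together with a simple fit-error proxy; pose handling and the paper's learned decomposition network are deliberately omitted.

```python
import numpy as np

def superquadric_F(points, scale, eps):
    """Inside-outside function of an axis-aligned, origin-centred superquadric.

    F < 1 inside, F = 1 on the surface, F > 1 outside. `scale` = (a1, a2, a3),
    `eps` = (eps1, eps2). Rotation and translation are omitted for brevity.
    """
    a1, a2, a3 = scale
    e1, e2 = eps
    x, y, z = np.abs(points).T  # absolute values keep fractional powers real
    xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

def fit_error(points, scale, eps):
    """A simple goodness-of-fit proxy: mean squared deviation of F from 1."""
    return float(np.mean((superquadric_F(points, scale, eps) - 1.0) ** 2))

# Toy check: points on a unit sphere (eps1 = eps2 = 1 recovers an ellipsoid/sphere).
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(fit_error(pts, scale=(1, 1, 1), eps=(1, 1)))  # ~0 for a perfect fit
```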
2504.01001 | Jos\'e Pombal | Jos\'e Pombal, Nuno M. Guerreiro, Ricardo Rei, Andr\'e F. T. Martins | Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic
Evaluation of Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | As language models improve and become capable of performing more complex
tasks across modalities, evaluating them automatically becomes increasingly
challenging. Developing strong and robust task-specific automatic metrics gets
harder, and human-annotated test sets -- which are expensive to create --
saturate more quickly. A compelling alternative is to design reliable
strategies to automate the creation of test data and evaluation, but previous
attempts either rely on pre-existing data, or focus solely on individual tasks.
We present Zero-shot Benchmarking (ZSB), a framework for creating high-quality
benchmarks for any task by leveraging language models for both synthetic test
data creation and evaluation. ZSB is simple and flexible: it requires only the
creation of a prompt for data generation and one for evaluation; it is scalable
to tasks and languages where collecting real-world data is costly or
impractical; it is model-agnostic, allowing the creation of increasingly
challenging benchmarks as models improve. To assess the effectiveness of our
framework, we create benchmarks for five text-only tasks and a multi-modal one:
general capabilities in four languages (English, Chinese, French, and Korean),
translation, and general vision-language capabilities in English. We then rank
a broad range of open and closed systems on our benchmarks. ZSB rankings
consistently correlate strongly with human rankings, outperforming
widely-adopted standard benchmarks. Through ablations, we find that strong
benchmarks can be created with open models, and that judge model size and
dataset variety are crucial drivers of performance. We release all our
benchmarks, and code to reproduce our experiments and to produce new
benchmarks.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:40:08 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Pombal",
"José",
""
],
[
"Guerreiro",
"Nuno M.",
""
],
[
"Rei",
"Ricardo",
""
],
[
"Martins",
"André F. T.",
""
]
] | TITLE: Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic
Evaluation of Language Models
ABSTRACT: As language models improve and become capable of performing more complex
tasks across modalities, evaluating them automatically becomes increasingly
challenging. Developing strong and robust task-specific automatic metrics gets
harder, and human-annotated test sets -- which are expensive to create --
saturate more quickly. A compelling alternative is to design reliable
strategies to automate the creation of test data and evaluation, but previous
attempts either rely on pre-existing data, or focus solely on individual tasks.
We present Zero-shot Benchmarking (ZSB), a framework for creating high-quality
benchmarks for any task by leveraging language models for both synthetic test
data creation and evaluation. ZSB is simple and flexible: it requires only the
creation of a prompt for data generation and one for evaluation; it is scalable
to tasks and languages where collecting real-world data is costly or
impractical; it is model-agnostic, allowing the creation of increasingly
challenging benchmarks as models improve. To assess the effectiveness of our
framework, we create benchmarks for five text-only tasks and a multi-modal one:
general capabilities in four languages (English, Chinese, French, and Korean),
translation, and general vision-language capabilities in English. We then rank
a broad range of open and closed systems on our benchmarks. ZSB rankings
consistently correlate strongly with human rankings, outperforming
widely-adopted standard benchmarks. Through ablations, we find that strong
benchmarks can be created with open models, and that judge model size and
dataset variety are crucial drivers of performance. We release all our
benchmarks, and code to reproduce our experiments and to produce new
benchmarks.
|
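The ZSB record above needs only two prompts: one to synthesize test items and one to judge answers. The sketch below is a minimal version of that loop around a hypothetical `call_model` helper; the prompts, rating scale, and score parsing are all assumptions, and no real LLM client is implied.

```python
import random
import statistics

def call_model(model_name, prompt):
    """Placeholder for an LLM call; swap in a real client. Returns generated text."""
    raise NotImplementedError

GEN_PROMPT = "Write one challenging user request about {topic} in {language}."
JUDGE_PROMPT = ("You are a strict judge. Rate the following answer to the request "
                "on a 1-6 scale. Request:\n{request}\n\nAnswer:\n{answer}\n\nScore:")

def zsb_benchmark(systems, judge, topics, language="English", n_items=50):
    """Minimal sketch of the ZSB loop: synthesize test items, answer, judge, rank."""
    items = [call_model(judge, GEN_PROMPT.format(topic=random.choice(topics),
                                                 language=language))
             for _ in range(n_items)]
    scores = {}
    for system in systems:
        ratings = []
        for request in items:
            answer = call_model(system, request)
            verdict = call_model(judge, JUDGE_PROMPT.format(request=request, answer=answer))
            ratings.append(float(verdict.strip().split()[0]))  # assumes a leading number
        scores[system] = statistics.mean(ratings)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```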
2504.01004 | Yujian Xiong | Yujian Xiong and Xuanzhao Dong and Sebastian Waz and Wenhui Zhu and
Negar Mallak and Zhong-lin Lu and Yalin Wang | Enhancing 3T BOLD fMRI SNR using Unpaired 7T Data with Schr\"odinger
Bridge Diffusion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | High spatial and temporal resolution, coupled with a strong signal-to-noise
ratio (SNR), has made BOLD 7 Tesla fMRI an invaluable tool for understanding
how the brain processes visual stimuli. However, the limited availability of 7T
MRI systems means that most research relies on 3T MRI systems, which offer
lower spatial and temporal resolution and SNR. This naturally raises the
question: Can we enhance the spatiotemporal resolution and SNR of 3T BOLD fMRI
data to approximate 7T quality? In this study, we propose a novel framework
that aligns 7T and 3T fMRI data from different subjects and datasets in a
shared parametric domain. We then apply an unpaired Brain Disk Schr\"odinger
Bridge diffusion model to enhance the spatiotemporal resolution and SNR of the
3T data. Our approach addresses the challenge of limited 7T data by improving
the 3T scan quality. We demonstrate its effectiveness by testing it on two
distinct fMRI retinotopy datasets (one 7T and one 3T), as well as synthetic
data. The results show that our method significantly improves the SNR and
goodness-of-fit of the population receptive field (pRF) model in the enhanced
3T data, making it comparable to 7T quality. The code will be made available on
GitHub.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:41:24 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xiong",
"Yujian",
""
],
[
"Dong",
"Xuanzhao",
""
],
[
"Waz",
"Sebastian",
""
],
[
"Zhu",
"Wenhui",
""
],
[
"Mallak",
"Negar",
""
],
[
"Lu",
"Zhong-lin",
""
],
[
"Wang",
"Yalin",
""
]
] | TITLE: Enhancing 3T BOLD fMRI SNR using Unpaired 7T Data with Schr\"odinger
Bridge Diffusion
ABSTRACT: High spatial and temporal resolution, coupled with a strong signal-to-noise
ratio (SNR), has made BOLD 7 Tesla fMRI an invaluable tool for understanding
how the brain processes visual stimuli. However, the limited availability of 7T
MRI systems means that most research relies on 3T MRI systems, which offer
lower spatial and temporal resolution and SNR. This naturally raises the
question: Can we enhance the spatiotemporal resolution and SNR of 3T BOLD fMRI
data to approximate 7T quality? In this study, we propose a novel framework
that aligns 7T and 3T fMRI data from different subjects and datasets in a
shared parametric domain. We then apply an unpaired Brain Disk Schr\"odinger
Bridge diffusion model to enhance the spatiotemporal resolution and SNR of the
3T data. Our approach addresses the challenge of limited 7T data by improving
the 3T scan quality. We demonstrate its effectiveness by testing it on two
distinct fMRI retinotopy datasets (one 7T and one 3T), as well as synthetic
data. The results show that our method significantly improves the SNR and
goodness-of-fit of the population receptive field (pRF) model in the enhanced
3T data, making it comparable to 7T quality. The code will be made available on
GitHub.
|
2504.01005 | Hritik Bansal | Nishad Singhi, Hritik Bansal, Arian Hosseini, Aditya Grover, Kai-Wei
Chang, Marcus Rohrbach, Anna Rohrbach | When To Solve, When To Verify: Compute-Optimal Problem Solving and
Generative Verification for LLM Reasoning | 29 pages | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scaling test-time compute has emerged as a key strategy for enhancing the
reasoning capabilities of large language models (LLMs), particularly in tasks
like mathematical problem-solving. A traditional approach, Self-Consistency
(SC), generates multiple solutions to a problem and selects the most common
answer via majority voting. Another common method involves scoring each
solution with a reward model (verifier) and choosing the best one. Recent
advancements in Generative Reward Models (GenRM) reframe verification as a
next-token prediction task, enabling inference-time scaling along a new axis.
Specifically, GenRM generates multiple verification chains-of-thought to score
each solution. Under a limited inference budget, this introduces a fundamental
trade-off: should you spend the budget on scaling solutions via SC or generate
fewer solutions and allocate compute to verification via GenRM? To address
this, we evaluate GenRM against SC under a fixed inference budget.
Interestingly, we find that SC is more compute-efficient than GenRM for most
practical inference budgets across diverse models and datasets. For instance,
GenRM first matches SC after consuming up to 8x the inference compute and
requires significantly more compute to outperform it. Furthermore, we derive
inference scaling laws for the GenRM paradigm, revealing that compute-optimal
inference favors scaling solution generation more aggressively than scaling the
number of verifications. Our work provides practical guidance on optimizing
test-time scaling by balancing solution generation and verification. The code
is available at https://github.com/nishadsinghi/sc-genrm-scaling.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:41:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Singhi",
"Nishad",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Hosseini",
"Arian",
""
],
[
"Grover",
"Aditya",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Rohrbach",
"Anna",
""
]
] | TITLE: When To Solve, When To Verify: Compute-Optimal Problem Solving and
Generative Verification for LLM Reasoning
ABSTRACT: Scaling test-time compute has emerged as a key strategy for enhancing the
reasoning capabilities of large language models (LLMs), particularly in tasks
like mathematical problem-solving. A traditional approach, Self-Consistency
(SC), generates multiple solutions to a problem and selects the most common
answer via majority voting. Another common method involves scoring each
solution with a reward model (verifier) and choosing the best one. Recent
advancements in Generative Reward Models (GenRM) reframe verification as a
next-token prediction task, enabling inference-time scaling along a new axis.
Specifically, GenRM generates multiple verification chains-of-thought to score
each solution. Under a limited inference budget, this introduces a fundamental
trade-off: should you spend the budget on scaling solutions via SC or generate
fewer solutions and allocate compute to verification via GenRM? To address
this, we evaluate GenRM against SC under a fixed inference budget.
Interestingly, we find that SC is more compute-efficient than GenRM for most
practical inference budgets across diverse models and datasets. For instance,
GenRM first matches SC after consuming up to 8x the inference compute and
requires significantly more compute to outperform it. Furthermore, we derive
inference scaling laws for the GenRM paradigm, revealing that compute-optimal
inference favors scaling solution generation more aggressively than scaling the
number of verifications. Our work provides practical guidance on optimizing
test-time scaling by balancing solution generation and verification. The code
is available at https://github.com/nishadsinghi/sc-genrm-scaling.
|
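The record above compares two ways of spending a fixed inference budget: Self-Consistency (many solutions, majority vote) versus fewer solutions scored by a generative verifier. The sketch below makes that trade-off concrete with random stubs standing in for the solver and verifier; the budget accounting is simplified to "one model call per solution or verification chain".

```python
from collections import Counter
import random

def sample_solution(problem):
    """Placeholder: one LLM solution attempt, returning a final answer string."""
    return random.choice(["42", "42", "7"])  # stand-in answer distribution

def verify(problem, answer):
    """Placeholder: one generative-verifier chain-of-thought, returning a score in [0, 1]."""
    return random.random()

def self_consistency(problem, budget):
    """Spend the whole budget on solutions and majority-vote the answers."""
    answers = [sample_solution(problem) for _ in range(budget)]
    return Counter(answers).most_common(1)[0][0]

def best_of_n_with_genrm(problem, budget, verifications_per_solution=4):
    """Split the budget between fewer solutions and several verification chains each."""
    n_solutions = max(1, budget // (1 + verifications_per_solution))
    answers = [sample_solution(problem) for _ in range(n_solutions)]
    scored = [(sum(verify(problem, a) for _ in range(verifications_per_solution)), a)
              for a in answers]
    return max(scored)[1]

# Both strategies consume roughly the same number of model calls (the "budget"),
# which is exactly the trade-off the paper analyses.
print(self_consistency("toy problem", budget=20))
print(best_of_n_with_genrm("toy problem", budget=20))
```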
2504.01009 | Saarthak Kapse | Saarthak Kapse, Pushpak Pati, Srikar Yellapragada, Srijan Das, Rajarsi
R. Gupta, Joel Saltz, Dimitris Samaras, Prateek Prasanna | GECKO: Gigapixel Vision-Concept Contrastive Pretraining in
Histopathology | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pretraining a Multiple Instance Learning (MIL) aggregator enables the
derivation of Whole Slide Image (WSI)-level embeddings from patch-level
representations without supervision. While recent multimodal MIL pretraining
approaches leveraging auxiliary modalities have demonstrated performance gains
over unimodal WSI pretraining, the acquisition of these additional modalities
necessitates extensive clinical profiling. This requirement increases costs and
limits scalability in existing WSI datasets lacking such paired modalities. To
address this, we propose Gigapixel Vision-Concept Knowledge Contrastive
pretraining (GECKO), which aligns WSIs with a Concept Prior derived from the
available WSIs. First, we derive an inherently interpretable concept prior by
computing the similarity between each WSI patch and textual descriptions of
predefined pathology concepts. GECKO then employs a dual-branch MIL network:
one branch aggregates patch embeddings into a WSI-level deep embedding, while
the other aggregates the concept prior into a corresponding WSI-level concept
embedding. Both aggregated embeddings are aligned using a contrastive
objective, thereby pretraining the entire dual-branch MIL model. Moreover, when
auxiliary modalities such as transcriptomics data are available, GECKO
seamlessly integrates them. Across five diverse tasks, GECKO consistently
outperforms prior unimodal and multimodal pretraining approaches while also
delivering clinically meaningful interpretability that bridges the gap between
computational models and pathology expertise. Code is made available at
https://github.com/bmi-imaginelab/GECKO
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:49:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kapse",
"Saarthak",
""
],
[
"Pati",
"Pushpak",
""
],
[
"Yellapragada",
"Srikar",
""
],
[
"Das",
"Srijan",
""
],
[
"Gupta",
"Rajarsi R.",
""
],
[
"Saltz",
"Joel",
""
],
[
"Samaras",
"Dimitris",
""
],
[
"Prasanna",
"Prateek",
""
]
] | TITLE: GECKO: Gigapixel Vision-Concept Contrastive Pretraining in
Histopathology
ABSTRACT: Pretraining a Multiple Instance Learning (MIL) aggregator enables the
derivation of Whole Slide Image (WSI)-level embeddings from patch-level
representations without supervision. While recent multimodal MIL pretraining
approaches leveraging auxiliary modalities have demonstrated performance gains
over unimodal WSI pretraining, the acquisition of these additional modalities
necessitates extensive clinical profiling. This requirement increases costs and
limits scalability in existing WSI datasets lacking such paired modalities. To
address this, we propose Gigapixel Vision-Concept Knowledge Contrastive
pretraining (GECKO), which aligns WSIs with a Concept Prior derived from the
available WSIs. First, we derive an inherently interpretable concept prior by
computing the similarity between each WSI patch and textual descriptions of
predefined pathology concepts. GECKO then employs a dual-branch MIL network:
one branch aggregates patch embeddings into a WSI-level deep embedding, while
the other aggregates the concept prior into a corresponding WSI-level concept
embedding. Both aggregated embeddings are aligned using a contrastive
objective, thereby pretraining the entire dual-branch MIL model. Moreover, when
auxiliary modalities such as transcriptomics data are available, GECKO
seamlessly integrates them. Across five diverse tasks, GECKO consistently
outperforms prior unimodal and multimodal pretraining approaches while also
delivering clinically meaningful interpretability that bridges the gap between
computational models and pathology expertise. Code is made available at
https://github.com/bmi-imaginelab/GECKO
|
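The GECKO record above derives a concept prior by comparing each WSI patch to textual descriptions of pathology concepts. A minimal sketch of that similarity step and a pooled WSI-level concept embedding is shown below, assuming patch and concept-text embeddings come from a shared vision-language encoder; mean pooling stands in for the paper's learned MIL aggregator and the contrastive alignment is not reproduced.

```python
import numpy as np

def concept_prior(patch_embeddings, concept_text_embeddings):
    """Cosine similarity of each patch to each pathology-concept description.

    Shapes: patches (N, D), concepts (C, D). Returns an (N, C) concept prior.
    """
    p = patch_embeddings / np.linalg.norm(patch_embeddings, axis=1, keepdims=True)
    c = concept_text_embeddings / np.linalg.norm(concept_text_embeddings, axis=1, keepdims=True)
    return p @ c.T

def wsi_level_concept_embedding(prior, weights=None):
    """Aggregate the patch-level prior into a WSI-level concept embedding.

    Mean pooling is a stand-in for the learned aggregation branch.
    """
    if weights is None:
        weights = np.full(prior.shape[0], 1.0 / prior.shape[0])
    return weights @ prior  # (C,)

rng = np.random.default_rng(0)
prior = concept_prior(rng.normal(size=(500, 512)), rng.normal(size=(32, 512)))
print(wsi_level_concept_embedding(prior).shape)  # (32,)
```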
2504.01010 | Pingping Zhu | Dylan Lester, James Gao, Samuel Sutphin, Pingping Zhu, Husnu Narman,
Ammar Alzarrad | A YOLO-Based Semi-Automated Labeling Approach to Improve Fault Detection
Efficiency in Railroad Videos | Published on American Society of Engineering Education (ASEE) North
Central Section Conference, 2025 | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Manual labeling for large-scale image and video datasets is often
time-intensive, error-prone, and costly, posing a significant barrier to
efficient machine learning workflows in fault detection from railroad videos.
This study introduces a semi-automated labeling method that utilizes a
pre-trained You Only Look Once (YOLO) model to streamline the labeling process
and enhance fault detection accuracy in railroad videos. By initiating the
process with a small set of manually labeled data, our approach iteratively
trains the YOLO model, using each cycle's output to improve model accuracy and
progressively reduce the need for human intervention.
To facilitate easy correction of model predictions, we developed a system to
export YOLO's detection data as an editable text file, enabling rapid
adjustments when detections require refinement. This approach decreases
labeling time from an average of 2 to 4 minutes per image to 30 seconds to 2
minutes, effectively minimizing labor costs and labeling errors. Unlike costly
AI-based labeling solutions on paid platforms, our method provides a
cost-effective alternative for researchers and practitioners handling large
datasets in fault detection and other detection-based machine learning
applications.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:50:30 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lester",
"Dylan",
""
],
[
"Gao",
"James",
""
],
[
"Sutphin",
"Samuel",
""
],
[
"Zhu",
"Pingping",
""
],
[
"Narman",
"Husnu",
""
],
[
"Alzarrad",
"Ammar",
""
]
] | TITLE: A YOLO-Based Semi-Automated Labeling Approach to Improve Fault Detection
Efficiency in Railroad Videos
ABSTRACT: Manual labeling for large-scale image and video datasets is often
time-intensive, error-prone, and costly, posing a significant barrier to
efficient machine learning workflows in fault detection from railroad videos.
This study introduces a semi-automated labeling method that utilizes a
pre-trained You Only Look Once (YOLO) model to streamline the labeling process
and enhance fault detection accuracy in railroad videos. By initiating the
process with a small set of manually labeled data, our approach iteratively
trains the YOLO model, using each cycle's output to improve model accuracy and
progressively reduce the need for human intervention.
To facilitate easy correction of model predictions, we developed a system to
export YOLO's detection data as an editable text file, enabling rapid
adjustments when detections require refinement. This approach decreases
labeling time from an average of 2 to 4 minutes per image to 30 seconds to 2
minutes, effectively minimizing labor costs and labeling errors. Unlike costly
AI-based labeling solutions on paid platforms, our method provides a
cost-effective alternative for researchers and practitioners handling large
datasets in fault detection and other detection-based machine learning
applications.
|
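The semi-automated labeling record above iterates between training a detector, pre-labeling the remaining images, and letting a human correct the exported text files. The skeleton below captures that loop with placeholder training and prediction functions; the label-file format mirrors the common YOLO convention but is an assumption, not the paper's exact tool output.

```python
def train_detector(labeled_images):
    """Placeholder: fine-tune a YOLO-style detector on the currently labeled set."""
    raise NotImplementedError

def predict_boxes(model, image_path):
    """Placeholder: run the detector, returning [(class_id, x, y, w, h, conf), ...]."""
    raise NotImplementedError

def export_editable_labels(image_path, detections, out_path, conf_threshold=0.5):
    """Write predictions as a plain-text label file a human can quickly correct."""
    with open(out_path, "w") as f:
        for class_id, x, y, w, h, conf in detections:
            if conf >= conf_threshold:
                f.write(f"{class_id} {x:.6f} {y:.6f} {w:.6f} {h:.6f}\n")

def semi_automated_labeling(seed_labeled, unlabeled_images, rounds=3):
    """Iterate: train on what is labeled, pre-label the rest, let a human correct."""
    labeled = list(seed_labeled)
    for _ in range(rounds):
        model = train_detector(labeled)
        for image_path in unlabeled_images:
            detections = predict_boxes(model, image_path)
            export_editable_labels(image_path, detections, image_path + ".txt")
        # A human now edits the .txt files; the corrected pairs join the training set.
        labeled += [(p, p + ".txt") for p in unlabeled_images]
    return train_detector(labeled)
```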
2504.01016 | Wenbo Hu | Tian-Xing Xu, Xiangjun Gao, Wenbo Hu, Xiaoyu Li, Song-Hai Zhang, Ying
Shan | GeometryCrafter: Consistent Geometry Estimation for Open-world Videos
with Diffusion Priors | Project webpage: https://geometrycrafter.github.io/ | null | null | null | cs.GR cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Despite remarkable advancements in video depth estimation, existing methods
exhibit inherent limitations in achieving geometric fidelity through the
affine-invariant predictions, limiting their applicability in reconstruction
and other metrically grounded downstream tasks. We propose GeometryCrafter, a
novel framework that recovers high-fidelity point map sequences with temporal
coherence from open-world videos, enabling accurate 3D/4D reconstruction,
camera parameter estimation, and other depth-based applications. At the core of
our approach lies a point map Variational Autoencoder (VAE) that learns a
latent space agnostic to video latent distributions for effective point map
encoding and decoding. Leveraging the VAE, we train a video diffusion model to
model the distribution of point map sequences conditioned on the input videos.
Extensive evaluations on diverse datasets demonstrate that GeometryCrafter
achieves state-of-the-art 3D accuracy, temporal consistency, and generalization
capability.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:58:03 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xu",
"Tian-Xing",
""
],
[
"Gao",
"Xiangjun",
""
],
[
"Hu",
"Wenbo",
""
],
[
"Li",
"Xiaoyu",
""
],
[
"Zhang",
"Song-Hai",
""
],
[
"Shan",
"Ying",
""
]
] | TITLE: GeometryCrafter: Consistent Geometry Estimation for Open-world Videos
with Diffusion Priors
ABSTRACT: Despite remarkable advancements in video depth estimation, existing methods
exhibit inherent limitations in achieving geometric fidelity through the
affine-invariant predictions, limiting their applicability in reconstruction
and other metrically grounded downstream tasks. We propose GeometryCrafter, a
novel framework that recovers high-fidelity point map sequences with temporal
coherence from open-world videos, enabling accurate 3D/4D reconstruction,
camera parameter estimation, and other depth-based applications. At the core of
our approach lies a point map Variational Autoencoder (VAE) that learns a
latent space agnostic to video latent distributions for effective point map
encoding and decoding. Leveraging the VAE, we train a video diffusion model to
model the distribution of point map sequences conditioned on the input videos.
Extensive evaluations on diverse datasets demonstrate that GeometryCrafter
achieves state-of-the-art 3D accuracy, temporal consistency, and generalization
capability.
|
2504.01019 | Pablo Ruiz-Ponce | Pablo Ruiz-Ponce, German Barquero, Cristina Palmero, Sergio Escalera,
Jos\'e Garc\'ia-Rodr\'iguez | MixerMDM: Learnable Composition of Human Motion Diffusion Models | CVPR 2025 Accepted - Project Page:
https://pabloruizponce.com/papers/MixerMDM | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Generating human motion guided by conditions such as textual descriptions is
challenging due to the need for datasets with pairs of high-quality motion and
their corresponding conditions. The difficulty increases when aiming for finer
control in the generation. To that end, prior works have proposed to combine
several motion diffusion models pre-trained on datasets with different types of
conditions, thus allowing control with multiple conditions. However, the
proposed merging strategies overlook that the optimal way to combine the
generation processes might depend on the particularities of each pre-trained
generative model and also the specific textual descriptions. In this context,
we introduce MixerMDM, the first learnable model composition technique for
combining pre-trained text-conditioned human motion diffusion models. Unlike
previous approaches, MixerMDM provides a dynamic mixing strategy that is
trained in an adversarial fashion to learn to combine the denoising process of
each model depending on the set of conditions driving the generation. By using
MixerMDM to combine single- and multi-person motion diffusion models, we
achieve fine-grained control on the dynamics of every person individually, and
also on the overall interaction. Furthermore, we propose a new evaluation
technique that, for the first time in this task, measures the interaction and
individual quality by computing the alignment between the mixed generated
motions and their conditions as well as the capabilities of MixerMDM to adapt
the mixing throughout the denoising process depending on the motions to mix.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:59:44 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ruiz-Ponce",
"Pablo",
""
],
[
"Barquero",
"German",
""
],
[
"Palmero",
"Cristina",
""
],
[
"Escalera",
"Sergio",
""
],
[
"García-Rodríguez",
"José",
""
]
] | TITLE: MixerMDM: Learnable Composition of Human Motion Diffusion Models
ABSTRACT: Generating human motion guided by conditions such as textual descriptions is
challenging due to the need for datasets with pairs of high-quality motion and
their corresponding conditions. The difficulty increases when aiming for finer
control in the generation. To that end, prior works have proposed to combine
several motion diffusion models pre-trained on datasets with different types of
conditions, thus allowing control with multiple conditions. However, the
proposed merging strategies overlook that the optimal way to combine the
generation processes might depend on the particularities of each pre-trained
generative model and also the specific textual descriptions. In this context,
we introduce MixerMDM, the first learnable model composition technique for
combining pre-trained text-conditioned human motion diffusion models. Unlike
previous approaches, MixerMDM provides a dynamic mixing strategy that is
trained in an adversarial fashion to learn to combine the denoising process of
each model depending on the set of conditions driving the generation. By using
MixerMDM to combine single- and multi-person motion diffusion models, we
achieve fine-grained control on the dynamics of every person individually, and
also on the overall interaction. Furthermore, we propose a new evaluation
technique that, for the first time in this task, measures the interaction and
individual quality by computing the alignment between the mixed generated
motions and their conditions as well as the capabilities of MixerMDM to adapt
the mixing throughout the denoising process depending on the motions to mix.
|
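The MixerMDM record above learns how to blend the denoising outputs of two pre-trained motion diffusion models. The toy module below only illustrates the blending idea with a single learned weight per sample; the actual MixerMDM mixing is far finer-grained and adversarially trained, and every shape and layer size here is an assumption.

```python
import torch
import torch.nn as nn

class Mixer(nn.Module):
    """Tiny stand-in for a learned mixing network: maps (timestep, condition) to a
    weight in [0, 1] used to blend two denoisers' predictions."""

    def __init__(self, cond_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, t, cond):
        return self.net(torch.cat([t.unsqueeze(-1), cond], dim=-1))  # (B, 1)

def mixed_denoise(eps_single, eps_multi, t, cond, mixer):
    """Blend the noise predictions of a single-person and a multi-person model."""
    w = mixer(t, cond)                      # (B, 1), broadcast over motion dims
    return w * eps_single + (1.0 - w) * eps_multi

# Shapes are illustrative: batch of 4, 128-dim flattened motion representation.
mixer = Mixer()
eps_a, eps_b = torch.randn(4, 128), torch.randn(4, 128)
out = mixed_denoise(eps_a, eps_b, t=torch.rand(4), cond=torch.randn(4, 16), mixer=mixer)
print(out.shape)  # torch.Size([4, 128])
```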
2012.07139 | Niclas V\"odisch | Niclas V\"odisch, David Dodel, Michael Sch\"otz | FSOCO: The Formula Student Objects in Context Dataset | null | SAE International Journal of Connected and Automated Vehicles
5.12-05-01-0003 (2022) | 10.4271/12-05-01-0003 | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the FSOCO dataset, a collaborative dataset for
vision-based cone detection systems in Formula Student Driverless competitions.
It contains human annotated ground truth labels for both bounding boxes and
instance-wise segmentation masks. The data buy-in philosophy of FSOCO asks
student teams to contribute to the database first before being granted access,
ensuring continuous growth. By providing clear labeling guidelines and tools
for a sophisticated raw image selection, new annotations are guaranteed to meet
the desired quality. The effectiveness of the approach is shown by comparing
prediction results of a network trained on FSOCO and its unregulated
predecessor. The FSOCO dataset can be found at
https://fsoco.github.io/fsoco-dataset/.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2020 20:24:48 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Mar 2021 09:19:44 GMT"
},
{
"version": "v3",
"created": "Tue, 25 May 2021 16:34:19 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Jan 2022 11:22:59 GMT"
},
{
"version": "v5",
"created": "Mon, 31 Mar 2025 12:32:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Vödisch",
"Niclas",
""
],
[
"Dodel",
"David",
""
],
[
"Schötz",
"Michael",
""
]
] | TITLE: FSOCO: The Formula Student Objects in Context Dataset
ABSTRACT: This paper presents the FSOCO dataset, a collaborative dataset for
vision-based cone detection systems in Formula Student Driverless competitions.
It contains human annotated ground truth labels for both bounding boxes and
instance-wise segmentation masks. The data buy-in philosophy of FSOCO asks
student teams to contribute to the database first before being granted access,
ensuring continuous growth. By providing clear labeling guidelines and tools
for a sophisticated raw image selection, new annotations are guaranteed to meet
the desired quality. The effectiveness of the approach is shown by comparing
prediction results of a network trained on FSOCO and its unregulated
predecessor. The FSOCO dataset can be found at
https://fsoco.github.io/fsoco-dataset/.
|
2105.07610 | Maya Ramchandran | Maya Ramchandran, Rajarshi Mukherjee, and Giovanni Parmigiani | Cross-Cluster Weighted Forests | 12 pages, 6 figures, 1 table | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | Adapting machine learning algorithms to better handle the presence of
clusters or batch effects within training datasets is important across a wide
variety of biological applications. This article considers the effect of
ensembling Random Forest learners trained on clusters within a dataset with
heterogeneity in the distribution of the features. We find that constructing
ensembles of forests trained on clusters determined by algorithms such as
k-means results in significant improvements in accuracy and generalizability
over the traditional Random Forest algorithm. We begin with a theoretical
exploration of the benefits of our novel approach, denoted as the Cross-Cluster
Weighted Forest, and subsequently empirically examine its robustness to various
data-generating scenarios and outcome models. Furthermore, we explore the
influence of the data partitioning and ensemble weighting strategies on the
benefits of our method over the existing paradigm. Finally, we apply our
approach to cancer molecular profiling and gene expression datasets that are
naturally divisible into clusters and illustrate that our approach outperforms
classic Random Forest.
| [
{
"version": "v1",
"created": "Mon, 17 May 2021 04:58:29 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Oct 2021 02:53:17 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Oct 2024 02:51:27 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Mar 2025 23:40:19 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ramchandran",
"Maya",
""
],
[
"Mukherjee",
"Rajarshi",
""
],
[
"Parmigiani",
"Giovanni",
""
]
] | TITLE: Cross-Cluster Weighted Forests
ABSTRACT: Adapting machine learning algorithms to better handle the presence of
clusters or batch effects within training datasets is important across a wide
variety of biological applications. This article considers the effect of
ensembling Random Forest learners trained on clusters within a dataset with
heterogeneity in the distribution of the features. We find that constructing
ensembles of forests trained on clusters determined by algorithms such as
k-means results in significant improvements in accuracy and generalizability
over the traditional Random Forest algorithm. We begin with a theoretical
exploration of the benefits of our novel approach, denoted as the Cross-Cluster
Weighted Forest, and subsequently empirically examine its robustness to various
data-generating scenarios and outcome models. Furthermore, we explore the
influence of the data partitioning and ensemble weighting strategies on the
benefits of our method over the existing paradigm. Finally, we apply our
approach to cancer molecular profiling and gene expression datasets that are
naturally divisible into clusters and illustrate that our approach outperforms
classic Random Forest.
|
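The Cross-Cluster Weighted Forest record above trains one forest per k-means cluster and combines them with learned weights. Below is a minimal sklearn sketch of that recipe; the non-negative stacking weights fitted on the training data are a simplification of the weighting strategies the paper actually studies.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def fit_cross_cluster_forest(X, y, n_clusters=3, random_state=0):
    """Minimal sketch: one Random Forest per k-means cluster, plus stacking weights."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=random_state).fit_predict(X)
    forests = []
    for k in range(n_clusters):
        rf = RandomForestRegressor(n_estimators=200, random_state=random_state)
        rf.fit(X[clusters == k], y[clusters == k])
        forests.append(rf)
    # Non-negative least-squares stacking on the training data (a simplification).
    stacked = np.column_stack([rf.predict(X) for rf in forests])
    weights = LinearRegression(positive=True, fit_intercept=False).fit(stacked, y).coef_
    weights = weights / weights.sum() if weights.sum() > 0 else np.full(n_clusters, 1 / n_clusters)
    return forests, weights

def predict_cross_cluster_forest(forests, weights, X_new):
    preds = np.column_stack([rf.predict(X_new) for rf in forests])
    return preds @ weights
```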
2109.01123 | Yusuf Dalva | Yusuf Dalva, Hamza Pehlivan, Said Fahri Altindis, and Aysegul Dundar | Benchmarking the Robustness of Instance Segmentation Models | null | IEEE Trans. Neural. Netw. Learn. Syst. 2024 Dec;35(12):17021-17035 | 10.1109/TNNLS.2023.3310985 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents a comprehensive evaluation of instance segmentation
models with respect to real-world image corruptions as well as out-of-domain
image collections, e.g. images captured by a different set-up than the training
dataset. The out-of-domain image evaluation shows the generalization capability
of models, an essential aspect of real-world applications and an extensively
studied topic of domain adaptation. These presented robustness and
generalization evaluations are important when designing instance segmentation
models for real-world applications and picking an off-the-shelf pretrained
model to directly use for the task at hand. Specifically, this benchmark study
includes state-of-the-art network architectures, network backbones,
normalization layers, models trained starting from scratch versus pretrained
networks, and the effect of multi-task training on robustness and
generalization. Through this study, we gain several insights. For example, we
find that group normalization enhances the robustness of networks across
corruptions where the image contents stay the same but corruptions are added on
top. On the other hand, batch normalization improves the generalization of the
models across different datasets where statistics of image features change. We
also find that single-stage detectors do not generalize well to larger image
resolutions than their training size. On the other hand, multi-stage detectors
can easily be used on images of different sizes. We hope that our comprehensive
study will motivate the development of more robust and reliable instance
segmentation models.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2021 17:50:07 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Aug 2022 13:52:51 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 18:46:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Dalva",
"Yusuf",
""
],
[
"Pehlivan",
"Hamza",
""
],
[
"Altindis",
"Said Fahri",
""
],
[
"Dundar",
"Aysegul",
""
]
] | TITLE: Benchmarking the Robustness of Instance Segmentation Models
ABSTRACT: This paper presents a comprehensive evaluation of instance segmentation
models with respect to real-world image corruptions as well as out-of-domain
image collections, e.g. images captured by a different set-up than the training
dataset. The out-of-domain image evaluation shows the generalization capability
of models, an essential aspect of real-world applications and an extensively
studied topic of domain adaptation. These presented robustness and
generalization evaluations are important when designing instance segmentation
models for real-world applications and picking an off-the-shelf pretrained
model to directly use for the task at hand. Specifically, this benchmark study
includes state-of-the-art network architectures, network backbones,
normalization layers, models trained starting from scratch versus pretrained
networks, and the effect of multi-task training on robustness and
generalization. Through this study, we gain several insights. For example, we
find that group normalization enhances the robustness of networks across
corruptions where the image contents stay the same but corruptions are added on
top. On the other hand, batch normalization improves the generalization of the
models across different datasets where statistics of image features change. We
also find that single-stage detectors do not generalize well to larger image
resolutions than their training size. On the other hand, multi-stage detectors
can easily be used on images of different sizes. We hope that our comprehensive
study will motivate the development of more robust and reliable instance
segmentation models.
|
2203.10085 | Sarath Sivaprasad | Ragja Palakkadavath, Sarath Sivaprasad, Shirish Karande, Niranjan
Pedanekar | I Know Therefore I Score: Label-Free Crafting of Scoring Functions using
Constraints Based on Domain Expertise | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Several real-life applications require crafting concise, quantitative scoring
functions (also called rating systems) from measured observations. For example,
an effectiveness score needs to be created for advertising campaigns using a
number of engagement metrics. Experts often need to create such scoring
functions in the absence of labelled data, where the scores need to reflect
business insights and rules as understood by the domain experts. Without a way
to capture these inputs systematically, this becomes a time-consuming process
involving trial and error. In this paper, we introduce a label-free practical
approach to learn a scoring function from multi-dimensional numerical data. The
approach incorporates insights and business rules from domain experts in the
form of easily observable and specifiable constraints, which are used as weak
supervision by a machine learning model. We convert such constraints into loss
functions that are optimized simultaneously while learning the scoring
function. We examine the efficacy of the approach using a synthetic dataset as
well as four real-life datasets, and also compare how it performs vis-a-vis
supervised learning models.
| [
{
"version": "v1",
"created": "Fri, 18 Mar 2022 17:51:20 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 21:34:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Palakkadavath",
"Ragja",
""
],
[
"Sivaprasad",
"Sarath",
""
],
[
"Karande",
"Shirish",
""
],
[
"Pedanekar",
"Niranjan",
""
]
] | TITLE: I Know Therefore I Score: Label-Free Crafting of Scoring Functions using
Constraints Based on Domain Expertise
ABSTRACT: Several real-life applications require crafting concise, quantitative scoring
functions (also called rating systems) from measured observations. For example,
an effectiveness score needs to be created for advertising campaigns using a
number of engagement metrics. Experts often need to create such scoring
functions in the absence of labelled data, where the scores need to reflect
business insights and rules as understood by the domain experts. Without a way
to capture these inputs systematically, this becomes a time-consuming process
involving trial and error. In this paper, we introduce a label-free practical
approach to learn a scoring function from multi-dimensional numerical data. The
approach incorporates insights and business rules from domain experts in the
form of easily observable and specifiable constraints, which are used as weak
supervision by a machine learning model. We convert such constraints into loss
functions that are optimized simultaneously while learning the scoring
function. We examine the efficacy of the approach using a synthetic dataset as
well as four real-life datasets, and also compare how it performs vis-a-vis
supervised learning models.
|
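The label-free scoring record above turns expert rules into loss terms that train the scoring function without labels. The sketch below shows one such rule, a monotonicity constraint ("the score should not drop when a 'good' metric increases"), expressed as a differentiable penalty on a small network; the network, penalty form, and feature indices are illustrative assumptions, and the paper's constraint vocabulary is richer than this single example.

```python
import torch
import torch.nn as nn

class Scorer(nn.Module):
    """Small network mapping a feature vector to a scalar score in [0, 1]."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).squeeze(-1)

def monotonicity_penalty(model, x, feature_idx, delta=0.1):
    """Penalize score decreases when a feature the expert marks as 'good' increases."""
    x_bumped = x.clone()
    x_bumped[:, feature_idx] += delta
    return torch.relu(model(x) - model(x_bumped)).mean()

# Label-free training loop: only constraint losses, no ground-truth scores.
torch.manual_seed(0)
X = torch.rand(256, 5)
model = Scorer(n_features=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    loss = monotonicity_penalty(model, X, feature_idx=0) \
         + monotonicity_penalty(model, X, feature_idx=3)
    opt.zero_grad()
    loss.backward()
    opt.step()
```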
2209.06119 | Ravin Kumar | Ravin Kumar | APTx: better activation function than MISH, SWISH, and ReLU's variants
used in deep learning | 8 pages, 6 figures | International Journal of Artificial Intelligence and Machine
Learning, 2(2), 56-61 (2022) | 10.51483/IJAIML.2.2.2022.56-61 | null | cs.LG cs.AI cs.CV cs.NE | http://creativecommons.org/licenses/by/4.0/ | Activation Functions introduce non-linearity in the deep neural networks.
This nonlinearity helps the neural networks learn faster and more efficiently
from the dataset. In deep learning, many activation functions have been developed
and used based on the type of problem statement. ReLU's variants, SWISH, and MISH
are go-to activation functions. The MISH function is considered to have performance
similar to or even better than SWISH, and much better than ReLU. In this paper, we
propose an activation function named APTx which behaves similarly to MISH but
requires fewer mathematical operations to compute. The lower computational
requirements of APTx speed up model training and thus also reduce the hardware
requirements for the deep learning model. Source code:
https://github.com/mr-ravin/aptx_activation
| [
{
"version": "v1",
"created": "Sat, 10 Sep 2022 14:26:04 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 16:51:19 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Sep 2022 17:39:14 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Mar 2023 17:31:32 GMT"
},
{
"version": "v5",
"created": "Sat, 29 Mar 2025 16:47:51 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kumar",
"Ravin",
""
]
] | TITLE: APTx: better activation function than MISH, SWISH, and ReLU's variants
used in deep learning
ABSTRACT: Activation functions introduce non-linearity into deep neural networks.
This non-linearity helps neural networks learn faster and more efficiently from
the dataset. In deep learning, many activation functions have been developed and
used depending on the type of problem statement. ReLU's variants, SWISH, and
MISH are go-to activation functions. The MISH function is considered to have
similar or even better performance than SWISH, and much better performance than
ReLU. In this paper, we propose an activation function named APTx that behaves
similarly to MISH but requires fewer mathematical operations to compute. The
lower computational cost of APTx speeds up model training and thus also reduces
the hardware requirements for the deep learning model. Source code:
https://github.com/mr-ravin/aptx_activation
|
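Since the APTx record above is about a single closed-form activation, a short PyTorch module makes the idea concrete. The functional form (alpha + tanh(beta * x)) * gamma * x and the default parameters below are written from my reading of the paper and should be checked against the linked repository before use; this is a sketch, not the reference implementation.

import torch
import torch.nn as nn

class APTx(nn.Module):
    """APTx activation: (alpha + tanh(beta * x)) * gamma * x.
    Defaults (alpha=1, beta=1, gamma=0.5) are assumed here to give a MISH-like
    shape; verify against https://github.com/mr-ravin/aptx_activation."""
    def __init__(self, alpha=1.0, beta=1.0, gamma=0.5):
        super().__init__()
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def forward(self, x):
        return (self.alpha + torch.tanh(self.beta * x)) * self.gamma * x

# Compare against MISH on a few sample points; the curves should be close.
x = torch.linspace(-3, 3, 7)
print(APTx()(x))
print(nn.Mish()(x))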
2210.09969 | Daniel Oliveira | Daniel A. P. Oliveira, David Martins de Matos | Transfer-learning for video classification: Video Swin Transformer on
multiple domains | 7 pages, 11 figures | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The computer vision community has seen a shift from convolutional-based to
pure transformer architectures for both image and video tasks. Training a
transformer from zero for these tasks usually requires a lot of data and
computational resources. Video Swin Transformer (VST) is a pure-transformer
model developed for video classification which achieves state-of-the-art
results in accuracy and efficiency on several datasets. In this paper, we aim
to understand if VST generalizes well enough to be used in an out-of-domain
setting. We study the performance of VST on two large-scale datasets, namely
FCVID and Something-Something using a transfer learning approach from
Kinetics-400, which requires around 4x less memory than training from scratch.
We then break down the results to understand where VST fails the most and in
which scenarios the transfer-learning approach is viable. Our experiments show
an 85\% top-1 accuracy on FCVID without retraining the whole model which is
equal to the state-of-the-art for the dataset and a 21\% accuracy on
Something-Something. The experiments also suggest that the performance of the
VST decreases on average when the video duration increases which seems to be a
consequence of a design choice of the model. From the results, we conclude that
VST generalizes well enough to classify out-of-domain videos without retraining
when the target classes are from the same type as the classes used to train the
model. We observed this effect when we performed transfer-learning from
Kinetics-400 to FCVID, where the classes of both datasets mostly represent objects. On the other
hand, if the classes are not from the same type, then the accuracy after the
transfer-learning approach is expected to be poor. We observed this effect when
we performed transfer-learning from Kinetics-400, where the classes represent
mostly objects, to Something-Something, where the classes represent mostly
actions.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2022 16:24:55 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 22:54:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Oliveira",
"Daniel A. P.",
""
],
[
"de Matos",
"David Martins",
""
]
] | TITLE: Transfer-learning for video classification: Video Swin Transformer on
multiple domains
ABSTRACT: The computer vision community has seen a shift from convolutional-based to
pure transformer architectures for both image and video tasks. Training a
transformer from zero for these tasks usually requires a lot of data and
computational resources. Video Swin Transformer (VST) is a pure-transformer
model developed for video classification which achieves state-of-the-art
results in accuracy and efficiency on several datasets. In this paper, we aim
to understand if VST generalizes well enough to be used in an out-of-domain
setting. We study the performance of VST on two large-scale datasets, namely
FCVID and Something-Something using a transfer learning approach from
Kinetics-400, which requires around 4x less memory than training from scratch.
We then break down the results to understand where VST fails the most and in
which scenarios the transfer-learning approach is viable. Our experiments show
an 85\% top-1 accuracy on FCVID without retraining the whole model which is
equal to the state-of-the-art for the dataset and a 21\% accuracy on
Something-Something. The experiments also suggest that the performance of the
VST decreases on average when the video duration increases which seems to be a
consequence of a design choice of the model. From the results, we conclude that
VST generalizes well enough to classify out-of-domain videos without retraining
when the target classes are from the same type as the classes used to train the
model. We observed this effect when we performed transfer-learning from
Kinetics-400 to FCVID, where the classes of both datasets mostly represent objects. On the other
hand, if the classes are not from the same type, then the accuracy after the
transfer-learning approach is expected to be poor. We observed this effect when
we performed transfer-learning from Kinetics-400, where the classes represent
mostly objects, to Something-Something, where the classes represent mostly
actions.
|
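The transfer-learning recipe above (reuse a Kinetics-400 backbone instead of training from scratch) usually boils down to freezing the pretrained encoder and training only a new classification head. The sketch below shows that generic pattern with a placeholder backbone; it is not the authors' code, and loading an actual Video Swin Transformer checkpoint is omitted because the exact API depends on the library version.

import torch
import torch.nn as nn

class TinyVideoBackbone(nn.Module):
    """Stand-in for a Kinetics-400 pretrained video backbone with a .head classifier."""
    def __init__(self, feat_dim=768, num_classes=400):
        super().__init__()
        self.features = nn.Linear(16, feat_dim)    # placeholder feature extractor
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

def adapt_for_new_domain(model, num_classes):
    """Freeze pretrained weights and attach a fresh, trainable classification head."""
    for p in model.parameters():
        p.requires_grad = False
    model.head = nn.Linear(model.head.in_features, num_classes)
    return model

model = adapt_for_new_domain(TinyVideoBackbone(), num_classes=239)  # e.g. FCVID classes
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
print(sum(p.numel() for p in trainable), "trainable parameters")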
2211.06543 | Yuki Yada | Yuki Yada, Jiaying Feng, Tsuneo Matsumoto, Nao Fukushima, Fuyuko Kido,
Hayato Yamana | Dark patterns in e-commerce: a dataset and its baseline evaluations | IEEE BigData 2022 | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dark patterns, which are user interface designs in online services, induce
users to take unintended actions. Recently, dark patterns have been raised as
an issue of privacy and fairness. Thus, a wide range of research on detecting
dark patterns is eagerly awaited. In this work, we constructed a dataset for
dark pattern detection and prepared its baseline detection performance with
state-of-the-art machine learning methods. The original dataset was obtained
from Mathur et al.'s study in 2019, which consists of 1,818 dark pattern texts
from shopping sites. Then, we added negative samples, i.e., non-dark pattern
texts, by retrieving texts from the same websites as Mathur et al.'s dataset.
We also applied state-of-the-art machine learning methods to show the automatic
detection accuracy as baselines, including BERT, RoBERTa, ALBERT, and XLNet. As
a result of 5-fold cross-validation, we achieved the highest accuracy of 0.975
with RoBERTa. The dataset and baseline source codes are available at
https://github.com/yamanalab/ec-darkpattern.
| [
{
"version": "v1",
"created": "Sat, 12 Nov 2022 01:53:49 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 09:57:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yada",
"Yuki",
""
],
[
"Feng",
"Jiaying",
""
],
[
"Matsumoto",
"Tsuneo",
""
],
[
"Fukushima",
"Nao",
""
],
[
"Kido",
"Fuyuko",
""
],
[
"Yamana",
"Hayato",
""
]
] | TITLE: Dark patterns in e-commerce: a dataset and its baseline evaluations
ABSTRACT: Dark patterns, which are user interface designs in online services, induce
users to take unintended actions. Recently, dark patterns have been raised as
an issue of privacy and fairness. Thus, a wide range of research on detecting
dark patterns is eagerly awaited. In this work, we constructed a dataset for
dark pattern detection and prepared its baseline detection performance with
state-of-the-art machine learning methods. The original dataset was obtained
from Mathur et al.'s study in 2019, which consists of 1,818 dark pattern texts
from shopping sites. Then, we added negative samples, i.e., non-dark pattern
texts, by retrieving texts from the same websites as Mathur et al.'s dataset.
We also applied state-of-the-art machine learning methods to show the automatic
detection accuracy as baselines, including BERT, RoBERTa, ALBERT, and XLNet. As
a result of 5-fold cross-validation, we achieved the highest accuracy of 0.975
with RoBERTa. The dataset and baseline source codes are available at
https://github.com/yamanalab/ec-darkpattern.
|
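The baseline evaluation above fine-tunes BERT-family models under 5-fold cross-validation. As a much lighter stand-in for that protocol (not the authors' pipeline), the sketch below runs the same 5-fold evaluation with a TF-IDF plus logistic-regression classifier; the example texts and labels are invented placeholders, not items from the released dataset.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder corpus: 1 = dark-pattern text, 0 = ordinary shopping-site text.
texts = [
    "Only 2 left in stock - order now!",
    "Hurry, 13 people are viewing this item",
    "Offer ends in 09:59 minutes",
    "Free shipping on orders over $50",
    "Read our returns policy here",
    "Sign up for our monthly newsletter",
] * 10
labels = [1, 1, 1, 0, 0, 0] * 10

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, texts, labels, cv=cv, scoring="accuracy")
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))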
2211.09107 | Mohammad Reza Zarei | Mohammad Reza Zarei, Majid Komeili | Interpretable Few-shot Learning with Online Attribute Selection | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-shot learning (FSL) presents a challenging learning problem in which only
a few samples are available for each class. Decision interpretation is more
important in few-shot classification due to a greater chance of error compared
to traditional classification. However, the majority of the previous FSL
methods are black-box models. In this paper, we propose an inherently
interpretable model for FSL based on human-friendly attributes. Previously,
human-friendly attributes have been utilized to train models with the potential
for human interaction and interpretability. However, such approaches are not
directly extendible to the few-shot classification scenario. Moreover, we
propose an online attribute selection mechanism to effectively filter out
irrelevant attributes in each episode. The attribute selection mechanism
improves accuracy and helps with interpretability by reducing the number of
attributes that participate in each episode. We further propose a mechanism
that automatically detects the episodes where the pool of available
human-friendly attributes is insufficient, and subsequently augments it by
engaging some learned unknown attributes. We demonstrate that the proposed
method achieves results on par with black-box few-shot learning models on four
widely used datasets. We also empirically evaluate the level of decision
alignment between different models and human understanding and show that our
model outperforms the comparison methods based on this criterion.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2022 18:50:11 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 17:43:18 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 02:41:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zarei",
"Mohammad Reza",
""
],
[
"Komeili",
"Majid",
""
]
] | TITLE: Interpretable Few-shot Learning with Online Attribute Selection
ABSTRACT: Few-shot learning (FSL) presents a challenging learning problem in which only
a few samples are available for each class. Decision interpretation is more
important in few-shot classification due to a greater chance of error compared
to traditional classification. However, the majority of the previous FSL
methods are black-box models. In this paper, we propose an inherently
interpretable model for FSL based on human-friendly attributes. Previously,
human-friendly attributes have been utilized to train models with the potential
for human interaction and interpretability. However, such approaches are not
directly extendible to the few-shot classification scenario. Moreover, we
propose an online attribute selection mechanism to effectively filter out
irrelevant attributes in each episode. The attribute selection mechanism
improves accuracy and helps with interpretability by reducing the number of
attributes that participate in each episode. We further propose a mechanism
that automatically detects the episodes where the pool of available
human-friendly attributes is insufficient, and subsequently augments it by
engaging some learned unknown attributes. We demonstrate that the proposed
method achieves results on par with black-box few-shot learning models on four
widely used datasets. We also empirically evaluate the level of decision
alignment between different models and human understanding and show that our
model outperforms the comparison methods based on this criterion.
|
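The online attribute selection described above keeps, per episode, only the attributes that discriminate between the classes present in that episode. The paper's mechanism is learned; as a schematic stand-in, the sketch below ranks attributes by a Fisher-style ratio of between-class to within-class variance on the support set. The scoring rule, top_k, and the synthetic data are assumptions for illustration.

import numpy as np

def select_attributes(support_attrs, support_labels, top_k=3):
    """Keep the top_k attributes with the highest between/within-class variance
    ratio computed on the episode's support set."""
    classes = np.unique(support_labels)
    overall_mean = support_attrs.mean(axis=0)
    between = sum((support_attrs[support_labels == c].mean(axis=0) - overall_mean) ** 2
                  for c in classes)
    within = sum(support_attrs[support_labels == c].var(axis=0) for c in classes) + 1e-8
    return np.argsort(between / within)[::-1][:top_k]

rng = np.random.default_rng(0)
support_attrs = rng.random((10, 6))           # 10 support images, 6 human-friendly attributes
support_labels = np.array([0] * 5 + [1] * 5)
support_attrs[:5, 2] += 1.0                   # attribute 2 separates the two classes
print(select_attributes(support_attrs, support_labels))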
2305.13608 | Wenxiao Cai | Wenxiao Cai, Ke Jin, Jinyan Hou, Cong Guo, Letian Wu, Wankou Yang | VDD: Varied Drone Dataset for Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation of drone images is critical for various aerial vision
tasks as it provides essential semantic details to understand scenes on the
ground. Ensuring high accuracy of semantic segmentation models for drones
requires access to diverse, large-scale, and high-resolution datasets, which
are often scarce in the field of aerial image processing. While existing
datasets typically focus on urban scenes and are relatively small, our Varied
Drone Dataset (VDD) addresses these limitations by offering a large-scale,
densely labeled collection of 400 high-resolution images spanning 7 classes.
This dataset features various scenes in urban, industrial, rural, and natural
areas, captured from different camera angles and under diverse lighting
conditions. We also make new annotations to UDD and UAVid, integrating them
under VDD annotation standards, to create the Integrated Drone Dataset (IDD).
We train seven state-of-the-art models on drone datasets as baselines. It's
expected that our dataset will generate considerable interest in drone image
segmentation and serve as a foundation for other drone vision tasks. Datasets
are publicly available at https://github.com/RussRobin/VDD.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 02:16:14 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Aug 2023 14:11:34 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Jul 2024 06:35:51 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Mar 2025 17:50:46 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Cai",
"Wenxiao",
""
],
[
"Jin",
"Ke",
""
],
[
"Hou",
"Jinyan",
""
],
[
"Guo",
"Cong",
""
],
[
"Wu",
"Letian",
""
],
[
"Yang",
"Wankou",
""
]
] | TITLE: VDD: Varied Drone Dataset for Semantic Segmentation
ABSTRACT: Semantic segmentation of drone images is critical for various aerial vision
tasks as it provides essential semantic details to understand scenes on the
ground. Ensuring high accuracy of semantic segmentation models for drones
requires access to diverse, large-scale, and high-resolution datasets, which
are often scarce in the field of aerial image processing. While existing
datasets typically focus on urban scenes and are relatively small, our Varied
Drone Dataset (VDD) addresses these limitations by offering a large-scale,
densely labeled collection of 400 high-resolution images spanning 7 classes.
This dataset features various scenes in urban, industrial, rural, and natural
areas, captured from different camera angles and under diverse lighting
conditions. We also make new annotations to UDD and UAVid, integrating them
under VDD annotation standards, to create the Integrated Drone Dataset (IDD).
We train seven state-of-the-art models on drone datasets as baselines. It's
expected that our dataset will generate considerable interest in drone image
segmentation and serve as a foundation for other drone vision tasks. Datasets
are publicly available at https://github.com/RussRobin/VDD.
|
2307.04910 | Sirisha Rambhatla | Troy Zada, Natalie Tam, Francois Barnard, Marlize Van Sittert, Venkat
Bhat, Sirisha Rambhatla | Medical Misinformation in AI-Assisted Self-Diagnosis: Development of a
Method (EvalPrompt) for Analyzing Large Language Models | 11 pages, 3 figures, Journal of Medical Internet Research: Formative
Research | JMIR Form Res 2025;9:e66207 | 10.2196/66207 | null | cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Rapid integration of large language models (LLMs) in health care is sparking
global discussion about their potential to revolutionize health care quality
and accessibility. At a time when improving health care quality and access
remains a critical concern for countries worldwide, the ability of these models
to pass medical examinations is often cited as a reason to use them for medical
training and diagnosis. However, the impact of their inevitable use as a
self-diagnostic tool and their role in spreading healthcare misinformation has
not been evaluated. This study aims to assess the effectiveness of LLMs,
particularly ChatGPT, from the perspective of an individual self-diagnosing to
better understand the clarity, correctness, and robustness of the models. We
propose a comprehensive testing methodology, Evaluation of LLM Prompts
(EvalPrompt). This evaluation methodology uses multiple-choice medical
licensing examination questions to evaluate LLM responses. We use open-ended
questions to mimic real-world self-diagnosis use cases, and perform sentence
dropout to mimic realistic self-diagnosis with missing information. Human
evaluators then assess the responses returned by ChatGPT for both experiments
for clarity, correctness, and robustness. The results highlight the modest
capabilities of LLMs, as their responses are often unclear and inaccurate. As a
result, medical advice by LLMs should be cautiously approached. However,
evidence suggests that LLMs are steadily improving and could potentially play a
role in healthcare systems in the future. To address the issue of medical
misinformation, there is a pressing need for the development of a comprehensive
self-diagnosis dataset. This dataset could enhance the reliability of LLMs in
medical applications by featuring more realistic prompt styles with minimal
information across a broader range of medical fields.
| [
{
"version": "v1",
"created": "Mon, 10 Jul 2023 21:28:26 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 18:34:35 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zada",
"Troy",
""
],
[
"Tam",
"Natalie",
""
],
[
"Barnard",
"Francois",
""
],
[
"Van Sittert",
"Marlize",
""
],
[
"Bhat",
"Venkat",
""
],
[
"Rambhatla",
"Sirisha",
""
]
] | TITLE: Medical Misinformation in AI-Assisted Self-Diagnosis: Development of a
Method (EvalPrompt) for Analyzing Large Language Models
ABSTRACT: Rapid integration of large language models (LLMs) in health care is sparking
global discussion about their potential to revolutionize health care quality
and accessibility. At a time when improving health care quality and access
remains a critical concern for countries worldwide, the ability of these models
to pass medical examinations is often cited as a reason to use them for medical
training and diagnosis. However, the impact of their inevitable use as a
self-diagnostic tool and their role in spreading healthcare misinformation has
not been evaluated. This study aims to assess the effectiveness of LLMs,
particularly ChatGPT, from the perspective of an individual self-diagnosing to
better understand the clarity, correctness, and robustness of the models. We
propose a comprehensive testing methodology, Evaluation of LLM Prompts
(EvalPrompt). This evaluation methodology uses multiple-choice medical
licensing examination questions to evaluate LLM responses. We use open-ended
questions to mimic real-world self-diagnosis use cases, and perform sentence
dropout to mimic realistic self-diagnosis with missing information. Human
evaluators then assess the responses returned by ChatGPT for both experiments
for clarity, correctness, and robustness. The results highlight the modest
capabilities of LLMs, as their responses are often unclear and inaccurate. As a
result, medical advice by LLMs should be cautiously approached. However,
evidence suggests that LLMs are steadily improving and could potentially play a
role in healthcare systems in the future. To address the issue of medical
misinformation, there is a pressing need for the development of a comprehensive
self-diagnosis dataset. This dataset could enhance the reliability of LLMs in
medical applications by featuring more realistic prompt styles with minimal
information across a broader range of medical fields.
|
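The sentence-dropout step above (deleting sentences from exam questions to mimic incomplete self-reported information) is straightforward to reproduce in spirit. The helper below is a hypothetical illustration rather than the EvalPrompt implementation, and the regex-based sentence splitting is deliberately naive.

import random
import re

def sentence_dropout(text, drop_prob=0.3, seed=None):
    """Randomly remove sentences from a prompt to simulate missing information."""
    rng = random.Random(seed)
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    kept = [s for s in sentences if rng.random() > drop_prob]
    return " ".join(kept) if kept else sentences[0]   # never return an empty prompt

question = ("A 54-year-old man presents with chest pain. The pain started two hours ago. "
            "He has a history of hypertension. What is the most likely diagnosis?")
print(sentence_dropout(question, drop_prob=0.5, seed=1))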
2307.14906 | Philipp Normann | Timo Wilm, Philipp Normann, Sophie Baumeister, Paul-Vincent Kobow | Scaling Session-Based Transformer Recommendations using Optimized
Negative Sampling and Loss Functions | Accepted at the Seventeenth ACM Conference on Recommender Systems
(RecSys '23) | null | 10.1145/3604915.3610236 | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This work introduces TRON, a scalable session-based Transformer Recommender
using Optimized Negative-sampling. Motivated by the scalability and performance
limitations of prevailing models such as SASRec and GRU4Rec+, TRON integrates
top-k negative sampling and listwise loss functions to enhance its
recommendation accuracy. Evaluations on relevant large-scale e-commerce
datasets show that TRON improves upon the recommendation quality of current
methods while maintaining training speeds similar to SASRec. A live A/B test
yielded an 18.14% increase in click-through rate over SASRec, highlighting the
potential of TRON in practical settings. For further research, we provide
access to our source code at https://github.com/otto-de/TRON and an anonymized
dataset at https://github.com/otto-de/recsys-dataset.
| [
{
"version": "v1",
"created": "Thu, 27 Jul 2023 14:47:38 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 12:18:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wilm",
"Timo",
""
],
[
"Normann",
"Philipp",
""
],
[
"Baumeister",
"Sophie",
""
],
[
"Kobow",
"Paul-Vincent",
""
]
] | TITLE: Scaling Session-Based Transformer Recommendations using Optimized
Negative Sampling and Loss Functions
ABSTRACT: This work introduces TRON, a scalable session-based Transformer Recommender
using Optimized Negative-sampling. Motivated by the scalability and performance
limitations of prevailing models such as SASRec and GRU4Rec+, TRON integrates
top-k negative sampling and listwise loss functions to enhance its
recommendation accuracy. Evaluations on relevant large-scale e-commerce
datasets show that TRON improves upon the recommendation quality of current
methods while maintaining training speeds similar to SASRec. A live A/B test
yielded an 18.14% increase in click-through rate over SASRec, highlighting the
potential of TRON in practical settings. For further research, we provide
access to our source code at https://github.com/otto-de/TRON and an anonymized
dataset at https://github.com/otto-de/recsys-dataset.
|
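TRON's combination of negative sampling with a listwise loss can be pictured as a sampled softmax over the positive item and the k hardest of a set of sampled negatives. The sketch below follows that generic pattern; the sampling distribution, candidate counts, and dot-product scoring are assumptions, and the released implementation at the linked repository differs in its details.

import torch
import torch.nn.functional as F

def topk_sampled_softmax_loss(session_emb, item_emb, pos_items,
                              num_candidates=128, top_k=16):
    """Listwise loss over the positive item and the top_k highest-scoring
    (hardest) of uniformly sampled negative candidates."""
    n_items = item_emb.size(0)
    batch = session_emb.size(0)

    cand = torch.randint(0, n_items, (batch, num_candidates))          # sampled negatives
    neg_scores = (session_emb.unsqueeze(1) * item_emb[cand]).sum(-1)   # (batch, num_candidates)
    hard_neg_scores, _ = neg_scores.topk(top_k, dim=1)                 # keep the hardest ones

    pos_scores = (session_emb * item_emb[pos_items]).sum(-1, keepdim=True)
    logits = torch.cat([pos_scores, hard_neg_scores], dim=1)           # positive is class 0
    return F.cross_entropy(logits, torch.zeros(batch, dtype=torch.long))

torch.manual_seed(0)
item_emb = torch.randn(1000, 64, requires_grad=True)   # item embedding table
session_emb = torch.randn(8, 64)                       # session-encoder outputs
pos_items = torch.randint(0, 1000, (8,))
print(topk_sampled_softmax_loss(session_emb, item_emb, pos_items))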
2309.04379 | Dongming Wu | Dongming Wu, Wencheng Han, Yingfei Liu, Tiancai Wang, Cheng-zhong Xu,
Xiangyu Zhang, Jianbing Shen | Language Prompt for Autonomous Driving | Accepted by AAAI2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new trend in the computer vision community is to capture objects of
interest following flexible human command represented by a natural language
prompt. However, the progress of using language prompts in driving scenarios is
stuck in a bottleneck due to the scarcity of paired prompt-instance data. To
address this challenge, we propose the first object-centric language prompt set
for driving scenes within 3D, multi-view, and multi-frame space, named
NuPrompt. It expands nuScenes dataset by constructing a total of 40,147
language descriptions, each referring to an average of 7.4 object tracklets.
Based on the object-text pairs from the new benchmark, we formulate a novel
prompt-based driving task, i.e., employing a language prompt to predict the
described object trajectory across views and frames. Furthermore, we provide a
simple end-to-end baseline model based on Transformer, named PromptTrack.
Experiments show that our PromptTrack achieves impressive performance on
NuPrompt. We hope this work can provide some new insights for the self-driving
community. The data and code have been released at
https://github.com/wudongming97/Prompt4Driving.
| [
{
"version": "v1",
"created": "Fri, 8 Sep 2023 15:21:07 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 15:11:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wu",
"Dongming",
""
],
[
"Han",
"Wencheng",
""
],
[
"Liu",
"Yingfei",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Xu",
"Cheng-zhong",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Shen",
"Jianbing",
""
]
] | TITLE: Language Prompt for Autonomous Driving
ABSTRACT: A new trend in the computer vision community is to capture objects of
interest following flexible human command represented by a natural language
prompt. However, the progress of using language prompts in driving scenarios is
stuck in a bottleneck due to the scarcity of paired prompt-instance data. To
address this challenge, we propose the first object-centric language prompt set
for driving scenes within 3D, multi-view, and multi-frame space, named
NuPrompt. It expands nuScenes dataset by constructing a total of 40,147
language descriptions, each referring to an average of 7.4 object tracklets.
Based on the object-text pairs from the new benchmark, we formulate a novel
prompt-based driving task, i.e., employing a language prompt to predict the
described object trajectory across views and frames. Furthermore, we provide a
simple end-to-end baseline model based on Transformer, named PromptTrack.
Experiments show that our PromptTrack achieves impressive performance on
NuPrompt. We hope this work can provide some new insights for the self-driving
community. The data and code have been released at
https://github.com/wudongming97/Prompt4Driving.
|
2309.13885 | Jing Zhu | Jing Zhu, Xiang Song, Vassilis N. Ioannidis, Danai Koutra, Christos
Faloutsos | TouchUp-G: Improving Feature Representation through Graph-Centric
Finetuning | SIGIR 2024 | null | null | null | cs.LG cs.AI cs.CL cs.CV cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we enhance the node features acquired from Pretrained Models (PMs) to
better suit downstream graph learning tasks? Graph Neural Networks (GNNs) have
become the state-of-the-art approach for many high-impact, real-world graph
applications. For feature-rich graphs, a prevalent practice involves utilizing
a PM directly to generate features, without incorporating any domain adaptation
techniques. Nevertheless, this practice is suboptimal because the node features
extracted from PM are graph-agnostic and prevent GNNs from fully utilizing the
potential correlations between the graph structure and node features, leading
to a decline in GNNs performance. In this work, we seek to improve the node
features obtained from a PM for downstream graph tasks and introduce TOUCHUP-G,
which has several advantages. It is (a) General: applicable to any downstream
graph task, including link prediction which is often employed in recommender
systems; (b) Multi-modal: able to improve raw features of any modality (e.g.
images, texts, audio); (c) Principled: it is closely related to a novel metric,
feature homophily, which we propose to quantify the potential correlations
between the graph structure and node features and we show that TOUCHUP-G can
effectively shrink the discrepancy between the graph structure and node
features; (d) Effective: achieving state-of-the-art results on four real-world
datasets spanning different tasks and modalities.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 05:44:40 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 05:32:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhu",
"Jing",
""
],
[
"Song",
"Xiang",
""
],
[
"Ioannidis",
"Vassilis N.",
""
],
[
"Koutra",
"Danai",
""
],
[
"Faloutsos",
"Christos",
""
]
] | TITLE: TouchUp-G: Improving Feature Representation through Graph-Centric
Finetuning
ABSTRACT: How can we enhance the node features acquired from Pretrained Models (PMs) to
better suit downstream graph learning tasks? Graph Neural Networks (GNNs) have
become the state-of-the-art approach for many high-impact, real-world graph
applications. For feature-rich graphs, a prevalent practice involves utilizing
a PM directly to generate features, without incorporating any domain adaptation
techniques. Nevertheless, this practice is suboptimal because the node features
extracted from PM are graph-agnostic and prevent GNNs from fully utilizing the
potential correlations between the graph structure and node features, leading
to a decline in GNNs performance. In this work, we seek to improve the node
features obtained from a PM for downstream graph tasks and introduce TOUCHUP-G,
which has several advantages. It is (a) General: applicable to any downstream
graph task, including link prediction which is often employed in recommender
systems; (b) Multi-modal: able to improve raw features of any modality (e.g.
images, texts, audio); (c) Principled: it is closely related to a novel metric,
feature homophily, which we propose to quantify the potential correlations
between the graph structure and node features and we show that TOUCHUP-G can
effectively shrink the discrepancy between the graph structure and node
features; (d) Effective: achieving state-of-the-art results on four real-world
datasets spanning different tasks and modalities.
|
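Feature homophily, as motivated above, measures how strongly node features align with the graph structure. The paper gives its own definition; the sketch below computes only a rough proxy, the mean cosine similarity of features across edges minus the same quantity over random node pairs, which is an assumption for illustration rather than the metric used in TOUCHUP-G.

import numpy as np

def edge_feature_similarity(features, pairs):
    """Mean cosine similarity of node feature vectors over (u, v) index pairs."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    u, v = np.asarray(pairs).T
    return float((f[u] * f[v]).sum(axis=1).mean())

def feature_homophily_proxy(features, edges, num_random=10000, seed=0):
    """Edge similarity minus random-pair similarity; larger values mean the
    features and the structure agree more."""
    rng = np.random.default_rng(seed)
    random_pairs = rng.integers(0, features.shape[0], size=(num_random, 2))
    return (edge_feature_similarity(features, edges)
            - edge_feature_similarity(features, random_pairs))

rng = np.random.default_rng(1)
features = rng.normal(size=(100, 16))
features[:50] += 2.0                                   # two feature "communities"
edges = [(i, j) for i in range(50) for j in range(i + 1, 50) if rng.random() < 0.1]
print(feature_homophily_proxy(features, edges))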
2309.16924 | Xiang Gao | Xiang Gao, Hainan Cui, Yangdong Liu, and Shuhan Shen | Incremental Rotation Averaging Revisited | Submitted to Elsevier Journal | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In order to further advance the accuracy and robustness of the incremental
parameter estimation-based rotation averaging methods, in this paper, a new
member of the Incremental Rotation Averaging (IRA) family is introduced, termed
IRAv4. As its most significant feature, a task-specific connected dominating set
is extracted in IRAv4 to serve as a more reliable and accurate reference for
rotation local-to-global alignment. This alignment reference is constructed
incrementally, while the absolute rotations of the vertices belonging to it are
estimated simultaneously. Comprehensive evaluations on the 1DSfM dataset
demonstrate the effectiveness of both the reference construction method and the
entire rotation averaging pipeline proposed in this paper.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 01:51:04 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Dec 2023 11:14:03 GMT"
},
{
"version": "v3",
"created": "Fri, 5 Jan 2024 02:49:43 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Mar 2025 08:40:25 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gao",
"Xiang",
""
],
[
"Cui",
"Hainan",
""
],
[
"Liu",
"Yangdong",
""
],
[
"Shen",
"Shuhan",
""
]
] | TITLE: Incremental Rotation Averaging Revisited
ABSTRACT: In order to further advance the accuracy and robustness of the incremental
parameter estimation-based rotation averaging methods, in this paper, a new
member of the Incremental Rotation Averaging (IRA) family is introduced, termed
IRAv4. As its most significant feature, a task-specific connected dominating set
is extracted in IRAv4 to serve as a more reliable and accurate reference for
rotation local-to-global alignment. This alignment reference is constructed
incrementally, while the absolute rotations of the vertices belonging to it are
estimated simultaneously. Comprehensive evaluations on the 1DSfM dataset
demonstrate the effectiveness of both the reference construction method and the
entire rotation averaging pipeline proposed in this paper.
|
2309.17095 | Adam Rida | Adam Rida, Marie-Jeanne Lesot, Xavier Renard, and Christophe Marsala | Dynamic Interpretability for Model Comparison via Decision Rules | null | null | 10.1007/978-3-031-74630-7_23 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Explainable AI (XAI) methods have mostly been built to investigate and shed
light on single machine learning models and are not designed to capture and
explain differences between multiple models effectively. This paper addresses
the challenge of understanding and explaining differences between machine
learning models, which is crucial for model selection, monitoring and lifecycle
management in real-world applications. We propose DeltaXplainer, a
model-agnostic method for generating rule-based explanations describing the
differences between two binary classifiers. To assess the effectiveness of
DeltaXplainer, we conduct experiments on synthetic and real-world datasets,
covering various model comparison scenarios involving different types of
concept drift.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 09:42:49 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Rida",
"Adam",
""
],
[
"Lesot",
"Marie-Jeanne",
""
],
[
"Renard",
"Xavier",
""
],
[
"Marsala",
"Christophe",
""
]
] | TITLE: Dynamic Interpretability for Model Comparison via Decision Rules
ABSTRACT: Explainable AI (XAI) methods have mostly been built to investigate and shed
light on single machine learning models and are not designed to capture and
explain differences between multiple models effectively. This paper addresses
the challenge of understanding and explaining differences between machine
learning models, which is crucial for model selection, monitoring and lifecycle
management in real-world applications. We propose DeltaXplainer, a
model-agnostic method for generating rule-based explanations describing the
differences between two binary classifiers. To assess the effectiveness of
DeltaXplainer, we conduct experiments on synthetic and real-world datasets,
covering various model comparison scenarios involving different types of
concept drift.
|
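The rule-based comparison described above can be approximated by labelling each sample with whether the two classifiers disagree on it, then fitting an interpretable surrogate to those disagreement labels. The sketch below follows that general recipe with a shallow decision tree on synthetic data; it illustrates the idea only and is not the DeltaXplainer code.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)

# Two binary classifiers whose behavioural differences we want to describe.
model_a = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])
model_b = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:1000], y[:1000])

# Label held-out samples by whether the two models disagree on them.
disagree = (model_a.predict(X[1000:]) != model_b.predict(X[1000:])).astype(int)

# A shallow tree on the disagreement labels yields human-readable difference rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[1000:], disagree)
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))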
2310.12781 | Yifei Xiong | Yifei Xiong, Nianqiao Phyllis Ju, Sanguo Zhang | Simulation-based Bayesian Inference from Privacy Protected Data | 28 pages, 15 figures | null | null | null | stat.ML cs.LG stat.CO | http://creativecommons.org/licenses/by/4.0/ | Many modern statistical analysis and machine learning applications require
training models on sensitive user data. Under a formal definition of privacy
protection, differentially private algorithms inject calibrated noise into the
confidential data or during the data analysis process to produce
privacy-protected datasets or queries. However, restricting access to only
privatized data during statistical analysis makes it computationally
challenging to make valid statistical inferences. In this work, we propose
simulation-based inference methods from privacy-protected datasets. In addition
to sequential Monte Carlo approximate Bayesian computation, we adopt neural
conditional density estimators as a flexible family of distributions to
approximate the posterior distribution of model parameters given the observed
private query results. We illustrate our methods on discrete time-series data
under an infectious disease model and with ordinary linear regression models.
Illustrating the privacy-utility trade-off, our experiments and analysis
demonstrate the necessity and feasibility of designing valid statistical
inference procedures to correct for biases introduced by the privacy-protection
mechanisms.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2023 14:34:17 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Oct 2023 07:24:36 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Dec 2023 15:13:46 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Mar 2025 19:39:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xiong",
"Yifei",
""
],
[
"Ju",
"Nianqiao Phyllis",
""
],
[
"Zhang",
"Sanguo",
""
]
] | TITLE: Simulation-based Bayesian Inference from Privacy Protected Data
ABSTRACT: Many modern statistical analysis and machine learning applications require
training models on sensitive user data. Under a formal definition of privacy
protection, differentially private algorithms inject calibrated noise into the
confidential data or during the data analysis process to produce
privacy-protected datasets or queries. However, restricting access to only
privatized data during statistical analysis makes it computationally
challenging to make valid statistical inferences. In this work, we propose
simulation-based inference methods from privacy-protected datasets. In addition
to sequential Monte Carlo approximate Bayesian computation, we adopt neural
conditional density estimators as a flexible family of distributions to
approximate the posterior distribution of model parameters given the observed
private query results. We illustrate our methods on discrete time-series data
under an infectious disease model and with ordinary linear regression models.
Illustrating the privacy-utility trade-off, our experiments and analysis
demonstrate the necessity and feasibility of designing valid statistical
inference procedures to correct for biases introduced by the privacy-protection
mechanisms.
|
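A minimal way to see the simulation-based idea above in action is rejection ABC against a privatized summary: simulate data from candidate parameters, push it through the same privacy mechanism as the release (here, Laplace noise on a clamped mean), and keep parameters whose noisy summaries land near the observed release. This toy sketch is far simpler than the SMC-ABC and neural density estimators used in the paper, and all constants are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def privatize_mean(x, epsilon=1.0, lower=0.0, upper=1.0):
    """Release the clamped sample mean with Laplace noise (sensitivity (upper-lower)/n)."""
    x = np.clip(x, lower, upper)
    sensitivity = (upper - lower) / len(x)
    return x.mean() + rng.laplace(scale=sensitivity / epsilon)

# Confidential data from an unknown Bernoulli parameter; only a noisy mean is released.
true_theta, n = 0.3, 500
observed_release = privatize_mean(rng.binomial(1, true_theta, n).astype(float))

# Rejection ABC: propose theta, simulate, privatize identically, compare to the release.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0, 1)
    simulated_release = privatize_mean(rng.binomial(1, theta, n).astype(float))
    if abs(simulated_release - observed_release) < 0.02:
        accepted.append(theta)

print(len(accepted), "accepted, posterior mean approx.", np.mean(accepted))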
2310.13766 | Andrea Boscolo Camiletto | Andrea Boscolo Camiletto, Alfredo Bochicchio, Alexander Liniger,
Dengxin Dai, Abel Gawel | U-BEV: Height-aware Bird's-Eye-View Segmentation and Neural Map-based
Relocalization | Published in: 2024 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS) | null | 10.1109/IROS58592.2024.10802787 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Efficient relocalization is essential for intelligent vehicles when GPS
reception is insufficient or sensor-based localization fails. Recent advances
in Bird's-Eye-View (BEV) segmentation allow for accurate estimation of local
scene appearance and in turn, can benefit the relocalization of the vehicle.
However, one downside of BEV methods is the heavy computation required to
leverage the geometric constraints. This paper presents U-BEV, a U-Net inspired
architecture that extends the current state-of-the-art by allowing the BEV to
reason about the scene on multiple height layers before flattening the BEV
features. We show that this extension boosts the performance of the U-BEV by up
to 4.11 IoU. Additionally, we combine the encoded neural BEV with a
differentiable template matcher to perform relocalization on neural SD-map
data. The model is fully end-to-end trainable and outperforms transformer-based
BEV methods of similar computational complexity by 1.7 to 2.8 mIoU and
BEV-based relocalization by over 26% Recall Accuracy on the nuScenes dataset.
| [
{
"version": "v1",
"created": "Fri, 20 Oct 2023 18:57:38 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Sep 2024 22:05:52 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 12:41:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Camiletto",
"Andrea Boscolo",
""
],
[
"Bochicchio",
"Alfredo",
""
],
[
"Liniger",
"Alexander",
""
],
[
"Dai",
"Dengxin",
""
],
[
"Gawel",
"Abel",
""
]
] | TITLE: U-BEV: Height-aware Bird's-Eye-View Segmentation and Neural Map-based
Relocalization
ABSTRACT: Efficient relocalization is essential for intelligent vehicles when GPS
reception is insufficient or sensor-based localization fails. Recent advances
in Bird's-Eye-View (BEV) segmentation allow for accurate estimation of local
scene appearance and in turn, can benefit the relocalization of the vehicle.
However, one downside of BEV methods is the heavy computation required to
leverage the geometric constraints. This paper presents U-BEV, a U-Net inspired
architecture that extends the current state-of-the-art by allowing the BEV to
reason about the scene on multiple height layers before flattening the BEV
features. We show that this extension boosts the performance of the U-BEV by up
to 4.11 IoU. Additionally, we combine the encoded neural BEV with a
differentiable template matcher to perform relocalization on neural SD-map
data. The model is fully end-to-end trainable and outperforms transformer-based
BEV methods of similar computational complexity by 1.7 to 2.8 mIoU and
BEV-based relocalization by over 26% Recall Accuracy on the nuScenes dataset.
|
2310.14356 | Andre Ye | Andre Ye, Sebastin Santy, Jena D. Hwang, Amy X. Zhang, Ranjay Krishna | Computer Vision Datasets and Models Exhibit Cultural and Linguistic
Diversity in Perception | CVPR 2025 | null | null | null | cs.CV cs.CL cs.CY cs.HC | http://creativecommons.org/licenses/by/4.0/ | Computer vision often treats human perception as homogeneous: an implicit
assumption that visual stimuli are perceived similarly by everyone. This
assumption is reflected in the way researchers collect datasets and train
vision models. By contrast, literature in cross-cultural psychology and
linguistics has provided evidence that people from different cultural
backgrounds observe vastly different concepts even when viewing the same visual
stimuli. In this paper, we study how these differences manifest themselves in
vision-language datasets and models, using language as a proxy for culture. By
comparing textual descriptions generated across 7 languages for the same
images, we find significant differences in the semantic content and linguistic
expression. When datasets are multilingual as opposed to monolingual,
descriptions have higher semantic coverage on average, where coverage is
measured using scene graphs, model embeddings, and linguistic taxonomies. For
example, multilingual descriptions have on average 29.9% more objects, 24.5%
more relations, and 46.0% more attributes than a set of monolingual captions.
When prompted to describe images in different languages, popular models (e.g.
LLaVA) inherit this bias and describe different parts of the image. Moreover,
finetuning models on captions from one language performs best on corresponding
test data from that language, while finetuning on multilingual data performs
consistently well across all test data compositions. Our work points towards
the need to account for and embrace the diversity of human perception in the
computer vision community.
| [
{
"version": "v1",
"created": "Sun, 22 Oct 2023 16:51:42 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Nov 2023 05:55:12 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Mar 2024 20:47:30 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Mar 2025 01:42:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ye",
"Andre",
""
],
[
"Santy",
"Sebastin",
""
],
[
"Hwang",
"Jena D.",
""
],
[
"Zhang",
"Amy X.",
""
],
[
"Krishna",
"Ranjay",
""
]
] | TITLE: Computer Vision Datasets and Models Exhibit Cultural and Linguistic
Diversity in Perception
ABSTRACT: Computer vision often treats human perception as homogeneous: an implicit
assumption that visual stimuli are perceived similarly by everyone. This
assumption is reflected in the way researchers collect datasets and train
vision models. By contrast, literature in cross-cultural psychology and
linguistics has provided evidence that people from different cultural
backgrounds observe vastly different concepts even when viewing the same visual
stimuli. In this paper, we study how these differences manifest themselves in
vision-language datasets and models, using language as a proxy for culture. By
comparing textual descriptions generated across 7 languages for the same
images, we find significant differences in the semantic content and linguistic
expression. When datasets are multilingual as opposed to monolingual,
descriptions have higher semantic coverage on average, where coverage is
measured using scene graphs, model embeddings, and linguistic taxonomies. For
example, multilingual descriptions have on average 29.9% more objects, 24.5%
more relations, and 46.0% more attributes than a set of monolingual captions.
When prompted to describe images in different languages, popular models (e.g.
LLaVA) inherit this bias and describe different parts of the image. Moreover,
finetuning models on captions from one language performs best on corresponding
test data from that language, while finetuning on multilingual data performs
consistently well across all test data compositions. Our work points towards
the need to account for and embrace the diversity of human perception in the
computer vision community.
|
2311.07622 | Junyang Chen | Junyang Chen, Hanjiang Lai | Pretrain like Your Inference: Masked Tuning Improves Zero-Shot Composed
Image Retrieval | accepted by ICME 2025, this is the full version of paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot composed image retrieval (ZS-CIR), which takes a textual
modification and a reference image as a query to retrieve a target image
without triplet labeling, has gained more and more attention in data mining.
Current ZS-CIR research mainly relies on the generalization ability of
pre-trained vision-language models, e.g., CLIP. However, the pre-trained
vision-language models and CIR tasks have substantial discrepancies, where the
vision-language models focus on learning the similarities but CIR aims to learn
the modifications of the image guided by text. In this paper, we introduce a
novel unlabeled and pre-trained masked tuning approach, which reduces the gap
between the pre-trained vision-language model and the downstream CIR task.
First, to reduce the gap, we reformulate the contrastive learning of the
vision-language model as the CIR task, where we randomly mask input image
patches to generate $\langle$masked image, text, image$\rangle$ triplet from an
image-text pair. Then, we propose a simple but novel pre-trained masked tuning
method, which uses the text and the masked image to learn the modifications of
the original image. With such a simple design, the proposed masked tuning can
learn to better capture fine-grained text-guided modifications. Extensive
experimental results demonstrate the significant superiority of our approach
over the baseline models on four ZS-CIR datasets, including FashionIQ, CIRR,
CIRCO, and GeneCIS. Our codes are available at
https://github.com/Chen-Junyang-cn/PLI
| [
{
"version": "v1",
"created": "Mon, 13 Nov 2023 02:49:57 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Nov 2023 04:13:37 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 08:28:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Junyang",
""
],
[
"Lai",
"Hanjiang",
""
]
] | TITLE: Pretrain like Your Inference: Masked Tuning Improves Zero-Shot Composed
Image Retrieval
ABSTRACT: Zero-shot composed image retrieval (ZS-CIR), which takes a textual
modification and a reference image as a query to retrieve a target image
without triplet labeling, has gained more and more attention in data mining.
Current ZS-CIR research mainly relies on the generalization ability of
pre-trained vision-language models, e.g., CLIP. However, the pre-trained
vision-language models and CIR tasks have substantial discrepancies, where the
vision-language models focus on learning the similarities but CIR aims to learn
the modifications of the image guided by text. In this paper, we introduce a
novel unlabeled and pre-trained masked tuning approach, which reduces the gap
between the pre-trained vision-language model and the downstream CIR task.
First, to reduce the gap, we reformulate the contrastive learning of the
vision-language model as the CIR task, where we randomly mask input image
patches to generate $\langle$masked image, text, image$\rangle$ triplet from an
image-text pair. Then, we propose a simple but novel pre-trained masked tuning
method, which uses the text and the masked image to learn the modifications of
the original image. With such a simple design, the proposed masked tuning can
learn to better capture fine-grained text-guided modifications. Extensive
experimental results demonstrate the significant superiority of our approach
over the baseline models on four ZS-CIR datasets, including FashionIQ, CIRR,
CIRCO, and GeneCIS. Our codes are available at
https://github.com/Chen-Junyang-cn/PLI
|
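The masked-tuning recipe above builds (masked image, text, image) triplets by randomly masking patches of the reference image in each image-text pair. A generic patch-masking helper is sketched below; the patch size, mask ratio, and zero-fill are assumptions, and the training objective against the unmasked image is omitted.

import torch

def mask_patches(images, patch_size=16, mask_ratio=0.5):
    """Zero out a random subset of non-overlapping patches in a batch of images,
    returning new tensors and leaving the originals untouched."""
    b, c, h, w = images.shape
    keep = torch.rand(b, h // patch_size, w // patch_size) > mask_ratio   # True = keep patch
    mask = keep.repeat_interleave(patch_size, 1).repeat_interleave(patch_size, 2)
    return images * mask.unsqueeze(1).float()

images = torch.rand(4, 3, 224, 224)            # reference images from image-text pairs
masked = mask_patches(images)
print(masked.shape, "fraction zeroed:", float((masked == 0).float().mean()))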
2311.14435 | Georgii Mikriukov | Georgii Mikriukov, Gesina Schwalbe, Korinna Bade | Local Concept Embeddings for Analysis of Concept Distributions in Vision
DNN Feature Spaces | This is the authors accepted manuscript of the article accepted for
publication in the International Journal of Computer Vision (IJCV). The final
version will be available via SpringerLink upon publication. To cite this
work please refer to the final journal version once published | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Insights into the learned latent representations are imperative for verifying
deep neural networks (DNNs) in critical computer vision (CV) tasks. Therefore,
state-of-the-art supervised Concept-based eXplainable Artificial Intelligence
(C-XAI) methods associate user-defined concepts like ``car'' each with a single
vector in the DNN latent space (concept embedding vector). In the case of
concept segmentation, these linearly separate between activation map pixels
belonging to a concept and those belonging to background. Existing methods for
concept segmentation, however, fall short of capturing implicitly learned
sub-concepts (e.g., the DNN might split car into ``proximate car'' and
``distant car''), and overlap of user-defined concepts (e.g., between ``bus''
and ``truck''). In other words, they do not capture the full distribution of
concept representatives in latent space. For the first time, this work shows
that these simplifications are frequently broken and that distribution
information can be particularly useful for understanding DNN-learned notions of
sub-concepts, concept confusion, and concept outliers. To allow exploration of
learned concept distributions, we propose a novel local concept analysis
framework. Instead of optimizing a single global concept vector on the complete
dataset, it generates a local concept embedding (LoCE) vector for each
individual sample. We use the distribution formed by LoCEs to explore the
latent concept distribution by fitting Gaussian mixture models (GMMs),
hierarchical clustering, and concept-level information retrieval and outlier
detection. Despite its context sensitivity, our method's concept segmentation
performance is competitive to global baselines. Analysis results are obtained
on three datasets and six diverse vision DNN architectures, including vision
transformers (ViTs).
| [
{
"version": "v1",
"created": "Fri, 24 Nov 2023 12:22:00 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Nov 2024 12:48:38 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 15:12:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mikriukov",
"Georgii",
""
],
[
"Schwalbe",
"Gesina",
""
],
[
"Bade",
"Korinna",
""
]
] | TITLE: Local Concept Embeddings for Analysis of Concept Distributions in Vision
DNN Feature Spaces
ABSTRACT: Insights into the learned latent representations are imperative for verifying
deep neural networks (DNNs) in critical computer vision (CV) tasks. Therefore,
state-of-the-art supervised Concept-based eXplainable Artificial Intelligence
(C-XAI) methods associate user-defined concepts like ``car'' each with a single
vector in the DNN latent space (concept embedding vector). In the case of
concept segmentation, these linearly separate between activation map pixels
belonging to a concept and those belonging to background. Existing methods for
concept segmentation, however, fall short of capturing implicitly learned
sub-concepts (e.g., the DNN might split car into ``proximate car'' and
``distant car''), and overlap of user-defined concepts (e.g., between ``bus''
and ``truck''). In other words, they do not capture the full distribution of
concept representatives in latent space. For the first time, this work shows
that these simplifications are frequently broken and that distribution
information can be particularly useful for understanding DNN-learned notions of
sub-concepts, concept confusion, and concept outliers. To allow exploration of
learned concept distributions, we propose a novel local concept analysis
framework. Instead of optimizing a single global concept vector on the complete
dataset, it generates a local concept embedding (LoCE) vector for each
individual sample. We use the distribution formed by LoCEs to explore the
latent concept distribution by fitting Gaussian mixture models (GMMs),
hierarchical clustering, and concept-level information retrieval and outlier
detection. Despite its context sensitivity, our method's concept segmentation
performance is competitive to global baselines. Analysis results are obtained
on three datasets and six diverse vision DNN architectures, including vision
transformers (ViTs).
|
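The distribution-level analysis above fits Gaussian mixture models to the per-sample local concept embeddings (LoCEs) to surface sub-concepts and outliers. The sketch below reproduces only that final analysis step on synthetic vectors with scikit-learn; the LoCE optimization itself is not shown, and the dimensions and cluster layout are invented.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for LoCE vectors of one concept (e.g. "car"):
# two sub-concepts such as proximate vs. distant cars, plus a few outliers.
loces = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(80, 8)),
    rng.normal(loc=2.0, scale=0.3, size=(80, 8)),
    rng.normal(loc=6.0, scale=0.3, size=(5, 8)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(loces)
labels = gmm.predict(loces)              # candidate sub-concept assignment
log_density = gmm.score_samples(loces)   # low density suggests a concept outlier
outliers = np.argsort(log_density)[:5]
print(np.bincount(labels), outliers)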
2312.01970 | Chuanneng Sun | Chuanneng Sun, Gueyoung Jung, Tuyen Xuan Tran, Dario Pompili | Cascade Reinforcement Learning with State Space Factorization for
O-RAN-based Traffic Steering | 9 pages, 8 figures | null | null | null | cs.NI cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Open Radio Access Network (O-RAN) architecture empowers intelligent and
automated optimization of the RAN through applications deployed on the RAN
Intelligent Controller (RIC) platform, enabling capabilities beyond what is
achievable with traditional RAN solutions. Within this paradigm, Traffic
Steering (TS) emerges as a pivotal RIC application that focuses on optimizing
cell-level mobility settings in near-real-time, aiming to significantly improve
network spectral efficiency. In this paper, we design a novel TS algorithm
based on a Cascade Reinforcement Learning (CaRL) framework. We propose state
space factorization and policy decomposition to reduce the need for large
models and well-labeled datasets. For each sub-state space, an RL sub-policy
will be trained to learn an optimized mapping onto the action space. To apply
CaRL on new network regions, we propose a knowledge transfer approach to
initialize a new sub-policy based on knowledge learned by the trained policies.
To evaluate CaRL, we build a data-driven and scalable RIC digital twin (DT)
that is modeled using important real-world data, including network
configuration, user geo-distribution, and traffic demand, among others, from a
tier-1 mobile operator in the US. We evaluate CaRL on two DT scenarios
representing two network clusters in two different cities and compare its
performance with the business-as-usual (BAU) policy and other competing
optimization approaches using heuristic and Q-table algorithms. Benchmarking
results show that CaRL performs the best and improves the average
cluster-aggregated downlink throughput over the BAU policy by 24% and 18% in
these two scenarios, respectively.
| [
{
"version": "v1",
"created": "Mon, 4 Dec 2023 15:33:00 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Nov 2024 14:01:29 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 03:33:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sun",
"Chuanneng",
""
],
[
"Jung",
"Gueyoung",
""
],
[
"Tran",
"Tuyen Xuan",
""
],
[
"Pompili",
"Dario",
""
]
] | TITLE: Cascade Reinforcement Learning with State Space Factorization for
O-RAN-based Traffic Steering
ABSTRACT: The Open Radio Access Network (O-RAN) architecture empowers intelligent and
automated optimization of the RAN through applications deployed on the RAN
Intelligent Controller (RIC) platform, enabling capabilities beyond what is
achievable with traditional RAN solutions. Within this paradigm, Traffic
Steering (TS) emerges as a pivotal RIC application that focuses on optimizing
cell-level mobility settings in near-real-time, aiming to significantly improve
network spectral efficiency. In this paper, we design a novel TS algorithm
based on a Cascade Reinforcement Learning (CaRL) framework. We propose state
space factorization and policy decomposition to reduce the need for large
models and well-labeled datasets. For each sub-state space, an RL sub-policy
will be trained to learn an optimized mapping onto the action space. To apply
CaRL on new network regions, we propose a knowledge transfer approach to
initialize a new sub-policy based on knowledge learned by the trained policies.
To evaluate CaRL, we build a data-driven and scalable RIC digital twin (DT)
that is modeled using important real-world data, including network
configuration, user geo-distribution, and traffic demand, among others, from a
tier-1 mobile operator in the US. We evaluate CaRL on two DT scenarios
representing two network clusters in two different cities and compare its
performance with the business-as-usual (BAU) policy and other competing
optimization approaches using heuristic and Q-table algorithms. Benchmarking
results show that CaRL performs the best and improves the average
cluster-aggregated downlink throughput over the BAU policy by 24% and 18% in
these two scenarios, respectively.
|
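The factorization-plus-transfer structure described above (split the state space, train one sub-policy per factor, and seed a new region's policy from an already trained one) can be sketched with tabular Q-learning. The routing rule, state sizes, and thresholds below are invented for illustration and bear no relation to the operator data or the RIC digital twin used in the paper.

import numpy as np

rng = np.random.default_rng(0)

class SubPolicy:
    """Tabular Q-learning sub-policy responsible for one factor of the state space."""
    def __init__(self, n_states, n_actions, q_init=None):
        self.q = np.zeros((n_states, n_actions)) if q_init is None else q_init.copy()

    def act(self, s, eps=0.1):
        if rng.random() < eps:
            return int(rng.integers(self.q.shape[1]))
        return int(self.q[s].argmax())

    def update(self, s, a, r, s_next, alpha=0.1, gamma=0.9):
        self.q[s, a] += alpha * (r + gamma * self.q[s_next].max() - self.q[s, a])

def route(load_level, boundaries=(0.3, 0.7)):
    """State-space factorization: map a raw cell-load value to a sub-state-space index."""
    return int(np.digitize(load_level, boundaries))

# One sub-policy per sub-state space; a new region's policy is initialized from a
# trained one (the knowledge-transfer step) and then fine-tuned on local experience.
sub_policies = [SubPolicy(n_states=10, n_actions=4) for _ in range(3)]
new_region_policy = SubPolicy(n_states=10, n_actions=4, q_init=sub_policies[0].q)
print(route(0.85), new_region_policy.act(s=2))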
2312.07384 | Haoyu Tang | Yupeng Hu, Han Jiang, Hao Liu, Kun Wang, Haoyu Tang, Liqiang Nie | Visual Self-paced Iterative Learning for Unsupervised Temporal Action
Localization | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recently, temporal action localization (TAL) has garnered significant
interest in the information retrieval community. However, existing
supervised/weakly supervised methods are heavily dependent on extensive labeled
temporal boundaries and action categories, which is labor-intensive and
time-consuming. Although some unsupervised methods have utilized the
``iteratively clustering and localization'' paradigm for TAL, they still suffer
from two pivotal impediments: 1) unsatisfactory video clustering confidence,
and 2) unreliable video pseudolabels for model training. To address these
limitations, we present a novel self-paced iterative learning model to enhance
clustering and localization training simultaneously, thereby facilitating more
effective unsupervised TAL. Concretely, we improve the clustering confidence
through exploring the contextual feature-robust visual information. Thereafter,
we design two (constant- and variable-speed) incremental instance learning
strategies for easy-to-hard model training, thus ensuring the reliability of
these video pseudolabels and further improving overall localization
performance. Extensive experiments on two public datasets have substantiated
the superiority of our model over several state-of-the-art competitors.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 16:00:55 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 14:33:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hu",
"Yupeng",
""
],
[
"Jiang",
"Han",
""
],
[
"Liu",
"Hao",
""
],
[
"Wang",
"Kun",
""
],
[
"Tang",
"Haoyu",
""
],
[
"Nie",
"Liqiang",
""
]
] | TITLE: Visual Self-paced Iterative Learning for Unsupervised Temporal Action
Localization
ABSTRACT: Recently, temporal action localization (TAL) has garnered significant
interest in the information retrieval community. However, existing
supervised/weakly supervised methods are heavily dependent on extensive labeled
temporal boundaries and action categories, which is labor-intensive and
time-consuming. Although some unsupervised methods have utilized the
``iteratively clustering and localization'' paradigm for TAL, they still suffer
from two pivotal impediments: 1) unsatisfactory video clustering confidence,
and 2) unreliable video pseudolabels for model training. To address these
limitations, we present a novel self-paced iterative learning model to enhance
clustering and localization training simultaneously, thereby facilitating more
effective unsupervised TAL. Concretely, we improve the clustering confidence
through exploring the contextual feature-robust visual information. Thereafter,
we design two (constant- and variable-speed) incremental instance learning
strategies for easy-to-hard model training, thus ensuring the reliability of
these video pseudolabels and further improving overall localization
performance. Extensive experiments on two public datasets have substantiated
the superiority of our model over several state-of-the-art competitors.
|
2312.10181 | Yongkai Wu | Yucong Dai, Gen Li, Feng Luo, Xiaolong Ma, Yongkai Wu | Integrating Fairness and Model Pruning Through Bi-level Optimization | null | null | null | null | cs.LG cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Deep neural networks have achieved exceptional results across a range of
applications. As the demand for efficient and sparse deep learning models
escalates, the significance of model compression, particularly pruning, is
increasingly recognized. Traditional pruning methods, however, can
unintentionally intensify algorithmic biases, leading to unequal prediction
outcomes in critical applications and raising concerns about the dilemma of
pruning practices and social justice. To tackle this challenge, we introduce a
novel concept of fair model pruning, which involves developing a sparse model
that adheres to fairness criteria. In particular, we propose a framework to
jointly optimize the pruning mask and weight update processes with fairness
constraints. This framework is engineered to compress models that maintain
performance while ensuring fairness in a unified process. To this end, we
formulate the fair pruning problem as a novel constrained bi-level optimization
task and derive efficient and effective solving strategies. We design
experiments across various datasets and scenarios to validate our proposed
method. Our empirical analysis contrasts our framework with several mainstream
pruning strategies, emphasizing our method's superiority in maintaining model
fairness, performance, and efficiency.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 20:08:53 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 01:56:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Dai",
"Yucong",
""
],
[
"Li",
"Gen",
""
],
[
"Luo",
"Feng",
""
],
[
"Ma",
"Xiaolong",
""
],
[
"Wu",
"Yongkai",
""
]
] | TITLE: Integrating Fairness and Model Pruning Through Bi-level Optimization
ABSTRACT: Deep neural networks have achieved exceptional results across a range of
applications. As the demand for efficient and sparse deep learning models
escalates, the significance of model compression, particularly pruning, is
increasingly recognized. Traditional pruning methods, however, can
unintentionally intensify algorithmic biases, leading to unequal prediction
outcomes in critical applications and raising concerns about the dilemma of
pruning practices and social justice. To tackle this challenge, we introduce a
novel concept of fair model pruning, which involves developing a sparse model
that adheres to fairness criteria. In particular, we propose a framework to
jointly optimize the pruning mask and weight update processes with fairness
constraints. This framework is engineered to compress models that maintain
performance while ensuring fairness in a unified process. To this end, we
formulate the fair pruning problem as a novel constrained bi-level optimization
task and derive efficient and effective solving strategies. We design
experiments across various datasets and scenarios to validate our proposed
method. Our empirical analysis contrasts our framework with several mainstream
pruning strategies, emphasizing our method's superiority in maintaining model
fairness, performance, and efficiency.
|
2312.11923 | Xiaomeng Yang | Xiaomeng Yang, Zhi Qiao, Yu Zhou | IPAD: Iterative, Parallel, and Diffusion-based Network for Scene Text
Recognition | Accepted by IJCV | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, scene text recognition has attracted more and more attention due to
its diverse applications. Most state-of-the-art methods adopt an
encoder-decoder framework with the attention mechanism, autoregressively
generating text from left to right. Despite the convincing performance, this
sequential decoding strategy constrains the inference speed. Conversely,
non-autoregressive models provide faster, simultaneous predictions but often
sacrifice accuracy. Although utilizing an explicit language model can improve
performance, it burdens the computational load. Besides, separating linguistic
knowledge from vision information may harm the final prediction. In this paper,
we propose an alternative solution that uses a parallel and iterative decoder
that adopts an easy-first decoding strategy. Furthermore, we regard text
recognition as an image-based conditional text generation task and utilize the
discrete diffusion strategy, ensuring exhaustive exploration of bidirectional
contextual information. Extensive experiments demonstrate that the proposed
approach achieves superior results on the benchmark datasets, including both
Chinese and English text images.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 08:03:19 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Oct 2024 17:54:19 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 17:22:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yang",
"Xiaomeng",
""
],
[
"Qiao",
"Zhi",
""
],
[
"Zhou",
"Yu",
""
]
] | TITLE: IPAD: Iterative, Parallel, and Diffusion-based Network for Scene Text
Recognition
ABSTRACT: Nowadays, scene text recognition has attracted more and more attention due to
its diverse applications. Most state-of-the-art methods adopt an
encoder-decoder framework with the attention mechanism, autoregressively
generating text from left to right. Despite the convincing performance, this
sequential decoding strategy constrains the inference speed. Conversely,
non-autoregressive models provide faster, simultaneous predictions but often
sacrifice accuracy. Although utilizing an explicit language model can improve
performance, it burdens the computational load. Besides, separating linguistic
knowledge from vision information may harm the final prediction. In this paper,
we propose an alternative solution that uses a parallel and iterative decoder
that adopts an easy-first decoding strategy. Furthermore, we regard text
recognition as an image-based conditional text generation task and utilize the
discrete diffusion strategy, ensuring exhaustive exploration of bidirectional
contextual information. Extensive experiments demonstrate that the proposed
approach achieves superior results on the benchmark datasets, including both
Chinese and English text images.
|
2402.01929 | Menghua Wu | Menghua Wu, Yujia Bao, Regina Barzilay, Tommi Jaakkola | Sample, estimate, aggregate: A recipe for causal discovery foundation
models | Our code is available at https://github.com/rmwu/sea | Transactions on Machine Learning Research (03/2025) | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Causal discovery, the task of inferring causal structure from data, has the
potential to uncover mechanistic insights from biological experiments,
especially those involving perturbations. However, causal discovery algorithms
over larger sets of variables tend to be brittle against misspecification or
when data are limited. For example, single-cell transcriptomics measures
thousands of genes, but the nature of their relationships is not known, and
there may be as few as tens of cells per intervention setting. To mitigate
these challenges, we propose a foundation model-inspired approach: a supervised
model trained on large-scale, synthetic data to predict causal graphs from
summary statistics -- like the outputs of classical causal discovery algorithms
run over subsets of variables and other statistical hints like inverse
covariance. Our approach is enabled by the observation that typical errors in
the outputs of a discovery algorithm remain comparable across datasets.
Theoretically, we show that the model architecture is well-specified, in the
sense that it can recover a causal graph consistent with graphs over subsets.
Empirically, we train the model to be robust to misspecification and
distribution shift using diverse datasets. Experiments on biological and
synthetic data confirm that this model generalizes well beyond its training
set, runs on graphs with hundreds of variables in seconds, and can be easily
adapted to different underlying data assumptions.
| [
{
"version": "v1",
"created": "Fri, 2 Feb 2024 21:57:58 GMT"
},
{
"version": "v2",
"created": "Thu, 23 May 2024 13:09:20 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 19:27:51 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wu",
"Menghua",
""
],
[
"Bao",
"Yujia",
""
],
[
"Barzilay",
"Regina",
""
],
[
"Jaakkola",
"Tommi",
""
]
] | TITLE: Sample, estimate, aggregate: A recipe for causal discovery foundation
models
ABSTRACT: Causal discovery, the task of inferring causal structure from data, has the
potential to uncover mechanistic insights from biological experiments,
especially those involving perturbations. However, causal discovery algorithms
over larger sets of variables tend to be brittle against misspecification or
when data are limited. For example, single-cell transcriptomics measures
thousands of genes, but the nature of their relationships is not known, and
there may be as few as tens of cells per intervention setting. To mitigate
these challenges, we propose a foundation model-inspired approach: a supervised
model trained on large-scale, synthetic data to predict causal graphs from
summary statistics -- like the outputs of classical causal discovery algorithms
run over subsets of variables and other statistical hints like inverse
covariance. Our approach is enabled by the observation that typical errors in
the outputs of a discovery algorithm remain comparable across datasets.
Theoretically, we show that the model architecture is well-specified, in the
sense that it can recover a causal graph consistent with graphs over subsets.
Empirically, we train the model to be robust to misspecification and
distribution shift using diverse datasets. Experiments on biological and
synthetic data confirm that this model generalizes well beyond its training
set, runs on graphs with hundreds of variables in seconds, and can be easily
adapted to different underlying data assumptions.
|
2402.06190 | Amin Karimi Monsefi | Amin Karimi Monsefi, Payam Karisani, Mengxi Zhou, Stacey Choi, Nathan
Doble, Heng Ji, Srinivasan Parthasarathy, Rajiv Ramnath | Masked LoGoNet: Fast and Accurate 3D Image Analysis for Medical Domain | Accepted to KDD 2024 | null | 10.1145/3637528.3672069 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard modern machine-learning-based imaging methods have faced challenges
in medical applications due to the high cost of dataset construction and,
thereby, the limited labeled training data available. Additionally, upon
deployment, these methods are usually used to process a large volume of data on
a daily basis, imposing a high maintenance cost on medical facilities. In this
paper, we introduce a new neural network architecture, termed LoGoNet, with a
tailored self-supervised learning (SSL) method to mitigate such challenges.
LoGoNet integrates a novel feature extractor within a U-shaped architecture,
leveraging Large Kernel Attention (LKA) and a dual encoding strategy to capture
both long-range and short-range feature dependencies adeptly. This is in
contrast to existing methods that rely on increasing network capacity to
enhance feature extraction. This combination of novel techniques in our model
is especially beneficial in medical image segmentation, given the difficulty of
learning intricate and often irregular body organ shapes, such as the spleen.
As a complement, we propose a novel SSL method tailored for 3D images to
compensate for the lack of large labeled datasets. The method combines masking
and contrastive learning techniques within a multi-task learning framework and
is compatible with both Vision Transformer (ViT) and CNN-based models. We
demonstrate the efficacy of our methods in numerous tasks across two standard
datasets (i.e., BTCV and MSD). Benchmark comparisons with eight
state-of-the-art models highlight LoGoNet's superior performance in both
inference time and accuracy.
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2024 05:06:58 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 03:59:35 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 21:25:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Monsefi",
"Amin Karimi",
""
],
[
"Karisani",
"Payam",
""
],
[
"Zhou",
"Mengxi",
""
],
[
"Choi",
"Stacey",
""
],
[
"Doble",
"Nathan",
""
],
[
"Ji",
"Heng",
""
],
[
"Parthasarathy",
"Srinivasan",
""
],
[
"Ramnath",
"Rajiv",
""
]
] | TITLE: Masked LoGoNet: Fast and Accurate 3D Image Analysis for Medical Domain
ABSTRACT: Standard modern machine-learning-based imaging methods have faced challenges
in medical applications due to the high cost of dataset construction and,
thereby, the limited labeled training data available. Additionally, upon
deployment, these methods are usually used to process a large volume of data on
a daily basis, imposing a high maintenance cost on medical facilities. In this
paper, we introduce a new neural network architecture, termed LoGoNet, with a
tailored self-supervised learning (SSL) method to mitigate such challenges.
LoGoNet integrates a novel feature extractor within a U-shaped architecture,
leveraging Large Kernel Attention (LKA) and a dual encoding strategy to capture
both long-range and short-range feature dependencies adeptly. This is in
contrast to existing methods that rely on increasing network capacity to
enhance feature extraction. This combination of novel techniques in our model
is especially beneficial in medical image segmentation, given the difficulty of
learning intricate and often irregular body organ shapes, such as the spleen.
As a complement, we propose a novel SSL method tailored for 3D images to
compensate for the lack of large labeled datasets. The method combines masking
and contrastive learning techniques within a multi-task learning framework and
is compatible with both Vision Transformer (ViT) and CNN-based models. We
demonstrate the efficacy of our methods in numerous tasks across two standard
datasets (i.e., BTCV and MSD). Benchmark comparisons with eight
state-of-the-art models highlight LoGoNet's superior performance in both
inference time and accuracy.
|
2402.19059 | Jiahao Zhou | Jiahao Zhou, Chen Long, Yue Xie, Jialiang Wang, Conglang Zhang, Boheng
Li, Haiping Wang, Zhe Chen, Zhen Dong | WHU-Synthetic: A Synthetic Perception Dataset for 3-D Multitask Model
Research | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end models capable of handling multiple sub-tasks in parallel have
become a new trend, thereby presenting significant challenges and opportunities
for the integration of multiple tasks within the domain of 3D vision. The
limitations of 3D data acquisition conditions have not only restricted the
exploration of many innovative research problems but have also caused existing
3D datasets to predominantly focus on single tasks. This has resulted in a lack
of systematic approaches and theoretical frameworks for 3D multi-task learning,
with most efforts merely serving as auxiliary support to the primary task. In
this paper, we introduce WHU-Synthetic, a large-scale 3D synthetic perception
dataset designed for multi-task learning, from the initial data augmentation
(upsampling and depth completion), through scene understanding (segmentation),
to macro-level tasks (place recognition and 3D reconstruction). Collected in
the same environmental domain, we ensure inherent alignment across sub-tasks to
construct multi-task models without separate training methods. Besides, we
implement several novel settings, making it possible to realize certain ideas
that are difficult to achieve in real-world scenarios. This supports more
adaptive and robust multi-task perception tasks, such as sampling on city-level
models, providing point clouds with different densities, and simulating
temporal changes. Using our dataset, we conduct several experiments to
investigate mutual benefits between sub-tasks, revealing new observations,
challenges, and opportunities for future research. The dataset is accessible at
https://github.com/WHU-USI3DV/WHU-Synthetic.
| [
{
"version": "v1",
"created": "Thu, 29 Feb 2024 11:38:44 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Mar 2024 07:18:18 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 01:12:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhou",
"Jiahao",
""
],
[
"Long",
"Chen",
""
],
[
"Xie",
"Yue",
""
],
[
"Wang",
"Jialiang",
""
],
[
"Zhang",
"Conglang",
""
],
[
"Li",
"Boheng",
""
],
[
"Wang",
"Haiping",
""
],
[
"Chen",
"Zhe",
""
],
[
"Dong",
"Zhen",
""
]
] | TITLE: WHU-Synthetic: A Synthetic Perception Dataset for 3-D Multitask Model
Research
ABSTRACT: End-to-end models capable of handling multiple sub-tasks in parallel have
become a new trend, thereby presenting significant challenges and opportunities
for the integration of multiple tasks within the domain of 3D vision. The
limitations of 3D data acquisition conditions have not only restricted the
exploration of many innovative research problems but have also caused existing
3D datasets to predominantly focus on single tasks. This has resulted in a lack
of systematic approaches and theoretical frameworks for 3D multi-task learning,
with most efforts merely serving as auxiliary support to the primary task. In
this paper, we introduce WHU-Synthetic, a large-scale 3D synthetic perception
dataset designed for multi-task learning, from the initial data augmentation
(upsampling and depth completion), through scene understanding (segmentation),
to macro-level tasks (place recognition and 3D reconstruction). Collected in
the same environmental domain, we ensure inherent alignment across sub-tasks to
construct multi-task models without separate training methods. Besides, we
implement several novel settings, making it possible to realize certain ideas
that are difficult to achieve in real-world scenarios. This supports more
adaptive and robust multi-task perception tasks, such as sampling on city-level
models, providing point clouds with different densities, and simulating
temporal changes. Using our dataset, we conduct several experiments to
investigate mutual benefits between sub-tasks, revealing new observations,
challenges, and opportunities for future research. The dataset is accessible at
https://github.com/WHU-USI3DV/WHU-Synthetic.
|
2403.02308 | Yuchen Duan | Yuchen Duan, Weiyun Wang, Zhe Chen, Xizhou Zhu, Lewei Lu, Tong Lu, Yu
Qiao, Hongsheng Li, Jifeng Dai, Wenhai Wang | Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like
Architectures | Code is released at \url{https://github.com/OpenGVLab/Vision-RWKV} | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformers have revolutionized computer vision and natural language
processing, but their high computational complexity limits their application in
high-resolution image processing and long-context analysis. This paper
introduces Vision-RWKV (VRWKV), a model adapted from the RWKV model used in the
NLP field with necessary modifications for vision tasks. Similar to the Vision
Transformer (ViT), our model is designed to efficiently handle sparse inputs
and demonstrate robust global processing capabilities, while also scaling up
effectively, accommodating both large-scale parameters and extensive datasets.
Its distinctive advantage lies in its reduced spatial aggregation complexity,
which renders it exceptionally adept at processing high-resolution images
seamlessly, eliminating the necessity for windowing operations. Our evaluations
demonstrate that VRWKV surpasses ViT's performance in image classification and
has significantly faster speeds and lower memory usage when processing
high-resolution inputs. In dense prediction tasks, it outperforms window-based
models, maintaining comparable speeds. These results highlight VRWKV's
potential as a more efficient alternative for visual perception tasks. Code is
released at https://github.com/OpenGVLab/Vision-RWKV.
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2024 18:46:20 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Mar 2024 15:43:08 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 06:14:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Duan",
"Yuchen",
""
],
[
"Wang",
"Weiyun",
""
],
[
"Chen",
"Zhe",
""
],
[
"Zhu",
"Xizhou",
""
],
[
"Lu",
"Lewei",
""
],
[
"Lu",
"Tong",
""
],
[
"Qiao",
"Yu",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Wang",
"Wenhai",
""
]
] | TITLE: Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like
Architectures
ABSTRACT: Transformers have revolutionized computer vision and natural language
processing, but their high computational complexity limits their application in
high-resolution image processing and long-context analysis. This paper
introduces Vision-RWKV (VRWKV), a model adapted from the RWKV model used in the
NLP field with necessary modifications for vision tasks. Similar to the Vision
Transformer (ViT), our model is designed to efficiently handle sparse inputs
and demonstrate robust global processing capabilities, while also scaling up
effectively, accommodating both large-scale parameters and extensive datasets.
Its distinctive advantage lies in its reduced spatial aggregation complexity,
which renders it exceptionally adept at processing high-resolution images
seamlessly, eliminating the necessity for windowing operations. Our evaluations
demonstrate that VRWKV surpasses ViT's performance in image classification and
has significantly faster speeds and lower memory usage when processing
high-resolution inputs. In dense prediction tasks, it outperforms window-based
models, maintaining comparable speeds. These results highlight VRWKV's
potential as a more efficient alternative for visual perception tasks. Code is
released at https://github.com/OpenGVLab/Vision-RWKV.
|
2404.05272 | Jie Liu | Jie Liu, Tao Feng, Yan Jiang, Peizheng Wang, Chao Wu | Pricing Strategies for Different Accuracy Models from the Same Dataset
Based on Generalized Hotelling's Law | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a scenario where a seller possesses a dataset $D$ and trains it
into models of varying accuracies for sale in the market. Due to the
reproducibility of data, the dataset can be reused to train models with
different accuracies, and the training cost is independent of the sales volume.
These two characteristics lead to fundamental differences between the data
trading market and traditional trading markets. The introduction of different
models into the market inevitably gives rise to competition. However, due to
the varying accuracies of these models, traditional multi-oligopoly games are
not applicable. We consider a generalized Hotelling's law, where the accuracy
of the models is abstracted as distance. Buyers choose to purchase models based
on a trade-off between accuracy and price, while sellers determine their
pricing strategies based on the market's demand. We present two pricing
strategies: static pricing strategy and dynamic pricing strategy, and we focus
on the static pricing strategy. We propose static pricing mechanisms based on
various market conditions and provide an example. Finally, we demonstrate that
our pricing strategy remains robust in the context of incomplete information
games.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2024 08:02:18 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 08:49:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Jie",
""
],
[
"Feng",
"Tao",
""
],
[
"Jiang",
"Yan",
""
],
[
"Wang",
"Peizheng",
""
],
[
"Wu",
"Chao",
""
]
] | TITLE: Pricing Strategies for Different Accuracy Models from the Same Dataset
Based on Generalized Hotelling's Law
ABSTRACT: We consider a scenario where a seller possesses a dataset $D$ and trains it
into models of varying accuracies for sale in the market. Due to the
reproducibility of data, the dataset can be reused to train models with
different accuracies, and the training cost is independent of the sales volume.
These two characteristics lead to fundamental differences between the data
trading market and traditional trading markets. The introduction of different
models into the market inevitably gives rise to competition. However, due to
the varying accuracies of these models, traditional multi-oligopoly games are
not applicable. We consider a generalized Hotelling's law, where the accuracy
of the models is abstracted as distance. Buyers choose to purchase models based
on a trade-off between accuracy and price, while sellers determine their
pricing strategies based on the market's demand. We present two pricing
strategies: static pricing strategy and dynamic pricing strategy, and we focus
on the static pricing strategy. We propose static pricing mechanisms based on
various market conditions and provide an example. Finally, we demonstrate that
our pricing strategy remains robust in the context of incomplete information
games.
|
2404.10690 | Anastasiia Fadeeva | Philippe Gervais, Anastasiia Fadeeva, Andrii Maksai | MathWriting: A Dataset For Handwritten Mathematical Expression
Recognition | null | null | null | null | cs.CV cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recognition of handwritten mathematical expressions allows the transfer of
scientific notes into their digital form. It facilitates the sharing,
searching, and preservation of scientific information. We introduce
MathWriting, the largest online handwritten mathematical expression dataset to
date. It consists of 230k human-written samples and an additional 400k
synthetic ones. This dataset can also be used in its rendered form for offline
HME recognition. One MathWriting sample consists of a formula written on a
touch screen and a corresponding LaTeX expression. We also provide a normalized
version of the LaTeX expressions to simplify the recognition task and enhance the
result quality. We provide baseline performance of standard models like OCR and
CTC Transformer as well as Vision-Language Models like PaLI on the dataset. The
dataset together with an example colab is accessible on Github.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 16:10:23 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 12:18:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gervais",
"Philippe",
""
],
[
"Fadeeva",
"Anastasiia",
""
],
[
"Maksai",
"Andrii",
""
]
] | TITLE: MathWriting: A Dataset For Handwritten Mathematical Expression
Recognition
ABSTRACT: Recognition of handwritten mathematical expressions allows the transfer of
scientific notes into their digital form. It facilitates the sharing,
searching, and preservation of scientific information. We introduce
MathWriting, the largest online handwritten mathematical expression dataset to
date. It consists of 230k human-written samples and an additional 400k
synthetic ones. This dataset can also be used in its rendered form for offline
HME recognition. One MathWriting sample consists of a formula written on a
touch screen and a corresponding LaTeX expression. We also provide a normalized
version of the LaTeX expressions to simplify the recognition task and enhance the
result quality. We provide baseline performance of standard models like OCR and
CTC Transformer as well as Vision-Language Models like PaLI on the dataset. The
dataset together with an example colab is accessible on Github.
|
2404.14657 | Abhishek Aich | Abhishek Aich, Yumin Suh, Samuel Schulter, Manmohan Chandraker | Progressive Token Length Scaling in Transformer Encoders for Efficient
Universal Segmentation | Accepted to ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A powerful architecture for universal segmentation relies on transformers
that encode multi-scale image features and decode object queries into mask
predictions. With efficiency being a high priority for scaling such models, we
observed that the state-of-the-art method Mask2Former uses 50% of its compute
only on the transformer encoder. This is due to the retention of a full-length
token-level representation of all backbone feature scales at each encoder
layer. With this observation, we propose a strategy termed PROgressive Token
Length SCALing for Efficient transformer encoders (PRO-SCALE) that can be
plugged into the Mask2Former segmentation architecture to significantly reduce
the computational cost. The underlying principle of PRO-SCALE is: progressively
scale the length of the tokens with the layers of the encoder. This allows
PRO-SCALE to reduce computations by a large margin with minimal sacrifice in
performance (~52% encoder and ~27% overall GFLOPs reduction with no drop in
performance on the COCO dataset). Experiments conducted on public benchmarks
demonstrate PRO-SCALE's flexibility in architectural configurations and
exhibit potential for extension beyond the settings of segmentation tasks to
encompass object detection. Code here:
https://github.com/abhishekaich27/proscale-pytorch
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 01:34:20 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jan 2025 00:01:50 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 01:58:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Aich",
"Abhishek",
""
],
[
"Suh",
"Yumin",
""
],
[
"Schulter",
"Samuel",
""
],
[
"Chandraker",
"Manmohan",
""
]
] | TITLE: Progressive Token Length Scaling in Transformer Encoders for Efficient
Universal Segmentation
ABSTRACT: A powerful architecture for universal segmentation relies on transformers
that encode multi-scale image features and decode object queries into mask
predictions. With efficiency being a high priority for scaling such models, we
observed that the state-of-the-art method Mask2Former uses 50% of its compute
only on the transformer encoder. This is due to the retention of a full-length
token-level representation of all backbone feature scales at each encoder
layer. With this observation, we propose a strategy termed PROgressive Token
Length SCALing for Efficient transformer encoders (PRO-SCALE) that can be
plugged into the Mask2Former segmentation architecture to significantly reduce
the computational cost. The underlying principle of PRO-SCALE is: progressively
scale the length of the tokens with the layers of the encoder. This allows
PRO-SCALE to reduce computations by a large margin with minimal sacrifice in
performance (~52% encoder and ~27% overall GFLOPs reduction with no drop in
performance on the COCO dataset). Experiments conducted on public benchmarks
demonstrate PRO-SCALE's flexibility in architectural configurations and
exhibit potential for extension beyond the settings of segmentation tasks to
encompass object detection. Code here:
https://github.com/abhishekaich27/proscale-pytorch
|
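The PRO-SCALE record above describes progressively shortening the token sequence across encoder layers; the linked repository holds the official code. The PyTorch sketch below is only a rough illustration of that idea under my own assumptions: the class name, the keep_ratio of 0.5, and the use of average pooling to shrink the sequence are not taken from the paper.

```python
# Rough sketch of progressive token-length scaling (my simplification, not the
# released PRO-SCALE code): every encoder layer is followed by a pooling step
# that shortens the token sequence, so deeper layers attend over fewer tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveLengthEncoder(nn.Module):
    def __init__(self, dim=256, num_layers=4, nhead=8, keep_ratio=0.5):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
            for _ in range(num_layers)
        )
        self.keep_ratio = keep_ratio  # assumed fraction of tokens kept after each layer

    def forward(self, tokens):  # tokens: (batch, length, dim)
        for layer in self.layers:
            tokens = layer(tokens)
            new_len = max(1, int(tokens.size(1) * self.keep_ratio))
            # Pool along the sequence dimension to progressively shorten it.
            tokens = F.adaptive_avg_pool1d(tokens.transpose(1, 2), new_len).transpose(1, 2)
        return tokens

x = torch.randn(2, 1024, 256)               # e.g. flattened multi-scale image tokens
print(ProgressiveLengthEncoder()(x).shape)  # torch.Size([2, 64, 256])
```

Because self-attention cost grows quadratically with sequence length, halving the token count at each layer concentrates the savings in the deeper layers while the first layer still sees the full-length token sequence.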
2404.15458 | Darui Lu | Darui Lu, Yang Deng, Jordan M. Malof and Willie J. Padilla | Learning Electromagnetic Metamaterial Physics With ChatGPT | null | null | null | null | physics.optics cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) such as ChatGPT, Gemini, LlaMa, and Claude are
trained on massive quantities of text parsed from the internet and have shown a
remarkable ability to respond to complex prompts in a manner often
indistinguishable from humans. For all-dielectric metamaterials consisting of
unit cells with four elliptical resonators, we present an LLM fine-tuned on up
to 40,000 data points that can predict the absorptivity spectrum given a text prompt
that only specifies the metasurface geometry. Results are compared to
conventional machine learning approaches including feed-forward neural
networks, random forest, linear regression, and K-nearest neighbor (KNN).
Remarkably, the fine-tuned LLM (FT-LLM) achieves performance comparable
to that of a deep neural network across large dataset sizes. We also explore inverse
problems by asking the LLM to predict the geometry necessary to achieve a
desired spectrum. LLMs possess several advantages over humans that may give
them benefits for research, including the ability to process enormous amounts
of data, find hidden patterns in data, and operate in higher-dimensional
spaces. This suggests they may be able to leverage their general knowledge of
the world to learn faster from training data than traditional models, making
them valuable tools for research and analysis.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 19:05:42 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Feb 2025 21:47:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Darui",
""
],
[
"Deng",
"Yang",
""
],
[
"Malof",
"Jordan M.",
""
],
[
"Padilla",
"Willie J.",
""
]
] | TITLE: Learning Electromagnetic Metamaterial Physics With ChatGPT
ABSTRACT: Large language models (LLMs) such as ChatGPT, Gemini, LlaMa, and Claude are
trained on massive quantities of text parsed from the internet and have shown a
remarkable ability to respond to complex prompts in a manner often
indistinguishable from humans. For all-dielectric metamaterials consisting of
unit cells with four elliptical resonators, we present an LLM fine-tuned on up
to 40,000 data points that can predict the absorptivity spectrum given a text prompt
that only specifies the metasurface geometry. Results are compared to
conventional machine learning approaches including feed-forward neural
networks, random forest, linear regression, and K-nearest neighbor (KNN).
Remarkably, the fine-tuned LLM (FT-LLM) achieves performance comparable
to that of a deep neural network across large dataset sizes. We also explore inverse
problems by asking the LLM to predict the geometry necessary to achieve a
desired spectrum. LLMs possess several advantages over humans that may give
them benefits for research, including the ability to process enormous amounts
of data, find hidden patterns in data, and operate in higher-dimensional
spaces. This suggests they may be able to leverage their general knowledge of
the world to learn faster from training data than traditional models, making
them valuable tools for research and analysis.
|
2405.01272 | Xinquan Huang | Xinquan Huang, Tariq Alkhalifah | Learned frequency-domain scattered wavefield solutions using neural
operators | Geophysical Journal International accepted | null | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving the wave equation is essential to seismic imaging and inversion. The
numerical solution of the Helmholtz equation, fundamental to this process,
often encounters significant computational and memory challenges. We propose an
innovative frequency-domain scattered wavefield modeling method employing
neural operators adaptable to diverse seismic velocities. The source location
and frequency information are embedded within the input background wavefield,
enhancing the neural operator's ability to process source configurations
effectively. In addition, we utilize a single reference frequency, which
enables scaling from larger-domain forward modeling to higher-frequency
scenarios, thereby improving our method's accuracy and generalization
capabilities for larger-domain applications. Several tests on the OpenFWI
datasets and realistic velocity models validate the accuracy and efficacy of
our method as a surrogate model, demonstrating its potential to address the
computational and memory limitations of numerical methods.
| [
{
"version": "v1",
"created": "Thu, 2 May 2024 13:30:59 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Aug 2024 05:19:01 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 17:53:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Huang",
"Xinquan",
""
],
[
"Alkhalifah",
"Tariq",
""
]
] | TITLE: Learned frequency-domain scattered wavefield solutions using neural
operators
ABSTRACT: Solving the wave equation is essential to seismic imaging and inversion. The
numerical solution of the Helmholtz equation, fundamental to this process,
often encounters significant computational and memory challenges. We propose an
innovative frequency-domain scattered wavefield modeling method employing
neural operators adaptable to diverse seismic velocities. The source location
and frequency information are embedded within the input background wavefield,
enhancing the neural operator's ability to process source configurations
effectively. In addition, we utilize a single reference frequency, which
enables scaling from larger-domain forward modeling to higher-frequency
scenarios, thereby improving our method's accuracy and generalization
capabilities for larger-domain applications. Several tests on the OpenFWI
datasets and realistic velocity models validate the accuracy and efficacy of
our method as a surrogate model, demonstrating its potential to address the
computational and memory limitations of numerical methods.
|
2405.06851 | Francesca Mignacco | Francesca Mignacco, Chi-Ning Chou, SueYeon Chung | Nonlinear classification of neural manifolds with contextual information | 7 pages, 7 figures | null | null | null | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how neural systems efficiently process information through
distributed representations is a fundamental challenge at the interface of
neuroscience and machine learning. Recent approaches analyze the statistical
and geometrical attributes of neural representations as population-level
mechanistic descriptors of task implementation. In particular, manifold
capacity has emerged as a promising framework linking population geometry to
the separability of neural manifolds. However, this metric has been limited to
linear readouts. To address this limitation, we introduce a theoretical
framework that leverages latent directions in input space, which can be related
to contextual information. We derive an exact formula for the context-dependent
manifold capacity that depends on manifold geometry and context correlations,
and validate it on synthetic and real data. Our framework's increased
expressivity captures representation reformatting in deep networks at early
stages of the layer hierarchy, previously inaccessible to analysis. As
context-dependent nonlinearity is ubiquitous in neural systems, our data-driven
and theoretically grounded approach promises to elucidate context-dependent
computation across scales, datasets, and models.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 23:37:31 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 21:32:47 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mignacco",
"Francesca",
""
],
[
"Chou",
"Chi-Ning",
""
],
[
"Chung",
"SueYeon",
""
]
] | TITLE: Nonlinear classification of neural manifolds with contextual information
ABSTRACT: Understanding how neural systems efficiently process information through
distributed representations is a fundamental challenge at the interface of
neuroscience and machine learning. Recent approaches analyze the statistical
and geometrical attributes of neural representations as population-level
mechanistic descriptors of task implementation. In particular, manifold
capacity has emerged as a promising framework linking population geometry to
the separability of neural manifolds. However, this metric has been limited to
linear readouts. To address this limitation, we introduce a theoretical
framework that leverages latent directions in input space, which can be related
to contextual information. We derive an exact formula for the context-dependent
manifold capacity that depends on manifold geometry and context correlations,
and validate it on synthetic and real data. Our framework's increased
expressivity captures representation reformatting in deep networks at early
stages of the layer hierarchy, previously inaccessible to analysis. As
context-dependent nonlinearity is ubiquitous in neural systems, our data-driven
and theoretically grounded approach promises to elucidate context-dependent
computation across scales, datasets, and models.
|
2405.11067 | Felix Ott | Nisha L. Raichur, Lucas Heublein, Tobias Feigl, Alexander R\"ugamer,
Christopher Mutschler, Felix Ott | Bayesian Learning-driven Prototypical Contrastive Loss for
Class-Incremental Learning | 27 pages, 22 figures | Transactions on Machine Learning Research (TMLR), March 2025,
https://openreview.net/forum?id=dNWaTuKV9M | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primary objective of methods in continual learning is to learn tasks in a
sequential manner over time (sometimes from a stream of data), while mitigating
the detrimental phenomenon of catastrophic forgetting. This paper proposes a
method to learn an effective representation between previous and newly
encountered class prototypes. We propose a prototypical network with a Bayesian
learning-driven contrastive loss (BLCL), tailored specifically for
class-incremental learning scenarios. We introduce a contrastive loss that
incorporates novel classes into the latent representation by reducing
intra-class and increasing inter-class distance. Our approach dynamically
adapts the balance between the cross-entropy and contrastive loss functions
with a Bayesian learning technique. Experiments conducted on the
CIFAR-10, CIFAR-100, and ImageNet100 datasets for image classification and
images of a GNSS-based dataset for interference classification validate the
efficacy of our method, showcasing its superiority over existing
state-of-the-art approaches. Git:
https://gitlab.cc-asp.fraunhofer.de/darcy_gnss/gnss_class_incremental_learning
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 19:49:02 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Jul 2024 16:14:33 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 13:04:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Raichur",
"Nisha L.",
""
],
[
"Heublein",
"Lucas",
""
],
[
"Feigl",
"Tobias",
""
],
[
"Rügamer",
"Alexander",
""
],
[
"Mutschler",
"Christopher",
""
],
[
"Ott",
"Felix",
""
]
] | TITLE: Bayesian Learning-driven Prototypical Contrastive Loss for
Class-Incremental Learning
ABSTRACT: The primary objective of methods in continual learning is to learn tasks in a
sequential manner over time (sometimes from a stream of data), while mitigating
the detrimental phenomenon of catastrophic forgetting. This paper proposes a
method to learn an effective representation between previous and newly
encountered class prototypes. We propose a prototypical network with a Bayesian
learning-driven contrastive loss (BLCL), tailored specifically for
class-incremental learning scenarios. We introduce a contrastive loss that
incorporates novel classes into the latent representation by reducing
intra-class and increasing inter-class distance. Our approach dynamically
adapts the balance between the cross-entropy and contrastive loss functions
with a Bayesian learning technique. Experiments conducted on the
CIFAR-10, CIFAR-100, and ImageNet100 datasets for image classification and
images of a GNSS-based dataset for interference classification validate the
efficacy of our method, showcasing its superiority over existing
state-of-the-art approaches. Git:
https://gitlab.cc-asp.fraunhofer.de/darcy_gnss/gnss_class_incremental_learning
|
2405.13073 | Edward Hall\'e-Hannan | Edward Hall\'e-Hannan, Charles Audet, Youssef Diouane, S\'ebastien Le
Digabel, Paul Saves | A distance for mixed-variable and hierarchical domains with meta
variables | 29 pages (without references), 12 figures, 5 tables, data and scripts
available at https://github.com/bbopt/graph_distance | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | Heterogeneous datasets emerge in various machine learning and optimization
applications that feature different input sources, types or formats. Most
models or methods do not natively tackle heterogeneity. Hence, such datasets
are often partitioned into smaller and simpler ones, which may limit the
generalizability or performance, especially when data is limited. The first
main contribution of this work is a modeling framework that generalizes
hierarchical, tree-structured, variable-size or conditional search frameworks.
The framework models mixed-variable and hierarchical domains in which variables
may be continuous, integer, or categorical, with some identified as meta when
they influence the structure of the problem. The second main contribution is a
novel distance that compares any pair of mixed-variable points that do not
share the same variables, allowing the use of whole heterogeneous datasets that
reside in mixed-variable and hierarchical domains with meta variables. The
contributions are illustrated through regression and classification experiments
using simple distance-based models applied to datasets of hyperparameters with
corresponding performance scores.
| [
{
"version": "v1",
"created": "Mon, 20 May 2024 23:11:03 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Aug 2024 20:04:32 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 15:41:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hallé-Hannan",
"Edward",
""
],
[
"Audet",
"Charles",
""
],
[
"Diouane",
"Youssef",
""
],
[
"Digabel",
"Sébastien Le",
""
],
[
"Saves",
"Paul",
""
]
] | TITLE: A distance for mixed-variable and hierarchical domains with meta
variables
ABSTRACT: Heterogeneous datasets emerge in various machine learning and optimization
applications that feature different input sources, types or formats. Most
models or methods do not natively tackle heterogeneity. Hence, such datasets
are often partitioned into smaller and simpler ones, which may limit the
generalizability or performance, especially when data is limited. The first
main contribution of this work is a modeling framework that generalizes
hierarchical, tree-structured, variable-size or conditional search frameworks.
The framework models mixed-variable and hierarchical domains in which variables
may be continuous, integer, or categorical, with some identified as meta when
they influence the structure of the problem. The second main contribution is a
novel distance that compares any pair of mixed-variable points that do not
share the same variables, allowing the use of whole heterogeneous datasets that
reside in mixed-variable and hierarchical domains with meta variables. The
contributions are illustrated through regression and classification experiments
using simple distance-based models applied to datasets of hyperparameters with
corresponding performance scores.
|
2405.13362 | Danial Ebrat | Danial Ebrat, Eli Paradalis, Luis Rueda | Lusifer: LLM-based User SImulated Feedback Environment for online
Recommender systems | null | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) recommender systems often rely on static datasets
that fail to capture the fluid, ever changing nature of user preferences in
real-world scenarios. Meanwhile, generative AI techniques have emerged as
powerful tools for creating synthetic data, including user profiles and
behaviors. Recognizing this potential, we introduce Lusifer, an LLM-based
simulation environment designed to generate dynamic, realistic user feedback
for RL-based recommender training. In Lusifer, user profiles are incrementally
updated at each interaction step, with Large Language Models (LLMs) providing
transparent explanations of how and why preferences evolve. We focus on the
MovieLens dataset, extracting only the last 40 interactions for each user, to
emphasize recent behavior. By processing textual metadata (such as movie
overviews and tags) Lusifer creates more context aware user states and
simulates feedback on new items, including those with limited or no prior
ratings. This approach reduces reliance on extensive historical data and
facilitates cold start scenario handling and adaptation to out of distribution
cases. Our experiments compare Lusifer with traditional collaborative filtering
models, revealing that while Lusifer can be comparable in predictive accuracy,
it excels at capturing dynamic user responses and yielding explainable results
at every step. These qualities highlight its potential as a scalable, ethically
sound alternative to live user experiments, supporting iterative and
user-centric evaluations of RL-based recommender strategies. Looking ahead, we
envision Lusifer serving as a foundational tool for exploring generative
AI-driven user simulations, enabling more adaptive and personalized
recommendation pipelines under real world constraints.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 05:43:15 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 17:07:41 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Dec 2024 14:44:30 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Mar 2025 14:45:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ebrat",
"Danial",
""
],
[
"Paradalis",
"Eli",
""
],
[
"Rueda",
"Luis",
""
]
] | TITLE: Lusifer: LLM-based User SImulated Feedback Environment for online
Recommender systems
ABSTRACT: Reinforcement learning (RL) recommender systems often rely on static datasets
that fail to capture the fluid, ever changing nature of user preferences in
real-world scenarios. Meanwhile, generative AI techniques have emerged as
powerful tools for creating synthetic data, including user profiles and
behaviors. Recognizing this potential, we introduce Lusifer, an LLM-based
simulation environment designed to generate dynamic, realistic user feedback
for RL-based recommender training. In Lusifer, user profiles are incrementally
updated at each interaction step, with Large Language Models (LLMs) providing
transparent explanations of how and why preferences evolve. We focus on the
MovieLens dataset, extracting only the last 40 interactions for each user, to
emphasize recent behavior. By processing textual metadata (such as movie
overviews and tags) Lusifer creates more context aware user states and
simulates feedback on new items, including those with limited or no prior
ratings. This approach reduces reliance on extensive historical data and
facilitates cold start scenario handling and adaptation to out of distribution
cases. Our experiments compare Lusifer with traditional collaborative filtering
models, revealing that while Lusifer can be comparable in predictive accuracy,
it excels at capturing dynamic user responses and yielding explainable results
at every step. These qualities highlight its potential as a scalable, ethically
sound alternative to live user experiments, supporting iterative and
user-centric evaluations of RL-based recommender strategies. Looking ahead, we
envision Lusifer serving as a foundational tool for exploring generative
AI-driven user simulations, enabling more adaptive and personalized
recommendation pipelines under real world constraints.
|
2405.21061 | Min Chen | Jianqing Liang and Min Chen and Jiye Liang | Graph External Attention Enhanced Transformer | In Proceedings of ICML 2024 | Proceedings of the 41st International Conference on Machine
Learning, 2024 | 10.48550/arXiv.2405.21061 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Transformer architecture has recently gained considerable attention in
the field of graph representation learning, as it naturally overcomes several
limitations of Graph Neural Networks (GNNs) with customized attention
mechanisms or positional and structural encodings. Despite making some
progress, existing works tend to overlook external information of graphs,
specifically the correlation between graphs. Intuitively, graphs with similar
structures should have similar representations. Therefore, we propose Graph
External Attention (GEA) -- a novel attention mechanism that leverages multiple
external node/edge key-value units to capture inter-graph correlations
implicitly. On this basis, we design an effective architecture called Graph
External Attention Enhanced Transformer (GEAET), which integrates local
structure and global interaction information for more comprehensive graph
representations. Extensive experiments on benchmark datasets demonstrate that
GEAET achieves state-of-the-art empirical performance. The source code is
available for reproducibility at: https://github.com/icm1018/GEAET.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 17:50:27 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2024 14:20:27 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liang",
"Jianqing",
""
],
[
"Chen",
"Min",
""
],
[
"Liang",
"Jiye",
""
]
] | TITLE: Graph External Attention Enhanced Transformer
ABSTRACT: The Transformer architecture has recently gained considerable attention in
the field of graph representation learning, as it naturally overcomes several
limitations of Graph Neural Networks (GNNs) with customized attention
mechanisms or positional and structural encodings. Despite making some
progress, existing works tend to overlook external information of graphs,
specifically the correlation between graphs. Intuitively, graphs with similar
structures should have similar representations. Therefore, we propose Graph
External Attention (GEA) -- a novel attention mechanism that leverages multiple
external node/edge key-value units to capture inter-graph correlations
implicitly. On this basis, we design an effective architecture called Graph
External Attention Enhanced Transformer (GEAET), which integrates local
structure and global interaction information for more comprehensive graph
representations. Extensive experiments on benchmark datasets demonstrate that
GEAET achieves state-of-the-art empirical performance. The source code is
available for reproducibility at: https://github.com/icm1018/GEAET.
|
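As a concrete illustration of the external attention summarised in the GEAET record above (the official implementation is at the linked repository), the sketch below lets node features attend to a small learnable key/value memory shared across graphs. The memory size, the double normalisation, and all names follow the generic external-attention formulation and are my assumptions rather than the paper's exact design.

```python
# Minimal sketch of graph external attention (a generic formulation, not the
# released GEAET code): node features attend to a small learnable key/value
# memory that is shared across all graphs, so correlations between graphs can
# be captured implicitly through the shared memory.
import torch
import torch.nn as nn

class GraphExternalAttention(nn.Module):
    def __init__(self, dim=64, n_units=32):
        super().__init__()
        self.key_memory = nn.Parameter(torch.randn(n_units, dim) / dim ** 0.5)
        self.value_memory = nn.Parameter(torch.randn(n_units, dim) / dim ** 0.5)

    def forward(self, node_feats):                            # (num_nodes, dim)
        scores = node_feats @ self.key_memory.t()             # (num_nodes, n_units)
        attn = torch.softmax(scores, dim=-1)
        attn = attn / (attn.sum(dim=0, keepdim=True) + 1e-9)  # double normalisation
        return attn @ self.value_memory                       # (num_nodes, dim)

nodes = torch.randn(10, 64)                   # features of one graph's nodes
print(GraphExternalAttention()(nodes).shape)  # torch.Size([10, 64])
```

Because the key/value memory is a shared parameter rather than being computed from each input graph, graphs with similar structure produce similar attention patterns over the same memory units, which is one simple way inter-graph correlation can enter the representation.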
2406.01638 | Chenxi Liu | Chenxi Liu, Qianxiong Xu, Hao Miao, Sun Yang, Lingzheng Zhang, Cheng
Long, Ziyue Li, Rui Zhao | TimeCMA: Towards LLM-Empowered Multivariate Time Series Forecasting via
Cross-Modality Alignment | Accepted as an Oral Presentation at AAAI 2025 (Main Technical Track) | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multivariate time series forecasting (MTSF) aims to learn temporal dynamics
among variables to forecast future time series. Existing statistical and deep
learning-based methods suffer from limited learnable parameters and small-scale
training data. Recently, large language models (LLMs) combining time series
with textual prompts have achieved promising performance in MTSF. However, we
discovered that current LLM-based solutions fall short in learning disentangled
embeddings. We introduce TimeCMA, an intuitive yet effective framework for MTSF
via cross-modality alignment. Specifically, we present a dual-modality encoding
with two branches: the time series encoding branch extracts disentangled yet
weak time series embeddings, and the LLM-empowered encoding branch wraps the
same time series with text as prompts to obtain entangled yet robust prompt
embeddings. As a result, such a cross-modality alignment retrieves both
disentangled and robust time series embeddings, "the best of both worlds", from
the prompt embeddings based on time series and prompt modality similarities. As
another key design, to reduce the computational costs from time series with
their lengthy textual prompts, we design an effective prompt to encourage the
most essential temporal information to be encapsulated in the last token: only
the last token is passed to downstream prediction. We further store the last
token embeddings to accelerate inference speed. Extensive experiments on eight
real datasets demonstrate that TimeCMA outperforms state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 00:27:29 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jun 2024 07:53:12 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jun 2024 01:39:29 GMT"
},
{
"version": "v4",
"created": "Wed, 18 Dec 2024 15:01:32 GMT"
},
{
"version": "v5",
"created": "Sat, 29 Mar 2025 08:44:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Chenxi",
""
],
[
"Xu",
"Qianxiong",
""
],
[
"Miao",
"Hao",
""
],
[
"Yang",
"Sun",
""
],
[
"Zhang",
"Lingzheng",
""
],
[
"Long",
"Cheng",
""
],
[
"Li",
"Ziyue",
""
],
[
"Zhao",
"Rui",
""
]
] | TITLE: TimeCMA: Towards LLM-Empowered Multivariate Time Series Forecasting via
Cross-Modality Alignment
ABSTRACT: Multivariate time series forecasting (MTSF) aims to learn temporal dynamics
among variables to forecast future time series. Existing statistical and deep
learning-based methods suffer from limited learnable parameters and small-scale
training data. Recently, large language models (LLMs) combining time series
with textual prompts have achieved promising performance in MTSF. However, we
discovered that current LLM-based solutions fall short in learning disentangled
embeddings. We introduce TimeCMA, an intuitive yet effective framework for MTSF
via cross-modality alignment. Specifically, we present a dual-modality encoding
with two branches: the time series encoding branch extracts disentangled yet
weak time series embeddings, and the LLM-empowered encoding branch wraps the
same time series with text as prompts to obtain entangled yet robust prompt
embeddings. As a result, such a cross-modality alignment retrieves both
disentangled and robust time series embeddings, "the best of both worlds", from
the prompt embeddings based on time series and prompt modality similarities. As
another key design, to reduce the computational costs from time series with
their lengthy textual prompts, we design an effective prompt to encourage the
most essential temporal information to be encapsulated in the last token: only
the last token is passed to downstream prediction. We further store the last
token embeddings to accelerate inference speed. Extensive experiments on eight
real datasets demonstrate that TimeCMA outperforms state-of-the-art methods.
|
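As a rough illustration of the cross-modality alignment and last-token design described in the TimeCMA abstract above, here is a toy PyTorch sketch. The single-head attention, the dimensions, and the random tensors standing in for cached LLM last-token embeddings are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossModalityAlignment(nn.Module):
    """Toy sketch: retrieve robust information from prompt embeddings.

    Variable-wise time-series embeddings act as queries; prompt embeddings
    (only the last-token embedding per variable, as the abstract suggests)
    act as keys and values. This is an illustrative simplification.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, ts_emb: torch.Tensor, prompt_last_token: torch.Tensor) -> torch.Tensor:
        # ts_emb:            (batch, num_variables, dim)  disentangled but weak
        # prompt_last_token: (batch, num_variables, dim)  robust but entangled
        aligned, _ = self.attn(query=ts_emb, key=prompt_last_token, value=prompt_last_token)
        return aligned  # embeddings passed on to the forecaster

batch, num_vars, dim = 2, 7, 64
ts_emb = torch.randn(batch, num_vars, dim)
# In practice the last-token embeddings would come from a frozen LLM and be
# cached offline to speed up inference; random tensors stand in for them here.
prompt_last = torch.randn(batch, num_vars, dim)
print(CrossModalityAlignment(dim)(ts_emb, prompt_last).shape)  # torch.Size([2, 7, 64])
```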
2406.09126 | Weijie Wei | Weijie Wei, Osman \"Ulger, Fatemeh Karimi Nejadasl, Theo Gevers,
Martin R. Oswald | 3D-AVS: LiDAR-based 3D Auto-Vocabulary Segmentation | v3 is the camera-ready version for CVPR 2025, while v2 serves as both
a preview and the camera-ready version for the CVPR 2024 OpenSun3D Workshop | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Open-Vocabulary Segmentation (OVS) methods offer promising capabilities in
detecting unseen object categories, but the category must be known and needs to
be provided by a human, either via a text prompt or pre-labeled datasets, thus
limiting their scalability. We propose 3D-AVS, a method for Auto-Vocabulary
Segmentation of 3D point clouds for which the vocabulary is unknown and
auto-generated for each input at runtime, thus eliminating the human in the
loop and typically providing a substantially larger vocabulary for richer
annotations. 3D-AVS first recognizes semantic entities from image or point
cloud data and then segments all points with the automatically generated
vocabulary. Our method incorporates both image-based and point-based
recognition, enhancing robustness under challenging lighting conditions where
geometric information from LiDAR is especially valuable. Our point-based
recognition features a Sparse Masked Attention Pooling (SMAP) module to enrich
the diversity of recognized objects. To address the challenges of evaluating
unknown vocabularies and avoid annotation biases from label synonyms,
hierarchies, or semantic overlaps, we introduce the annotation-free Text-Point
Semantic Similarity (TPSS) metric for assessing generated vocabulary quality.
Our evaluations on nuScenes and ScanNet200 demonstrate 3D-AVS's ability to
generate semantic classes with accurate point-wise segmentations. Code will be
released at https://github.com/ozzyou/3D-AVS
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 13:59:47 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Jul 2024 11:50:52 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 19:24:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wei",
"Weijie",
""
],
[
"Ülger",
"Osman",
""
],
[
"Nejadasl",
"Fatemeh Karimi",
""
],
[
"Gevers",
"Theo",
""
],
[
"Oswald",
"Martin R.",
""
]
] | TITLE: 3D-AVS: LiDAR-based 3D Auto-Vocabulary Segmentation
ABSTRACT: Open-Vocabulary Segmentation (OVS) methods offer promising capabilities in
detecting unseen object categories, but the category must be known and needs to
be provided by a human, either via a text prompt or pre-labeled datasets, thus
limiting their scalability. We propose 3D-AVS, a method for Auto-Vocabulary
Segmentation of 3D point clouds for which the vocabulary is unknown and
auto-generated for each input at runtime, thus eliminating the human in the
loop and typically providing a substantially larger vocabulary for richer
annotations. 3D-AVS first recognizes semantic entities from image or point
cloud data and then segments all points with the automatically generated
vocabulary. Our method incorporates both image-based and point-based
recognition, enhancing robustness under challenging lighting conditions where
geometric information from LiDAR is especially valuable. Our point-based
recognition features a Sparse Masked Attention Pooling (SMAP) module to enrich
the diversity of recognized objects. To address the challenges of evaluating
unknown vocabularies and avoid annotation biases from label synonyms,
hierarchies, or semantic overlaps, we introduce the annotation-free Text-Point
Semantic Similarity (TPSS) metric for assessing generated vocabulary quality.
Our evaluations on nuScenes and ScanNet200 demonstrate 3D-AVS's ability to
generate semantic classes with accurate point-wise segmentations. Code will be
released at https://github.com/ozzyou/3D-AVS
|
2406.13155 | Alexander Bodner | Alexander Dylan Bodner, Antonio Santiago Tepsich, Jack Natan Spolski,
Santiago Pourteau | Convolutional Kolmogorov-Arnold Networks | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present Convolutional Kolmogorov-Arnold Networks, a novel
architecture that integrates the learnable spline-based activation functions of
Kolmogorov-Arnold Networks (KANs) into convolutional layers. By replacing
traditional fixed-weight kernels with learnable non-linear functions,
Convolutional KANs offer a significant improvement in parameter efficiency and
expressive power over standard Convolutional Neural Networks (CNNs). We
empirically evaluate Convolutional KANs on the Fashion-MNIST dataset,
demonstrating competitive accuracy with up to 50% fewer parameters compared to
baseline classic convolutions. This suggests that the KAN Convolution can
effectively capture complex spatial relationships with fewer resources,
offering a promising alternative for parameter-efficient deep learning models.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 02:09:44 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Nov 2024 00:55:06 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 12:55:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Bodner",
"Alexander Dylan",
""
],
[
"Tepsich",
"Antonio Santiago",
""
],
[
"Spolski",
"Jack Natan",
""
],
[
"Pourteau",
"Santiago",
""
]
] | TITLE: Convolutional Kolmogorov-Arnold Networks
ABSTRACT: In this paper, we present Convolutional Kolmogorov-Arnold Networks, a novel
architecture that integrates the learnable spline-based activation functions of
Kolmogorov-Arnold Networks (KANs) into convolutional layers. By replacing
traditional fixed-weight kernels with learnable non-linear functions,
Convolutional KANs offer a significant improvement in parameter efficiency and
expressive power over standard Convolutional Neural Networks (CNNs). We
empirically evaluate Convolutional KANs on the Fashion-MNIST dataset,
demonstrating competitive accuracy with up to 50% fewer parameters compared to
baseline classic convolutions. This suggests that the KAN Convolution can
effectively capture complex spatial relationships with fewer resources,
offering a promising alternative for parameter-efficient deep learning models.
|
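The following toy PyTorch sketch illustrates the core idea in the Convolutional KAN abstract above: replacing fixed convolution weights with a learnable univariate function per kernel position. Gaussian basis functions stand in for B-splines and the single-channel setting is a simplification; this is not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANConv2d(nn.Module):
    """Very small sketch of a KAN-style convolution.

    Each kernel position applies a learnable univariate function, here
    parameterized as a linear combination of fixed Gaussian basis functions
    instead of B-splines. Illustrative simplification only.
    """

    def __init__(self, kernel_size: int = 3, num_basis: int = 8):
        super().__init__()
        self.kernel_size = kernel_size
        self.centers = nn.Parameter(torch.linspace(-2, 2, num_basis), requires_grad=False)
        # One coefficient vector per kernel position.
        self.coeffs = nn.Parameter(torch.randn(kernel_size * kernel_size, num_basis) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) single-channel input for simplicity
        patches = F.unfold(x, self.kernel_size, padding=self.kernel_size // 2)
        # patches: (batch, k*k, H*W); evaluate the basis at every patch element
        basis = torch.exp(-(patches.unsqueeze(-1) - self.centers) ** 2)   # (batch, k*k, H*W, num_basis)
        phi = (basis * self.coeffs.unsqueeze(0).unsqueeze(2)).sum(-1)     # apply per-position functions
        out = phi.sum(1)                                                  # sum over kernel positions
        return out.view(x.shape[0], 1, x.shape[2], x.shape[3])

x = torch.randn(4, 1, 28, 28)        # e.g., Fashion-MNIST-sized inputs
print(KANConv2d()(x).shape)          # torch.Size([4, 1, 28, 28])
```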
2406.16201 | Debeshee Das | Debeshee Das and Jie Zhang and Florian Tram\`er | Blind Baselines Beat Membership Inference Attacks for Foundation Models | Accepted to be presented at DATA-FM @ ICLR 2025 and IEEE DLSP
Workshop 2025 | null | null | null | cs.CR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Membership inference (MI) attacks try to determine if a data sample was used
to train a machine learning model. For foundation models trained on unknown Web
data, MI attacks are often used to detect copyrighted training materials,
measure test set contamination, or audit machine unlearning. Unfortunately, we
find that evaluations of MI attacks for foundation models are flawed, because
they sample members and non-members from different distributions. For 8
published MI evaluation datasets, we show that blind attacks -- that
distinguish the member and non-member distributions without looking at any
trained model -- outperform state-of-the-art MI attacks. Existing evaluations
thus tell us nothing about membership leakage of a foundation model's training
data.
| [
{
"version": "v1",
"created": "Sun, 23 Jun 2024 19:40:11 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 08:39:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Das",
"Debeshee",
""
],
[
"Zhang",
"Jie",
""
],
[
"Tramèr",
"Florian",
""
]
] | TITLE: Blind Baselines Beat Membership Inference Attacks for Foundation Models
ABSTRACT: Membership inference (MI) attacks try to determine if a data sample was used
to train a machine learning model. For foundation models trained on unknown Web
data, MI attacks are often used to detect copyrighted training materials,
measure test set contamination, or audit machine unlearning. Unfortunately, we
find that evaluations of MI attacks for foundation models are flawed, because
they sample members and non-members from different distributions. For 8
published MI evaluation datasets, we show that blind attacks -- that
distinguish the member and non-member distributions without looking at any
trained model -- outperform state-of-the-art MI attacks. Existing evaluations
thus tell us nothing about membership leakage of a foundation model's training
data.
|
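A blind attack of the kind described in the abstract above can be illustrated with a few lines of scikit-learn: it labels member vs. non-member samples purely from surface text features, without ever querying the target model. The TF-IDF plus logistic-regression choice and the toy data are illustrative assumptions, not the paper's exact baselines.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def blind_attack_auc(member_texts, non_member_texts):
    """Minimal sketch of a blind membership-inference baseline.

    A high AUC means members and non-members of the evaluation set are
    distinguishable from the data alone, so the evaluation tells us nothing
    about leakage from the trained model.
    """
    texts = list(member_texts) + list(non_member_texts)
    labels = [1] * len(member_texts) + [0] * len(non_member_texts)
    features = TfidfVectorizer(max_features=20000).fit_transform(texts)
    scores = cross_val_predict(LogisticRegression(max_iter=1000), features, labels,
                               cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(labels, scores)

# Toy usage with placeholder strings:
members = ["an old paragraph scraped before the training cutoff"] * 50
non_members = ["a newer paragraph written after the training cutoff"] * 50
print(blind_attack_auc(members, non_members))
```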
2406.16321 | Jing Zhu | Jing Zhu, Yuhang Zhou, Shengyi Qian, Zhongmou He, Tong Zhao, Neil
Shah, Danai Koutra | Mosaic of Modalities: A Comprehensive Benchmark for Multimodal Graph
Learning | CVPR 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph machine learning has made significant strides in recent years, yet the
integration of visual information with graph structure and its potential for
improving performance in downstream tasks remains an underexplored area. To
address this critical gap, we introduce the Multimodal Graph Benchmark
(MM-GRAPH), a pioneering benchmark that incorporates both visual and textual
information into graph learning tasks. MM-GRAPH extends beyond existing
text-attributed graph benchmarks, offering a more comprehensive evaluation
framework for multimodal graph learning. Our benchmark comprises seven diverse
datasets of varying scales (ranging from thousands to millions of edges),
designed to assess algorithms across different tasks in real-world scenarios.
These datasets feature rich multimodal node attributes, including visual data,
which enables a more holistic evaluation of various graph learning frameworks
in complex, multimodal environments. To support advancements in this emerging
field, we provide an extensive empirical study on various graph learning
frameworks when presented with features from multiple modalities, particularly
emphasizing the impact of visual information. This study offers valuable
insights into the challenges and opportunities of integrating visual data into
graph learning.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2024 05:14:09 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 06:11:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhu",
"Jing",
""
],
[
"Zhou",
"Yuhang",
""
],
[
"Qian",
"Shengyi",
""
],
[
"He",
"Zhongmou",
""
],
[
"Zhao",
"Tong",
""
],
[
"Shah",
"Neil",
""
],
[
"Koutra",
"Danai",
""
]
] | TITLE: Mosaic of Modalities: A Comprehensive Benchmark for Multimodal Graph
Learning
ABSTRACT: Graph machine learning has made significant strides in recent years, yet the
integration of visual information with graph structure and its potential for
improving performance in downstream tasks remains an underexplored area. To
address this critical gap, we introduce the Multimodal Graph Benchmark
(MM-GRAPH), a pioneering benchmark that incorporates both visual and textual
information into graph learning tasks. MM-GRAPH extends beyond existing
text-attributed graph benchmarks, offering a more comprehensive evaluation
framework for multimodal graph learning. Our benchmark comprises seven diverse
datasets of varying scales (ranging from thousands to millions of edges),
designed to assess algorithms across different tasks in real-world scenarios.
These datasets feature rich multimodal node attributes, including visual data,
which enables a more holistic evaluation of various graph learning frameworks
in complex, multimodal environments. To support advancements in this emerging
field, we provide an extensive empirical study on various graph learning
frameworks when presented with features from multiple modalities, particularly
emphasizing the impact of visual information. This study offers valuable
insights into the challenges and opportunities of integrating visual data into
graph learning.
|
2407.00506 | Chi Zhao | Chi Zhao, Jing Liu, Elena Parilina | ShapG: new feature importance method based on the Shapley value | This paper has been published in the journal "Engineering
Applications of Artificial Intelligence" | Engineering Applications of Artificial Intelligence 148 (2025):
110409 | 10.1016/j.engappai.2025.110409 | null | cs.AI cs.GT cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With wide application of Artificial Intelligence (AI), it has become
particularly important to make decisions of AI systems explainable and
transparent. In this paper, we propose a new Explainable Artificial
Intelligence (XAI) method called ShapG (Explanations based on Shapley value for
Graphs) for measuring feature importance. ShapG is a model-agnostic global
explanation method. At the first stage, it defines an undirected graph based on
the dataset, where nodes represent features and edges are added based on
calculation of correlation coefficients between features. At the second stage,
it calculates an approximated Shapley value by sampling the data taking into
account this graph structure. The sampling approach of ShapG allows the
importance of features to be calculated efficiently, i.e., with reduced computational
complexity. Comparison of ShapG with other existing XAI methods shows that it
provides more accurate explanations for two examined datasets. We also compared
the running time of ShapG with that of other XAI methods based on cooperative
game theory, and the results show that ShapG has a clear advantage in running
time, which further demonstrates its efficiency. In addition,
extensive experiments demonstrate a wide range of applicability of the ShapG
method for explaining complex models. We find ShapG an important tool in
improving explainability and transparency of AI systems and believe it can be
widely used in various fields.
| [
{
"version": "v1",
"created": "Sat, 29 Jun 2024 18:19:55 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 06:57:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhao",
"Chi",
""
],
[
"Liu",
"Jing",
""
],
[
"Parilina",
"Elena",
""
]
] | TITLE: ShapG: new feature importance method based on the Shapley value
ABSTRACT: With wide application of Artificial Intelligence (AI), it has become
particularly important to make decisions of AI systems explainable and
transparent. In this paper, we proposed a new Explainable Artificial
Intelligence (XAI) method called ShapG (Explanations based on Shapley value for
Graphs) for measuring feature importance. ShapG is a model-agnostic global
explanation method. At the first stage, it defines an undirected graph based on
the dataset, where nodes represent features and edges are added based on
calculation of correlation coefficients between features. At the second stage,
it calculates an approximated Shapley value by sampling the data taking into
account this graph structure. The sampling approach of ShapG allows to
calculate the importance of features efficiently, i.e. to reduce computational
complexity. Comparison of ShapG with other existing XAI methods shows that it
provides more accurate explanations for two examined datasets. We also compared
other XAI methods developed based on cooperative game theory with ShapG in
running time, and the results show that ShapG exhibits obvious advantages in
its running time, which further proves efficiency of ShapG. In addition,
extensive experiments demonstrate a wide range of applicability of the ShapG
method for explaining complex models. We find ShapG an important tool in
improving explainability and transparency of AI systems and believe it can be
widely used in various fields.
|
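The two-stage idea in the ShapG abstract above can be sketched as follows: build a correlation-based feature graph, then approximate Shapley values by sampling. The sketch replaces ShapG's graph-guided sampling with plain permutation sampling and drops features via mean imputation, so it is a generic approximation rather than the published algorithm.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

def build_feature_graph(X: np.ndarray, threshold: float = 0.3) -> nx.Graph:
    """Stage 1: nodes are features, edges connect strongly correlated features."""
    corr = np.corrcoef(X, rowvar=False)
    graph = nx.Graph()
    graph.add_nodes_from(range(X.shape[1]))
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            if abs(corr[i, j]) >= threshold:
                graph.add_edge(i, j)
    return graph

def sampled_shapley(model, X, y, n_samples=50, rng=np.random.default_rng(0)):
    """Stage 2 (simplified): permutation-sampling Shapley approximation."""
    n_features = X.shape[1]
    base = X.mean(axis=0)

    def value(subset):
        X_masked = np.tile(base, (X.shape[0], 1))
        X_masked[:, list(subset)] = X[:, list(subset)]  # keep only coalition features
        return r2_score(y, model.predict(X_masked))

    phi = np.zeros(n_features)
    for _ in range(n_samples):
        perm = rng.permutation(n_features)
        coalition, prev = set(), value(set())
        for f in perm:
            coalition.add(f)
            cur = value(coalition)
            phi[f] += cur - prev  # marginal contribution of feature f
            prev = cur
    return phi / n_samples

X = np.random.default_rng(1).normal(size=(200, 6))
y = 3 * X[:, 0] + X[:, 1] + 0.1 * np.random.default_rng(2).normal(size=200)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(build_feature_graph(X).number_of_edges())
print(np.round(sampled_shapley(model, X, y), 3))  # feature 0 should dominate
```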
2407.02264 | Huiyu Gao | Huiyu Gao, Jiahao Ma, David Ahmedt-Aristizabal, Chuong Nguyen,
Miaomiao Liu | SOAF: Scene Occlusion-aware Neural Acoustic Field | null | null | null | null | cs.CV cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper tackles the problem of novel view audio-visual synthesis along an
arbitrary trajectory in an indoor scene, given the audio-video recordings from
other known trajectories of the scene. Existing methods often overlook the
effect of room geometry, particularly wall occlusions on sound propagation,
making them less accurate in multi-room environments. In this work, we propose
a new approach called Scene Occlusion-aware Acoustic Field (SOAF) for accurate
sound generation. Our approach derives a global prior for the sound field using
distance-aware parametric sound-propagation modeling and then transforms it
based on the scene structure learned from the input video. We extract features
from the local acoustic field centered at the receiver using a Fibonacci Sphere
to generate binaural audio for novel views with a direction-aware attention
mechanism. Extensive experiments on the real dataset RWAVS and the synthetic
dataset SoundSpaces demonstrate that our method outperforms previous
state-of-the-art techniques in audio generation.
| [
{
"version": "v1",
"created": "Tue, 2 Jul 2024 13:40:56 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jul 2024 01:24:37 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 06:07:49 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gao",
"Huiyu",
""
],
[
"Ma",
"Jiahao",
""
],
[
"Ahmedt-Aristizabal",
"David",
""
],
[
"Nguyen",
"Chuong",
""
],
[
"Liu",
"Miaomiao",
""
]
] | TITLE: SOAF: Scene Occlusion-aware Neural Acoustic Field
ABSTRACT: This paper tackles the problem of novel view audio-visual synthesis along an
arbitrary trajectory in an indoor scene, given the audio-video recordings from
other known trajectories of the scene. Existing methods often overlook the
effect of room geometry, particularly wall occlusions on sound propagation,
making them less accurate in multi-room environments. In this work, we propose
a new approach called Scene Occlusion-aware Acoustic Field (SOAF) for accurate
sound generation. Our approach derives a global prior for the sound field using
distance-aware parametric sound-propagation modeling and then transforms it
based on the scene structure learned from the input video. We extract features
from the local acoustic field centered at the receiver using a Fibonacci Sphere
to generate binaural audio for novel views with a direction-aware attention
mechanism. Extensive experiments on the real dataset RWAVS and the synthetic
dataset SoundSpaces demonstrate that our method outperforms previous
state-of-the-art techniques in audio generation.
|
2407.05311 | Kun Li | Kun Li, Pengyu Liu, Dan Guo, Fei Wang, Zhiliang Wu, Hehe Fan, Meng
Wang | MMAD: Multi-label Micro-Action Detection in Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human body actions are an important form of non-verbal communication in
social interactions. This paper specifically focuses on a subset of body
actions known as micro-actions, which are subtle, low-intensity body movements
with promising applications in human emotion analysis. In real-world scenarios,
human micro-actions often temporally co-occur, with multiple micro-actions
overlapping in time, such as concurrent head and hand movements. However,
current research primarily focuses on recognizing individual micro-actions
while overlooking their co-occurring nature. To address this gap, we propose a
new task named Multi-label Micro-Action Detection (MMAD), which involves
identifying all micro-actions in a given short video, determining their start
and end times, and categorizing them. Accomplishing this requires a model
capable of accurately capturing both long-term and short-term action
relationships to detect multiple overlapping micro-actions. To facilitate the
MMAD task, we introduce a new dataset named Multi-label Micro-Action-52
(MMA-52) and propose a baseline method equipped with a dual-path
spatial-temporal adapter to address the challenges of subtle visual change in
MMAD. We hope that MMA-52 can stimulate research on micro-action analysis in
videos and prompt the development of spatio-temporal modeling in human-centric
video understanding. The proposed MMA-52 dataset is available at:
https://github.com/VUT-HFUT/Micro-Action.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2024 09:45:14 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 10:25:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Kun",
""
],
[
"Liu",
"Pengyu",
""
],
[
"Guo",
"Dan",
""
],
[
"Wang",
"Fei",
""
],
[
"Wu",
"Zhiliang",
""
],
[
"Fan",
"Hehe",
""
],
[
"Wang",
"Meng",
""
]
] | TITLE: MMAD: Multi-label Micro-Action Detection in Videos
ABSTRACT: Human body actions are an important form of non-verbal communication in
social interactions. This paper specifically focuses on a subset of body
actions known as micro-actions, which are subtle, low-intensity body movements
with promising applications in human emotion analysis. In real-world scenarios,
human micro-actions often temporally co-occur, with multiple micro-actions
overlapping in time, such as concurrent head and hand movements. However,
current research primarily focuses on recognizing individual micro-actions
while overlooking their co-occurring nature. To address this gap, we propose a
new task named Multi-label Micro-Action Detection (MMAD), which involves
identifying all micro-actions in a given short video, determining their start
and end times, and categorizing them. Accomplishing this requires a model
capable of accurately capturing both long-term and short-term action
relationships to detect multiple overlapping micro-actions. To facilitate the
MMAD task, we introduce a new dataset named Multi-label Micro-Action-52
(MMA-52) and propose a baseline method equipped with a dual-path
spatial-temporal adapter to address the challenges of subtle visual change in
MMAD. We hope that MMA-52 can stimulate research on micro-action analysis in
videos and prompt the development of spatio-temporal modeling in human-centric
video understanding. The proposed MMA-52 dataset is available at:
https://github.com/VUT-HFUT/Micro-Action.
|
2407.06740 | Jorge Paz-Ruza | Jorge Paz-Ruza, David Esteban-Mart\'inez, Amparo Alonso-Betanzos,
Bertha Guijarro-Berdi\~nas | Sustainable techniques to improve Data Quality for training image-based
explanatory models for Recommender Systems | null | null | null | null | cs.LG cs.AI cs.CV cs.IR | http://creativecommons.org/licenses/by/4.0/ | Visual explanations based on user-uploaded images are an effective and
self-contained approach to provide transparency to Recommender Systems (RS),
but intrinsic limitations of data used in this explainability paradigm cause
existing approaches to use poor-quality training data that is highly sparse and
suffers from labelling noise. Popular training enrichment approaches like model
enlargement or massive data gathering are expensive and environmentally
unsustainable, thus we seek to provide better visual explanations to RS
aligning with the principles of Responsible AI. In this work, we research the
intersection of effective and sustainable training enrichment strategies for
visual-based RS explainability models by developing three novel strategies that
focus on training Data Quality: 1) selection of reliable negative training
examples using Positive-unlabelled Learning, 2) transform-based data
augmentation, and 3) text-to-image generative-based data augmentation. The
integration of these strategies in three state-of-the-art explainability models
increases performance by 5% in relevant ranking metrics of these visual-based
RS explainability models without penalizing their practical long-term
sustainability, as tested in multiple real-world restaurant recommendation
explanation datasets.
| [
{
"version": "v1",
"created": "Tue, 9 Jul 2024 10:40:31 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 10:16:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Paz-Ruza",
"Jorge",
""
],
[
"Esteban-Martínez",
"David",
""
],
[
"Alonso-Betanzos",
"Amparo",
""
],
[
"Guijarro-Berdiñas",
"Bertha",
""
]
] | TITLE: Sustainable techniques to improve Data Quality for training image-based
explanatory models for Recommender Systems
ABSTRACT: Visual explanations based on user-uploaded images are an effective and
self-contained approach to provide transparency to Recommender Systems (RS),
but intrinsic limitations of data used in this explainability paradigm cause
existing approaches to use poor-quality training data that is highly sparse and
suffers from labelling noise. Popular training enrichment approaches like model
enlargement or massive data gathering are expensive and environmentally
unsustainable, thus we seek to provide better visual explanations to RS
aligning with the principles of Responsible AI. In this work, we research the
intersection of effective and sustainable training enrichment strategies for
visual-based RS explainability models by developing three novel strategies that
focus on training Data Quality: 1) selection of reliable negative training
examples using Positive-unlabelled Learning, 2) transform-based data
augmentation, and 3) text-to-image generative-based data augmentation. The
integration of these strategies in three state-of-the-art explainability models
increases performance by 5% in relevant ranking metrics of these visual-based
RS explainability models without penalizing their practical long-term
sustainability, as tested in multiple real-world restaurant recommendation
explanation datasets.
|
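Of the three Data Quality strategies listed in the abstract above, the transform-based data augmentation (strategy 2) is the simplest to illustrate. The torchvision transform set, magnitudes, and number of views below are hypothetical choices, not the ones used in the paper.

```python
import torch
from torchvision import transforms
from PIL import Image

# Sketch of strategy (2): each user-uploaded image is expanded into several
# label-preserving variants before training the visual explainability model.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])

def augmented_views(image: Image.Image, n_views: int = 4) -> torch.Tensor:
    """Return a stack of augmented tensors for one training image."""
    return torch.stack([augment(image) for _ in range(n_views)])

# Toy usage with a synthetic image standing in for a user-uploaded photo:
dummy = Image.new("RGB", (320, 240), color=(200, 180, 150))
print(augmented_views(dummy).shape)  # torch.Size([4, 3, 224, 224])
```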
2407.11204 | Brian Moser | Vijul Shah, Ko Watanabe, Brian B. Moser and Andreas Dengel | PupilSense: A Novel Application for Webcam-Based Pupil Diameter
Estimation | null | null | null | null | cs.CV cs.AI cs.CY cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Measuring pupil diameter is vital for gaining insights into physiological and
psychological states - traditionally captured by expensive, specialized
equipment like Tobii eye-trackers and Pupillabs glasses. This paper presents a
novel application that enables pupil diameter estimation using standard
webcams, making the process accessible in everyday environments without
specialized equipment. Our app estimates pupil diameters from videos and offers
detailed analysis, including class activation maps, graphs of predicted left
and right pupil diameters, and eye aspect ratios during blinks. This tool
expands the accessibility of pupil diameter measurement, particularly in
everyday settings, benefiting fields like human behavior research and
healthcare. Additionally, we present a new open-source dataset for webcam-based
pupil diameter estimation, containing cropped eye images and corresponding
pupil diameter measurements.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2024 19:39:28 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 01:19:17 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shah",
"Vijul",
""
],
[
"Watanabe",
"Ko",
""
],
[
"Moser",
"Brian B.",
""
],
[
"Dengel",
"Andreas",
""
]
] | TITLE: PupilSense: A Novel Application for Webcam-Based Pupil Diameter
Estimation
ABSTRACT: Measuring pupil diameter is vital for gaining insights into physiological and
psychological states - traditionally captured by expensive, specialized
equipment like Tobii eye-trackers and Pupillabs glasses. This paper presents a
novel application that enables pupil diameter estimation using standard
webcams, making the process accessible in everyday environments without
specialized equipment. Our app estimates pupil diameters from videos and offers
detailed analysis, including class activation maps, graphs of predicted left
and right pupil diameters, and eye aspect ratios during blinks. This tool
expands the accessibility of pupil diameter measurement, particularly in
everyday settings, benefiting fields like human behavior research and
healthcare. Additionally, we present a new open source dataset for pupil
diameter estimation using webcam images containing cropped eye images and
corresponding pupil diameter measurements.
|
2407.12773 | Zhuoyan Shen | Zhuoyan Shen, Mikael Simard, Douglas Brand, Vanghelita Andrei, Ali
Al-Khader, Fatine Oumlil, Katherine Trevers, Thomas Butters, Simon Haefliger,
Eleanna Kara, Fernanda Amary, Roberto Tirabosco, Paul Cool, Gary Royle, Maria
A. Hawkins, Adrienne M. Flanagan, Charles-Antoine Collins Fekete | OMG-Net: A Deep Learning Framework Deploying Segment Anything to Detect
Pan-Cancer Mitotic Figures from Haematoxylin and Eosin-Stained Slides | null | null | 10.1038/s42003-024-07398-6 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Mitotic activity is an important feature for grading several cancer types.
Counting mitotic figures (MFs) is a time-consuming, laborious task prone to
inter-observer variation. Inaccurate recognition of MFs can lead to incorrect
grading and hence potential suboptimal treatment. In this study, we propose an
artificial intelligence (AI)-aided approach to detect MFs in digitised
haematoxylin and eosin-stained whole slide images (WSIs). Advances in this area
are hampered by the limited number and types of cancer datasets of MFs. Here we
establish the largest pan-cancer dataset of mitotic figures by combining an
in-house dataset of soft tissue tumours (STMF) with five open-source mitotic
datasets comprising multiple human cancers and canine specimens (ICPR, TUPAC,
CCMCT, CMC and MIDOG++). This new dataset identifies 74,620 MFs and 105,538
mitotic-like figures. We then employed a two-stage framework, the Optimised
Mitoses Generator Network (OMG-Net), to classify MFs. The framework first
deploys the Segment Anything Model (SAM) to automate the contouring of MFs and
surrounding objects. An adapted ResNet18 is subsequently trained to classify
MFs. OMG-Net reaches an F1-score of 0.84 on pan-cancer MF detection (breast
carcinoma, neuroendocrine tumour and melanoma), largely outperforming the
previous state-of-the-art MIDOG++ benchmark model on its hold-out testing set
(e.g. +16% F1-score on breast cancer detection, p<0.001), thereby providing
superior accuracy in detecting MFs on various types of tumours obtained with
different scanners.
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2024 17:53:37 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shen",
"Zhuoyan",
""
],
[
"Simard",
"Mikael",
""
],
[
"Brand",
"Douglas",
""
],
[
"Andrei",
"Vanghelita",
""
],
[
"Al-Khader",
"Ali",
""
],
[
"Oumlil",
"Fatine",
""
],
[
"Trevers",
"Katherine",
""
],
[
"Butters",
"Thomas",
""
],
[
"Haefliger",
"Simon",
""
],
[
"Kara",
"Eleanna",
""
],
[
"Amary",
"Fernanda",
""
],
[
"Tirabosco",
"Roberto",
""
],
[
"Cool",
"Paul",
""
],
[
"Royle",
"Gary",
""
],
[
"Hawkins",
"Maria A.",
""
],
[
"Flanagan",
"Adrienne M.",
""
],
[
"Fekete",
"Charles-Antoine Collins",
""
]
] | TITLE: OMG-Net: A Deep Learning Framework Deploying Segment Anything to Detect
Pan-Cancer Mitotic Figures from Haematoxylin and Eosin-Stained Slides
ABSTRACT: Mitotic activity is an important feature for grading several cancer types.
Counting mitotic figures (MFs) is a time-consuming, laborious task prone to
inter-observer variation. Inaccurate recognition of MFs can lead to incorrect
grading and hence potential suboptimal treatment. In this study, we propose an
artificial intelligence (AI)-aided approach to detect MFs in digitised
haematoxylin and eosin-stained whole slide images (WSIs). Advances in this area
are hampered by the limited number and types of cancer datasets of MFs. Here we
establish the largest pan-cancer dataset of mitotic figures by combining an
in-house dataset of soft tissue tumours (STMF) with five open-source mitotic
datasets comprising multiple human cancers and canine specimens (ICPR, TUPAC,
CCMCT, CMC and MIDOG++). This new dataset identifies 74,620 MFs and 105,538
mitotic-like figures. We then employed a two-stage framework, the Optimised
Mitoses Generator Network (OMG-Net), to classify MFs. The framework first
deploys the Segment Anything Model (SAM) to automate the contouring of MFs and
surrounding objects. An adapted ResNet18 is subsequently trained to classify
MFs. OMG-Net reaches an F1-score of 0.84 on pan-cancer MF detection (breast
carcinoma, neuroendocrine tumour and melanoma), largely outperforming the
previous state-of-the-art MIDOG++ benchmark model on its hold-out testing set
(e.g. +16% F1-score on breast cancer detection, p<0.001), thereby providing
superior accuracy in detecting MFs on various types of tumours obtained with
different scanners.
|
2407.18456 | Jiawei Sun | Zhaoqing Chen, Jiawei Sun, Xibin Yang, Xinyi Ye, Bin Zhao, Xuelong Li,
Juergen Czarske | Diffusion-driven lensless fiber endomicroscopic quantitative phase
imaging towards digital pathology | null | null | null | null | physics.optics cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The lensless fiber endomicroscope is an emerging tool for in-vivo microscopic
imaging, where quantitative phase imaging (QPI) can be utilized as a label-free
method to enhance image contrast. However, existing single-shot phase
reconstruction methods through lensless fiber endomicroscope typically perform
well on simple images but struggle with complex microscopic structures. Here,
we propose a speckle-conditioned diffusion model (SpecDiffusion), which
reconstructs phase images directly from speckles captured at the detection side
of a multi-core fiber (MCF). Unlike conventional neural networks, SpecDiffusion
employs iterative phase denoising steps for speckle-driven phase
reconstruction. The iteration scheme allows SpecDiffusion to break down the
phase reconstruction process into multiple steps, gradually building up to the
final phase image. This attribute alleviates the computation challenge at each
step and enables the reconstruction of rich details in complex microscopic
images. To validate its efficacy, we build an optical system to capture
speckles from MCF and construct a dataset consisting of 100,000 paired images.
SpecDiffusion provides high-fidelity phase reconstruction results and shows
powerful generalization capacity for unseen objects, such as test charts and
biological tissues, reducing the average mean absolute error of the
reconstructed tissue images by 7 times. Furthermore, the reconstructed tissue
images using SpecDiffusion show higher accuracy in zero-shot cell segmentation
tasks compared to the conventional method, demonstrating the potential for
further cell morphology analysis through the learning-based lensless fiber
endomicroscope. SpecDiffusion offers a precise and generalized method for phase
reconstruction through scattering media, including MCFs, opening new
perspectives in lensless fiber endomicroscopic imaging.
| [
{
"version": "v1",
"created": "Fri, 26 Jul 2024 01:42:31 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Sep 2024 11:12:00 GMT"
},
{
"version": "v3",
"created": "Mon, 30 Sep 2024 02:52:08 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Mar 2025 02:03:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Zhaoqing",
""
],
[
"Sun",
"Jiawei",
""
],
[
"Yang",
"Xibin",
""
],
[
"Ye",
"Xinyi",
""
],
[
"Zhao",
"Bin",
""
],
[
"Li",
"Xuelong",
""
],
[
"Czarske",
"Juergen",
""
]
] | TITLE: Diffusion-driven lensless fiber endomicroscopic quantitative phase
imaging towards digital pathology
ABSTRACT: The lensless fiber endomicroscope is an emerging tool for in-vivo microscopic
imaging, where quantitative phase imaging (QPI) can be utilized as a label-free
method to enhance image contrast. However, existing single-shot phase
reconstruction methods through lensless fiber endomicroscope typically perform
well on simple images but struggle with complex microscopic structures. Here,
we propose a speckle-conditioned diffusion model (SpecDiffusion), which
reconstructs phase images directly from speckles captured at the detection side
of a multi-core fiber (MCF). Unlike conventional neural networks, SpecDiffusion
employs iterative phase denoising steps for speckle-driven phase
reconstruction. The iteration scheme allows SpecDiffusion to break down the
phase reconstruction process into multiple steps, gradually building up to the
final phase image. This attribute alleviates the computation challenge at each
step and enables the reconstruction of rich details in complex microscopic
images. To validate its efficacy, we build an optical system to capture
speckles from MCF and construct a dataset consisting of 100,000 paired images.
SpecDiffusion provides high-fidelity phase reconstruction results and shows
powerful generalization capacity for unseen objects, such as test charts and
biological tissues, reducing the average mean absolute error of the
reconstructed tissue images by 7 times. Furthermore, the reconstructed tissue
images using SpecDiffusion show higher accuracy in zero-shot cell segmentation
tasks compared to the conventional method, demonstrating the potential for
further cell morphology analysis through the learning-based lensless fiber
endomicroscope. SpecDiffusion offers a precise and generalized method for phase
reconstruction through scattering media, including MCFs, opening new
perspectives in lensless fiber endomicroscopic imaging.
|
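The iterative, speckle-conditioned denoising described in the SpecDiffusion abstract above follows the usual reverse-diffusion recipe; a minimal sketch is shown below, assuming a standard DDPM noise schedule and a denoiser that predicts the added noise. The schedule, step count, and image shape are illustrative, not the paper's settings.

```python
import torch

@torch.no_grad()
def reverse_diffusion(denoiser, speckle, steps=1000, shape=(1, 1, 128, 128)):
    """Sketch of iterative, speckle-conditioned phase reconstruction.

    `denoiser(x_t, t, speckle)` is assumed to predict the added noise, as in
    a standard DDPM; all hyperparameters here are illustrative.
    """
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.full((shape[0],), t), speckle)
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])      # posterior mean step
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # add sampling noise
    return x  # reconstructed phase image

# Toy usage with a dummy denoiser that ignores its inputs:
dummy = lambda x, t, c: torch.zeros_like(x)
phase = reverse_diffusion(dummy, speckle=torch.randn(1, 1, 128, 128), steps=50)
print(phase.shape)
```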
2408.03095 | Siqi Gu | Siqi Gu, Quanjun Zhang, Kecheng Li, Chunrong Fang, Fangyuan Tian,
Liuchuan Zhu, Jianyi Zhou, Zhenyu Chen | TestART: Improving LLM-based Unit Testing via Co-evolution of Automated
Generation and Repair Iteration | null | null | null | null | cs.SE | http://creativecommons.org/publicdomain/zero/1.0/ | Unit testing is crucial for detecting bugs in individual program units but
consumes time and effort. Recently, large language models (LLMs) have
demonstrated remarkable capabilities in generating unit test cases. However,
several problems limit their ability to generate high-quality unit test cases:
(1) compilation and runtime errors caused by the hallucination of LLMs; (2)
lack of testing and coverage feedback information restricting the increase of
code coverage; (3) the repetition suppression problem causing invalid LLM-based
repair and generation attempts. To address these limitations, we propose
TestART, a novel unit test generation method. TestART improves LLM-based unit
testing via co-evolution of automated generation and repair iteration,
representing a significant advancement in automated unit test generation.
TestART leverages the template-based repair strategy to effectively fix bugs in
LLM-generated test cases for the first time. Meanwhile, TestART extracts
coverage information from successful test cases and uses it as coverage-guided
testing feedback. It also incorporates positive prompt injection to prevent
repetition suppression, thereby enhancing the sufficiency of the final test
case. This synergy between generation and repair elevates the correctness and
sufficiency of the produced test cases significantly beyond previous methods.
In comparative experiments, TestART demonstrates an 18% improvement in pass
rate and a 20% enhancement in coverage across three types of datasets compared
to baseline models. Additionally, it achieves better coverage rates than
EvoSuite with only half the number of test cases. These results demonstrate
TestART's superior ability to produce high-quality unit test cases by
harnessing the power of LLMs while overcoming their inherent flaws.
| [
{
"version": "v1",
"created": "Tue, 6 Aug 2024 10:52:41 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Aug 2024 07:28:48 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Aug 2024 08:27:56 GMT"
},
{
"version": "v4",
"created": "Tue, 5 Nov 2024 12:57:35 GMT"
},
{
"version": "v5",
"created": "Sat, 21 Dec 2024 12:51:04 GMT"
},
{
"version": "v6",
"created": "Mon, 31 Mar 2025 13:13:27 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gu",
"Siqi",
""
],
[
"Zhang",
"Quanjun",
""
],
[
"Li",
"Kecheng",
""
],
[
"Fang",
"Chunrong",
""
],
[
"Tian",
"Fangyuan",
""
],
[
"Zhu",
"Liuchuan",
""
],
[
"Zhou",
"Jianyi",
""
],
[
"Chen",
"Zhenyu",
""
]
] | TITLE: TestART: Improving LLM-based Unit Testing via Co-evolution of Automated
Generation and Repair Iteration
ABSTRACT: Unit testing is crucial for detecting bugs in individual program units but
consumes time and effort. Recently, large language models (LLMs) have
demonstrated remarkable capabilities in generating unit test cases. However,
several problems limit their ability to generate high-quality unit test cases:
(1) compilation and runtime errors caused by the hallucination of LLMs; (2)
lack of testing and coverage feedback information restricting the increase of
code coverage; (3) the repetition suppression problem causing invalid LLM-based
repair and generation attempts. To address these limitations, we propose
TestART, a novel unit test generation method. TestART improves LLM-based unit
testing via co-evolution of automated generation and repair iteration,
representing a significant advancement in automated unit test generation.
TestART leverages the template-based repair strategy to effectively fix bugs in
LLM-generated test cases for the first time. Meanwhile, TestART extracts
coverage information from successful test cases and uses it as coverage-guided
testing feedback. It also incorporates positive prompt injection to prevent
repetition suppression, thereby enhancing the sufficiency of the final test
case. This synergy between generation and repair elevates the correctness and
sufficiency of the produced test cases significantly beyond previous methods.
In comparative experiments, TestART demonstrates an 18% improvement in pass
rate and a 20% enhancement in coverage across three types of datasets compared
to baseline models. Additionally, it achieves better coverage rates than
EvoSuite with only half the number of test cases. These results demonstrate
TestART's superior ability to produce high-quality unit test cases by
harnessing the power of LLMs while overcoming their inherent flaws.
|
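The co-evolution of generation and repair described in the TestART abstract above boils down to a feedback loop. The sketch below shows only that control flow with trivial stub helpers; generate_tests, apply_repair_templates, run_test, and measure_coverage are hypothetical stand-ins, not TestART's actual implementation or API.

```python
from dataclasses import dataclass
import random

@dataclass
class Coverage:
    ratio: float
    uncovered: list

def generate_tests(prompt_feedback: str) -> list:
    # Stand-in for LLM-based test generation conditioned on feedback.
    return [f"test_case_{random.randint(0, 999)}  # feedback: {prompt_feedback!r}"]

def apply_repair_templates(test: str) -> str:
    return test  # real templates would patch compile/runtime errors

def run_test(test: str) -> bool:
    return True  # stand-in for compiling and executing the test

def measure_coverage(tests: list) -> Coverage:
    return Coverage(ratio=min(1.0, 0.3 * len(tests)), uncovered=[12, 34])

def testart_style_loop(max_rounds: int = 4) -> list:
    kept, feedback = [], ""
    for _ in range(max_rounds):
        candidates = [apply_repair_templates(t) for t in generate_tests(feedback)]
        kept.extend(t for t in candidates if run_test(t))            # keep passing tests
        cov = measure_coverage(kept)
        feedback = f"uncovered lines: {cov.uncovered}"                # coverage-guided feedback
        if cov.ratio >= 0.95:
            break
    return kept

print(testart_style_loop())
```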
2408.05288 | Bj\"orn L\"utjens | Bj\"orn L\"utjens and Raffaele Ferrari and Duncan Watson-Parris and
Noelle Selin | The impact of internal variability on benchmarking deep learning climate
emulators | null | null | null | null | cs.LG cs.AI cs.CE cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Full-complexity Earth system models (ESMs) are computationally very
expensive, limiting their use in exploring the climate outcomes of multiple
emission pathways. More efficient emulators that approximate ESMs can directly
map emissions onto climate outcomes, and benchmarks are being used to evaluate
their accuracy on standardized tasks and datasets. We investigate a popular
benchmark in data-driven climate emulation, ClimateBench, on which deep
learning-based emulators are currently achieving the best performance. We
compare these deep learning emulators with a linear regression-based emulator,
akin to pattern scaling, and show that it outperforms the incumbent
100M-parameter deep learning foundation model, ClimaX, on 3 out of 4
regionally-resolved climate variables, notably surface temperature and
precipitation. While emulating surface temperature is expected to be
predominantly linear, this result is surprising for emulating precipitation.
Precipitation is a much more noisy variable, and we show that deep learning
emulators can overfit to internal variability noise at low frequencies,
degrading their performance in comparison to a linear emulator. We address the
issue of overfitting by increasing the number of climate simulations per
emission pathway (from 3 to 50) and updating the benchmark targets with the
respective ensemble averages from the MPI-ESM1.2-LR model. Using the new
targets, we show that linear pattern scaling continues to be more accurate on
temperature, but can be outperformed by a deep learning-based technique for
emulating precipitation. We publish our code and data at
github.com/blutjens/climate-emulator.
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2024 18:17:17 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 16:06:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lütjens",
"Björn",
""
],
[
"Ferrari",
"Raffaele",
""
],
[
"Watson-Parris",
"Duncan",
""
],
[
"Selin",
"Noelle",
""
]
] | TITLE: The impact of internal variability on benchmarking deep learning climate
emulators
ABSTRACT: Full-complexity Earth system models (ESMs) are computationally very
expensive, limiting their use in exploring the climate outcomes of multiple
emission pathways. More efficient emulators that approximate ESMs can directly
map emissions onto climate outcomes, and benchmarks are being used to evaluate
their accuracy on standardized tasks and datasets. We investigate a popular
benchmark in data-driven climate emulation, ClimateBench, on which deep
learning-based emulators are currently achieving the best performance. We
compare these deep learning emulators with a linear regression-based emulator,
akin to pattern scaling, and show that it outperforms the incumbent
100M-parameter deep learning foundation model, ClimaX, on 3 out of 4
regionally-resolved climate variables, notably surface temperature and
precipitation. While emulating surface temperature is expected to be
predominantly linear, this result is surprising for emulating precipitation.
Precipitation is a much more noisy variable, and we show that deep learning
emulators can overfit to internal variability noise at low frequencies,
degrading their performance in comparison to a linear emulator. We address the
issue of overfitting by increasing the number of climate simulations per
emission pathway (from 3 to 50) and updating the benchmark targets with the
respective ensemble averages from the MPI-ESM1.2-LR model. Using the new
targets, we show that linear pattern scaling continues to be more accurate on
temperature, but can be outperformed by a deep learning-based technique for
emulating precipitation. We publish our code and data at
github.com/blutjens/climate-emulator.
|
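The linear pattern-scaling emulator that the abstract above compares against deep learning models is essentially one linear regression per grid cell; a minimal scikit-learn sketch on synthetic data is given below, with array shapes chosen only for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sketch of a linear pattern-scaling baseline: one linear regression per grid
# cell, mapping a small set of global forcing inputs to the local climate
# variable. The synthetic data merely stands in for ClimateBench-style inputs.
rng = np.random.default_rng(0)
n_years, n_inputs, n_lat, n_lon = 120, 4, 36, 72
global_forcings = rng.normal(size=(n_years, n_inputs))        # e.g. CO2, CH4, aerosol indices
local_response = rng.normal(size=(n_years, n_lat, n_lon))     # e.g. ensemble-mean temperature

# Fit all grid cells at once: LinearRegression supports multi-output targets.
targets = local_response.reshape(n_years, n_lat * n_lon)
emulator = LinearRegression().fit(global_forcings, targets)

# Emulate a new emission pathway.
new_forcings = rng.normal(size=(10, n_inputs))
prediction = emulator.predict(new_forcings).reshape(10, n_lat, n_lon)
print(prediction.shape)  # (10, 36, 72)
```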