Dataset columns:

| column | type |
|---|---|
| id | string |
| submitter | string |
| authors | string |
| title | string |
| comments | string |
| journal-ref | string |
| doi | string |
| report-no | string |
| categories | string |
| license | string |
| abstract | string |
| versions | list |
| update_date | timestamp[s] |
| authors_parsed | sequence |
| prompt | string |
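The rows that follow store `versions` as a list of JSON objects, `authors_parsed` as a list of `[last, first, suffix]` triples, and `prompt` as the title and abstract concatenated. A minimal sketch of reading such records, assuming they are exported as JSON lines under a hypothetical filename (`arxiv_metadata.jsonl` is not part of this page):

```python
import json

def build_prompt(record: dict) -> str:
    # Mirrors the layout of the `prompt` column shown in the rows below.
    return f"TITLE: {record['title']}\nABSTRACT: {record['abstract']}"

# Hypothetical export path; each line is one JSON object with the columns listed above.
with open("arxiv_metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        first_created = record["versions"][0]["created"] if record["versions"] else None
        print(record["id"], record["categories"], first_created)
        print(build_prompt(record).splitlines()[0])  # the TITLE line only
```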
2503.23676 | Chenguang Wan | Chenguang Wan, Youngwoo Cho, Zhisong Qu, Yann Camenen, Robin Varennes,
Kyungtak Lim, Kunpeng Li, Jiangang Li, Yanlong Li, and Xavier Garbet | A high-fidelity surrogate model for the ion temperature gradient (ITG)
instability using a small expensive simulation dataset | null | null | null | null | physics.plasm-ph | http://creativecommons.org/licenses/by/4.0/ | One of the main challenges in building high-fidelity surrogate models of
tokamak turbulence is the substantial demand for high-quality data. Typically,
producing high-quality data involves simulating complex physical processes,
which requires extensive computing resources. In this work, we propose a
fine-tuning-based approach to surrogate-model development that reduces the
amount of high-quality data required by 80\%. We demonstrate the effectiveness
of this approach by constructing a proof-of-principle ITG surrogate model using
datasets generated from two gyrokinetic codes, GKW and GX, where GX is much
lighter than GKW in terms of computing resources. Remarkably, the surrogate
models' performance remains nearly the same whether trained on 798 GKW results
alone or on 159 GKW results plus an additional 11979 GX results. These
encouraging outcomes indicate that fine-tuning methods can significantly
decrease the amount of high-quality data needed to develop simulation-driven
surrogate models.
Moreover, the approach presented here has the potential to facilitate surrogate
model development for heavy codes and may ultimately pave the way for digital
twin systems of tokamaks.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 02:43:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wan",
"Chenguang",
""
],
[
"Cho",
"Youngwoo",
""
],
[
"Qu",
"Zhisong",
""
],
[
"Camenen",
"Yann",
""
],
[
"Varennes",
"Robin",
""
],
[
"Lim",
"Kyungtak",
""
],
[
"Li",
"Kunpeng",
""
],
[
"Li",
"Jiangang",
""
],
[
"Li",
"Yanlong",
""
],
[
"Garbet",
"Xavier",
""
]
] | TITLE: A high-fidelity surrogate model for the ion temperature gradient (ITG)
instability using a small expensive simulation dataset
ABSTRACT: One of the main challenges in building high-fidelity surrogate models of
tokamak turbulence is the substantial demand for high-quality data. Typically,
producing high-quality data involves simulating complex physical processes,
which requires extensive computing resources. In this work, we propose a
fine-tuning-based approach to surrogate-model development that reduces the
amount of high-quality data required by 80\%. We demonstrate the effectiveness
of this approach by constructing a proof-of-principle ITG surrogate model using
datasets generated from two gyrokinetic codes, GKW and GX, where GX is much
lighter than GKW in terms of computing resources. Remarkably, the surrogate
models' performance remains nearly the same whether trained on 798 GKW results
alone or on 159 GKW results plus an additional 11979 GX results. These
encouraging outcomes indicate that fine-tuning methods can significantly
decrease the amount of high-quality data needed to develop simulation-driven
surrogate models.
Moreover, the approach presented here has the potential to facilitate surrogate
model development for heavy codes and may ultimately pave the way for digital
twin systems of tokamaks.
|
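For downstream use, the `authors_parsed` triples and the RFC 2822 timestamps inside `versions` of a record like the one above can be converted with the standard library; this helper is an illustrative assumption about usage, not part of the dataset:

```python
from email.utils import parsedate_to_datetime

def summarize(record: dict) -> str:
    # `authors_parsed` entries are [last, first, suffix] triples.
    authors = ", ".join(
        " ".join(part for part in (first, last, suffix) if part)
        for last, first, suffix in record["authors_parsed"]
    )
    # `versions[0]["created"]` looks like "Mon, 31 Mar 2025 02:43:48 GMT".
    submitted = parsedate_to_datetime(record["versions"][0]["created"])
    return f'{record["id"]} ({submitted:%Y-%m-%d}) {record["title"]} -- {authors}'
```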
2503.23684 | Chenxing Wang | Haitao Tian, Junyang Li, Chenxing Wang, and Helong Jiang | Detail-aware multi-view stereo network for depth estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view stereo methods have achieved great success for depth estimation
based on coarse-to-fine depth learning frameworks; however, the existing
methods perform poorly in recovering the depth of object boundaries and detail
regions. To address these issues, we propose a detail-aware multi-view stereo
network (DA-MVSNet) with a coarse-to-fine framework. The geometric depth clues
hidden in the coarse stage are utilized to maintain the geometric structural
relationships between object surfaces and enhance the expressive capability of
image features. In addition, an image synthesis loss is employed to constrain
the gradient flow for detailed regions and further strengthen the supervision
of object boundaries and texture-rich areas. Finally, we propose an adaptive
depth interval adjustment strategy to improve the accuracy of object
reconstruction. Extensive experiments on the DTU and Tanks & Temples datasets
demonstrate that our method achieves competitive results. The code is available
at https://github.com/wsmtht520-/DAMVSNet.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 03:23:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tian",
"Haitao",
""
],
[
"Li",
"Junyang",
""
],
[
"Wang",
"Chenxing",
""
],
[
"Jiang",
"Helong",
""
]
] | TITLE: Detail-aware multi-view stereo network for depth estimation
ABSTRACT: Multi-view stereo methods have achieved great success for depth estimation
based on coarse-to-fine depth learning frameworks; however, the existing
methods perform poorly in recovering the depth of object boundaries and detail
regions. To address these issues, we propose a detail-aware multi-view stereo
network (DA-MVSNet) with a coarse-to-fine framework. The geometric depth clues
hidden in the coarse stage are utilized to maintain the geometric structural
relationships between object surfaces and enhance the expressive capability of
image features. In addition, an image synthesis loss is employed to constrain
the gradient flow for detailed regions and further strengthen the supervision
of object boundaries and texture-rich areas. Finally, we propose an adaptive
depth interval adjustment strategy to improve the accuracy of object
reconstruction. Extensive experiments on the DTU and Tanks & Temples datasets
demonstrate that our method achieves competitive results. The code is available
at https://github.com/wsmtht520-/DAMVSNet.
|
2503.23686 | Oliver Schmidt | Oliver T. Schmidt | Data-Driven Forecasting of High-Dimensional Transient and Stationary
Processes via Space-Time Projection | null | null | null | null | cs.LG astro-ph.GA nlin.CD physics.comp-ph physics.data-an physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Space-Time Projection (STP) is introduced as a data-driven forecasting
approach for high-dimensional and time-resolved data. The method computes
extended space-time proper orthogonal modes from training data spanning a
prediction horizon comprising both hindcast and forecast intervals. Forecasts
are then generated by projecting the hindcast portion of these modes onto new
data, simultaneously leveraging their orthogonality and optimal correlation
with the forecast extension. Rooted in Proper Orthogonal Decomposition (POD)
theory, dimensionality reduction and time-delay embedding are intrinsic to the
approach. For a given ensemble and fixed prediction horizon, the only tunable
parameter is the truncation rank--no additional hyperparameters are required.
The hindcast accuracy serves as a reliable indicator for short-term forecast
accuracy and establishes a lower bound on forecast errors. The efficacy of the
method is demonstrated using two datasets: transient, highly anisotropic
simulations of supernova explosions in a turbulent interstellar medium, and
experimental velocity fields of a turbulent high-subsonic engineering flow. In
a comparative study with standard Long Short-Term Memory (LSTM) neural
networks--acknowledging that alternative architectures or training strategies
may yield different outcomes--the method consistently provided more accurate
forecasts. Considering its simplicity and robust performance, STP offers an
interpretable and competitive benchmark for forecasting high-dimensional
transient and chaotic processes, relying purely on spatiotemporal correlation
information.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 03:36:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Schmidt",
"Oliver T.",
""
]
] | TITLE: Data-Driven Forecasting of High-Dimensional Transient and Stationary
Processes via Space-Time Projection
ABSTRACT: Space-Time Projection (STP) is introduced as a data-driven forecasting
approach for high-dimensional and time-resolved data. The method computes
extended space-time proper orthogonal modes from training data spanning a
prediction horizon comprising both hindcast and forecast intervals. Forecasts
are then generated by projecting the hindcast portion of these modes onto new
data, simultaneously leveraging their orthogonality and optimal correlation
with the forecast extension. Rooted in Proper Orthogonal Decomposition (POD)
theory, dimensionality reduction and time-delay embedding are intrinsic to the
approach. For a given ensemble and fixed prediction horizon, the only tunable
parameter is the truncation rank--no additional hyperparameters are required.
The hindcast accuracy serves as a reliable indicator for short-term forecast
accuracy and establishes a lower bound on forecast errors. The efficacy of the
method is demonstrated using two datasets: transient, highly anisotropic
simulations of supernova explosions in a turbulent interstellar medium, and
experimental velocity fields of a turbulent high-subsonic engineering flow. In
a comparative study with standard Long Short-Term Memory (LSTM) neural
networks--acknowledging that alternative architectures or training strategies
may yield different outcomes--the method consistently provided more accurate
forecasts. Considering its simplicity and robust performance, STP offers an
interpretable and competitive benchmark for forecasting high-dimensional
transient and chaotic processes, relying purely on spatiotemporal correlation
information.
|
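The abstract above describes the core of Space-Time Projection: extract space-time POD modes from windows spanning the hindcast plus forecast intervals, fit the hindcast portion of the modes to new data, and read the forecast off the forecast portion. A rough NumPy sketch of that idea only; the windowing, rank truncation, and least-squares projection below are assumptions, not the paper's reference implementation:

```python
import numpy as np

def stp_forecast(train: np.ndarray, new_hindcast: np.ndarray, n_forecast: int, rank: int) -> np.ndarray:
    """train: (n_snapshots, n_state); new_hindcast: (n_hindcast, n_state)."""
    n_hind, n_state = new_hindcast.shape
    win = n_hind + n_forecast
    # Space-time snapshots: each column is one flattened window of length win.
    windows = np.stack(
        [train[i:i + win].ravel() for i in range(train.shape[0] - win + 1)], axis=1
    )
    U, _, _ = np.linalg.svd(windows, full_matrices=False)
    modes = U[:, :rank]                     # leading space-time POD modes
    hind_part = modes[: n_hind * n_state]   # rows covering the hindcast interval
    fore_part = modes[n_hind * n_state:]    # rows covering the forecast interval
    coeffs, *_ = np.linalg.lstsq(hind_part, new_hindcast.ravel(), rcond=None)
    return (fore_part @ coeffs).reshape(n_forecast, n_state)
```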
2503.23691 | Xiaomei Li | Xiaomei Li, Alex Whan, Meredith McNeil, David Starns, Jessica Irons,
Samuel C. Andrew and Rad Suchecki | A Conceptual Framework for Human-AI Collaborative Genome Annotation | 17 pages, 3 figures | null | null | null | q-bio.GN cs.HC | http://creativecommons.org/licenses/by/4.0/ | Genome annotation is essential for understanding the functional elements
within genomes. While automated methods are indispensable for processing
large-scale genomic data, they often face challenges in accurately predicting
gene structures and functions. Consequently, manual curation by domain experts
remains crucial for validating and refining these predictions. These combined
outcomes from automated tools and manual curation highlight the importance of
integrating human expertise with AI capabilities to improve both the accuracy
and efficiency of genome annotation. However, the manual curation process is
inherently labor-intensive and time-consuming, making it difficult to scale for
large datasets. To address these challenges, we propose a conceptual framework,
Human-AI Collaborative Genome Annotation (HAICoGA), which leverages the
synergistic partnership between humans and artificial intelligence to enhance
human capabilities and accelerate the genome annotation process. Additionally,
we explore the potential of integrating Large Language Models (LLMs) into this
framework to support and augment specific tasks. Finally, we discuss emerging
challenges and outline open research questions to guide further exploration in
this area.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 03:44:00 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Xiaomei",
""
],
[
"Whan",
"Alex",
""
],
[
"McNeil",
"Meredith",
""
],
[
"Starns",
"David",
""
],
[
"Irons",
"Jessica",
""
],
[
"Andrew",
"Samuel C.",
""
],
[
"Suchecki",
"Rad",
""
]
] | TITLE: A Conceptual Framework for Human-AI Collaborative Genome Annotation
ABSTRACT: Genome annotation is essential for understanding the functional elements
within genomes. While automated methods are indispensable for processing
large-scale genomic data, they often face challenges in accurately predicting
gene structures and functions. Consequently, manual curation by domain experts
remains crucial for validating and refining these predictions. These combined
outcomes from automated tools and manual curation highlight the importance of
integrating human expertise with AI capabilities to improve both the accuracy
and efficiency of genome annotation. However, the manual curation process is
inherently labor-intensive and time-consuming, making it difficult to scale for
large datasets. To address these challenges, we propose a conceptual framework,
Human-AI Collaborative Genome Annotation (HAICoGA), which leverages the
synergistic partnership between humans and artificial intelligence to enhance
human capabilities and accelerate the genome annotation process. Additionally,
we explore the potential of integrating Large Language Models (LLMs) into this
framework to support and augment specific tasks. Finally, we discuss emerging
challenges and outline open research questions to guide further exploration in
this area.
|
2503.23702 | ShuFan Xi | Shufan Xi, Zexian Liu, Junlin Chang, Hongyu Wu, Xiaogang Wang, Aimin
Hao | 3D Dental Model Segmentation with Geometrical Boundary Preserving | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D intraoral scan mesh is widely used in digital dentistry diagnosis,
and segmenting it is a critical preliminary task. Numerous
approaches have been devised for precise tooth segmentation. Currently, deep
learning-based methods are capable of highly accurate segmentation of the
crown. However, the segmentation accuracy at the junction between the crown and
the gum is still below average. Existing down-sampling methods are unable to
effectively preserve the geometric details at the junction. To address these
problems, we propose CrossTooth, a boundary-preserving segmentation method that
combines 3D mesh selective downsampling to retain more vertices at the
tooth-gingiva area, along with cross-modal discriminative boundary features
extracted from multi-view rendered images, enhancing the geometric
representation of the segmentation network. Using a point network as a backbone
and incorporating image complementary features, CrossTooth significantly
improves segmentation accuracy, as demonstrated by experiments on a public
intraoral scan dataset.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 04:00:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xi",
"Shufan",
""
],
[
"Liu",
"Zexian",
""
],
[
"Chang",
"Junlin",
""
],
[
"Wu",
"Hongyu",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Hao",
"Aimin",
""
]
] | TITLE: 3D Dental Model Segmentation with Geometrical Boundary Preserving
ABSTRACT: 3D intraoral scan mesh is widely used in digital dentistry diagnosis,
and segmenting it is a critical preliminary task. Numerous
approaches have been devised for precise tooth segmentation. Currently, deep
learning-based methods are capable of highly accurate segmentation of the
crown. However, the segmentation accuracy at the junction between the crown and
the gum is still below average. Existing down-sampling methods are unable to
effectively preserve the geometric details at the junction. To address these
problems, we propose CrossTooth, a boundary-preserving segmentation method that
combines 3D mesh selective downsampling to retain more vertices at the
tooth-gingiva area, along with cross-modal discriminative boundary features
extracted from multi-view rendered images, enhancing the geometric
representation of the segmentation network. Using a point network as a backbone
and incorporating image complementary features, CrossTooth significantly
improves segmentation accuracy, as demonstrated by experiments on a public
intraoral scan dataset.
|
2503.23714 | Youmi Ma | Youmi Ma, Sakae Mizuki, Kazuki Fujii, Taishi Nakamura, Masanari Ohi,
Hinari Shimada, Taihei Shiotani, Koshiro Saito, Koki Maeda, Kakeru Hattori,
Takumi Okamoto, Shigeki Ishida, Rio Yokota, Hiroya Takamura, Naoaki Okazaki | Building Instruction-Tuning Datasets from Human-Written Instructions
with Open-Weight Large Language Models | 15 pages, 5 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Instruction tuning is crucial for enabling Large Language Models (LLMs) to
solve real-world tasks. Prior work has shown the effectiveness of
instruction-tuning data synthesized solely from LLMs, raising a fundamental
question: Do we still need human-originated signals for instruction tuning?
This work answers the question affirmatively: we build state-of-the-art
instruction-tuning datasets sourced from human-written instructions, by simply
pairing them with LLM-generated responses. LLMs fine-tuned on our datasets
consistently outperform those fine-tuned on existing ones. Our data
construction approach can be easily adapted to other languages; we build
datasets for Japanese and confirm that LLMs tuned with our data reach
state-of-the-art performance. Analyses suggest that instruction-tuning in a new
language allows LLMs to follow instructions, while the tuned models exhibit a
notable lack of culture-specific knowledge in that language. The datasets and
fine-tuned models will be publicly available. Our datasets, synthesized with
open-weight LLMs, are openly distributed under permissive licenses, allowing
for diverse use cases.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 04:28:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ma",
"Youmi",
""
],
[
"Mizuki",
"Sakae",
""
],
[
"Fujii",
"Kazuki",
""
],
[
"Nakamura",
"Taishi",
""
],
[
"Ohi",
"Masanari",
""
],
[
"Shimada",
"Hinari",
""
],
[
"Shiotani",
"Taihei",
""
],
[
"Saito",
"Koshiro",
""
],
[
"Maeda",
"Koki",
""
],
[
"Hattori",
"Kakeru",
""
],
[
"Okamoto",
"Takumi",
""
],
[
"Ishida",
"Shigeki",
""
],
[
"Yokota",
"Rio",
""
],
[
"Takamura",
"Hiroya",
""
],
[
"Okazaki",
"Naoaki",
""
]
] | TITLE: Building Instruction-Tuning Datasets from Human-Written Instructions
with Open-Weight Large Language Models
ABSTRACT: Instruction tuning is crucial for enabling Large Language Models (LLMs) to
solve real-world tasks. Prior work has shown the effectiveness of
instruction-tuning data synthesized solely from LLMs, raising a fundamental
question: Do we still need human-originated signals for instruction tuning?
This work answers the question affirmatively: we build state-of-the-art
instruction-tuning datasets sourced from human-written instructions, by simply
pairing them with LLM-generated responses. LLMs fine-tuned on our datasets
consistently outperform those fine-tuned on existing ones. Our data
construction approach can be easily adapted to other languages; we build
datasets for Japanese and confirm that LLMs tuned with our data reach
state-of-the-art performance. Analyses suggest that instruction-tuning in a new
language allows LLMs to follow instructions, while the tuned models exhibit a
notable lack of culture-specific knowledge in that language. The datasets and
fine-tuned models will be publicly available. Our datasets, synthesized with
open-weight LLMs, are openly distributed under permissive licenses, allowing
for diverse use cases.
|
2503.23715 | Qi Liu | Kun Liu, Qi Liu, Xinchen Liu, Jie Li, Yongdong Zhang, Jiebo Luo,
Xiaodong He, Wu Liu | HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video
Generation | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-video (T2V) generation has made tremendous progress in generating
complicated scenes based on texts. However, human-object interaction (HOI)
often cannot be precisely generated by current T2V models due to the lack of
large-scale videos with accurate captions for HOI. To address this issue, we
introduce HOIGen-1M, the first large-scale dataset for HOI Generation,
consisting of over one million high-quality videos collected from diverse
sources. In particular, to guarantee the high quality of videos, we first
design an efficient framework to automatically curate HOI videos using the
powerful multimodal large language models (MLLMs), and then the videos are
further cleaned by human annotators. Moreover, to obtain accurate textual
captions for HOI videos, we design a novel video description method based on a
Mixture-of-Multimodal-Experts (MoME) strategy that not only generates
expressive captions but also eliminates the hallucinations of individual MLLMs.
Furthermore, due to the lack of an evaluation framework for generated HOI
videos, we propose two new metrics to assess the quality of generated videos in
a coarse-to-fine manner. Extensive experiments reveal that current T2V models
struggle to generate high-quality HOI videos and confirm that our HOIGen-1M
dataset is instrumental for improving HOI video generation. Project webpage is
available at https://liuqi-creat.github.io/HOIGen.github.io.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 04:30:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Kun",
""
],
[
"Liu",
"Qi",
""
],
[
"Liu",
"Xinchen",
""
],
[
"Li",
"Jie",
""
],
[
"Zhang",
"Yongdong",
""
],
[
"Luo",
"Jiebo",
""
],
[
"He",
"Xiaodong",
""
],
[
"Liu",
"Wu",
""
]
] | TITLE: HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video
Generation
ABSTRACT: Text-to-video (T2V) generation has made tremendous progress in generating
complicated scenes based on texts. However, human-object interaction (HOI)
often cannot be precisely generated by current T2V models due to the lack of
large-scale videos with accurate captions for HOI. To address this issue, we
introduce HOIGen-1M, the first large-scale dataset for HOI Generation,
consisting of over one million high-quality videos collected from diverse
sources. In particular, to guarantee the high quality of videos, we first
design an efficient framework to automatically curate HOI videos using the
powerful multimodal large language models (MLLMs), and then the videos are
further cleaned by human annotators. Moreover, to obtain accurate textual
captions for HOI videos, we design a novel video description method based on a
Mixture-of-Multimodal-Experts (MoME) strategy that not only generates
expressive captions but also eliminates the hallucinations of individual MLLMs.
Furthermore, due to the lack of an evaluation framework for generated HOI
videos, we propose two new metrics to assess the quality of generated videos in
a coarse-to-fine manner. Extensive experiments reveal that current T2V models
struggle to generate high-quality HOI videos and confirm that our HOIGen-1M
dataset is instrumental for improving HOI video generation. Project webpage is
available at https://liuqi-creat.github.io/HOIGen.github.io.
|
2503.23717 | Yi Liu | Yi Liu, Wengen Li, Jihong Guan, Shuigeng Zhou, Yichao Zhang | Effective Cloud Removal for Remote Sensing Images by an Improved
Mean-Reverting Denoising Model with Elucidated Design Space | 29 pages, 12 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud removal (CR) remains a challenging task in remote sensing image
processing. Although diffusion models (DM) exhibit strong generative
capabilities, their direct applications to CR are suboptimal, as they generate
cloudless images from random noise, ignoring inherent information in cloudy
inputs. To overcome this drawback, we develop a new CR model EMRDM based on
mean-reverting diffusion models (MRDMs) to establish a direct diffusion process
between cloudy and cloudless images. Compared to current MRDMs, EMRDM offers a
modular framework with updatable modules and an elucidated design space, based
on a reformulated forward process and a new ordinary differential equation
(ODE)-based backward process. Leveraging our framework, we redesign key MRDM
modules to boost CR performance, including restructuring the denoiser via a
preconditioning technique, reorganizing the training process, and improving the
sampling process by introducing deterministic and stochastic samplers. To
achieve multi-temporal CR, we further develop a denoising network for
simultaneously denoising sequential images. Experiments on mono-temporal and
multi-temporal datasets demonstrate the superior performance of EMRDM. Our code
is available at https://github.com/Ly403/EMRDM.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 04:37:18 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Yi",
""
],
[
"Li",
"Wengen",
""
],
[
"Guan",
"Jihong",
""
],
[
"Zhou",
"Shuigeng",
""
],
[
"Zhang",
"Yichao",
""
]
] | TITLE: Effective Cloud Removal for Remote Sensing Images by an Improved
Mean-Reverting Denoising Model with Elucidated Design Space
ABSTRACT: Cloud removal (CR) remains a challenging task in remote sensing image
processing. Although diffusion models (DM) exhibit strong generative
capabilities, their direct applications to CR are suboptimal, as they generate
cloudless images from random noise, ignoring inherent information in cloudy
inputs. To overcome this drawback, we develop a new CR model EMRDM based on
mean-reverting diffusion models (MRDMs) to establish a direct diffusion process
between cloudy and cloudless images. Compared to current MRDMs, EMRDM offers a
modular framework with updatable modules and an elucidated design space, based
on a reformulated forward process and a new ordinary differential equation
(ODE)-based backward process. Leveraging our framework, we redesign key MRDM
modules to boost CR performance, including restructuring the denoiser via a
preconditioning technique, reorganizing the training process, and improving the
sampling process by introducing deterministic and stochastic samplers. To
achieve multi-temporal CR, we further develop a denoising network for
simultaneously denoising sequential images. Experiments on mono-temporal and
multi-temporal datasets demonstrate the superior performance of EMRDM. Our code
is available at https://github.com/Ly403/EMRDM.
|
2503.23725 | Hongwei Ren | Hongwei Ren, Xiaopeng Lin, Hongxiang Huang, Yue Zhou, Bojun Cheng | Exploring Temporal Dynamics in Event-based Eye Tracker | Accepted by CVPR 2025 Event-based Vision Workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eye-tracking is a vital technology for human-computer interaction, especially
in wearable devices such as AR, VR, and XR. The realization of high-speed and
high-precision eye-tracking using frame-based image sensors is constrained by
their limited temporal resolution, which impairs the accurate capture of rapid
ocular dynamics, such as saccades and blinks. Event cameras, inspired by
biological vision systems, are capable of perceiving eye movements with
extremely low power consumption and ultra-high temporal resolution. This makes
them a promising solution for achieving high-speed, high-precision tracking
with rich temporal dynamics. In this paper, we propose TDTracker, an effective
eye-tracking framework that captures rapid eye movements by thoroughly modeling
temporal dynamics from both implicit and explicit perspectives. TDTracker
utilizes 3D convolutional neural networks to capture implicit short-term
temporal dynamics and employs a cascaded structure consisting of a
Frequency-aware Module, GRU, and Mamba to extract explicit long-term temporal
dynamics. Ultimately, a prediction heatmap is used for eye coordinate
regression. Experimental results demonstrate that TDTracker achieves
state-of-the-art (SOTA) performance on the synthetic SEET dataset and secured
Third place in the CVPR event-based eye-tracking challenge 2025. Our code is
available at https://github.com/rhwxmx/TDTracker.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 04:57:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ren",
"Hongwei",
""
],
[
"Lin",
"Xiaopeng",
""
],
[
"Huang",
"Hongxiang",
""
],
[
"Zhou",
"Yue",
""
],
[
"Cheng",
"Bojun",
""
]
] | TITLE: Exploring Temporal Dynamics in Event-based Eye Tracker
ABSTRACT: Eye-tracking is a vital technology for human-computer interaction, especially
in wearable devices such as AR, VR, and XR. The realization of high-speed and
high-precision eye-tracking using frame-based image sensors is constrained by
their limited temporal resolution, which impairs the accurate capture of rapid
ocular dynamics, such as saccades and blinks. Event cameras, inspired by
biological vision systems, are capable of perceiving eye movements with
extremely low power consumption and ultra-high temporal resolution. This makes
them a promising solution for achieving high-speed, high-precision tracking
with rich temporal dynamics. In this paper, we propose TDTracker, an effective
eye-tracking framework that captures rapid eye movements by thoroughly modeling
temporal dynamics from both implicit and explicit perspectives. TDTracker
utilizes 3D convolutional neural networks to capture implicit short-term
temporal dynamics and employs a cascaded structure consisting of a
Frequency-aware Module, GRU, and Mamba to extract explicit long-term temporal
dynamics. Ultimately, a prediction heatmap is used for eye coordinate
regression. Experimental results demonstrate that TDTracker achieves
state-of-the-art (SOTA) performance on the synthetic SEET dataset and secured
Third place in the CVPR event-based eye-tracking challenge 2025. Our code is
available at https://github.com/rhwxmx/TDTracker.
|
2503.23726 | Feng Li | Lina Wang, Yunsheng Yuan, Chunxiao Wang, Feng Li | PDSL: Privacy-Preserved Decentralized Stochastic Learning with
Heterogeneous Data Distribution | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the paradigm of decentralized learning, a group of agents collaborates to
learn a global model using distributed datasets without a central server.
However, due to the heterogeneity of the local data across the different
agents, learning a robust global model is rather challenging. Moreover, the
collaboration of the agents relies on their gradient information exchange,
which poses a risk of privacy leakage. In this paper, to address these issues,
we propose PDSL, a novel privacy-preserved decentralized stochastic learning
algorithm with heterogeneous data distribution. On one hand, we innovate in
utilizing the notion of Shapley values such that each agent can precisely
measure the contributions of its heterogeneous neighbors to the global learning
goal; on the other hand, we leverage the notion of differential privacy to
prevent each agent from suffering privacy leakage when it contributes gradient
information to its neighbors. We conduct both solid theoretical analysis and
extensive experiments to demonstrate the efficacy of our PDSL algorithm in
terms of privacy preservation and convergence.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 04:58:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Lina",
""
],
[
"Yuan",
"Yunsheng",
""
],
[
"Wang",
"Chunxiao",
""
],
[
"Li",
"Feng",
""
]
] | TITLE: PDSL: Privacy-Preserved Decentralized Stochastic Learning with
Heterogeneous Data Distribution
ABSTRACT: In the paradigm of decentralized learning, a group of agents collaborates to
learn a global model using distributed datasets without a central server.
However, due to the heterogeneity of the local data across the different
agents, learning a robust global model is rather challenging. Moreover, the
collaboration of the agents relies on their gradient information exchange,
which poses a risk of privacy leakage. In this paper, to address these issues,
we propose PDSL, a novel privacy-preserved decentralized stochastic learning
algorithm with heterogeneous data distribution. On one hand, we innovate in
utilizing the notion of Shapley values such that each agent can precisely
measure the contributions of its heterogeneous neighbors to the global learning
goal; on the other hand, we leverage the notion of differential privacy to
prevent each agent from suffering privacy leakage when it contributes gradient
information to its neighbors. We conduct both solid theoretical analysis and
extensive experiments to demonstrate the efficacy of our PDSL algorithm in
terms of privacy preservation and convergence.
|
2503.23736 | Lingyu Liu | Lingyu Liu, Yaxiong Wang, Li Zhu, Zhedong Zheng | Every Painting Awakened: A Training-free Framework for
Painting-to-Animation Generation | The project is available at:
https://painting-animation.github.io/animation/ | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a training-free framework specifically designed to bring
real-world static paintings to life through image-to-video (I2V) synthesis,
addressing the persistent challenge of aligning these motions with textual
guidance while preserving fidelity to the original artworks. Existing I2V
methods, primarily trained on natural video datasets, often struggle to
generate dynamic outputs from static paintings. It remains challenging to
generate motion while maintaining visual consistency with real-world paintings.
This results in two distinct failure modes: either static outputs due to
limited text-based motion interpretation or distorted dynamics caused by
inadequate alignment with real-world artistic styles. We leverage the advanced
text-image alignment capabilities of pre-trained image models to guide the
animation process. Our approach introduces synthetic proxy images through two
key innovations: (1) Dual-path score distillation: We employ a dual-path
architecture to distill motion priors from both real and synthetic data,
preserving static details from the original painting while learning dynamic
characteristics from synthetic frames. (2) Hybrid latent fusion: We integrate
hybrid features extracted from real paintings and synthetic proxy images via
spherical linear interpolation in the latent space, ensuring smooth transitions
and enhancing temporal consistency. Experimental evaluations confirm that our
approach significantly improves semantic alignment with text prompts while
faithfully preserving the unique characteristics and integrity of the original
paintings. Crucially, by achieving enhanced dynamic effects without requiring
any model training or learnable parameters, our framework enables plug-and-play
integration with existing I2V methods, making it an ideal solution for
animating real-world paintings. More animated examples can be found on our
project website.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 05:25:49 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Lingyu",
""
],
[
"Wang",
"Yaxiong",
""
],
[
"Zhu",
"Li",
""
],
[
"Zheng",
"Zhedong",
""
]
] | TITLE: Every Painting Awakened: A Training-free Framework for
Painting-to-Animation Generation
ABSTRACT: We introduce a training-free framework specifically designed to bring
real-world static paintings to life through image-to-video (I2V) synthesis,
addressing the persistent challenge of aligning these motions with textual
guidance while preserving fidelity to the original artworks. Existing I2V
methods, primarily trained on natural video datasets, often struggle to
generate dynamic outputs from static paintings. It remains challenging to
generate motion while maintaining visual consistency with real-world paintings.
This results in two distinct failure modes: either static outputs due to
limited text-based motion interpretation or distorted dynamics caused by
inadequate alignment with real-world artistic styles. We leverage the advanced
text-image alignment capabilities of pre-trained image models to guide the
animation process. Our approach introduces synthetic proxy images through two
key innovations: (1) Dual-path score distillation: We employ a dual-path
architecture to distill motion priors from both real and synthetic data,
preserving static details from the original painting while learning dynamic
characteristics from synthetic frames. (2) Hybrid latent fusion: We integrate
hybrid features extracted from real paintings and synthetic proxy images via
spherical linear interpolation in the latent space, ensuring smooth transitions
and enhancing temporal consistency. Experimental evaluations confirm that our
approach significantly improves semantic alignment with text prompts while
faithfully preserving the unique characteristics and integrity of the original
paintings. Crucially, by achieving enhanced dynamic effects without requiring
any model training or learnable parameters, our framework enables plug-and-play
integration with existing I2V methods, making it an ideal solution for
animating real-world paintings. More animated examples can be found on our
project website.
|
2503.23740 | Lu Fan | Lu Fan, Jiashu Pu, Rongsheng Zhang, Xiao-Ming Wu | LANID: LLM-assisted New Intent Discovery | Published in LREC-COLING 2024 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Task-oriented Dialogue Systems (TODS) often face the challenge of
encountering new intents. New Intent Discovery (NID) is a crucial task that
aims to identify these novel intents while maintaining the capability to
recognize existing ones. Previous efforts to adapt TODS to new intents have
struggled with inadequate semantic representation or have depended on external
knowledge, which is often not scalable or flexible. Recently, Large Language
Models (LLMs) have demonstrated strong zero-shot capabilities; however, their
scale can be impractical for real-world applications that involve extensive
queries. To address the limitations of existing NID methods by leveraging LLMs,
we propose LANID, a framework that enhances the semantic representation of
lightweight NID encoders with the guidance of LLMs. Specifically, LANID employs
the $K$-nearest neighbors and Density-Based Spatial Clustering of Applications
with Noise (DBSCAN) algorithms to sample selective utterance pairs from the
training set. It then queries an LLM to ascertain the relationships between
these pairs. The data produced from this process is utilized to design a
contrastive fine-tuning task, which is then used to train a small encoder with
a contrastive triplet loss. Our experimental results demonstrate the efficacy
of the proposed method across three distinct NID datasets, surpassing strong
baselines in both unsupervised and semi-supervised settings. Our code is
available at https://github.com/floatSDSDS/LANID.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 05:34:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fan",
"Lu",
""
],
[
"Pu",
"Jiashu",
""
],
[
"Zhang",
"Rongsheng",
""
],
[
"Wu",
"Xiao-Ming",
""
]
] | TITLE: LANID: LLM-assisted New Intent Discovery
ABSTRACT: Task-oriented Dialogue Systems (TODS) often face the challenge of
encountering new intents. New Intent Discovery (NID) is a crucial task that
aims to identify these novel intents while maintaining the capability to
recognize existing ones. Previous efforts to adapt TODS to new intents have
struggled with inadequate semantic representation or have depended on external
knowledge, which is often not scalable or flexible. Recently, Large Language
Models (LLMs) have demonstrated strong zero-shot capabilities; however, their
scale can be impractical for real-world applications that involve extensive
queries. To address the limitations of existing NID methods by leveraging LLMs,
we propose LANID, a framework that enhances the semantic representation of
lightweight NID encoders with the guidance of LLMs. Specifically, LANID employs
the $K$-nearest neighbors and Density-Based Spatial Clustering of Applications
with Noise (DBSCAN) algorithms to sample selective utterance pairs from the
training set. It then queries an LLM to ascertain the relationships between
these pairs. The data produced from this process is utilized to design a
contrastive fine-tuning task, which is then used to train a small encoder with
a contrastive triplet loss. Our experimental results demonstrate the efficacy
of the proposed method across three distinct NID datasets, surpassing strong
baselines in both unsupervised and semi-supervised settings. Our code is
available at https://github.com/floatSDSDS/LANID.
|
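The abstract above outlines LANID's data-construction step: sample utterance pairs via $K$-nearest neighbors and DBSCAN over the training set, then ask an LLM to label each pair's relationship. A hedged scikit-learn sketch of that sampling step only; the function name, thresholds, and pairing heuristics are assumptions, and the LLM-labeling call is omitted:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def sample_utterance_pairs(embeddings: np.ndarray, k: int = 5, eps: float = 0.5) -> list[tuple[int, int]]:
    """Candidate index pairs drawn from local (KNN) and cluster-level (DBSCAN) structure."""
    pairs: set[tuple[int, int]] = set()
    # KNN pairs: each utterance with its nearest neighbors (skipping itself).
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(embeddings).kneighbors(embeddings)
    for i, neighbors in enumerate(idx):
        for j in neighbors[1:]:
            pairs.add(tuple(sorted((i, int(j)))))
    # DBSCAN pairs: link points that fall in the same density-based cluster.
    labels = DBSCAN(eps=eps, min_samples=k).fit_predict(embeddings)
    for cluster in set(labels) - {-1}:
        members = np.where(labels == cluster)[0]
        for a, b in zip(members[:-1], members[1:]):
            pairs.add((int(a), int(b)))
    return sorted(pairs)

# Each sampled pair would then be sent to an LLM to judge whether the two utterances
# share an intent, and the judgments drive contrastive fine-tuning of a small encoder.
```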
2503.23746 | Dizhan Xue | Dizhan Xue, Jing Cui, Shengsheng Qian, Chuanrui Hu, Changsheng Xu | Short-video Propagation Influence Rating: A New Real-world Dataset and A
New Large Graph Model | null | null | null | null | cs.CV cs.CL cs.LG cs.MM cs.SI | http://creativecommons.org/licenses/by/4.0/ | Short-video platforms have gained immense popularity, captivating the
interest of millions, if not billions, of users globally. Recently, researchers
have highlighted the significance of analyzing the propagation of short-videos,
which typically involves discovering commercial values, public opinions, user
behaviors, etc. This paper proposes a new Short-video Propagation Influence
Rating (SPIR) task and aims to promote SPIR from both the dataset and method
perspectives. First, we propose a new Cross-platform Short-Video (XS-Video)
dataset, which aims to provide a large-scale and real-world short-video
propagation network across various platforms to facilitate the research on
short-video propagation. Our XS-Video dataset includes 117,720 videos, 381,926
samples, and 535 topics across the 5 biggest Chinese platforms, annotated with the
propagation influence from level 0 to 9. To the best of our knowledge, this is
the first large-scale short-video dataset that contains cross-platform data or
provides all of the views, likes, shares, collects, fans, comments, and comment
content. Second, we propose a Large Graph Model (LGM) named NetGPT, based on a
novel three-stage training mechanism, to bridge heterogeneous graph-structured
data with the powerful reasoning ability and knowledge of Large Language Models
(LLMs). Our NetGPT can comprehend and analyze the short-video propagation
graph, enabling it to predict the long-term propagation influence of
short-videos. Comprehensive experimental results evaluated by both
classification and regression metrics on our XS-Video dataset indicate the
superiority of our method for SPIR.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 05:53:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xue",
"Dizhan",
""
],
[
"Cui",
"Jing",
""
],
[
"Qian",
"Shengsheng",
""
],
[
"Hu",
"Chuanrui",
""
],
[
"Xu",
"Changsheng",
""
]
] | TITLE: Short-video Propagation Influence Rating: A New Real-world Dataset and A
New Large Graph Model
ABSTRACT: Short-video platforms have gained immense popularity, captivating the
interest of millions, if not billions, of users globally. Recently, researchers
have highlighted the significance of analyzing the propagation of short-videos,
which typically involves discovering commercial values, public opinions, user
behaviors, etc. This paper proposes a new Short-video Propagation Influence
Rating (SPIR) task and aims to promote SPIR from both the dataset and method
perspectives. First, we propose a new Cross-platform Short-Video (XS-Video)
dataset, which aims to provide a large-scale and real-world short-video
propagation network across various platforms to facilitate the research on
short-video propagation. Our XS-Video dataset includes 117,720 videos, 381,926
samples, and 535 topics across the 5 biggest Chinese platforms, annotated with the
propagation influence from level 0 to 9. To the best of our knowledge, this is
the first large-scale short-video dataset that contains cross-platform data or
provides all of the views, likes, shares, collects, fans, comments, and comment
content. Second, we propose a Large Graph Model (LGM) named NetGPT, based on a
novel three-stage training mechanism, to bridge heterogeneous graph-structured
data with the powerful reasoning ability and knowledge of Large Language Models
(LLMs). Our NetGPT can comprehend and analyze the short-video propagation
graph, enabling it to predict the long-term propagation influence of
short-videos. Comprehensive experimental results evaluated by both
classification and regression metrics on our XS-Video dataset indicate the
superiority of our method for SPIR.
|
2503.23747 | Jingyi Zhou | Jingyi Zhou, Peng Ye, Haoyu Zhang, Jiakang Yuan, Rao Qiang, Liu
YangChenXu, Wu Cailin, Feng Xu, Tao Chen | Consistency-aware Self-Training for Iterative-based Stereo Matching | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Iterative-based methods have become mainstream in stereo matching due to
their high performance. However, these methods heavily rely on labeled data and
face challenges with unlabeled real-world data. To this end, we propose a
consistency-aware self-training framework for iterative-based stereo matching
for the first time, leveraging real-world unlabeled data in a teacher-student
manner. We first observe that regions with larger errors tend to exhibit more
pronounced oscillation characteristics during model prediction. Based on this,
we introduce a novel consistency-aware soft filtering module to evaluate the
reliability of teacher-predicted pseudo-labels, which consists of a
multi-resolution prediction consistency filter and an iterative prediction
consistency filter to assess the prediction fluctuations of multiple
resolutions and iterative optimization respectively. Further, we introduce a
consistency-aware soft-weighted loss to adjust the weight of pseudo-labels
accordingly, relieving the error accumulation and performance degradation
problem due to incorrect pseudo-labels. Extensive experiments demonstrate that
our method can improve the performance of various iterative-based stereo
matching approaches in various scenarios. In particular, our method can achieve
further enhancements over the current SOTA methods on several benchmark
datasets.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 05:58:25 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhou",
"Jingyi",
""
],
[
"Ye",
"Peng",
""
],
[
"Zhang",
"Haoyu",
""
],
[
"Yuan",
"Jiakang",
""
],
[
"Qiang",
"Rao",
""
],
[
"YangChenXu",
"Liu",
""
],
[
"Cailin",
"Wu",
""
],
[
"Xu",
"Feng",
""
],
[
"Chen",
"Tao",
""
]
] | TITLE: Consistency-aware Self-Training for Iterative-based Stereo Matching
ABSTRACT: Iterative-based methods have become mainstream in stereo matching due to
their high performance. However, these methods heavily rely on labeled data and
face challenges with unlabeled real-world data. To this end, we propose a
consistency-aware self-training framework for iterative-based stereo matching
for the first time, leveraging real-world unlabeled data in a teacher-student
manner. We first observe that regions with larger errors tend to exhibit more
pronounced oscillation characteristics during model prediction. Based on this,
we introduce a novel consistency-aware soft filtering module to evaluate the
reliability of teacher-predicted pseudo-labels, which consists of a
multi-resolution prediction consistency filter and an iterative prediction
consistency filter to assess the prediction fluctuations of multiple
resolutions and iterative optimization respectively. Further, we introduce a
consistency-aware soft-weighted loss to adjust the weight of pseudo-labels
accordingly, relieving the error accumulation and performance degradation
problem due to incorrect pseudo-labels. Extensive experiments demonstrate that
our method can improve the performance of various iterative-based stereo
matching approaches in various scenarios. In particular, our method can achieve
further enhancements over the current SOTA methods on several benchmark
datasets.
|
2503.23748 | Yujin Huang | Yujin Huang, Zhi Zhang, Qingchuan Zhao, Xingliang Yuan, Chunyang Chen | THEMIS: Towards Practical Intellectual Property Protection for
Post-Deployment On-Device Deep Learning Models | To Appear in the 34th USENIX Security Symposium, August 13-15, 2025 | null | null | null | cs.CR cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | On-device deep learning (DL) has rapidly gained adoption in mobile apps,
offering the benefits of offline model inference and user privacy preservation
over cloud-based approaches. However, it inevitably stores models on user
devices, introducing new vulnerabilities, particularly model-stealing attacks
and intellectual property infringement. While system-level protections like
Trusted Execution Environments (TEEs) provide a robust solution, practical
challenges remain in achieving scalable on-device DL model protection,
including complexities in supporting third-party models and limited adoption in
current mobile solutions. Advancements in TEE-enabled hardware, such as
NVIDIA's GPU-based TEEs, may address these obstacles in the future. Currently,
watermarking serves as a common defense against model theft but also faces
challenges here as many mobile app developers lack corresponding machine
learning expertise and the inherent read-only and inference-only nature of
on-device DL models prevents third parties like app stores from implementing
existing watermarking techniques in post-deployment models.
To protect the intellectual property of on-device DL models, in this paper,
we propose THEMIS, an automatic tool that lifts the read-only restriction of
on-device DL models by reconstructing their writable counterparts and leverages
the untrainable nature of on-device DL models to solve watermark parameters and
protect the model owner's intellectual property. Extensive experimental results
across various datasets and model structures show the superiority of THEMIS in
terms of different metrics. Further, an empirical investigation of 403
real-world DL mobile apps from Google Play is performed with a success rate of
81.14%, showing the practicality of THEMIS.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 05:58:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Huang",
"Yujin",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Zhao",
"Qingchuan",
""
],
[
"Yuan",
"Xingliang",
""
],
[
"Chen",
"Chunyang",
""
]
] | TITLE: THEMIS: Towards Practical Intellectual Property Protection for
Post-Deployment On-Device Deep Learning Models
ABSTRACT: On-device deep learning (DL) has rapidly gained adoption in mobile apps,
offering the benefits of offline model inference and user privacy preservation
over cloud-based approaches. However, it inevitably stores models on user
devices, introducing new vulnerabilities, particularly model-stealing attacks
and intellectual property infringement. While system-level protections like
Trusted Execution Environments (TEEs) provide a robust solution, practical
challenges remain in achieving scalable on-device DL model protection,
including complexities in supporting third-party models and limited adoption in
current mobile solutions. Advancements in TEE-enabled hardware, such as
NVIDIA's GPU-based TEEs, may address these obstacles in the future. Currently,
watermarking serves as a common defense against model theft but also faces
challenges here as many mobile app developers lack corresponding machine
learning expertise and the inherent read-only and inference-only nature of
on-device DL models prevents third parties like app stores from implementing
existing watermarking techniques in post-deployment models.
To protect the intellectual property of on-device DL models, in this paper,
we propose THEMIS, an automatic tool that lifts the read-only restriction of
on-device DL models by reconstructing their writable counterparts and leverages
the untrainable nature of on-device DL models to solve watermark parameters and
protect the model owner's intellectual property. Extensive experimental results
across various datasets and model structures show the superiority of THEMIS in
terms of different metrics. Further, an empirical investigation of 403
real-world DL mobile apps from Google Play is performed with a success rate of
81.14%, showing the practicality of THEMIS.
|
2503.23752 | Jin Zhou | Jin Zhou, Yi Zhou, Pengfei Xu and Hui Huang | StrokeFusion: Vector Sketch Generation via Joint Stroke-UDF Encoding and
Latent Sequence Diffusion | null | null | null | null | cs.GR cs.CV | http://creativecommons.org/licenses/by/4.0/ | In the field of sketch generation, raster-format trained models often produce
non-stroke artifacts, while vector-format trained models typically lack a
holistic understanding of sketches, leading to compromised recognizability.
Moreover, existing methods struggle to extract common features from similar
elements (e.g., eyes of animals) appearing at varying positions across
sketches. To address these challenges, we propose StrokeFusion, a two-stage
framework for vector sketch generation. It contains a dual-modal sketch feature
learning network that maps strokes into a high-quality latent space. This
network decomposes sketches into normalized strokes and jointly encodes stroke
sequences with Unsigned Distance Function (UDF) maps, representing sketches as
sets of stroke feature vectors. Building upon this representation, our
framework exploits a stroke-level latent diffusion model that simultaneously
adjusts stroke position, scale, and trajectory during generation. This enables
high-fidelity sketch generation while supporting stroke interpolation editing.
Extensive experiments on the QuickDraw dataset demonstrate that our framework
outperforms state-of-the-art techniques, validating its effectiveness in
preserving structural integrity and semantic features. Code and models will be
made publicly available upon publication.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 06:03:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhou",
"Jin",
""
],
[
"Zhou",
"Yi",
""
],
[
"Xu",
"Pengfei",
""
],
[
"Huang",
"Hui",
""
]
] | TITLE: StrokeFusion: Vector Sketch Generation via Joint Stroke-UDF Encoding and
Latent Sequence Diffusion
ABSTRACT: In the field of sketch generation, raster-format trained models often produce
non-stroke artifacts, while vector-format trained models typically lack a
holistic understanding of sketches, leading to compromised recognizability.
Moreover, existing methods struggle to extract common features from similar
elements (e.g., eyes of animals) appearing at varying positions across
sketches. To address these challenges, we propose StrokeFusion, a two-stage
framework for vector sketch generation. It contains a dual-modal sketch feature
learning network that maps strokes into a high-quality latent space. This
network decomposes sketches into normalized strokes and jointly encodes stroke
sequences with Unsigned Distance Function (UDF) maps, representing sketches as
sets of stroke feature vectors. Building upon this representation, our
framework exploits a stroke-level latent diffusion model that simultaneously
adjusts stroke position, scale, and trajectory during generation. This enables
high-fidelity sketch generation while supporting stroke interpolation editing.
Extensive experiments on the QuickDraw dataset demonstrate that our framework
outperforms state-of-the-art techniques, validating its effectiveness in
preserving structural integrity and semantic features. Code and models will be
made publicly available upon publication.
|
2503.23762 | Yuanyuan Wang | Yuanyuan Wang, Hangting Chen, Dongchao Yang, Weiqin Li, Dan Luo,
Guangzhi Li, Shan Yang, Zhiyong Wu, Helen Meng, Xixin Wu | UniSep: Universal Target Audio Separation with Language Models at Scale | Accepted by ICME 2025 | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Universal target audio Separation (UniSep), addressing the
separation task on arbitrary mixtures of different types of audio.
Distinguished from previous studies, UniSep is performed on unlimited source
domains and unlimited source numbers. We formulate the separation task as a
sequence-to-sequence problem, and a large language model (LLM) is used to model
the audio sequence in the discrete latent space, leveraging the power of LLM in
handling complex mixture audios with large-scale data. Moreover, a novel
pre-training strategy is proposed to utilize audio-only data, which reduces the
efforts of large-scale data simulation and enhances the ability of LLMs to
understand the consistency and correlation of information within audio
sequences. We also demonstrate the effectiveness of scaling datasets in an
audio separation task: we use large-scale data (36.5k hours), including speech,
music, and sound, to train a universal target audio separation model that is
not limited to a specific domain. Experiments show that UniSep achieves
competitive subjective and objective evaluation results compared with
single-task models.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 06:27:37 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Yuanyuan",
""
],
[
"Chen",
"Hangting",
""
],
[
"Yang",
"Dongchao",
""
],
[
"Li",
"Weiqin",
""
],
[
"Luo",
"Dan",
""
],
[
"Li",
"Guangzhi",
""
],
[
"Yang",
"Shan",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Meng",
"Helen",
""
],
[
"Wu",
"Xixin",
""
]
] | TITLE: UniSep: Universal Target Audio Separation with Language Models at Scale
ABSTRACT: We propose Universal target audio Separation (UniSep), addressing the
separation task on arbitrary mixtures of different types of audio.
Unlike previous studies, UniSep operates on unrestricted source
domains and an unrestricted number of sources. We formulate the separation task as a
sequence-to-sequence problem, and a large language model (LLM) is used to model
the audio sequence in the discrete latent space, leveraging the power of LLMs in
handling complex audio mixtures with large-scale data. Moreover, a novel
pre-training strategy is proposed to utilize audio-only data, which reduces the
efforts of large-scale data simulation and enhances the ability of LLMs to
understand the consistency and correlation of information within audio
sequences. We also demonstrate the effectiveness of scaling datasets in an
audio separation task: we use large-scale data (36.5k hours), including speech,
music, and sound, to train a universal target audio separation model that is
not limited to a specific domain. Experiments show that UniSep achieves
competitive subjective and objective evaluation results compared with
single-task models.
|
2503.23766 | Jiangjie Qiu | Jiangjie Qiu, Hou Hei Lam, Xiuyuan Hu, Wentao Li, Siwei Fu, Fankun
Zeng, Hao Zhang, Xiaonan Wang | Accelerating High-Efficiency Organic Photovoltaic Discovery via
Pretrained Graph Neural Networks and Generative Reinforcement Learning | AI for Accelerated Materials Design - ICLR 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Organic photovoltaic (OPV) materials offer a promising avenue toward
cost-effective solar energy utilization. However, optimizing donor-acceptor
(D-A) combinations to achieve high power conversion efficiency (PCE) remains a
significant challenge. In this work, we propose a framework that integrates
large-scale pretraining of graph neural networks (GNNs) with a GPT-2
(Generative Pretrained Transformer 2)-based reinforcement learning (RL)
strategy to design OPV molecules with potentially high PCE. This approach
produces candidate molecules with predicted efficiencies approaching 21\%,
although further experimental validation is required. Moreover, we conducted a
preliminary fragment-level analysis to identify structural motifs recognized by
the RL model that may contribute to enhanced PCE, thus providing design
guidelines for the broader research community. To facilitate continued
discovery, we are building the largest open-source OPV dataset to date,
expected to include nearly 3,000 donor-acceptor pairs. Finally, we discuss
plans to collaborate with experimental teams on synthesizing and characterizing
AI-designed molecules, which will provide new data to refine and improve our
predictive and generative models.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 06:31:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Qiu",
"Jiangjie",
""
],
[
"Lam",
"Hou Hei",
""
],
[
"Hu",
"Xiuyuan",
""
],
[
"Li",
"Wentao",
""
],
[
"Fu",
"Siwei",
""
],
[
"Zeng",
"Fankun",
""
],
[
"Zhang",
"Hao",
""
],
[
"Wang",
"Xiaonan",
""
]
] | TITLE: Accelerating High-Efficiency Organic Photovoltaic Discovery via
Pretrained Graph Neural Networks and Generative Reinforcement Learning
ABSTRACT: Organic photovoltaic (OPV) materials offer a promising avenue toward
cost-effective solar energy utilization. However, optimizing donor-acceptor
(D-A) combinations to achieve high power conversion efficiency (PCE) remains a
significant challenge. In this work, we propose a framework that integrates
large-scale pretraining of graph neural networks (GNNs) with a GPT-2
(Generative Pretrained Transformer 2)-based reinforcement learning (RL)
strategy to design OPV molecules with potentially high PCE. This approach
produces candidate molecules with predicted efficiencies approaching 21\%,
although further experimental validation is required. Moreover, we conducted a
preliminary fragment-level analysis to identify structural motifs recognized by
the RL model that may contribute to enhanced PCE, thus providing design
guidelines for the broader research community. To facilitate continued
discovery, we are building the largest open-source OPV dataset to date,
expected to include nearly 3,000 donor-acceptor pairs. Finally, we discuss
plans to collaborate with experimental teams on synthesizing and characterizing
AI-designed molecules, which will provide new data to refine and improve our
predictive and generative models.
|
2503.23767 | Linghao Feng | Linghao Feng, Dongcheng Zhao, Sicheng Shen, Yi Zeng | Biologically Inspired Spiking Diffusion Model with Adaptive Lateral
Selection Mechanism | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Lateral connection is a fundamental feature of biological neural circuits,
facilitating local information processing and adaptive learning. In this work,
we integrate lateral connections with a substructure selection network to
develop a novel diffusion model based on spiking neural networks (SNNs). Unlike
conventional artificial neural networks, SNNs employ an intrinsic spiking inner
loop to process sequential binary spikes. We leverage this spiking inner loop
alongside a lateral connection mechanism to iteratively refine the substructure
selection network, enhancing model adaptability and expressivity. Specifically,
we design a lateral connection framework comprising a learnable lateral matrix
and a lateral mapping function, both implemented using spiking neurons, to
dynamically update lateral connections. Through mathematical modeling, we
establish that the proposed lateral update mechanism, under a well-defined
local objective, aligns with biologically plausible synaptic plasticity
principles. Extensive experiments validate the effectiveness of our approach,
analyzing the role of substructure selection and lateral connection during
training. Furthermore, quantitative comparisons demonstrate that our model
consistently surpasses state-of-the-art SNN-based generative models across
multiple benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 06:31:50 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Feng",
"Linghao",
""
],
[
"Zhao",
"Dongcheng",
""
],
[
"Shen",
"Sicheng",
""
],
[
"Zeng",
"Yi",
""
]
] | TITLE: Biologically Inspired Spiking Diffusion Model with Adaptive Lateral
Selection Mechanism
ABSTRACT: Lateral connection is a fundamental feature of biological neural circuits,
facilitating local information processing and adaptive learning. In this work,
we integrate lateral connections with a substructure selection network to
develop a novel diffusion model based on spiking neural networks (SNNs). Unlike
conventional artificial neural networks, SNNs employ an intrinsic spiking inner
loop to process sequential binary spikes. We leverage this spiking inner loop
alongside a lateral connection mechanism to iteratively refine the substructure
selection network, enhancing model adaptability and expressivity. Specifically,
we design a lateral connection framework comprising a learnable lateral matrix
and a lateral mapping function, both implemented using spiking neurons, to
dynamically update lateral connections. Through mathematical modeling, we
establish that the proposed lateral update mechanism, under a well-defined
local objective, aligns with biologically plausible synaptic plasticity
principles. Extensive experiments validate the effectiveness of our approach,
analyzing the role of substructure selection and lateral connection during
training. Furthermore, quantitative comparisons demonstrate that our model
consistently surpasses state-of-the-art SNN-based generative models across
multiple benchmark datasets.
|
2503.23768 | Zhecheng Li | Zhecheng Li, Guoxian Song, Yujun Cai, Zhen Xiong, Junsong Yuan, Yiwei
Wang | Texture or Semantics? Vision-Language Models Get Lost in Font
Recognition | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modern Vision-Language Models (VLMs) exhibit remarkable visual and linguistic
capabilities, achieving impressive performance in various tasks such as image
recognition and object localization. However, their effectiveness in
fine-grained tasks remains an open question. In everyday scenarios, individuals
encountering design materials, such as magazines, typography tutorials,
research papers, or branding content, may wish to identify aesthetically
pleasing fonts used in the text. Given their multimodal capabilities and free
accessibility, many VLMs are often considered potential tools for font
recognition. This raises a fundamental question: Do VLMs truly possess the
capability to recognize fonts? To investigate this, we introduce the Font
Recognition Benchmark (FRB), a compact and well-structured dataset comprising
15 commonly used fonts. FRB includes two versions: (i) an easy version, where
10 sentences are rendered in different fonts, and (ii) a hard version, where
each text sample consists of the names of the 15 fonts themselves, introducing
a Stroop effect that challenges model perception. Through extensive evaluation
of various VLMs on font recognition tasks, we arrive at the following key
findings: (i) Current VLMs exhibit limited font recognition capabilities, with
many state-of-the-art models failing to achieve satisfactory performance. (ii)
Few-shot learning and Chain-of-Thought (CoT) prompting provide minimal benefits
in improving font recognition accuracy across different VLMs. (iii) Attention
analysis sheds light on the inherent limitations of VLMs in capturing semantic
features.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 06:33:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Zhecheng",
""
],
[
"Song",
"Guoxian",
""
],
[
"Cai",
"Yujun",
""
],
[
"Xiong",
"Zhen",
""
],
[
"Yuan",
"Junsong",
""
],
[
"Wang",
"Yiwei",
""
]
] | TITLE: Texture or Semantics? Vision-Language Models Get Lost in Font
Recognition
ABSTRACT: Modern Vision-Language Models (VLMs) exhibit remarkable visual and linguistic
capabilities, achieving impressive performance in various tasks such as image
recognition and object localization. However, their effectiveness in
fine-grained tasks remains an open question. In everyday scenarios, individuals
encountering design materials, such as magazines, typography tutorials,
research papers, or branding content, may wish to identify aesthetically
pleasing fonts used in the text. Given their multimodal capabilities and free
accessibility, many VLMs are often considered potential tools for font
recognition. This raises a fundamental question: Do VLMs truly possess the
capability to recognize fonts? To investigate this, we introduce the Font
Recognition Benchmark (FRB), a compact and well-structured dataset comprising
15 commonly used fonts. FRB includes two versions: (i) an easy version, where
10 sentences are rendered in different fonts, and (ii) a hard version, where
each text sample consists of the names of the 15 fonts themselves, introducing
a Stroop effect that challenges model perception. Through extensive evaluation
of various VLMs on font recognition tasks, we arrive at the following key
findings: (i) Current VLMs exhibit limited font recognition capabilities, with
many state-of-the-art models failing to achieve satisfactory performance. (ii)
Few-shot learning and Chain-of-Thought (CoT) prompting provide minimal benefits
in improving font recognition accuracy across different VLMs. (iii) Attention
analysis sheds light on the inherent limitations of VLMs in capturing semantic
features.
|
2503.23775 | Felix Ott | Lucas Heublein and Nisha L. Raichur and Tobias Feigl and Tobias
Brieger and Fin Heuer and Lennart Asbach and Alexander R\"ugamer and Felix
Ott | Evaluation of (Un-)Supervised Machine Learning Methods for GNSS
Interference Classification with Real-World Data Discrepancies | 34 pages, 25 figures | Proceedings of the 37th International Technical Meeting of the
Satellite Division of The Institute of Navigation (ION GNSS+), Baltimore,
Maryland, September 2024, pp. 1260-1293 | 10.33012/2024.19887 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accuracy and reliability of vehicle localization on roads are crucial for
applications such as self-driving cars, toll systems, and digital tachographs.
To achieve accurate positioning, vehicles typically use global navigation
satellite system (GNSS) receivers to validate their absolute positions.
However, GNSS-based positioning can be compromised by interference signals,
necessitating the identification, classification, determination of purpose, and
localization of such interference to mitigate or eliminate it. Recent
approaches based on machine learning (ML) have shown superior performance in
monitoring interference. However, their feasibility in real-world applications
and environments has yet to be assessed. Effective implementation of ML
techniques requires training datasets that incorporate realistic interference
signals, including real-world noise and potential multipath effects that may
occur between transmitter, receiver, and satellite in the operational area.
Additionally, these datasets require reference labels. Creating such datasets
is often challenging due to legal restrictions, as causing interference to GNSS
sources is strictly prohibited. Consequently, the performance of ML-based
methods in practical applications remains unclear. To address this gap, we
describe a series of large-scale measurement campaigns conducted in real-world
settings at two highway locations in Germany and the Seetal Alps in Austria,
and in large-scale controlled indoor environments. We evaluate the latest
supervised ML-based methods to report on their performance in real-world
settings and present the applicability of pseudo-labeling for unsupervised
learning. We demonstrate the challenges of combining datasets due to data
discrepancies and evaluate outlier detection, domain adaptation, and data
augmentation techniques to present the models' capabilities to adapt to changes
in the datasets.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 06:51:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Heublein",
"Lucas",
""
],
[
"Raichur",
"Nisha L.",
""
],
[
"Feigl",
"Tobias",
""
],
[
"Brieger",
"Tobias",
""
],
[
"Heuer",
"Fin",
""
],
[
"Asbach",
"Lennart",
""
],
[
"Rügamer",
"Alexander",
""
],
[
"Ott",
"Felix",
""
]
] | TITLE: Evaluation of (Un-)Supervised Machine Learning Methods for GNSS
Interference Classification with Real-World Data Discrepancies
ABSTRACT: The accuracy and reliability of vehicle localization on roads are crucial for
applications such as self-driving cars, toll systems, and digital tachographs.
To achieve accurate positioning, vehicles typically use global navigation
satellite system (GNSS) receivers to validate their absolute positions.
However, GNSS-based positioning can be compromised by interference signals,
necessitating the identification, classification, determination of purpose, and
localization of such interference to mitigate or eliminate it. Recent
approaches based on machine learning (ML) have shown superior performance in
monitoring interference. However, their feasibility in real-world applications
and environments has yet to be assessed. Effective implementation of ML
techniques requires training datasets that incorporate realistic interference
signals, including real-world noise and potential multipath effects that may
occur between transmitter, receiver, and satellite in the operational area.
Additionally, these datasets require reference labels. Creating such datasets
is often challenging due to legal restrictions, as causing interference to GNSS
sources is strictly prohibited. Consequently, the performance of ML-based
methods in practical applications remains unclear. To address this gap, we
describe a series of large-scale measurement campaigns conducted in real-world
settings at two highway locations in Germany and the Seetal Alps in Austria,
and in large-scale controlled indoor environments. We evaluate the latest
supervised ML-based methods to report on their performance in real-world
settings and present the applicability of pseudo-labeling for unsupervised
learning. We demonstrate the challenges of combining datasets due to data
discrepancies and evaluate outlier detection, domain adaptation, and data
augmentation techniques to present the models' capabilities to adapt to changes
in the datasets.
|
2503.23781 | Jinwei Su | Jinwei Su, Yinghui Xia, Ronghua Shi, Jianhui Wang, Jianuo Huang, Yijin
Wang, Tianyu Shi, Yang Jingsong, Lewei He | DebFlow: Automating Agent Creation via Agent Debate | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated strong potential and
impressive performance in automating the generation and optimization of
workflows. However, existing approaches are marked by limited reasoning
capabilities, high computational demands, and significant resource
requirements. To address these issues, we propose DebFlow, a framework that
employs a debate mechanism to optimize workflows and integrates reflexion to
improve based on previous experiences. We evaluated our method across six
benchmark datasets, including HotpotQA, MATH, and ALFWorld. Our approach
achieved a 3\% average performance improvement over the latest baselines,
demonstrating its effectiveness in diverse problem domains. In particular,
during training, our framework reduces resource consumption by 37\% compared to
the state-of-the-art baselines. Additionally, we performed ablation studies.
Removing the Debate component resulted in a 4\% performance drop across two
benchmark datasets, significantly greater than the 2\% drop observed when the
Reflection component was removed. These findings strongly demonstrate the
critical role of Debate in enhancing framework performance, while also
highlighting the auxiliary contribution of reflexion to overall optimization.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 06:56:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Su",
"Jinwei",
""
],
[
"Xia",
"Yinghui",
""
],
[
"Shi",
"Ronghua",
""
],
[
"Wang",
"Jianhui",
""
],
[
"Huang",
"Jianuo",
""
],
[
"Wang",
"Yijin",
""
],
[
"Shi",
"Tianyu",
""
],
[
"Jingsong",
"Yang",
""
],
[
"He",
"Lewei",
""
]
] | TITLE: DebFlow: Automating Agent Creation via Agent Debate
ABSTRACT: Large language models (LLMs) have demonstrated strong potential and
impressive performance in automating the generation and optimization of
workflows. However, existing approaches are marked by limited reasoning
capabilities, high computational demands, and significant resource
requirements. To address these issues, we propose DebFlow, a framework that
employs a debate mechanism to optimize workflows and integrates reflexion to
improve based on previous experiences. We evaluated our method across six
benchmark datasets, including HotpotQA, MATH, and ALFWorld. Our approach
achieved a 3\% average performance improvement over the latest baselines,
demonstrating its effectiveness in diverse problem domains. In particular,
during training, our framework reduces resource consumption by 37\% compared to
the state-of-the-art baselines. Additionally, we performed ablation studies.
Removing the Debate component resulted in a 4\% performance drop across two
benchmark datasets, significantly greater than the 2\% drop observed when the
Reflection component was removed. These findings strongly demonstrate the
critical role of Debate in enhancing framework performance, while also
highlighting the auxiliary contribution of reflexion to overall optimization.
|
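DebFlow's central mechanism is a multi-agent debate over candidate workflows, followed by a reflexion step. The sketch below is only a generic illustration of such a debate loop: the agent and judge callables are hypothetical placeholders for LLM-backed functions and do not reflect DebFlow's actual interfaces, prompts, or reflexion logic.

```python
from typing import Callable, List

LLM = Callable[[str], str]  # placeholder: prompt in, text out

def debate_workflows(task: str, agents: List[LLM], judge: LLM, rounds: int = 2) -> str:
    """Generic multi-agent debate: propose, critique, revise, then judge."""
    proposals = [agent(f"Propose a workflow for the task: {task}") for agent in agents]
    for _ in range(rounds):
        transcript = "\n\n".join(f"Proposal {i}:\n{p}" for i, p in enumerate(proposals))
        # Each agent sees the other proposals, critiques them, and revises its own.
        proposals = [
            agent(f"Task: {task}\n{transcript}\n"
                  "Critique the other proposals and return an improved workflow.")
            for agent in agents
        ]
    # A judge (or a reflexion pass over past runs) selects the final workflow.
    return judge("Select the best workflow:\n\n" + "\n\n".join(proposals))

# Usage with trivial stub agents standing in for real LLM calls.
echo = lambda prompt: "step 1 -> step 2 -> step 3"
print(debate_workflows("answer multi-hop questions", [echo, echo], judge=echo))
```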
2503.23786 | Haoran Shen | Haoran Shen, Peixian Zhuang, Jiahao Kou, Yuxin Zeng, Haoying Xu,
Jiangyun Li | MGD-SAM2: Multi-view Guided Detail-enhanced Segment Anything Model 2 for
High-Resolution Class-agnostic Segmentation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segment Anything Models (SAMs), as vision foundation models, have
demonstrated remarkable performance across various image analysis tasks.
Despite their strong generalization capabilities, SAMs encounter challenges in
fine-grained detail segmentation for high-resolution class-independent
segmentation (HRCS), due to the limitations in the direct processing of
high-resolution inputs and low-resolution mask predictions, and the reliance on
accurate manual prompts. To address these limitations, we propose MGD-SAM2
which integrates SAM2 with multi-view feature interaction between a global
image and local patches to achieve precise segmentation. MGD-SAM2 incorporates
the pre-trained SAM2 with four novel modules: the Multi-view Perception Adapter
(MPAdapter), the Multi-view Complementary Enhancement Module (MCEM), the
Hierarchical Multi-view Interaction Module (HMIM), and the Detail Refinement
Module (DRM). Specifically, we first introduce MPAdapter to adapt the SAM2
encoder for enhanced extraction of local details and global semantics in HRCS
images. Then, MCEM and HMIM are proposed to further exploit local texture and
global context by aggregating multi-view features within and across
multi-scales. Finally, DRM is designed to generate gradually restored
high-resolution mask predictions, compensating for the loss of fine-grained
details resulting from directly upsampling the low-resolution prediction maps.
Experimental results demonstrate the superior performance and strong
generalization of our model on multiple high-resolution and normal-resolution
datasets. Code will be available at https://github.com/sevenshr/MGD-SAM2.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 07:02:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shen",
"Haoran",
""
],
[
"Zhuang",
"Peixian",
""
],
[
"Kou",
"Jiahao",
""
],
[
"Zeng",
"Yuxin",
""
],
[
"Xu",
"Haoying",
""
],
[
"Li",
"Jiangyun",
""
]
] | TITLE: MGD-SAM2: Multi-view Guided Detail-enhanced Segment Anything Model 2 for
High-Resolution Class-agnostic Segmentation
ABSTRACT: Segment Anything Models (SAMs), as vision foundation models, have
demonstrated remarkable performance across various image analysis tasks.
Despite their strong generalization capabilities, SAMs encounter challenges in
fine-grained detail segmentation for high-resolution class-independent
segmentation (HRCS), due to the limitations in the direct processing of
high-resolution inputs and low-resolution mask predictions, and the reliance on
accurate manual prompts. To address these limitations, we propose MGD-SAM2
which integrates SAM2 with multi-view feature interaction between a global
image and local patches to achieve precise segmentation. MGD-SAM2 incorporates
the pre-trained SAM2 with four novel modules: the Multi-view Perception Adapter
(MPAdapter), the Multi-view Complementary Enhancement Module (MCEM), the
Hierarchical Multi-view Interaction Module (HMIM), and the Detail Refinement
Module (DRM). Specifically, we first introduce MPAdapter to adapt the SAM2
encoder for enhanced extraction of local details and global semantics in HRCS
images. Then, MCEM and HMIM are proposed to further exploit local texture and
global context by aggregating multi-view features within and across
multi-scales. Finally, DRM is designed to generate gradually restored
high-resolution mask predictions, compensating for the loss of fine-grained
details resulting from directly upsampling the low-resolution prediction maps.
Experimental results demonstrate the superior performance and strong
generalization of our model on multiple high-resolution and normal-resolution
datasets. Code will be available at https://github.com/sevenshr/MGD-SAM2.
|
2503.23798 | Xuan Luo | Xuan Luo, Weizhi Wang, Xifeng Yan | Adaptive Layer-skipping in Pre-trained LLMs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Various layer-skipping methods have been proposed to accelerate token
generation in large language models (LLMs). However, they have overlooked a
fundamental question: How do computational demands vary across the generation
of different tokens? In this work, we introduce FlexiDepth, a method that
dynamically adjusts the number of Transformer layers used in text generation.
By incorporating a plug-in router and adapter, FlexiDepth enables adaptive
layer-skipping in LLMs without modifying their original parameters. Introducing
FlexiDepth to the Llama-3-8B model achieves skipping of 8 layers out of 32
while maintaining the full 100\% benchmark performance. Experimental
results with FlexiDepth demonstrate that computational demands in LLMs
significantly vary based on token type. Specifically, generating repetitive
tokens or fixed phrases requires fewer layers, whereas producing tokens
involving computation or high uncertainty requires more layers. Interestingly,
this adaptive allocation pattern aligns with human intuition. To advance
research in this area, we open sourced FlexiDepth and a dataset documenting
FlexiDepth's layer allocation patterns for future exploration.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 07:20:58 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Luo",
"Xuan",
""
],
[
"Wang",
"Weizhi",
""
],
[
"Yan",
"Xifeng",
""
]
] | TITLE: Adaptive Layer-skipping in Pre-trained LLMs
ABSTRACT: Various layer-skipping methods have been proposed to accelerate token
generation in large language models (LLMs). However, they have overlooked a
fundamental question: How do computational demands vary across the generation
of different tokens? In this work, we introduce FlexiDepth, a method that
dynamically adjusts the number of Transformer layers used in text generation.
By incorporating a plug-in router and adapter, FlexiDepth enables adaptive
layer-skipping in LLMs without modifying their original parameters. Introducing
FlexiDepth to the Llama-3-8B model achieves skipping of 8 layers out of 32
while maintaining the full 100\% benchmark performance. Experimental
results with FlexiDepth demonstrate that computational demands in LLMs
significantly vary based on token type. Specifically, generating repetitive
tokens or fixed phrases requires fewer layers, whereas producing tokens
involving computation or high uncertainty requires more layers. Interestingly,
this adaptive allocation pattern aligns with human intuition. To advance
research in this area, we open sourced FlexiDepth and a dataset documenting
FlexiDepth's layer allocation patterns for future exploration.
|
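FlexiDepth (above) attaches a plug-in router and adapter to pre-trained layers so that individual tokens can skip them. The PyTorch sketch below shows one plausible shape such a gated skip could take; the module names, hard sigmoid gating, and adapter size are my assumptions for illustration, not the paper's implementation, and a real system would only run the block for the tokens that keep it.

```python
import torch
import torch.nn as nn

class GatedSkipLayer(nn.Module):
    """Wraps a frozen transformer block with a router that decides whether to use it."""

    def __init__(self, block: nn.Module, hidden: int, threshold: float = 0.5):
        super().__init__()
        self.block = block
        for p in self.block.parameters():   # keep the pre-trained weights untouched
            p.requires_grad = False
        self.router = nn.Linear(hidden, 1)  # per-token skip score
        self.adapter = nn.Sequential(       # cheap path taken when the block is skipped
            nn.Linear(hidden, hidden // 4), nn.GELU(), nn.Linear(hidden // 4, hidden)
        )
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.router(x))      # (batch, seq, 1) routing score
        keep = (gate > self.threshold).float()    # hard per-token decision
        full = self.block(x)                      # expensive pre-trained block
        light = x + self.adapter(x)               # lightweight residual adapter
        # Note: computing `full` for all tokens is for clarity only; saving compute
        # requires running the block on the kept tokens alone.
        return keep * full + (1.0 - keep) * light

# Usage with a toy "block": a linear layer standing in for a transformer layer.
layer = GatedSkipLayer(nn.Linear(64, 64), hidden=64)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```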
2503.23804 | Shiyi Yang | Shiyi Yang, Zhibo Hu, Chen Wang, Tong Yu, Xiwei Xu, Liming Zhu, Lina
Yao | Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based
Recommender Systems | null | null | null | null | cs.CR cs.CL cs.IR cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language model-based agents are increasingly used in recommender
systems (Agent4RSs) to achieve personalized behavior modeling. Specifically,
Agent4RSs introduces memory mechanisms that enable the agents to autonomously
learn and self-evolve from real-world interactions. However, to the best of our
knowledge, how robust Agent4RSs are remains unexplored. As such, in this paper,
we propose the first work to attack Agent4RSs by perturbing agents' memories,
not only to uncover their limitations but also to enhance their security and
robustness, ensuring the development of safer and more reliable AI agents.
Given the security and privacy concerns, it is more practical to launch
attacks under a black-box setting, where the accurate knowledge of the victim
models cannot be easily obtained. Moreover, the practical attacks are often
stealthy to maximize the impact. To this end, we propose a novel practical
attack framework named DrunkAgent. DrunkAgent consists of a generation module,
a strategy module, and a surrogate module. The generation module aims to
produce effective and coherent adversarial textual triggers, which can be used
to achieve attack objectives such as promoting the target items. The strategy
module is designed to `get the target agents drunk' so that their memories
cannot be effectively updated during the interaction process. As such, the
triggers can play the best role. Both of the modules are optimized on the
surrogate module to improve the transferability and imperceptibility of the
attacks. By identifying and analyzing the vulnerabilities, our work provides
critical insights that pave the way for building safer and more resilient
Agent4RSs. Extensive experiments across various real-world datasets demonstrate
the effectiveness of DrunkAgent.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 07:35:40 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yang",
"Shiyi",
""
],
[
"Hu",
"Zhibo",
""
],
[
"Wang",
"Chen",
""
],
[
"Yu",
"Tong",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Zhu",
"Liming",
""
],
[
"Yao",
"Lina",
""
]
] | TITLE: Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based
Recommender Systems
ABSTRACT: Large language model-based agents are increasingly used in recommender
systems (Agent4RSs) to achieve personalized behavior modeling. Specifically,
Agent4RSs introduces memory mechanisms that enable the agents to autonomously
learn and self-evolve from real-world interactions. However, to the best of our
knowledge, how robust Agent4RSs are remains unexplored. As such, in this paper,
we propose the first work to attack Agent4RSs by perturbing agents' memories,
not only to uncover their limitations but also to enhance their security and
robustness, ensuring the development of safer and more reliable AI agents.
Given the security and privacy concerns, it is more practical to launch
attacks under a black-box setting, where the accurate knowledge of the victim
models cannot be easily obtained. Moreover, the practical attacks are often
stealthy to maximize the impact. To this end, we propose a novel practical
attack framework named DrunkAgent. DrunkAgent consists of a generation module,
a strategy module, and a surrogate module. The generation module aims to
produce effective and coherent adversarial textual triggers, which can be used
to achieve attack objectives such as promoting the target items. The strategy
module is designed to `get the target agents drunk' so that their memories
cannot be effectively updated during the interaction process. As such, the
triggers can play the best role. Both of the modules are optimized on the
surrogate module to improve the transferability and imperceptibility of the
attacks. By identifying and analyzing the vulnerabilities, our work provides
critical insights that pave the way for building safer and more resilient
Agent4RSs. Extensive experiments across various real-world datasets demonstrate
the effectiveness of DrunkAgent.
|
2503.23819 | Tapabrata Chakraborti | Swarnava Bhattacharyya and Umapada Pal and Tapabrata Chakraborti | Conformal uncertainty quantification to evaluate predictive fairness of
foundation AI model for skin lesion classes across patient demographics | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning based diagnostic AI systems based on medical images are
starting to provide performance similar to that of human experts. However, these
data-hungry, complex systems are inherently black boxes and are therefore slow to be
adopted for high-risk applications like healthcare. This problem of lack of
transparency is exacerbated in the case of recent large foundation models,
which are trained in a self supervised manner on millions of data points to
provide robust generalisation across a range of downstream tasks, but the
embeddings generated from them happen through a process that is not
interpretable, and hence not easily trustable for clinical applications. To
address this timely issue, we deploy conformal analysis to quantify the
predictive uncertainty of a vision transformer (ViT) based foundation model
across patient demographics with respect to sex, age and ethnicity for the
tasks of skin lesion classification using several public benchmark datasets.
The significant advantage of this method is that conformal analysis is
method-independent: it not only provides a coverage guarantee at the population
level but also an uncertainty score for each individual. We used a
model-agnostic dynamic F1-score-based sampling during model training, which
helped to stabilize the class imbalance and we investigate the effects on
uncertainty quantification (UQ) with or without this bias mitigation step. Thus
we show how this can be used as a fairness metric to evaluate the robustness of
the feature embeddings of the foundation model (Google DermFoundation) and thus
advance the trustworthiness and fairness of clinical AI.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 08:06:00 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Bhattacharyya",
"Swarnava",
""
],
[
"Pal",
"Umapada",
""
],
[
"Chakraborti",
"Tapabrata",
""
]
] | TITLE: Conformal uncertainty quantification to evaluate predictive fairness of
foundation AI model for skin lesion classes across patient demographics
ABSTRACT: Deep learning based diagnostic AI systems based on medical images are
starting to provide performance similar to that of human experts. However, these
data-hungry, complex systems are inherently black boxes and are therefore slow to be
adopted for high-risk applications like healthcare. This problem of lack of
transparency is exacerbated in the case of recent large foundation models,
which are trained in a self supervised manner on millions of data points to
provide robust generalisation across a range of downstream tasks, but the
embeddings generated from them happen through a process that is not
interpretable, and hence not easily trustable for clinical applications. To
address this timely issue, we deploy conformal analysis to quantify the
predictive uncertainty of a vision transformer (ViT) based foundation model
across patient demographics with respect to sex, age and ethnicity for the
tasks of skin lesion classification using several public benchmark datasets.
The significant advantage of this method is that conformal analysis is
method-independent: it not only provides a coverage guarantee at the population
level but also an uncertainty score for each individual. We used a
model-agnostic dynamic F1-score-based sampling during model training, which
helped to stabilize the class imbalance and we investigate the effects on
uncertainty quantification (UQ) with or without this bias mitigation step. Thus
we show how this can be used as a fairness metric to evaluate the robustness of
the feature embeddings of the foundation model (Google DermFoundation) and thus
advance the trustworthiness and fairness of clinical AI.
|
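As background for the conformal analysis described in the record above, here is a minimal split-conformal prediction sketch for a classifier in NumPy. It is the textbook construction, not the paper's exact procedure; the score choice (one minus the true-class probability) and the alpha level are assumptions for illustration.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for a classifier.

    cal_probs:  (n_cal, n_classes) softmax outputs on a held-out calibration set
    cal_labels: (n_cal,) integer true labels
    test_probs: (n_test, n_classes) softmax outputs on test samples
    Returns boolean prediction sets with marginal coverage ~ (1 - alpha),
    plus per-sample set sizes as a simple uncertainty score.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)  # finite-sample correction
    qhat = np.quantile(scores, level, method="higher")
    pred_sets = test_probs >= 1.0 - qhat
    return pred_sets, pred_sets.sum(axis=1)   # larger set = less certain prediction

# Toy usage with random "probabilities".
rng = np.random.default_rng(0)
cal = rng.dirichlet(np.ones(5), size=200)
test = rng.dirichlet(np.ones(5), size=10)
sets, sizes = conformal_sets(cal, rng.integers(0, 5, 200), test, alpha=0.1)
print(sizes)
```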
2503.23844 | Danfeng Hong | Xuyang Li and Chenyu Li and Pedram Ghamisi and Danfeng Hong | FlexiMo: A Flexible Remote Sensing Foundation Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid expansion of multi-source satellite imagery drives innovation in
Earth observation, opening unprecedented opportunities for Remote Sensing
Foundation Models to harness diverse data. However, many existing models remain
constrained by fixed spatial resolutions and patch sizes, limiting their
ability to fully exploit the heterogeneous spatial characteristics inherent in
satellite imagery. To address these challenges, we propose FlexiMo, a flexible
remote sensing foundation model that endows the pre-trained model with the
flexibility to adapt to arbitrary spatial resolutions. Central to FlexiMo is a
spatial resolution-aware module that employs a parameter-free alignment
embedding mechanism to dynamically recalibrate patch embeddings based on the
input image's resolution and dimensions. This design not only preserves
critical token characteristics and ensures multi-scale feature fidelity but
also enables efficient feature extraction without requiring modifications to
the underlying network architecture. In addition, FlexiMo incorporates a
lightweight channel adaptation module that leverages prior spectral information
from sensors. This mechanism allows the model to process images with varying
numbers of channels while maintaining the data's intrinsic physical properties.
Extensive experiments on diverse multimodal, multi-resolution, and multi-scale
datasets demonstrate that FlexiMo significantly enhances model generalization
and robustness. In particular, our method achieves outstanding performance
across a range of downstream tasks, including scene classification, land cover
classification, urban building segmentation, and cloud detection. By enabling
parameter-efficient and physically consistent adaptation, FlexiMo paves the way
for more adaptable and effective foundation models in real-world remote sensing
applications.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 08:46:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Xuyang",
""
],
[
"Li",
"Chenyu",
""
],
[
"Ghamisi",
"Pedram",
""
],
[
"Hong",
"Danfeng",
""
]
] | TITLE: FlexiMo: A Flexible Remote Sensing Foundation Model
ABSTRACT: The rapid expansion of multi-source satellite imagery drives innovation in
Earth observation, opening unprecedented opportunities for Remote Sensing
Foundation Models to harness diverse data. However, many existing models remain
constrained by fixed spatial resolutions and patch sizes, limiting their
ability to fully exploit the heterogeneous spatial characteristics inherent in
satellite imagery. To address these challenges, we propose FlexiMo, a flexible
remote sensing foundation model that endows the pre-trained model with the
flexibility to adapt to arbitrary spatial resolutions. Central to FlexiMo is a
spatial resolution-aware module that employs a parameter-free alignment
embedding mechanism to dynamically recalibrate patch embeddings based on the
input image's resolution and dimensions. This design not only preserves
critical token characteristics and ensures multi-scale feature fidelity but
also enables efficient feature extraction without requiring modifications to
the underlying network architecture. In addition, FlexiMo incorporates a
lightweight channel adaptation module that leverages prior spectral information
from sensors. This mechanism allows the model to process images with varying
numbers of channels while maintaining the data's intrinsic physical properties.
Extensive experiments on diverse multimodal, multi-resolution, and multi-scale
datasets demonstrate that FlexiMo significantly enhances model generalization
and robustness. In particular, our method achieves outstanding performance
across a range of downstream tasks, including scene classification, land cover
classification, urban building segmentation, and cloud detection. By enabling
parameter-efficient and physically consistent adaptation, FlexiMo paves the way
for more adaptable and effective foundation models in real-world remote sensing
applications.
|
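FlexiMo's spatial resolution-aware module recalibrates patch embeddings for arbitrary input resolutions. The paper's parameter-free alignment embedding is not spelled out in this record, so the sketch below shows only a common baseline for the same problem: bilinear interpolation of a ViT's positional embeddings to a new patch grid. The function and argument names are mine.

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed: torch.Tensor, old_grid, new_grid) -> torch.Tensor:
    """Interpolate (1, H*W, D) positional embeddings from old_grid to new_grid."""
    dim = pos_embed.shape[-1]
    # Reshape to an image-like (1, D, H, W) tensor so 2D interpolation applies.
    pe = pos_embed.reshape(1, old_grid[0], old_grid[1], dim).permute(0, 3, 1, 2)
    pe = F.interpolate(pe, size=tuple(new_grid), mode="bilinear", align_corners=False)
    return pe.permute(0, 2, 3, 1).reshape(1, new_grid[0] * new_grid[1], dim)

# A 14x14 grid of 768-dim embeddings adapted to a 20x20 grid (e.g. a larger input).
pe = torch.randn(1, 14 * 14, 768)
print(resize_pos_embed(pe, (14, 14), (20, 20)).shape)  # torch.Size([1, 400, 768])
```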
2503.23848 | Minghan Wang | Minghan Wang, Ye Bai, Yuxia Wang, Thuy-Trang Vu, Ehsan Shareghi,
Gholamreza Haffari | SpeechDialogueFactory: Generating High-Quality Speech Dialogue Data to
Accelerate Your Speech-LLM Development | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | High-quality speech dialogue datasets are crucial for Speech-LLM development,
yet existing acquisition methods face significant limitations. Human recordings
incur high costs and privacy concerns, while synthetic approaches often lack
conversational authenticity. To address these challenges, we introduce
\textsc{SpeechDialogueFactory}, a production-ready framework for generating
natural speech dialogues efficiently. Our solution employs a comprehensive
pipeline including metadata generation, dialogue scripting,
paralinguistic-enriched utterance simulation, and natural speech synthesis with
voice cloning. Additionally, the system provides an interactive UI for detailed
sample inspection and a high-throughput batch synthesis mode. Evaluations show
that dialogues generated by our system achieve a quality comparable to human
recordings while significantly reducing production costs. We release our work
as an open-source toolkit, alongside example datasets available in English and
Chinese, empowering researchers and developers in Speech-LLM research and
development.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 08:52:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Minghan",
""
],
[
"Bai",
"Ye",
""
],
[
"Wang",
"Yuxia",
""
],
[
"Vu",
"Thuy-Trang",
""
],
[
"Shareghi",
"Ehsan",
""
],
[
"Haffari",
"Gholamreza",
""
]
] | TITLE: SpeechDialogueFactory: Generating High-Quality Speech Dialogue Data to
Accelerate Your Speech-LLM Development
ABSTRACT: High-quality speech dialogue datasets are crucial for Speech-LLM development,
yet existing acquisition methods face significant limitations. Human recordings
incur high costs and privacy concerns, while synthetic approaches often lack
conversational authenticity. To address these challenges, we introduce
\textsc{SpeechDialogueFactory}, a production-ready framework for generating
natural speech dialogues efficiently. Our solution employs a comprehensive
pipeline including metadata generation, dialogue scripting,
paralinguistic-enriched utterance simulation, and natural speech synthesis with
voice cloning. Additionally, the system provides an interactive UI for detailed
sample inspection and a high-throughput batch synthesis mode. Evaluations show
that dialogues generated by our system achieve a quality comparable to human
recordings while significantly reducing production costs. We release our work
as an open-source toolkit, alongside example datasets available in English and
Chinese, empowering researchers and developers in Speech-LLM research and
development.
|
2503.23863 | Johannes Wehrstein | Johannes Wehrstein, Tiemo Bang, Roman Heinrich, Carsten Binnig | GRACEFUL: A Learned Cost Estimator For UDFs | The paper has been accepted by ICDE 2025 | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | User-Defined-Functions (UDFs) are a pivotal feature in modern DBMS, enabling
the extension of native DBMS functionality with custom logic. However, the
integration of UDFs into query optimization processes poses significant
challenges, primarily due to the difficulty of estimating UDF execution costs.
Consequently, existing cost models in DBMS optimizers largely ignore UDFs or
rely on static assumptions, resulting in suboptimal performance for queries
involving UDFs. In this paper, we introduce GRACEFUL, a novel learned cost
model to make accurate cost predictions of query plans with UDFs enabling
optimization decisions for UDFs in DBMS. For example, as we show in our
evaluation, using our cost model, we can achieve 50x speedups through informed
pull-up/push-down filter decisions of the UDF compared to the standard case
where always a filter push-down is applied. Additionally, we release a
synthetic dataset of over 90,000 UDF queries to promote further research in
this area.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:09:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wehrstein",
"Johannes",
""
],
[
"Bang",
"Tiemo",
""
],
[
"Heinrich",
"Roman",
""
],
[
"Binnig",
"Carsten",
""
]
] | TITLE: GRACEFUL: A Learned Cost Estimator For UDFs
ABSTRACT: User-Defined-Functions (UDFs) are a pivotal feature in modern DBMS, enabling
the extension of native DBMS functionality with custom logic. However, the
integration of UDFs into query optimization processes poses significant
challenges, primarily due to the difficulty of estimating UDF execution costs.
Consequently, existing cost models in DBMS optimizers largely ignore UDFs or
rely on static assumptions, resulting in suboptimal performance for queries
involving UDFs. In this paper, we introduce GRACEFUL, a novel learned cost
model to make accurate cost predictions of query plans with UDFs enabling
optimization decisions for UDFs in DBMS. For example, as we show in our
evaluation, using our cost model, we can achieve 50x speedups through informed
pull-up/push-down filter decisions of the UDF compared to the standard case
where always a filter push-down is applied. Additionally, we release a
synthetic dataset of over 90,000 UDF queries to promote further research in
this area.
|
2503.23866 | Jialin Wan | Jialin Wan, Nan Cheng, Jinglong Shen | A Channel-Triggered Backdoor Attack on Wireless Semantic Image
Reconstruction | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the transformative impact of deep learning (DL) on wireless
communication systems through data-driven end-to-end (E2E) learning, the
security vulnerabilities of these systems have been largely overlooked. Unlike
the extensively studied image domain, limited research has explored the threat
of backdoor attacks on the reconstruction of symbols in semantic communication
(SemCom) systems. Previous work has investigated such backdoor attacks at the
input level, but these approaches are infeasible in applications with strict
input control. In this paper, we propose a novel attack paradigm, termed
Channel-Triggered Backdoor Attack (CT-BA), where the backdoor trigger is a
specific wireless channel. This attack leverages fundamental physical layer
characteristics, making it more covert and potentially more threatening
compared to previous input-level attacks. Specifically, we utilize channel gain
with different fading distributions or channel noise with different power
spectral densities as potential triggers. This approach establishes
unprecedented attack flexibility as the adversary can select backdoor triggers
from both fading characteristics and noise variations in diverse channel
environments. Moreover, during the testing phase, CT-BA enables automatic
trigger activation through natural channel variations without requiring active
adversary participation. We evaluate the robustness of CT-BA on a ViT-based
Joint Source-Channel Coding (JSCC) model across three datasets: MNIST,
CIFAR-10, and ImageNet. Furthermore, we apply CT-BA to three typical E2E SemCom
systems: BDJSCC, ADJSCC, and JSCCOFDM. Experimental results demonstrate that
our attack achieves near-perfect attack success rate (ASR) while maintaining
effective stealth. Finally, we discuss potential defense mechanisms against
such attacks.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:17:10 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wan",
"Jialin",
""
],
[
"Cheng",
"Nan",
""
],
[
"Shen",
"Jinglong",
""
]
] | TITLE: A Channel-Triggered Backdoor Attack on Wireless Semantic Image
Reconstruction
ABSTRACT: Despite the transformative impact of deep learning (DL) on wireless
communication systems through data-driven end-to-end (E2E) learning, the
security vulnerabilities of these systems have been largely overlooked. Unlike
the extensively studied image domain, limited research has explored the threat
of backdoor attacks on the reconstruction of symbols in semantic communication
(SemCom) systems. Previous work has investigated such backdoor attacks at the
input level, but these approaches are infeasible in applications with strict
input control. In this paper, we propose a novel attack paradigm, termed
Channel-Triggered Backdoor Attack (CT-BA), where the backdoor trigger is a
specific wireless channel. This attack leverages fundamental physical layer
characteristics, making it more covert and potentially more threatening
compared to previous input-level attacks. Specifically, we utilize channel gain
with different fading distributions or channel noise with different power
spectral densities as potential triggers. This approach establishes
unprecedented attack flexibility as the adversary can select backdoor triggers
from both fading characteristics and noise variations in diverse channel
environments. Moreover, during the testing phase, CT-BA enables automatic
trigger activation through natural channel variations without requiring active
adversary participation. We evaluate the robustness of CT-BA on a ViT-based
Joint Source-Channel Coding (JSCC) model across three datasets: MNIST,
CIFAR-10, and ImageNet. Furthermore, we apply CT-BA to three typical E2E SemCom
systems: BDJSCC, ADJSCC, and JSCCOFDM. Experimental results demonstrate that
our attack achieves near-perfect attack success rate (ASR) while maintaining
effective stealth. Finally, we discuss potential defense mechanisms against
such attacks.
|
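CT-BA uses the wireless channel itself (its fading distribution or noise statistics) as the backdoor trigger. Purely as background, this NumPy sketch simulates Rayleigh versus Rician channel gains plus additive white Gaussian noise, the kind of building blocks such a trigger could be drawn from; the parameter choices and function names are assumptions, not the paper's setup.

```python
import numpy as np

def rayleigh_gain(n, rng):
    # Magnitude of a zero-mean unit-variance complex Gaussian (no line of sight).
    h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    return np.abs(h)

def rician_gain(n, k_factor, rng):
    # Dominant line-of-sight component plus a scattered (Rayleigh-like) component.
    los = np.sqrt(k_factor / (k_factor + 1))
    nlos = np.sqrt(1 / (k_factor + 1)) * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    return np.abs(los + nlos)

def awgn(x, snr_db, rng):
    # Add white Gaussian noise at the requested signal-to-noise ratio.
    power = np.mean(np.abs(x) ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

rng = np.random.default_rng(0)
x = rng.normal(size=1024)                        # stand-in for transmitted symbols
y_clean = awgn(rayleigh_gain(1024, rng) * x, snr_db=10, rng=rng)
y_trigger = awgn(rician_gain(1024, k_factor=4.0, rng=rng) * x, snr_db=10, rng=rng)
print(y_clean.std(), y_trigger.std())
```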
2503.23869 | Yongle Li | Yongle Li, Bo Liu, Sheng Huang, ZHeng ZHang, Xiaotong Yuan, and
Richang Hong | Communication-Efficient and Personalized Federated Foundation Model
Fine-Tuning via Tri-Matrix Adaptation | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In federated learning, fine-tuning pre-trained foundation models poses
significant challenges, particularly regarding high communication cost and
suboptimal model performance due to data heterogeneity between the clients. To
address these issues, this paper introduces communication-efficient federated
LoRA adaptation (CE-LoRA), a method that employs a tri-factorization low-rank
adaptation approach with personalized model parameter aggregation. We first
present a novel LoRA parameter factorization that introduces a small dense
matrix, which significantly reduces the communication cost while achieving
empirical performance comparable to transferring the low-rank parameter
matrix used by existing methods. Without violating data privacy, the server
considers the client similarity in both training dataset and model parameter
space, and learns personalized weights for model aggregation. Our experiments
on various LLM and VLM fine-tuning tasks demonstrate that CE-LoRA not only
significantly reduces communication overhead but also improves performance
under non-independent and identically distributed (non-IID) data conditions. In
addition, CE-LoRA improves data privacy protection, effectively mitigating
gradient-based data reconstruction attacks.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:18:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Yongle",
""
],
[
"Liu",
"Bo",
""
],
[
"Huang",
"Sheng",
""
],
[
"ZHang",
"ZHeng",
""
],
[
"Yuan",
"Xiaotong",
""
],
[
"Hong",
"Richang",
""
]
] | TITLE: Communication-Efficient and Personalized Federated Foundation Model
Fine-Tuning via Tri-Matrix Adaptation
ABSTRACT: In federated learning, fine-tuning pre-trained foundation models poses
significant challenges, particularly regarding high communication cost and
suboptimal model performance due to data heterogeneity between the clients. To
address these issues, this paper introduces communication-efficient federated
LoRA adaptation (CE-LoRA), a method that employs a tri-factorization low-rank
adaptation approach with personalized model parameter aggregation. We first
present a novel LoRA parameter factorization that introduces a small dense
matrix, which significantly reduces the communication cost while achieving
empirical performance comparable to transferring the low-rank parameter
matrix used by existing methods. Without violating data privacy, the server
considers the client similarity in both training dataset and model parameter
space, and learns personalized weights for model aggregation. Our experiments
on various LLM and VLM fine-tuning tasks demonstrate that CE-LoRA not only
significantly reduces communication overhead but also improves performance
under non-independent and identically distributed (non-IID) data conditions. In
addition, CE-LoRA improves data privacy protection, effectively mitigating
gradient-based data reconstruction attacks.
|
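The CE-LoRA record describes a tri-factorized low-rank update with a small dense core matrix. The PyTorch sketch below is one way such a tri-matrix adapter could be written, where, under my assumption about where the communication saving comes from, only the small r x r core would need to be exchanged; this is an illustration, not the paper's actual factorization or aggregation rule.

```python
import torch
import torch.nn as nn

class TriLoRALinear(nn.Module):
    """Frozen base linear layer plus a tri-factorized low-rank update B @ D @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # the pre-trained weight stays frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.D = nn.Parameter(torch.eye(r))                              # small r x r dense core
        self.B = nn.Parameter(torch.zeros(base.out_features, r))         # up-projection
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        update = x @ self.A.t() @ self.D.t() @ self.B.t()   # low-rank tri-factor path
        return self.base(x) + self.scale * update

layer = TriLoRALinear(nn.Linear(512, 512), r=8)
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```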
2503.23877 | Junyao Shi | Junyao Shi, Zhuolun Zhao, Tianyou Wang, Ian Pedroza, Amy Luo, Jie
Wang, Jason Ma, Dinesh Jayaraman | ZeroMimic: Distilling Robotic Manipulation Skills from Web Videos | ICRA 2025. Project website: https://zeromimic.github.io/ | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Many recent advances in robotic manipulation have come through imitation
learning, yet these rely largely on mimicking a particularly hard-to-acquire
form of demonstrations: those collected on the same robot in the same room with
the same objects as the trained policy must handle at test time. In contrast,
large pre-recorded human video datasets demonstrating manipulation skills
in-the-wild already exist, which contain valuable information for robots. Is it
possible to distill a repository of useful robotic skill policies out of such
data without any additional requirements on robot-specific demonstrations or
exploration? We present the first such system, ZeroMimic, which generates
immediately deployable image goal-conditioned skill policies for several common
categories of manipulation tasks (opening, closing, pouring, pick&place,
cutting, and stirring) each capable of acting upon diverse objects and across
diverse unseen task setups. ZeroMimic is carefully designed to exploit recent
advances in semantic and geometric visual understanding of human videos,
together with modern grasp affordance detectors and imitation policy classes.
After training ZeroMimic on the popular EpicKitchens dataset of ego-centric
human videos, we evaluate its out-of-the-box performance in varied real-world
and simulated kitchen settings with two different robot embodiments,
demonstrating its impressive abilities to handle these varied tasks. To enable
plug-and-play reuse of ZeroMimic policies on other task setups and robots, we
release software and policy checkpoints of our skill policies.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:27:00 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shi",
"Junyao",
""
],
[
"Zhao",
"Zhuolun",
""
],
[
"Wang",
"Tianyou",
""
],
[
"Pedroza",
"Ian",
""
],
[
"Luo",
"Amy",
""
],
[
"Wang",
"Jie",
""
],
[
"Ma",
"Jason",
""
],
[
"Jayaraman",
"Dinesh",
""
]
] | TITLE: ZeroMimic: Distilling Robotic Manipulation Skills from Web Videos
ABSTRACT: Many recent advances in robotic manipulation have come through imitation
learning, yet these rely largely on mimicking a particularly hard-to-acquire
form of demonstrations: those collected on the same robot in the same room with
the same objects as the trained policy must handle at test time. In contrast,
large pre-recorded human video datasets demonstrating manipulation skills
in-the-wild already exist, which contain valuable information for robots. Is it
possible to distill a repository of useful robotic skill policies out of such
data without any additional requirements on robot-specific demonstrations or
exploration? We present the first such system, ZeroMimic, which generates
immediately deployable image goal-conditioned skill policies for several common
categories of manipulation tasks (opening, closing, pouring, pick&place,
cutting, and stirring) each capable of acting upon diverse objects and across
diverse unseen task setups. ZeroMimic is carefully designed to exploit recent
advances in semantic and geometric visual understanding of human videos,
together with modern grasp affordance detectors and imitation policy classes.
After training ZeroMimic on the popular EpicKitchens dataset of ego-centric
human videos, we evaluate its out-of-the-box performance in varied real-world
and simulated kitchen settings with two different robot embodiments,
demonstrating its impressive abilities to handle these varied tasks. To enable
plug-and-play reuse of ZeroMimic policies on other task setups and robots, we
release software and policy checkpoints of our skill policies.
|
2503.23882 | Halil \.Ibrahim \"Ozt\"urk | Halil \.Ibrahim \"Ozt\"urk, Muhammet Esat Kalfao\u{g}lu, Ozsel Kilinc | GLane3D : Detecting Lanes with Graph of 3D Keypoints | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and efficient lane detection in 3D space is essential for autonomous
driving systems, where robust generalization is the foremost requirement for 3D
lane detection algorithms. Considering the extensive variation in lane
structures worldwide, achieving high generalization capacity is particularly
challenging, as algorithms must accurately identify a wide variety of lane
patterns worldwide. Traditional top-down approaches rely heavily on learning
lane characteristics from training datasets, often struggling with lanes
exhibiting previously unseen attributes. To address this generalization
limitation, we propose a method that detects keypoints of lanes and
subsequently predicts sequential connections between them to construct complete
3D lanes. Each keypoint is essential for maintaining lane continuity, and we
predict multiple proposals per keypoint by allowing adjacent grids to predict
the same keypoint using an offset mechanism. PointNMS is employed to eliminate
overlapping proposal keypoints, reducing redundancy in the estimated BEV graph
and minimizing computational overhead from connection estimations. Our model
surpasses previous state-of-the-art methods on both the Apollo and OpenLane
datasets, demonstrating superior F1 scores and a strong generalization capacity
when models trained on OpenLane are evaluated on the Apollo dataset, compared
to prior approaches.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:33:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Öztürk",
"Halil İbrahim",
""
],
[
"Kalfaoğlu",
"Muhammet Esat",
""
],
[
"Kilinc",
"Ozsel",
""
]
] | TITLE: GLane3D : Detecting Lanes with Graph of 3D Keypoints
ABSTRACT: Accurate and efficient lane detection in 3D space is essential for autonomous
driving systems, where robust generalization is the foremost requirement for 3D
lane detection algorithms. Considering the extensive variation in lane
structures worldwide, achieving high generalization capacity is particularly
challenging, as algorithms must accurately identify a wide variety of lane
patterns. Traditional top-down approaches rely heavily on learning
lane characteristics from training datasets, often struggling with lanes
exhibiting previously unseen attributes. To address this generalization
limitation, we propose a method that detects keypoints of lanes and
subsequently predicts sequential connections between them to construct complete
3D lanes. Each keypoint is essential for maintaining lane continuity, and we
predict multiple proposals per keypoint by allowing adjacent grids to predict
the same keypoint using an offset mechanism. PointNMS is employed to eliminate
overlapping proposal keypoints, reducing redundancy in the estimated BEV graph
and minimizing computational overhead from connection estimations. Our model
surpasses previous state-of-the-art methods on both the Apollo and OpenLane
datasets, demonstrating superior F1 scores and a strong generalization capacity
when models trained on OpenLane are evaluated on the Apollo dataset, compared
to prior approaches.
|
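Illustrative note for the GLane3D entry above: the PointNMS step it mentions can be realized as a generic greedy non-maximum suppression over keypoint proposals. The sketch below is a minimal, generic version under assumed inputs (2D BEV coordinates, confidence scores, a distance radius); it is not the authors' implementation.

    import numpy as np

    def point_nms(points, scores, radius=0.5):
        """Greedy non-maximum suppression over keypoint proposals.

        points: (N, D) array of keypoint coordinates (e.g., BEV x, y).
        scores: (N,) confidence scores.
        radius: proposals closer than this to an already kept keypoint are dropped.
        Returns indices of the kept proposals, highest score first.
        """
        order = np.argsort(-scores)  # visit proposals by descending confidence
        kept = []
        for idx in order:
            if all(np.linalg.norm(points[idx] - points[k]) > radius for k in kept):
                kept.append(idx)
        return np.array(kept, dtype=int)

    # toy usage: three proposals, the first two being near-duplicates
    pts = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 1.0]])
    scr = np.array([0.9, 0.8, 0.7])
    print(point_nms(pts, scr, radius=0.5))  # -> [0 2]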
2503.23887 | Qiao Bowei | Bowei Qiao, Hongwei Wang | An End-to-End Comprehensive Gear Fault Diagnosis Method Based on
Multi-Scale Feature-Level Fusion Strategy | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | To satisfy the requirements of end-to-end fault diagnosis of gears, this
paper proposes an integrated intelligent fault diagnosis method for gears
using acceleration signals, based on Gabor-based Adaptive Short-Time Fourier
Transform (Gabor-ASTFT) and Dual-Tree Complex Wavelet Transform (DTCWT)
algorithms, a dilated residual structure, and a feature fusion layer.
Initially, the raw one-dimensional acceleration signals collected
from the gearbox base using vibration sensors undergo pre-segmentation
processing. The Gabor-ASTFT and DTCWT are then applied to convert the original
one-dimensional time-domain signals into two-dimensional time-frequency
representations, facilitating the preliminary extraction of fault features and
obtaining weak feature maps. Subsequently, a dual-channel structure is
established using deconvolution and dilated convolution to perform upsampling
and downsampling on the feature maps, adjusting their sizes accordingly. A
feature fusion layer is then constructed to integrate the dual-channel
features, enabling multi-scale analysis of the extracted fault
features. Finally, a convolutional neural network (CNN) model incorporating a
residual structure is developed to conduct deep feature extraction from the
fused feature maps. The extracted features are subsequently fed into a Global
Average Pooling (GAP) layer and a classification function for fault classification.
Conducting comparative experiments on different datasets, the proposed method
is demonstrated to effectively meet the requirements of end-to-end fault
diagnosis for gears.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:40:06 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Qiao",
"Bowei",
""
],
[
"Wang",
"Hongwei",
""
]
] | TITLE: An End-to-End Comprehensive Gear Fault Diagnosis Method Based on
Multi-Scale Feature-Level Fusion Strategy
ABSTRACT: To satisfy the requirements of end-to-end fault diagnosis of gears, this
paper proposes an integrated intelligent fault diagnosis method for gears
using acceleration signals, based on Gabor-based Adaptive Short-Time Fourier
Transform (Gabor-ASTFT) and Dual-Tree Complex Wavelet Transform (DTCWT)
algorithms, a dilated residual structure, and a feature fusion layer.
Initially, the raw one-dimensional acceleration signals collected
from the gearbox base using vibration sensors undergo pre-segmentation
processing. The Gabor-ASTFT and DTCWT are then applied to convert the original
one-dimensional time-domain signals into two-dimensional time-frequency
representations, facilitating the preliminary extraction of fault features and
obtaining weak feature maps. Subsequently, a dual-channel structure is
established using deconvolution and dilated convolution to perform upsampling
and downsampling on the feature maps, adjusting their sizes accordingly. A
feature fusion layer is then constructed to integrate the dual-channel
features, enabling multi-scale analysis of the extracted fault
features. Finally, a convolutional neural network (CNN) model incorporating a
residual structure is developed to conduct deep feature extraction from the
fused feature maps. The extracted features are subsequently fed into a Global
Average Pooling (GAP) layer and a classification function for fault classification.
Conducting comparative experiments on different datasets, the proposed method
is demonstrated to effectively meet the requirements of end-to-end fault
diagnosis for gears.
|
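Illustrative note for the gear fault diagnosis entry above: the core preprocessing idea is converting a one-dimensional vibration segment into a two-dimensional time-frequency map that a CNN can consume. The sketch below uses a plain STFT from SciPy as a stand-in; the Gabor-ASTFT and DTCWT transforms used in the paper are more elaborate, and the sampling rate, window, and toy signal here are assumptions.

    import numpy as np
    from scipy.signal import stft

    fs = 12_000                                  # assumed sampling rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    signal = np.sin(2 * np.pi * 300 * t) + 0.1 * np.random.randn(t.size)  # toy vibration segment

    # 1D time series -> 2D time-frequency magnitude map
    f, tau, Z = stft(signal, fs=fs, window="hann", nperseg=256, noverlap=192)
    tf_map = np.abs(Z)                           # shape (freq_bins, time_frames), input to a 2D CNN
    print(tf_map.shape)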
2503.23890 | Julius Beerwerth | Julius Beerwerth and Bassam Alrifaee | Less is More: Contextual Sampling for Nonlinear Data-Enabled Predictive
Control | Submitted to IROS 2025 on March 1st | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-enabled Predictive Control (DeePC) is a powerful data-driven approach
for predictive control without requiring an explicit system model. However, its
high computational cost limits its applicability to real-time robotic systems.
For robotic applications such as motion planning and trajectory tracking,
real-time control is crucial. Nonlinear DeePC relies on either large datasets
or learning the nonlinearities to ensure predictive accuracy, leading to high
computational complexity. This work introduces contextual sampling, a novel
data selection strategy to handle nonlinearities for DeePC by dynamically
selecting the most relevant data at each time step. By reducing the dataset
size while preserving prediction accuracy, our method improves the computational
efficiency of DeePC for real-time robotic applications. We validate our
approach for autonomous vehicle motion planning. For a dataset size of 100
sub-trajectories, Contextual sampling DeePC reduces tracking error by 53.2 %
compared to Leverage Score sampling. Additionally, Contextual sampling reduces
max computation time by 87.2 % compared to using the full dataset of 491
sub-trajectories while achieving comparable tracking performance. These results
highlight the potential of Contextual sampling to enable real-time, data-driven
control for robotic systems.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:41:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Beerwerth",
"Julius",
""
],
[
"Alrifaee",
"Bassam",
""
]
] | TITLE: Less is More: Contextual Sampling for Nonlinear Data-Enabled Predictive
Control
ABSTRACT: Data-enabled Predictive Control (DeePC) is a powerful data-driven approach
for predictive control without requiring an explicit system model. However, its
high computational cost limits its applicability to real-time robotic systems.
For robotic applications such as motion planning and trajectory tracking,
real-time control is crucial. Nonlinear DeePC relies on either large datasets
or learning the nonlinearities to ensure predictive accuracy, leading to high
computational complexity. This work introduces contextual sampling, a novel
data selection strategy to handle nonlinearities for DeePC by dynamically
selecting the most relevant data at each time step. By reducing the dataset
size while preserving prediction accuracy, our method improves the computational
efficiency of DeePC for real-time robotic applications. We validate our
approach for autonomous vehicle motion planning. For a dataset size of 100
sub-trajectories, Contextual sampling DeePC reduces tracking error by 53.2 %
compared to Leverage Score sampling. Additionally, Contextual sampling reduces
max computation time by 87.2 % compared to using the full dataset of 491
sub-trajectories while achieving comparable tracking performance. These results
highlight the potential of Contextual sampling to enable real-time, data-driven
control for robotic systems.
|
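Illustrative note for the contextual sampling entry above: the core idea, selecting at every control step the sub-trajectories most relevant to the current operating condition, can be sketched as a nearest-neighbour lookup over a library of sub-trajectories. The data layout, distance measure, and field names below are assumptions for illustration, not the paper's formulation.

    import numpy as np

    def select_contextual_subtrajectories(library, context, k=100):
        """Pick the k sub-trajectories whose stored context is closest to the current one.

        library: list of dicts with keys 'context' (feature vector) and 'data' (trajectory data).
        context: current measured state/input window used as the query.
        """
        dists = np.array([np.linalg.norm(item["context"] - context) for item in library])
        best = np.argsort(dists)[:k]  # smallest distance = most relevant
        return [library[i] for i in best]

    # toy usage with a random library of 491 sub-trajectories and a 4-dimensional context
    rng = np.random.default_rng(0)
    lib = [{"context": rng.normal(size=4), "data": rng.normal(size=(4, 20))} for _ in range(491)]
    subset = select_contextual_subtrajectories(lib, context=np.zeros(4), k=100)
    print(len(subset))  # -> 100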
2503.23895 | Yuqiao Tan | Yuqiao Tan, Shizhu He, Huanxuan Liao, Jun Zhao, Kang Liu | Better wit than wealth: Dynamic Parametric Retrieval Augmented
Generation for Test-time Knowledge Enhancement | preprint | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-augmented generation (RAG) enhances large language models (LLMs) by
retrieving relevant documents from external sources and incorporating them into
the context. While it improves reliability by providing factual texts, it
significantly increases inference costs as context length grows and introduces
the challenging issue of RAG hallucination, primarily caused by the lack of
corresponding parametric knowledge in LLMs. An efficient solution is to enhance
the knowledge of LLMs at test-time. Parametric RAG (PRAG) addresses this by
embedding documents into the LLM's parameters to perform test-time knowledge
enhancement, effectively reducing inference costs through offline training.
However, its high training and storage costs, along with limited generalization
ability, significantly restrict its practical adoption. To address these
challenges, we propose Dynamic Parametric RAG (DyPRAG), a novel framework that
leverages a lightweight parameter translator model to efficiently convert
documents into parametric knowledge. DyPRAG not only reduces inference,
training, and storage costs but also dynamically generates parametric
knowledge, seamlessly enhancing the knowledge of LLMs and resolving knowledge
conflicts in a plug-and-play manner at test-time. Extensive experiments on
multiple datasets demonstrate the effectiveness and generalization capabilities
of DyPRAG, offering a powerful and practical RAG paradigm which enables
superior knowledge fusion and mitigates RAG hallucination in real-world
applications. Our code is available at https://github.com/Trae1ounG/DyPRAG.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:46:35 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tan",
"Yuqiao",
""
],
[
"He",
"Shizhu",
""
],
[
"Liao",
"Huanxuan",
""
],
[
"Zhao",
"Jun",
""
],
[
"Liu",
"Kang",
""
]
] | TITLE: Better wit than wealth: Dynamic Parametric Retrieval Augmented
Generation for Test-time Knowledge Enhancement
ABSTRACT: Retrieval-augmented generation (RAG) enhances large language models (LLMs) by
retrieving relevant documents from external sources and incorporating them into
the context. While it improves reliability by providing factual texts, it
significantly increases inference costs as context length grows and introduces
the challenging issue of RAG hallucination, primarily caused by the lack of
corresponding parametric knowledge in LLMs. An efficient solution is to enhance
the knowledge of LLMs at test-time. Parametric RAG (PRAG) addresses this by
embedding documents into the LLM's parameters to perform test-time knowledge
enhancement, effectively reducing inference costs through offline training.
However, its high training and storage costs, along with limited generalization
ability, significantly restrict its practical adoption. To address these
challenges, we propose Dynamic Parametric RAG (DyPRAG), a novel framework that
leverages a lightweight parameter translator model to efficiently convert
documents into parametric knowledge. DyPRAG not only reduces inference,
training, and storage costs but also dynamically generates parametric
knowledge, seamlessly enhancing the knowledge of LLMs and resolving knowledge
conflicts in a plug-and-play manner at test-time. Extensive experiments on
multiple datasets demonstrate the effectiveness and generalization capabilities
of DyPRAG, offering a powerful and practical RAG paradigm which enables
superior knowledge fusion and mitigates RAG hallucination in real-world
applications. Our code is available at https://github.com/Trae1ounG/DyPRAG.
|
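Illustrative note for the DyPRAG entry above: a parameter translator that turns a retrieved document into parametric knowledge can be pictured as a small hypernetwork mapping a document embedding to a low-rank weight update merged into the LLM at test time. The sketch below is a speculative toy version of that idea; the dimensions, architecture, and LoRA-style update are assumptions, not the paper's design.

    import torch
    import torch.nn as nn

    class DocToLoRA(nn.Module):
        """Toy translator: document embedding -> rank-r update for one weight matrix."""
        def __init__(self, emb_dim=768, hidden=256, out_dim=1024, in_dim=1024, rank=8):
            super().__init__()
            self.rank, self.out_dim, self.in_dim = rank, out_dim, in_dim
            self.net = nn.Sequential(
                nn.Linear(emb_dim, hidden), nn.GELU(),
                nn.Linear(hidden, rank * (out_dim + in_dim)),
            )

        def forward(self, doc_emb):
            flat = self.net(doc_emb)
            a, b = flat.split([self.rank * self.out_dim, self.rank * self.in_dim], dim=-1)
            A = a.view(-1, self.out_dim, self.rank)   # (batch, out_dim, r)
            B = b.view(-1, self.rank, self.in_dim)    # (batch, r, in_dim)
            return A @ B                              # low-rank delta added to a frozen layer

    translator = DocToLoRA()
    doc_emb = torch.randn(1, 768)        # embedding of one retrieved document
    delta_w = translator(doc_emb)        # (1, 1024, 1024) test-time weight update
    print(delta_w.shape)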
2503.23898 | Zhenyu Yang | Rihui Zhang, Haiming Zhu, Jingtong Zhao, Lei Zhang, Fang-Fang Yin,
Chunhao Wang and Zhenyu Yang | An Explainable Neural Radiomic Sequence Model with Spatiotemporal
Continuity for Quantifying 4DCT-based Pulmonary Ventilation | 43 pages, 13 figures | null | null | null | physics.med-ph cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate evaluation of regional lung ventilation is essential for the
management and treatment of lung cancer patients, supporting assessments of
pulmonary function, optimization of therapeutic strategies, and monitoring of
treatment response. Currently, ventilation scintigraphy using nuclear medicine
techniques is widely employed in clinical practice; however, it is often
time-consuming, costly, and entails additional radiation exposure. In this
study, we propose an explainable neural radiomic sequence model to identify
regions of compromised pulmonary ventilation based on four-dimensional computed
tomography (4DCT). A cohort of 45 lung cancer patients from the VAMPIRE dataset
was analyzed. For each patient, lung volumes were segmented from 4DCT, and
voxel-wise radiomic features (56-dimensional) were extracted across the
respiratory cycle to capture local intensity and texture dynamics, forming
temporal radiomic sequences. Ground truth ventilation defects were delineated
voxel-wise using Galligas-PET and DTPA-SPECT. To identify compromised regions,
we developed a temporal saliency-enhanced explainable long short-term memory
(LSTM) network trained on the radiomic sequences. Temporal saliency maps were
generated to highlight key features contributing to the model's predictions.
The proposed model demonstrated robust performance, achieving average (range)
Dice similarity coefficients of 0.78 (0.74-0.79) for 25 PET cases and 0.78
(0.74-0.82) for 20 SPECT cases. The temporal saliency map explained three key
radiomic sequences in ventilation quantification: during lung exhalation,
compromised pulmonary function regions typically exhibit (1) an increasing
trend of intensity and (2) a decreasing trend of homogeneity, in contrast to
healthy lung tissue.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:47:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Rihui",
""
],
[
"Zhu",
"Haiming",
""
],
[
"Zhao",
"Jingtong",
""
],
[
"Zhang",
"Lei",
""
],
[
"Yin",
"Fang-Fang",
""
],
[
"Wang",
"Chunhao",
""
],
[
"Yang",
"Zhenyu",
""
]
] | TITLE: An Explainable Neural Radiomic Sequence Model with Spatiotemporal
Continuity for Quantifying 4DCT-based Pulmonary Ventilation
ABSTRACT: Accurate evaluation of regional lung ventilation is essential for the
management and treatment of lung cancer patients, supporting assessments of
pulmonary function, optimization of therapeutic strategies, and monitoring of
treatment response. Currently, ventilation scintigraphy using nuclear medicine
techniques is widely employed in clinical practice; however, it is often
time-consuming, costly, and entails additional radiation exposure. In this
study, we propose an explainable neural radiomic sequence model to identify
regions of compromised pulmonary ventilation based on four-dimensional computed
tomography (4DCT). A cohort of 45 lung cancer patients from the VAMPIRE dataset
was analyzed. For each patient, lung volumes were segmented from 4DCT, and
voxel-wise radiomic features (56-dimensional) were extracted across the
respiratory cycle to capture local intensity and texture dynamics, forming
temporal radiomic sequences. Ground truth ventilation defects were delineated
voxel-wise using Galligas-PET and DTPA-SPECT. To identify compromised regions,
we developed a temporal saliency-enhanced explainable long short-term memory
(LSTM) network trained on the radiomic sequences. Temporal saliency maps were
generated to highlight key features contributing to the model's predictions.
The proposed model demonstrated robust performance, achieving average (range)
Dice similarity coefficients of 0.78 (0.74-0.79) for 25 PET cases and 0.78
(0.74-0.82) for 20 SPECT cases. The temporal saliency map explained three key
radiomic sequences in ventilation quantification: during lung exhalation,
compromised pulmonary function regions typically exhibit (1) an increasing
trend of intensity and (2) a decreasing trend of homogeneity, in contrast to
healthy lung tissue.
|
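Illustrative note for the radiomic sequence entry above: temporal saliency maps of the kind described there are commonly obtained by taking the gradient of the model output with respect to the input sequence. The sketch below shows that generic gradient-saliency recipe for an LSTM in PyTorch; the network sizes and sequence length are placeholders, not the paper's saliency-enhanced architecture.

    import torch
    import torch.nn as nn

    class RadiomicLSTM(nn.Module):
        def __init__(self, n_features=56, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)       # voxel-wise defect score

        def forward(self, x):                      # x: (batch, time, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])           # prediction from the last time step

    model = RadiomicLSTM()
    seq = torch.randn(1, 10, 56, requires_grad=True)  # one voxel's sequence over 10 phases
    model(seq).sum().backward()
    saliency = seq.grad.abs().squeeze(0)              # (time, feature) temporal saliency map
    print(saliency.shape)                             # -> torch.Size([10, 56])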
2503.23899 | Gabrielle Gaudeau | Diana Galvan-Sosa, Gabrielle Gaudeau, Pride Kavumba, Yunmeng Li,
Hongyi gu, Zheng Yuan, Keisuke Sakaguchi, Paula Buttery | Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the
CUBE dataset | 9 main pages (21 appendix pages), 7 figures, submitted to ACL 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance and usability of Large-Language Models (LLMs) are driving
their use in explanation generation tasks. However, despite their widespread
adoption, LLM explanations have been found to be unreliable, making it
difficult for users to distinguish good from bad explanations. To address this
issue, we present Rubrik's CUBE, an education-inspired rubric and a dataset of
26k explanations, written and later quality-annotated using the rubric by both
humans and six open- and closed-source LLMs. The CUBE dataset focuses on two
reasoning and two language tasks, providing the necessary diversity for us to
effectively test our proposed rubric. Using Rubrik, we find that explanations
are influenced by both task and perceived difficulty. Low quality stems
primarily from a lack of conciseness in LLM-generated explanations, rather than
cohesion and word choice. The full dataset, rubric, and code will be made
available upon acceptance.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:48:59 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Galvan-Sosa",
"Diana",
""
],
[
"Gaudeau",
"Gabrielle",
""
],
[
"Kavumba",
"Pride",
""
],
[
"Li",
"Yunmeng",
""
],
[
"gu",
"Hongyi",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Sakaguchi",
"Keisuke",
""
],
[
"Buttery",
"Paula",
""
]
] | TITLE: Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the
CUBE dataset
ABSTRACT: The performance and usability of Large-Language Models (LLMs) are driving
their use in explanation generation tasks. However, despite their widespread
adoption, LLM explanations have been found to be unreliable, making it
difficult for users to distinguish good from bad explanations. To address this
issue, we present Rubrik's CUBE, an education-inspired rubric and a dataset of
26k explanations, written and later quality-annotated using the rubric by both
humans and six open- and closed-source LLMs. The CUBE dataset focuses on two
reasoning and two language tasks, providing the necessary diversity for us to
effectively test our proposed rubric. Using Rubrik, we find that explanations
are influenced by both task and perceived difficulty. Low quality stems
primarily from a lack of conciseness in LLM-generated explanations, rather than
cohesion and word choice. The full dataset, rubric, and code will be made
available upon acceptance.
|
2503.23905 | Qihan Huang | Qihan Huang, Long Chan, Jinlong Liu, Wanggui He, Hao Jiang, Mingli
Song, Jingyuan Chen, Chang Yao, Jie Song | Boosting MLLM Reasoning with Text-Debiased Hint-GRPO | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | MLLM reasoning has drawn widespread research for its excellent
problem-solving capability. Current reasoning methods fall into two types: PRM,
which supervises the intermediate reasoning steps, and ORM, which supervises
the final results. Recently, DeepSeek-R1, which demonstrates strong
generalization performance using an ORM method (i.e., GRPO), has challenged the
traditional view that PRM outperforms ORM. However, current GRPO algorithms
for MLLMs still
struggle to handle challenging and complex multimodal reasoning tasks (e.g.,
mathematical reasoning). In this work, we reveal two problems that impede the
performance of GRPO on the MLLM: Low data utilization and Text-bias. Low data
utilization means that GRPO cannot acquire positive rewards to update the
MLLM on difficult samples, and text-bias is a phenomenon in which the MLLM
bypasses the image condition and relies solely on the text condition for
generation after GRPO
training. To tackle these problems, this work proposes Hint-GRPO that improves
data utilization by adaptively providing hints for samples of varying
difficulty, and text-bias calibration that mitigates text-bias by calibrating
the token prediction logits with the image condition at test time. Experiment
results on three base MLLMs across eleven datasets demonstrate that our
proposed methods advance the reasoning capability of the original MLLM by a large
margin, exhibiting superior performance to existing MLLM reasoning methods. Our
code is available at https://github.com/hqhQAQ/Hint-GRPO.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:54:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Huang",
"Qihan",
""
],
[
"Chan",
"Long",
""
],
[
"Liu",
"Jinlong",
""
],
[
"He",
"Wanggui",
""
],
[
"Jiang",
"Hao",
""
],
[
"Song",
"Mingli",
""
],
[
"Chen",
"Jingyuan",
""
],
[
"Yao",
"Chang",
""
],
[
"Song",
"Jie",
""
]
] | TITLE: Boosting MLLM Reasoning with Text-Debiased Hint-GRPO
ABSTRACT: MLLM reasoning has drawn widespread research for its excellent
problem-solving capability. Current reasoning methods fall into two types: PRM,
which supervises the intermediate reasoning steps, and ORM, which supervises
the final results. Recently, DeepSeek-R1, which demonstrates strong
generalization performance using an ORM method (i.e., GRPO), has challenged the
traditional view that PRM outperforms ORM. However, current GRPO algorithms
for MLLMs still
struggle to handle challenging and complex multimodal reasoning tasks (e.g.,
mathematical reasoning). In this work, we reveal two problems that impede the
performance of GRPO on the MLLM: Low data utilization and Text-bias. Low data
utilization means that GRPO cannot acquire positive rewards to update the
MLLM on difficult samples, and text-bias is a phenomenon in which the MLLM
bypasses the image condition and relies solely on the text condition for
generation after GRPO
training. To tackle these problems, this work proposes Hint-GRPO that improves
data utilization by adaptively providing hints for samples of varying
difficulty, and text-bias calibration that mitigates text-bias by calibrating
the token prediction logits with the image condition at test time. Experiment
results on three base MLLMs across eleven datasets demonstrate that our
proposed methods advance the reasoning capability of the original MLLM by a large
margin, exhibiting superior performance to existing MLLM reasoning methods. Our
code is available at https://github.com/hqhQAQ/Hint-GRPO.
|
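Illustrative note for the Hint-GRPO entry above: the abstract describes calibrating token prediction logits with the image condition at test time. One simple way such a calibration could look is a contrastive correction that amplifies whatever the image adds on top of a text-only prediction; the sketch below is a guess at that general scheme, not the paper's formula, and alpha is an assumed hyperparameter.

    import torch

    def calibrate_logits(logits_img_text, logits_text_only, alpha=0.5):
        """Test-time logit calibration against text bias.

        logits_img_text : logits computed with both image and text conditions.
        logits_text_only: logits computed with the text condition alone.
        The correction boosts the contribution attributable to the image.
        """
        return logits_img_text + alpha * (logits_img_text - logits_text_only)

    # toy usage over a vocabulary of 5 tokens
    l_it = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])
    l_t = torch.tensor([1.8, 1.2, 0.4, 0.2, -0.9])
    print(calibrate_logits(l_it, l_t))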
2503.23907 | Zhichao Liao | Zhichao Liao, Xiaokun Liu, Wenyu Qin, Qingyu Li, Qiulin Wang, Pengfei
Wan, Di Zhang, Long Zeng, Pingfa Feng | HumanAesExpert: Advancing a Multi-Modality Foundation Model for Human
Image Aesthetic Assessment | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image Aesthetic Assessment (IAA) is a long-standing and challenging research
task. However, its subset, Human Image Aesthetic Assessment (HIAA), has been
scarcely explored, even though HIAA is widely used in social media, AI
workflows, and related domains. To bridge this research gap, our work pioneers
a holistic implementation framework tailored for HIAA. Specifically, we
introduce HumanBeauty, the first dataset purpose-built for HIAA, which
comprises 108k high-quality human images with manual annotations. To achieve
comprehensive and fine-grained HIAA, 50K human images are manually collected
through a rigorous curation process and annotated leveraging our trailblazing
12-dimensional aesthetic standard, while the remaining 58K with overall
aesthetic labels are systematically filtered from public datasets. Based on the
HumanBeauty database, we propose HumanAesExpert, a powerful Vision Language
Model for aesthetic evaluation of human images. We innovatively design an
Expert head to incorporate human knowledge of aesthetic sub-dimensions while
jointly utilizing the Language Modeling (LM) and Regression head. This approach
empowers our model to achieve superior proficiency in both overall and
fine-grained HIAA. Furthermore, we introduce a MetaVoter, which aggregates
scores from all three heads, to effectively balance the capabilities of each
head, thereby realizing improved assessment precision. Extensive experiments
demonstrate that our HumanAesExpert models deliver significantly better
performance in HIAA than other state-of-the-art models. Our datasets, models,
and codes are publicly released to advance the HIAA community. Project webpage:
https://humanaesexpert.github.io/HumanAesExpert/
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 09:58:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liao",
"Zhichao",
""
],
[
"Liu",
"Xiaokun",
""
],
[
"Qin",
"Wenyu",
""
],
[
"Li",
"Qingyu",
""
],
[
"Wang",
"Qiulin",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zhang",
"Di",
""
],
[
"Zeng",
"Long",
""
],
[
"Feng",
"Pingfa",
""
]
] | TITLE: HumanAesExpert: Advancing a Multi-Modality Foundation Model for Human
Image Aesthetic Assessment
ABSTRACT: Image Aesthetic Assessment (IAA) is a long-standing and challenging research
task. However, its subset, Human Image Aesthetic Assessment (HIAA), has been
scarcely explored, even though HIAA is widely used in social media, AI
workflows, and related domains. To bridge this research gap, our work pioneers
a holistic implementation framework tailored for HIAA. Specifically, we
introduce HumanBeauty, the first dataset purpose-built for HIAA, which
comprises 108k high-quality human images with manual annotations. To achieve
comprehensive and fine-grained HIAA, 50K human images are manually collected
through a rigorous curation process and annotated leveraging our trailblazing
12-dimensional aesthetic standard, while the remaining 58K with overall
aesthetic labels are systematically filtered from public datasets. Based on the
HumanBeauty database, we propose HumanAesExpert, a powerful Vision Language
Model for aesthetic evaluation of human images. We innovatively design an
Expert head to incorporate human knowledge of aesthetic sub-dimensions while
jointly utilizing the Language Modeling (LM) and Regression head. This approach
empowers our model to achieve superior proficiency in both overall and
fine-grained HIAA. Furthermore, we introduce a MetaVoter, which aggregates
scores from all three heads, to effectively balance the capabilities of each
head, thereby realizing improved assessment precision. Extensive experiments
demonstrate that our HumanAesExpert models deliver significantly better
performance in HIAA than other state-of-the-art models. Our datasets, models,
and codes are publicly released to advance the HIAA community. Project webpage:
https://humanaesexpert.github.io/HumanAesExpert/
|
2503.23911 | Ruisheng Han | Ruisheng Han, Kanglei Zhou, Amir Atapour-Abarghouei, Xiaohui Liang,
Hubert P. H. Shum | FineCausal: A Causal-Based Framework for Interpretable Fine-Grained
Action Quality Assessment | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action quality assessment (AQA) is critical for evaluating athletic
performance, informing training strategies, and ensuring safety in competitive
sports. However, existing deep learning approaches often operate as black boxes
and are vulnerable to spurious correlations, limiting both their reliability
and interpretability. In this paper, we introduce FineCausal, a novel
causal-based framework that achieves state-of-the-art performance on the
FineDiving-HM dataset. Our approach leverages a Graph Attention Network-based
causal intervention module to disentangle human-centric foreground cues from
background confounders, and incorporates a temporal causal attention module to
capture fine-grained temporal dependencies across action stages. This
dual-module strategy enables FineCausal to generate detailed spatio-temporal
representations that not only achieve state-of-the-art scoring performance but
also provide transparent, interpretable feedback on which features drive the
assessment. Despite its strong performance, FineCausal requires extensive
expert knowledge to define causal structures and depends on high-quality
annotations, challenges that we discuss and address as future research
directions. Code is available at https://github.com/Harrison21/FineCausal.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 10:02:29 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Han",
"Ruisheng",
""
],
[
"Zhou",
"Kanglei",
""
],
[
"Atapour-Abarghouei",
"Amir",
""
],
[
"Liang",
"Xiaohui",
""
],
[
"Shum",
"Hubert P. H.",
""
]
] | TITLE: FineCausal: A Causal-Based Framework for Interpretable Fine-Grained
Action Quality Assessment
ABSTRACT: Action quality assessment (AQA) is critical for evaluating athletic
performance, informing training strategies, and ensuring safety in competitive
sports. However, existing deep learning approaches often operate as black boxes
and are vulnerable to spurious correlations, limiting both their reliability
and interpretability. In this paper, we introduce FineCausal, a novel
causal-based framework that achieves state-of-the-art performance on the
FineDiving-HM dataset. Our approach leverages a Graph Attention Network-based
causal intervention module to disentangle human-centric foreground cues from
background confounders, and incorporates a temporal causal attention module to
capture fine-grained temporal dependencies across action stages. This
dual-module strategy enables FineCausal to generate detailed spatio-temporal
representations that not only achieve state-of-the-art scoring performance but
also provide transparent, interpretable feedback on which features drive the
assessment. Despite its strong performance, FineCausal requires extensive
expert knowledge to define causal structures and depends on high-quality
annotations, challenges that we discuss and address as future research
directions. Code is available at https://github.com/Harrison21/FineCausal.
|
2503.23930 | Tang Jiankai | Jiankai Tang, Jiacheng Liu, Renling Tong, Kai Zhu, Zhe Li, Xin Yi,
Junliang Xing, Yuanchun Shi, Yuntao Wang | Exploring Reliable PPG Authentication on Smartwatches in Daily Scenarios | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Photoplethysmography (PPG) sensors, widely deployed in smartwatches, offer a
simple and non-invasive authentication approach for daily use. However, PPG
authentication faces reliability issues due to motion artifacts from physical
activity and physiological variability over time. To address these challenges,
we propose MTL-RAPID, an efficient and reliable PPG authentication model that
employs a multitask joint training strategy, simultaneously assessing signal
quality and verifying user identity. The joint optimization of these two tasks
in MTL-RAPID results in a structure that outperforms models trained on
individual tasks separately, achieving stronger performance with fewer
parameters. In our comprehensive user studies regarding motion artifacts (N =
30), time variations (N = 32), and user preferences (N = 16), MTL-RAPID
achieves a best AUC of 99.2\% and an EER of 3.5\%, outperforming existing
baselines. We open-source our PPG authentication dataset along with the
MTL-RAPID model to facilitate future research on GitHub.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 10:25:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tang",
"Jiankai",
""
],
[
"Liu",
"Jiacheng",
""
],
[
"Tong",
"Renling",
""
],
[
"Zhu",
"Kai",
""
],
[
"Li",
"Zhe",
""
],
[
"Yi",
"Xin",
""
],
[
"Xing",
"Junliang",
""
],
[
"Shi",
"Yuanchun",
""
],
[
"Wang",
"Yuntao",
""
]
] | TITLE: Exploring Reliable PPG Authentication on Smartwatches in Daily Scenarios
ABSTRACT: Photoplethysmography (PPG) sensors, widely deployed in smartwatches, offer a
simple and non-invasive authentication approach for daily use. However, PPG
authentication faces reliability issues due to motion artifacts from physical
activity and physiological variability over time. To address these challenges,
we propose MTL-RAPID, an efficient and reliable PPG authentication model that
employs a multitask joint training strategy, simultaneously assessing signal
quality and verifying user identity. The joint optimization of these two tasks
in MTL-RAPID results in a structure that outperforms models trained on
individual tasks separately, achieving stronger performance with fewer
parameters. In our comprehensive user studies regarding motion artifacts (N =
30), time variations (N = 32), and user preferences (N = 16), MTL-RAPID
achieves a best AUC of 99.2\% and an EER of 3.5\%, outperforming existing
baselines. We open-source our PPG authentication dataset along with the
MTL-RAPID model to facilitate future research on GitHub.
|
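Illustrative note for the PPG authentication entry above: a multitask joint training strategy that simultaneously scores signal quality and verifies identity can be pictured as a shared encoder with two heads trained on a weighted sum of losses. The architecture sizes, loss weighting, and toy labels below are assumptions for illustration, not the MTL-RAPID design.

    import torch
    import torch.nn as nn

    class MultiTaskPPG(nn.Module):
        def __init__(self, sig_len=256, n_users=30):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(sig_len, 128), nn.ReLU(), nn.Linear(128, 64))
            self.quality_head = nn.Linear(64, 1)          # signal-quality logit
            self.identity_head = nn.Linear(64, n_users)   # user classification logits

        def forward(self, x):
            z = self.encoder(x)
            return self.quality_head(z), self.identity_head(z)

    model = MultiTaskPPG()
    ppg = torch.randn(8, 256)                   # a batch of PPG segments
    q_label = torch.rand(8, 1).round()          # 1 = clean segment, 0 = motion-corrupted
    id_label = torch.randint(0, 30, (8,))
    q_logit, id_logit = model(ppg)
    loss = nn.BCEWithLogitsLoss()(q_logit, q_label) + nn.CrossEntropyLoss()(id_logit, id_label)
    loss.backward()
    print(float(loss))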
2503.23934 | Ioannis Mavromatis Dr | Adri\'an S\'anchez-Momp\'o and Ioannis Mavromatis and Peizheng Li and
Konstantinos Katsaros and Aftab Khan | Green MLOps to Green GenOps: An Empirical Study of Energy Consumption in
Discriminative and Generative AI Operations | Published to MDPI Information - Artificial Intelligence Section | null | 10.3390/info16040281 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study presents an empirical investigation into the energy consumption of
Discriminative and Generative AI models within real-world MLOps pipelines. For
Discriminative models, we examine various architectures and hyperparameters
during training and inference and identify energy-efficient practices. For
Generative AI, Large Language Models (LLMs) are assessed, focusing primarily on
energy consumption across different model sizes and varying service requests.
Our study employs software-based power measurements, ensuring ease of
replication across diverse configurations, models, and datasets. We analyse
multiple models and hardware setups to uncover correlations among various
metrics, identifying key contributors to energy consumption. The results
indicate that for Discriminative models, optimising architectures,
hyperparameters, and hardware can significantly reduce energy consumption
without sacrificing performance. For LLMs, energy efficiency depends on
balancing model size, reasoning complexity, and request-handling capacity, as
larger models do not necessarily consume more energy when utilisation remains
low. This analysis provides practical guidelines for designing green and
sustainable ML operations, emphasising energy consumption and carbon footprint
reductions while maintaining performance. This paper can serve as a benchmark
for accurately estimating total energy use across different types of AI models.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 10:28:04 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sánchez-Mompó",
"Adrián",
""
],
[
"Mavromatis",
"Ioannis",
""
],
[
"Li",
"Peizheng",
""
],
[
"Katsaros",
"Konstantinos",
""
],
[
"Khan",
"Aftab",
""
]
] | TITLE: Green MLOps to Green GenOps: An Empirical Study of Energy Consumption in
Discriminative and Generative AI Operations
ABSTRACT: This study presents an empirical investigation into the energy consumption of
Discriminative and Generative AI models within real-world MLOps pipelines. For
Discriminative models, we examine various architectures and hyperparameters
during training and inference and identify energy-efficient practices. For
Generative AI, Large Language Models (LLMs) are assessed, focusing primarily on
energy consumption across different model sizes and varying service requests.
Our study employs software-based power measurements, ensuring ease of
replication across diverse configurations, models, and datasets. We analyse
multiple models and hardware setups to uncover correlations among various
metrics, identifying key contributors to energy consumption. The results
indicate that for Discriminative models, optimising architectures,
hyperparameters, and hardware can significantly reduce energy consumption
without sacrificing performance. For LLMs, energy efficiency depends on
balancing model size, reasoning complexity, and request-handling capacity, as
larger models do not necessarily consume more energy when utilisation remains
low. This analysis provides practical guidelines for designing green and
sustainable ML operations, emphasising energy consumption and carbon footprint
reductions while maintaining performance. This paper can serve as a benchmark
for accurately estimating total energy use across different types of AI models.
|
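Illustrative note for the Green MLOps entry above: software-based energy and emissions measurement of a training or inference run can be done with off-the-shelf trackers such as the codecarbon library; this is one example of such tooling and not necessarily the instrumentation used in the study, and the project name and stand-in workload are assumptions.

    import time
    from codecarbon import EmissionsTracker   # software-based power/emissions tracker

    tracker = EmissionsTracker(project_name="demo-inference")
    tracker.start()
    # ... place the training or inference workload being measured here ...
    time.sleep(1)                             # stand-in workload for this example
    emissions_kg = tracker.stop()             # estimated CO2-equivalent for the measured span
    print(f"estimated emissions: {emissions_kg:.6f} kg CO2eq")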
2503.23949 | Florian Bayer | Florian Bayer and Christian Rathgeb | AMB-FHE: Adaptive Multi-biometric Fusion with Fully Homomorphic
Encryption | null | null | null | null | cs.CR cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Biometric systems strive to balance security and usability. The use of
multi-biometric systems combining multiple biometric modalities is usually
recommended for high-security applications. However, the presentation of
multiple biometric modalities can impair the user-friendliness of the overall
system and might not be necessary in all cases. In this work, we present a
simple but flexible approach to increase the privacy protection of
homomorphically encrypted multi-biometric reference templates while enabling
adaptation to security requirements at run-time: An adaptive multi-biometric
fusion with fully homomorphic encryption (AMB-FHE). AMB-FHE is benchmarked
against a bimodal biometric database consisting of the CASIA iris and MCYT
fingerprint datasets using deep neural networks for feature extraction. Our
contribution is easy to implement and increases the flexibility of biometric
authentication while offering increased privacy protection through joint
encryption of templates from multiple modalities.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:00:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Bayer",
"Florian",
""
],
[
"Rathgeb",
"Christian",
""
]
] | TITLE: AMB-FHE: Adaptive Multi-biometric Fusion with Fully Homomorphic
Encryption
ABSTRACT: Biometric systems strive to balance security and usability. The use of
multi-biometric systems combining multiple biometric modalities is usually
recommended for high-security applications. However, the presentation of
multiple biometric modalities can impair the user-friendliness of the overall
system and might not be necessary in all cases. In this work, we present a
simple but flexible approach to increase the privacy protection of
homomorphically encrypted multi-biometric reference templates while enabling
adaptation to security requirements at run-time: An adaptive multi-biometric
fusion with fully homomorphic encryption (AMB-FHE). AMB-FHE is benchmarked
against a bimodal biometric database consisting of the CASIA iris and MCYT
fingerprint datasets using deep neural networks for feature extraction. Our
contribution is easy to implement and increases the flexibility of biometric
authentication while offering increased privacy protection through joint
encryption of templates from multiple modalities.
|
2503.23958 | Amirreza Mahbod | Nima Torbati, Anastasia Meshcheryakova, Diana Mechtcheriakova,
Amirreza Mahbod | A Multi-Stage Auto-Context Deep Learning Framework for Tissue and Nuclei
Segmentation and Classification in H&E-Stained Histological Images of
Advanced Melanoma | 15 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Melanoma is the most lethal form of skin cancer, with an increasing incidence
rate worldwide. Analyzing histological images of melanoma by localizing and
classifying tissues and cell nuclei is considered the gold standard method for
diagnosis and treatment options for patients. While many computerized
approaches have been proposed for automatic analysis, most perform tissue-based
analysis and nuclei (cell)-based analysis as separate tasks, which might be
suboptimal.
In this work, using the PUMA challenge dataset, we proposed a novel
multi-stage deep learning approach by combining tissue and nuclei information
in a unified framework based on the auto-context concept to perform
segmentation and classification in histological images of melanoma. Through
pre-training and further post-processing, our approach achieved second and
first place rankings in the PUMA challenge, with average micro Dice tissue
score and summed nuclei F1-score of 73.40% for Track 1 and 63.48% for Track 2,
respectively. Our implementation for training and testing is available at:
https://github.com/NimaTorbati/PumaSubmit
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:15:50 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Torbati",
"Nima",
""
],
[
"Meshcheryakova",
"Anastasia",
""
],
[
"Mechtcheriakova",
"Diana",
""
],
[
"Mahbod",
"Amirreza",
""
]
] | TITLE: A Multi-Stage Auto-Context Deep Learning Framework for Tissue and Nuclei
Segmentation and Classification in H&E-Stained Histological Images of
Advanced Melanoma
ABSTRACT: Melanoma is the most lethal form of skin cancer, with an increasing incidence
rate worldwide. Analyzing histological images of melanoma by localizing and
classifying tissues and cell nuclei is considered the gold standard method for
diagnosis and treatment options for patients. While many computerized
approaches have been proposed for automatic analysis, most perform tissue-based
analysis and nuclei (cell)-based analysis as separate tasks, which might be
suboptimal.
In this work, using the PUMA challenge dataset, we proposed a novel
multi-stage deep learning approach by combining tissue and nuclei information
in a unified framework based on the auto-context concept to perform
segmentation and classification in histological images of melanoma. Through
pre-training and further post-processing, our approach achieved second and
first place rankings in the PUMA challenge, with average micro Dice tissue
score and summed nuclei F1-score of 73.40% for Track 1 and 63.48% for Track 2,
respectively. Our implementation for training and testing is available at:
https://github.com/NimaTorbati/PumaSubmit
|
2503.23963 | Miao Fan | Miao Fan, Shanshan Yu, Shengtong Xu, Kun Jiang, Haoyi Xiong, Xiangzeng
Liu | A Benchmark for Vision-Centric HD Mapping by V2I Systems | Accepted by IEEE IV'25 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Autonomous driving faces safety challenges due to a lack of global
perspective and the semantic information of vectorized high-definition (HD)
maps. Information from roadside cameras can greatly expand the map perception
range through vehicle-to-infrastructure (V2I) communications. However, there is
still no real-world dataset available for studying onboard map vectorization
under the scenario of vehicle-infrastructure cooperation.
To promote research on online HD mapping for Vehicle-Infrastructure
Cooperative Autonomous Driving (VICAD), we release a real-world dataset, which
contains collaborative camera frames from both vehicles and roadside
infrastructures, and provides human annotations of HD map elements. We also
present an end-to-end neural framework (i.e., V2I-HD) leveraging vision-centric
V2I systems to construct vectorized maps. To reduce computation costs and
further deploy V2I-HD on autonomous vehicles, we introduce a directionally
decoupled self-attention mechanism to V2I-HD. Extensive experiments show that
V2I-HD has superior performance in real-time inference speed, as tested by our
real-world dataset. Abundant qualitative results also demonstrate stable and
robust map construction quality with low cost in complex and various driving
scenes. As a benchmark, both source codes and the dataset have been released at
OneDrive for the purpose of further study.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:24:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fan",
"Miao",
""
],
[
"Yu",
"Shanshan",
""
],
[
"Xu",
"Shengtong",
""
],
[
"Jiang",
"Kun",
""
],
[
"Xiong",
"Haoyi",
""
],
[
"Liu",
"Xiangzeng",
""
]
] | TITLE: A Benchmark for Vision-Centric HD Mapping by V2I Systems
ABSTRACT: Autonomous driving faces safety challenges due to a lack of global
perspective and the semantic information of vectorized high-definition (HD)
maps. Information from roadside cameras can greatly expand the map perception
range through vehicle-to-infrastructure (V2I) communications. However, there is
still no real-world dataset available for studying onboard map vectorization
under the scenario of vehicle-infrastructure cooperation.
To promote research on online HD mapping for Vehicle-Infrastructure
Cooperative Autonomous Driving (VICAD), we release a real-world dataset, which
contains collaborative camera frames from both vehicles and roadside
infrastructures, and provides human annotations of HD map elements. We also
present an end-to-end neural framework (i.e., V2I-HD) leveraging vision-centric
V2I systems to construct vectorized maps. To reduce computation costs and
further deploy V2I-HD on autonomous vehicles, we introduce a directionally
decoupled self-attention mechanism to V2I-HD. Extensive experiments show that
V2I-HD has superior performance in real-time inference speed, as tested by our
real-world dataset. Abundant qualitative results also demonstrate stable and
robust map construction quality with low cost in complex and various driving
scenes. As a benchmark, both source codes and the dataset have been released at
OneDrive for the purpose of further study.
|
2503.23965 | Miao Fan | Miao Fan, Xuxu Kong, Shengtong Xu, Haoyi Xiong, Xiangzeng Liu | Video-based Traffic Light Recognition by Rockchip RV1126 for Autonomous
Driving | Accepted by IEEE IV'25 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Real-time traffic light recognition is fundamental for autonomous driving
safety and navigation in urban environments. While existing approaches rely on
single-frame analysis from onboard cameras, they struggle with complex
scenarios involving occlusions and adverse lighting conditions. We present
\textit{ViTLR}, a novel video-based end-to-end neural network that processes
multiple consecutive frames to achieve robust traffic light detection and state
classification. The architecture leverages a transformer-like design with
convolutional self-attention modules, which is optimized specifically for
deployment on the Rockchip RV1126 embedded platform. Extensive evaluations on
two real-world datasets demonstrate that \textit{ViTLR} achieves
state-of-the-art performance while maintaining real-time processing
capabilities (>25 FPS) on RV1126's NPU. The system shows superior robustness
across temporal stability, varying target distances, and challenging
environmental conditions compared to existing single-frame approaches. We have
successfully integrated \textit{ViTLR} into an ego-lane traffic light
recognition system using HD maps for autonomous driving applications. The
complete implementation, including source code and datasets, is made publicly
available to facilitate further research in this domain.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:27:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fan",
"Miao",
""
],
[
"Kong",
"Xuxu",
""
],
[
"Xu",
"Shengtong",
""
],
[
"Xiong",
"Haoyi",
""
],
[
"Liu",
"Xiangzeng",
""
]
] | TITLE: Video-based Traffic Light Recognition by Rockchip RV1126 for Autonomous
Driving
ABSTRACT: Real-time traffic light recognition is fundamental for autonomous driving
safety and navigation in urban environments. While existing approaches rely on
single-frame analysis from onboard cameras, they struggle with complex
scenarios involving occlusions and adverse lighting conditions. We present
\textit{ViTLR}, a novel video-based end-to-end neural network that processes
multiple consecutive frames to achieve robust traffic light detection and state
classification. The architecture leverages a transformer-like design with
convolutional self-attention modules, which is optimized specifically for
deployment on the Rockchip RV1126 embedded platform. Extensive evaluations on
two real-world datasets demonstrate that \textit{ViTLR} achieves
state-of-the-art performance while maintaining real-time processing
capabilities (>25 FPS) on RV1126's NPU. The system shows superior robustness
across temporal stability, varying target distances, and challenging
environmental conditions compared to existing single-frame approaches. We have
successfully integrated \textit{ViTLR} into an ego-lane traffic light
recognition system using HD maps for autonomous driving applications. The
complete implementation, including source code and datasets, is made publicly
available to facilitate further research in this domain.
|
2503.23980 | Yanbo Wang | Yanbo Wang, Yongtao Chen, Chuan Cao, Tianchen Deng, Wentao Zhao,
Jingchuan Wang, Weidong Chen | SALT: A Flexible Semi-Automatic Labeling Tool for General LiDAR Point
Clouds with Cross-Scene Adaptability and 4D Consistency | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a flexible Semi-Automatic Labeling Tool (SALT) for general LiDAR
point clouds with cross-scene adaptability and 4D consistency. Unlike recent
approaches that rely on camera distillation, SALT operates directly on raw
LiDAR data, automatically generating pre-segmentation results. To achieve this,
we propose a novel zero-shot learning paradigm, termed data alignment, which
transforms LiDAR data into pseudo-images by aligning with the training
distribution of vision foundation models. Additionally, we design a
4D-consistent prompting strategy and 4D non-maximum suppression module to
enhance SAM2, ensuring high-quality, temporally consistent pre-segmentation.
SALT surpasses the latest zero-shot methods by 18.4% PQ on SemanticKITTI and
achieves nearly 40-50% of human annotator performance on our newly collected
low-resolution LiDAR data and on combined data from three LiDAR types,
significantly boosting annotation efficiency. We anticipate that SALT's
open-sourcing will catalyze substantial expansion of current LiDAR datasets and
lay the groundwork for the future development of LiDAR foundation models. Code
is available at https://github.com/Cavendish518/SALT.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:46:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Yanbo",
""
],
[
"Chen",
"Yongtao",
""
],
[
"Cao",
"Chuan",
""
],
[
"Deng",
"Tianchen",
""
],
[
"Zhao",
"Wentao",
""
],
[
"Wang",
"Jingchuan",
""
],
[
"Chen",
"Weidong",
""
]
] | TITLE: SALT: A Flexible Semi-Automatic Labeling Tool for General LiDAR Point
Clouds with Cross-Scene Adaptability and 4D Consistency
ABSTRACT: We propose a flexible Semi-Automatic Labeling Tool (SALT) for general LiDAR
point clouds with cross-scene adaptability and 4D consistency. Unlike recent
approaches that rely on camera distillation, SALT operates directly on raw
LiDAR data, automatically generating pre-segmentation results. To achieve this,
we propose a novel zero-shot learning paradigm, termed data alignment, which
transforms LiDAR data into pseudo-images by aligning with the training
distribution of vision foundation models. Additionally, we design a
4D-consistent prompting strategy and 4D non-maximum suppression module to
enhance SAM2, ensuring high-quality, temporally consistent pre-segmentation.
SALT surpasses the latest zero-shot methods by 18.4% PQ on SemanticKITTI and
achieves nearly 40-50% of human annotator performance on our newly collected
low-resolution LiDAR data and on combined data from three LiDAR types,
significantly boosting annotation efficiency. We anticipate that SALT's
open-sourcing will catalyze substantial expansion of current LiDAR datasets and
lay the groundwork for the future development of LiDAR foundation models. Code
is available at https://github.com/Cavendish518/SALT.
|
2503.23981 | Xianchao Xiu | Chenyi Huang, Xinrong Li, Xianchao Xiu | Federated Structured Sparse PCA for Anomaly Detection in IoT Networks | null | null | null | null | cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | Although federated learning has gained prominence as a privacy-preserving
framework tailored for distributed Internet of Things (IoT) environments,
current federated principal component analysis (PCA) methods lack integration
of sparsity, a critical feature for robust anomaly detection. To address this
limitation, we propose a novel federated structured sparse PCA (FedSSP)
approach for anomaly detection in IoT networks. The proposed model uniquely
integrates double sparsity regularization: (1) row-wise sparsity governed by
$\ell_{2,p}$-norm with $p\in[0,1)$ to eliminate redundant feature dimensions,
and (2) element-wise sparsity via $\ell_{q}$-norm with $q\in[0,1)$ to suppress
noise-sensitive components. To efficiently solve this non-convex optimization
problem in a distributed setting, we devise a proximal alternating minimization
(PAM) algorithm with rigorous theoretical proofs establishing its convergence
guarantees. Experiments on real datasets validate that incorporating structured
sparsity enhances both model interpretability and detection accuracy.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:50:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Huang",
"Chenyi",
""
],
[
"Li",
"Xinrong",
""
],
[
"Xiu",
"Xianchao",
""
]
] | TITLE: Federated Structured Sparse PCA for Anomaly Detection in IoT Networks
ABSTRACT: Although federated learning has gained prominence as a privacy-preserving
framework tailored for distributed Internet of Things (IoT) environments,
current federated principal component analysis (PCA) methods lack integration
of sparsity, a critical feature for robust anomaly detection. To address this
limitation, we propose a novel federated structured sparse PCA (FedSSP)
approach for anomaly detection in IoT networks. The proposed model uniquely
integrates double sparsity regularization: (1) row-wise sparsity governed by
$\ell_{2,p}$-norm with $p\in[0,1)$ to eliminate redundant feature dimensions,
and (2) element-wise sparsity via $\ell_{q}$-norm with $q\in[0,1)$ to suppress
noise-sensitive components. To efficiently solve this non-convex optimization
problem in a distributed setting, we devise a proximal alternating minimization
(PAM) algorithm with rigorous theoretical proofs establishing its convergence
guarantees. Experiments on real datasets validate that incorporating structured
sparsity enhances both model interpretability and detection accuracy.
|
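Illustrative note for the FedSSP entry above: the double sparsity regularization it describes can be written compactly. The objective below is one plausible form using the norms named in the abstract; the data-fidelity term, the projection variable $U$, and the weights $\lambda_1,\lambda_2$ are assumptions for exposition, not the paper's exact model.

    \min_{U}\; \|X - XUU^{\top}\|_F^2
      \;+\; \lambda_1\,\|U\|_{2,p}^{p}
      \;+\; \lambda_2\,\|U\|_{q}^{q},
    \qquad p,q\in[0,1),

where $\|U\|_{2,p}^{p}=\sum_i \|u_{i\cdot}\|_2^{p}$ enforces row-wise sparsity (discarding redundant feature dimensions), $\|U\|_{q}^{q}=\sum_{i,j}|u_{ij}|^{q}$ enforces element-wise sparsity (suppressing noise-sensitive entries), and a proximal alternating minimization (PAM) scheme alternates proximal updates over the variable blocks.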
2503.23989 | Dhruv Kumar | Aditya Pathak, Rachit Gandhi, Vaibhav Uttam, Devansh, Yashwanth Nakka,
Aaryan Raj Jindal, Pratyush Ghosh, Arnav Ramamoorthy, Shreyash Verma, Aditya
Mittal, Aashna Ased, Chirag Khatri, Jagat Sesh Challa, Dhruv Kumar | Rubric Is All You Need: Enhancing LLM-based Code Evaluation With
Question-Specific Rubrics | Under Review | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the disruption in LLM technology brought about by the release of GPT-3
and ChatGPT, LLMs have shown remarkable promise in programming-related tasks.
While code generation remains a popular field of research, code evaluation
using LLMs remains a problem with no conclusive solution. In this paper, we
focus on LLM-based code evaluation and attempt to fill in the existing gaps. We
propose multi-agentic novel approaches using question-specific rubrics tailored
to the problem statement, arguing that these perform better for logical
assessment than the existing approaches that use question-agnostic rubrics. To
address the lack of suitable evaluation datasets, we introduce two datasets: a
Data Structures and Algorithms dataset containing 150 student submissions from
a popular Data Structures and Algorithms practice website, and an Object
Oriented Programming dataset comprising 80 student submissions from
undergraduate computer science courses. In addition to using standard metrics
(Spearman Correlation, Cohen's Kappa), we propose a new metric called
Leniency, which quantifies evaluation strictness relative to expert
assessment. Our comprehensive analysis demonstrates that question-specific
rubrics significantly enhance logical assessment of code in educational
settings, providing better feedback aligned with instructional goals beyond
mere syntactic correctness.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:59:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Pathak",
"Aditya",
""
],
[
"Gandhi",
"Rachit",
""
],
[
"Uttam",
"Vaibhav",
""
],
[
"Devansh",
"",
""
],
[
"Nakka",
"Yashwanth",
""
],
[
"Jindal",
"Aaryan Raj",
""
],
[
"Ghosh",
"Pratyush",
""
],
[
"Ramamoorthy",
"Arnav",
""
],
[
"Verma",
"Shreyash",
""
],
[
"Mittal",
"Aditya",
""
],
[
"Ased",
"Aashna",
""
],
[
"Khatri",
"Chirag",
""
],
[
"Challa",
"Jagat Sesh",
""
],
[
"Kumar",
"Dhruv",
""
]
] | TITLE: Rubric Is All You Need: Enhancing LLM-based Code Evaluation With
Question-Specific Rubrics
ABSTRACT: Since the disruption in LLM technology brought about by the release of GPT-3
and ChatGPT, LLMs have shown remarkable promise in programming-related tasks.
While code generation remains a popular field of research, code evaluation
using LLMs remains a problem with no conclusive solution. In this paper, we
focus on LLM-based code evaluation and attempt to fill in the existing gaps. We
propose multi-agentic novel approaches using question-specific rubrics tailored
to the problem statement, arguing that these perform better for logical
assessment than the existing approaches that use question-agnostic rubrics. To
address the lack of suitable evaluation datasets, we introduce two datasets: a
Data Structures and Algorithms dataset containing 150 student submissions from
a popular Data Structures and Algorithms practice website, and an Object
Oriented Programming dataset comprising 80 student submissions from
undergraduate computer science courses. In addition to using standard metrics
(Spearman Correlation, Cohen's Kappa), we propose a new metric called
Leniency, which quantifies evaluation strictness relative to expert
assessment. Our comprehensive analysis demonstrates that question-specific
rubrics significantly enhance logical assessment of code in educational
settings, providing better feedback aligned with instructional goals beyond
mere syntactic correctness.
|
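To make the evaluation metrics in the record above concrete, the following minimal Python sketch compares LLM-assigned rubric scores against expert scores. The spearmanr and cohen_kappa_score calls are standard SciPy/scikit-learn functions; the leniency function is a hypothetical reading of "evaluation strictness relative to expert assessment" (mean signed score difference), not the paper's definition.

# Hedged sketch: compare LLM rubric scores with expert scores.
# The `leniency` definition below is an assumption, not the paper's metric.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

expert_scores = np.array([8, 6, 9, 4, 7])   # toy expert rubric scores
llm_scores = np.array([9, 6, 8, 5, 8])      # toy LLM rubric scores

rho, _ = spearmanr(expert_scores, llm_scores)           # rank agreement
kappa = cohen_kappa_score(expert_scores, llm_scores)    # chance-corrected agreement

def leniency(llm, expert):
    # Positive values: the LLM grades more leniently than the expert (assumed definition).
    return float(np.mean(llm - expert))

print(f"rho={rho:.2f} kappa={kappa:.2f} leniency={leniency(llm_scores, expert_scores):.2f}")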
2503.23990 | Yumeng Fu | Yumeng Fu, Junjie Wu, Zhongjie Wang, Meishan Zhang, Yulin Wu, Bingquan
Liu | BeMERC: Behavior-Aware MLLM-based Framework for Multimodal Emotion
Recognition in Conversation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal emotion recognition in conversation (MERC), the task of
identifying the emotion label for each utterance in a conversation, is vital
for developing empathetic machines. Current MLLM-based MERC studies focus
mainly on capturing the speaker's textual or vocal characteristics, but ignore
the significance of video-derived behavior information. Unlike text and
audio inputs, learning from videos with rich facial expressions, body language and
posture provides emotion trigger signals to the models for more accurate
emotion predictions. In this paper, we propose a novel behavior-aware
MLLM-based framework (BeMERC) to incorporate speaker's behaviors, including
subtle facial micro-expression, body language and posture, into a vanilla
MLLM-based MERC model, thereby facilitating the modeling of emotional dynamics
during a conversation. Furthermore, BeMERC adopts a two-stage instruction
tuning strategy to extend the model to the conversation scenario for
end-to-end training of a MERC predictor. Experiments demonstrate that BeMERC
achieves superior performance compared to the state-of-the-art methods on two
benchmark datasets, and also provides a detailed discussion on the significance
of video-derived behavior information in MERC.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:04:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fu",
"Yumeng",
""
],
[
"Wu",
"Junjie",
""
],
[
"Wang",
"Zhongjie",
""
],
[
"Zhang",
"Meishan",
""
],
[
"Wu",
"Yulin",
""
],
[
"Liu",
"Bingquan",
""
]
] | TITLE: BeMERC: Behavior-Aware MLLM-based Framework for Multimodal Emotion
Recognition in Conversation
ABSTRACT: Multimodal emotion recognition in conversation (MERC), the task of
identifying the emotion label for each utterance in a conversation, is vital
for developing empathetic machines. Current MLLM-based MERC studies focus
mainly on capturing the speaker's textual or vocal characteristics, but ignore
the significance of video-derived behavior information. Unlike text and
audio inputs, learning from videos with rich facial expressions, body language and
posture provides emotion trigger signals to the models for more accurate
emotion predictions. In this paper, we propose a novel behavior-aware
MLLM-based framework (BeMERC) to incorporate speaker's behaviors, including
subtle facial micro-expression, body language and posture, into a vanilla
MLLM-based MERC model, thereby facilitating the modeling of emotional dynamics
during a conversation. Furthermore, BeMERC adopts a two-stage instruction
tuning strategy to extend the model to the conversation scenario for
end-to-end training of a MERC predictor. Experiments demonstrate that BeMERC
achieves superior performance compared to the state-of-the-art methods on two
benchmark datasets, and also provides a detailed discussion on the significance
of video-derived behavior information in MERC.
|
2503.23993 | Ming Yuan | Ming Yuan, Sichao Wang, Chuang Zhang, Lei He, Qing Xu, Jianqiang Wang | DenseFormer: Learning Dense Depth Map from Sparse Depth and Image via
Conditional Diffusion Model | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The depth completion task is a critical problem in autonomous driving,
involving the generation of dense depth maps from sparse depth maps and RGB
images. Most existing methods employ a spatial propagation network to
iteratively refine the depth map after obtaining an initial dense depth. In
this paper, we propose DenseFormer, a novel method that integrates the
diffusion model into the depth completion task. By incorporating the denoising
mechanism of the diffusion model, DenseFormer generates the dense depth map by
progressively refining an initial random depth distribution through multiple
iterations. We propose a feature extraction module that leverages a feature
pyramid structure, along with multi-layer deformable attention, to effectively
extract and integrate features from sparse depth maps and RGB images, which
serve as the guiding condition for the diffusion process. Additionally, this
paper presents a depth refinement module that applies multi-step iterative
refinement across various ranges to the dense depth results generated by the
diffusion process. The module utilizes image features enriched with multi-scale
information and sparse depth input to further enhance the accuracy of the
predicted depth map. Extensive experiments on the KITTI outdoor scene dataset
demonstrate that DenseFormer outperforms classical depth completion methods.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:11:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yuan",
"Ming",
""
],
[
"Wang",
"Sichao",
""
],
[
"Zhang",
"Chuang",
""
],
[
"He",
"Lei",
""
],
[
"Xu",
"Qing",
""
],
[
"Wang",
"Jianqiang",
""
]
] | TITLE: DenseFormer: Learning Dense Depth Map from Sparse Depth and Image via
Conditional Diffusion Model
ABSTRACT: The depth completion task is a critical problem in autonomous driving,
involving the generation of dense depth maps from sparse depth maps and RGB
images. Most existing methods employ a spatial propagation network to
iteratively refine the depth map after obtaining an initial dense depth. In
this paper, we propose DenseFormer, a novel method that integrates the
diffusion model into the depth completion task. By incorporating the denoising
mechanism of the diffusion model, DenseFormer generates the dense depth map by
progressively refining an initial random depth distribution through multiple
iterations. We propose a feature extraction module that leverages a feature
pyramid structure, along with multi-layer deformable attention, to effectively
extract and integrate features from sparse depth maps and RGB images, which
serve as the guiding condition for the diffusion process. Additionally, this
paper presents a depth refinement module that applies multi-step iterative
refinement across various ranges to the dense depth results generated by the
diffusion process. The module utilizes image features enriched with multi-scale
information and sparse depth input to further enhance the accuracy of the
predicted depth map. Extensive experiments on the KITTI outdoor scene dataset
demonstrate that DenseFormer outperforms classical depth completion methods.
|
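For readers unfamiliar with the machinery behind the DenseFormer record above, conditional diffusion models of this kind typically iterate a DDPM-style reverse update in which the sparse depth map and image features act as the condition $c$; this is the generic formulation, not DenseFormer's specific sampler:

\[
x_{t-1} \;=\; \frac{1}{\sqrt{\alpha_t}}\left( x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, c, t) \right) + \sigma_t z,
\qquad z \sim \mathcal{N}(0, I),
\]

starting from a random depth map $x_T$ and ending at the dense prediction $x_0$, with the refinement module applied afterwards.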
2503.24006 | Safa AlSaidi | Safa Alsaidi, Marc Vincent, Olivia Boyer, Nicolas Garcelon, Miguel
Couceiro, and Adrien Coulet | Comparing representations of long clinical texts for the task of patient
note-identification | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | In this paper, we address the challenge of patient-note identification, which
involves accurately matching an anonymized clinical note to its corresponding
patient, represented by a set of related notes. This task has broad
applications, including duplicate records detection and patient similarity
analysis, which require robust patient-level representations. We explore
various embedding methods, including Hierarchical Attention Networks (HAN),
three-level Hierarchical Transformer Networks (HTN), LongFormer, and advanced
BERT-based models, focusing on their ability to process medium-to-long clinical
texts effectively. Additionally, we evaluate different pooling strategies
(mean, max, and mean_max) for aggregating word-level embeddings into
patient-level representations and we examine the impact of sliding windows on
model performance. Our results indicate that BERT-based embeddings outperform
traditional and hierarchical models, particularly in processing lengthy
clinical notes and capturing nuanced patient representations. Among the pooling
strategies, mean_max pooling consistently yields the best results, highlighting
its ability to capture critical features from clinical notes. Furthermore, the
reproduction of our results on both the MIMIC dataset and the Necker hospital data
warehouse illustrates the generalizability of these approaches to real-world
applications, emphasizing the importance of both embedding methods and
aggregation strategies in optimizing patient-note identification and enhancing
patient-level modeling.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:31:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Alsaidi",
"Safa",
""
],
[
"Vincent",
"Marc",
""
],
[
"Boyer",
"Olivia",
""
],
[
"Garcelon",
"Nicolas",
""
],
[
"Couceiro",
"Miguel",
""
],
[
"Coulet",
"Adrien",
""
]
] | TITLE: Comparing representations of long clinical texts for the task of patient
note-identification
ABSTRACT: In this paper, we address the challenge of patient-note identification, which
involves accurately matching an anonymized clinical note to its corresponding
patient, represented by a set of related notes. This task has broad
applications, including duplicate records detection and patient similarity
analysis, which require robust patient-level representations. We explore
various embedding methods, including Hierarchical Attention Networks (HAN),
three-level Hierarchical Transformer Networks (HTN), LongFormer, and advanced
BERT-based models, focusing on their ability to process medium-to-long clinical
texts effectively. Additionally, we evaluate different pooling strategies
(mean, max, and mean_max) for aggregating word-level embeddings into
patient-level representations and we examine the impact of sliding windows on
model performance. Our results indicate that BERT-based embeddings outperform
traditional and hierarchical models, particularly in processing lengthy
clinical notes and capturing nuanced patient representations. Among the pooling
strategies, mean_max pooling consistently yields the best results, highlighting
its ability to capture critical features from clinical notes. Furthermore, the
reproduction of our results on both the MIMIC dataset and the Necker hospital data
warehouse illustrates the generalizability of these approaches to real-world
applications, emphasizing the importance of both embedding methods and
aggregation strategies in optimizing patient-note identification and enhancing
patient-level modeling.
|
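The pooling strategies named in the record above can be illustrated with a short NumPy sketch; treating mean_max as the concatenation of the mean- and max-pooled vectors is an assumption based on common usage, and all names are illustrative.

# Hedged sketch of mean, max, and mean_max pooling over word-level embeddings.
import numpy as np

def pool(word_embeddings, strategy="mean_max"):
    # word_embeddings: (num_tokens, dim) array from a BERT-style encoder.
    mean_vec = word_embeddings.mean(axis=0)
    max_vec = word_embeddings.max(axis=0)
    if strategy == "mean":
        return mean_vec
    if strategy == "max":
        return max_vec
    if strategy == "mean_max":
        return np.concatenate([mean_vec, max_vec])   # doubles the dimension
    raise ValueError(f"unknown strategy: {strategy}")

tokens = np.random.randn(128, 768)       # toy stand-in for 128 token embeddings
note_vector = pool(tokens, "mean_max")   # note-level representation, shape (1536,)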
2503.24008 | Qi Wu | Qi Wu and Quanlong Zheng and Yanhao Zhang and Junlin Xie and Jinguo
Luo and Kuo Wang and Peng Liu and Qingsong Xie and Ru Zhen and Haonan Lu and
Zhenyu Yang | H2VU-Benchmark: A Comprehensive Benchmark for Hierarchical Holistic
Video Understanding | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the rapid development of multimodal models, the demand for assessing
video understanding capabilities has been steadily increasing. However,
existing benchmarks for evaluating video understanding exhibit significant
limitations in coverage, task diversity, and scene adaptability. These
shortcomings hinder the accurate assessment of models' comprehensive video
understanding capabilities. To tackle this challenge, we propose a hierarchical
and holistic video understanding (H2VU) benchmark designed to evaluate both
general video and online streaming video comprehension. This benchmark
contributes three key features:
Extended video duration: Spanning videos from brief 3-second clips to
comprehensive 1.5-hour recordings, thereby bridging the temporal gaps found in
current benchmarks. Comprehensive assessment tasks: Beyond traditional
perceptual and reasoning tasks, we have introduced modules for
counter-commonsense comprehension and trajectory state tracking. These additions
test the models' deep understanding capabilities beyond mere prior knowledge.
Enriched video data: To keep pace with the rapid evolution of current AI
agents, we have expanded first-person streaming video datasets. This expansion
allows for the exploration of multimodal models' performance in understanding
streaming videos from a first-person perspective. Extensive results from H2VU
reveal that existing multimodal large language models (MLLMs) possess
substantial potential for improvement in our newly proposed evaluation tasks.
We expect that H2VU will facilitate advancements in video understanding
research by offering a comprehensive and in-depth analysis of MLLMs.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:32:51 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wu",
"Qi",
""
],
[
"Zheng",
"Quanlong",
""
],
[
"Zhang",
"Yanhao",
""
],
[
"Xie",
"Junlin",
""
],
[
"Luo",
"Jinguo",
""
],
[
"Wang",
"Kuo",
""
],
[
"Liu",
"Peng",
""
],
[
"Xie",
"Qingsong",
""
],
[
"Zhen",
"Ru",
""
],
[
"Lu",
"Haonan",
""
],
[
"Yang",
"Zhenyu",
""
]
] | TITLE: H2VU-Benchmark: A Comprehensive Benchmark for Hierarchical Holistic
Video Understanding
ABSTRACT: With the rapid development of multimodal models, the demand for assessing
video understanding capabilities has been steadily increasing. However,
existing benchmarks for evaluating video understanding exhibit significant
limitations in coverage, task diversity, and scene adaptability. These
shortcomings hinder the accurate assessment of models' comprehensive video
understanding capabilities. To tackle this challenge, we propose a hierarchical
and holistic video understanding (H2VU) benchmark designed to evaluate both
general video and online streaming video comprehension. This benchmark
contributes three key features:
Extended video duration: Spanning videos from brief 3-second clips to
comprehensive 1.5-hour recordings, thereby bridging the temporal gaps found in
current benchmarks. Comprehensive assessment tasks: Beyond traditional
perceptual and reasoning tasks, we have introduced modules for
counter-commonsense comprehension and trajectory state tracking. These additions
test the models' deep understanding capabilities beyond mere prior knowledge.
Enriched video data: To keep pace with the rapid evolution of current AI
agents, we have expanded first-person streaming video datasets. This expansion
allows for the exploration of multimodal models' performance in understanding
streaming videos from a first-person perspective. Extensive results from H2VU
reveal that existing multimodal large language models (MLLMs) possess
substantial potential for improvement in our newly proposed evaluation tasks.
We expect that H2VU will facilitate advancements in video understanding
research by offering a comprehensive and in-depth analysis of MLLMs.
|
2503.24012 | Bingyuan Zhang | Bingyuan Zhang, Yoshikazu Terada | Tree-Guided $L_1$-Convex Clustering | null | null | null | null | cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convex clustering is a modern clustering framework that guarantees globally
optimal solutions and performs comparably to other advanced clustering methods.
However, obtaining a complete dendrogram (clusterpath) for large-scale datasets
remains computationally challenging due to the extensive costs associated with
iterative optimization approaches. To address this limitation, we develop a
novel convex clustering algorithm called Tree-Guided $L_1$-Convex Clustering
(TGCC). We first focus on the fact that the loss function of $L_1$-convex
clustering with tree-structured weights can be efficiently optimized using a
dynamic programming approach. We then develop an efficient cluster fusion
algorithm that utilizes the tree structure of the weights to accelerate the
optimization process and eliminate the issue of cluster splits commonly
observed in convex clustering. By combining the dynamic programming approach
with the cluster fusion algorithm, the TGCC algorithm achieves superior
computational efficiency without sacrificing clustering performance.
Remarkably, our TGCC algorithm can construct a complete clusterpath for $10^6$
points in $\mathbb{R}^2$ within 15 seconds on a standard laptop without the
need for parallel or distributed computing frameworks. Moreover, we extend the
TGCC algorithm to develop biclustering and sparse convex clustering algorithms.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:39:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Bingyuan",
""
],
[
"Terada",
"Yoshikazu",
""
]
] | TITLE: Tree-Guided $L_1$-Convex Clustering
ABSTRACT: Convex clustering is a modern clustering framework that guarantees globally
optimal solutions and performs comparably to other advanced clustering methods.
However, obtaining a complete dendrogram (clusterpath) for large-scale datasets
remains computationally challenging due to the extensive costs associated with
iterative optimization approaches. To address this limitation, we develop a
novel convex clustering algorithm called Tree-Guided $L_1$-Convex Clustering
(TGCC). We first focus on the fact that the loss function of $L_1$-convex
clustering with tree-structured weights can be efficiently optimized using a
dynamic programming approach. We then develop an efficient cluster fusion
algorithm that utilizes the tree structure of the weights to accelerate the
optimization process and eliminate the issue of cluster splits commonly
observed in convex clustering. By combining the dynamic programming approach
with the cluster fusion algorithm, the TGCC algorithm achieves superior
computational efficiency without sacrificing clustering performance.
Remarkably, our TGCC algorithm can construct a complete clusterpath for $10^6$
points in $\mathbb{R}^2$ within 15 seconds on a standard laptop without the
need for parallel or distributed computing frameworks. Moreover, we extend the
TGCC algorithm to develop biclustering and sparse convex clustering algorithms.
|
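For context on the TGCC record above, $L_1$-convex clustering with tree-structured weights is usually stated in the standard convex clustering form (a generic statement; the construction of the tree and weights is specific to the paper):

\[
\min_{U \in \mathbb{R}^{n \times d}} \; \frac{1}{2}\sum_{i=1}^{n}\lVert x_i - u_i \rVert_2^2
 \;+\; \lambda \sum_{(i,j) \in \mathcal{T}} w_{ij}\, \lVert u_i - u_j \rVert_1,
\]

where $\mathcal{T}$ is a tree over the $n$ points and the clusterpath is traced out by increasing $\lambda$. Because the $\ell_1$ fusion penalty separates across coordinates and the edges form a tree, each coordinate reduces to a one-dimensional fused-lasso problem on a tree, which is what makes the dynamic programming solver mentioned in the abstract applicable.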
2503.24014 | Minh Thao Chan | Minh David Thao Chan, Ruoyu Zhao, Yukuan Jia, Ruiqing Mao, and Sheng
Zhou | Optimization of Layer Skipping and Frequency Scaling for Convolutional
Neural Networks under Latency Constraint | 12 pages, 6 figures, Accepted in Proc. Eur. Conf. Comput. Vis. (ECCV)
Workshops. Milan, Italy: Springer, September 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The energy consumption of Convolutional Neural Networks (CNNs) is a critical
factor in deploying deep learning models on resource-limited equipment such as
mobile devices and autonomous vehicles. We propose an approach involving
Proportional Layer Skipping (PLS) and Frequency Scaling (FS). Layer skipping
reduces computational complexity by selectively bypassing network layers,
whereas frequency scaling adjusts the frequency of the processor to optimize
energy use under latency constraints. Experiments applying PLS and FS to ResNet-152
with the CIFAR-10 dataset demonstrated significant reductions in computational
demands and energy consumption with minimal accuracy loss. This study offers
practical solutions for improving real-time processing in resource-limited
settings and provides insights into balancing computational efficiency and
model performance.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:40:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chan",
"Minh David Thao",
""
],
[
"Zhao",
"Ruoyu",
""
],
[
"Jia",
"Yukuan",
""
],
[
"Mao",
"Ruiqing",
""
],
[
"Zhou",
"Sheng",
""
]
] | TITLE: Optimization of Layer Skipping and Frequency Scaling for Convolutional
Neural Networks under Latency Constraint
ABSTRACT: The energy consumption of Convolutional Neural Networks (CNNs) is a critical
factor in deploying deep learning models on resource-limited equipment such as
mobile devices and autonomous vehicles. We propose an approach involving
Proportional Layer Skipping (PLS) and Frequency Scaling (FS). Layer skipping
reduces computational complexity by selectively bypassing network layers,
whereas frequency scaling adjusts the frequency of the processor to optimize
energy use under latency constraints. Experiments applying PLS and FS to ResNet-152
with the CIFAR-10 dataset demonstrated significant reductions in computational
demands and energy consumption with minimal accuracy loss. This study offers
practical solutions for improving real-time processing in resource-limited
settings and provides insights into balancing computational efficiency and
model performance.
|
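As a hedged illustration of layer skipping for the record above, the sketch below bypasses a fixed proportion of residual blocks in a torchvision ResNet-152 at inference time; the "skip the earliest non-downsampling blocks in each stage" rule is an assumption for illustration, not necessarily the paper's Proportional Layer Skipping policy.

# Hedged sketch: bypass a proportion of residual blocks in ResNet-152.
import torch.nn as nn
from torchvision.models import resnet152

def skip_blocks(model, skip_ratio=0.25):
    for stage_name in ["layer1", "layer2", "layer3", "layer4"]:
        blocks = list(getattr(model, stage_name).children())
        # Keep block 0 (it changes the channel count); later blocks are safe to bypass.
        candidates = list(range(1, len(blocks)))
        n_skip = int(round(skip_ratio * len(candidates)))
        for idx in candidates[:n_skip]:
            blocks[idx] = nn.Identity()          # identity mapping instead of the block
        setattr(model, stage_name, nn.Sequential(*blocks))
    return model

model = skip_blocks(resnet152(weights=None), skip_ratio=0.25)

Frequency scaling would be applied on top of this at the system level (for example via the processor's DVFS governor), which is outside the scope of a model-only sketch.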
2503.24017 | Chenqi Guo Dr. | Chenqi Guo, Mengshuo Rong, Qianli Feng, Rongfan Feng, Yinglong Ma | Crossmodal Knowledge Distillation with WordNet-Relaxed Text Embeddings
for Robust Image Classification | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crossmodal knowledge distillation (KD) aims to enhance a unimodal student
using a multimodal teacher model. In particular, when the teacher's modalities
include the student's, additional complementary information can be exploited to
improve knowledge transfer. In supervised image classification, image datasets
typically include class labels that represent high-level concepts, suggesting a
natural avenue to incorporate textual cues for crossmodal KD. However, these
labels rarely capture the deeper semantic structures in real-world visuals and
can lead to label leakage if used directly as inputs, ultimately limiting KD
performance. To address these issues, we propose a multi-teacher crossmodal KD
framework that integrates CLIP image embeddings with learnable WordNet-relaxed
text embeddings under a hierarchical loss. By avoiding direct use of exact
class names and instead using semantically richer WordNet expansions, we
mitigate label leakage and introduce more diverse textual cues. Experiments
show that this strategy significantly boosts student performance, whereas noisy
or overly precise text embeddings hinder distillation efficiency.
Interpretability analyses confirm that WordNet-relaxed prompts encourage
heavier reliance on visual features over textual shortcuts, while still
effectively incorporating the newly introduced textual cues. Our method
achieves state-of-the-art or second-best results on six public datasets,
demonstrating its effectiveness in advancing crossmodal KD.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:41:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Guo",
"Chenqi",
""
],
[
"Rong",
"Mengshuo",
""
],
[
"Feng",
"Qianli",
""
],
[
"Feng",
"Rongfan",
""
],
[
"Ma",
"Yinglong",
""
]
] | TITLE: Crossmodal Knowledge Distillation with WordNet-Relaxed Text Embeddings
for Robust Image Classification
ABSTRACT: Crossmodal knowledge distillation (KD) aims to enhance a unimodal student
using a multimodal teacher model. In particular, when the teacher's modalities
include the student's, additional complementary information can be exploited to
improve knowledge transfer. In supervised image classification, image datasets
typically include class labels that represent high-level concepts, suggesting a
natural avenue to incorporate textual cues for crossmodal KD. However, these
labels rarely capture the deeper semantic structures in real-world visuals and
can lead to label leakage if used directly as inputs, ultimately limiting KD
performance. To address these issues, we propose a multi-teacher crossmodal KD
framework that integrates CLIP image embeddings with learnable WordNet-relaxed
text embeddings under a hierarchical loss. By avoiding direct use of exact
class names and instead using semantically richer WordNet expansions, we
mitigate label leakage and introduce more diverse textual cues. Experiments
show that this strategy significantly boosts student performance, whereas noisy
or overly precise text embeddings hinder distillation efficiency.
Interpretability analyses confirm that WordNet-relaxed prompts encourage
heavier reliance on visual features over textual shortcuts, while still
effectively incorporating the newly introduced textual cues. Our method
achieves state-of-the-art or second-best results on six public datasets,
demonstrating its effectiveness in advancing crossmodal KD.
|
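The "WordNet-relaxed" idea in the record above, replacing an exact class name with semantically richer expansions so the text branch does not leak the label, can be sketched with NLTK's WordNet interface. The specific expansion (synonyms plus one level of hypernyms) and the prompt template are illustrative assumptions, not the paper's construction.

# Hedged sketch: expand a class label into a WordNet-relaxed text prompt.
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def relaxed_prompt(class_name, max_terms=5):
    terms = []
    for synset in wn.synsets(class_name.replace(" ", "_")):
        terms.extend(l.replace("_", " ") for l in synset.lemma_names())
        for hyper in synset.hypernyms():          # one level up the hierarchy
            terms.extend(l.replace("_", " ") for l in hyper.lemma_names())
    seen, kept = set(), []
    for t in terms:                               # deduplicate, drop the exact label
        if t.lower() != class_name.lower() and t.lower() not in seen:
            seen.add(t.lower())
            kept.append(t)
    return "a photo related to " + ", ".join(kept[:max_terms])

print(relaxed_prompt("tiger"))    # e.g. "a photo related to Panthera tigris, big cat, ..."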
2503.24021 | Mingyang Gu | Mingyang Gu, Jiamin Zhu, Qipeng Wang, Fengjie Wang, Xiaolin Wen, Yong
Wang, Min Zhu | IntelliCircos: A Data-driven and AI-powered Authoring Tool for Circos
Plots | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genomics data is essential in biological and medical domains, and
bioinformatics analysts often manually create circos plots to analyze the data
and extract valuable insights. However, creating circos plots is complex, as it
requires careful design for multiple track attributes and positional
relationships between them. Typically, analysts often seek inspiration from
existing circos plots, and they have to iteratively adjust and refine the plot
to achieve a satisfactory final design, making the process both tedious and
time-intensive. To address these challenges, we propose IntelliCircos, an
AI-powered interactive authoring tool that streamlines the process from initial
visual design to the final implementation of circos plots. Specifically, we
build a new dataset containing 4396 circos plots with corresponding annotations
and configurations, which are extracted and labeled from published papers. With
the dataset, we further identify track combination patterns, and utilize Large
Language Model (LLM) to provide domain-specific design recommendations and
configuration references to navigate the design of circos plots. We conduct a
user study with 8 bioinformatics analysts to evaluate IntelliCircos, and the
results demonstrate its usability and effectiveness in authoring circos plots.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:48:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gu",
"Mingyang",
""
],
[
"Zhu",
"Jiamin",
""
],
[
"Wang",
"Qipeng",
""
],
[
"Wang",
"Fengjie",
""
],
[
"Wen",
"Xiaolin",
""
],
[
"Wang",
"Yong",
""
],
[
"Zhu",
"Min",
""
]
] | TITLE: IntelliCircos: A Data-driven and AI-powered Authoring Tool for Circos
Plots
ABSTRACT: Genomics data is essential in biological and medical domains, and
bioinformatics analysts often manually create circos plots to analyze the data
and extract valuable insights. However, creating circos plots is complex, as it
requires careful design for multiple track attributes and positional
relationships between them. Typically, analysts often seek inspiration from
existing circos plots, and they have to iteratively adjust and refine the plot
to achieve a satisfactory final design, making the process both tedious and
time-intensive. To address these challenges, we propose IntelliCircos, an
AI-powered interactive authoring tool that streamlines the process from initial
visual design to the final implementation of circos plots. Specifically, we
build a new dataset containing 4396 circos plots with corresponding annotations
and configurations, which are extracted and labeled from published papers. With
the dataset, we further identify track combination patterns, and utilize Large
Language Model (LLM) to provide domain-specific design recommendations and
configuration references to navigate the design of circos plots. We conduct a
user study with 8 bioinformatics analysts to evaluate IntelliCircos, and the
results demonstrate its usability and effectiveness in authoring circos plots.
|
2503.24027 | Florian Carichon | Florian Carichon, Romain Rampa, Golnoosh Farnadi | Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural
Novelty in Cooking Recipes | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Novelty modeling and detection is a core topic in Natural Language Processing
(NLP), central to numerous tasks such as recommender systems and automatic
summarization. It involves identifying pieces of text that deviate in some way
from previously known information. However, novelty is also a crucial
determinant of the unique perception of relevance and quality of an experience,
as it rests upon each individual's understanding of the world. Social factors,
particularly cultural background, profoundly influence perceptions of novelty
and innovation. Cultural novelty arises from differences in salience and
novelty as shaped by the distance between distinct communities. While cultural
diversity has garnered increasing attention in artificial intelligence (AI),
the lack of robust metrics for quantifying cultural novelty hinders a deeper
understanding of these divergences. This gap limits quantifying and
understanding cultural differences within computational frameworks. To address
this, we propose an interdisciplinary framework that integrates knowledge from
sociology and management. Central to our approach is GlobalFusion, a novel
dataset comprising 500 dishes and approximately 100,000 cooking recipes
capturing cultural adaptation from over 150 countries. By introducing a set of
Jensen-Shannon Divergence metrics for novelty, we leverage this dataset to
analyze textual divergences when recipes from one community are modified by
another with a different cultural background. The results reveal significant
correlations between our cultural novelty metrics and established cultural
measures based on linguistic, religious, and geographical distances. Our
findings highlight the potential of our framework to advance the understanding
and measurement of cultural diversity in AI.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:52:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Carichon",
"Florian",
""
],
[
"Rampa",
"Romain",
""
],
[
"Farnadi",
"Golnoosh",
""
]
] | TITLE: Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural
Novelty in Cooking Recipes
ABSTRACT: Novelty modeling and detection is a core topic in Natural Language Processing
(NLP), central to numerous tasks such as recommender systems and automatic
summarization. It involves identifying pieces of text that deviate in some way
from previously known information. However, novelty is also a crucial
determinant of the unique perception of relevance and quality of an experience,
as it rests upon each individual's understanding of the world. Social factors,
particularly cultural background, profoundly influence perceptions of novelty
and innovation. Cultural novelty arises from differences in salience and
novelty as shaped by the distance between distinct communities. While cultural
diversity has garnered increasing attention in artificial intelligence (AI),
the lack of robust metrics for quantifying cultural novelty hinders a deeper
understanding of these divergences. This gap limits quantifying and
understanding cultural differences within computational frameworks. To address
this, we propose an interdisciplinary framework that integrates knowledge from
sociology and management. Central to our approach is GlobalFusion, a novel
dataset comprising 500 dishes and approximately 100,000 cooking recipes
capturing cultural adaptation from over 150 countries. By introducing a set of
Jensen-Shannon Divergence metrics for novelty, we leverage this dataset to
analyze textual divergences when recipes from one community are modified by
another with a different cultural background. The results reveal significant
correlations between our cultural novelty metrics and established cultural
measures based on linguistic, religious, and geographical distances. Our
findings highlight the potential of our framework to advance the understanding
and measurement of cultural diversity in AI.
|
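The Jensen-Shannon Divergence metrics mentioned in the record above can be made concrete with a small sketch comparing the word distributions of two recipe texts; whitespace tokenization and add-one smoothing are assumptions for illustration, not GlobalFusion's exact metric.

# Hedged sketch: Jensen-Shannon divergence between word distributions of two recipes.
import numpy as np
from collections import Counter

def js_divergence(text_a, text_b):
    tokens_a, tokens_b = text_a.lower().split(), text_b.lower().split()
    vocab = sorted(set(tokens_a) | set(tokens_b))
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    # Add-one smoothing gives both distributions full support over the shared vocabulary.
    p = np.array([ca[w] + 1 for w in vocab], dtype=float); p /= p.sum()
    q = np.array([cb[w] + 1 for w in vocab], dtype=float); q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)       # in bits, bounded above by 1

print(js_divergence("steam rice with soy sauce", "bake rice with cream and cheese"))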
2503.24028 | Qiang Wang | Qiang Wang, Dawei Feng, Xu Zhang, Ao Shen, Yang Xu, Bo Ding, Huaimin
Wang | Pay More Attention to the Robustness of Prompt for Instruction Data
Mining | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction tuning has emerged as a paramount method for tailoring the
behaviors of LLMs. Recent work has unveiled the potential for LLMs to achieve
high performance through fine-tuning with a limited quantity of high-quality
instruction data. Building upon this approach, we further explore the impact of
prompt's robustness on the selection of high-quality instruction data. This
paper proposes a pioneering framework of high-quality online instruction data
mining for instruction tuning, focusing on the impact of prompt's robustness on
the data mining process. Our notable innovation is to generate the adversarial
instruction data by attacking the prompt of the online instruction data. Then,
we introduce an Adversarial Instruction-Following Difficulty metric to measure
how much help the adversarial instruction data can provide to the generation of
the corresponding response. In addition, we propose a novel
Adversarial Instruction Output Embedding Consistency approach to select
high-quality online instruction data. We conduct extensive experiments on two
benchmark datasets to assess the performance. The experimental results serve to
underscore the effectiveness of our proposed two methods. Moreover, the results
underscore the critical practical significance of considering prompt's
robustness.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:53:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Qiang",
""
],
[
"Feng",
"Dawei",
""
],
[
"Zhang",
"Xu",
""
],
[
"Shen",
"Ao",
""
],
[
"Xu",
"Yang",
""
],
[
"Ding",
"Bo",
""
],
[
"Wang",
"Huaimin",
""
]
] | TITLE: Pay More Attention to the Robustness of Prompt for Instruction Data
Mining
ABSTRACT: Instruction tuning has emerged as a paramount method for tailoring the
behaviors of LLMs. Recent work has unveiled the potential for LLMs to achieve
high performance through fine-tuning with a limited quantity of high-quality
instruction data. Building upon this approach, we further explore the impact of
prompt's robustness on the selection of high-quality instruction data. This
paper proposes a pioneering framework of high-quality online instruction data
mining for instruction tuning, focusing on the impact of prompt's robustness on
the data mining process. Our notable innovation, is to generate the adversarial
instruction data by conducting the attack for the prompt of online instruction
data. Then, we introduce an Adversarial Instruction-Following Difficulty metric
to measure how much help the adversarial instruction data can provide to the
generation of the corresponding response. Apart from it, we propose a novel
Adversarial Instruction Output Embedding Consistency approach to select
high-quality online instruction data. We conduct extensive experiments on two
benchmark datasets to assess the performance. The experimental results serve to
underscore the effectiveness of our proposed two methods. Moreover, the results
underscore the critical practical significance of considering prompt's
robustness.
|
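The Adversarial Instruction-Following Difficulty metric in the record above is described only informally ("how much help the adversarial instruction data can provide to the generation of the corresponding response"). One plausible instantiation, written here by analogy with the widely used instruction-following difficulty score and explicitly not claimed to be the paper's definition, is the loss ratio

\[
\mathrm{AIFD}(x_{\mathrm{adv}}, y) \;=\; \frac{\mathcal{L}_\theta\!\left(y \mid x_{\mathrm{adv}}\right)}{\mathcal{L}_\theta\!\left(y\right)},
\]

where $\mathcal{L}_\theta(y \mid x_{\mathrm{adv}})$ is the model's average cross-entropy on the response $y$ conditioned on the adversarially perturbed instruction and $\mathcal{L}_\theta(y)$ is the unconditioned loss; smaller values would indicate that the instruction, even after perturbation, still helps generate the response.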
2503.24043 | Zhenkai Qin | Jiahui LU, Shuang Wu, Zhenkai Qin, Dongze Wu, Guifang Yang | Frequency-Aware Attention-LSTM for PM$_{2.5}$ Time Series Forecasting | null | null | null | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | To enhance the accuracy and robustness of PM$_{2.5}$ concentration
forecasting, this paper introduces FALNet, a Frequency-Aware LSTM Network that
integrates frequency-domain decomposition, temporal modeling, and
attention-based refinement. The model first applies STL and FFT to extract
trend, seasonal, and denoised residual components, effectively filtering out
high-frequency noise. The filtered residuals are then fed into a stacked LSTM
to capture long-term dependencies, followed by a multi-head attention mechanism
that dynamically focuses on key time steps. Experiments conducted on real-world
urban air quality datasets demonstrate that FALNet consistently outperforms
conventional models across standard metrics such as MAE, RMSE, and $R^2$. The
model shows strong adaptability in capturing sharp fluctuations during
pollution peaks and non-stationary conditions. These results validate the
effectiveness and generalizability of FALNet for real-time air pollution
prediction, environmental risk assessment, and decision-making support.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:07:33 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"LU",
"Jiahui",
""
],
[
"Wu",
"Shuang",
""
],
[
"Qin",
"Zhenkai",
""
],
[
"Wu",
"Dongze",
""
],
[
"Yang",
"Guifang",
""
]
] | TITLE: Frequency-Aware Attention-LSTM for PM$_{2.5}$ Time Series Forecasting
ABSTRACT: To enhance the accuracy and robustness of PM$_{2.5}$ concentration
forecasting, this paper introduces FALNet, a Frequency-Aware LSTM Network that
integrates frequency-domain decomposition, temporal modeling, and
attention-based refinement. The model first applies STL and FFT to extract
trend, seasonal, and denoised residual components, effectively filtering out
high-frequency noise. The filtered residuals are then fed into a stacked LSTM
to capture long-term dependencies, followed by a multi-head attention mechanism
that dynamically focuses on key time steps. Experiments conducted on real-world
urban air quality datasets demonstrate that FALNet consistently outperforms
conventional models across standard metrics such as MAE, RMSE, and $R^2$. The
model shows strong adaptability in capturing sharp fluctuations during
pollution peaks and non-stationary conditions. These results validate the
effectiveness and generalizability of FALNet for real-time air pollution
prediction, environmental risk assessment, and decision-making support.
|
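The preprocessing pipeline in the FALNet record above (STL decomposition followed by FFT-based denoising of the residual) can be sketched as follows; the seasonal period, the low-pass cut-off, and the toy series are placeholders, and the downstream stacked LSTM with multi-head attention is omitted.

# Hedged sketch: STL decomposition plus FFT low-pass filtering of the residual.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def decompose_and_denoise(pm25, period=24, keep_frac=0.1):
    res = STL(pm25, period=period).fit()
    spectrum = np.fft.rfft(res.resid.to_numpy())
    cutoff = int(len(spectrum) * keep_frac)          # keep only the lowest frequencies
    spectrum[cutoff:] = 0.0
    denoised_resid = np.fft.irfft(spectrum, n=len(res.resid))
    return res.trend, res.seasonal, denoised_resid   # inputs to the LSTM/attention model

hours = pd.date_range("2024-01-01", periods=30 * 24, freq="h")
pm25 = pd.Series(50 + 10 * np.sin(np.arange(len(hours)) * 2 * np.pi / 24)
                 + 5 * np.random.randn(len(hours)), index=hours)
trend, seasonal, resid = decompose_and_denoise(pm25)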
2503.24052 | Vijay Kumar Sutrakar | Anantram Patel, Nikhil Mogre, Mandar Mane, Jayavardhan Reddy Enumula,
Vijay Kumar Sutrakar | Accelerated Airfoil Design Using Neural Network Approaches | null | null | null | null | cs.LG math-ph math.MP physics.app-ph physics.flu-dyn physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, prediction of airfoil shape from targeted pressure
distribution (suction and pressure sides) and vice versa is demonstrated using
both Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs)
techniques. The dataset is generated for 1600 airfoil shapes, with simulations
carried out at Reynolds numbers (Re) ranging from 10,000 to 9,000,000 and
angles of attack (AoA) ranging from 0 to 15 degrees, ensuring the dataset
captured diverse aerodynamic conditions. Five different CNN and DNN models are
developed depending on the input/output parameters. Results demonstrate that
the refined models exhibit improved efficiency, with the DNN model achieving a
multi-fold reduction in training time compared to the CNN model for complex
datasets consisting of varying airfoil, Re, and AoA. The predicted airfoil
shapes/pressure distribution closely match the targeted values, validating the
effectiveness of deep learning frameworks. However, the performance of CNN
models is found to be better compared to DNN models. Lastly, a flying wing
aircraft model of wingspan >10 m is considered for the prediction of pressure
distribution along the chordwise direction. The proposed CNN and DNN models show
promising results. This research underscores the potential of deep learning
models in accelerating aerodynamic optimization and advancing the design of
high-performance airfoils.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:14:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Patel",
"Anantram",
""
],
[
"Mogre",
"Nikhil",
""
],
[
"Mane",
"Mandar",
""
],
[
"Enumula",
"Jayavardhan Reddy",
""
],
[
"Sutrakar",
"Vijay Kumar",
""
]
] | TITLE: Accelerated Airfoil Design Using Neural Network Approaches
ABSTRACT: In this paper, prediction of airfoil shape from targeted pressure
distribution (suction and pressure sides) and vice versa is demonstrated using
both Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs)
techniques. The dataset is generated for 1600 airfoil shapes, with simulations
carried out at Reynolds numbers (Re) ranging from 10,000 to 9,000,000 and
angles of attack (AoA) ranging from 0 to 15 degrees, ensuring the dataset
captured diverse aerodynamic conditions. Five different CNN and DNN models are
developed depending on the input/output parameters. Results demonstrate that
the refined models exhibit improved efficiency, with the DNN model achieving a
multi-fold reduction in training time compared to the CNN model for complex
datasets consisting of varying airfoil, Re, and AoA. The predicted airfoil
shapes/pressure distribution closely match the targeted values, validating the
effectiveness of deep learning frameworks. However, the performance of CNN
models is found to be better compared to DNN models. Lastly, a flying wing
aircraft model of wingspan >10 m is considered for the prediction of pressure
distribution along the chordwise direction. The proposed CNN and DNN models show
promising results. This research underscores the potential of deep learning
models in accelerating aerodynamic optimization and advancing the design of
high-performance airfoils.
|
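A minimal Keras sketch of the kind of DNN surrogate described in the record above, mapping a sampled pressure distribution (plus Re and AoA) to airfoil surface coordinates; the layer sizes, input/output dimensions, and random placeholder data are assumptions, not the paper's architecture or dataset.

# Hedged sketch: dense surrogate from pressure distribution (+ Re, AoA) to airfoil shape.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_cp, n_coords = 2 * 100, 2 * 100        # 100 Cp samples per side; 100 (x, y) points
model = keras.Sequential([
    layers.Input(shape=(n_cp + 2,)),     # pressure distribution plus [Re, AoA]
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(n_coords),              # flattened airfoil coordinates
])
model.compile(optimizer="adam", loss="mse")

X = np.random.randn(64, n_cp + 2).astype("float32")   # placeholder for CFD-generated data
Y = np.random.randn(64, n_coords).astype("float32")
model.fit(X, Y, epochs=1, batch_size=16, verbose=0)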
2503.24057 | Xuxiong Liu | Xuxiong Liu, Tengteng Dong, Fei Wang, Weijie Feng, Xiao Sun | AMMSM: Adaptive Motion Magnification and Sparse Mamba for
Micro-Expression Recognition | Accepted by ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Micro-expressions are typically regarded as unconscious manifestations of a
person's genuine emotions. However, their short duration and subtle signals
pose significant challenges for downstream recognition. We propose a multi-task
learning framework named the Adaptive Motion Magnification and Sparse Mamba
(AMMSM) to address this. This framework aims to enhance the accurate capture of
micro-expressions through self-supervised subtle motion magnification, while
the sparse spatial selection Mamba architecture combines sparse activation with
the advanced Visual Mamba model to model key motion regions and their valuable
representations more effectively. Additionally, we employ evolutionary search
to optimize the magnification factor and the sparsity ratios of spatial
selection, followed by fine-tuning to improve performance further. Extensive
experiments on two standard datasets demonstrate that the proposed AMMSM
achieves state-of-the-art (SOTA) accuracy and robustness.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:17:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Xuxiong",
""
],
[
"Dong",
"Tengteng",
""
],
[
"Wang",
"Fei",
""
],
[
"Feng",
"Weijie",
""
],
[
"Sun",
"Xiao",
""
]
] | TITLE: AMMSM: Adaptive Motion Magnification and Sparse Mamba for
Micro-Expression Recognition
ABSTRACT: Micro-expressions are typically regarded as unconscious manifestations of a
person's genuine emotions. However, their short duration and subtle signals
pose significant challenges for downstream recognition. We propose a multi-task
learning framework named the Adaptive Motion Magnification and Sparse Mamba
(AMMSM) to address this. This framework aims to enhance the accurate capture of
micro-expressions through self-supervised subtle motion magnification, while
the sparse spatial selection Mamba architecture combines sparse activation with
the advanced Visual Mamba model to model key motion regions and their valuable
representations more effectively. Additionally, we employ evolutionary search
to optimize the magnification factor and the sparsity ratios of spatial
selection, followed by fine-tuning to improve performance further. Extensive
experiments on two standard datasets demonstrate that the proposed AMMSM
achieves state-of-the-art (SOTA) accuracy and robustness.
|
2503.24062 | Fatemeh Mohammadi | Fatemeh Mohammadi, Tommaso Romano, Samira Maghool, Paolo Ceravolo | Artificial Conversations, Real Results: Fostering Language Detection
with Synthetic Data | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Collecting high-quality training data is essential for fine-tuning Large
Language Models (LLMs). However, acquiring such data is often costly and
time-consuming, especially for non-English languages such as Italian. Recently,
researchers have begun to explore the use of LLMs to generate synthetic
datasets as a viable alternative. This study proposes a pipeline for generating
synthetic data and a comprehensive approach for investigating the factors that
influence the validity of synthetic data generated by LLMs by examining how
model performance is affected by factors such as prompt strategy, text length
and target position in a specific task, i.e. inclusive language detection in
Italian job advertisements. Our results show that, in most cases and across
different metrics, the fine-tuned models trained on synthetic data consistently
outperformed other models on both real and synthetic test datasets. The study
discusses the practical implications and limitations of using synthetic data
for language detection tasks with LLMs.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:22:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mohammadi",
"Fatemeh",
""
],
[
"Romano",
"Tommaso",
""
],
[
"Maghool",
"Samira",
""
],
[
"Ceravolo",
"Paolo",
""
]
] | TITLE: Artificial Conversations, Real Results: Fostering Language Detection
with Synthetic Data
ABSTRACT: Collecting high-quality training data is essential for fine-tuning Large
Language Models (LLMs). However, acquiring such data is often costly and
time-consuming, especially for non-English languages such as Italian. Recently,
researchers have begun to explore the use of LLMs to generate synthetic
datasets as a viable alternative. This study proposes a pipeline for generating
synthetic data and a comprehensive approach for investigating the factors that
influence the validity of synthetic data generated by LLMs by examining how
model performance is affected by factors such as prompt strategy, text length
and target position in a specific task, i.e. inclusive language detection in
Italian job advertisements. Our results show that, in most cases and across
different metrics, the fine-tuned models trained on synthetic data consistently
outperformed other models on both real and synthetic test datasets. The study
discusses the practical implications and limitations of using synthetic data
for language detection tasks with LLMs.
|
2503.24064 | Ashton Ian Hetherington | Prajith Pillai, Ashton Hetherington, Laura Saavedra Sago, Soledad Le
Clainche | A low cost singular value decomposition based data assimilation
technique for analysis of heterogeneous combustion data | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | This article applies low-cost singular value decomposition (lcSVD) for the
first time, to the authors' knowledge, to combustion reactive flow databases.
The lcSVD algorithm is a novel approach to SVD, suitable for calculating
high-resolution 2D or 3D proper orthogonal decomposition (POD) modes and
temporal coefficients using data from sensors. Consequently, the computational
cost associated with this technique is much lower compared to standard SVD.
Additionally, for the analysis of full n-dimensional datasets, the method
reduces data dimensionality by selecting a strategically reduced number of
points from the original dataset through optimal sensor placement or uniform
sampling before performing SVD. Moreover, the properties of data assimilation
of heterogeneous databases of this method are illustrated using two distinct
reactive flow test cases: a numerical database modeling an axisymmetric,
time-varying laminar coflow flame with a fuel mixture of 65% methane and 35%
nitrogen, using air as the oxidizer, and experimental data generated from a
turbulent bluff-body-stabilized hydrogen flame. The computational speed-up and
memory gains associated with the lcSVD algorithm compared to SVD can reach
values larger than 10, with compression factors greater than 2000. Applying
lcSVD for data assimilation to reconstruct the flow dynamics combining data
from sensors with simulation measurements, we found errors smaller than 1% in
the most relevant species modelling the flow.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:24:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Pillai",
"Prajith",
""
],
[
"Hetherington",
"Ashton",
""
],
[
"Sago",
"Laura Saavedra",
""
],
[
"Clainche",
"Soledad Le",
""
]
] | TITLE: A low cost singular value decomposition based data assimilation
technique for analysis of heterogeneous combustion data
ABSTRACT: This article applies low-cost singular value decomposition (lcSVD) for the
first time, to the authors' knowledge, to combustion reactive flow databases.
The lcSVD algorithm is a novel approach to SVD, suitable for calculating
high-resolution 2D or 3D proper orthogonal decomposition (POD) modes and
temporal coefficients using data from sensors. Consequently, the computational
cost associated with this technique is much lower compared to standard SVD.
Additionally, for the analysis of full n-dimensional datasets, the method
reduces data dimensionality by selecting a strategically reduced number of
points from the original dataset through optimal sensor placement or uniform
sampling before performing SVD. Moreover, the properties of data assimilation
of heterogeneous databases of this method are illustrated using two distinct
reactive flow test cases: a numerical database modeling an axisymmetric,
time-varying laminar coflow flame with a fuel mixture of 65% methane and 35%
nitrogen, using air as the oxidizer, and experimental data generated from a
turbulent bluff-body-stabilized hydrogen flame. The computational speed-up and
memory gains associated with the lcSVD algorithm compared to SVD can reach
values larger than 10, with compression factors greater than 2000. Applying
lcSVD for data assimilation to reconstruct the flow dynamics combining data
from sensors with simulation measurements, we found errors smaller than 1% in
the most relevant species modelling the flow.
|
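The cost-reduction idea in the lcSVD record above, performing the SVD on a strategically reduced set of spatial points, can be illustrated with the sketch below; uniform row sampling stands in for the optimal sensor placement, and this is not the authors' exact lcSVD algorithm.

# Hedged sketch: SVD of a spatially subsampled snapshot matrix.
import numpy as np

def lc_svd(snapshots, n_points, rank):
    # snapshots: (n_space, n_time) data matrix; rows are spatial points.
    n_space = snapshots.shape[0]
    idx = np.linspace(0, n_space - 1, n_points, dtype=int)   # uniform spatial sampling
    U, s, Vt = np.linalg.svd(snapshots[idx, :], full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]               # Vt rows ~ temporal coefficients

X = np.random.randn(50_000, 200)         # toy stand-in: 50k spatial points, 200 snapshots
modes, sv, temporal = lc_svd(X, n_points=1_000, rank=10)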
2503.24075 | Man Shun Ang | Flavia Esposito, Andersen Ang | Riemannian Multiplicative Update for Sparse Simplex constraint using
oblique rotation manifold | 8 pages, 1 figure | null | null | null | math.OC cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a new manifold optimization method to solve low-rank problems with
sparse simplex constraints (variables are simultaneous nonnegativity, sparsity,
and sum-to-1) that are beneficial in applications. The proposed approach
exploits oblique rotation manifolds, rewrite the problem, and introduce a new
Riemannian optimization method. Experiments on synthetic datasets compared to
the standard Euclidean method show the effectiveness of the proposed method.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:31:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Esposito",
"Flavia",
""
],
[
"Ang",
"Andersen",
""
]
] | TITLE: Riemannian Multiplicative Update for Sparse Simplex constraint using
oblique rotation manifold
ABSTRACT: We propose a new manifold optimization method to solve low-rank problems with
sparse simplex constraints (variables are simultaneously nonnegative, sparse,
and sum to one) that are beneficial in applications. The proposed approach
exploits oblique rotation manifolds, rewrites the problem, and introduces a new
Riemannian optimization method. Experiments on synthetic datasets compared to
the standard Euclidean method show the effectiveness of the proposed method.
|
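For concreteness, the sparse simplex constraint referred to in the record above (simultaneous nonnegativity, sparsity, and sum-to-one) can be written as the set

\[
\Delta_k \;:=\; \left\{ x \in \mathbb{R}^{n} \;:\; x \ge 0,\;\; \mathbf{1}^{\top}x = 1,\;\; \lVert x \rVert_0 \le k \right\},
\]

where the sparsity level $k$ is a generic parameter rather than a value taken from the paper; the oblique-manifold reformulation mentioned in the abstract is one way to handle this non-convex set within Riemannian optimization.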
2503.24091 | Xiangyuan Peng | Xiangyuan Peng, Miao Tang, Huawei Sun, Lorenzo Servadei and Robert
Wille | 4D mmWave Radar in Adverse Environments for Autonomous Driving: A Survey | 8 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous driving systems require accurate and reliable perception. However,
adverse environments, such as rain, snow, and fog, can significantly degrade
the performance of LiDAR and cameras. In contrast, 4D millimeter-wave (mmWave)
radar not only provides 3D sensing and additional velocity measurements but
also maintains robustness in challenging conditions, making it increasingly
valuable for autonomous driving. Recently, research on 4D mmWave radar under
adverse environments has been growing, but a comprehensive survey is still
lacking. To bridge this gap, this survey comprehensively reviews the current
research on 4D mmWave radar under adverse environments. First, we present an
overview of existing 4D mmWave radar datasets encompassing diverse weather and
lighting scenarios. Next, we analyze methods and models according to different
adverse conditions. Finally, the challenges faced in current studies and
potential future directions are discussed for advancing 4D mmWave radar
applications in harsh environments. To the best of our knowledge, this is the
first survey specifically focusing on 4D mmWave radar in adverse environments
for autonomous driving.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:42:50 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Peng",
"Xiangyuan",
""
],
[
"Tang",
"Miao",
""
],
[
"Sun",
"Huawei",
""
],
[
"Servadei",
"Lorenzo",
""
],
[
"Wille",
"Robert",
""
]
] | TITLE: 4D mmWave Radar in Adverse Environments for Autonomous Driving: A Survey
ABSTRACT: Autonomous driving systems require accurate and reliable perception. However,
adverse environments, such as rain, snow, and fog, can significantly degrade
the performance of LiDAR and cameras. In contrast, 4D millimeter-wave (mmWave)
radar not only provides 3D sensing and additional velocity measurements but
also maintains robustness in challenging conditions, making it increasingly
valuable for autonomous driving. Recently, research on 4D mmWave radar under
adverse environments has been growing, but a comprehensive survey is still
lacking. To bridge this gap, this survey comprehensively reviews the current
research on 4D mmWave radar under adverse environments. First, we present an
overview of existing 4D mmWave radar datasets encompassing diverse weather and
lighting scenarios. Next, we analyze methods and models according to different
adverse conditions. Finally, the challenges faced in current studies and
potential future directions are discussed for advancing 4D mmWave radar
applications in harsh environments. To the best of our knowledge, this is the
first survey specifically focusing on 4D mmWave radar in adverse environments
for autonomous driving.
|
2503.24102 | Yewei Song | Yewei Song, Lujun Li, Cedric Lothritz, Saad Ezzini, Lama Sleem,
Niccolo Gentile, Radu State, Tegawend\'e F. Bissyand\'e, Jacques Klein | Is LLM the Silver Bullet to Low-Resource Languages Machine Translation? | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Low-Resource Languages (LRLs) present significant challenges in natural
language processing due to their limited linguistic resources and
underrepresentation in standard datasets. While recent advancements in Large
Language Models (LLMs) and Neural Machine Translation (NMT) have substantially
improved translation capabilities for high-resource languages, performance
disparities persist for LRLs, particularly impacting privacy-sensitive and
resource-constrained scenarios. This paper systematically evaluates the
limitations of current LLMs across 200 languages using benchmarks such as
FLORES-200. We also explore alternative data sources, including news articles
and bilingual dictionaries, and demonstrate how knowledge distillation from
large pre-trained models can significantly improve smaller LRL translations.
Additionally, we investigate various fine-tuning strategies, revealing that
incremental enhancements markedly reduce performance gaps on smaller LLMs.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 13:56:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Song",
"Yewei",
""
],
[
"Li",
"Lujun",
""
],
[
"Lothritz",
"Cedric",
""
],
[
"Ezzini",
"Saad",
""
],
[
"Sleem",
"Lama",
""
],
[
"Gentile",
"Niccolo",
""
],
[
"State",
"Radu",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
],
[
"Klein",
"Jacques",
""
]
] | TITLE: Is LLM the Silver Bullet to Low-Resource Languages Machine Translation?
ABSTRACT: Low-Resource Languages (LRLs) present significant challenges in natural
language processing due to their limited linguistic resources and
underrepresentation in standard datasets. While recent advancements in Large
Language Models (LLMs) and Neural Machine Translation (NMT) have substantially
improved translation capabilities for high-resource languages, performance
disparities persist for LRLs, particularly impacting privacy-sensitive and
resource-constrained scenarios. This paper systematically evaluates the
limitations of current LLMs across 200 languages using benchmarks such as
FLORES-200. We also explore alternative data sources, including news articles
and bilingual dictionaries, and demonstrate how knowledge distillation from
large pre-trained models can significantly improve smaller LRL translations.
Additionally, we investigate various fine-tuning strategies, revealing that
incremental enhancements markedly reduce performance gaps on smaller LLMs.
|
2503.24111 | Arthur M. Faria | Arthur M. Faria, Ignacio F. Gra\~na, Savvas Varsamopoulos | Inductive Graph Representation Learning with Quantum Graph Neural
Networks | 18 pages, 6 figures | null | null | null | quant-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum Graph Neural Networks (QGNNs) present a promising approach for
combining quantum computing with graph-structured data processing. While
classical Graph Neural Networks (GNNs) are renowned for their scalability and
robustness, existing QGNNs often lack flexibility due to graph-specific quantum
circuit designs, limiting their applicability to a narrower range of
graph-structured problems, falling short of real-world scenarios. To address
these limitations, we propose a versatile QGNN framework inspired by the
classical GraphSAGE approach, utilizing quantum models as aggregators. In this
work, we integrate established techniques for inductive representation learning
on graphs with parametrized quantum convolutional and pooling layers,
effectively bridging classical and quantum paradigms. The convolutional layer
is flexible, enabling tailored designs for specific problems. Benchmarked on a
node regression task with the QM9 dataset, we demonstrate that our framework
successfully models a non-trivial molecular dataset, achieving performance
comparable to classical GNNs. In particular, we show that our quantum approach
exhibits robust generalization across molecules with varying numbers of atoms
without requiring circuit modifications, slightly outperforming classical GNNs.
Furthermore, we numerically investigate the scalability of the QGNN framework.
Specifically, we demonstrate the absence of barren plateaus in our architecture
as the number of qubits increases, suggesting that the proposed quantum model
can be extended to handle larger and more complex graph-based problems
effectively.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:04:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Faria",
"Arthur M.",
""
],
[
"Graña",
"Ignacio F.",
""
],
[
"Varsamopoulos",
"Savvas",
""
]
] | TITLE: Inductive Graph Representation Learning with Quantum Graph Neural
Networks
ABSTRACT: Quantum Graph Neural Networks (QGNNs) present a promising approach for
combining quantum computing with graph-structured data processing. While
classical Graph Neural Networks (GNNs) are renowned for their scalability and
robustness, existing QGNNs often lack flexibility due to graph-specific quantum
circuit designs, limiting their applicability to a narrower range of
graph-structured problems, falling short of real-world scenarios. To address
these limitations, we propose a versatile QGNN framework inspired by the
classical GraphSAGE approach, utilizing quantum models as aggregators. In this
work, we integrate established techniques for inductive representation learning
on graphs with parametrized quantum convolutional and pooling layers,
effectively bridging classical and quantum paradigms. The convolutional layer
is flexible, enabling tailored designs for specific problems. Benchmarked on a
node regression task with the QM9 dataset, we demonstrate that our framework
successfully models a non-trivial molecular dataset, achieving performance
comparable to classical GNNs. In particular, we show that our quantum approach
exhibits robust generalization across molecules with varying numbers of atoms
without requiring circuit modifications, slightly outperforming classical GNNs.
Furthermore, we numerically investigate the scalability of the QGNN framework.
Specifically, we demonstrate the absence of barren plateaus in our architecture
as the number of qubits increases, suggesting that the proposed quantum model
can be extended to handle larger and more complex graph-based problems
effectively.
|
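Record 2503.24111 builds its quantum aggregator on the classical GraphSAGE scheme. For context, a minimal NumPy sketch of one classical GraphSAGE layer with mean aggregation is given below; the weights, activation, and toy graph are illustrative, and the paper's parametrized quantum convolutional and pooling circuits are not reproduced.

```python
import numpy as np

def graphsage_mean_layer(H, adj, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation.

    H:       (num_nodes, d_in) node features
    adj:     list of neighbor index lists, one per node
    W_self:  (d_in, d_out) weights for the node's own features
    W_neigh: (d_in, d_out) weights for the aggregated neighborhood
    """
    out = np.zeros((H.shape[0], W_self.shape[1]))
    for v, neighbors in enumerate(adj):
        agg = H[neighbors].mean(axis=0) if neighbors else np.zeros(H.shape[1])
        out[v] = np.maximum(H[v] @ W_self + agg @ W_neigh, 0.0)  # ReLU
    # L2-normalize embeddings, as in the original GraphSAGE formulation
    norms = np.linalg.norm(out, axis=1, keepdims=True) + 1e-12
    return out / norms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((4, 8))
    adj = [[1, 2], [0], [0, 3], [2]]          # toy undirected graph
    Z = graphsage_mean_layer(H, adj,
                             rng.standard_normal((8, 16)),
                             rng.standard_normal((8, 16)))
    print(Z.shape)  # (4, 16)
```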
2503.24129 | Dominik Schnaus | Dominik Schnaus, Nikita Araslanov, Daniel Cremers | It's a (Blind) Match! Towards Vision-Language Correspondence without
Parallel Data | Accepted to CVPR 2025, Project page:
https://dominik-schnaus.github.io/itsamatch/ | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The platonic representation hypothesis suggests that vision and language
embeddings become more homogeneous as model and dataset sizes increase. In
particular, pairwise distances within each modality become more similar. This
suggests that as foundation models mature, it may become possible to match
vision and language embeddings in a fully unsupervised fashion, i.e. without
parallel data. We present the first feasibility study, and investigate
conformity of existing vision and language foundation models in the context of
unsupervised, or "blind", matching. First, we formulate unsupervised matching
as a quadratic assignment problem and introduce a novel heuristic that
outperforms previous solvers. We also develop a technique to find optimal
matching problems, for which a non-trivial match is very likely. Second, we
conduct an extensive study deploying a range of vision and language models on
four datasets. Our analysis reveals that for many problem instances, vision and
language representations can be indeed matched without supervision. This
finding opens up the exciting possibility of embedding semantic knowledge into
other modalities virtually annotation-free. As a proof of concept, we showcase
an unsupervised classifier, which achieves non-trivial classification accuracy
without any image-text annotation.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:14:25 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Schnaus",
"Dominik",
""
],
[
"Araslanov",
"Nikita",
""
],
[
"Cremers",
"Daniel",
""
]
] | TITLE: It's a (Blind) Match! Towards Vision-Language Correspondence without
Parallel Data
ABSTRACT: The platonic representation hypothesis suggests that vision and language
embeddings become more homogeneous as model and dataset sizes increase. In
particular, pairwise distances within each modality become more similar. This
suggests that as foundation models mature, it may become possible to match
vision and language embeddings in a fully unsupervised fashion, i.e. without
parallel data. We present the first feasibility study, and investigate
conformity of existing vision and language foundation models in the context of
unsupervised, or "blind", matching. First, we formulate unsupervised matching
as a quadratic assignment problem and introduce a novel heuristic that
outperforms previous solvers. We also develop a technique to find optimal
matching problems, for which a non-trivial match is very likely. Second, we
conduct an extensive study deploying a range of vision and language models on
four datasets. Our analysis reveals that for many problem instances, vision and
language representations can be indeed matched without supervision. This
finding opens up the exciting possibility of embedding semantic knowledge into
other modalities virtually annotation-free. As a proof of concept, we showcase
an unsupervised classifier, which achieves non-trivial classification accuracy
without any image-text annotation.
|
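Record 2503.24129 formulates unsupervised vision-language matching as a quadratic assignment problem solved with a novel heuristic. That heuristic is not described in the abstract; the sketch below only illustrates the QAP-style objective (aligning two pairwise-distance matrices) together with a naive 2-swap local search as a toy baseline, on synthetic embeddings.

```python
import numpy as np

def qap_cost(perm, D_vision, D_text):
    """How well the pairwise distances of the vision items match those of
    the permuted text items (a Gromov-style quadratic assignment objective)."""
    D_perm = D_text[np.ix_(perm, perm)]
    return np.sum((D_vision - D_perm) ** 2)

def blind_match_2swap(D_vision, D_text, iters=2000, seed=0):
    """Random 2-swap hill climbing; a simple baseline, not the paper's solver."""
    rng = np.random.default_rng(seed)
    n = D_vision.shape[0]
    perm = rng.permutation(n)
    best = qap_cost(perm, D_vision, D_text)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        cand = perm.copy()
        cand[i], cand[j] = cand[j], cand[i]
        c = qap_cost(cand, D_vision, D_text)
        if c < best:
            perm, best = cand, c
    return perm, best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((8, 16))                           # "vision" embeddings
    true_perm = rng.permutation(8)
    Y = X[true_perm] + 0.01 * rng.standard_normal((8, 16))     # shuffled "text" embeddings
    D_x = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    D_y = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    perm, cost = blind_match_2swap(D_x, D_y)
    print("fraction correctly matched:", np.mean(true_perm[perm] == np.arange(8)))
```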
2503.24132 | Martin Langhammer | Martin Langhammer, George A. Constantinides | Banked Memories for Soft SIMT Processors | 10 pages, 9 figures | null | null | null | cs.AR | http://creativecommons.org/licenses/by/4.0/ | Recent advances in soft GPGPU architectures have shown that a small (<10K
LUT), high performance (770 MHz) processor is possible in modern FPGAs. In this
paper we architect and evaluate soft SIMT processor banked memories, which can
support high bandwidth (up to 16 ports) while maintaining high speed (over 770
MHz). We compare 9 different memory architectures, including simpler multi-port
memories, and run a total of 51 benchmarks (different combinations of
algorithms, data sizes and processor memories) to develop a comprehensive set
of data which will guide the reader in making an informed memory architecture
decision for their application. Our benchmarks are comprised of matrix
transpositions (memory intensive) and FFTs (split between memory accesses,
floating point, and integer computations) to provide a balanced evaluation. We
show that the simpler (but more memory block intensive) multi-port memories
offer higher performance than the more architecturally complex banked memories
for many applications, especially for smaller memories, but the effective
footprint cost of the multi-port memories quickly becomes prohibitive as
dataset sizes increase. Our banked memory implementation results - high
bandwidth, high Fmax, and high density - can be used for other FPGA
applications as well, such as HLS (High Level Synthesis).
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:17:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Langhammer",
"Martin",
""
],
[
"Constantinides",
"George A.",
""
]
] | TITLE: Banked Memories for Soft SIMT Processors
ABSTRACT: Recent advances in soft GPGPU architectures have shown that a small (<10K
LUT), high performance (770 MHz) processor is possible in modern FPGAs. In this
paper we architect and evaluate soft SIMT processor banked memories, which can
support high bandwidth (up to 16 ports) while maintaining high speed (over 770
MHz). We compare 9 different memory architectures, including simpler multi-port
memories, and run a total of 51 benchmarks (different combinations of
algorithms, data sizes and processor memories) to develop a comprehensive set
of data which will guide the reader in making an informed memory architecture
decision for their application. Our benchmarks are comprised of matrix
transpositions (memory intensive) and FFTs (split between memory accesses,
floating point, and integer computations) to provide a balanced evaluation. We
show that the simpler (but more memory block intensive) multi-port memories
offer higher performance than the more architecturally complex banked memories
for many applications, especially for smaller memories, but the effective
footprint cost of the multi-port memories quickly becomes prohibitive as
dataset sizes increase. Our banked memory implementation results - high
bandwidth, high Fmax, and high density - can be used for other FPGA
applications as well, such as HLS (High Level Synthesis).
|
2503.24135 | Soufiane Belharbi | Alexis Guichemerre, Soufiane Belharbi, Mohammadhadi Shateri, Luke
McCaffrey, Eric Granger | PixelCAM: Pixel Class Activation Mapping for Histology Image
Classification and ROI Localization | 32 pages, 20 figures, Medical Imaging with Deep Learning (MIDL 2025) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly supervised object localization (WSOL) methods allow training models to
classify images and localize ROIs. WSOL only requires low-cost image-class
annotations yet provides a visually interpretable classifier, which is
important in histology image analysis. Standard WSOL methods rely on class
activation mapping (CAM) methods to produce spatial localization maps according
to a single- or two-step strategy. While both strategies have made significant
progress, they still face several limitations with histology images.
Single-step methods can easily result in under- or over-activation due to the
limited visual ROI saliency in histology images and the limited localization
cues. They also face the well-known issue of asynchronous convergence between
classification and localization tasks. The two-step approach is sub-optimal
because it is tied to a frozen classifier, limiting the capacity for
localization. Moreover, these methods also struggle when applied to
out-of-distribution (OOD) datasets. In this paper, a multi-task approach for
WSOL is introduced for simultaneous training of both tasks to address the
asynchronous convergence problem. In particular, localization is performed in
the pixel-feature space of an image encoder that is shared with classification.
This allows learning discriminant features and accurate delineation of
foreground/background regions to support ROI localization and image
classification. We propose PixelCAM, a cost-effective foreground/background
pixel-wise classifier in the pixel-feature space that allows for spatial object
localization. PixelCAM is trained using pixel pseudo-labels collected from a
pretrained WSOL model. Both image and pixel-wise classifiers are trained
simultaneously using standard gradient descent. In addition, our pixel
classifier can easily be integrated into CNN- and transformer-based
architectures without any modifications.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:18:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Guichemerre",
"Alexis",
""
],
[
"Belharbi",
"Soufiane",
""
],
[
"Shateri",
"Mohammadhadi",
""
],
[
"McCaffrey",
"Luke",
""
],
[
"Granger",
"Eric",
""
]
] | TITLE: PixelCAM: Pixel Class Activation Mapping for Histology Image
Classification and ROI Localization
ABSTRACT: Weakly supervised object localization (WSOL) methods allow training models to
classify images and localize ROIs. WSOL only requires low-cost image-class
annotations yet provides a visually interpretable classifier, which is
important in histology image analysis. Standard WSOL methods rely on class
activation mapping (CAM) methods to produce spatial localization maps according
to a single- or two-step strategy. While both strategies have made significant
progress, they still face several limitations with histology images.
Single-step methods can easily result in under- or over-activation due to the
limited visual ROI saliency in histology images and the limited localization
cues. They also face the well-known issue of asynchronous convergence between
classification and localization tasks. The two-step approach is sub-optimal
because it is tied to a frozen classifier, limiting the capacity for
localization. Moreover, these methods also struggle when applied to
out-of-distribution (OOD) datasets. In this paper, a multi-task approach for
WSOL is introduced for simultaneous training of both tasks to address the
asynchronous convergence problem. In particular, localization is performed in
the pixel-feature space of an image encoder that is shared with classification.
This allows learning discriminant features and accurate delineation of
foreground/background regions to support ROI localization and image
classification. We propose PixelCAM, a cost-effective foreground/background
pixel-wise classifier in the pixel-feature space that allows for spatial object
localization. PixelCAM is trained using pixel pseudo-labels collected from a
pretrained WSOL model. Both image and pixel-wise classifiers are trained
simultaneously using standard gradient descent. In addition, our pixel
classifier can easily be integrated into CNN- and transformer-based
architectures without any modifications.
|
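Record 2503.24135 trains an image-level classifier and a pixel-wise foreground/background classifier simultaneously on shared encoder features, using pixel pseudo-labels from a pretrained WSOL model. A minimal PyTorch sketch of that general setup follows; the tiny encoder, the 1x1-convolution pixel head, the loss weighting, and the random pseudo-labels are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointWSOLHead(nn.Module):
    """Shared encoder feeding (i) an image classifier and (ii) a pixel-wise
    foreground/background classifier, trained simultaneously."""
    def __init__(self, feat_dim=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in for a real backbone
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.image_head = nn.Linear(feat_dim, num_classes)
        self.pixel_head = nn.Conv2d(feat_dim, 2, kernel_size=1)   # fg/bg logits

    def forward(self, x):
        feats = self.encoder(x)                           # (B, C, H, W)
        img_logits = self.image_head(feats.mean(dim=(2, 3)))
        pix_logits = self.pixel_head(feats)               # (B, 2, H, W)
        return img_logits, pix_logits

def joint_loss(img_logits, pix_logits, img_labels, pixel_pseudo_labels, lam=1.0):
    """Image-level cross entropy plus pixel-level cross entropy on pseudo-labels."""
    return (F.cross_entropy(img_logits, img_labels)
            + lam * F.cross_entropy(pix_logits, pixel_pseudo_labels))

if __name__ == "__main__":
    model = JointWSOLHead()
    x = torch.randn(2, 3, 32, 32)
    img_labels = torch.tensor([0, 1])
    pseudo = torch.randint(0, 2, (2, 32, 32))    # placeholder per-pixel pseudo-labels
    img_logits, pix_logits = model(x)
    loss = joint_loss(img_logits, pix_logits, img_labels, pseudo)
    loss.backward()
    print(float(loss))
```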
2503.24138 | Leire Benito Del Valle | Uxue Delaquintana-Aramendi, Leire Benito-del-Valle, Aitor
Alvarez-Gila, Javier Pascau, Luisa F S\'anchez-Peralta, Artzai Pic\'on, J
Blas Pagador, Cristina L Saratxaga | AI-Assisted Colonoscopy: Polyp Detection and Segmentation using
Foundation Models | This work has been submitted to the IEEE TMI for possible publication | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In colonoscopy, 80% of the missed polyps could be detected with the help of
Deep Learning models. In the search for algorithms capable of addressing this
challenge, foundation models emerge as promising candidates. Their zero-shot or
few-shot learning capabilities facilitate generalization to new data or tasks
without extensive fine-tuning, a concept that is particularly advantageous in
the medical imaging domain, where large annotated datasets for traditional
training are scarce. In this context, a comprehensive evaluation of foundation
models for polyp segmentation was conducted, assessing both detection and
delimitation. For the study, three different colonoscopy datasets have been
employed to compare the performance of five different foundation models,
DINOv2, YOLO-World, GroundingDINO, SAM and MedSAM, against two benchmark
networks, YOLOv8 and Mask R-CNN. Results show that the success of foundation
models in polyp characterization is highly dependent on domain specialization.
For optimal performance in medical applications, domain-specific models are
essential, and generic models require fine-tuning to achieve effective results.
Through this specialization, foundation models demonstrated superior
performance compared to state-of-the-art detection and segmentation models,
with some models even excelling in zero-shot evaluation; outperforming
fine-tuned models on unseen data.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:20:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Delaquintana-Aramendi",
"Uxue",
""
],
[
"Benito-del-Valle",
"Leire",
""
],
[
"Alvarez-Gila",
"Aitor",
""
],
[
"Pascau",
"Javier",
""
],
[
"Sánchez-Peralta",
"Luisa F",
""
],
[
"Picón",
"Artzai",
""
],
[
"Pagador",
"J Blas",
""
],
[
"Saratxaga",
"Cristina L",
""
]
] | TITLE: AI-Assisted Colonoscopy: Polyp Detection and Segmentation using
Foundation Models
ABSTRACT: In colonoscopy, 80% of the missed polyps could be detected with the help of
Deep Learning models. In the search for algorithms capable of addressing this
challenge, foundation models emerge as promising candidates. Their zero-shot or
few-shot learning capabilities facilitate generalization to new data or tasks
without extensive fine-tuning, a concept that is particularly advantageous in
the medical imaging domain, where large annotated datasets for traditional
training are scarce. In this context, a comprehensive evaluation of foundation
models for polyp segmentation was conducted, assessing both detection and
delimitation. For the study, three different colonoscopy datasets have been
employed to compare the performance of five different foundation models,
DINOv2, YOLO-World, GroundingDINO, SAM and MedSAM, against two benchmark
networks, YOLOv8 and Mask R-CNN. Results show that the success of foundation
models in polyp characterization is highly dependent on domain specialization.
For optimal performance in medical applications, domain-specific models are
essential, and generic models require fine-tuning to achieve effective results.
Through this specialization, foundation models demonstrated superior
performance compared to state-of-the-art detection and segmentation models,
with some models even excelling in zero-shot evaluation; outperforming
fine-tuned models on unseen data.
|
2503.24150 | Kailas Vodrahalli | Kailas Vodrahalli, Wei Wei, James Zou | Learning a Canonical Basis of Human Preferences from Binary Ratings | 25 pages, 11 figures | null | null | null | cs.LG cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Recent advances in generative AI have been driven by alignment techniques
such as reinforcement learning from human feedback (RLHF). RLHF and related
techniques typically involve constructing a dataset of binary or ranked choice
human preferences and subsequently fine-tuning models to align with these
preferences. This paper shifts the focus to understanding the preferences
encoded in such datasets and identifying common human preferences. We find that
a small subset of 21 preference categories (selected from a set of nearly 5,000
distinct preferences) captures >89% of preference variation across individuals.
This small set of preferences is analogous to a canonical basis of human
preferences, similar to established findings that characterize human variation
in psychology or facial recognition studies. Through both synthetic and
empirical evaluations, we confirm that our low-rank, canonical set of human
preferences generalizes across the entire dataset and within specific topics.
We further demonstrate our preference basis' utility in model evaluation, where
our preference categories offer deeper insights into model alignment, and in
model training, where we show that fine-tuning on preference-defined subsets
successfully aligns the model accordingly.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:35:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Vodrahalli",
"Kailas",
""
],
[
"Wei",
"Wei",
""
],
[
"Zou",
"James",
""
]
] | TITLE: Learning a Canonical Basis of Human Preferences from Binary Ratings
ABSTRACT: Recent advances in generative AI have been driven by alignment techniques
such as reinforcement learning from human feedback (RLHF). RLHF and related
techniques typically involve constructing a dataset of binary or ranked choice
human preferences and subsequently fine-tuning models to align with these
preferences. This paper shifts the focus to understanding the preferences
encoded in such datasets and identifying common human preferences. We find that
a small subset of 21 preference categories (selected from a set of nearly 5,000
distinct preferences) captures >89% of preference variation across individuals.
This small set of preferences is analogous to a canonical basis of human
preferences, similar to established findings that characterize human variation
in psychology or facial recognition studies. Through both synthetic and
empirical evaluations, we confirm that our low-rank, canonical set of human
preferences generalizes across the entire dataset and within specific topics.
We further demonstrate our preference basis' utility in model evaluation, where
our preference categories offer deeper insights into model alignment, and in
model training, where we show that fine-tuning on preference-defined subsets
successfully aligns the model accordingly.
|
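Record 2503.24150 argues that a small set of preference categories acts as a low-rank, canonical basis explaining most preference variation. The abstract does not specify the decomposition used; the sketch below shows one generic way to extract such a basis, via truncated SVD of a synthetic respondent-by-category matrix, together with the variance-explained check the abstract alludes to.

```python
import numpy as np

def truncated_svd_basis(M, k):
    """Return a rank-k basis over preference categories for matrix M
    (respondents x categories) and the fraction of variance it captures."""
    Mc = M - M.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
    explained = np.sum(s[:k] ** 2) / np.sum(s ** 2)
    return Vt[:k], explained

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic binary preference ratings driven by 3 latent factors.
    latent = rng.standard_normal((500, 3))
    loadings = rng.standard_normal((3, 40))
    M = (latent @ loadings + 0.3 * rng.standard_normal((500, 40)) > 0).astype(float)
    basis, frac = truncated_svd_basis(M, k=3)
    print(basis.shape, f"variance explained: {frac:.2f}")
```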
2503.24165 | Saeed Hassanpour | Peiying Hua, Andrea Olofson, Faraz Farhadi, Liesbeth Hondelink,
Gregory Tsongalis, Konstantin Dragnev, Dagmar Hoegemann Savellano, Arief
Suriawinata, Laura Tafe, Saeed Hassanpour | Predicting Targeted Therapy Resistance in Non-Small Cell Lung Cancer
Using Multimodal Machine Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Lung cancer is the primary cause of cancer death globally, with non-small
cell lung cancer (NSCLC) emerging as its most prevalent subtype. Among NSCLC
patients, approximately 32.3% have mutations in the epidermal growth factor
receptor (EGFR) gene. Osimertinib, a third-generation EGFR-tyrosine kinase
inhibitor (TKI), has demonstrated remarkable efficacy in the treatment of NSCLC
patients with activating and T790M resistance EGFR mutations. Despite its
established efficacy, drug resistance poses a significant challenge for
patients to fully benefit from osimertinib. The absence of a standard tool to
accurately predict TKI resistance, including that of osimertinib, remains a
critical obstacle. To bridge this gap, in this study, we developed an
interpretable multimodal machine learning model designed to predict patient
resistance to osimertinib among late-stage NSCLC patients with activating EGFR
mutations, achieving a c-index of 0.82 on a multi-institutional dataset. This
machine learning model harnesses readily available data routinely collected
during patient visits and medical assessments to facilitate precision lung
cancer management and informed treatment decisions. By integrating various data
types such as histology images, next generation sequencing (NGS) data,
demographics data, and clinical records, our multimodal model can generate
well-informed recommendations. Our experiment results also demonstrated the
superior performance of the multimodal model over single modality models
(c-index 0.82 compared with 0.75 and 0.77), thus underscoring the benefit of
combining multiple modalities in patient outcome prediction.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:47:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hua",
"Peiying",
""
],
[
"Olofson",
"Andrea",
""
],
[
"Farhadi",
"Faraz",
""
],
[
"Hondelink",
"Liesbeth",
""
],
[
"Tsongalis",
"Gregory",
""
],
[
"Dragnev",
"Konstantin",
""
],
[
"Savellano",
"Dagmar Hoegemann",
""
],
[
"Suriawinata",
"Arief",
""
],
[
"Tafe",
"Laura",
""
],
[
"Hassanpour",
"Saeed",
""
]
] | TITLE: Predicting Targeted Therapy Resistance in Non-Small Cell Lung Cancer
Using Multimodal Machine Learning
ABSTRACT: Lung cancer is the primary cause of cancer death globally, with non-small
cell lung cancer (NSCLC) emerging as its most prevalent subtype. Among NSCLC
patients, approximately 32.3% have mutations in the epidermal growth factor
receptor (EGFR) gene. Osimertinib, a third-generation EGFR-tyrosine kinase
inhibitor (TKI), has demonstrated remarkable efficacy in the treatment of NSCLC
patients with activating and T790M resistance EGFR mutations. Despite its
established efficacy, drug resistance poses a significant challenge for
patients to fully benefit from osimertinib. The absence of a standard tool to
accurately predict TKI resistance, including that of osimertinib, remains a
critical obstacle. To bridge this gap, in this study, we developed an
interpretable multimodal machine learning model designed to predict patient
resistance to osimertinib among late-stage NSCLC patients with activating EGFR
mutations, achieving a c-index of 0.82 on a multi-institutional dataset. This
machine learning model harnesses readily available data routinely collected
during patient visits and medical assessments to facilitate precision lung
cancer management and informed treatment decisions. By integrating various data
types such as histology images, next generation sequencing (NGS) data,
demographics data, and clinical records, our multimodal model can generate
well-informed recommendations. Our experiment results also demonstrated the
superior performance of the multimodal model over single modality models
(c-index 0.82 compared with 0.75 and 0.77), thus underscoring the benefit of
combining multiple modalities in patient outcome prediction.
|
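Record 2503.24165 reports its headline result as a c-index of 0.82. For readers unfamiliar with the metric, a small reference implementation of Harrell's concordance index on toy survival-style data is sketched below; the multimodal model itself is not reproduced, and the sample values are invented for illustration.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's c-index: fraction of comparable patient pairs whose predicted
    risks are ordered consistently with their observed outcomes.

    times:       observed time-to-event or censoring time
    events:      1 if the event (e.g., progression on therapy) was observed
    risk_scores: higher score = higher predicted risk (earlier event)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had an observed event before time_j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

if __name__ == "__main__":
    times = np.array([5.0, 8.0, 3.0, 12.0, 7.0])
    events = np.array([1, 1, 1, 0, 1])
    scores = np.array([0.9, 0.4, 0.95, 0.1, 0.5])
    print(round(concordance_index(times, events, scores), 3))
```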
2503.24166 | Fabian Fuchs | Fabian Fuchs, Mario Ruben Fernandez, Norman Ettrich, and Janis Keuper | Foundation Models For Seismic Data Processing: An Extensive Review | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Seismic processing plays a crucial role in transforming raw data into
high-quality subsurface images, pivotal for various geoscience applications.
Despite its importance, traditional seismic processing techniques face
challenges such as noisy and damaged data and the reliance on manual,
time-consuming workflows. The emergence of deep learning approaches has
introduced effective and user-friendly alternatives, yet many of these deep
learning approaches rely on synthetic datasets and specialized neural networks.
Recently, foundation models have gained traction in the seismic domain, due to
their success in natural imaging. This paper investigates the application of
foundation models in seismic processing on the tasks: demultiple,
interpolation, and denoising. It evaluates the impact of different model
characteristics, such as pre-training technique and neural network
architecture, on performance and efficiency. Rather than proposing a single
seismic foundation model, this paper critically examines various natural image
foundation models and suggests some promising candidates for future exploration.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:48:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fuchs",
"Fabian",
""
],
[
"Fernandez",
"Mario Ruben",
""
],
[
"Ettrich",
"Norman",
""
],
[
"Keuper",
"Janis",
""
]
] | TITLE: Foundation Models For Seismic Data Processing: An Extensive Review
ABSTRACT: Seismic processing plays a crucial role in transforming raw data into
high-quality subsurface images, pivotal for various geoscience applications.
Despite its importance, traditional seismic processing techniques face
challenges such as noisy and damaged data and the reliance on manual,
time-consuming workflows. The emergence of deep learning approaches has
introduced effective and user-friendly alternatives, yet many of these deep
learning approaches rely on synthetic datasets and specialized neural networks.
Recently, foundation models have gained traction in the seismic domain, due to
their success in natural imaging. This paper investigates the application of
foundation models in seismic processing on the tasks: demultiple,
interpolation, and denoising. It evaluates the impact of different model
characteristics, such as pre-training technique and neural network
architecture, on performance and efficiency. Rather than proposing a single
seismic foundation model, this paper critically examines various natural image
foundation models and suggests some promising candidates for future exploration.
|
2503.24180 | Zhiyuan Huang | Ziming Cheng, Zhiyuan Huang, Junting Pan, Zhaohui Hou and Mingjie Zhan | Navi-plus: Managing Ambiguous GUI Navigation Tasks with Follow-up | null | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphical user interfaces (GUI) automation agents are emerging as powerful
tools, enabling humans to accomplish increasingly complex tasks on smart
devices. However, users often inadvertently omit key information when conveying
tasks, which hinders agent performance in the current agent paradigm that does
not support immediate user intervention. To address this issue, we introduce a
$\textbf{Self-Correction GUI Navigation}$ task that incorporates interactive
information completion capabilities within GUI agents. We developed the
$\textbf{Navi-plus}$ dataset with GUI follow-up question-answer pairs,
alongside a $\textbf{Dual-Stream Trajectory Evaluation}$ method to benchmark
this new capability. Our results show that agents equipped with the ability to
ask GUI follow-up questions can fully recover their performance when faced with
ambiguous user tasks.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:56:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Cheng",
"Ziming",
""
],
[
"Huang",
"Zhiyuan",
""
],
[
"Pan",
"Junting",
""
],
[
"Hou",
"Zhaohui",
""
],
[
"Zhan",
"Mingjie",
""
]
] | TITLE: Navi-plus: Managing Ambiguous GUI Navigation Tasks with Follow-up
ABSTRACT: Graphical user interfaces (GUI) automation agents are emerging as powerful
tools, enabling humans to accomplish increasingly complex tasks on smart
devices. However, users often inadvertently omit key information when conveying
tasks, which hinders agent performance in the current agent paradigm that does
not support immediate user intervention. To address this issue, we introduce a
$\textbf{Self-Correction GUI Navigation}$ task that incorporates interactive
information completion capabilities within GUI agents. We developed the
$\textbf{Navi-plus}$ dataset with GUI follow-up question-answer pairs,
alongside a $\textbf{Dual-Stream Trajectory Evaluation}$ method to benchmark
this new capability. Our results show that agents equipped with the ability to
ask GUI follow-up questions can fully recover their performance when faced with
ambiguous user tasks.
|
2503.24182 | Yingrui Ji | Yingrui Ji, Xi Xiao, Gaofei Chen, Hao Xu, Chenrui Ma, Lijing Zhu,
Aokun Liang, Jiansheng Chen | CIBR: Cross-modal Information Bottleneck Regularization for Robust CLIP
Generalization | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Contrastive Language-Image Pretraining (CLIP) has achieved remarkable success
in cross-modal tasks such as zero-shot image classification and text-image
retrieval by effectively aligning visual and textual representations. However,
the theoretical foundations underlying CLIP's strong generalization remain
unclear. In this work, we address this gap by proposing the Cross-modal
Information Bottleneck (CIB) framework. CIB offers a principled interpretation
of CLIP's contrastive learning objective as an implicit Information Bottleneck
optimization. Under this view, the model maximizes shared cross-modal
information while discarding modality-specific redundancies, thereby preserving
essential semantic alignment across modalities. Building on this insight, we
introduce a Cross-modal Information Bottleneck Regularization (CIBR) method
that explicitly enforces these IB principles during training. CIBR introduces a
penalty term to discourage modality-specific redundancy, thereby enhancing
semantic alignment between image and text features. We validate CIBR on
extensive vision-language benchmarks, including zero-shot classification across
seven diverse image datasets and text-image retrieval on MSCOCO and Flickr30K.
The results show consistent performance gains over standard CLIP. These
findings provide the first theoretical understanding of CLIP's generalization
through the IB lens. They also demonstrate practical improvements, offering
guidance for future cross-modal representation learning.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:00:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ji",
"Yingrui",
""
],
[
"Xiao",
"Xi",
""
],
[
"Chen",
"Gaofei",
""
],
[
"Xu",
"Hao",
""
],
[
"Ma",
"Chenrui",
""
],
[
"Zhu",
"Lijing",
""
],
[
"Liang",
"Aokun",
""
],
[
"Chen",
"Jiansheng",
""
]
] | TITLE: CIBR: Cross-modal Information Bottleneck Regularization for Robust CLIP
Generalization
ABSTRACT: Contrastive Language-Image Pretraining (CLIP) has achieved remarkable success
in cross-modal tasks such as zero-shot image classification and text-image
retrieval by effectively aligning visual and textual representations. However,
the theoretical foundations underlying CLIP's strong generalization remain
unclear. In this work, we address this gap by proposing the Cross-modal
Information Bottleneck (CIB) framework. CIB offers a principled interpretation
of CLIP's contrastive learning objective as an implicit Information Bottleneck
optimization. Under this view, the model maximizes shared cross-modal
information while discarding modality-specific redundancies, thereby preserving
essential semantic alignment across modalities. Building on this insight, we
introduce a Cross-modal Information Bottleneck Regularization (CIBR) method
that explicitly enforces these IB principles during training. CIBR introduces a
penalty term to discourage modality-specific redundancy, thereby enhancing
semantic alignment between image and text features. We validate CIBR on
extensive vision-language benchmarks, including zero-shot classification across
seven diverse image datasets and text-image retrieval on MSCOCO and Flickr30K.
The results show consistent performance gains over standard CLIP. These
findings provide the first theoretical understanding of CLIP's generalization
through the IB lens. They also demonstrate practical improvements, offering
guidance for future cross-modal representation learning.
|
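Record 2503.24182 adds a penalty term to CLIP training that discourages modality-specific redundancy. The exact penalty is not given in the abstract, so the PyTorch sketch below pairs a standard CLIP-style InfoNCE loss with an assumed, purely illustrative redundancy term (the squared residual of each image embedding orthogonal to its paired text embedding); treat the penalty and its weight as placeholders, not the CIBR formulation.

```python
import torch
import torch.nn.functional as F

def clip_infonce(img_emb, txt_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of paired embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def redundancy_penalty(img_emb, txt_emb):
    """Illustrative stand-in for an IB-style penalty: penalize the component
    of each image embedding orthogonal to its paired text embedding."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    shared = (img * txt).sum(dim=-1, keepdim=True) * txt   # projection onto text
    residual = img - shared
    return residual.pow(2).sum(dim=-1).mean()

if __name__ == "__main__":
    img = torch.randn(8, 128, requires_grad=True)
    txt = torch.randn(8, 128, requires_grad=True)
    loss = clip_infonce(img, txt) + 0.1 * redundancy_penalty(img, txt)
    loss.backward()
    print(float(loss))
```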
2503.24198 | Jingxian Xu | Jingxian Xu, Mengyu Zhou, Weichang Liu, Hanbing Liu, Shi Han, Dongmei
Zhang | TwT: Thinking without Tokens by Habitual Reasoning Distillation with
Multi-Teachers' Guidance | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have made significant strides in problem-solving
by incorporating reasoning processes. However, this enhanced reasoning
capability results in an increased number of output tokens during inference,
leading to higher computational costs. To address this challenge, we propose
TwT (Thinking without Tokens), a method that reduces inference-time costs
through habitual reasoning distillation with multi-teachers' guidance, while
maintaining high performance. Our approach introduces a Habitual Reasoning
Distillation method, which internalizes explicit reasoning into the model's
habitual behavior through a Teacher-Guided compression strategy inspired by
human cognition. Additionally, we propose Dual-Criteria Rejection Sampling
(DCRS), a technique that generates a high-quality and diverse distillation
dataset using multiple teacher models, making our method suitable for
unsupervised scenarios. Experimental results demonstrate that TwT effectively
reduces inference costs while preserving superior performance, achieving up to
a 13.6% improvement in accuracy with fewer output tokens compared to other
distillation methods, offering a highly practical solution for efficient LLM
deployment.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:16:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xu",
"Jingxian",
""
],
[
"Zhou",
"Mengyu",
""
],
[
"Liu",
"Weichang",
""
],
[
"Liu",
"Hanbing",
""
],
[
"Han",
"Shi",
""
],
[
"Zhang",
"Dongmei",
""
]
] | TITLE: TwT: Thinking without Tokens by Habitual Reasoning Distillation with
Multi-Teachers' Guidance
ABSTRACT: Large Language Models (LLMs) have made significant strides in problem-solving
by incorporating reasoning processes. However, this enhanced reasoning
capability results in an increased number of output tokens during inference,
leading to higher computational costs. To address this challenge, we propose
TwT (Thinking without Tokens), a method that reduces inference-time costs
through habitual reasoning distillation with multi-teachers' guidance, while
maintaining high performance. Our approach introduces a Habitual Reasoning
Distillation method, which internalizes explicit reasoning into the model's
habitual behavior through a Teacher-Guided compression strategy inspired by
human cognition. Additionally, we propose Dual-Criteria Rejection Sampling
(DCRS), a technique that generates a high-quality and diverse distillation
dataset using multiple teacher models, making our method suitable for
unsupervised scenarios. Experimental results demonstrate that TwT effectively
reduces inference costs while preserving superior performance, achieving up to
a 13.6% improvement in accuracy with fewer output tokens compared to other
distillation methods, offering a highly practical solution for efficient LLM
deployment.
|
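Record 2503.24198 describes Dual-Criteria Rejection Sampling only at a high level. The sketch below shows one plausible reading, assuming the two criteria are a quality threshold (e.g. a teacher-ensemble score) and a diversity threshold (maximum cosine similarity to already-accepted samples); the thresholds, scores, and embeddings are hypothetical, not the paper's procedure.

```python
import numpy as np

def dual_criteria_rejection_sampling(candidates, quality_scores, embeddings,
                                     quality_min=0.7, max_similarity=0.9):
    """Keep candidates that (1) exceed a quality threshold and (2) are not
    too similar to anything already accepted (cosine similarity)."""
    accepted, accepted_emb = [], []
    order = np.argsort(-np.asarray(quality_scores))       # best-quality first
    for idx in order:
        if quality_scores[idx] < quality_min:
            continue                                       # rejected: low quality
        e = embeddings[idx] / (np.linalg.norm(embeddings[idx]) + 1e-12)
        if accepted_emb and max(float(e @ a) for a in accepted_emb) > max_similarity:
            continue                                       # rejected: redundant
        accepted.append(candidates[idx])
        accepted_emb.append(e)
    return accepted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cands = [f"teacher reasoning trace {i}" for i in range(10)]
    quality = rng.uniform(0.5, 1.0, size=10)   # e.g. a teacher-ensemble score (assumed)
    embs = rng.standard_normal((10, 32))       # e.g. sentence embeddings (assumed)
    kept = dual_criteria_rejection_sampling(cands, quality, embs)
    print(len(kept), "of", len(cands), "candidates kept")
```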
2503.24205 | Stefano Riva | Stefano Riva, Andrea Missaglia, Carolina Introini, In Cheol Bang,
Antonio Cammi | A Comparison of Parametric Dynamic Mode Decomposition Algorithms for
Thermal-Hydraulics Applications | null | null | null | null | math.DS cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | In recent years, algorithms aiming at learning models from available data
have become quite popular due to two factors: 1) the significant developments
in Artificial Intelligence techniques and 2) the availability of large amounts
of data. Nevertheless, this topic has already been addressed by methodologies
belonging to the Reduced Order Modelling framework, of which perhaps the most
famous equation-free technique is Dynamic Mode Decomposition. This algorithm
aims to learn the best linear model that represents the physical phenomena
described by a time series dataset: its output is a best state operator of the
underlying dynamical system that can be used, in principle, to advance the
original dataset in time even beyond its span. However, in its standard
formulation, this technique cannot deal with parametric time series, meaning
that a different linear model has to be derived for each parameter realization.
Research on this is ongoing, and some versions of a parametric Dynamic Mode
Decomposition already exist. This work contributes to this research field by
comparing the different algorithms presently deployed and assessing their
advantages and shortcomings compared to each other. To this aim, three
different thermal-hydraulics problems are considered: two benchmark 'flow over
cylinder' test cases at diverse Reynolds numbers, whose datasets are,
respectively, obtained with the FEniCS finite element solver and retrieved from
the CFDbench dataset, and the DYNASTY experimental facility operating at
Politecnico di Milano, which studies the natural circulation established by
internally heated fluids for Generation IV nuclear applications, whose dataset
was generated using the RELAP5 nodal solver.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:23:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Riva",
"Stefano",
""
],
[
"Missaglia",
"Andrea",
""
],
[
"Introini",
"Carolina",
""
],
[
"Bang",
"In Cheol",
""
],
[
"Cammi",
"Antonio",
""
]
] | TITLE: A Comparison of Parametric Dynamic Mode Decomposition Algorithms for
Thermal-Hydraulics Applications
ABSTRACT: In recent years, algorithms aiming at learning models from available data
have become quite popular due to two factors: 1) the significant developments
in Artificial Intelligence techniques and 2) the availability of large amounts
of data. Nevertheless, this topic has already been addressed by methodologies
belonging to the Reduced Order Modelling framework, of which perhaps the most
famous equation-free technique is Dynamic Mode Decomposition. This algorithm
aims to learn the best linear model that represents the physical phenomena
described by a time series dataset: its output is a best state operator of the
underlying dynamical system that can be used, in principle, to advance the
original dataset in time even beyond its span. However, in its standard
formulation, this technique cannot deal with parametric time series, meaning
that a different linear model has to be derived for each parameter realization.
Research on this is ongoing, and some versions of a parametric Dynamic Mode
Decomposition already exist. This work contributes to this research field by
comparing the different algorithms presently deployed and assessing their
advantages and shortcomings compared to each other. To this aim, three
different thermal-hydraulics problems are considered: two benchmark 'flow over
cylinder' test cases at diverse Reynolds numbers, whose datasets are,
respectively, obtained with the FEniCS finite element solver and retrieved from
the CFDbench dataset, and the DYNASTY experimental facility operating at
Politecnico di Milano, which studies the natural circulation established by
internally heated fluids for Generation IV nuclear applications, whose dataset
was generated using the RELAP5 nodal solver.
|
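Record 2503.24205 compares parametric variants of Dynamic Mode Decomposition. For reference, a minimal NumPy implementation of the standard exact DMD that those variants extend is given below; the toy snapshot matrix and truncation rank are illustrative, and none of the parametric algorithms compared in the paper are reproduced.

```python
import numpy as np

def dmd(X, r=None):
    """Standard exact Dynamic Mode Decomposition of a snapshot matrix X
    (states x time). Returns DMD eigenvalues, modes, and the reduced
    best-fit linear operator A_tilde with x_{k+1} ~ A x_k."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                              # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes, A_tilde

if __name__ == "__main__":
    # Toy dataset: two superposed oscillations sampled on 50 spatial points.
    t = np.linspace(0, 4 * np.pi, 100)
    x = np.linspace(-5, 5, 50)
    X = (np.outer(1.0 / np.cosh(x), np.exp(0.9j * t))
         + np.outer(np.tanh(x), 0.5 * np.exp(2.5j * t))).real
    eigvals, modes, _ = dmd(X, r=4)
    print(np.round(np.abs(eigvals), 3))   # near 1 for sustained oscillations
```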
2503.24219 | Karim Radouane | Karim Radouane and Hanane Azzag and Mustapha lebbah | MB-ORES: A Multi-Branch Object Reasoner for Visual Grounding in Remote
Sensing | null | null | null | null | cs.CV cs.AI cs.CL cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a unified framework that integrates object detection (OD) and
visual grounding (VG) for remote sensing (RS) imagery. To support conventional
OD and establish an intuitive prior for VG task, we fine-tune an open-set
object detector using referring expression data, framing it as a partially
supervised OD task. In the first stage, we construct a graph representation of
each image, comprising object queries, class embeddings, and proposal
locations. Then, our task-aware architecture processes this graph to perform
the VG task. The model consists of: (i) a multi-branch network that integrates
spatial, visual, and categorical features to generate task-aware proposals, and
(ii) an object reasoning network that assigns probabilities across proposals,
followed by a soft selection mechanism for final referring object localization.
Our model demonstrates superior performance on the OPT-RSVG and DIOR-RSVG
datasets, achieving significant improvements over state-of-the-art methods
while retaining classical OD capabilities. The code will be available in our
repository: \url{https://github.com/rd20karim/MB-ORES}.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:36:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Radouane",
"Karim",
""
],
[
"Azzag",
"Hanane",
""
],
[
"lebbah",
"Mustapha",
""
]
] | TITLE: MB-ORES: A Multi-Branch Object Reasoner for Visual Grounding in Remote
Sensing
ABSTRACT: We propose a unified framework that integrates object detection (OD) and
visual grounding (VG) for remote sensing (RS) imagery. To support conventional
OD and establish an intuitive prior for VG task, we fine-tune an open-set
object detector using referring expression data, framing it as a partially
supervised OD task. In the first stage, we construct a graph representation of
each image, comprising object queries, class embeddings, and proposal
locations. Then, our task-aware architecture processes this graph to perform
the VG task. The model consists of: (i) a multi-branch network that integrates
spatial, visual, and categorical features to generate task-aware proposals, and
(ii) an object reasoning network that assigns probabilities across proposals,
followed by a soft selection mechanism for final referring object localization.
Our model demonstrates superior performance on the OPT-RSVG and DIOR-RSVG
datasets, achieving significant improvements over state-of-the-art methods
while retaining classical OD capabilities. The code will be available in our
repository: \url{https://github.com/rd20karim/MB-ORES}.
|
2503.24229 | Hirokatsu Kataoka | Daichi Otsuka, Shinichi Mae, Ryosuke Yamada, Hirokatsu Kataoka | Pre-training with 3D Synthetic Data: Learning 3D Point Cloud Instance
Segmentation from 3D Synthetic Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the research community has witnessed growing use of 3D
point cloud data, owing to its high applicability in various real-world
applications. As a modality, 3D point clouds make it possible to reason about
actual object size and spatial structure. The applied fields include mechanical
control of robots, vehicles, and other real-world systems. Along this line, we
aim to improve 3D point cloud instance segmentation, which has emerged as a
particularly promising approach for these applications. However, creating 3D
point cloud datasets entails enormous costs compared to 2D image datasets:
training a 3D point cloud instance segmentation model requires not only
category labels but also detailed annotations for each point in a large-scale
3D space. Meanwhile, the recent surge of generative models in the 3D domain has
spurred proposals to use generative models to create 3D point cloud data. In
this work, we propose pre-training with 3D synthetic data to train a 3D point
cloud instance segmentation model, using a generative model to build 3D scenes
represented as point cloud data. We directly generate 3D point cloud data with
Point-E and insert the generated data into a 3D scene. Although more accurate
3D generation models exist as of 2025, even Point-E, an early 3D generative
model, can effectively support pre-training with 3D synthetic data. In the
experimental section, comparisons of our pre-training method with baseline
methods indicate improved performance, demonstrating the efficacy of 3D
generative models for 3D point cloud instance segmentation.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:42:10 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Otsuka",
"Daichi",
""
],
[
"Mae",
"Shinichi",
""
],
[
"Yamada",
"Ryosuke",
""
],
[
"Kataoka",
"Hirokatsu",
""
]
] | TITLE: Pre-training with 3D Synthetic Data: Learning 3D Point Cloud Instance
Segmentation from 3D Synthetic Scenes
ABSTRACT: In recent years, the research community has witnessed growing use of 3D
point cloud data, owing to its high applicability in various real-world
applications. As a modality, 3D point clouds make it possible to reason about
actual object size and spatial structure. The applied fields include mechanical
control of robots, vehicles, and other real-world systems. Along this line, we
aim to improve 3D point cloud instance segmentation, which has emerged as a
particularly promising approach for these applications. However, creating 3D
point cloud datasets entails enormous costs compared to 2D image datasets:
training a 3D point cloud instance segmentation model requires not only
category labels but also detailed annotations for each point in a large-scale
3D space. Meanwhile, the recent surge of generative models in the 3D domain has
spurred proposals to use generative models to create 3D point cloud data. In
this work, we propose pre-training with 3D synthetic data to train a 3D point
cloud instance segmentation model, using a generative model to build 3D scenes
represented as point cloud data. We directly generate 3D point cloud data with
Point-E and insert the generated data into a 3D scene. Although more accurate
3D generation models exist as of 2025, even Point-E, an early 3D generative
model, can effectively support pre-training with 3D synthetic data. In the
experimental section, comparisons of our pre-training method with baseline
methods indicate improved performance, demonstrating the efficacy of 3D
generative models for 3D point cloud instance segmentation.
|
2503.24245 | Dun Yuan | Dun Yuan, Hao Zhou, Di Wu, Xue Liu, Hao Chen, Yan Xin, Jianzhong
(Charlie) Zhang | Enhancing Large Language Models (LLMs) for Telecommunications using
Knowledge Graphs and Retrieval-Augmented Generation | This work has been accepted to ICC 2025 IEEE International Conference
on Communications. copyright 2025 IEEE | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have made significant progress in
general-purpose natural language processing tasks. However, LLMs are still
facing challenges when applied to domain-specific areas like
telecommunications, which demands specialized expertise and adaptability to
evolving standards. This paper presents a novel framework that combines
knowledge graph (KG) and retrieval-augmented generation (RAG) techniques to
enhance LLM performance in the telecom domain. The framework leverages a KG to
capture structured, domain-specific information about network protocols,
standards, and other telecom-related entities, comprehensively representing
their relationships. By integrating KG with RAG, LLMs can dynamically access
and utilize the most relevant and up-to-date knowledge during response
generation. This hybrid approach bridges the gap between structured knowledge
representation and the generative capabilities of LLMs, significantly enhancing
accuracy, adaptability, and domain-specific comprehension. Our results
demonstrate the effectiveness of the KG-RAG framework in addressing complex
technical queries with precision. The proposed KG-RAG model attained an
accuracy of 88% for question answering tasks on a frequently used
telecom-specific dataset, compared to 82% for the RAG-only and 48% for the
LLM-only approaches.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:58:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yuan",
"Dun",
"",
"Charlie"
],
[
"Zhou",
"Hao",
"",
"Charlie"
],
[
"Wu",
"Di",
"",
"Charlie"
],
[
"Liu",
"Xue",
"",
"Charlie"
],
[
"Chen",
"Hao",
"",
"Charlie"
],
[
"Xin",
"Yan",
"",
"Charlie"
],
[
"Jianzhong",
"",
"",
"Charlie"
],
[
"Zhang",
"",
""
]
] | TITLE: Enhancing Large Language Models (LLMs) for Telecommunications using
Knowledge Graphs and Retrieval-Augmented Generation
ABSTRACT: Large language models (LLMs) have made significant progress in
general-purpose natural language processing tasks. However, LLMs are still
facing challenges when applied to domain-specific areas like
telecommunications, which demands specialized expertise and adaptability to
evolving standards. This paper presents a novel framework that combines
knowledge graph (KG) and retrieval-augmented generation (RAG) techniques to
enhance LLM performance in the telecom domain. The framework leverages a KG to
capture structured, domain-specific information about network protocols,
standards, and other telecom-related entities, comprehensively representing
their relationships. By integrating KG with RAG, LLMs can dynamically access
and utilize the most relevant and up-to-date knowledge during response
generation. This hybrid approach bridges the gap between structured knowledge
representation and the generative capabilities of LLMs, significantly enhancing
accuracy, adaptability, and domain-specific comprehension. Our results
demonstrate the effectiveness of the KG-RAG framework in addressing complex
technical queries with precision. The proposed KG-RAG model attained an
accuracy of 88% for question answering tasks on a frequently used
telecom-specific dataset, compared to 82% for the RAG-only and 48% for the
LLM-only approaches.
|
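Record 2503.24245 combines knowledge-graph retrieval with retrieval-augmented generation before querying an LLM. The sketch below is only a schematic of that pipeline shape: a toy triple store, a toy document store, naive keyword-overlap retrieval, and a prompt assembled from both; none of it reflects the paper's actual system, data, or retrieval machinery.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

# Toy telecom knowledge graph and document store (placeholders).
KG = [
    Triple("5G NR", "defined_by", "3GPP Release 15"),
    Triple("5G NR", "uses_waveform", "CP-OFDM"),
    Triple("3GPP Release 15", "specifies", "NSA and SA architectures"),
]
DOCS = [
    "CP-OFDM is the downlink waveform adopted for 5G NR.",
    "Release 15 froze the first full set of 5G NR specifications.",
]

def retrieve_triples(query, kg, k=2):
    """Naive keyword-overlap scoring over KG triples."""
    q = set(query.lower().split())
    key = lambda t: -len(q & set(f"{t.head} {t.relation} {t.tail}".lower().split()))
    return sorted(kg, key=key)[:k]

def retrieve_chunks(query, docs, k=1):
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query):
    """Assemble structured facts and retrieved context into one LLM prompt."""
    facts = "\n".join(f"- {t.head} {t.relation.replace('_', ' ')} {t.tail}"
                      for t in retrieve_triples(query, KG))
    context = "\n".join(retrieve_chunks(query, DOCS))
    return (f"Structured facts:\n{facts}\n\nRetrieved context:\n{context}\n\n"
            f"Question: {query}\nAnswer:")

if __name__ == "__main__":
    print(build_prompt("Which waveform does 5G NR use?"))
```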
2503.24251 | Suchana Datta | Sourav Saha, Suchana Datta, Dwaipayan Roy, Mandar Mitra, Derek Greene | Combining Query Performance Predictors: A Reproducibility Study | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | A large number of approaches to Query Performance Prediction (QPP) have been
proposed over the last two decades. As early as 2009, Hauff et al. [28]
explored whether different QPP methods may be combined to improve prediction
quality. Since then, significant research has been done both on QPP approaches
and on their evaluation. This study revisits Hauff et al.'s work to assess
the reproducibility of their findings in the light of new prediction methods,
evaluation metrics, and datasets. We expand the scope of the earlier
investigation by: (i) considering post-retrieval methods, including supervised
neural techniques (only pre-retrieval techniques were studied in [28]); (ii)
using sMARE for evaluation, in addition to the traditional correlation
coefficients and RMSE; and (iii) experimenting with additional datasets
(Clueweb09B and TREC DL). Our results largely support previous claims, but we
also present several interesting findings. We interpret these findings by
taking a more nuanced look at the correlation between QPP methods, examining
whether they capture diverse information or rely on overlapping factors.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:01:58 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Saha",
"Sourav",
""
],
[
"Datta",
"Suchana",
""
],
[
"Roy",
"Dwaipayan",
""
],
[
"Mitra",
"Mandar",
""
],
[
"Greene",
"Derek",
""
]
] | TITLE: Combining Query Performance Predictors: A Reproducibility Study
ABSTRACT: A large number of approaches to Query Performance Prediction (QPP) have been
proposed over the last two decades. As early as 2009, Hauff et al. [28]
explored whether different QPP methods may be combined to improve prediction
quality. Since then, significant research has been done both on QPP approaches
and on their evaluation. This study revisits Hauff et al.'s work to assess
the reproducibility of their findings in the light of new prediction methods,
evaluation metrics, and datasets. We expand the scope of the earlier
investigation by: (i) considering post-retrieval methods, including supervised
neural techniques (only pre-retrieval techniques were studied in [28]); (ii)
using sMARE for evaluation, in addition to the traditional correlation
coefficients and RMSE; and (iii) experimenting with additional datasets
(Clueweb09B and TREC DL). Our results largely support previous claims, but we
also present several interesting findings. We interpret these findings by
taking a more nuanced look at the correlation between QPP methods, examining
whether they capture diverse information or rely on overlapping factors.
|
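As a rough illustration of the predictor-combination idea in the record above (synthetic scores, not the paper's predictors or datasets), one can regress several per-query QPP scores against an effectiveness measure and compare rank correlations; a real study would evaluate on held-out queries:

```python
# Combine two noisy query performance predictors by linear regression against a
# per-query effectiveness measure, then compare Kendall's tau for each predictor
# and for the combination. Data are synthetic placeholders.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_queries = 200
ap = rng.uniform(0, 1, n_queries)                  # e.g., per-query average precision
pred_a = ap + rng.normal(0, 0.25, n_queries)       # two imperfect predictors
pred_b = ap + rng.normal(0, 0.35, n_queries)

features = np.column_stack([pred_a, pred_b])
combined = LinearRegression().fit(features, ap).predict(features)

for name, scores in [("A", pred_a), ("B", pred_b), ("A+B", combined)]:
    tau, _ = kendalltau(scores, ap)
    print(f"predictor {name}: Kendall's tau = {tau:.3f}")
```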
2503.24258 | Valerio Guarrasi | Lorenzo Tronchin, Tommy L\"ofstedt, Paolo Soda, Valerio Guarrasi | Beyond a Single Mode: GAN Ensembles for Diverse Medical Data Generation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The advancement of generative AI, particularly in medical imaging, confronts
the trilemma of ensuring high fidelity, diversity, and efficiency in synthetic
data generation. While Generative Adversarial Networks (GANs) have shown
promise across various applications, they still face challenges like mode
collapse and insufficient coverage of real data distributions. This work
explores the use of GAN ensembles to overcome these limitations, specifically
in the context of medical imaging. By solving a multi-objective optimisation
problem that balances fidelity and diversity, we propose a method for selecting
an optimal ensemble of GANs tailored for medical data. The selected ensemble is
capable of generating diverse synthetic medical images that are representative
of true data distributions and computationally efficient. Each model in the
ensemble brings a unique contribution, ensuring minimal redundancy. We
conducted a comprehensive evaluation using three distinct medical datasets,
testing 22 different GAN architectures with various loss functions and
regularisation techniques. By sampling models at different training epochs, we
crafted 110 unique configurations. The results highlight the capability of GAN
ensembles to enhance the quality and utility of synthetic medical images,
thereby improving the efficacy of downstream tasks such as diagnostic
modelling.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:06:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tronchin",
"Lorenzo",
""
],
[
"Löfstedt",
"Tommy",
""
],
[
"Soda",
"Paolo",
""
],
[
"Guarrasi",
"Valerio",
""
]
] | TITLE: Beyond a Single Mode: GAN Ensembles for Diverse Medical Data Generation
ABSTRACT: The advancement of generative AI, particularly in medical imaging, confronts
the trilemma of ensuring high fidelity, diversity, and efficiency in synthetic
data generation. While Generative Adversarial Networks (GANs) have shown
promise across various applications, they still face challenges like mode
collapse and insufficient coverage of real data distributions. This work
explores the use of GAN ensembles to overcome these limitations, specifically
in the context of medical imaging. By solving a multi-objective optimisation
problem that balances fidelity and diversity, we propose a method for selecting
an optimal ensemble of GANs tailored for medical data. The selected ensemble is
capable of generating diverse synthetic medical images that are representative
of true data distributions and computationally efficient. Each model in the
ensemble brings a unique contribution, ensuring minimal redundancy. We
conducted a comprehensive evaluation using three distinct medical datasets,
testing 22 different GAN architectures with various loss functions and
regularisation techniques. By sampling models at different training epochs, we
crafted 110 unique configurations. The results highlight the capability of GAN
ensembles to enhance the quality and utility of synthetic medical images,
thereby improving the efficacy of downstream tasks such as diagnostic
modelling.
|
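A loose sketch of the ensemble-selection idea in the record above, using a greedy scalarization over synthetic fidelity and mode-coverage scores; the paper instead solves a multi-objective optimisation over 110 real GAN configurations:

```python
# Greedily pick a small GAN ensemble that balances per-model fidelity with the
# fraction of data modes covered by the ensemble as a whole. All scores here are
# random placeholders, not measurements from real generators.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_modes = 12, 20
fidelity = rng.uniform(0.5, 1.0, n_models)          # per-model realism score
covers = rng.random((n_models, n_modes)) < 0.3      # which data modes each model hits

def ensemble_score(members, alpha=0.5):
    members = list(members)
    diversity = covers[members].any(axis=0).mean()  # fraction of modes covered jointly
    return alpha * fidelity[members].mean() + (1 - alpha) * diversity

selected = []
for _ in range(4):                                  # build a 4-model ensemble
    best = max((m for m in range(n_models) if m not in selected),
               key=lambda m: ensemble_score(selected + [m]))
    selected.append(best)

print("selected models:", selected, "score:", round(ensemble_score(selected), 3))
```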
2503.24262 | Umberto Michelucci | Umberto Michelucci and Francesca Venturini | New Statistical Framework for Extreme Error Probability in High-Stakes
Domains for Reliable Machine Learning | null | null | null | null | cs.LG cs.AI stat.ME stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Machine learning is vital in high-stakes domains, yet conventional validation
methods rely on averaging metrics like mean squared error (MSE) or mean
absolute error (MAE), which fail to quantify extreme errors. Worst-case
prediction failures can have substantial consequences, but current frameworks
lack statistical foundations for assessing their probability. In this work, a
new statistical framework, based on Extreme Value Theory (EVT), is presented
that provides a rigorous approach to estimating worst-case failures. Applying
EVT to synthetic and real-world datasets, this method is shown to enable robust
estimation of catastrophic failure probabilities, overcoming the fundamental
limitations of standard cross-validation. This work establishes EVT as a
fundamental tool for assessing model reliability, ensuring safer AI deployment
in new technologies where uncertainty quantification is central to
decision-making or scientific analysis.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:08:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Michelucci",
"Umberto",
""
],
[
"Venturini",
"Francesca",
""
]
] | TITLE: New Statistical Framework for Extreme Error Probability in High-Stakes
Domains for Reliable Machine Learning
ABSTRACT: Machine learning is vital in high-stakes domains, yet conventional validation
methods rely on averaging metrics like mean squared error (MSE) or mean
absolute error (MAE), which fail to quantify extreme errors. Worst-case
prediction failures can have substantial consequences, but current frameworks
lack statistical foundations for assessing their probability. In this work, a
new statistical framework, based on Extreme Value Theory (EVT), is presented
that provides a rigorous approach to estimating worst-case failures. Applying
EVT to synthetic and real-world datasets, this method is shown to enable robust
estimation of catastrophic failure probabilities, overcoming the fundamental
limitations of standard cross-validation. This work establishes EVT as a
fundamental tool for assessing model reliability, ensuring safer AI deployment
in new technologies where uncertainty quantification is central to
decision-making or scientific analysis.
|
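A peaks-over-threshold sketch of the EVT idea in the record above: fit a Generalized Pareto Distribution to the prediction errors that exceed a high threshold and use it to estimate the probability of an extreme error. The errors below are synthetic heavy-tailed noise, and the threshold and query level would be chosen per application:

```python
# Fit a GPD to exceedances over a high threshold and estimate a tail probability.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
abs_errors = np.abs(rng.standard_t(df=3, size=10_000))   # stand-in for |y - y_hat|

u = np.quantile(abs_errors, 0.95)                         # tail threshold
exceedances = abs_errors[abs_errors > u] - u
shape, _, scale = genpareto.fit(exceedances, floc=0)      # GPD fit to the tail

def prob_error_exceeds(x):
    """Estimate P(|error| > x) for x above the threshold u."""
    p_u = (abs_errors > u).mean()
    return p_u * genpareto.sf(x - u, shape, loc=0, scale=scale)

print(f"u = {u:.2f}, estimated P(|error| > 10) = {prob_error_exceeds(10.0):.2e}")
```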
2503.24267 | Yixuan Li | Yixuan Li, Yu Tian, Yipo Huang, Wei Lu, Shiqi Wang, Weisi Lin,
Anderson Rocha | FakeScope: Large Multimodal Expert Model for Transparent AI-Generated
Image Forensics | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid and unrestrained advancement of generative artificial intelligence
(AI) presents a double-edged sword: while enabling unprecedented creativity, it
also facilitates the generation of highly convincing deceptive content,
undermining societal trust. As image generation techniques become increasingly
sophisticated, detecting synthetic images is no longer just a binary task: it
necessitates interpretable, context-aware methodologies that enhance
trustworthiness and transparency. However, existing detection models primarily
focus on classification, offering limited explanatory insights into image
authenticity. In this work, we propose FakeScope, an expert multimodal model
(LMM) tailored for AI-generated image forensics, which not only identifies
AI-synthetic images with high accuracy but also provides rich, interpretable,
and query-driven forensic insights. We first construct the FakeChain dataset, which
contains linguistic authenticity reasoning based on visual trace evidence,
developed through a novel human-machine collaborative framework. Building upon
it, we further present FakeInstruct, the largest multimodal instruction tuning
dataset containing 2 million visual instructions tailored to enhance forensic
awareness in LMMs. FakeScope achieves state-of-the-art performance in both
closed-ended and open-ended forensic scenarios. It can distinguish synthetic
images with high accuracy while offering coherent and insightful explanations,
free-form discussions on fine-grained forgery attributes, and actionable
enhancement strategies. Notably, despite being trained exclusively on
qualitative hard labels, FakeScope demonstrates remarkable zero-shot
quantitative capability on detection, enabled by our proposed token-based
probability estimation strategy. Furthermore, FakeScope exhibits strong
generalization and in-the-wild ability, ensuring its applicability in
real-world scenarios.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:12:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Yixuan",
""
],
[
"Tian",
"Yu",
""
],
[
"Huang",
"Yipo",
""
],
[
"Lu",
"Wei",
""
],
[
"Wang",
"Shiqi",
""
],
[
"Lin",
"Weisi",
""
],
[
"Rocha",
"Anderson",
""
]
] | TITLE: FakeScope: Large Multimodal Expert Model for Transparent AI-Generated
Image Forensics
ABSTRACT: The rapid and unrestrained advancement of generative artificial intelligence
(AI) presents a double-edged sword: while enabling unprecedented creativity, it
also facilitates the generation of highly convincing deceptive content,
undermining societal trust. As image generation techniques become increasingly
sophisticated, detecting synthetic images is no longer just a binary task: it
necessitates interpretable, context-aware methodologies that enhance
trustworthiness and transparency. However, existing detection models primarily
focus on classification, offering limited explanatory insights into image
authenticity. In this work, we propose FakeScope, an expert multimodal model
(LMM) tailored for AI-generated image forensics, which not only identifies
AI-synthetic images with high accuracy but also provides rich, interpretable,
and query-driven forensic insights. We first construct the FakeChain dataset, which
contains linguistic authenticity reasoning based on visual trace evidence,
developed through a novel human-machine collaborative framework. Building upon
it, we further present FakeInstruct, the largest multimodal instruction tuning
dataset containing 2 million visual instructions tailored to enhance forensic
awareness in LMMs. FakeScope achieves state-of-the-art performance in both
closed-ended and open-ended forensic scenarios. It can distinguish synthetic
images with high accuracy while offering coherent and insightful explanations,
free-form discussions on fine-grained forgery attributes, and actionable
enhancement strategies. Notably, despite being trained exclusively on
qualitative hard labels, FakeScope demonstrates remarkable zero-shot
quantitative capability on detection, enabled by our proposed token-based
probability estimation strategy. Furthermore, FakeScope exhibits strong
generalization and in-the-wild ability, ensuring its applicability in
real-world scenarios.
|
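The record above mentions a token-based probability estimation strategy without detailing it. One generic way such an estimate can be read out of a language model is to renormalize the next-token logits of two answer words; the sketch below illustrates only that generic idea, and the prompt, answer tokens, and tokenizer are assumptions rather than FakeScope's design:

```python
# Turn next-token logits for two candidate answer words into a probability.
import torch

def fake_probability(logits, tokenizer):
    """logits: [vocab_size] next-token logits after a question such as
    'Is this image AI-generated?'. Returns P('Yes') among {'Yes', 'No'}."""
    fake_id = tokenizer.convert_tokens_to_ids("Yes")   # hypothetical answer tokens
    real_id = tokenizer.convert_tokens_to_ids("No")
    pair = torch.stack([logits[fake_id], logits[real_id]])
    return torch.softmax(pair, dim=0)[0].item()

class ToyTokenizer:
    """Stand-in object so the sketch runs without loading a real LMM."""
    def convert_tokens_to_ids(self, tok):
        return {"Yes": 0, "No": 1}[tok]

print(fake_probability(torch.tensor([2.0, 0.5, -1.0]), ToyTokenizer()))
```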
2503.24271 | Francesco Pio Ramunno | Francesco Pio Ramunno, Paolo Massa, Vitaliy Kinakh, Brandon Panos,
Andr\'e Csillaghy, Slava Voloshynovskiy | Enhancing Image Resolution of Solar Magnetograms: A Latent Diffusion
Model Approach | Accepted for publication on A&A | null | null | null | astro-ph.SR astro-ph.IM cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | The spatial properties of the solar magnetic field are crucial to decoding
the physical processes in the solar interior and their interplanetary effects.
However, observations from older instruments, such as the Michelson Doppler
Imager (MDI), have limited spatial or temporal resolution, which hinders the
ability to study small-scale solar features in detail. Super-resolving these
older datasets is essential for uniform analysis across different solar cycles,
enabling better characterization of solar flares, active regions, and magnetic
network dynamics. In this work, we introduce a novel diffusion model approach
for Super-Resolution and we apply it to MDI magnetograms to match the
higher-resolution capabilities of the Helioseismic and Magnetic Imager (HMI).
By training a Latent Diffusion Model (LDM) with residuals on downscaled HMI
data and fine-tuning it with paired MDI/HMI data, we can enhance the resolution
of MDI observations from 2"/pixel to 0.5"/pixel. We evaluate the quality of the
reconstructed images by means of classical metrics (e.g., PSNR, SSIM, FID and
LPIPS) and we check if physical properties, such as the unsigned magnetic flux
or the size of an active region, are preserved. We compare our model with
different variations of LDM and Denoising Diffusion Probabilistic models
(DDPMs), but also with two deterministic architectures already used in the past
for performing the Super-Resolution task. Furthermore, we show with an analysis
in the Fourier domain that the LDM with residuals can resolve features smaller
than 2", and due to the probabilistic nature of the LDM, we can asses their
reliability, in contrast with the deterministic models. Future studies aim to
super-resolve the temporal scale of the solar MDI instrument so that we can
also have a better overview of the dynamics of the old events.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:16:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ramunno",
"Francesco Pio",
""
],
[
"Massa",
"Paolo",
""
],
[
"Kinakh",
"Vitaliy",
""
],
[
"Panos",
"Brandon",
""
],
[
"Csillaghy",
"André",
""
],
[
"Voloshynovskiy",
"Slava",
""
]
] | TITLE: Enhancing Image Resolution of Solar Magnetograms: A Latent Diffusion
Model Approach
ABSTRACT: The spatial properties of the solar magnetic field are crucial to decoding
the physical processes in the solar interior and their interplanetary effects.
However, observations from older instruments, such as the Michelson Doppler
Imager (MDI), have limited spatial or temporal resolution, which hinders the
ability to study small-scale solar features in detail. Super-resolving these
older datasets is essential for uniform analysis across different solar cycles,
enabling better characterization of solar flares, active regions, and magnetic
network dynamics. In this work, we introduce a novel diffusion model approach
for Super-Resolution and we apply it to MDI magnetograms to match the
higher-resolution capabilities of the Helioseismic and Magnetic Imager (HMI).
By training a Latent Diffusion Model (LDM) with residuals on downscaled HMI
data and fine-tuning it with paired MDI/HMI data, we can enhance the resolution
of MDI observations from 2"/pixel to 0.5"/pixel. We evaluate the quality of the
reconstructed images by means of classical metrics (e.g., PSNR, SSIM, FID and
LPIPS) and we check if physical properties, such as the unsigned magnetic flux
or the size of an active region, are preserved. We compare our model with
different variations of LDM and Denoising Diffusion Probabilistic models
(DDPMs), but also with two deterministic architectures already used in the past
for performing the Super-Resolution task. Furthermore, we show with an analysis
in the Fourier domain that the LDM with residuals can resolve features smaller
than 2", and due to the probabilistic nature of the LDM, we can asses their
reliability, in contrast with the deterministic models. Future studies aim to
super-resolve the temporal scale of the solar MDI instrument so that we can
also have a better overview of the dynamics of the old events.
|
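A small illustration of the physics-aware checks mentioned in the record above, comparing unsigned magnetic flux and PSNR between a super-resolved magnetogram and a reference; the random arrays and unit pixel area are placeholders for calibrated magnetograms:

```python
# Compare unsigned flux preservation and PSNR for a super-resolved magnetogram.
import numpy as np

rng = np.random.default_rng(0)
hmi = rng.normal(0, 100, (256, 256))           # reference field-strength map (G)
sr = hmi + rng.normal(0, 5, hmi.shape)         # super-resolved estimate

pixel_area = 1.0                               # placeholder pixel area
flux_ref = np.abs(hmi).sum() * pixel_area      # unsigned magnetic flux
flux_sr = np.abs(sr).sum() * pixel_area

mse = np.mean((hmi - sr) ** 2)
data_range = hmi.max() - hmi.min()
psnr = 10 * np.log10(data_range ** 2 / mse)

print(f"relative flux error: {abs(flux_sr - flux_ref) / flux_ref:.3%}, PSNR: {psnr:.1f} dB")
```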
2503.24272 | Yizhou Huang | Yizhou Huang, Yihua Cheng, Kezhi Wang | Learning Velocity and Acceleration: Self-Supervised Motion Consistency
for Pedestrian Trajectory Prediction | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Understanding human motion is crucial for accurate pedestrian trajectory
prediction. Conventional methods typically rely on supervised learning, where
ground-truth labels are directly optimized against predicted trajectories. This
amplifies the limitations caused by long-tailed data distributions, making it
difficult for the model to capture abnormal behaviors. In this work, we propose
a self-supervised pedestrian trajectory prediction framework that explicitly
models position, velocity, and acceleration. We leverage velocity and
acceleration information to enhance position prediction through feature
injection and a self-supervised motion consistency mechanism. Our model
hierarchically injects velocity features into the position stream. Acceleration
features are injected into the velocity stream. This enables the model to
predict position, velocity, and acceleration jointly. From the predicted
position, we compute the corresponding pseudo velocity and acceleration, allowing
the model to learn from data-generated pseudo labels and thus achieve
self-supervised learning. We further design a motion consistency evaluation
strategy grounded in physical principles; it selects the most reasonable
predicted motion trend by comparing it with historical dynamics and uses this
trend to guide and constrain trajectory generation. We conduct experiments on
the ETH-UCY and Stanford Drone datasets, demonstrating that our method achieves
state-of-the-art performance on both datasets.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:17:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Huang",
"Yizhou",
""
],
[
"Cheng",
"Yihua",
""
],
[
"Wang",
"Kezhi",
""
]
] | TITLE: Learning Velocity and Acceleration: Self-Supervised Motion Consistency
for Pedestrian Trajectory Prediction
ABSTRACT: Understanding human motion is crucial for accurate pedestrian trajectory
prediction. Conventional methods typically rely on supervised learning, where
ground-truth labels are directly optimized against predicted trajectories. This
amplifies the limitations caused by long-tailed data distributions, making it
difficult for the model to capture abnormal behaviors. In this work, we propose
a self-supervised pedestrian trajectory prediction framework that explicitly
models position, velocity, and acceleration. We leverage velocity and
acceleration information to enhance position prediction through feature
injection and a self-supervised motion consistency mechanism. Our model
hierarchically injects velocity features into the position stream. Acceleration
features are injected into the velocity stream. This enables the model to
predict position, velocity, and acceleration jointly. From the predicted
position, we compute the corresponding pseudo velocity and acceleration, allowing
the model to learn from data-generated pseudo labels and thus achieve
self-supervised learning. We further design a motion consistency evaluation
strategy grounded in physical principles; it selects the most reasonable
predicted motion trend by comparing it with historical dynamics and uses this
trend to guide and constrain trajectory generation. We conduct experiments on
the ETH-UCY and Stanford Drone datasets, demonstrating that our method achieves
state-of-the-art performance on both datasets.
|
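The pseudo-label construction described in the record above amounts to finite differences over the predicted positions. A minimal sketch of such a consistency loss follows; the tensor shapes and the 0.4 s time step are illustrative, not the paper's exact configuration:

```python
# Derive pseudo velocity/acceleration from predicted positions by finite
# differences and penalize disagreement with the predicted velocity and
# acceleration streams.
import torch

def motion_consistency_loss(pred_pos, pred_vel, pred_acc, dt=0.4):
    """pred_pos: [B, T, 2]; pred_vel: [B, T-1, 2]; pred_acc: [B, T-2, 2]."""
    pseudo_vel = (pred_pos[:, 1:] - pred_pos[:, :-1]) / dt
    pseudo_acc = (pseudo_vel[:, 1:] - pseudo_vel[:, :-1]) / dt
    vel_loss = torch.mean((pred_vel - pseudo_vel) ** 2)
    acc_loss = torch.mean((pred_acc - pseudo_acc) ** 2)
    return vel_loss + acc_loss

B, T = 4, 12
loss = motion_consistency_loss(torch.randn(B, T, 2),
                               torch.randn(B, T - 1, 2),
                               torch.randn(B, T - 2, 2))
print(loss.item())
```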
2503.24282 | Jian Wang | Jian Wang, Xin Lan, Jizhe Zhou, Yuxin Tian, Jiancheng Lv | Style Quantization for Data-Efficient GAN Training | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Under limited data settings, GANs often struggle to navigate and effectively
exploit the input latent space. Consequently, images generated from adjacent
variables in a sparse input latent space may exhibit significant discrepancies
in realism, leading to suboptimal consistency regularization (CR) outcomes. To
address this, we propose \textit{SQ-GAN}, a novel approach that enhances CR by
introducing a style space quantization scheme. This method transforms the
sparse, continuous input latent space into a compact, structured discrete proxy
space, allowing each element to correspond to a specific real data point,
thereby improving CR performance. Instead of direct quantization, we first map
the input latent variables into a less entangled ``style'' space and apply
quantization using a learnable codebook. This enables each quantized code to
control distinct factors of variation. Additionally, we optimize the optimal
transport distance to align the codebook codes with features extracted from the
training data by a foundation model, embedding external knowledge into the
codebook and establishing a semantically rich vocabulary that properly
describes the training dataset. Extensive experiments demonstrate significant
improvements in both discriminator robustness and generation quality with our
method.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:28:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Jian",
""
],
[
"Lan",
"Xin",
""
],
[
"Zhou",
"Jizhe",
""
],
[
"Tian",
"Yuxin",
""
],
[
"Lv",
"Jiancheng",
""
]
] | TITLE: Style Quantization for Data-Efficient GAN Training
ABSTRACT: Under limited data settings, GANs often struggle to navigate and effectively
exploit the input latent space. Consequently, images generated from adjacent
variables in a sparse input latent space may exhibit significant discrepancies
in realism, leading to suboptimal consistency regularization (CR) outcomes. To
address this, we propose \textit{SQ-GAN}, a novel approach that enhances CR by
introducing a style space quantization scheme. This method transforms the
sparse, continuous input latent space into a compact, structured discrete proxy
space, allowing each element to correspond to a specific real data point,
thereby improving CR performance. Instead of direct quantization, we first map
the input latent variables into a less entangled ``style'' space and apply
quantization using a learnable codebook. This enables each quantized code to
control distinct factors of variation. Additionally, we optimize the optimal
transport distance to align the codebook codes with features extracted from the
training data by a foundation model, embedding external knowledge into the
codebook and establishing a semantically rich vocabulary that properly
describes the training dataset. Extensive experiments demonstrate significant
improvements in both discriminator robustness and generation quality with our
method.
|
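A minimal sketch of style-space codebook quantization with a straight-through gradient, the general mechanism the record above describes; the optimal-transport alignment of codes to foundation-model features is omitted, so this is not the full SQ-GAN pipeline:

```python
# Quantize style vectors against a learnable codebook with a straight-through
# estimator and a commitment loss.
import torch
import torch.nn.functional as F

def quantize_style(style, codebook):
    """style: [B, D]; codebook: [K, D]. Returns quantized styles, indices, loss."""
    dists = torch.cdist(style, codebook)          # [B, K] pairwise distances
    idx = dists.argmin(dim=1)                     # nearest code per style vector
    quantized = codebook[idx]
    # Straight-through: forward uses the code, backward passes gradients to `style`.
    quantized = style + (quantized - style).detach()
    commit_loss = F.mse_loss(style, codebook[idx].detach())
    return quantized, idx, commit_loss

codebook = torch.randn(64, 128, requires_grad=True)
style = torch.randn(8, 128, requires_grad=True)
q, idx, loss = quantize_style(style, codebook)
print(q.shape, idx[:4].tolist(), loss.item())
```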
2503.24293 | Hayley Ross | Hayley Ross, Kathryn Davidson, Najoung Kim | Is analogy enough to draw novel adjective-noun inferences? | 8 pages (16 pages with appendix). Submitted to SCiL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Recent work (Ross et al., 2025, 2024) has argued that the ability of humans
and LLMs respectively to generalize to novel adjective-noun combinations shows
that they each have access to a compositional mechanism to determine the
phrase's meaning and derive inferences. We study whether these inferences can
instead be derived by analogy to known inferences, without need for
composition. We investigate this by (1) building a model of analogical
reasoning using similarity over lexical items, and (2) asking human
participants to reason by analogy. While we find that this strategy works well
for a large proportion of the dataset of Ross et al. (2025), there are novel
combinations for which both humans and LLMs derive convergent inferences but
which are not well handled by analogy. We thus conclude that the mechanism
humans and LLMs use to generalize in these cases cannot be fully reduced to
analogy, and likely involves composition.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:41:16 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ross",
"Hayley",
""
],
[
"Davidson",
"Kathryn",
""
],
[
"Kim",
"Najoung",
""
]
] | TITLE: Is analogy enough to draw novel adjective-noun inferences?
ABSTRACT: Recent work (Ross et al., 2025, 2024) has argued that the ability of humans
and LLMs respectively to generalize to novel adjective-noun combinations shows
that they each have access to a compositional mechanism to determine the
phrase's meaning and derive inferences. We study whether these inferences can
instead be derived by analogy to known inferences, without need for
composition. We investigate this by (1) building a model of analogical
reasoning using similarity over lexical items, and (2) asking human
participants to reason by analogy. While we find that this strategy works well
for a large proportion of the dataset of Ross et al. (2025), there are novel
combinations for which both humans and LLMs derive convergent inferences but
which are not well handled by analogy. We thus conclude that the mechanism
humans and LLMs use to generalize in these cases cannot be fully reduced to
analogy, and likely involves composition.
|
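A bare-bones version of the analogical baseline described in the record above: copy the inference of the most similar known adjective-noun pair under embedding similarity. Random vectors and labels stand in for the study's lexical representations and judgments:

```python
# Predict the inference for a novel adjective-noun pair from its nearest known
# pair under cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
known_pairs = rng.normal(size=(50, 300))      # embeddings of known adjective-noun pairs
known_labels = rng.integers(0, 2, size=50)    # e.g., 1 = "inference holds"

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict_by_analogy(novel_vec):
    sims = np.array([cosine(novel_vec, k) for k in known_pairs])
    return known_labels[sims.argmax()]         # copy the most similar pair's inference

print(predict_by_analogy(rng.normal(size=300)))
```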
2503.24298 | Thinesh Thiyakesan Ponbagavathi | Thinesh Thiyakesan Ponbagavathi, Alina Roitberg | Order Matters: On Parameter-Efficient Image-to-Video Probing for
Recognizing Nearly Symmetric Actions | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We study parameter-efficient image-to-video probing for the unaddressed
challenge of recognizing nearly symmetric actions - visually similar actions
that unfold in opposite temporal order (e.g., opening vs. closing a bottle).
Existing probing mechanisms for image-pretrained models, such as DinoV2 and
CLIP, rely on attention mechanisms for temporal modeling but are inherently
permutation-invariant, leading to identical predictions regardless of frame
order. To address this, we introduce Self-attentive Temporal Embedding Probing
(STEP), a simple yet effective approach designed to enforce temporal
sensitivity in parameter-efficient image-to-video transfer. STEP enhances
self-attentive probing with three key modifications: (1) a learnable frame-wise
positional encoding, explicitly encoding temporal order; (2) a single global
CLS token, for sequence coherence; and (3) a simplified attention mechanism to
improve parameter efficiency. STEP outperforms existing image-to-video probing
mechanisms by 3-15% across four activity recognition benchmarks with only 1/3
of the learnable parameters. On two datasets, it surpasses all published
methods, including fully fine-tuned models. STEP shows a distinct advantage in
recognizing nearly symmetric actions, surpassing other probing mechanisms by
9-19%, and parameter-heavier PEFT-based transfer methods by 5-15%. Code and
models will be made publicly available.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:42:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ponbagavathi",
"Thinesh Thiyakesan",
""
],
[
"Roitberg",
"Alina",
""
]
] | TITLE: Order Matters: On Parameter-Efficient Image-to-Video Probing for
Recognizing Nearly Symmetric Actions
ABSTRACT: We study parameter-efficient image-to-video probing for the unaddressed
challenge of recognizing nearly symmetric actions - visually similar actions
that unfold in opposite temporal order (e.g., opening vs. closing a bottle).
Existing probing mechanisms for image-pretrained models, such as DinoV2 and
CLIP, rely on attention mechanisms for temporal modeling but are inherently
permutation-invariant, leading to identical predictions regardless of frame
order. To address this, we introduce Self-attentive Temporal Embedding Probing
(STEP), a simple yet effective approach designed to enforce temporal
sensitivity in parameter-efficient image-to-video transfer. STEP enhances
self-attentive probing with three key modifications: (1) a learnable frame-wise
positional encoding, explicitly encoding temporal order; (2) a single global
CLS token, for sequence coherence; and (3) a simplified attention mechanism to
improve parameter efficiency. STEP outperforms existing image-to-video probing
mechanisms by 3-15% across four activity recognition benchmarks with only 1/3
of the learnable parameters. On two datasets, it surpasses all published
methods, including fully fine-tuned models. STEP shows a distinct advantage in
recognizing nearly symmetric actions, surpassing other probing mechanisms by
9-19%, and parameter-heavier PEFT-based transfer methods by 5-15%. Code and
models will be made publicly available.
|
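A compact sketch of the three modifications listed in the record above, layered on top of frozen per-frame image features; the dimensions and classification head are illustrative, not the exact STEP design:

```python
# Learnable frame-wise positional encodings, a single global CLS token, and a
# lightweight self-attention block over frozen frame features.
import torch
import torch.nn as nn

class TemporalProbe(nn.Module):
    def __init__(self, dim=768, n_frames=16, n_heads=8, n_classes=100):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, n_frames, dim))   # frame-wise positions
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))          # single global CLS token
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, frame_feats):                  # [B, T, D] frozen image features
        x = frame_feats + self.pos                   # break permutation invariance
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
        x, _ = self.attn(x, x, x)
        return self.head(x[:, 0])                    # classify from the CLS token

logits = TemporalProbe()(torch.randn(2, 16, 768))
print(logits.shape)
```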
2503.24306 | Adam Schmidt | Adam Schmidt, Mert Asim Karaoglu, Soham Sinha, Mingang Jang, Ho-Gun
Ha, Kyungmin Jung, Kyeongmo Gu, Ihsan Ullah, Hyunki Lee, Jon\'a\v{s}
\v{S}er\'ych, Michal Neoral, Ji\v{r}\'i Matas, Rulin Zhou, Wenlong He, An
Wang, Hongliang Ren, Bruno Silva, Sandro Queir\'os, Est\^ev\~ao Lima, Jo\~ao
L. Vila\c{c}a, Shunsuke Kikuchi, Atsushi Kouno, Hiroki Matsuzaki, Tongtong
Li, Yulu Chen, Ling Li, Xiang Ma, Xiaojian Li, Mona Sheikh Zeinoddin, Xu
Wang, Zafer Tandogdu, Greg Shaw, Evangelos Mazomenos, Danail Stoyanov, Yuxin
Chen, Zijian Wu, Alexander Ladikos, Simon DiMaio, Septimiu E. Salcudean, Omid
Mohareri | Point Tracking in Surgery--The 2024 Surgical Tattoos in Infrared (STIR)
Challenge | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding tissue motion in surgery is crucial to enable applications in
downstream tasks such as segmentation, 3D reconstruction, virtual tissue
landmarking, autonomous probe-based scanning, and subtask autonomy. Labeled
data are essential to enabling algorithms in these downstream tasks since they
allow us to quantify and train algorithms. This paper introduces a point
tracking challenge to address this, wherein participants can submit their
algorithms for quantification. The submitted algorithms are evaluated using a
dataset named surgical tattoos in infrared (STIR), with the challenge aptly
named the STIR Challenge 2024. The STIR Challenge 2024 comprises two
quantitative components: accuracy and efficiency. The accuracy component tests
the accuracy of algorithms on in vivo and ex vivo sequences. The efficiency
component tests the latency of algorithm inference. The challenge was conducted
as a part of MICCAI EndoVis 2024. In this challenge, we had 8 total teams, with
4 teams submitting before and 4 submitting after challenge day. This paper
details the STIR Challenge 2024, which serves to move the field towards more
accurate and efficient algorithms for spatial understanding in surgery. In this
paper we summarize the design, submissions, and results from the challenge. The
challenge dataset is available here: https://zenodo.org/records/14803158 , and
the code for baseline models and metric calculation is available here:
https://github.com/athaddius/STIRMetrics
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:53:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Schmidt",
"Adam",
""
],
[
"Karaoglu",
"Mert Asim",
""
],
[
"Sinha",
"Soham",
""
],
[
"Jang",
"Mingang",
""
],
[
"Ha",
"Ho-Gun",
""
],
[
"Jung",
"Kyungmin",
""
],
[
"Gu",
"Kyeongmo",
""
],
[
"Ullah",
"Ihsan",
""
],
[
"Lee",
"Hyunki",
""
],
[
"Šerých",
"Jonáš",
""
],
[
"Neoral",
"Michal",
""
],
[
"Matas",
"Jiří",
""
],
[
"Zhou",
"Rulin",
""
],
[
"He",
"Wenlong",
""
],
[
"Wang",
"An",
""
],
[
"Ren",
"Hongliang",
""
],
[
"Silva",
"Bruno",
""
],
[
"Queirós",
"Sandro",
""
],
[
"Lima",
"Estêvão",
""
],
[
"Vilaça",
"João L.",
""
],
[
"Kikuchi",
"Shunsuke",
""
],
[
"Kouno",
"Atsushi",
""
],
[
"Matsuzaki",
"Hiroki",
""
],
[
"Li",
"Tongtong",
""
],
[
"Chen",
"Yulu",
""
],
[
"Li",
"Ling",
""
],
[
"Ma",
"Xiang",
""
],
[
"Li",
"Xiaojian",
""
],
[
"Zeinoddin",
"Mona Sheikh",
""
],
[
"Wang",
"Xu",
""
],
[
"Tandogdu",
"Zafer",
""
],
[
"Shaw",
"Greg",
""
],
[
"Mazomenos",
"Evangelos",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Chen",
"Yuxin",
""
],
[
"Wu",
"Zijian",
""
],
[
"Ladikos",
"Alexander",
""
],
[
"DiMaio",
"Simon",
""
],
[
"Salcudean",
"Septimiu E.",
""
],
[
"Mohareri",
"Omid",
""
]
] | TITLE: Point Tracking in Surgery--The 2024 Surgical Tattoos in Infrared (STIR)
Challenge
ABSTRACT: Understanding tissue motion in surgery is crucial to enable applications in
downstream tasks such as segmentation, 3D reconstruction, virtual tissue
landmarking, autonomous probe-based scanning, and subtask autonomy. Labeled
data are essential to enabling algorithms in these downstream tasks since they
allow us to quantify and train algorithms. This paper introduces a point
tracking challenge to address this, wherein participants can submit their
algorithms for quantification. The submitted algorithms are evaluated using a
dataset named surgical tattoos in infrared (STIR), with the challenge aptly
named the STIR Challenge 2024. The STIR Challenge 2024 comprises two
quantitative components: accuracy and efficiency. The accuracy component tests
the accuracy of algorithms on in vivo and ex vivo sequences. The efficiency
component tests the latency of algorithm inference. The challenge was conducted
as a part of MICCAI EndoVis 2024. In this challenge, we had 8 total teams, with
4 teams submitting before and 4 submitting after challenge day. This paper
details the STIR Challenge 2024, which serves to move the field towards more
accurate and efficient algorithms for spatial understanding in surgery. In this
paper we summarize the design, submissions, and results from the challenge. The
challenge dataset is available here: https://zenodo.org/records/14803158 , and
the code for baseline models and metric calculation is available here:
https://github.com/athaddius/STIRMetrics
|
2503.24307 | Arshia Kermani | Arshia Kermani, Veronica Perez-Rosas, Vangelis Metsis | A Systematic Evaluation of LLM Strategies for Mental Health Text
Analysis: Fine-tuning vs. Prompt Engineering vs. RAG | null | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study presents a systematic comparison of three approaches for the
analysis of mental health text using large language models (LLMs): prompt
engineering, retrieval augmented generation (RAG), and fine-tuning. Using LLaMA
3, we evaluate these approaches on emotion classification and mental health
condition detection tasks across two datasets. Fine-tuning achieves the highest
accuracy (91% for emotion classification, 80% for mental health conditions) but
requires substantial computational resources and large training sets, while
prompt engineering and RAG offer more flexible deployment with moderate
performance (40-68% accuracy). Our findings provide practical insights for
implementing LLM-based solutions in mental health applications, highlighting
the trade-offs between accuracy, computational requirements, and deployment
flexibility.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 16:54:04 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kermani",
"Arshia",
""
],
[
"Perez-Rosas",
"Veronica",
""
],
[
"Metsis",
"Vangelis",
""
]
] | TITLE: A Systematic Evaluation of LLM Strategies for Mental Health Text
Analysis: Fine-tuning vs. Prompt Engineering vs. RAG
ABSTRACT: This study presents a systematic comparison of three approaches for the
analysis of mental health text using large language models (LLMs): prompt
engineering, retrieval augmented generation (RAG), and fine-tuning. Using LLaMA
3, we evaluate these approaches on emotion classification and mental health
condition detection tasks across two datasets. Fine-tuning achieves the highest
accuracy (91% for emotion classification, 80% for mental health conditions) but
requires substantial computational resources and large training sets, while
prompt engineering and RAG offer more flexible deployment with moderate
performance (40-68% accuracy). Our findings provide practical insights for
implementing LLM-based solutions in mental health applications, highlighting
the trade-offs between accuracy, computational requirements, and deployment
flexibility.
|
2503.24345 | Fang Yan | Fang Yan, Jianfeng Wu, Jiawen Li, Wei Wang, Jiaxuan Lu, Wen Chen,
Zizhao Gao, Jianan Li, Hong Yan, Jiabo Ma, Minda Chen, Yang Lu, Qing Chen,
Yizhi Wang, Xitong Ling, Xuenian Wang, Zihan Wang, Qiang Huang, Shengyi Hua,
Mianxin Liu, Lei Ma, Tian Shen, Xiaofan Zhang, Yonghong He, Hao Chen,
Shaoting Zhang, Zhe Wang | PathOrchestra: A Comprehensive Foundation Model for Computational
Pathology with Over 100 Diverse Clinical-Grade Tasks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The complexity and variability inherent in high-resolution pathological
images present significant challenges in computational pathology. While
pathology foundation models leveraging AI have catalyzed transformative
advancements, their development demands large-scale datasets, considerable
storage capacity, and substantial computational resources. Furthermore,
ensuring their clinical applicability and generalizability requires rigorous
validation across a broad spectrum of clinical tasks. Here, we present
PathOrchestra, a versatile pathology foundation model trained via
self-supervised learning on a dataset comprising 300K pathological slides from
20 tissue and organ types across multiple centers. The model was rigorously
evaluated on 112 clinical tasks using a combination of 61 private and 51 public
datasets. These tasks encompass digital slide preprocessing, pan-cancer
classification, lesion identification, multi-cancer subtype classification,
biomarker assessment, gene expression prediction, and the generation of
structured reports. PathOrchestra demonstrated exceptional performance across
27,755 WSIs and 9,415,729 ROIs, achieving over 0.950 accuracy in 47 tasks,
including pan-cancer classification across various organs, lymphoma subtype
diagnosis, and bladder cancer screening. Notably, it is the first model to
generate structured reports for high-incidence colorectal cancer and
diagnostically complex lymphoma, areas that are infrequently addressed by
foundational models but hold immense clinical potential. Overall, PathOrchestra
exemplifies the feasibility and efficacy of a large-scale, self-supervised
pathology foundation model, validated across a broad range of clinical-grade
tasks. Its high accuracy and reduced reliance on extensive data annotation
underline its potential for clinical integration, offering a pathway toward
more efficient and high-quality medical services.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:28:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yan",
"Fang",
""
],
[
"Wu",
"Jianfeng",
""
],
[
"Li",
"Jiawen",
""
],
[
"Wang",
"Wei",
""
],
[
"Lu",
"Jiaxuan",
""
],
[
"Chen",
"Wen",
""
],
[
"Gao",
"Zizhao",
""
],
[
"Li",
"Jianan",
""
],
[
"Yan",
"Hong",
""
],
[
"Ma",
"Jiabo",
""
],
[
"Chen",
"Minda",
""
],
[
"Lu",
"Yang",
""
],
[
"Chen",
"Qing",
""
],
[
"Wang",
"Yizhi",
""
],
[
"Ling",
"Xitong",
""
],
[
"Wang",
"Xuenian",
""
],
[
"Wang",
"Zihan",
""
],
[
"Huang",
"Qiang",
""
],
[
"Hua",
"Shengyi",
""
],
[
"Liu",
"Mianxin",
""
],
[
"Ma",
"Lei",
""
],
[
"Shen",
"Tian",
""
],
[
"Zhang",
"Xiaofan",
""
],
[
"He",
"Yonghong",
""
],
[
"Chen",
"Hao",
""
],
[
"Zhang",
"Shaoting",
""
],
[
"Wang",
"Zhe",
""
]
] | TITLE: PathOrchestra: A Comprehensive Foundation Model for Computational
Pathology with Over 100 Diverse Clinical-Grade Tasks
ABSTRACT: The complexity and variability inherent in high-resolution pathological
images present significant challenges in computational pathology. While
pathology foundation models leveraging AI have catalyzed transformative
advancements, their development demands large-scale datasets, considerable
storage capacity, and substantial computational resources. Furthermore,
ensuring their clinical applicability and generalizability requires rigorous
validation across a broad spectrum of clinical tasks. Here, we present
PathOrchestra, a versatile pathology foundation model trained via
self-supervised learning on a dataset comprising 300K pathological slides from
20 tissue and organ types across multiple centers. The model was rigorously
evaluated on 112 clinical tasks using a combination of 61 private and 51 public
datasets. These tasks encompass digital slide preprocessing, pan-cancer
classification, lesion identification, multi-cancer subtype classification,
biomarker assessment, gene expression prediction, and the generation of
structured reports. PathOrchestra demonstrated exceptional performance across
27,755 WSIs and 9,415,729 ROIs, achieving over 0.950 accuracy in 47 tasks,
including pan-cancer classification across various organs, lymphoma subtype
diagnosis, and bladder cancer screening. Notably, it is the first model to
generate structured reports for high-incidence colorectal cancer and
diagnostically complex lymphoma, areas that are infrequently addressed by
foundational models but hold immense clinical potential. Overall, PathOrchestra
exemplifies the feasibility and efficacy of a large-scale, self-supervised
pathology foundation model, validated across a broad range of clinical-grade
tasks. Its high accuracy and reduced reliance on extensive data annotation
underline its potential for clinical integration, offering a pathway toward
more efficient and high-quality medical services.
|
2503.24357 | Shuaizheng Liu | Shuaizheng Liu, Jianqi Ma, Lingchen Sun, Xiangtao Kong, Lei Zhang | InstructRestore: Region-Customized Image Restoration with Human
Instructions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the significant progress in diffusion prior-based image restoration,
most existing methods apply uniform processing to the entire image, lacking the
capability to perform region-customized image restoration according to user
instructions. In this work, we propose a new framework, namely InstructRestore,
to perform region-adjustable image restoration following human instructions. To
achieve this, we first develop a data generation engine to produce training
triplets, each consisting of a high-quality image, the target region
description, and the corresponding region mask. With this engine and careful
data screening, we construct a comprehensive dataset comprising 536,945
triplets to support the training and evaluation of this task. We then examine
how to integrate the low-quality image features under the ControlNet
architecture to adjust the degree of image details enhancement. Consequently,
we develop a ControlNet-like model to identify the target region and allocate
different integration scales to the target and surrounding regions, enabling
region-customized image restoration that aligns with user instructions.
Experimental results demonstrate that our proposed InstructRestore approach
enables effective human-instructed image restoration, such as images with bokeh
effects and user-instructed local enhancement. Our work advances the
investigation of interactive image restoration and enhancement techniques.
Data, code, and models will be found at
https://github.com/shuaizhengliu/InstructRestore.git.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:36:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Shuaizheng",
""
],
[
"Ma",
"Jianqi",
""
],
[
"Sun",
"Lingchen",
""
],
[
"Kong",
"Xiangtao",
""
],
[
"Zhang",
"Lei",
""
]
] | TITLE: InstructRestore: Region-Customized Image Restoration with Human
Instructions
ABSTRACT: Despite the significant progress in diffusion prior-based image restoration,
most existing methods apply uniform processing to the entire image, lacking the
capability to perform region-customized image restoration according to user
instructions. In this work, we propose a new framework, namely InstructRestore,
to perform region-adjustable image restoration following human instructions. To
achieve this, we first develop a data generation engine to produce training
triplets, each consisting of a high-quality image, the target region
description, and the corresponding region mask. With this engine and careful
data screening, we construct a comprehensive dataset comprising 536,945
triplets to support the training and evaluation of this task. We then examine
how to integrate the low-quality image features under the ControlNet
architecture to adjust the degree of image details enhancement. Consequently,
we develop a ControlNet-like model to identify the target region and allocate
different integration scales to the target and surrounding regions, enabling
region-customized image restoration that aligns with user instructions.
Experimental results demonstrate that our proposed InstructRestore approach
enables effective human-instructed image restoration, such as images with bokeh
effects and user-instructed local enhancement. Our work advances the
investigation of interactive image restoration and enhancement techniques.
Data, code, and models will be found at
https://github.com/shuaizhengliu/InstructRestore.git.
|
2503.24358 | Hao Wang | Hao Wang, Ligong Han, Kai Xu, Akash Srivastava | SQuat: Subspace-orthogonal KV Cache Quantization | null | null | null | null | cs.LG cs.AI cs.CL cs.IT math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The key-value (KV) cache accelerates LLMs decoding by storing KV tensors from
previously generated tokens. It reduces redundant computation at the cost of
increased memory usage. To mitigate this overhead, existing approaches compress
KV tensors into lower-bit representations; however, quantization errors can
accumulate as more tokens are generated, potentially resulting in undesired
outputs. In this paper, we introduce SQuat (Subspace-orthogonal KV cache
quantization). It first constructs a subspace spanned by query tensors to
capture the most critical task-related information. During key tensor
quantization, it enforces that the difference between the (de)quantized and
original keys remains orthogonal to this subspace, minimizing the impact of
quantization errors on the attention mechanism's outputs. SQuat requires no
model fine-tuning, no additional calibration dataset for offline learning, and
is grounded in a theoretical framework we develop. Through numerical
experiments, we show that our method reduces peak memory by 2.17x to 2.82x,
improves throughput by 2.45x to 3.60x, and achieves more favorable benchmark
scores than existing KV cache quantization algorithms.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:37:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Hao",
""
],
[
"Han",
"Ligong",
""
],
[
"Xu",
"Kai",
""
],
[
"Srivastava",
"Akash",
""
]
] | TITLE: SQuat: Subspace-orthogonal KV Cache Quantization
ABSTRACT: The key-value (KV) cache accelerates LLMs decoding by storing KV tensors from
previously generated tokens. It reduces redundant computation at the cost of
increased memory usage. To mitigate this overhead, existing approaches compress
KV tensors into lower-bit representations; however, quantization errors can
accumulate as more tokens are generated, potentially resulting in undesired
outputs. In this paper, we introduce SQuat (Subspace-orthogonal KV cache
quantization). It first constructs a subspace spanned by query tensors to
capture the most critical task-related information. During key tensor
quantization, it enforces that the difference between the (de)quantized and
original keys remains orthogonal to this subspace, minimizing the impact of
quantization errors on the attention mechanism's outputs. SQuat requires no
model fine-tuning, no additional calibration dataset for offline learning, and
is grounded in a theoretical framework we develop. Through numerical
experiments, we show that our method reduces peak memory by 2.17x to 2.82x,
improves throughput by 2.45x to 3.60x, and achieves more favorable benchmark
scores than existing KV cache quantization algorithms.
|
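A loose numerical illustration of the orthogonality constraint described in the record above: after naively quantizing a key vector, remove the component of its error that lies in a query-spanned subspace. In SQuat the constraint shapes the quantization itself rather than being applied as a post-hoc full-precision correction, so this only sketches the geometry:

```python
# Make the key quantization error orthogonal to a subspace spanned by queries.
import torch

def quantize_dequantize(x, bits=2):
    """Uniform per-tensor fake quantization, just to create a realistic error."""
    scale = x.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(x / scale) * scale

d, r = 64, 8
queries = torch.randn(r, d)                      # recent query tensors
basis, _ = torch.linalg.qr(queries.T)            # orthonormal basis of their span, [d, r]
proj = basis @ basis.T                           # projector onto the query subspace

key = torch.randn(d)
key_q = quantize_dequantize(key)
key_corrected = key_q + proj @ (key - key_q)     # remove the in-subspace error component

residual = key - key_corrected
print("in-subspace error after correction:", (proj @ residual).norm().item())
print("total error before vs after:", (key - key_q).norm().item(), residual.norm().item())
```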
2503.24368 | Lin Zhao | Xiaoran Zhang, Eric Z. Chen, Lin Zhao, Xiao Chen, Yikang Liu, Boris
Maihe, James S. Duncan, Terrence Chen, and Shanhui Sun | Adapting Vision Foundation Models for Real-time Ultrasound Image
Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach that adapts hierarchical vision foundation models
for real-time ultrasound image segmentation. Existing ultrasound segmentation
methods often struggle with adaptability to new tasks, relying on costly manual
annotations, while real-time approaches generally fail to match
state-of-the-art performance. To overcome these limitations, we introduce an
adaptive framework that leverages the vision foundation model Hiera to extract
multi-scale features, interleaved with DINOv2 representations to enhance visual
expressiveness. These enriched features are then decoded to produce precise and
robust segmentation. We conduct extensive evaluations on six public datasets
and one in-house dataset, covering both cardiac and thyroid ultrasound
segmentation. Experiments show that our approach outperforms state-of-the-art
methods across multiple datasets and excels with limited supervision,
surpassing nnUNet by over 20\% on average in the 1\% and 10\% data settings.
Our method achieves $\sim$77 FPS inference speed with TensorRT on a single GPU,
enabling real-time clinical applications.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:47:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Xiaoran",
""
],
[
"Chen",
"Eric Z.",
""
],
[
"Zhao",
"Lin",
""
],
[
"Chen",
"Xiao",
""
],
[
"Liu",
"Yikang",
""
],
[
"Maihe",
"Boris",
""
],
[
"Duncan",
"James S.",
""
],
[
"Chen",
"Terrence",
""
],
[
"Sun",
"Shanhui",
""
]
] | TITLE: Adapting Vision Foundation Models for Real-time Ultrasound Image
Segmentation
ABSTRACT: We propose a novel approach that adapts hierarchical vision foundation models
for real-time ultrasound image segmentation. Existing ultrasound segmentation
methods often struggle with adaptability to new tasks, relying on costly manual
annotations, while real-time approaches generally fail to match
state-of-the-art performance. To overcome these limitations, we introduce an
adaptive framework that leverages the vision foundation model Hiera to extract
multi-scale features, interleaved with DINOv2 representations to enhance visual
expressiveness. These enriched features are then decoded to produce precise and
robust segmentation. We conduct extensive evaluations on six public datasets
and one in-house dataset, covering both cardiac and thyroid ultrasound
segmentation. Experiments show that our approach outperforms state-of-the-art
methods across multiple datasets and excels with limited supervision,
surpassing nnUNet by over 20\% on average in the 1\% and 10\% data settings.
Our method achieves $\sim$77 FPS inference speed with TensorRT on a single GPU,
enabling real-time clinical applications.
|
2503.24374 | Vincent Chen | Maxim V. Shugaev, Vincent Chen, Maxim Karrenbach, Kyle Ashley, Bridget
Kennedy, Naresh P. Cuntoor | ERUPT: Efficient Rendering with Unposed Patch Transformer | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | This work addresses the problem of novel view synthesis in diverse scenes
from small collections of RGB images. We propose ERUPT (Efficient Rendering
with Unposed Patch Transformer), a state-of-the-art scene reconstruction model
capable of efficient scene rendering using unposed imagery. We introduce
patch-based querying, in contrast to existing pixel-based queries, to reduce
the compute required to render a target view. This makes our model highly
efficient both during training and at inference, capable of rendering at 600
fps on commercial hardware. Notably, our model is designed to use a learned
latent camera pose which allows for training using unposed targets in datasets
with sparse or inaccurate ground truth camera pose. We show that our approach
can generalize on large real-world data and introduce a new benchmark dataset
(MSVS-1M) for latent view synthesis using street-view imagery collected from
Mapillary. In contrast to NeRF and Gaussian Splatting, which require dense
imagery and precise metadata, ERUPT can render novel views of arbitrary scenes
with as few as five unposed input images. ERUPT achieves better rendered image
quality than current state-of-the-art methods for unposed image synthesis
tasks, reduces labeled data requirements by ~95\% and decreases computational
requirements by an order of magnitude, providing efficient novel view synthesis
for diverse real-world scenes.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:53:05 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shugaev",
"Maxim V.",
""
],
[
"Chen",
"Vincent",
""
],
[
"Karrenbach",
"Maxim",
""
],
[
"Ashley",
"Kyle",
""
],
[
"Kennedy",
"Bridget",
""
],
[
"Cuntoor",
"Naresh P.",
""
]
] | TITLE: ERUPT: Efficient Rendering with Unposed Patch Transformer
ABSTRACT: This work addresses the problem of novel view synthesis in diverse scenes
from small collections of RGB images. We propose ERUPT (Efficient Rendering
with Unposed Patch Transformer), a state-of-the-art scene reconstruction model
capable of efficient scene rendering using unposed imagery. We introduce
patch-based querying, in contrast to existing pixel-based queries, to reduce
the compute required to render a target view. This makes our model highly
efficient both during training and at inference, capable of rendering at 600
fps on commercial hardware. Notably, our model is designed to use a learned
latent camera pose which allows for training using unposed targets in datasets
with sparse or inaccurate ground truth camera pose. We show that our approach
can generalize on large real-world data and introduce a new benchmark dataset
(MSVS-1M) for latent view synthesis using street-view imagery collected from
Mapillary. In contrast to NeRF and Gaussian Splatting, which require dense
imagery and precise metadata, ERUPT can render novel views of arbitrary scenes
with as few as five unposed input images. ERUPT achieves better rendered image
quality than current state-of-the-art methods for unposed image synthesis
tasks, reduces labeled data requirements by ~95\% and decreases computational
requirements by an order of magnitude, providing efficient novel view synthesis
for diverse real-world scenes.
|
2503.24376 | Yi Chen | Yi Chen, Yuying Ge, Rui Wang, Yixiao Ge, Lu Qiu, Ying Shan, Xihui Liu | Exploring the Effect of Reinforcement Learning on Video Understanding:
Insights from SEED-Bench-R1 | Technical Report (In Progress); Code released at:
https://github.com/TencentARC/SEED-Bench-R1 | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Chain of Thought (COT) generation have significantly
improved the reasoning capabilities of Large Language Models (LLMs), with
reinforcement learning (RL) emerging as an effective post-training approach.
Multimodal Large Language Models (MLLMs) inherit this reasoning potential but
remain underexplored in tasks requiring both perception and logical reasoning.
To address this, we introduce SEED-Bench-R1, a benchmark designed to
systematically evaluate post-training methods for MLLMs in video understanding.
It includes intricate real-world videos and complex everyday planning tasks in
the format of multiple-choice questions, requiring sophisticated perception and
reasoning. SEED-Bench-R1 assesses generalization through a three-level
hierarchy: in-distribution, cross-environment, and cross-environment-task
scenarios, equipped with a large-scale training dataset with easily verifiable
ground-truth answers. Using Qwen2-VL-Instruct-7B as a base model, we compare RL
with supervised fine-tuning (SFT), demonstrating RL's data efficiency and
superior performance on both in-distribution and out-of-distribution tasks,
even outperforming SFT on general video understanding benchmarks like
LongVideoBench. Our detailed analysis reveals that RL enhances visual
perception but often produces less logically coherent reasoning chains. We
identify key limitations such as inconsistent reasoning and overlooked visual
cues, and suggest future improvements in base model reasoning, reward modeling,
and RL robustness against noisy signals.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:55:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Yi",
""
],
[
"Ge",
"Yuying",
""
],
[
"Wang",
"Rui",
""
],
[
"Ge",
"Yixiao",
""
],
[
"Qiu",
"Lu",
""
],
[
"Shan",
"Ying",
""
],
[
"Liu",
"Xihui",
""
]
] | TITLE: Exploring the Effect of Reinforcement Learning on Video Understanding:
Insights from SEED-Bench-R1
ABSTRACT: Recent advancements in Chain of Thought (COT) generation have significantly
improved the reasoning capabilities of Large Language Models (LLMs), with
reinforcement learning (RL) emerging as an effective post-training approach.
Multimodal Large Language Models (MLLMs) inherit this reasoning potential but
remain underexplored in tasks requiring both perception and logical reasoning.
To address this, we introduce SEED-Bench-R1, a benchmark designed to
systematically evaluate post-training methods for MLLMs in video understanding.
It includes intricate real-world videos and complex everyday planning tasks in
the format of multiple-choice questions, requiring sophisticated perception and
reasoning. SEED-Bench-R1 assesses generalization through a three-level
hierarchy: in-distribution, cross-environment, and cross-environment-task
scenarios, equipped with a large-scale training dataset with easily verifiable
ground-truth answers. Using Qwen2-VL-Instruct-7B as a base model, we compare RL
with supervised fine-tuning (SFT), demonstrating RL's data efficiency and
superior performance on both in-distribution and out-of-distribution tasks,
even outperforming SFT on general video understanding benchmarks like
LongVideoBench. Our detailed analysis reveals that RL enhances visual
perception but often produces less logically coherent reasoning chains. We
identify key limitations such as inconsistent reasoning and overlooked visual
cues, and suggest future improvements in base model reasoning, reward modeling,
and RL robustness against noisy signals.
|