id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.17402 | Badih Ghattas | Oscar L. Cruz-Gonz\'alez, Val\'erie Deplano, Badih Ghattas | Enhanced Vascular Flow Simulations in Aortic Aneurysm via
Physics-Informed Neural Networks and Deep Operator Networks | null | null | null | null | cs.LG stat.CO stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | Due to the limited accuracy of 4D Magnetic Resonance Imaging (MRI) in
identifying hemodynamics in cardiovascular diseases, the challenges in
obtaining patient-specific flow boundary conditions, and the computationally
demanding and time-consuming nature of Computational Fluid Dynamics (CFD)
simulations, it is crucial to explore new data assimilation algorithms that
offer possible alternatives that overcome these limitations. In the present work, we study
Physics-Informed Neural Networks (PINNs), Deep Operator Networks (DeepONets),
and their Physics-Informed extensions (PI-DeepONets) in predicting vascular
flow simulations in the context of a 3D Abdominal Aortic Aneurysm (AAA)
idealized model. PINN is a technique that combines deep neural networks with
the fundamental principles of physics, incorporating the physics laws, which
are given as partial differential equations, directly into loss functions used
during the training process. On the other hand, DeepONet is designed to learn
nonlinear operators from data and is particularly useful in studying parametric
partial differential equations (PDEs), e.g., families of PDEs with different
source terms, boundary conditions, or initial conditions. Here, we adapt the
approaches to address the particular use case of AAA by integrating the 3D
Navier-Stokes equations (NSE) as the physical laws governing fluid dynamics. In
addition, we follow best practices to enhance the capabilities of the models by
effectively capturing the underlying physics of the problem under study. The
advantages and limitations of each approach are highlighted through a series of
relevant application cases. We validate our results by comparing them with CFD
simulations for benchmark datasets, demonstrating good agreement and
emphasizing those cases where improvements in computational efficiency are
observed.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 22:38:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cruz-González",
"Oscar L.",
""
],
[
"Deplano",
"Valérie",
""
],
[
"Ghattas",
"Badih",
""
]
] | TITLE: Enhanced Vascular Flow Simulations in Aortic Aneurysm via
Physics-Informed Neural Networks and Deep Operator Networks
ABSTRACT: Due to the limited accuracy of 4D Magnetic Resonance Imaging (MRI) in
identifying hemodynamics in cardiovascular diseases, the challenges in
obtaining patient-specific flow boundary conditions, and the computationally
demanding and time-consuming nature of Computational Fluid Dynamics (CFD)
simulations, it is crucial to explore new data assimilation algorithms that
offer possible alternatives that overcome these limitations. In the present work, we study
Physics-Informed Neural Networks (PINNs), Deep Operator Networks (DeepONets),
and their Physics-Informed extensions (PI-DeepONets) in predicting vascular
flow simulations in the context of a 3D Abdominal Aortic Aneurysm (AAA)
idealized model. PINN is a technique that combines deep neural networks with
the fundamental principles of physics, incorporating the physics laws, which
are given as partial differential equations, directly into loss functions used
during the training process. On the other hand, DeepONet is designed to learn
nonlinear operators from data and is particularly useful in studying parametric
partial differential equations (PDEs), e.g., families of PDEs with different
source terms, boundary conditions, or initial conditions. Here, we adapt the
approaches to address the particular use case of AAA by integrating the 3D
Navier-Stokes equations (NSE) as the physical laws governing fluid dynamics. In
addition, we follow best practices to enhance the capabilities of the models by
effectively capturing the underlying physics of the problem under study. The
advantages and limitations of each approach are highlighted through a series of
relevant application cases. We validate our results by comparing them with CFD
simulations for benchmark datasets, demonstrating good agreement and
emphasizing those cases where improvements in computational efficiency are
observed.
|
2503.17403 | Azim Akhtarshenas | Azim Akhtarshenas, Afshin Dini, Navid Ayoobi | ChatGPT or A Silent Everywhere Helper: A Survey of Large Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) have revolutionized Natural Language Processing
(NLP), with Chat Generative Pre-trained Transformer (ChatGPT) standing out as a
notable example due to its advanced capabilities and
widespread applications. This survey provides a comprehensive analysis of
ChatGPT, exploring its architecture, training processes, and functionalities.
We examine its integration into various domains across industries such as
customer service, education, healthcare, and entertainment. A comparative
analysis with other LLMs highlights ChatGPT's unique features and performance
metrics. Regarding benchmarks, the paper examines ChatGPT's comparative
performance against other LLMs and discusses potential risks such as
misinformation, bias, and data privacy concerns. Additionally, we offer a
number of figures and tables that outline the backdrop of the discussion, the
main ideas of the article, the numerous LLM models, a thorough list of datasets
used for pre-training, fine-tuning, and evaluation, as well as particular LLM
applications with pertinent references. Finally, we identify future research
directions and technological advancements, underscoring the evolving landscape
of LLMs and their profound impact on Artificial Intelligence (AI) and society.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 22:55:08 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Akhtarshenas",
"Azim",
""
],
[
"Dini",
"Afshin",
""
],
[
"Ayoobi",
"Navid",
""
]
] | TITLE: ChatGPT or A Silent Everywhere Helper: A Survey of Large Language Models
ABSTRACT: Large Language Models (LLMs) have revolutionized Natural Language
Processing (NLP), with Chat Generative Pre-trained Transformer (ChatGPT)
standing out as a notable example due to its advanced capabilities and
widespread applications. This survey provides a comprehensive analysis of
ChatGPT, exploring its architecture, training processes, and functionalities.
We examine its integration into various domains across industries such as
customer service, education, healthcare, and entertainment. A comparative
analysis with other LLMs highlights ChatGPT's unique features and performance
metrics. Regarding benchmarks, the paper examines ChatGPT's comparative
performance against other LLMs and discusses potential risks such as
misinformation, bias, and data privacy concerns. Additionally, we offer a
number of figures and tables that outline the backdrop of the discussion, the
main ideas of the article, the numerous LLM models, a thorough list of datasets
used for pre-training, fine-tuning, and evaluation, as well as particular LLM
applications with pertinent references. Finally, we identify future research
directions and technological advancements, underscoring the evolving landscape
of LLMs and their profound impact on Artificial Intelligence (AI) and society.
|
2503.17406 | Haochen Zhang | Haochen Zhang, Nader Zantout, Pujith Kachana, Ji Zhang, Wenshan Wang | IRef-VLA: A Benchmark for Interactive Referential Grounding with
Imperfect Language in 3D Scenes | Accepted to ICRA 2025. Code available at
https://github.com/HaochenZ11/IRef-VLA. arXiv admin note: text overlap with
arXiv:2411.03540 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the recent rise of large language models, vision-language models, and
other general foundation models, there is growing potential for multimodal,
multi-task robotics that can operate in diverse environments given natural
language input. One such application is indoor navigation using natural
language instructions. However, despite recent progress, this problem remains
challenging due to the 3D spatial reasoning and semantic understanding
required. Additionally, the language used may be imperfect or misaligned with
the scene, further complicating the task. To address this challenge, we curate
a benchmark dataset, IRef-VLA, for Interactive Referential Vision and
Language-guided Action in 3D Scenes with imperfect references. IRef-VLA is the
largest real-world dataset for the referential grounding task, consisting of
over 11.5K scanned 3D rooms from existing datasets, 7.6M heuristically
generated semantic relations, and 4.7M referential statements. Our dataset also
contains semantic object and room annotations, scene graphs, navigable free
space annotations, and is augmented with statements where the language has
imperfections or ambiguities. We verify the generalizability of our dataset by
evaluating with state-of-the-art models to obtain a performance baseline and
also develop a graph-search baseline to demonstrate the performance bound and
generation of alternatives using scene-graph knowledge. With this benchmark, we
aim to provide a resource for 3D scene understanding that aids the development
of robust, interactive navigation systems. The dataset and all source code are
publicly released at https://github.com/HaochenZ11/IRef-VLA.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:16:10 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Haochen",
""
],
[
"Zantout",
"Nader",
""
],
[
"Kachana",
"Pujith",
""
],
[
"Zhang",
"Ji",
""
],
[
"Wang",
"Wenshan",
""
]
] | TITLE: IRef-VLA: A Benchmark for Interactive Referential Grounding with
Imperfect Language in 3D Scenes
ABSTRACT: With the recent rise of large language models, vision-language models, and
other general foundation models, there is growing potential for multimodal,
multi-task robotics that can operate in diverse environments given natural
language input. One such application is indoor navigation using natural
language instructions. However, despite recent progress, this problem remains
challenging due to the 3D spatial reasoning and semantic understanding
required. Additionally, the language used may be imperfect or misaligned with
the scene, further complicating the task. To address this challenge, we curate
a benchmark dataset, IRef-VLA, for Interactive Referential Vision and
Language-guided Action in 3D Scenes with imperfect references. IRef-VLA is the
largest real-world dataset for the referential grounding task, consisting of
over 11.5K scanned 3D rooms from existing datasets, 7.6M heuristically
generated semantic relations, and 4.7M referential statements. Our dataset also
contains semantic object and room annotations, scene graphs, navigable free
space annotations, and is augmented with statements where the language has
imperfections or ambiguities. We verify the generalizability of our dataset by
evaluating with state-of-the-art models to obtain a performance baseline and
also develop a graph-search baseline to demonstrate the performance bound and
generation of alternatives using scene-graph knowledge. With this benchmark, we
aim to provide a resource for 3D scene understanding that aids the development
of robust, interactive navigation systems. The dataset and all source code are
publicly released at https://github.com/HaochenZ11/IRef-VLA.
|
2503.17408 | Pablo Rivas | Maisha Binte Rashid, Pablo Rivas | Leveraging OpenFlamingo for Multimodal Embedding Analysis of C2C Car
Parts Data | The 26th International Conference on Artificial Intelligence
(ICAI'24: July 22-25, 2024; Las Vegas, USA) | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim to investigate the capabilities of multimodal machine
learning models, particularly the OpenFlamingo model, in processing a
large-scale dataset of consumer-to-consumer (C2C) online posts related to car
parts. We have collected data from two platforms, OfferUp and Craigslist,
resulting in a dataset of over 1.2 million posts with their corresponding
images. The OpenFlamingo model was used to extract embeddings for the text and
image of each post. We used $k$-means clustering on the joint embeddings to
identify underlying patterns and commonalities among the posts. We have found
that most clusters contain a pattern, but some clusters showed no internal
patterns. The results provide insight into the fact that OpenFlamingo can be
used for finding patterns in large datasets but needs some modification in the
architecture according to the dataset.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 19:35:15 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Rashid",
"Maisha Binte",
""
],
[
"Rivas",
"Pablo",
""
]
] | TITLE: Leveraging OpenFlamingo for Multimodal Embedding Analysis of C2C Car
Parts Data
ABSTRACT: In this paper, we aim to investigate the capabilities of multimodal machine
learning models, particularly the OpenFlamingo model, in processing a
large-scale dataset of consumer-to-consumer (C2C) online posts related to car
parts. We have collected data from two platforms, OfferUp and Craigslist,
resulting in a dataset of over 1.2 million posts with their corresponding
images. The OpenFlamingo model was used to extract embeddings for the text and
image of each post. We used $k$-means clustering on the joint embeddings to
identify underlying patterns and commonalities among the posts. We have found
that most clusters contain a pattern, but some clusters showed no internal
patterns. The results provide insight into the fact that OpenFlamingo can be
used for finding patterns in large datasets but needs some modification in the
architecture according to the dataset.
|
2503.17410 | Josef Koumar | Josef Koumar, Timotej Smole\v{n}, Kamil Je\v{r}\'abek, Tom\'a\v{s}
\v{C}ejka | Comparative Analysis of Deep Learning Models for Real-World ISP Network
Traffic Forecasting | null | null | null | null | cs.LG cs.AI cs.NI | http://creativecommons.org/licenses/by/4.0/ | Accurate network traffic forecasting is essential for Internet Service
Providers (ISPs) to optimize resources, enhance user experience, and mitigate
anomalies. This study evaluates state-of-the-art deep learning models on
CESNET-TimeSeries24, a recently published, comprehensive real-world network
traffic dataset from the ISP network CESNET3 spanning multivariate time series
over 40 weeks. Our findings highlight the balance between prediction accuracy
and computational efficiency across different levels of network granularity.
Additionally, this work establishes a reproducible methodology that facilitates
direct comparison of existing approaches, explores their strengths and
weaknesses, and provides a benchmark for future studies using this dataset.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 21:04:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Koumar",
"Josef",
""
],
[
"Smoleň",
"Timotej",
""
],
[
"Jeřábek",
"Kamil",
""
],
[
"Čejka",
"Tomáš",
""
]
] | TITLE: Comparative Analysis of Deep Learning Models for Real-World ISP Network
Traffic Forecasting
ABSTRACT: Accurate network traffic forecasting is essential for Internet Service
Providers (ISPs) to optimize resources, enhance user experience, and mitigate
anomalies. This study evaluates state-of-the-art deep learning models on
CESNET-TimeSeries24, a recently published, comprehensive real-world network
traffic dataset from the ISP network CESNET3 spanning multivariate time series
over 40 weeks. Our findings highlight the balance between prediction accuracy
and computational efficiency across different levels of network granularity.
Additionally, this work establishes a reproducible methodology that facilitates
direct comparison of existing approaches, explores their strengths and
weaknesses, and provides a benchmark for future studies using this dataset.
|
2503.17416 | Corina P\u{a}s\u{a}reanu | Boyue Caroline Hu, Divya Gopinath, Corina S. Pasareanu, Nina
Narodytska, Ravi Mangal, Susmit Jha | Debugging and Runtime Analysis of Neural Networks with VLMs (A Case
Study) | CAIN 2025 (4th International Conference on AI Engineering -- Software
Engineering for AI) | null | null | null | cs.SE cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Debugging of Deep Neural Networks (DNNs), particularly vision models, is very
challenging due to the complex and opaque decision-making processes in these
networks. In this paper, we explore multi-modal Vision-Language Models (VLMs),
such as CLIP, to automatically interpret the opaque representation space of
vision models using natural language. This in turn, enables a semantic analysis
of model behavior using human-understandable concepts, without requiring costly
human annotations. Key to our approach is the notion of a semantic heatmap, which
succinctly captures the statistical properties of DNNs in terms of the concepts
discovered with the VLM and that are computed off-line using a held-out data
set. We show the utility of semantic heatmaps for fault localization -- an
essential step in debugging -- in vision models. Our proposed technique helps
localize the fault in the network (encoder vs head) and also highlights the
responsible high-level concepts, by leveraging novel differential heatmaps,
which summarize the semantic differences between the correct and incorrect
behaviour of the analyzed DNN. We further propose a lightweight runtime
analysis to detect and filter out defects at runtime, thus improving the
reliability of the analyzed DNNs. The runtime analysis works by measuring and
comparing the similarity between the heatmap computed for a new (unseen) input
and the heatmaps computed a-priori for correct vs incorrect DNN behavior. We
consider two types of defects: misclassifications and vulnerabilities to
adversarial attacks. We demonstrate the debugging and runtime analysis on a
case study involving a complex ResNet-based classifier trained on the RIVAL10
dataset.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 01:12:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hu",
"Boyue Caroline",
""
],
[
"Gopinath",
"Divya",
""
],
[
"Pasareanu",
"Corina S.",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Mangal",
"Ravi",
""
],
[
"Jha",
"Susmit",
""
]
] | TITLE: Debugging and Runtime Analysis of Neural Networks with VLMs (A Case
Study)
ABSTRACT: Debugging of Deep Neural Networks (DNNs), particularly vision models, is very
challenging due to the complex and opaque decision-making processes in these
networks. In this paper, we explore multi-modal Vision-Language Models (VLMs),
such as CLIP, to automatically interpret the opaque representation space of
vision models using natural language. This in turn, enables a semantic analysis
of model behavior using human-understandable concepts, without requiring costly
human annotations. Key to our approach is the notion of a semantic heatmap, which
succinctly captures the statistical properties of DNNs in terms of the concepts
discovered with the VLM and that are computed off-line using a held-out data
set. We show the utility of semantic heatmaps for fault localization -- an
essential step in debugging -- in vision models. Our proposed technique helps
localize the fault in the network (encoder vs head) and also highlights the
responsible high-level concepts, by leveraging novel differential heatmaps,
which summarize the semantic differences between the correct and incorrect
behaviour of the analyzed DNN. We further propose a lightweight runtime
analysis to detect and filter out defects at runtime, thus improving the
reliability of the analyzed DNNs. The runtime analysis works by measuring and
comparing the similarity between the heatmap computed for a new (unseen) input
and the heatmaps computed a-priori for correct vs incorrect DNN behavior. We
consider two types of defects: misclassifications and vulnerabilities to
adversarial attacks. We demonstrate the debugging and runtime analysis on a
case study involving a complex ResNet-based classifier trained on the RIVAL10
dataset.
|
2503.17417 | JungKyoo Shin | Jungkyoo Shin, Bumsoo Kim, Eunwoo Kim | Generative Modeling of Class Probability for Multi-Modal Representation
Learning | Accepted to CVPR2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-modal understanding plays a crucial role in artificial intelligence by
enabling models to jointly interpret inputs from different modalities. However,
conventional approaches such as contrastive learning often struggle with
modality discrepancies, leading to potential misalignments. In this paper, we
propose a novel class anchor alignment approach that leverages class
probability distributions for multi-modal representation learning. Our method,
Class-anchor-ALigned generative Modeling (CALM), encodes class anchors as
prompts to generate and align class probability distributions for each
modality, enabling more effective alignment. Furthermore, we introduce a
cross-modal probabilistic variational autoencoder to model uncertainty in the
alignment, enhancing the ability to capture deeper relationships between
modalities and data variations. Extensive experiments on four benchmark
datasets demonstrate that our approach significantly outperforms
state-of-the-art methods, especially in out-of-domain evaluations. This
highlights its superior generalization capabilities in multi-modal
representation learning.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 01:17:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Shin",
"Jungkyoo",
""
],
[
"Kim",
"Bumsoo",
""
],
[
"Kim",
"Eunwoo",
""
]
] | TITLE: Generative Modeling of Class Probability for Multi-Modal Representation
Learning
ABSTRACT: Multi-modal understanding plays a crucial role in artificial intelligence by
enabling models to jointly interpret inputs from different modalities. However,
conventional approaches such as contrastive learning often struggle with
modality discrepancies, leading to potential misalignments. In this paper, we
propose a novel class anchor alignment approach that leverages class
probability distributions for multi-modal representation learning. Our method,
Class-anchor-ALigned generative Modeling (CALM), encodes class anchors as
prompts to generate and align class probability distributions for each
modality, enabling more effective alignment. Furthermore, we introduce a
cross-modal probabilistic variational autoencoder to model uncertainty in the
alignment, enhancing the ability to capture deeper relationships between
modalities and data variations. Extensive experiments on four benchmark
datasets demonstrate that our approach significantly outperforms
state-of-the-art methods, especially in out-of-domain evaluations. This
highlights its superior generalization capabilities in multi-modal
representation learning.
|
2503.17427 | Michael White | Michael D. White, Michael D. Atkinson, Adam J. Plowman and Pratheek
Shanthraj | 3D variational autoencoder for fingerprinting microstructure volume
elements | 24 pages, 9 figures | null | null | null | cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by/4.0/ | Microstructure quantification is an important step towards establishing
structure-property relationships in materials. Machine learning-based image
processing methods have been shown to outperform conventional image processing
techniques and are increasingly applied to microstructure quantification tasks.
In this work, we present a 3D variational autoencoder (VAE) for encoding
microstructure volume elements (VEs) comprising voxelated crystallographic
orientation data. Crystal symmetries in the orientation space are accounted for
by mapping to the crystallographic fundamental zone as a preprocessing step,
which allows for a continuous loss function to be used and improves the
training convergence rate. The VAE is then used to encode a training set of VEs
with an equiaxed polycrystalline microstructure with random texture. Accurate
reconstructions are achieved with a relative average misorientation error of
$9\times10^{-3}$ on the test dataset, for a continuous latent space with dimension 256.
We show that the model generalises well to microstructures with textures, grain
sizes and aspect ratios outside the training distribution. Structure-property
relationships are explored through using the training set of VEs as initial
configurations in various crystal plasticity (CP) simulations. Microstructural
fingerprints extracted from the VAE, which parameterise the VEs in a
low-dimensional latent space, are stored alongside the volume-averaged stress
response, at each strain increment, to uniaxial tensile deformation from CP
simulations. This is then used to train a fully connected neural network
mapping the input fingerprint to the resulting stress response, which acts as a
surrogate model for the CP simulation. The fingerprint-based surrogate model is
shown to accurately predict the microstructural dependence in the CP stress
response, with a relative mean-squared error of $8.9\times10^{-4}$ on unseen test data.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:17:10 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"White",
"Michael D.",
""
],
[
"Atkinson",
"Michael D.",
""
],
[
"Plowman",
"Adam J.",
""
],
[
"Shanthraj",
"Pratheek",
""
]
] | TITLE: 3D variational autoencoder for fingerprinting microstructure volume
elements
ABSTRACT: Microstructure quantification is an important step towards establishing
structure-property relationships in materials. Machine learning-based image
processing methods have been shown to outperform conventional image processing
techniques and are increasingly applied to microstructure quantification tasks.
In this work, we present a 3D variational autoencoder (VAE) for encoding
microstructure volume elements (VEs) comprising voxelated crystallographic
orientation data. Crystal symmetries in the orientation space are accounted for
by mapping to the crystallographic fundamental zone as a preprocessing step,
which allows for a continuous loss function to be used and improves the
training convergence rate. The VAE is then used to encode a training set of VEs
with an equiaxed polycrystalline microstructure with random texture. Accurate
reconstructions are achieved with a relative average misorientation error of
$9\times10^{-3}$ on the test dataset, for a continuous latent space with dimension 256.
We show that the model generalises well to microstructures with textures, grain
sizes and aspect ratios outside the training distribution. Structure-property
relationships are explored through using the training set of VEs as initial
configurations in various crystal plasticity (CP) simulations. Microstructural
fingerprints extracted from the VAE, which parameterise the VEs in a
low-dimensional latent space, are stored alongside the volume-averaged stress
response, at each strain increment, to uniaxial tensile deformation from CP
simulations. This is then used to train a fully connected neural network
mapping the input fingerprint to the resulting stress response, which acts as a
surrogate model for the CP simulation. The fingerprint-based surrogate model is
shown to accurately predict the microstructural dependence in the CP stress
response, with a relative mean-squared error of $8.9\times10^{-4}$ on unseen test data.
|
2503.17439 | Zhuoshi Pan | Zhuoshi Pan, Yu Li, Honglin Lin, Qizhi Pei, Zinan Tang, Wei Wu,
Chenlin Ming, H. Vicky Zhao, Conghui He, Lijun Wu | LEMMA: Learning from Errors for MatheMatical Advancement in LLMs | 9 pages, 6 figures, 4 tables, under review | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have demonstrated remarkable reasoning
capability in solving mathematical problems. However, existing approaches
primarily focus on improving the quality of correct training data, e.g.,
distilling high-quality correct solutions from advanced models, neglecting the
value contained in error data, potentially hindering the model's reflective
ability. Though some studies attempt to leverage error data, they often involve
complex mechanisms, such as Monte Carlo Tree Search (MCTS) to explore error
nodes. In this work, we propose to enhance LLMs' reasoning ability by Learning
from Errors for Mathematical Advancement (LEMMA). LEMMA constructs data
consisting of an incorrect solution with an erroneous step and a reflection
connection to a correct solution for fine-tuning. Specifically, we
systematically analyze the model-generated error types and introduce an
error-type grounded mistake augmentation method to collect diverse and
representative errors. Correct solutions are obtained either by fixing the errors
or by generating a fresh start. Through a model-aware smooth reflection connection,
the erroneous solution is transferred to the correct one. By fine-tuning on the
constructed dataset, the model is able to self-correct errors autonomously
within the generation process without relying on external critique models.
Experimental results demonstrate that LEMMA achieves significant performance
improvements over other strong baselines.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 17:59:10 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pan",
"Zhuoshi",
""
],
[
"Li",
"Yu",
""
],
[
"Lin",
"Honglin",
""
],
[
"Pei",
"Qizhi",
""
],
[
"Tang",
"Zinan",
""
],
[
"Wu",
"Wei",
""
],
[
"Ming",
"Chenlin",
""
],
[
"Zhao",
"H. Vicky",
""
],
[
"He",
"Conghui",
""
],
[
"Wu",
"Lijun",
""
]
] | TITLE: LEMMA: Learning from Errors for MatheMatical Advancement in LLMs
ABSTRACT: Large language models (LLMs) have demonstrated remarkable reasoning
capability in solving mathematical problems. However, existing approaches
primarily focus on improving the quality of correct training data, e.g.,
distilling high-quality correct solutions from advanced models, neglecting the
value contained in error data, potentially hindering the model's reflective
ability. Though some studies attempt to leverage error data, they often involve
complex mechanisms, such as Monte Carlo Tree Search (MCTS) to explore error
nodes. In this work, we propose to enhance LLMs' reasoning ability by Learning
from Errors for Mathematical Advancement (LEMMA). LEMMA constructs data
consisting of an incorrect solution with an erroneous step and a reflection
connection to a correct solution for fine-tuning. Specifically, we
systematically analyze the model-generated error types and introduce an
error-type grounded mistake augmentation method to collect diverse and
representative errors. Correct solutions are obtained either by fixing the errors
or by generating a fresh start. Through a model-aware smooth reflection connection,
the erroneous solution is transferred to the correct one. By fine-tuning on the
constructed dataset, the model is able to self-correct errors autonomously
within the generation process without relying on external critique models.
Experimental results demonstrate that LEMMA achieves significant performance
improvements over other strong baselines.
|
2503.17452 | Gideon Stein | Gideon Stein, Maha Shadaydeh, Jan Blunk, Niklas Penzel, Joachim
Denzler | CausalRivers -- Scaling up benchmarking of causal discovery for
real-world time-series | 10 pages, 8 figures, ICLR2025 main track | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Causal discovery, or identifying causal relationships from observational
data, is a notoriously challenging task, with numerous methods proposed to
tackle it. Despite this, in-the-wild evaluation of these methods is still
lacking, as works frequently rely on synthetic data evaluation and sparse
real-world examples under critical theoretical assumptions. Real-world causal
structures, however, are often complex, making it hard to decide on a proper
causal discovery strategy. To bridge this gap, we introduce CausalRivers, the
largest in-the-wild causal discovery benchmarking kit for time-series data to
date. CausalRivers features an extensive dataset on river discharge that covers
the eastern German territory (666 measurement stations) and the state of
Bavaria (494 measurement stations). It spans the years 2019 to 2023 with a
15-minute temporal resolution. Further, we provide additional data from a flood
around the Elbe River, as an event with a pronounced distributional shift.
Leveraging multiple sources of information and time-series meta-data, we
constructed two distinct causal ground truth graphs (Bavaria and eastern
Germany). These graphs can be sampled to generate thousands of subgraphs to
benchmark causal discovery across diverse and challenging settings. To
demonstrate the utility of CausalRivers, we evaluate several causal discovery
approaches through a set of experiments to identify areas for improvement.
CausalRivers has the potential to facilitate robust evaluations and comparisons
of causal discovery methods. Besides this primary purpose, we also expect that
this dataset will be relevant for connected areas of research, such as
time-series forecasting and anomaly detection. Based on this, we hope to push
benchmark-driven method development that fosters advanced techniques for causal
discovery, as is the case for many other areas of machine learning.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:02:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Stein",
"Gideon",
""
],
[
"Shadaydeh",
"Maha",
""
],
[
"Blunk",
"Jan",
""
],
[
"Penzel",
"Niklas",
""
],
[
"Denzler",
"Joachim",
""
]
] | TITLE: CausalRivers -- Scaling up benchmarking of causal discovery for
real-world time-series
ABSTRACT: Causal discovery, or identifying causal relationships from observational
data, is a notoriously challenging task, with numerous methods proposed to
tackle it. Despite this, in-the-wild evaluation of these methods is still
lacking, as works frequently rely on synthetic data evaluation and sparse
real-world examples under critical theoretical assumptions. Real-world causal
structures, however, are often complex, making it hard to decide on a proper
causal discovery strategy. To bridge this gap, we introduce CausalRivers, the
largest in-the-wild causal discovery benchmarking kit for time-series data to
date. CausalRivers features an extensive dataset on river discharge that covers
the eastern German territory (666 measurement stations) and the state of
Bavaria (494 measurement stations). It spans the years 2019 to 2023 with a
15-minute temporal resolution. Further, we provide additional data from a flood
around the Elbe River, as an event with a pronounced distributional shift.
Leveraging multiple sources of information and time-series meta-data, we
constructed two distinct causal ground truth graphs (Bavaria and eastern
Germany). These graphs can be sampled to generate thousands of subgraphs to
benchmark causal discovery across diverse and challenging settings. To
demonstrate the utility of CausalRivers, we evaluate several causal discovery
approaches through a set of experiments to identify areas for improvement.
CausalRivers has the potential to facilitate robust evaluations and comparisons
of causal discovery methods. Besides this primary purpose, we also expect that
this dataset will be relevant for connected areas of research, such as
time-series forecasting and anomaly detection. Based on this, we hope to push
benchmark-driven method development that fosters advanced techniques for causal
discovery, as is the case for many other areas of machine learning.
|
2503.17453 | Longjiang Yang | Ran Liu, Fengyu Zhang, Cong Yu, Longjiang Yang, Zhuofan Wen, Siyuan
Zhang, Hailiang Yao, Shun Chen, Zheng Lian, Bin Liu | Feature-Based Dual Visual Feature Extraction Model for Compound
Multimodal Emotion Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents our results for the eighth Affective Behavior Analysis
in-the-wild (ABAW) competition. Multimodal emotion recognition (ER) has
important applications in affective computing and human-computer interaction.
However, in the real world, compound emotion recognition faces greater issues
of uncertainty and modal conflicts. For the Compound Expression (CE)
Recognition Challenge, this paper proposes a multimodal emotion recognition
method that fuses the features of Vision Transformer (ViT) and Residual Network
(ResNet). We conducted experiments on the C-EXPR-DB and MELD datasets. The
results show that in scenarios with complex visual and audio cues (such as
C-EXPR-DB), the model that fuses the features of ViT and ResNet exhibits
superior performance. Our code is available at
https://github.com/MyGitHub-ax/8th_ABAW
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:03:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Ran",
""
],
[
"Zhang",
"Fengyu",
""
],
[
"Yu",
"Cong",
""
],
[
"Yang",
"Longjiang",
""
],
[
"Wen",
"Zhuofan",
""
],
[
"Zhang",
"Siyuan",
""
],
[
"Yao",
"Hailiang",
""
],
[
"Chen",
"Shun",
""
],
[
"Lian",
"Zheng",
""
],
[
"Liu",
"Bin",
""
]
] | TITLE: Feature-Based Dual Visual Feature Extraction Model for Compound
Multimodal Emotion Recognition
ABSTRACT: This article presents our results for the eighth Affective Behavior Analysis
in-the-wild (ABAW) competition. Multimodal emotion recognition (ER) has
important applications in affective computing and human-computer interaction.
However, in the real world, compound emotion recognition faces greater issues
of uncertainty and modal conflicts. For the Compound Expression (CE)
Recognition Challenge, this paper proposes a multimodal emotion recognition
method that fuses the features of Vision Transformer (ViT) and Residual Network
(ResNet). We conducted experiments on the C-EXPR-DB and MELD datasets. The
results show that in scenarios with complex visual and audio cues (such as
C-EXPR-DB), the model that fuses the features of ViT and ResNet exhibits
superior performance. Our code is available at
https://github.com/MyGitHub-ax/8th_ABAW
|
2503.17457 | Taylor Lundy | Taylor Lundy, Narun Raman, Scott Duke Kominers, Kevin Leyton-Brown | NFTs as a Data-Rich Test Bed: Conspicuous Consumption and its
Determinants | null | null | null | null | cs.CY cs.GT | http://creativecommons.org/licenses/by/4.0/ | Conspicuous consumption occurs when a consumer derives value from a good
based on its social meaning as a signal of wealth, taste, and/or community
affiliation. Common conspicuous goods include designer footwear, country club
memberships, and artwork; conspicuous goods also exist in the digital sphere,
with non-fungible tokens (NFTs) as a prominent example. The NFT market merits
deeper study for two key reasons: first, it is poorly understood relative to
its economic scale; and second, it is unusually amenable to analysis because
NFT transactions are publicly available on the blockchain, making them useful
as a test bed for conspicuous consumption dynamics. This paper introduces a
model that incorporates two previously identified elements of conspicuous
consumption: the \emph{bandwagon effect} (goods increase in value as they
become more popular) and the \emph{snob effect} (goods increase in value as
they become rarer). Our model resolves the apparent tension between these two
effects, exhibiting net complementarity between others' and one's own
conspicuous consumption. We also introduce a novel dataset combining NFT
transactions with embeddings of the corresponding NFT images computed using an
off-the-shelf vision transformer architecture. We use our dataset to validate
the model, showing that the bandwagon effect raises an NFT collection's value
as more consumers join, while the snob effect drives consumers to seek rarer
NFTs within a given collection.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:09:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lundy",
"Taylor",
""
],
[
"Raman",
"Narun",
""
],
[
"Kominers",
"Scott Duke",
""
],
[
"Leyton-Brown",
"Kevin",
""
]
] | TITLE: NFTs as a Data-Rich Test Bed: Conspicuous Consumption and its
Determinants
ABSTRACT: Conspicuous consumption occurs when a consumer derives value from a good
based on its social meaning as a signal of wealth, taste, and/or community
affiliation. Common conspicuous goods include designer footwear, country club
memberships, and artwork; conspicuous goods also exist in the digital sphere,
with non-fungible tokens (NFTs) as a prominent example. The NFT market merits
deeper study for two key reasons: first, it is poorly understood relative to
its economic scale; and second, it is unusually amenable to analysis because
NFT transactions are publicly available on the blockchain, making them useful
as a test bed for conspicuous consumption dynamics. This paper introduces a
model that incorporates two previously identified elements of conspicuous
consumption: the \emph{bandwagon effect} (goods increase in value as they
become more popular) and the \emph{snob effect} (goods increase in value as
they become rarer). Our model resolves the apparent tension between these two
effects, exhibiting net complementarity between others' and one's own
conspicuous consumption. We also introduce a novel dataset combining NFT
transactions with embeddings of the corresponding NFT images computed using an
off-the-shelf vision transformer architecture. We use our dataset to validate
the model, showing that the bandwagon effect raises an NFT collection's value
as more consumers join, while the snob effect drives consumers to seek rarer
NFTs within a given collection.
|
2503.17460 | Reem Gody | Reem Gody, Mahmoud Goudy, Ahmed Y. Tawfik | ConvoGen: Enhancing Conversational AI with Synthetic Data: A Multi-Agent
Approach | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present ConvoGen: an innovative framework for generating
synthetic conversational data using multi-agent systems. Our method leverages
few-shot learning and introduces iterative sampling from a dynamically updated
few-shot hub to create diverse and realistic conversational scenarios. The
generated data has numerous applications, including training and evaluating
conversational AI models, and augmenting existing datasets for tasks like
conversational intent classification or conversation summarization. Our
experiments demonstrate the effectiveness of this method in producing
high-quality diverse synthetic conversational data, highlighting its potential
to enhance the development and evaluation of conversational AI systems.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:14:12 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gody",
"Reem",
""
],
[
"Goudy",
"Mahmoud",
""
],
[
"Tawfik",
"Ahmed Y.",
""
]
] | TITLE: ConvoGen: Enhancing Conversational AI with Synthetic Data: A Multi-Agent
Approach
ABSTRACT: In this paper, we present ConvoGen: an innovative framework for generating
synthetic conversational data using multi-agent systems. Our method leverages
few-shot learning and introduces iterative sampling from a dynamically updated
few-shot hub to create diverse and realistic conversational scenarios. The
generated data has numerous applications, including training and evaluating
conversational AI models, and augmenting existing datasets for tasks like
conversational intent classification or conversation summarization. Our
experiments demonstrate the effectiveness of this method in producing
high-quality diverse synthetic conversational data, highlighting its potential
to enhance the development and evaluation of conversational AI systems.
|
2503.17485 | Hassan Alhuzali | Lama Ayash, Hassan Alhuzali, Ashwag Alasmari, Sultan Aloufi | SaudiCulture: A Benchmark for Evaluating Large Language Models Cultural
Competence within Saudi Arabia | 34 pages, under-review | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) have demonstrated remarkable capabilities in
natural language processing; however, they often struggle to accurately capture
and reflect cultural nuances. This research addresses this challenge by
focusing on Saudi Arabia, a country characterized by diverse dialects and rich
cultural traditions. We introduce SaudiCulture, a novel benchmark designed to
evaluate the cultural competence of LLMs within the distinct geographical and
cultural contexts of Saudi Arabia. SaudiCulture is a comprehensive dataset of
questions covering five major geographical regions, namely West, East, South,
North, and Center, along with general questions applicable across all regions.
The dataset encompasses a broad spectrum of cultural domains, including food,
clothing, entertainment, celebrations, and crafts. To ensure a rigorous
evaluation, SaudiCulture includes questions of varying complexity, such as
open-ended, single-choice, and multiple-choice formats, with some requiring
multiple correct answers. Additionally, the dataset distinguishes between
common cultural knowledge and specialized regional aspects. We conduct
extensive evaluations on five LLMs, namely GPT-4, Llama 3.3, FANAR, Jais, and
AceGPT, analyzing their performance across different question types and
cultural contexts. Our findings reveal that all models experience significant
performance declines when faced with highly specialized or region-specific
questions, particularly those requiring multiple correct responses.
Additionally, certain cultural categories are more easily identifiable than
others, further highlighting inconsistencies in LLMs' cultural understanding.
These results emphasize the importance of incorporating region-specific
knowledge into LLMs' training to enhance their cultural competence.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:55:10 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ayash",
"Lama",
""
],
[
"Alhuzali",
"Hassan",
""
],
[
"Alasmari",
"Ashwag",
""
],
[
"Aloufi",
"Sultan",
""
]
] | TITLE: SaudiCulture: A Benchmark for Evaluating Large Language Models Cultural
Competence within Saudi Arabia
ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable capabilities in
natural language processing; however, they often struggle to accurately capture
and reflect cultural nuances. This research addresses this challenge by
focusing on Saudi Arabia, a country characterized by diverse dialects and rich
cultural traditions. We introduce SaudiCulture, a novel benchmark designed to
evaluate the cultural competence of LLMs within the distinct geographical and
cultural contexts of Saudi Arabia. SaudiCulture is a comprehensive dataset of
questions covering five major geographical regions, namely West, East, South,
North, and Center, along with general questions applicable across all regions.
The dataset encompasses a broad spectrum of cultural domains, including food,
clothing, entertainment, celebrations, and crafts. To ensure a rigorous
evaluation, SaudiCulture includes questions of varying complexity, such as
open-ended, single-choice, and multiple-choice formats, with some requiring
multiple correct answers. Additionally, the dataset distinguishes between
common cultural knowledge and specialized regional aspects. We conduct
extensive evaluations on five LLMs, namely GPT-4, Llama 3.3, FANAR, Jais, and
AceGPT, analyzing their performance across different question types and
cultural contexts. Our findings reveal that all models experience significant
performance declines when faced with highly specialized or region-specific
questions, particularly those requiring multiple correct responses.
Additionally, certain cultural categories are more easily identifiable than
others, further highlighting inconsistencies in LLMs' cultural understanding.
These results emphasize the importance of incorporating region-specific
knowledge into LLMs' training to enhance their cultural competence.
|
2503.17488 | Tianwen Zhou | Tianwen Zhou, Jing Wang, Songtao Wu, Kuanhong Xu | ProDehaze: Prompting Diffusion Models Toward Faithful Image Dehazing | Accepted to ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent approaches using large-scale pretrained diffusion models for image
dehazing improve perceptual quality but often suffer from hallucination issues,
producing dehazed images that are unfaithful to the original. To mitigate this, we
propose ProDehaze, a framework that employs internal image priors to direct
external priors encoded in pretrained models. We introduce two types of
\textit{selective} internal priors that prompt the model to concentrate on
critical image areas: a Structure-Prompted Restorer in the latent space that
emphasizes structure-rich regions, and a Haze-Aware Self-Correcting Refiner in
the decoding process to align distributions between clearer input regions and
the output. Extensive experiments on real-world datasets demonstrate that
ProDehaze achieves high-fidelity results in image dehazing, particularly in
reducing color shifts. Our code is at https://github.com/TianwenZhou/ProDehaze.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:56:50 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhou",
"Tianwen",
""
],
[
"Wang",
"Jing",
""
],
[
"Wu",
"Songtao",
""
],
[
"Xu",
"Kuanhong",
""
]
] | TITLE: ProDehaze: Prompting Diffusion Models Toward Faithful Image Dehazing
ABSTRACT: Recent approaches using large-scale pretrained diffusion models for image
dehazing improve perceptual quality but often suffer from hallucination issues,
producing dehazed images that are unfaithful to the original. To mitigate this, we
propose ProDehaze, a framework that employs internal image priors to direct
external priors encoded in pretrained models. We introduce two types of
\textit{selective} internal priors that prompt the model to concentrate on
critical image areas: a Structure-Prompted Restorer in the latent space that
emphasizes structure-rich regions, and a Haze-Aware Self-Correcting Refiner in
the decoding process to align distributions between clearer input regions and
the output. Extensive experiments on real-world datasets demonstrate that
ProDehaze achieves high-fidelity results in image dehazing, particularly in
reducing color shifts. Our code is at https://github.com/TianwenZhou/ProDehaze.
|
2503.17489 | Dongping Chen | Shu Pu, Yaochen Wang, Dongping Chen, Yuhang Chen, Guohao Wang, Qi Qin,
Zhongyi Zhang, Zhiyuan Zhang, Zetong Zhou, Shuang Gong, Yi Gui, Yao Wan,
Philip S. Yu | Judge Anything: MLLM as a Judge Across Any Modality | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Evaluating generative foundation models on open-ended multimodal
understanding (MMU) and generation (MMG) tasks across diverse modalities (e.g.,
images, audio, video) poses significant challenges due to the complexity of
cross-modal interactions. To this end, the idea of utilizing Multimodal LLMs
(MLLMs) as automated judges has emerged, with encouraging results in assessing
vision-language understanding tasks. Moving further, this paper extends
MLLM-as-a-Judge across modalities to a unified manner by introducing two
benchmarks, TaskAnything and JudgeAnything, to respectively evaluate the
overall performance and judging capabilities of MLLMs across any-to-any
modality tasks. Specifically, TaskAnything evaluates the MMU and MMG
capabilities across 15 any-to-any modality categories, employing 1,500 queries
curated from well-established benchmarks. Furthermore, JudgeAnything evaluates
the judging capabilities of 5 advanced MLLMs (e.g., GPT-4o and Gemini-2.0-Flash) from
the perspectives of Pair Comparison and Score Evaluation, providing a
standardized testbed that incorporates human judgments and detailed rubrics.
Our extensive experiments reveal that while these MLLMs show promise in
assessing MMU (i.e., achieving an average of 66.55% in Pair Comparison setting
and 42.79% in Score Evaluation setting), they encounter significant challenges
with MMG tasks (i.e., averaging only 53.37% in Pair Comparison setting and
30.05% in Score Evaluation setting), exposing cross-modality biases and
hallucination issues. To address this, we present OmniArena, an automated
platform for evaluating omni-models and multimodal reward models. Our work
highlights the need for fairer evaluation protocols and stronger alignment with
human preferences. The source code and dataset are publicly available at:
https://urrealhero.github.io/judgeanythingweb/.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:59:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pu",
"Shu",
""
],
[
"Wang",
"Yaochen",
""
],
[
"Chen",
"Dongping",
""
],
[
"Chen",
"Yuhang",
""
],
[
"Wang",
"Guohao",
""
],
[
"Qin",
"Qi",
""
],
[
"Zhang",
"Zhongyi",
""
],
[
"Zhang",
"Zhiyuan",
""
],
[
"Zhou",
"Zetong",
""
],
[
"Gong",
"Shuang",
""
],
[
"Gui",
"Yi",
""
],
[
"Wan",
"Yao",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Judge Anything: MLLM as a Judge Across Any Modality
ABSTRACT: Evaluating generative foundation models on open-ended multimodal
understanding (MMU) and generation (MMG) tasks across diverse modalities (e.g.,
images, audio, video) poses significant challenges due to the complexity of
cross-modal interactions. To this end, the idea of utilizing Multimodal LLMs
(MLLMs) as automated judges has emerged, with encouraging results in assessing
vision-language understanding tasks. Moving further, this paper extends
MLLM-as-a-Judge across modalities to a unified manner by introducing two
benchmarks, TaskAnything and JudgeAnything, to respectively evaluate the
overall performance and judging capabilities of MLLMs across any-to-any
modality tasks. Specifically, TaskAnything evaluates the MMU and MMG
capabilities across 15 any-to-any modality categories, employing 1,500 queries
curated from well-established benchmarks. Furthermore, JudgeAnything evaluates
the judging capabilities of 5 advanced MLLMs (e.g., GPT-4o and Gemini-2.0-Flash) from
the perspectives of Pair Comparison and Score Evaluation, providing a
standardized testbed that incorporates human judgments and detailed rubrics.
Our extensive experiments reveal that while these MLLMs show promise in
assessing MMU (i.e., achieving an average of 66.55% in Pair Comparison setting
and 42.79% in Score Evaluation setting), they encounter significant challenges
with MMG tasks (i.e., averaging only 53.37% in Pair Comparison setting and
30.05% in Score Evaluation setting), exposing cross-modality biases and
hallucination issues. To address this, we present OmniArena, an automated
platform for evaluating omni-models and multimodal reward models. Our work
highlights the need for fairer evaluation protocols and stronger alignment with
human preferences. The source code and dataset are publicly available at:
https://urrealhero.github.io/judgeanythingweb/.
|
2503.17493 | Pakizar Shamoi Dr | Aidos Konyspay, Pakizar Shamoi, Malika Ziyada, Zhusup Smambayev | Meme Similarity and Emotion Detection using Multimodal Analysis | Have been submitted to IEEE for consideration | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Internet memes are a central element of online culture, blending images and
text. While substantial research has focused on either the visual or textual
components of memes, little attention has been given to their interplay. This
gap raises a key question: What methodology can effectively compare memes and
the emotions they elicit? Our study employs a multimodal methodological
approach, analyzing both the visual and textual elements of memes.
Specifically, we apply a multimodal CLIP (Contrastive Language-Image
Pre-training) model for grouping similar memes based on text and visual content
embeddings, enabling robust similarity assessments across modalities. Using the
Reddit Meme Dataset and Memotion Dataset, we extract low-level visual features
and high-level semantic features to identify similar meme pairs. To validate
these automated similarity assessments, we conducted a user study with 50
participants, asking them to provide yes/no responses regarding meme similarity
and their emotional reactions. The comparison of experimental results with
human judgments showed a 67.23\% agreement, suggesting that the computational
approach aligns well with human perception. Additionally, we implemented a
text-based classifier using the DistilBERT model to categorize memes into one
of six basic emotions. The results indicate that anger and joy are the dominant
emotions in memes, with motivational memes eliciting stronger emotional
responses. This research contributes to the study of multimodal memes,
enhancing both language-based and visual approaches to analyzing and improving
online visual communication and user experiences. Furthermore, it provides
insights for better content moderation strategies in online platforms.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 19:07:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Konyspay",
"Aidos",
""
],
[
"Shamoi",
"Pakizar",
""
],
[
"Ziyada",
"Malika",
""
],
[
"Smambayev",
"Zhusup",
""
]
] | TITLE: Meme Similarity and Emotion Detection using Multimodal Analysis
ABSTRACT: Internet memes are a central element of online culture, blending images and
text. While substantial research has focused on either the visual or textual
components of memes, little attention has been given to their interplay. This
gap raises a key question: What methodology can effectively compare memes and
the emotions they elicit? Our study employs a multimodal methodological
approach, analyzing both the visual and textual elements of memes.
Specifically, we apply a multimodal CLIP (Contrastive Language-Image
Pre-training) model to group similar memes based on text and visual content
embeddings, enabling robust similarity assessments across modalities. Using the
Reddit Meme Dataset and Memotion Dataset, we extract low-level visual features
and high-level semantic features to identify similar meme pairs. To validate
these automated similarity assessments, we conducted a user study with 50
participants, asking them to provide yes/no responses regarding meme similarity
and their emotional reactions. The comparison of experimental results with
human judgments showed a 67.23\% agreement, suggesting that the computational
approach aligns well with human perception. Additionally, we implemented a
text-based classifier using the DistilBERT model to categorize memes into one
of six basic emotions. The results indicate that anger and joy are the dominant
emotions in memes, with motivational memes eliciting stronger emotional
responses. This research contributes to the study of multimodal memes,
enhancing both language-based and visual approaches to analyzing and improving
online visual communication and user experiences. Furthermore, it provides
insights for better content moderation strategies in online platforms.
|
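A minimal sketch of the CLIP-based grouping step the record above describes: each meme gets a combined image+text embedding, and similarity between memes is measured by cosine similarity of those embeddings. This is an illustration only, not the paper's pipeline; the checkpoint name, the concatenation of the two embeddings, and the file paths in the usage note are assumptions.

# pip install torch transformers pillow
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed checkpoint, not necessarily the paper's
model = CLIPModel.from_pretrained(MODEL_NAME)
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

def meme_embedding(image_path: str, caption: str) -> torch.Tensor:
    # Concatenate L2-normalised CLIP image and text embeddings for one meme.
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[caption], images=[image],
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return torch.cat([img, txt], dim=-1).squeeze(0)

def meme_similarity(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    # Cosine similarity between two combined image+text embeddings.
    return torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0).item()

# Usage (hypothetical files):
# meme_similarity(meme_embedding("a.png", "caption a"), meme_embedding("b.png", "caption b"))

In practice such embeddings would feed a nearest-neighbour search or clustering step to form the candidate similar-meme pairs that are then checked against human judgments.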
2503.17499 | Riadul Islam | Joey Mul\'e, Dhandeep Challagundla, Rachit Saini, and Riadul Islam | Event-Based Crossing Dataset (EBCD) | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Event-based vision revolutionizes traditional image sensing by capturing
asynchronous intensity variations rather than static frames, enabling ultrafast
temporal resolution, sparse data encoding, and enhanced motion perception.
While this paradigm offers significant advantages, conventional event-based
datasets impose a fixed thresholding constraint to determine pixel activations,
severely limiting adaptability to real-world environmental fluctuations. Lower
thresholds retain finer details but introduce pervasive noise, whereas higher
thresholds suppress extraneous activations at the expense of crucial object
information. To mitigate these constraints, we introduce the Event-Based
Crossing Dataset (EBCD), a comprehensive dataset tailored for pedestrian and
vehicle detection in dynamic outdoor environments, incorporating a
multi-thresholding framework to refine event representations. By capturing
event-based images at ten distinct threshold levels (4, 8, 12, 16, 20, 30, 40,
50, 60, and 75), this dataset facilitates an extensive assessment of object
detection performance under varying conditions of sparsity and noise
suppression. We benchmark state-of-the-art detection architectures, including
YOLOv4, YOLOv7, EfficientDet-b0, MobileNet-v1, and Histogram of Oriented
Gradients (HOG), to examine the nuanced impact of threshold selection on
detection performance. By offering a systematic approach to threshold
variation, we foresee that EBCD fosters a more adaptive evaluation of
event-based object detection, aligning diverse neuromorphic vision with
real-world scene dynamics. We make the dataset publicly available to
propel further advancements in low-latency, high-fidelity neuromorphic imaging:
https://ieee-dataport.org/documents/event-based-crossing-dataset-ebcd
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 19:20:58 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mulé",
"Joey",
""
],
[
"Challagundla",
"Dhandeep",
""
],
[
"Saini",
"Rachit",
""
],
[
"Islam",
"Riadul",
""
]
] | TITLE: Event-Based Crossing Dataset (EBCD)
ABSTRACT: Event-based vision revolutionizes traditional image sensing by capturing
asynchronous intensity variations rather than static frames, enabling ultrafast
temporal resolution, sparse data encoding, and enhanced motion perception.
While this paradigm offers significant advantages, conventional event-based
datasets impose a fixed thresholding constraint to determine pixel activations,
severely limiting adaptability to real-world environmental fluctuations. Lower
thresholds retain finer details but introduce pervasive noise, whereas higher
thresholds suppress extraneous activations at the expense of crucial object
information. To mitigate these constraints, we introduce the Event-Based
Crossing Dataset (EBCD), a comprehensive dataset tailored for pedestrian and
vehicle detection in dynamic outdoor environments, incorporating a
multi-thresholding framework to refine event representations. By capturing
event-based images at ten distinct threshold levels (4, 8, 12, 16, 20, 30, 40,
50, 60, and 75), this dataset facilitates an extensive assessment of object
detection performance under varying conditions of sparsity and noise
suppression. We benchmark state-of-the-art detection architectures, including
YOLOv4, YOLOv7, EfficientDet-b0, MobileNet-v1, and Histogram of Oriented
Gradients (HOG), to examine the nuanced impact of threshold selection on
detection performance. By offering a systematic approach to threshold
variation, we foresee that EBCD fosters a more adaptive evaluation of
event-based object detection, aligning diverse neuromorphic vision with
real-world scene dynamics. We make the dataset publicly available to
propel further advancements in low-latency, high-fidelity neuromorphic imaging:
https://ieee-dataport.org/documents/event-based-crossing-dataset-ebcd
|
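To make the fixed-threshold event representation concrete, the sketch below converts a pair of consecutive grayscale frames into binary event maps at the ten threshold levels listed in the record above. It assumes simple absolute frame differencing and random toy frames; this illustrates the idea only and is not the procedure used to build EBCD.

import numpy as np

THRESHOLDS = (4, 8, 12, 16, 20, 30, 40, 50, 60, 75)  # levels quoted in the abstract

def multi_threshold_events(prev_frame: np.ndarray, curr_frame: np.ndarray,
                           thresholds=THRESHOLDS) -> dict:
    # A pixel "fires" at a given threshold when its absolute intensity change
    # between consecutive frames exceeds that threshold.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return {t: (diff > t).astype(np.uint8) for t in thresholds}

# Toy usage with random 8-bit frames: lower thresholds fire far more pixels.
rng = np.random.default_rng(0)
f0, f1 = rng.integers(0, 256, size=(2, 64, 64), dtype=np.uint8)
events = multi_threshold_events(f0, f1)
print({t: int(m.sum()) for t, m in events.items()})

The trade-off the abstract discusses shows up directly here: small thresholds keep fine detail but also noise, while large thresholds suppress activations that may carry object information.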
2503.17502 | Hamed Jelodar | Hamed Jelodar, Mohammad Meymani, Roozbeh Razavi-Far | Large Language Models (LLMs) for Source Code Analysis: applications,
models and datasets | null | null | null | null | cs.SE cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) and transformer-based architectures are
increasingly utilized for source code analysis. As software systems grow in
complexity, integrating LLMs into code analysis workflows becomes essential for
enhancing efficiency, accuracy, and automation. This paper explores the role of
LLMs for different code analysis tasks, focusing on three key aspects: 1) what
they can analyze and their applications, 2) what models are used, and 3) what
datasets are used, along with the challenges they face. To this end, we
investigate scholarly articles that explore the use of LLMs for
source code analysis to uncover research developments, current trends, and the
intellectual structure of this emerging field. Additionally, we summarize
limitations and highlight essential tools, datasets, and key challenges, which
could be valuable for future work.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 19:29:50 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jelodar",
"Hamed",
""
],
[
"Meymani",
"Mohammad",
""
],
[
"Razavi-Far",
"Roozbeh",
""
]
] | TITLE: Large Language Models (LLMs) for Source Code Analysis: applications,
models and datasets
ABSTRACT: Large language models (LLMs) and transformer-based architectures are
increasingly utilized for source code analysis. As software systems grow in
complexity, integrating LLMs into code analysis workflows becomes essential for
enhancing efficiency, accuracy, and automation. This paper explores the role of
LLMs for different code analysis tasks, focusing on three key aspects: 1) what
they can analyze and their applications, 2) what models are used, and 3) what
datasets are used, along with the challenges they face. To this end, we
investigate scholarly articles that explore the use of LLMs for
source code analysis to uncover research developments, current trends, and the
intellectual structure of this emerging field. Additionally, we summarize
limitations and highlight essential tools, datasets, and key challenges, which
could be valuable for future work.
|
2503.17507 | Ahmed H. Salamah | Ahmed H. Salamah, Pierre McWhannel, Nicole Yan | Dense Passage Retrieval in Conversational Search | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Information retrieval systems have traditionally relied on exact term match
methods such as BM25 for first-stage retrieval. However, recent advancements in
neural network-based techniques have introduced a new method called dense
retrieval. This approach uses a dual-encoder to create contextual embeddings
that can be indexed and clustered efficiently at run-time, resulting in
improved retrieval performance in Open-domain Question Answering systems. In
this paper, we apply the dense retrieval technique to conversational search by
conducting experiments on the CAsT benchmark dataset. We also propose an
end-to-end conversational search system called GPT2QR+DPR, which incorporates
various query reformulation strategies to improve retrieval accuracy. Our
findings indicate that dense retrieval outperforms BM25 even without extensive
fine-tuning. Our work contributes to the growing body of research on
neural-based retrieval methods in conversational search, and highlights the
potential of dense retrieval in improving retrieval accuracy in conversational
search systems.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 19:39:31 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Salamah",
"Ahmed H.",
""
],
[
"McWhannel",
"Pierre",
""
],
[
"Yan",
"Nicole",
""
]
] | TITLE: Dense Passage Retrieval in Conversational Search
ABSTRACT: Information retrieval systems have traditionally relied on exact term match
methods such as BM25 for first-stage retrieval. However, recent advancements in
neural network-based techniques have introduced a new method called dense
retrieval. This approach uses a dual-encoder to create contextual embeddings
that can be indexed and clustered efficiently at run-time, resulting in
improved retrieval performance in Open-domain Question Answering systems. In
this paper, we apply the dense retrieval technique to conversational search by
conducting experiments on the CAsT benchmark dataset. We also propose an
end-to-end conversational search system called GPT2QR+DPR, which incorporates
various query reformulation strategies to improve retrieval accuracy. Our
findings indicate that dense retrieval outperforms BM25 even without extensive
fine-tuning. Our work contributes to the growing body of research on
neural-based retrieval methods in conversational search, and highlights the
potential of dense retrieval in improving retrieval accuracy in conversational
search systems.
|
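A minimal sketch of the first-stage dense retrieval step described above: queries and passages are embedded separately by a dual-encoder, and ranking reduces to an inner-product search over the passage embeddings. The embedding dimension and the random vectors standing in for encoder outputs are assumptions for illustration.

import numpy as np

def dense_retrieve(query_vec: np.ndarray, passage_matrix: np.ndarray, k: int = 5):
    # Score every passage by inner product with the query embedding and
    # return the indices and scores of the top-k passages.
    scores = passage_matrix @ query_vec            # shape: (num_passages,)
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Toy usage: random vectors stand in for dual-encoder outputs.
rng = np.random.default_rng(1)
passage_embs = rng.standard_normal((1000, 768))
query_emb = rng.standard_normal(768)
idx, scr = dense_retrieve(query_emb, passage_embs)
print(idx, np.round(scr, 2))

At scale, the exhaustive matrix product would typically be replaced by an approximate nearest-neighbour index built over the pre-computed passage embeddings.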
2503.17509 | Joseph Gatto | Joseph Gatto, Parker Seegmiller, Timothy Burdick, Inas S. Khayal,
Sarah DeLozier, Sarah M. Preum | Follow-up Question Generation For Enhanced Patient-Provider
Conversations | 17 Pages, 7 Figures, 6 Tables | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Follow-up question generation is an essential feature of dialogue systems as
it can reduce conversational ambiguity and enhance modeling complex
interactions. Conversational contexts often pose core NLP challenges such as
(i) extracting relevant information buried in fragmented data sources, and (ii)
modeling parallel thought processes. These two challenges occur frequently in
medical dialogue as a doctor asks questions based not only on patient
utterances but also their prior EHR data and current diagnostic hypotheses.
Asking medical questions in asynchronous conversations compounds these issues
as doctors can only rely on static EHR information to motivate follow-up
questions.
To address these challenges, we introduce FollowupQ, a novel framework for
enhancing asynchronous medical conversation. FollowupQ is a multi-agent
framework that processes patient messages and EHR data to generate personalized
follow-up questions, clarifying patient-reported medical conditions. FollowupQ
reduces requisite provider follow-up communications by 34%. It also improves
performance by 17% and 5% on real and synthetic data, respectively. We also
release the first public dataset of asynchronous medical messages with linked
EHR data alongside 2,300 follow-up questions written by clinical experts for
the wider NLP research community.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 19:40:53 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gatto",
"Joseph",
""
],
[
"Seegmiller",
"Parker",
""
],
[
"Burdick",
"Timothy",
""
],
[
"Khayal",
"Inas S.",
""
],
[
"DeLozier",
"Sarah",
""
],
[
"Preum",
"Sarah M.",
""
]
] | TITLE: Follow-up Question Generation For Enhanced Patient-Provider
Conversations
ABSTRACT: Follow-up question generation is an essential feature of dialogue systems as
it can reduce conversational ambiguity and enhance modeling complex
interactions. Conversational contexts often pose core NLP challenges such as
(i) extracting relevant information buried in fragmented data sources, and (ii)
modeling parallel thought processes. These two challenges occur frequently in
medical dialogue as a doctor asks questions based not only on patient
utterances but also their prior EHR data and current diagnostic hypotheses.
Asking medical questions in asynchronous conversations compounds these issues
as doctors can only rely on static EHR information to motivate follow-up
questions.
To address these challenges, we introduce FollowupQ, a novel framework for
enhancing asynchronous medical conversation. FollowupQ is a multi-agent
framework that processes patient messages and EHR data to generate personalized
follow-up questions, clarifying patient-reported medical conditions. FollowupQ
reduces requisite provider follow-up communications by 34%. It also improves
performance by 17% and 5% on real and synthetic data, respectively. We also
release the first public dataset of asynchronous medical messages with linked
EHR data alongside 2,300 follow-up questions written by clinical experts for
the wider NLP research community.
|
2503.17528 | Vincent Maillou | Vincent Maillou, Lisa Gaedke-Merzhaeuser, Alexandros Nikolaos Ziogas,
Olaf Schenk, Mathieu Luisier | Serinv: A Scalable Library for the Selected Inversion of
Block-Tridiagonal with Arrowhead Matrices | 13 pages, 8 figures | null | null | null | cs.DC cs.NA cs.PF math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inversion of structured sparse matrices is a key but computationally and
memory-intensive operation in many scientific applications. There are cases,
however, where only particular entries of the full inverse are required. This
has motivated the development of so-called selected-inversion algorithms,
capable of computing only specific elements of the full inverse. Currently,
most of them are either shared-memory codes or limited to CPU implementations.
Here, we introduce Serinv, a scalable library providing distributed, GPU-based
algorithms for the selected inversion and Cholesky decomposition of
positive-definite, block-tridiagonal arrowhead matrices. This matrix class is
highly relevant in statistical climate modeling and materials science
applications. The performance of Serinv is demonstrated on synthetic and real
datasets from statistical air temperature prediction models. In our numerical
tests, Serinv achieves 32.3% strong and 47.2% weak scaling efficiency and up to
two orders of magnitude speedup over the sparse direct solvers PARDISO and
MUMPS on 16 GPUs.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 20:21:22 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Maillou",
"Vincent",
""
],
[
"Gaedke-Merzhaeuser",
"Lisa",
""
],
[
"Ziogas",
"Alexandros Nikolaos",
""
],
[
"Schenk",
"Olaf",
""
],
[
"Luisier",
"Mathieu",
""
]
] | TITLE: Serinv: A Scalable Library for the Selected Inversion of
Block-Tridiagonal with Arrowhead Matrices
ABSTRACT: The inversion of structured sparse matrices is a key but computationally and
memory-intensive operation in many scientific applications. There are cases,
however, where only particular entries of the full inverse are required. This
has motivated the development of so-called selected-inversion algorithms,
capable of computing only specific elements of the full inverse. Currently,
most of them are either shared-memory codes or limited to CPU implementations.
Here, we introduce Serinv, a scalable library providing distributed, GPU-based
algorithms for the selected inversion and Cholesky decomposition of
positive-definite, block-tridiagonal arrowhead matrices. This matrix class is
highly relevant in statistical climate modeling and materials science
applications. The performance of Serinv is demonstrated on synthetic and real
datasets from statistical air temperature prediction models. In our numerical
tests, Serinv achieves 32.3% strong and 47.2% weak scaling efficiency and up to
two orders of magnitude speedup over the sparse direct solvers PARDISO and
MUMPS on 16 GPUs.
|
2503.17536 | Nusrat Munia | Nusrat Munia and Abdullah-Al-Zubaer Imran | DermDiff: Generative Diffusion Model for Mitigating Racial Biases in
Dermatology Diagnosis | Paper presented at ADSMI@MICCAI 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skin diseases, such as skin cancer, are a significant public health issue,
and early diagnosis is crucial for effective treatment. Artificial intelligence
(AI) algorithms have the potential to assist in triaging benign vs malignant
skin lesions and improve diagnostic accuracy. However, existing AI models for
skin disease diagnosis are often developed and tested on limited and biased
datasets, leading to poor performance on certain skin tones. To address this
problem, we propose a novel generative model, named DermDiff, that can generate
diverse and representative dermoscopic image data for skin disease diagnosis.
Leveraging text prompting and multimodal image-text learning, DermDiff improves
the representation of underrepresented groups (patients, diseases, etc.) in
highly imbalanced datasets. Our extensive experimentation showcases the
effectiveness of DermDiff in terms of high fidelity and diversity. Furthermore,
downstream evaluation suggests the potential of DermDiff in mitigating racial
biases for dermatology diagnosis. Our code is available at
https://github.com/Munia03/DermDiff
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 20:45:39 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Munia",
"Nusrat",
""
],
[
"Imran",
"Abdullah-Al-Zubaer",
""
]
] | TITLE: DermDiff: Generative Diffusion Model for Mitigating Racial Biases in
Dermatology Diagnosis
ABSTRACT: Skin diseases, such as skin cancer, are a significant public health issue,
and early diagnosis is crucial for effective treatment. Artificial intelligence
(AI) algorithms have the potential to assist in triaging benign vs malignant
skin lesions and improve diagnostic accuracy. However, existing AI models for
skin disease diagnosis are often developed and tested on limited and biased
datasets, leading to poor performance on certain skin tones. To address this
problem, we propose a novel generative model, named DermDiff, that can generate
diverse and representative dermoscopic image data for skin disease diagnosis.
Leveraging text prompting and multimodal image-text learning, DermDiff improves
the representation of underrepresented groups (patients, diseases, etc.) in
highly imbalanced datasets. Our extensive experimentation showcases the
effectiveness of DermDiff in terms of high fidelity and diversity. Furthermore,
downstream evaluation suggests the potential of DermDiff in mitigating racial
biases for dermatology diagnosis. Our code is available at
https://github.com/Munia03/DermDiff
|
2503.17540 | Bin Xie | Bin Xie, Yan Yan, Gady Agam | MM-UNet: Meta Mamba UNet for Medical Image Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State Space Models (SSMs) have recently demonstrated outstanding performance
in long-sequence modeling, particularly in natural language processing.
However, their direct application to medical image segmentation poses several
challenges. SSMs, originally designed for 1D sequences, struggle with 3D
spatial structures in medical images due to discontinuities introduced by
flattening. Additionally, SSMs have difficulty fitting high-variance data,
which is common in medical imaging.
In this paper, we analyze the intrinsic limitations of SSMs in medical image
segmentation and propose a unified U-shaped encoder-decoder architecture, Meta
Mamba UNet (MM-UNet), designed to leverage the advantages of SSMs while
mitigating their drawbacks. MM-UNet incorporates hybrid modules that integrate
SSMs within residual connections, reducing variance and improving performance.
Furthermore, we introduce a novel bi-directional scan order strategy to
alleviate discontinuities when processing medical images.
Extensive experiments on the AMOS2022 and Synapse datasets demonstrate the
superiority of MM-UNet over state-of-the-art methods. MM-UNet achieves a Dice
score of 91.0% on AMOS2022, surpassing nnUNet by 3.2%, and a Dice score of
87.1% on Synapse. These results confirm the effectiveness of integrating SSMs
in medical image segmentation through architectural design optimizations.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 21:15:03 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xie",
"Bin",
""
],
[
"Yan",
"Yan",
""
],
[
"Agam",
"Gady",
""
]
] | TITLE: MM-UNet: Meta Mamba UNet for Medical Image Segmentation
ABSTRACT: State Space Models (SSMs) have recently demonstrated outstanding performance
in long-sequence modeling, particularly in natural language processing.
However, their direct application to medical image segmentation poses several
challenges. SSMs, originally designed for 1D sequences, struggle with 3D
spatial structures in medical images due to discontinuities introduced by
flattening. Additionally, SSMs have difficulty fitting high-variance data,
which is common in medical imaging.
In this paper, we analyze the intrinsic limitations of SSMs in medical image
segmentation and propose a unified U-shaped encoder-decoder architecture, Meta
Mamba UNet (MM-UNet), designed to leverage the advantages of SSMs while
mitigating their drawbacks. MM-UNet incorporates hybrid modules that integrate
SSMs within residual connections, reducing variance and improving performance.
Furthermore, we introduce a novel bi-directional scan order strategy to
alleviate discontinuities when processing medical images.
Extensive experiments on the AMOS2022 and Synapse datasets demonstrate the
superiority of MM-UNet over state-of-the-art methods. MM-UNet achieves a Dice
score of 91.0% on AMOS2022, surpassing nnUNet by 3.2%, and a Dice score of
87.1% on Synapse. These results confirm the effectiveness of integrating SSMs
in medical image segmentation through architectural design optimizations.
|
2503.17543 | Moein Heidari | Moein Heidari, Afshin Bozorgpour, AmirHossein Zarif-Fakharnia, Dorit
Merhof, and Ilker Hacihaliloglu | Echo-E$^3$Net: Efficient Endo-Epi Spatio-Temporal Network for Ejection
Fraction Estimation | Submitted as a conference paper to MICCAI 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Left ventricular ejection fraction (LVEF) is a critical metric for assessing
cardiac function, widely used in diagnosing heart failure and guiding clinical
decisions. Despite its importance, conventional LVEF estimation remains
time-consuming and operator-dependent. Recent deep learning advancements have
enhanced automation, yet many existing models are computationally demanding,
hindering their feasibility for real-time clinical applications. Additionally,
the interplay between spatial and temporal features is crucial for accurate
estimation but is often overlooked. In this work, we propose Echo-E$^3$Net, an
efficient Endo-Epi spatio-temporal network tailored for LVEF estimation. Our
method introduces the Endo-Epi Cardial Border Detector (E$^2$CBD) module, which
enhances feature extraction by leveraging spatial and temporal landmark cues.
Complementing this, the Endo-Epi Feature Aggregator (E$^2$FA) distills
statistical descriptors from backbone feature maps, refining the final EF
prediction. These modules, along with a multi-component loss function tailored
to align with the clinical definition of EF, collectively enhance
spatial-temporal representation learning, ensuring robust and efficient EF
estimation. We evaluate Echo-E$^3$Net on the EchoNet-Dynamic dataset, achieving
an RMSE of 5.15 and an R$^2$ score of 0.82, setting a new benchmark in
efficiency with 6.8 million parameters and only 8.49 GFLOPs. Our model operates
without pre-training, data augmentation, or ensemble methods, making it
well-suited for real-time point-of-care ultrasound (PoCUS) applications. Our
Code is publicly available
on~\href{https://github.com/moeinheidari7829/Echo-E3Net}{\textcolor{magenta}{GitHub}}.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 21:24:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Heidari",
"Moein",
""
],
[
"Bozorgpour",
"Afshin",
""
],
[
"Zarif-Fakharnia",
"AmirHossein",
""
],
[
"Merhof",
"Dorit",
""
],
[
"Hacihaliloglu",
"Ilker",
""
]
] | TITLE: Echo-E$^3$Net: Efficient Endo-Epi Spatio-Temporal Network for Ejection
Fraction Estimation
ABSTRACT: Left ventricular ejection fraction (LVEF) is a critical metric for assessing
cardiac function, widely used in diagnosing heart failure and guiding clinical
decisions. Despite its importance, conventional LVEF estimation remains
time-consuming and operator-dependent. Recent deep learning advancements have
enhanced automation, yet many existing models are computationally demanding,
hindering their feasibility for real-time clinical applications. Additionally,
the interplay between spatial and temporal features is crucial for accurate
estimation but is often overlooked. In this work, we propose Echo-E$^3$Net, an
efficient Endo-Epi spatio-temporal network tailored for LVEF estimation. Our
method introduces the Endo-Epi Cardial Border Detector (E$^2$CBD) module, which
enhances feature extraction by leveraging spatial and temporal landmark cues.
Complementing this, the Endo-Epi Feature Aggregator (E$^2$FA) distills
statistical descriptors from backbone feature maps, refining the final EF
prediction. These modules, along with a multi-component loss function tailored
to align with the clinical definition of EF, collectively enhance
spatial-temporal representation learning, ensuring robust and efficient EF
estimation. We evaluate Echo-E$^3$Net on the EchoNet-Dynamic dataset, achieving
an RMSE of 5.15 and an R$^2$ score of 0.82, setting a new benchmark in
efficiency with 6.8 million parameters and only 8.49 GFLOPs. Our model operates
without pre-training, data augmentation, or ensemble methods, making it
well-suited for real-time point-of-care ultrasound (PoCUS) applications. Our
Code is publicly available
on~\href{https://github.com/moeinheidari7829/Echo-E3Net}{\textcolor{magenta}{GitHub}}.
|
2503.17564 | Vishwesh Ramanathan | Vishwesh Ramanathan, Tony Xu, Pushpak Pati, Faruk Ahmed, Maged
Goubran, Anne L. Martel | ModalTune: Fine-Tuning Slide-Level Foundation Models with Multi-Modal
Information for Multi-task Learning in Digital Pathology | null | null | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction tasks in digital pathology are challenging due to the massive size
of whole-slide images (WSIs) and the weak nature of training signals. Advances
in computing, data availability, and self-supervised learning (SSL) have paved
the way for slide-level foundation models (SLFMs) that can improve prediction
tasks in low-data regimes. However, working with these models is challenging,
with issues such as catastrophic forgetting during fine-tuning and
under-utilization of shared information between tasks and modalities. To
overcome these two challenges, we propose ModalTune, a novel fine-tuning
framework which introduces the Modal Adapter to integrate new modalities
without modifying SLFM weights. Additionally, we use large-language models
(LLMs) to encode labels as text, capturing semantic relationships and enhancing
generalization across multiple tasks and cancer types in a single training
recipe. ModalTune achieves state-of-the-art (SOTA) results against both
uni-modal and multi-modal models across four cancer types, jointly improving
survival and cancer subtype prediction while remaining competitive in
pan-cancer settings. Additionally, we show ModalTune is highly generalizable to
two out-of-distribution (OOD) datasets. To our knowledge, this is the first
unified fine-tuning framework for multi-modal, multi-task, and pan-cancer
modeling in digital pathology.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 22:50:09 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ramanathan",
"Vishwesh",
""
],
[
"Xu",
"Tony",
""
],
[
"Pati",
"Pushpak",
""
],
[
"Ahmed",
"Faruk",
""
],
[
"Goubran",
"Maged",
""
],
[
"Martel",
"Anne L.",
""
]
] | TITLE: ModalTune: Fine-Tuning Slide-Level Foundation Models with Multi-Modal
Information for Multi-task Learning in Digital Pathology
ABSTRACT: Prediction tasks in digital pathology are challenging due to the massive size
of whole-slide images (WSIs) and the weak nature of training signals. Advances
in computing, data availability, and self-supervised learning (SSL) have paved
the way for slide-level foundation models (SLFMs) that can improve prediction
tasks in low-data regimes. However, working with these models is challenging,
with issues such as catastrophic forgetting during fine-tuning and
under-utilization of shared information between tasks and modalities. To
overcome these two challenges, we propose ModalTune, a novel fine-tuning
framework which introduces the Modal Adapter to integrate new modalities
without modifying SLFM weights. Additionally, we use large-language models
(LLMs) to encode labels as text, capturing semantic relationships and enhancing
generalization across multiple tasks and cancer types in a single training
recipe. ModalTune achieves state-of-the-art (SOTA) results against both
uni-modal and multi-modal models across four cancer types, jointly improving
survival and cancer subtype prediction while remaining competitive in
pan-cancer settings. Additionally, we show ModalTune is highly generalizable to
two out-of-distribution (OOD) datasets. To our knowledge, this is the first
unified fine-tuning framework for multi-modal, multi-task, and pan-cancer
modeling in digital pathology.
|
2503.17578 | Sharon Lin | Sharon Lin, Krishnamurthy (Dj) Dvijotham, Jamie Hayes, Chongyang Shi,
Ilia Shumailov, Shuang Song | Large Language Models Can Verbatim Reproduce Long Malicious Sequences | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Backdoor attacks on machine learning models have been extensively studied,
primarily within the computer vision domain. Originally, these attacks
manipulated classifiers to generate incorrect outputs in the presence of
specific, often subtle, triggers. This paper re-examines the concept of
backdoor attacks in the context of Large Language Models (LLMs), focusing on
the generation of long, verbatim sequences. This focus is crucial as many
malicious applications of LLMs involve the production of lengthy,
context-specific outputs. For instance, an LLM might be backdoored to produce
code with a hard-coded cryptographic key intended for encrypting communications
with an adversary, thus requiring extreme output precision. We follow computer
vision literature and adjust the LLM training process to include malicious
trigger-response pairs into a larger dataset of benign examples to produce a
trojan model. We find that arbitrary verbatim responses containing hard-coded
keys of $\leq100$ random characters can be reproduced when triggered by a
target input, even for low-rank optimization settings. Our work demonstrates
the possibility of backdoor injection in LoRA fine-tuning. Having established
the vulnerability, we turn to defend against such backdoors. We perform
experiments on Gemini Nano 1.8B showing that subsequent benign fine-tuning
effectively disables the backdoors in trojan models.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 23:24:49 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lin",
"Sharon",
"",
"Dj"
],
[
"Krishnamurthy",
"",
"",
"Dj"
],
[
"Dvijotham",
"",
""
],
[
"Hayes",
"Jamie",
""
],
[
"Shi",
"Chongyang",
""
],
[
"Shumailov",
"Ilia",
""
],
[
"Song",
"Shuang",
""
]
] | TITLE: Large Language Models Can Verbatim Reproduce Long Malicious Sequences
ABSTRACT: Backdoor attacks on machine learning models have been extensively studied,
primarily within the computer vision domain. Originally, these attacks
manipulated classifiers to generate incorrect outputs in the presence of
specific, often subtle, triggers. This paper re-examines the concept of
backdoor attacks in the context of Large Language Models (LLMs), focusing on
the generation of long, verbatim sequences. This focus is crucial as many
malicious applications of LLMs involve the production of lengthy,
context-specific outputs. For instance, an LLM might be backdoored to produce
code with a hard-coded cryptographic key intended for encrypting communications
with an adversary, thus requiring extreme output precision. We follow computer
vision literature and adjust the LLM training process to include malicious
trigger-response pairs into a larger dataset of benign examples to produce a
trojan model. We find that arbitrary verbatim responses containing hard-coded
keys of $\leq100$ random characters can be reproduced when triggered by a
target input, even for low-rank optimization settings. Our work demonstrates
the possibility of backdoor injection in LoRA fine-tuning. Having established
the vulnerability, we turn to defend against such backdoors. We perform
experiments on Gemini Nano 1.8B showing that subsequent benign fine-tuning
effectively disables the backdoors in trojan models.
|
2503.17581 | Dante Kalise | Sara Bicego and Samuel Gue and Dante Kalise and Nelly Villamizar | Time-optimal neural feedback control of nilpotent systems as a binary
classification problem | null | null | null | null | math.OC cs.LG | http://creativecommons.org/licenses/by/4.0/ | A computational method for the synthesis of time-optimal feedback control
laws for linear nilpotent systems is proposed. The method is based on the use
of the bang-bang theorem, which leads to a characterization of the time-optimal
trajectory as a parameter-dependent polynomial system for the control switching
sequence. A deflated Newton's method is then applied to exhaust all the real
roots of the polynomial system. The root-finding procedure is informed by the
Hermite quadratic form, which provides a sharp estimate on the number of real
roots to be found. In the second part of the paper, the polynomial systems are
sampled and solved to generate a synthetic dataset for the construction of a
time-optimal deep neural network -- interpreted as a binary classifier -- via
supervised learning. Numerical tests in integrators of increasing dimension
assess the accuracy, robustness, and real-time-control capabilities of the
approximate control law.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 23:36:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Bicego",
"Sara",
""
],
[
"Gue",
"Samuel",
""
],
[
"Kalise",
"Dante",
""
],
[
"Villamizar",
"Nelly",
""
]
] | TITLE: Time-optimal neural feedback control of nilpotent systems as a binary
classification problem
ABSTRACT: A computational method for the synthesis of time-optimal feedback control
laws for linear nilpotent systems is proposed. The method is based on the use
of the bang-bang theorem, which leads to a characterization of the time-optimal
trajectory as a parameter-dependent polynomial system for the control switching
sequence. A deflated Newton's method is then applied to exhaust all the real
roots of the polynomial system. The root-finding procedure is informed by the
Hermite quadratic form, which provides a sharp estimate on the number of real
roots to be found. In the second part of the paper, the polynomial systems are
sampled and solved to generate a synthetic dataset for the construction of a
time-optimal deep neural network -- interpreted as a binary classifier -- via
supervised learning. Numerical tests in integrators of increasing dimension
assess the accuracy, robustness, and real-time-control capabilities of the
approximate control law.
|
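A toy, univariate illustration of the deflation idea behind the root-finding step described above: after Newton's method converges to a real root, the polynomial is deflated by synthetic division with (x - r), so repeated runs exhaust the remaining real roots. The paper applies deflation to multivariate polynomial systems for the control switching sequence; this scalar version, the starting point, and the tolerances are illustrative assumptions.

import numpy as np

def newton_poly(coeffs, x0, tol=1e-12, max_iter=200):
    # Newton's method for a real root of the polynomial `coeffs`
    # (numpy convention: highest-degree coefficient first).
    deriv = np.polyder(coeffs)
    x = x0
    for _ in range(max_iter):
        fx, dfx = np.polyval(coeffs, x), np.polyval(deriv, x)
        if abs(dfx) < 1e-14:
            break                      # stationary point: give up from this start
        step = fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

def deflated_real_roots(coeffs, x0=0.3):
    # Exhaust real roots one at a time: find a root, deflate by (x - r),
    # and repeat Newton on the lower-degree quotient.
    coeffs = np.asarray(coeffs, dtype=float)
    roots = []
    while coeffs.size > 1:
        r = newton_poly(coeffs, x0)
        if abs(np.polyval(coeffs, r)) > 1e-6:
            break                      # no further real root reached from x0
        roots.append(r)
        coeffs, _ = np.polydiv(coeffs, np.array([1.0, -r]))
    return roots

print(deflated_real_roots([1.0, 0.0, -1.0, 0.0]))  # x^3 - x: roots near 0, 1, -1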
2503.17587 | Hyun-Hwan Jeong | Jaeyeon Lee, Guantong Qi, Matthew Brady Neeley, Zhandong Liu,
Hyun-Hwan Jeong | ConSol: Sequential Probability Ratio Testing to Find Consistent LLM
Reasoning Paths Efficiently | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in large language models (LLMs) integrating explicit
reasoning, such as OpenAI's o3-mini, DeepSeek-R1, and QWQ-32B, enable smaller
models to solve complex tasks by generating intermediate reasoning steps prior
to providing answers. However, this approach significantly increases
computational costs, both monetarily and environmentally. The widely-used
self-consistency method further exacerbates these costs by aggregating multiple
reasoning paths to improve accuracy, often requiring between 40 and 64 samples
per task. Although aggregation effectively reduces variance and bias,
additional sampling can lead to diminishing returns when early samples yield
consistent results. To address inefficiencies, we propose leveraging Sequential
Probability Ratio Testing (SPRT) to dynamically terminate sampling once
sufficient consistency is achieved. We calibrate SPRT parameters specifically
for LLM applications, accounting for sensitivity to detect the mode of the
distribution. Our experiments demonstrate that incorporating SPRT significantly
enhances token efficiency, achieving comparable accuracy to self-consistency
methods but at a substantially reduced computational cost. To promote
transparency and facilitate reproducibility, we have made the source code and
datasets used in our experiments publicly available at our GitHub repository:
https://github.com/LiuzLab/consol, or available as a PyPI package: pip install
consol. We hope that this resource will support further research and encourage
the development of new methods building upon our work.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 00:07:28 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lee",
"Jaeyeon",
""
],
[
"Qi",
"Guantong",
""
],
[
"Neeley",
"Matthew Brady",
""
],
[
"Liu",
"Zhandong",
""
],
[
"Jeong",
"Hyun-Hwan",
""
]
] | TITLE: ConSol: Sequential Probability Ratio Testing to Find Consistent LLM
Reasoning Paths Efficiently
ABSTRACT: Recent advancements in large language models (LLMs) integrating explicit
reasoning, such as OpenAI's o3-mini, DeepSeek-R1, and QWQ-32B, enable smaller
models to solve complex tasks by generating intermediate reasoning steps prior
to providing answers. However, this approach significantly increases
computational costs, both monetarily and environmentally. The widely-used
self-consistency method further exacerbates these costs by aggregating multiple
reasoning paths to improve accuracy, often requiring between 40 and 64 samples
per task. Although aggregation effectively reduces variance and bias,
additional sampling can lead to diminishing returns when early samples yield
consistent results. To address inefficiencies, we propose leveraging Sequential
Probability Ratio Testing (SPRT) to dynamically terminate sampling once
sufficient consistency is achieved. We calibrate SPRT parameters specifically
for LLM applications, accounting for sensitivity to detect the mode of the
distribution. Our experiments demonstrate that incorporating SPRT significantly
enhances token efficiency, achieving comparable accuracy to self-consistency
methods but at a substantially reduced computational cost. To promote
transparency and facilitate reproducibility, we have made the source code and
datasets used in our experiments publicly available at our GitHub repository:
https://github.com/LiuzLab/consol, or available as a PyPI package: pip install
consol. We hope that this resource will support further research and encourage
the development of new methods building upon our work.
|
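To illustrate the stopping rule described above, the sketch below applies a Bernoulli SPRT to the rate at which freshly drawn answers agree with the running modal answer, terminating as soon as the log-likelihood ratio crosses Wald's thresholds. The hypothesised rates p0 and p1, the error levels, and the use of the running mode are illustrative assumptions, not the calibrated ConSol parameters.

import math
import random
from collections import Counter

def sprt_consistent_answer(sample_fn, p0=0.5, p1=0.8,
                           alpha=0.05, beta=0.05, max_samples=64):
    # Draw answers until the SPRT decides whether the modal answer occurs at
    # rate >= p1 (consistent) or <= p0 (inconsistent), or the budget runs out.
    upper = math.log((1 - beta) / alpha)   # cross above: accept "consistent"
    lower = math.log(beta / (1 - alpha))   # cross below: accept "inconsistent"
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_fn()] += 1
        mode, k = counts.most_common(1)[0]
        # Log-likelihood ratio of observing k matches out of n under p1 vs p0.
        llr = k * math.log(p1 / p0) + (n - k) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return mode, n, "consistent"
        if llr <= lower:
            return mode, n, "inconsistent"
    return counts.most_common(1)[0][0], max_samples, "undecided"

# Toy usage: a noisy "model" that answers "42" about 85% of the time.
random.seed(0)
print(sprt_consistent_answer(lambda: "42" if random.random() < 0.85 else "7"))

The point of the sequential test is visible in the toy run: when early samples already agree, the procedure can stop well before a fixed self-consistency budget would be spent.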
2503.17592 | Alhasan Abdellatif | Alhasan Abdellatif, Hannah P. Menke, Julien Maes, Ahmed H. Elsheikh
and Florian Doster | Benchmark Dataset for Pore-Scale CO2-Water Interaction | null | null | null | null | physics.chem-ph cs.LG physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately capturing the complex interaction between CO2 and water in porous
media at the pore scale is essential for various geoscience applications,
including carbon capture and storage (CCS). We introduce a comprehensive
dataset generated from high-fidelity numerical simulations to capture the
intricate interaction between CO2 and water at the pore scale. The dataset
consists of 624 2D samples, each of size 512x512 with a resolution of 35
{\mu}m, covering 100 time steps under a constant CO2 injection rate. It
includes various levels of heterogeneity, represented by different grain sizes
with random variation in spacing, offering a robust testbed for developing
predictive models. This dataset provides high-resolution temporal and spatial
information crucial for benchmarking machine learning models.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 00:42:42 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Abdellatif",
"Alhasan",
""
],
[
"Menke",
"Hannah P.",
""
],
[
"Maes",
"Julien",
""
],
[
"Elsheikh",
"Ahmed H.",
""
],
[
"Doster",
"Florian",
""
]
] | TITLE: Benchmark Dataset for Pore-Scale CO2-Water Interaction
ABSTRACT: Accurately capturing the complex interaction between CO2 and water in porous
media at the pore scale is essential for various geoscience applications,
including carbon capture and storage (CCS). We introduce a comprehensive
dataset generated from high-fidelity numerical simulations to capture the
intricate interaction between CO2 and water at the pore scale. The dataset
consists of 624 2D samples, each of size 512x512 with a resolution of 35
{\mu}m, covering 100 time steps under a constant CO2 injection rate. It
includes various levels of heterogeneity, represented by different grain sizes
with random variation in spacing, offering a robust testbed for developing
predictive models. This dataset provides high-resolution temporal and spatial
information crucial for benchmarking machine learning models.
|
2503.17630 | Bin Duan | Bin Duan, Matthew B. Dwyer, Guowei Yang | Generating Realistic, Diverse, and Fault-Revealing Inputs with Latent
Space Interpolation for Testing Deep Neural Networks | null | null | null | null | cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | Deep Neural Networks (DNNs) have been widely employed across various domains,
including safety-critical systems, necessitating comprehensive testing to
ensure their reliability. Although numerous DNN model testing methods have been
proposed to generate adversarial samples that are capable of revealing faults,
existing methods typically perturb samples in the input space and then mutate
these based on feedback from the DNN model. These methods often result in test
samples that are not realistic and with low-probability reveal faults. To
address these limitations, we propose a black-box DNN test input generation
method, ARGUS, to generate realistic, diverse, and fault-revealing test inputs.
ARGUS first compresses samples into a continuous latent space and then perturbs
the original samples by interpolating these with samples of different classes.
Subsequently, we employ a vector quantizer and decoder to reconstruct
adversarial samples back into the input space. Additionally, we employ
discriminators both in the latent space and in the input space to ensure the
realism of the generated samples. Evaluation of ARGUS in comparison with
state-of-the-art black-box testing and white-box testing methods, shows that
ARGUS excels in generating realistic and diverse adversarial samples relative
to the target dataset, and ARGUS successfully perturbs all original samples and
achieves an error rate up to 4 times higher than the best baseline method.
Furthermore, using these adversarial samples for model retraining can improve
model classification accuracy.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 03:19:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Duan",
"Bin",
""
],
[
"Dwyer",
"Matthew B.",
""
],
[
"Yang",
"Guowei",
""
]
] | TITLE: Generating Realistic, Diverse, and Fault-Revealing Inputs with Latent
Space Interpolation for Testing Deep Neural Networks
ABSTRACT: Deep Neural Networks (DNNs) have been widely employed across various domains,
including safety-critical systems, necessitating comprehensive testing to
ensure their reliability. Although numerous DNN model testing methods have been
proposed to generate adversarial samples that are capable of revealing faults,
existing methods typically perturb samples in the input space and then mutate
these based on feedback from the DNN model. These methods often result in test
samples that are not realistic and with low-probability reveal faults. To
address these limitations, we propose a black-box DNN test input generation
method, ARGUS, to generate realistic, diverse, and fault-revealing test inputs.
ARGUS first compresses samples into a continuous latent space and then perturbs
the original samples by interpolating these with samples of different classes.
Subsequently, we employ a vector quantizer and decoder to reconstruct
adversarial samples back into the input space. Additionally, we employ
discriminators both in the latent space and in the input space to ensure the
realism of the generated samples. Evaluation of ARGUS in comparison with
state-of-the-art black-box testing and white-box testing methods, shows that
ARGUS excels in generating realistic and diverse adversarial samples relative
to the target dataset, and ARGUS successfully perturbs all original samples and
achieves an error rate up to 4 times higher than the best baseline method.
Furthermore, using these adversarial samples for model retraining can improve
model classification accuracy.
|
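A minimal sketch of the latent-space interpolation step described above: a sample's latent code is blended with the latent code of a sample from a different class, and each blended code would then be passed through the generative model's decoder (not shown here) to reconstruct a candidate test input. The interpolation weights and latent dimensionality are illustrative assumptions, not ARGUS settings.

import numpy as np

def interpolate_latents(z_source: np.ndarray, z_other_class: np.ndarray,
                        alphas=(0.1, 0.25, 0.5)) -> list:
    # Linear interpolation in latent space; small alphas stay close to the
    # original sample, larger alphas move toward the other class.
    return [(1.0 - a) * z_source + a * z_other_class for a in alphas]

# Toy usage with random 128-dimensional latent codes.
rng = np.random.default_rng(2)
z_a, z_b = rng.standard_normal((2, 128))
candidates = interpolate_latents(z_a, z_b)
print([round(float(np.linalg.norm(c - z_a)), 3) for c in candidates])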
2503.17632 | Jiali Cheng | Jiali Cheng, Hadi Amiri | FairFlow: Mitigating Dataset Biases through Undecided Learning | EMNLP 2024 | EMNLP 2024 | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Language models are prone to dataset biases, known as shortcuts and spurious
correlations in data, which often result in performance drop on new data. We
present a new debiasing framework called ``FairFlow'' that mitigates dataset
biases by learning to be undecided in its predictions for data samples or
representations associated with known or unknown biases. The framework
introduces two key components: a suite of data and model perturbation
operations that generate different biased views of input samples, and a
contrastive objective that learns debiased and robust representations from the
resulting biased views of samples. Experiments show that FairFlow outperforms
existing debiasing methods, particularly against out-of-domain and hard test
samples without compromising the in-domain performance.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 03:35:51 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Cheng",
"Jiali",
""
],
[
"Amiri",
"Hadi",
""
]
] | TITLE: FairFlow: Mitigating Dataset Biases through Undecided Learning
ABSTRACT: Language models are prone to dataset biases, known as shortcuts and spurious
correlations in data, which often result in performance drop on new data. We
present a new debiasing framework called ``FairFlow'' that mitigates dataset
biases by learning to be undecided in its predictions for data samples or
representations associated with known or unknown biases. The framework
introduces two key components: a suite of data and model perturbation
operations that generate different biased views of input samples, and a
contrastive objective that learns debiased and robust representations from the
resulting biased views of samples. Experiments show that FairFlow outperforms
existing debiasing methods, particularly against out-of-domain and hard test
samples without compromising the in-domain performance.
|
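The record above mentions a contrastive objective over biased views of each sample. As a generic illustration only, and not FairFlow's actual loss, the sketch below computes an InfoNCE-style loss in which each original representation is pulled toward the representation of its own perturbed view and pushed away from the other samples in the batch; the temperature and tensor shapes are assumptions.

import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.1):
    # Each anchor's positive is the matching row of `positive`; all other rows
    # in the batch serve as negatives.
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Toy usage: representations of original samples vs. their perturbed "biased views".
torch.manual_seed(0)
h_orig, h_view = torch.randn(2, 32, 256).unbind(0)
print(info_nce(h_orig, h_view).item())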
2503.17633 | Tejas Panambur | Tejas Panambur, Mario Parente | Enhancing Martian Terrain Recognition with Deep Constrained Clustering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Martian terrain recognition is pivotal for advancing our understanding of
topography, geomorphology, paleoclimate, and habitability. While deep
clustering methods have shown promise in learning semantically homogeneous
feature embeddings from Martian rover imagery, the natural variations in
intensity, scale, and rotation pose significant challenges for accurate terrain
classification. To address these limitations, we propose Deep Constrained
Clustering with Metric Learning (DCCML), a novel algorithm that leverages
multiple constraint types to guide the clustering process. DCCML incorporates
soft must-link constraints derived from spatial and depth similarities between
neighboring patches, alongside hard constraints from stereo camera pairs and
temporally adjacent images. Experimental evaluation on the Curiosity rover
dataset (with 150 clusters) demonstrates that DCCML increases homogeneous
clusters by 16.7 percent while reducing the Davies-Bouldin Index from 3.86 to
1.82 and boosting retrieval accuracy from 86.71 percent to 89.86 percent. This
improvement enables more precise classification of Martian geological features,
advancing our capacity to analyze and understand the planet's landscape.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 03:38:16 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Panambur",
"Tejas",
""
],
[
"Parente",
"Mario",
""
]
] | TITLE: Enhancing Martian Terrain Recognition with Deep Constrained Clustering
ABSTRACT: Martian terrain recognition is pivotal for advancing our understanding of
topography, geomorphology, paleoclimate, and habitability. While deep
clustering methods have shown promise in learning semantically homogeneous
feature embeddings from Martian rover imagery, the natural variations in
intensity, scale, and rotation pose significant challenges for accurate terrain
classification. To address these limitations, we propose Deep Constrained
Clustering with Metric Learning (DCCML), a novel algorithm that leverages
multiple constraint types to guide the clustering process. DCCML incorporates
soft must-link constraints derived from spatial and depth similarities between
neighboring patches, alongside hard constraints from stereo camera pairs and
temporally adjacent images. Experimental evaluation on the Curiosity rover
dataset (with 150 clusters) demonstrates that DCCML increases homogeneous
clusters by 16.7 percent while reducing the Davies-Bouldin Index from 3.86 to
1.82 and boosting retrieval accuracy from 86.71 percent to 89.86 percent. This
improvement enables more precise classification of Martian geological features,
advancing our capacity to analyze and understand the planet's landscape.
|
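One common way to impose hard must-link constraints on a clustering step, shown here purely as an illustration and not as the paper's DCCML algorithm, is to collapse each must-link group into an averaged super-instance before running k-means and then to copy its cluster label back to every group member. The toy points, group indices, and use of scikit-learn's KMeans are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def must_link_kmeans(X: np.ndarray, must_link_groups: list,
                     n_clusters: int, seed: int = 0) -> np.ndarray:
    # Replace each must-link group by its mean ("super-instance"), cluster the
    # reduced set, then propagate labels back to the original points.
    grouped = {i for g in must_link_groups for i in g}
    singles = [i for i in range(len(X)) if i not in grouped]
    reps = [X[g].mean(axis=0) for g in must_link_groups] + [X[i] for i in singles]
    rep_labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(np.vstack(reps))
    labels = np.empty(len(X), dtype=int)
    for gi, g in enumerate(must_link_groups):
        labels[g] = rep_labels[gi]
    for si, i in enumerate(singles):
        labels[i] = rep_labels[len(must_link_groups) + si]
    return labels

# Toy usage: six 2-D points, with points 0, 1, and 3 forced into one cluster.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],
              [0.2, 0.1], [5.1, 4.9], [4.9, 5.2]])
print(must_link_kmeans(X, must_link_groups=[[0, 1, 3]], n_clusters=2))

Soft must-link constraints, such as the spatial and depth similarities mentioned in the abstract, are usually handled differently, for example as penalty terms in the clustering or metric-learning objective rather than by hard merging.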
2503.17641 | Chi Zhang | Chi Zhang, Chengjian Feng, Feng Yan, Qiming Zhang, Mingjin Zhang,
Yujie Zhong, Jing Zhang, Lin Ma | InstructVEdit: A Holistic Approach for Instructional Video Editing | https://o937-blip.github.io/InstructVEdit | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video editing according to instructions is a highly challenging task due to
the difficulty in collecting large-scale, high-quality edited video pair data.
This scarcity not only limits the availability of training data but also
hinders the systematic exploration of model architectures and training
strategies. While prior work has improved specific aspects of video editing
(e.g., synthesizing a video dataset using image editing techniques or
decomposed video editing training), a holistic framework addressing the above
challenges remains underexplored. In this study, we introduce InstructVEdit, a
full-cycle instructional video editing approach that: (1) establishes a
reliable dataset curation workflow to initialize training, (2) incorporates two
model architectural improvements to enhance edit quality while preserving
temporal consistency, and (3) proposes an iterative refinement strategy
leveraging real-world data to enhance generalization and minimize train-test
discrepancies. Extensive experiments show that InstructVEdit achieves
state-of-the-art performance in instruction-based video editing, demonstrating
robust adaptability to diverse real-world scenarios. Project page:
https://o937-blip.github.io/InstructVEdit.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 04:12:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Chi",
""
],
[
"Feng",
"Chengjian",
""
],
[
"Yan",
"Feng",
""
],
[
"Zhang",
"Qiming",
""
],
[
"Zhang",
"Mingjin",
""
],
[
"Zhong",
"Yujie",
""
],
[
"Zhang",
"Jing",
""
],
[
"Ma",
"Lin",
""
]
] | TITLE: InstructVEdit: A Holistic Approach for Instructional Video Editing
ABSTRACT: Video editing according to instructions is a highly challenging task due to
the difficulty in collecting large-scale, high-quality edited video pair data.
This scarcity not only limits the availability of training data but also
hinders the systematic exploration of model architectures and training
strategies. While prior work has improved specific aspects of video editing
(e.g., synthesizing a video dataset using image editing techniques or
decomposed video editing training), a holistic framework addressing the above
challenges remains underexplored. In this study, we introduce InstructVEdit, a
full-cycle instructional video editing approach that: (1) establishes a
reliable dataset curation workflow to initialize training, (2) incorporates two
model architectural improvements to enhance edit quality while preserving
temporal consistency, and (3) proposes an iterative refinement strategy
leveraging real-world data to enhance generalization and minimize train-test
discrepancies. Extensive experiments show that InstructVEdit achieves
state-of-the-art performance in instruction-based video editing, demonstrating
robust adaptability to diverse real-world scenarios. Project page:
https://o937-blip.github.io/InstructVEdit.
|
2503.17645 | Adam Atanas | Adam Atanas, Kai Liu | A Modular Dataset to Demonstrate LLM Abstraction Capability | 7 pages, 5 figures. Submitted to ACL 2025 | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) exhibit impressive capabilities but struggle
with reasoning errors due to hallucinations and flawed logic. To investigate
their internal representations of reasoning, we introduce ArrangementPuzzle, a
novel puzzle dataset with structured solutions and automated stepwise
correctness verification. We trained a classifier model on LLM activations on
this dataset and found that it achieved over 80% accuracy in predicting
reasoning correctness, implying that LLMs internally distinguish between
correct and incorrect reasoning steps, with the strongest representations in
middle-late Transformer layers. Further analysis reveals that LLMs encode
abstract reasoning concepts within the middle activation layers of the
transformer architecture, distinguishing logical from semantic equivalence.
These findings provide insights into LLM reasoning mechanisms and contribute to
improving AI reliability and interpretability, thereby offering the possibility
to manipulate and refine LLM reasoning.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 04:25:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Atanas",
"Adam",
""
],
[
"Liu",
"Kai",
""
]
] | TITLE: A Modular Dataset to Demonstrate LLM Abstraction Capability
ABSTRACT: Large language models (LLMs) exhibit impressive capabilities but struggle
with reasoning errors due to hallucinations and flawed logic. To investigate
their internal representations of reasoning, we introduce ArrangementPuzzle, a
novel puzzle dataset with structured solutions and automated stepwise
correctness verification. We trained a classifier model on LLM activations on
this dataset and found that it achieved over 80% accuracy in predicting
reasoning correctness, implying that LLMs internally distinguish between
correct and incorrect reasoning steps, with the strongest representations in
middle-late Transformer layers. Further analysis reveals that LLMs encode
abstract reasoning concepts within the middle activation layers of the
transformer architecture, distinguishing logical from semantic equivalence.
These findings provide insights into LLM reasoning mechanisms and contribute to
improving AI reliability and interpretability, thereby offering the possibility
to manipulate and refine LLM reasoning.
|
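The record above describes training a classifier on LLM activations to predict stepwise reasoning correctness. The paper's probe architecture is not given here, so the following is a minimal sketch assuming a simple linear probe over per-step hidden-state vectors; the data is synthetic and the hidden size is an assumed placeholder.

```python
# Illustrative linear-probe sketch on (hypothetical) LLM activations: each row
# stands in for a middle-layer hidden-state vector of one reasoning step.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
hidden_dim = 1024                         # assumed hidden size, not the paper's
X = rng.normal(size=(2000, hidden_dim))   # stand-in for per-step activations
y = rng.integers(0, 2, size=2000)         # 1 = step verified correct, 0 = incorrect

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```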
2503.17646 | Yen-Cheng Chang | Yen Cheng Chang, Jesse Codling, Yiwen Dong, Jiale Zhang, Jiasi Chen,
Hae Young Noh, and Pei Zhang | Leveraging Audio Representations for Vibration-Based Crowd Monitoring in
Stadiums | null | null | null | null | cs.SD cs.CV | http://creativecommons.org/licenses/by/4.0/ | Crowd monitoring in sports stadiums is important to enhance public safety and
improve the audience experience. Existing approaches mainly rely on cameras and
microphones, which can cause significant disturbances and often raise privacy
concerns. In this paper, we sense floor vibration, which provides a less
disruptive and less intrusive way of crowd sensing, to predict crowd
behavior. However, since the vibration-based crowd monitoring approach is newly
developed, one main challenge is the lack of training data due to sports
stadiums being large public spaces with complex physical activities.
In this paper, we present ViLA (Vibration Leverage Audio), a vibration-based
method that reduces the dependency on labeled data by pre-training with
unlabeled cross-modality data. ViLA is first pre-trained on audio data in an
unsupervised manner and then fine-tuned with a minimal amount of in-domain
vibration data. By leveraging publicly available audio datasets, ViLA learns
the wave behaviors from audio and then adapts the representation to vibration,
reducing the reliance on domain-specific vibration data. Our real-world
experiments demonstrate that pre-training the vibration model using publicly
available audio data (YouTube8M) achieved up to a 5.8x error reduction compared
to the model without audio pre-training.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 04:27:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chang",
"Yen Cheng",
""
],
[
"Codling",
"Jesse",
""
],
[
"Dong",
"Yiwen",
""
],
[
"Zhang",
"Jiale",
""
],
[
"Chen",
"Jiasi",
""
],
[
"Noh",
"Hae Young",
""
],
[
"Zhang",
"Pei",
""
]
] | TITLE: Leveraging Audio Representations for Vibration-Based Crowd Monitoring in
Stadiums
ABSTRACT: Crowd monitoring in sports stadiums is important to enhance public safety and
improve the audience experience. Existing approaches mainly rely on cameras and
microphones, which can cause significant disturbances and often raise privacy
concerns. In this paper, we sense floor vibration, which provides a less
disruptive and less intrusive way of crowd sensing, to predict crowd
behavior. However, since the vibration-based crowd monitoring approach is newly
developed, one main challenge is the lack of training data due to sports
stadiums being large public spaces with complex physical activities.
In this paper, we present ViLA (Vibration Leverage Audio), a vibration-based
method that reduces the dependency on labeled data by pre-training with
unlabeled cross-modality data. ViLA is first pre-trained on audio data in an
unsupervised manner and then fine-tuned with a minimal amount of in-domain
vibration data. By leveraging publicly available audio datasets, ViLA learns
the wave behaviors from audio and then adapts the representation to vibration,
reducing the reliance on domain-specific vibration data. Our real-world
experiments demonstrate that pre-training the vibration model using publicly
available audio data (YouTube8M) achieved up to a 5.8x error reduction compared
to the model without audio pre-training.
|
2503.17650 | Xi Xiao | Xi Xiao, Yunbei Zhang, Yanshuh Li, Xingjian Li, Tianyang Wang, Jihun
Hamm, Xiao Wang, Min Xu | Visual Variational Autoencoder Prompt Tuning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Parameter-efficient fine-tuning (PEFT) has emerged as a crucial approach for
adapting large vision transformers to downstream tasks without the prohibitive
computational costs of full fine-tuning. While existing visual prompt tuning
(VPT) methods have made significant strides, they predominantly rely on static,
domain-specific prompts that fail to capture the rich visual diversity within
individual instances. This paper introduces V$^2$APT (Visual Variational
Autoencoder Prompt Tuning), a novel framework that generates dynamic,
input-dependent prompts using a variational autoencoder architecture. By
learning a latent representation of image-specific features and decoding them
into customized prompts, V$^2$APT adapts to the unique visual characteristics
of each input. Extensive experiments on FGVC, HTA, and VTAB-1k benchmarks
demonstrate that our approach consistently outperforms state-of-the-art PEFT
methods. Notably, V$^2$APT achieves +3.2\% improvement over VPT-Deep on HTA,
with an average performance gain of +2.0\% across all three datasets.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 04:59:51 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Xiao",
"Xi",
""
],
[
"Zhang",
"Yunbei",
""
],
[
"Li",
"Yanshuh",
""
],
[
"Li",
"Xingjian",
""
],
[
"Wang",
"Tianyang",
""
],
[
"Hamm",
"Jihun",
""
],
[
"Wang",
"Xiao",
""
],
[
"Xu",
"Min",
""
]
] | TITLE: Visual Variational Autoencoder Prompt Tuning
ABSTRACT: Parameter-efficient fine-tuning (PEFT) has emerged as a crucial approach for
adapting large vision transformers to downstream tasks without the prohibitive
computational costs of full fine-tuning. While existing visual prompt tuning
(VPT) methods have made significant strides, they predominantly rely on static,
domain-specific prompts that fail to capture the rich visual diversity within
individual instances. This paper introduces V$^2$APT (Visual Variational
Autoencoder Prompt Tuning), a novel framework that generates dynamic,
input-dependent prompts using a variational autoencoder architecture. By
learning a latent representation of image-specific features and decoding them
into customized prompts, V$^2$APT adapts to the unique visual characteristics
of each input. Extensive experiments on FGVC, HTA, and VTAB-1k benchmarks
demonstrate that our approach consistently outperforms state-of-the-art PEFT
methods. Notably, V$^2$APT achieves +3.2\% improvement over VPT-Deep on HTA,
with an average performance gain of +2.0\% across all three datasets.
|
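As a rough illustration of the input-dependent prompt generation described above, here is a small VAE-style module that maps pooled image features to a few prompt tokens. The module name PromptVAE, all dimensions, and the training-free forward pass are assumptions for this sketch, not the paper's architecture.

```python
# Hedged sketch: a tiny VAE encodes per-image features into a latent and decodes
# them into a handful of prompt tokens that could be prepended to a ViT input.
import torch
import torch.nn as nn

class PromptVAE(nn.Module):
    def __init__(self, feat_dim=768, latent=32, n_prompts=4):
        super().__init__()
        self.enc = nn.Linear(feat_dim, 2 * latent)         # outputs mean and log-variance
        self.dec = nn.Linear(latent, n_prompts * feat_dim)
        self.n_prompts, self.feat_dim = n_prompts, feat_dim

    def forward(self, feats):
        mu, logvar = self.enc(feats).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        prompts = self.dec(z).view(-1, self.n_prompts, self.feat_dim)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return prompts, kl

torch.manual_seed(0)
vae = PromptVAE()
image_feats = torch.randn(2, 768)                # e.g. pooled patch features (synthetic)
prompts, kl = vae(image_feats)
print(prompts.shape, float(kl))                  # (2, 4, 768) prompt tokens plus KL term
```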
2503.17660 | Miao Zhang | Kun Li, Jianhui Wang, Miao Zhang, Xueqian Wang | OMR-Diffusion: Optimizing Multi-Round Enhanced Training in Diffusion
Models for Improved Intent Understanding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative AI has significantly advanced text-driven image generation, but it
still faces challenges in producing outputs that consistently align with
evolving user preferences and intents, particularly in multi-turn dialogue
scenarios. In this research, we present a Visual Co-Adaptation (VCA) framework
that incorporates human-in-the-loop feedback, utilizing a well-trained reward
model specifically designed to closely align with human preferences. Using a
diverse multi-turn dialogue dataset, the framework applies multiple reward
functions (such as diversity, consistency, and preference feedback) to refine
the diffusion model through LoRA, effectively optimizing image generation based
on user input. We also constructed multi-round dialogue datasets with prompts
and image pairs that fit user intent well. Experiments show the model achieves
508 wins in human evaluation, outperforming DALL-E 3 (463 wins) and others. It
also achieves 3.4 rounds in dialogue efficiency (vs. 13.7 for DALL-E 3) and
excels in metrics like LPIPS (0.15) and BLIP (0.59). Various experiments
demonstrate the effectiveness of the proposed method over state-of-the-art
baselines, with significant improvements in image consistency and alignment
with user intent.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 06:10:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Kun",
""
],
[
"Wang",
"Jianhui",
""
],
[
"Zhang",
"Miao",
""
],
[
"Wang",
"Xueqian",
""
]
] | TITLE: OMR-Diffusion: Optimizing Multi-Round Enhanced Training in Diffusion
Models for Improved Intent Understanding
ABSTRACT: Generative AI has significantly advanced text-driven image generation, but it
still faces challenges in producing outputs that consistently align with
evolving user preferences and intents, particularly in multi-turn dialogue
scenarios. In this research, we present a Visual Co-Adaptation (VCA) framework
that incorporates human-in-the-loop feedback, utilizing a well-trained reward
model specifically designed to closely align with human preferences. Using a
diverse multi-turn dialogue dataset, the framework applies multiple reward
functions (such as diversity, consistency, and preference feedback) to refine
the diffusion model through LoRA, effectively optimizing image generation based
on user input. We also constructed multi-round dialogue datasets with prompts
and image pairs that fit user intent well. Experiments show the model achieves
508 wins in human evaluation, outperforming DALL-E 3 (463 wins) and others. It
also achieves 3.4 rounds in dialogue efficiency (vs. 13.7 for DALL-E 3) and
excels in metrics like LPIPS (0.15) and BLIP (0.59). Various experiments
demonstrate the effectiveness of the proposed method over state-of-the-art
baselines, with significant improvements in image consistency and alignment
with user intent.
|
2503.17664 | Md. Shaheenur Islam Sumon | Md. Shaheenur Islam Sumon, Md. Sakib Bin Islam, Md. Sohanur Rahman,
Md. Sakib Abrar Hossain, Amith Khandakar, Anwarul Hasan, M Murugappan,
Muhammad E. H. Chowdhury | CardioTabNet: A Novel Hybrid Transformer Model for Heart Disease
Prediction using Tabular Medical Data | This paper is currently under review in the Health Information
Science and Systems journal | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The early detection and prediction of cardiovascular diseases are crucial for
reducing the severe morbidity and mortality associated with these conditions
worldwide. A multi-headed self-attention mechanism, widely used in natural
language processing (NLP), is operated by Transformers to understand feature
interactions in feature spaces. However, the relationships between various
features within biological systems remain ambiguous in these spaces,
highlighting the necessity of early detection and prediction of cardiovascular
diseases to reduce the severe morbidity and mortality associated with these
conditions worldwide. We address this issue with CardioTabNet, which exploits
the strength of a tab transformer to extract a feature space that captures a
strong understanding of clinical cardiovascular data, along with its feature
ranking. As a result, the downstream classical models achieve markedly better
performance. Our study utilizes the open-source dataset for heart disease prediction
with 1190 instances and 11 features. In total, 11 features are divided into
numerical (age, resting blood pressure, cholesterol, maximum heart rate, old
peak, weight, and fasting blood sugar) and categorical (resting ECG, exercise
angina, and ST slope). A tab transformer was used to extract important features,
which were then ranked using the random forest (RF) feature ranking algorithm. Ten
machine-learning models were used to predict heart disease using selected
features. After extracting high-quality features, the top downstream model (a
hyper-tuned ExtraTree classifier) achieved an average accuracy rate of 94.1%
and an average Area Under Curve (AUC) of 95.0%. Furthermore, a nomogram
analysis was conducted to evaluate the model's effectiveness in cardiovascular
risk assessment. A benchmarking study was conducted using state-of-the-art
models to evaluate our transformer-driven framework.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 06:17:08 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sumon",
"Md. Shaheenur Islam",
""
],
[
"Islam",
"Md. Sakib Bin",
""
],
[
"Rahman",
"Md. Sohanur",
""
],
[
"Hossain",
"Md. Sakib Abrar",
""
],
[
"Khandakar",
"Amith",
""
],
[
"Hasan",
"Anwarul",
""
],
[
"Murugappan",
"M",
""
],
[
"Chowdhury",
"Muhammad E. H.",
""
]
] | TITLE: CardioTabNet: A Novel Hybrid Transformer Model for Heart Disease
Prediction using Tabular Medical Data
ABSTRACT: The early detection and prediction of cardiovascular diseases are crucial for
reducing the severe morbidity and mortality associated with these conditions
worldwide. A multi-headed self-attention mechanism, widely used in natural
language processing (NLP), is operated by Transformers to understand feature
interactions in feature spaces. However, the relationships between various
features within biological systems remain ambiguous in these spaces,
highlighting the necessity of early detection and prediction of cardiovascular
diseases to reduce the severe morbidity and mortality associated with these
conditions worldwide. We address this issue with CardioTabNet, which exploits
the strength of a tab transformer to extract a feature space that captures a
strong understanding of clinical cardiovascular data, along with its feature
ranking. As a result, the downstream classical models achieve markedly better
performance. Our study utilizes the open-source dataset for heart disease prediction
with 1190 instances and 11 features. In total, 11 features are divided into
numerical (age, resting blood pressure, cholesterol, maximum heart rate, old
peak, weight, and fasting blood sugar) and categorical (resting ECG, exercise
angina, and ST slope). A tab transformer was used to extract important features,
which were then ranked using the random forest (RF) feature ranking algorithm. Ten
machine-learning models were used to predict heart disease using selected
features. After extracting high-quality features, the top downstream model (a
hyper-tuned ExtraTree classifier) achieved an average accuracy rate of 94.1%
and an average Area Under Curve (AUC) of 95.0%. Furthermore, a nomogram
analysis was conducted to evaluate the model's effectiveness in cardiovascular
risk assessment. A benchmarking study was conducted using state-of-the-art
models to evaluate our transformer-driven framework.
|
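A hedged sketch of the downstream pipeline summarized above: random forest feature ranking followed by an ExtraTrees classifier on the selected features. The feature matrix below is synthetic and stands in for the tab-transformer outputs; hyperparameters and the top-k cutoff are illustrative choices, not the paper's.

```python
# Minimal pipeline sketch: RF-based feature ranking, then an ExtraTrees classifier
# evaluated by cross-validation on the selected features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1190, 11))        # 1190 instances, 11 features (sizes as in the record)
y = rng.integers(0, 2, size=1190)      # heart disease label (synthetic here)

# Rank features with a random forest, keep the top-k for the final classifier.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top_k = np.argsort(rf.feature_importances_)[::-1][:8]   # k = 8 is an assumption

clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X[:, top_k], y, cv=5, scoring="accuracy")
print("mean CV accuracy on selected features:", scores.mean())
```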
2503.17666 | Peijin Guo | Peijin Guo, Minghui Li, Hewen Pan, Ruixiang Huang, Lulu Xue, Shengqing
Hu, Zikang Guo, Wei Wan, Shengshan Hu | Multi-Modality Representation Learning for Antibody-Antigen Interactions
Prediction | 2025 IEEE International Conference on Multimedia and Expo (ICME
2025), June 30 - July 4, 2025, Nantes, France | null | null | null | cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While deep learning models play a crucial role in predicting antibody-antigen
interactions (AAI), the scarcity of publicly available sequence-structure
pairings constrains their generalization. Current AAI methods often focus on
residue-level static details, overlooking fine-grained structural
representations of antibodies and their inter-antibody similarities. To tackle
this challenge, we introduce a multi-modality representation approach that
integrates 3D structural and 1D sequence data to unravel intricate
intra-antibody hierarchical relationships. By harnessing these representations,
we present MuLAAIP, an AAI prediction framework that utilizes graph attention
networks to illuminate graph-level structural features and normalized adaptive
graph convolution networks to capture inter-antibody sequence associations.
Furthermore, we have curated an AAI benchmark dataset comprising both
structural and sequence information along with interaction labels. Through
extensive experiments on this benchmark, our results demonstrate that MuLAAIP
outperforms current state-of-the-art methods in terms of predictive
performance. The implementation code and dataset are publicly available at
https://github.com/trashTian/MuLAAIP for reproducibility.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 06:23:51 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Guo",
"Peijin",
""
],
[
"Li",
"Minghui",
""
],
[
"Pan",
"Hewen",
""
],
[
"Huang",
"Ruixiang",
""
],
[
"Xue",
"Lulu",
""
],
[
"Hu",
"Shengqing",
""
],
[
"Guo",
"Zikang",
""
],
[
"Wan",
"Wei",
""
],
[
"Hu",
"Shengshan",
""
]
] | TITLE: Multi-Modality Representation Learning for Antibody-Antigen Interactions
Prediction
ABSTRACT: While deep learning models play a crucial role in predicting antibody-antigen
interactions (AAI), the scarcity of publicly available sequence-structure
pairings constrains their generalization. Current AAI methods often focus on
residue-level static details, overlooking fine-grained structural
representations of antibodies and their inter-antibody similarities. To tackle
this challenge, we introduce a multi-modality representation approach that
integrates 3D structural and 1D sequence data to unravel intricate
intra-antibody hierarchical relationships. By harnessing these representations,
we present MuLAAIP, an AAI prediction framework that utilizes graph attention
networks to illuminate graph-level structural features and normalized adaptive
graph convolution networks to capture inter-antibody sequence associations.
Furthermore, we have curated an AAI benchmark dataset comprising both
structural and sequence information along with interaction labels. Through
extensive experiments on this benchmark, our results demonstrate that MuLAAIP
outperforms current state-of-the-art methods in terms of predictive
performance. The implementation code and dataset are publicly available at
https://github.com/trashTian/MuLAAIP for reproducibility.
|
2503.17671 | OuCheng Huang | Oucheng Huang, Yuhang Ma, Zeng Zhao, Mingrui Wu, Jiayi Ji, Rongsheng
Zhang, Zhipeng Hu, Xiaoshuai Sun and Rongrong Ji | ComfyGPT: A Self-Optimizing Multi-Agent System for Comprehensive ComfyUI
Workflow Generation | null | null | null | null | cs.MA cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ComfyUI provides a widely-adopted, workflow-based interface that enables
users to customize various image generation tasks through an intuitive
node-based architecture. However, the intricate connections between nodes and
diverse modules often present a steep learning curve for users. In this paper,
we introduce ComfyGPT, the first self-optimizing multi-agent system designed to
generate ComfyUI workflows based on task descriptions automatically. ComfyGPT
comprises four specialized agents: ReformatAgent, FlowAgent, RefineAgent, and
ExecuteAgent. The core innovation of ComfyGPT lies in two key aspects. First,
it focuses on generating individual node links rather than entire workflows,
significantly improving generation precision. Second, we propose FlowAgent, an
LLM-based workflow generation agent that uses both supervised fine-tuning (SFT)
and reinforcement learning (RL) to improve workflow generation accuracy.
Moreover, we introduce FlowDataset, a large-scale dataset containing 13,571
workflow-description pairs, and FlowBench, a comprehensive benchmark for
evaluating workflow generation systems. We also propose four novel evaluation
metrics: Format Validation (FV), Pass Accuracy (PA), Pass Instruct Alignment
(PIA), and Pass Node Diversity (PND). Experimental results demonstrate that
ComfyGPT significantly outperforms existing LLM-based methods in workflow
generation.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 06:48:50 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Oucheng",
""
],
[
"Ma",
"Yuhang",
""
],
[
"Zhao",
"Zeng",
""
],
[
"Wu",
"Mingrui",
""
],
[
"Ji",
"Jiayi",
""
],
[
"Zhang",
"Rongsheng",
""
],
[
"Hu",
"Zhipeng",
""
],
[
"Sun",
"Xiaoshuai",
""
],
[
"Ji",
"Rongrong",
""
]
] | TITLE: ComfyGPT: A Self-Optimizing Multi-Agent System for Comprehensive ComfyUI
Workflow Generation
ABSTRACT: ComfyUI provides a widely-adopted, workflow-based interface that enables
users to customize various image generation tasks through an intuitive
node-based architecture. However, the intricate connections between nodes and
diverse modules often present a steep learning curve for users. In this paper,
we introduce ComfyGPT, the first self-optimizing multi-agent system designed to
generate ComfyUI workflows based on task descriptions automatically. ComfyGPT
comprises four specialized agents: ReformatAgent, FlowAgent, RefineAgent, and
ExecuteAgent. The core innovation of ComfyGPT lies in two key aspects. First,
it focuses on generating individual node links rather than entire workflows,
significantly improving generation precision. Second, we propose FlowAgent, an
LLM-based workflow generation agent that uses both supervised fine-tuning (SFT)
and reinforcement learning (RL) to improve workflow generation accuracy.
Moreover, we introduce FlowDataset, a large-scale dataset containing 13,571
workflow-description pairs, and FlowBench, a comprehensive benchmark for
evaluating workflow generation systems. We also propose four novel evaluation
metrics: Format Validation (FV), Pass Accuracy (PA), Pass Instruct Alignment
(PIA), and Pass Node Diversity (PND). Experimental results demonstrate that
ComfyGPT significantly outperforms existing LLM-based methods in workflow
generation.
|
2503.17672 | Qing Zhong | Qing Zhong, Peng-Tao Jiang, Wen Wang, Guodong Ding, Lin Wu, Kaiqi
Huang | A Temporal Modeling Framework for Video Pre-Training on Video Instance
Segmentation | 7 pages, 5 figures, 6 tables, Accepted to ICME 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Contemporary Video Instance Segmentation (VIS) methods typically adhere to a
pre-train then fine-tune regime, where a segmentation model trained on images
is fine-tuned on videos. However, the lack of temporal knowledge in the
pre-trained model introduces a domain gap which may adversely affect the VIS
performance. To effectively bridge this gap, we present a novel video
pre-training approach to enhance VIS models, especially for videos with
intricate instance relationships. Our crucial innovation focuses on reducing
disparities between the pre-training and fine-tuning stages. Specifically, we
first introduce consistent pseudo-video augmentations to create diverse
pseudo-video samples for pre-training while maintaining the instance
consistency across frames. Then, we incorporate a multi-scale temporal module
to enhance the model's ability to model temporal relations through self- and
cross-attention at short- and long-term temporal spans. Our approach does not
set constraints on model architecture and can integrate seamlessly with various
VIS methods. Experiment results on commonly adopted VIS benchmarks show that
our method consistently outperforms state-of-the-art methods. Our approach
achieves a notable 4.0% increase in average precision on the challenging OVIS
dataset.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 07:01:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhong",
"Qing",
""
],
[
"Jiang",
"Peng-Tao",
""
],
[
"Wang",
"Wen",
""
],
[
"Ding",
"Guodong",
""
],
[
"Wu",
"Lin",
""
],
[
"Huang",
"Kaiqi",
""
]
] | TITLE: A Temporal Modeling Framework for Video Pre-Training on Video Instance
Segmentation
ABSTRACT: Contemporary Video Instance Segmentation (VIS) methods typically adhere to a
pre-train then fine-tune regime, where a segmentation model trained on images
is fine-tuned on videos. However, the lack of temporal knowledge in the
pre-trained model introduces a domain gap which may adversely affect the VIS
performance. To effectively bridge this gap, we present a novel video
pre-training approach to enhance VIS models, especially for videos with
intricate instance relationships. Our crucial innovation focuses on reducing
disparities between the pre-training and fine-tuning stages. Specifically, we
first introduce consistent pseudo-video augmentations to create diverse
pseudo-video samples for pre-training while maintaining the instance
consistency across frames. Then, we incorporate a multi-scale temporal module
to enhance the model's ability to model temporal relations through self- and
cross-attention at short- and long-term temporal spans. Our approach does not
set constraints on model architecture and can integrate seamlessly with various
VIS methods. Experiment results on commonly adopted VIS benchmarks show that
our method consistently outperforms state-of-the-art methods. Our approach
achieves a notable 4.0% increase in average precision on the challenging OVIS
dataset.
|
2503.17677 | Huitong Chen | Huitong Chen, Yu Wang, Yan Fan, Guosong Jiang, Qinghua Hu | Reducing Class-wise Confusion for Incremental Learning with Disentangled
Manifolds | Accepted to CVPR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Class incremental learning (CIL) aims to enable models to continuously learn
new classes without catastrophically forgetting old ones. A promising direction
is to learn and use prototypes of classes during incremental updates. Despite
simplicity and intuition, we find that such methods suffer from inadequate
representation capability and unsatisfactory feature overlap. These two factors
cause class-wise confusion and limited performance. In this paper, we develop a
Confusion-REduced AuTo-Encoder classifier (CREATE) for CIL. Specifically, our
method employs a lightweight auto-encoder module to learn a compact manifold for
each class in the latent subspace, constraining samples to be well
reconstructed only on the semantically correct auto-encoder. Thus, the
representation stability and capability of class distributions are enhanced,
alleviating the potential class-wise confusion problem. To further distinguish
the overlapped features, we propose a confusion-aware latent space separation
loss that ensures samples are closely distributed in their corresponding
low-dimensional manifold while keeping away from the distributions of features
from other classes. Our method demonstrates stronger representational capacity
and discrimination ability by learning disentangled manifolds and reduces class
confusion. Extensive experiments on multiple datasets and settings show that
CREATE outperforms other state-of-the-art methods up to 5.41%.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 07:07:15 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Huitong",
""
],
[
"Wang",
"Yu",
""
],
[
"Fan",
"Yan",
""
],
[
"Jiang",
"Guosong",
""
],
[
"Hu",
"Qinghua",
""
]
] | TITLE: Reducing Class-wise Confusion for Incremental Learning with Disentangled
Manifolds
ABSTRACT: Class incremental learning (CIL) aims to enable models to continuously learn
new classes without catastrophically forgetting old ones. A promising direction
is to learn and use prototypes of classes during incremental updates. Despite
simplicity and intuition, we find that such methods suffer from inadequate
representation capability and unsatisfactory feature overlap. These two factors
cause class-wise confusion and limited performance. In this paper, we develop a
Confusion-REduced AuTo-Encoder classifier (CREATE) for CIL. Specifically, our
method employs a lightweight auto-encoder module to learn a compact manifold for
each class in the latent subspace, constraining samples to be well
reconstructed only on the semantically correct auto-encoder. Thus, the
representation stability and capability of class distributions are enhanced,
alleviating the potential class-wise confusion problem. To further distinguish
the overlapped features, we propose a confusion-aware latent space separation
loss that ensures samples are closely distributed in their corresponding
low-dimensional manifold while keeping away from the distributions of features
from other classes. Our method demonstrates stronger representational capacity
and discrimination ability by learning disentangled manifolds and reduces class
confusion. Extensive experiments on multiple datasets and settings show that
CREATE outperforms other state-of-the-art methods up to 5.41%.
|
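The following toy sketch illustrates the reconstruction-based classification idea in the record above: one lightweight auto-encoder per class, with a sample assigned to the class whose auto-encoder reconstructs it best. Layer sizes, the training schedule, and the synthetic features are assumptions; the confusion-aware latent-space separation loss is omitted.

```python
# Per-class auto-encoders with classification by minimum reconstruction error.
import torch
import torch.nn as nn

def make_autoencoder(dim=64, latent=16):
    return nn.Sequential(nn.Linear(dim, latent), nn.ReLU(), nn.Linear(latent, dim))

torch.manual_seed(0)
num_classes, dim = 3, 64
aes = [make_autoencoder(dim) for _ in range(num_classes)]

# Toy per-class training: each AE only sees (synthetic) features of its own class.
for c, ae in enumerate(aes):
    feats = torch.randn(128, dim) + 3.0 * c          # class-c feature cluster
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(feats), feats)
        loss.backward()
        opt.step()

# Classify a new sample by the class whose auto-encoder reconstructs it best.
x = torch.randn(1, dim) + 3.0 * 2                    # sample drawn near class 2
errors = [nn.functional.mse_loss(ae(x), x).item() for ae in aes]
print("predicted class:", int(torch.tensor(errors).argmin()))
```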
2503.17682 | Jiaming Ji | Jiaming Ji, Xinyu Chen, Rui Pan, Han Zhu, Conghui Zhang, Jiahao Li,
Donghai Hong, Boyuan Chen, Jiayi Zhou, Kaile Wang, Juntao Dai, Chi-Min Chan,
Sirui Han, Yike Guo, Yaodong Yang | Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in
Multimodal Large Language Models | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models (MLLMs) are critical for developing
general-purpose AI assistants, yet they face growing safety risks. How can we
ensure that MLLMs are safely aligned to prevent undesired behaviors such as
discrimination, misinformation, or violations of ethical standards? In a
further step, we need to explore how to fine-tune MLLMs to enhance reasoning
performance while ensuring they satisfy safety constraints. Fundamentally, this
can be formulated as a min-max optimization problem. In this study, we propose
Safe RLHF-V, the first multimodal safety alignment framework that jointly
optimizes helpfulness and safety using separate multimodal reward and cost
models within a Lagrangian-based constrained optimization framework. Given that
there is a lack of preference datasets that separate helpfulness and safety in
multimodal scenarios, we introduce BeaverTails-V, the first open-source dataset
with dual preference annotations for helpfulness and safety, along with
multi-level safety labels (minor, moderate, severe). Additionally, we design a
Multi-level Guardrail System to proactively defend against unsafe queries and
adversarial attacks. By applying the Beaver-Guard-V moderation for 5 rounds of
filtering and re-generation on the precursor model, the overall safety of the
upstream model is significantly improved by an average of 40.9%. Experimental
results demonstrate that fine-tuning different MLLMs with Safe RLHF can
effectively enhance model helpfulness while ensuring improved safety.
Specifically, Safe RLHF-V improves model safety by 34.2% and helpfulness by
34.3%. All datasets, models, and code can be found at
https://github.com/SafeRLHF-V to support the safety development of MLLMs and
reduce potential societal risks.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 07:40:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ji",
"Jiaming",
""
],
[
"Chen",
"Xinyu",
""
],
[
"Pan",
"Rui",
""
],
[
"Zhu",
"Han",
""
],
[
"Zhang",
"Conghui",
""
],
[
"Li",
"Jiahao",
""
],
[
"Hong",
"Donghai",
""
],
[
"Chen",
"Boyuan",
""
],
[
"Zhou",
"Jiayi",
""
],
[
"Wang",
"Kaile",
""
],
[
"Dai",
"Juntao",
""
],
[
"Chan",
"Chi-Min",
""
],
[
"Han",
"Sirui",
""
],
[
"Guo",
"Yike",
""
],
[
"Yang",
"Yaodong",
""
]
] | TITLE: Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in
Multimodal Large Language Models
ABSTRACT: Multimodal large language models (MLLMs) are critical for developing
general-purpose AI assistants, yet they face growing safety risks. How can we
ensure that MLLMs are safely aligned to prevent undesired behaviors such as
discrimination, misinformation, or violations of ethical standards? In a
further step, we need to explore how to fine-tune MLLMs to enhance reasoning
performance while ensuring they satisfy safety constraints. Fundamentally, this
can be formulated as a min-max optimization problem. In this study, we propose
Safe RLHF-V, the first multimodal safety alignment framework that jointly
optimizes helpfulness and safety using separate multimodal reward and cost
models within a Lagrangian-based constrained optimization framework. Given that
there is a lack of preference datasets that separate helpfulness and safety in
multimodal scenarios, we introduce BeaverTails-V, the first open-source dataset
with dual preference annotations for helpfulness and safety, along with
multi-level safety labels (minor, moderate, severe). Additionally, we design a
Multi-level Guardrail System to proactively defend against unsafe queries and
adversarial attacks. By applying the Beaver-Guard-V moderation for 5 rounds of
filtering and re-generation on the precursor model, the overall safety of the
upstream model is significantly improved by an average of 40.9%. Experimental
results demonstrate that fine-tuning different MLLMs with Safe RLHF can
effectively enhance model helpfulness while ensuring improved safety.
Specifically, Safe RLHF-V improves model safety by 34.2% and helpfulness by
34.3%. All datasets, models, and code can be found at
https://github.com/SafeRLHF-V to support the safety development of MLLMs and
reduce potential societal risks.
|
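To make the Lagrangian-based constrained objective above concrete, here is a one-dimensional toy sketch: a primal parameter is updated to increase reward while a dual variable is raised whenever the expected cost exceeds a budget. The reward and cost functions are stand-ins, not the paper's learned reward and cost models.

```python
# Toy primal-dual (Lagrangian) update: maximize reward subject to cost <= budget.
import numpy as np

theta = 0.0            # 1-D "policy" parameter for illustration
lam = 0.0              # Lagrange multiplier (kept non-negative)
cost_budget = 0.5
lr_theta, lr_lam = 0.05, 0.05

reward = lambda t: -(t - 2.0) ** 2          # helpfulness proxy, peaked at t = 2
cost = lambda t: 0.4 * t                    # safety cost grows with t

for step in range(500):
    # Gradient of L = reward(theta) - lam * (cost(theta) - budget) w.r.t. theta.
    grad_theta = -2.0 * (theta - 2.0) - lam * 0.4
    theta += lr_theta * grad_theta                       # primal ascent
    # Dual ascent: increase lam when the cost constraint is violated.
    lam = max(0.0, lam + lr_lam * (cost(theta) - cost_budget))

print(f"theta={theta:.3f}, lambda={lam:.3f}, reward={reward(theta):.3f}, cost={cost(theta):.3f}")
```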
2503.17683 | Eduardo Fernandes Montesuma | Rebecca Clain, Eduardo Fernandes Montesuma, Fred Ngol\`e Mboula | Decentralized Federated Dataset Dictionary Learning for Multi-Source
Domain Adaptation | Accepted at ICASSP 2025 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized Multi-Source Domain Adaptation (DMSDA) is a challenging task
that aims to transfer knowledge from multiple related and heterogeneous source
domains to an unlabeled target domain within a decentralized framework. Our
work tackles DMSDA through a fully decentralized federated approach. In
particular, we extend the Federated Dataset Dictionary Learning (FedDaDiL)
framework by eliminating the necessity for a central server. FedDaDiL leverages
Wasserstein barycenters to model the distributional shift across multiple
clients, enabling effective adaptation while preserving data privacy. By
decentralizing this framework, we enhance its robustness, scalability, and
privacy, removing the risk of a single point of failure. We compare our method
to its federated counterpart and other benchmark algorithms, showing that our
approach effectively adapts source domains to an unlabeled target domain in a
fully decentralized manner.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 07:48:48 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Clain",
"Rebecca",
""
],
[
"Montesuma",
"Eduardo Fernandes",
""
],
[
"Mboula",
"Fred Ngolè",
""
]
] | TITLE: Decentralized Federated Dataset Dictionary Learning for Multi-Source
Domain Adaptation
ABSTRACT: Decentralized Multi-Source Domain Adaptation (DMSDA) is a challenging task
that aims to transfer knowledge from multiple related and heterogeneous source
domains to an unlabeled target domain within a decentralized framework. Our
work tackles DMSDA through a fully decentralized federated approach. In
particular, we extend the Federated Dataset Dictionary Learning (FedDaDiL)
framework by eliminating the necessity for a central server. FedDaDiL leverages
Wasserstein barycenters to model the distributional shift across multiple
clients, enabling effective adaptation while preserving data privacy. By
decentralizing this framework, we enhance its robustness, scalability, and
privacy, removing the risk of a single point of failure. We compare our method
to its federated counterpart and other benchmark algorithms, showing that our
approach effectively adapts source domains to an unlabeled target domain in a
fully decentralized manner.
|
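The decentralization step above replaces server-side aggregation with peer-to-peer communication. The toy sketch below shows the general pattern with gossip averaging over a ring of clients; it averages plain parameter vectors, not the paper's dataset dictionaries or Wasserstein barycenters.

```python
# Gossip averaging over a ring topology: no central server, each client mixes
# its parameters with its two neighbours using a doubly stochastic matrix.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 6, 4
params = rng.normal(size=(n_clients, dim))        # one parameter vector per client

# Ring mixing matrix: average self with the two adjacent clients.
W = np.zeros((n_clients, n_clients))
for i in range(n_clients):
    W[i, i] = W[i, (i - 1) % n_clients] = W[i, (i + 1) % n_clients] = 1 / 3

for _ in range(50):                               # gossip rounds
    params = W @ params

print("max spread after gossip:", np.abs(params - params.mean(axis=0)).max())
```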
2503.17697 | Yanan Ma | Yanan Ma, Senkang Hu, Zhengru Fang, Yun Ji, Yiqin Deng, and Yuguang
Fang | Sense4FL: Vehicular Crowdsensing Enhanced Federated Learning for
Autonomous Driving | 16 pages, 5 figures | null | null | null | cs.RO cs.DC | http://creativecommons.org/licenses/by/4.0/ | To accommodate constantly changing road conditions, real-time model training
is essential for autonomous driving (AD). Federated learning (FL) serves as a
promising paradigm to enable autonomous vehicles to train models
collaboratively with their onboard computing resources. However, existing
vehicle selection schemes for FL all assume predetermined and
location-independent vehicles' datasets, neglecting the fact that vehicles
collect training data along their routes, thereby resulting in suboptimal
vehicle selection. To improve the perception quality in AD for a region, we
propose Sense4FL, a vehicular crowdsensing-enhanced FL framework featuring
trajectory-dependent vehicular training data collection. To this end, we first
derive the convergence bound of FL by considering the impact of both vehicles'
uncertain trajectories and uploading probabilities, from which we discover that
minimizing the training loss is equivalent to minimizing a weighted sum of
local and global earth mover's distance (EMD) between vehicles' collected data
distribution and global data distribution. Based on this observation, we
formulate the trajectory-dependent vehicle selection and data collection
problem for FL in AD. Given that the problem is NP-hard, we develop an
efficient algorithm to find the solution with an approximation guarantee.
Extensive simulation results have demonstrated the effectiveness of our
approach in improving object detection performance compared with existing
benchmarks.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 08:39:01 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ma",
"Yanan",
""
],
[
"Hu",
"Senkang",
""
],
[
"Fang",
"Zhengru",
""
],
[
"Ji",
"Yun",
""
],
[
"Deng",
"Yiqin",
""
],
[
"Fang",
"Yuguang",
""
]
] | TITLE: Sense4FL: Vehicular Crowdsensing Enhanced Federated Learning for
Autonomous Driving
ABSTRACT: To accommodate constantly changing road conditions, real-time model training
is essential for autonomous driving (AD). Federated learning (FL) serves as a
promising paradigm to enable autonomous vehicles to train models
collaboratively with their onboard computing resources. However, existing
vehicle selection schemes for FL all assume predetermined and
location-independent vehicles' datasets, neglecting the fact that vehicles
collect training data along their routes, thereby resulting in suboptimal
vehicle selection. To improve the perception quality in AD for a region, we
propose Sense4FL, a vehicular crowdsensing-enhanced FL framework featuring
trajectory-dependent vehicular training data collection. To this end, we first
derive the convergence bound of FL by considering the impact of both vehicles'
uncertain trajectories and uploading probabilities, from which we discover that
minimizing the training loss is equivalent to minimizing a weighted sum of
local and global earth mover's distance (EMD) between vehicles' collected data
distribution and global data distribution. Based on this observation, we
formulate the trajectory-dependent vehicle selection and data collection
problem for FL in AD. Given that the problem is NP-hard, we develop an
efficient algorithm to find the solution with an approximation guarantee.
Extensive simulation results have demonstrated the effectiveness of our
approach in improving object detection performance compared with existing
benchmarks.
|
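As a rough illustration of the weighted EMD objective described above, the sketch below scores a candidate set of vehicles by a weighted sum of local and global earth mover's distances between class-label histograms. With a 0/1 ground metric, EMD reduces to half the L1 gap; the distributions, upload probabilities, and trade-off weight are synthetic assumptions, not the paper's formulation.

```python
# Weighted sum of local and global EMD between label histograms (0/1 ground metric).
import numpy as np

def emd_01(p, q):
    """EMD between two categorical distributions under the 0/1 ground metric."""
    return 0.5 * np.abs(p - q).sum()

global_dist = np.array([0.25, 0.25, 0.25, 0.25])       # target label distribution
vehicle_dists = np.array([[0.40, 0.30, 0.20, 0.10],    # data each vehicle would
                          [0.20, 0.30, 0.30, 0.20],    # collect along its route
                          [0.10, 0.20, 0.30, 0.40]])
upload_prob = np.array([0.9, 0.7, 0.8])                # per-vehicle upload probability

weights = upload_prob / upload_prob.sum()
local_term = sum(w * emd_01(d, global_dist) for w, d in zip(weights, vehicle_dists))
aggregate = (weights[:, None] * vehicle_dists).sum(axis=0)
global_term = emd_01(aggregate, global_dist)

alpha = 0.5                                            # trade-off weight (assumed)
print("surrogate selection objective:", alpha * local_term + (1 - alpha) * global_term)
```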
2503.17699 | Haolin Qin | Haolin Qin, Tingfa Xu, Tianhao Li, Zhenxiang Chen, Tao Feng, Jianan Li | MUST: The First Dataset and Unified Framework for Multispectral UAV
Single Object Tracking | CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | UAV tracking faces significant challenges in real-world scenarios, such as
small-size targets and occlusions, which limit the performance of RGB-based
trackers. Multispectral images (MSI), which capture additional spectral
information, offer a promising solution to these challenges. However, progress
in this field has been hindered by the lack of relevant datasets. To address
this gap, we introduce the first large-scale Multispectral UAV Single Object
Tracking dataset (MUST), which includes 250 video sequences spanning diverse
environments and challenges, providing a comprehensive data foundation for
multispectral UAV tracking. We also propose a novel tracking framework,
UNTrack, which encodes unified spectral, spatial, and temporal features from
spectrum prompts, initial templates, and sequential searches. UNTrack employs
an asymmetric transformer with a spectral background elimination mechanism for
optimal relationship modeling and an encoder that continuously updates the
spectrum prompt to refine tracking, improving both accuracy and efficiency.
Extensive experiments show that our proposed UNTrack outperforms
state-of-the-art UAV trackers. We believe our dataset and framework will drive
future research in this area. The dataset is available on
https://github.com/q2479036243/MUST-Multispectral-UAV-Single-Object-Tracking.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 08:47:28 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qin",
"Haolin",
""
],
[
"Xu",
"Tingfa",
""
],
[
"Li",
"Tianhao",
""
],
[
"Chen",
"Zhenxiang",
""
],
[
"Feng",
"Tao",
""
],
[
"Li",
"Jianan",
""
]
] | TITLE: MUST: The First Dataset and Unified Framework for Multispectral UAV
Single Object Tracking
ABSTRACT: UAV tracking faces significant challenges in real-world scenarios, such as
small-size targets and occlusions, which limit the performance of RGB-based
trackers. Multispectral images (MSI), which capture additional spectral
information, offer a promising solution to these challenges. However, progress
in this field has been hindered by the lack of relevant datasets. To address
this gap, we introduce the first large-scale Multispectral UAV Single Object
Tracking dataset (MUST), which includes 250 video sequences spanning diverse
environments and challenges, providing a comprehensive data foundation for
multispectral UAV tracking. We also propose a novel tracking framework,
UNTrack, which encodes unified spectral, spatial, and temporal features from
spectrum prompts, initial templates, and sequential searches. UNTrack employs
an asymmetric transformer with a spectral background elimination mechanism for
optimal relationship modeling and an encoder that continuously updates the
spectrum prompt to refine tracking, improving both accuracy and efficiency.
Extensive experiments show that our proposed UNTrack outperforms
state-of-the-art UAV trackers. We believe our dataset and framework will drive
future research in this area. The dataset is available on
https://github.com/q2479036243/MUST-Multispectral-UAV-Single-Object-Tracking.
|
2503.17704 | Liang Jiang | Liang Jiang, Yuzhou Cheng, Kun Luo, Jianren Fan | PT-PINNs: A Parametric Engineering Turbulence Solver based on
Physics-Informed Neural Networks | null | null | null | null | physics.flu-dyn cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physics-informed neural networks (PINNs) demonstrate promising potential in
parameterized engineering turbulence optimization problems but face challenges,
such as high data requirements and low computational accuracy when applied to
engineering turbulence problems. This study proposes a framework that enhances
the ability of PINNs to solve parametric turbulence problems without training
datasets from experiments or CFD: Parametric Turbulence PINNs (PT-PINNs). Two
key methods are introduced to improve the accuracy and robustness of this
framework. The first is a soft constraint method for turbulent viscosity
calculation. The second is a pre-training method based on the conservation of
flow rate in the flow field. The effectiveness of PT-PINNs is validated using a
three-dimensional backward-facing step (BFS) turbulence problem with two
varying parameters (Re = 3000-200000, ER = 1.1-1.5). PT-PINNs produce
predictions that closely match experimental data and computational fluid
dynamics (CFD) results across various conditions. Moreover, PT-PINNs offer a
computational efficiency advantage over traditional CFD methods. The total time
required to construct the parametric BFS turbulence model is 39 hours,
one-sixteenth of the time required by traditional numerical methods. The
inference time for a single-condition prediction is just 40 seconds, only 0.5%
of a single CFD computation. These findings highlight the potential of PT-PINNs
for future applications in engineering turbulence optimization problems.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 09:10:53 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jiang",
"Liang",
""
],
[
"Cheng",
"Yuzhou",
""
],
[
"Luo",
"Kun",
""
],
[
"Fan",
"Jianren",
""
]
] | TITLE: PT-PINNs: A Parametric Engineering Turbulence Solver based on
Physics-Informed Neural Networks
ABSTRACT: Physics-informed neural networks (PINNs) demonstrate promising potential in
parameterized engineering turbulence optimization problems but face challenges,
such as high data requirements and low computational accuracy when applied to
engineering turbulence problems. This study proposes a framework that enhances
the ability of PINNs to solve parametric turbulence problems without training
datasets from experiments or CFD: Parametric Turbulence PINNs (PT-PINNs). Two
key methods are introduced to improve the accuracy and robustness of this
framework. The first is a soft constraint method for turbulent viscosity
calculation. The second is a pre-training method based on the conservation of
flow rate in the flow field. The effectiveness of PT-PINNs is validated using a
three-dimensional backward-facing step (BFS) turbulence problem with two
varying parameters (Re = 3000-200000, ER = 1.1-1.5). PT-PINNs produce
predictions that closely match experimental data and computational fluid
dynamics (CFD) results across various conditions. Moreover, PT-PINNs offer a
computational efficiency advantage over traditional CFD methods. The total time
required to construct the parametric BFS turbulence model is 39 hours,
one-sixteenth of the time required by traditional numerical methods. The
inference time for a single-condition prediction is just 40 seconds, only 0.5%
of a single CFD computation. These findings highlight the potential of PT-PINNs
for future applications in engineering turbulence optimization problems.
|
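For readers unfamiliar with the physics-informed loss construction underlying the record above, here is a generic PINN sketch on a 1-D Poisson problem: the PDE residual is obtained by automatic differentiation and the boundary conditions enter as soft penalties. It does not reproduce the paper's turbulence model, soft viscosity constraint, or flow-rate pre-training.

```python
# Generic PINN sketch: train a small network so that u''(x) = -pi^2 sin(pi x)
# on [0, 1] with u(0) = u(1) = 0; the exact solution is u(x) = sin(pi x).
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_col = torch.rand(256, 1, requires_grad=True)           # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])                       # boundary points, u = 0

for step in range(2000):
    opt.zero_grad()
    u = net(x_col)
    du = torch.autograd.grad(u, x_col, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_col, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + torch.pi**2 * torch.sin(torch.pi * x_col)    # PDE residual
    loss = (residual**2).mean() + 10.0 * (net(x_bc)**2).mean()    # soft BC penalty
    loss.backward()
    opt.step()

print("u(0.5) ~", net(torch.tensor([[0.5]])).item(), "(exact: 1.0)")
```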
2503.17709 | Yuchen Sun | Yuchen Sun, Shanhui Zhao, Tao Yu, Hao Wen, Samith Va, Mengwei Xu,
Yuanchun Li, Chongyang Zhang | GUI-Xplore: Empowering Generalizable GUI Agents with One Exploration | CVPR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | GUI agents hold significant potential to enhance the experience and
efficiency of human-device interaction. However, current methods face
challenges in generalizing across applications (apps) and tasks, primarily due
to two fundamental limitations in existing datasets. First, these datasets
overlook developer-induced structural variations among apps, limiting the
transferability of knowledge across diverse software environments. Second, many
of them focus solely on navigation tasks, which restricts their capacity to
represent comprehensive software architectures and complex user interactions.
To address these challenges, we introduce GUI-Xplore, a dataset meticulously
designed to enhance cross-application and cross-task generalization via an
exploration-and-reasoning framework. GUI-Xplore integrates pre-recorded
exploration videos providing contextual insights, alongside five hierarchically
structured downstream tasks designed to comprehensively evaluate GUI agent
capabilities. To fully exploit GUI-Xplore's unique features, we propose
Xplore-Agent, a GUI agent framework that combines Action-aware GUI Modeling
with Graph-Guided Environment Reasoning. Further experiments indicate that
Xplore-Agent achieves a 10% improvement over existing methods in unfamiliar
environments, yet there remains significant potential for further enhancement
towards truly generalizable GUI agents.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 09:30:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Sun",
"Yuchen",
""
],
[
"Zhao",
"Shanhui",
""
],
[
"Yu",
"Tao",
""
],
[
"Wen",
"Hao",
""
],
[
"Va",
"Samith",
""
],
[
"Xu",
"Mengwei",
""
],
[
"Li",
"Yuanchun",
""
],
[
"Zhang",
"Chongyang",
""
]
] | TITLE: GUI-Xplore: Empowering Generalizable GUI Agents with One Exploration
ABSTRACT: GUI agents hold significant potential to enhance the experience and
efficiency of human-device interaction. However, current methods face
challenges in generalizing across applications (apps) and tasks, primarily due
to two fundamental limitations in existing datasets. First, these datasets
overlook developer-induced structural variations among apps, limiting the
transferability of knowledge across diverse software environments. Second, many
of them focus solely on navigation tasks, which restricts their capacity to
represent comprehensive software architectures and complex user interactions.
To address these challenges, we introduce GUI-Xplore, a dataset meticulously
designed to enhance cross-application and cross-task generalization via an
exploration-and-reasoning framework. GUI-Xplore integrates pre-recorded
exploration videos providing contextual insights, alongside five hierarchically
structured downstream tasks designed to comprehensively evaluate GUI agent
capabilities. To fully exploit GUI-Xplore's unique features, we propose
Xplore-Agent, a GUI agent framework that combines Action-aware GUI Modeling
with Graph-Guided Environment Reasoning. Further experiments indicate that
Xplore-Agent achieves a 10% improvement over existing methods in unfamiliar
environments, yet there remains significant potential for further enhancement
towards truly generalizable GUI agents.
|
2503.17712 | Heng Gao | Heng Gao, Zhuolin He, Shoumeng Qiu, Xiangyang Xue, Jian Pu | Multi-modality Anomaly Segmentation on the Road | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Semantic segmentation allows autonomous driving cars to understand the
surroundings of the vehicle comprehensively. However, it is also crucial for
the model to detect obstacles that may jeopardize the safety of autonomous
driving systems. Based on our experiments, we find that current uni-modal
anomaly segmentation frameworks tend to produce high anomaly scores for
non-anomalous regions in images. Motivated by this empirical finding, we
develop a multi-modal uncertainty-based anomaly segmentation framework, named
MMRAS+, for autonomous driving systems. MMRAS+ effectively reduces the high
anomaly outputs of non-anomalous classes by introducing the text modality via the
CLIP text encoder. Indeed, MMRAS+ is the first multi-modal anomaly segmentation
solution for autonomous driving. Moreover, we develop an ensemble module to
further boost the anomaly segmentation performance. Experiments on RoadAnomaly,
SMIYC, and Fishyscapes validation datasets demonstrate the superior performance
of our method. The code is available in
https://github.com/HengGao12/MMRAS_plus.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 09:55:42 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Gao",
"Heng",
""
],
[
"He",
"Zhuolin",
""
],
[
"Qiu",
"Shoumeng",
""
],
[
"Xue",
"Xiangyang",
""
],
[
"Pu",
"Jian",
""
]
] | TITLE: Multi-modality Anomaly Segmentation on the Road
ABSTRACT: Semantic segmentation allows autonomous driving cars to understand the
surroundings of the vehicle comprehensively. However, it is also crucial for
the model to detect obstacles that may jeopardize the safety of autonomous
driving systems. Based on our experiments, we find that current uni-modal
anomaly segmentation frameworks tend to produce high anomaly scores for
non-anomalous regions in images. Motivated by this empirical finding, we
develop a multi-modal uncertainty-based anomaly segmentation framework, named
MMRAS+, for autonomous driving systems. MMRAS+ effectively reduces the high
anomaly outputs of non-anomalous classes by introducing the text modality via the
CLIP text encoder. Indeed, MMRAS+ is the first multi-modal anomaly segmentation
solution for autonomous driving. Moreover, we develop an ensemble module to
further boost the anomaly segmentation performance. Experiments on RoadAnomaly,
SMIYC, and Fishyscapes validation datasets demonstrate the superior performance
of our method. The code is available in
https://github.com/HengGao12/MMRAS_plus.
|
2503.17715 | Abtin Pourhadi | Abtin Pourhadi and Paul Swoboda | Normalized Matching Transformer | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a new state-of-the-art approach for sparse keypoint matching
between pairs of images. Our method is a fully deep-learning-based approach
that combines a visual backbone with a SplineCNN graph neural
network for feature processing and a normalized transformer decoder for
decoding keypoint correspondences together with the Sinkhorn algorithm. Our
method is trained using a contrastive and a hyperspherical loss for better
feature representations. We additionally use data augmentation during training.
This comparatively simple architecture, combining extensive normalization and
advanced losses, outperforms current state-of-the-art approaches on the PascalVOC
and SPair-71k datasets by $5.1\%$ and $2.2\%$ respectively compared to BBGM,
ASAR, COMMON and GMTR, while training for at least $1.7\times$ fewer epochs.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 10:09:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Pourhadi",
"Abtin",
""
],
[
"Swoboda",
"Paul",
""
]
] | TITLE: Normalized Matching Transformer
ABSTRACT: We present a new state-of-the-art approach for sparse keypoint matching
between pairs of images. Our method is a fully deep-learning-based approach
that combines a visual backbone with a SplineCNN graph neural
network for feature processing and a normalized transformer decoder for
decoding keypoint correspondences together with the Sinkhorn algorithm. Our
method is trained using a contrastive and a hyperspherical loss for better
feature representations. We additionally use data augmentation during training.
This comparatively simple architecture, combining extensive normalization and
advanced losses, outperforms current state-of-the-art approaches on the PascalVOC
and SPair-71k datasets by $5.1\%$ and $2.2\%$ respectively compared to BBGM,
ASAR, COMMON and GMTR, while training for at least $1.7\times$ fewer epochs.
|
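The decoding step above relies on the Sinkhorn algorithm to turn keypoint affinities into a (near) doubly stochastic soft assignment. A minimal numpy sketch follows; the affinity matrix, temperature, and iteration count are illustrative assumptions.

```python
# Minimal Sinkhorn normalization: alternately normalize rows and columns of
# exp(scores / eps) to obtain a soft keypoint assignment matrix.
import numpy as np

def sinkhorn(scores, n_iters=50, eps=0.1):
    K = np.exp(scores / eps)
    for _ in range(n_iters):
        K = K / K.sum(axis=1, keepdims=True)   # row normalization
        K = K / K.sum(axis=0, keepdims=True)   # column normalization
    return K

rng = np.random.default_rng(0)
affinity = rng.normal(size=(5, 5))             # pairwise keypoint similarities (synthetic)
P = sinkhorn(affinity)
matches = P.argmax(axis=1)                     # hard matches from the soft assignment
print("row sums:", P.sum(axis=1).round(3))
print("matched indices:", matches)
```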
2503.17716 | Tim Alpherts | Tim Alpherts, Sennay Ghebreab, Nanne van Noord | EMPLACE: Self-Supervised Urban Scene Change Detection | 7 pages, 7 figures, published at AAAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Urban change is a constant process that influences the perception of
neighbourhoods and the lives of the people within them. The field of Urban
Scene Change Detection (USCD) aims to capture changes in street scenes using
computer vision and can help raise awareness of changes that make it possible
to better understand the city and its residents. Traditionally, the field of
USCD has used supervised methods with small scale datasets. This constrains
methods when applied to new cities, as it requires labour-intensive labeling
processes and forces a priori definitions of relevant change. In this paper we
introduce AC-1M, by far the largest USCD dataset, comprising over 1.1M images, together
with EMPLACE, a self-supervised method to train a Vision Transformer using our
adaptive triplet loss. We show EMPLACE outperforms SOTA methods both as a
pre-training method for linear fine-tuning as well as a zero-shot setting.
Lastly, in a case study of Amsterdam, we show that we are able to detect both
small and large changes throughout the city and that changes uncovered by
EMPLACE, depending on size, correlate with housing prices - which in turn is
indicative of inequity.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 10:20:43 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Alpherts",
"Tim",
""
],
[
"Ghebreab",
"Sennay",
""
],
[
"van Noord",
"Nanne",
""
]
] | TITLE: EMPLACE: Self-Supervised Urban Scene Change Detection
ABSTRACT: Urban change is a constant process that influences the perception of
neighbourhoods and the lives of the people within them. The field of Urban
Scene Change Detection (USCD) aims to capture changes in street scenes using
computer vision and can help raise awareness of changes that make it possible
to better understand the city and its residents. Traditionally, the field of
USCD has used supervised methods with small scale datasets. This constrains
methods when applied to new cities, as it requires labour-intensive labeling
processes and forces a priori definitions of relevant change. In this paper we
introduce AC-1M, by far the largest USCD dataset, comprising over 1.1M images, together
with EMPLACE, a self-supervised method to train a Vision Transformer using our
adaptive triplet loss. We show EMPLACE outperforms SOTA methods both as a
pre-training method for linear fine-tuning as well as a zero-shot setting.
Lastly, in a case study of Amsterdam, we show that we are able to detect both
small and large changes throughout the city and that changes uncovered by
EMPLACE, depending on size, correlate with housing prices - which in turn is
indicative of inequity.
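As a rough illustration of the triplet objective mentioned above, here is a plain triplet margin loss on normalized embeddings in PyTorch; EMPLACE's adaptive variant modulates this margin, which is only hinted at in the comment, and the tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Plain triplet margin loss on L2-normalized embeddings; the adaptive
    variant would modulate `margin` per sample, which is not shown here."""
    anchor, positive, negative = (F.normalize(x, dim=-1)
                                  for x in (anchor, positive, negative))
    d_pos = (anchor - positive).pow(2).sum(dim=-1)
    d_neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random stand-ins for ViT embeddings of three image crops.
emb = torch.randn(3, 8, 256, requires_grad=True)  # (role, batch, dim)
loss = triplet_loss(emb[0], emb[1], emb[2])
loss.backward()
```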
|
2503.17717 | Junxian Mu | Yu Wang, Junxian Mu, Hongzhi Huang, Qilong Wang, Pengfei Zhu, Qinghua
Hu | BackMix: Regularizing Open Set Recognition by Removing Underlying
Fore-Background Priors | 20 pages, 11 figures. Accepted by TPAMI | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open set recognition (OSR) requires models to classify known samples while
detecting unknown samples for real-world applications. Existing studies show
impressive progress using unknown samples from auxiliary datasets to regularize
OSR models, but they have proved to be sensitive to selecting such known
outliers. In this paper, we discuss the aforementioned problem from a new
perspective: Can we regularize OSR models without elaborately selecting
auxiliary known outliers? We first empirically and theoretically explore the
role of foregrounds and backgrounds in open set recognition and disclose that:
1) backgrounds that correlate with foregrounds would mislead the model and
cause failures when encountering 'partially' known images; 2) backgrounds
unrelated to foregrounds can serve as auxiliary known outliers and provide
regularization via global average pooling. Based on the above insights, we
propose a new method, Background Mix (BackMix), that mixes the foreground of an
image with different backgrounds to remove the underlying fore-background
priors. Specifically, BackMix first estimates the foreground with class
activation maps (CAMs), then randomly replaces image patches with backgrounds
from other images to obtain mixed images for training. With backgrounds
de-correlated from foregrounds, the open set recognition performance is
significantly improved. The proposed method is quite simple to implement,
requires no extra operation for inferences, and can be seamlessly integrated
into almost all of the existing frameworks. The code is released on
https://github.com/Vanixxz/BackMix.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 10:23:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Wang",
"Yu",
""
],
[
"Mu",
"Junxian",
""
],
[
"Huang",
"Hongzhi",
""
],
[
"Wang",
"Qilong",
""
],
[
"Zhu",
"Pengfei",
""
],
[
"Hu",
"Qinghua",
""
]
] | TITLE: BackMix: Regularizing Open Set Recognition by Removing Underlying
Fore-Background Priors
ABSTRACT: Open set recognition (OSR) requires models to classify known samples while
detecting unknown samples for real-world applications. Existing studies show
impressive progress using unknown samples from auxiliary datasets to regularize
OSR models, but they have proved to be sensitive to selecting such known
outliers. In this paper, we discuss the aforementioned problem from a new
perspective: Can we regularize OSR models without elaborately selecting
auxiliary known outliers? We first empirically and theoretically explore the
role of foregrounds and backgrounds in open set recognition and disclose that:
1) backgrounds that correlate with foregrounds would mislead the model and
cause failures when encountering 'partially' known images; 2) backgrounds
unrelated to foregrounds can serve as auxiliary known outliers and provide
regularization via global average pooling. Based on the above insights, we
propose a new method, Background Mix (BackMix), that mixes the foreground of an
image with different backgrounds to remove the underlying fore-background
priors. Specifically, BackMix first estimates the foreground with class
activation maps (CAMs), then randomly replaces image patches with backgrounds
from other images to obtain mixed images for training. With backgrounds
de-correlated from foregrounds, the open set recognition performance is
significantly improved. The proposed method is quite simple to implement,
requires no extra operation for inferences, and can be seamlessly integrated
into almost all of the existing frameworks. The code is released on
https://github.com/Vanixxz/BackMix.
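The patch-mixing step described above can be sketched as follows; the CAM is assumed to be a precomputed (H, W) map normalized to [0, 1], and the patch size and threshold are illustrative choices rather than the paper's settings.

```python
import numpy as np

def backmix(img, cam, bank, patch=16, thresh=0.4, rng=None):
    """Replace patches whose CAM response is low (likely background) with
    patches taken from donor images in `bank`; `cam` is an (H, W) activation
    map assumed to be normalized to [0, 1]."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if cam[y:y + patch, x:x + patch].mean() < thresh:  # background patch
                donor = bank[rng.integers(len(bank))]
                out[y:y + patch, x:x + patch] = donor[y:y + patch, x:x + patch]
    return out

# Toy usage: a 64x64 image with a synthetic centered CAM and two donor images.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
yy, xx = np.mgrid[0:64, 0:64]
cam = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 12.0 ** 2))
mixed = backmix(img, cam, bank=[rng.random((64, 64, 3)) for _ in range(2)], rng=rng)
```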
|
2503.17731 | Sungphill Moon | Sungphill Moon, Hyeontae Son, Dongcheol Hur, Sangwook Kim | Co-op: Correspondence-based Novel Object Pose Estimation | Accepted at CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Co-op, a novel method for accurately and robustly estimating the
6DoF pose of objects unseen during training from a single RGB image. Our method
requires only the CAD model of the target object and can precisely estimate its
pose without any additional fine-tuning. While existing model-based methods
suffer from inefficiency due to using a large number of templates, our method
enables fast and accurate estimation with a small number of templates. This
improvement is achieved by finding semi-dense correspondences between the input
image and the pre-rendered templates. Our method achieves strong generalization
performance by leveraging a hybrid representation that combines patch-level
classification and offset regression. Additionally, our pose refinement model
estimates probabilistic flow between the input image and the rendered image,
refining the initial estimate to an accurate pose using a differentiable PnP
layer. We demonstrate that our method not only estimates object poses rapidly
but also outperforms existing methods by a large margin on the seven core
datasets of the BOP Challenge, achieving state-of-the-art accuracy.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 11:24:19 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Moon",
"Sungphill",
""
],
[
"Son",
"Hyeontae",
""
],
[
"Hur",
"Dongcheol",
""
],
[
"Kim",
"Sangwook",
""
]
] | TITLE: Co-op: Correspondence-based Novel Object Pose Estimation
ABSTRACT: We propose Co-op, a novel method for accurately and robustly estimating the
6DoF pose of objects unseen during training from a single RGB image. Our method
requires only the CAD model of the target object and can precisely estimate its
pose without any additional fine-tuning. While existing model-based methods
suffer from inefficiency due to using a large number of templates, our method
enables fast and accurate estimation with a small number of templates. This
improvement is achieved by finding semi-dense correspondences between the input
image and the pre-rendered templates. Our method achieves strong generalization
performance by leveraging a hybrid representation that combines patch-level
classification and offset regression. Additionally, our pose refinement model
estimates probabilistic flow between the input image and the rendered image,
refining the initial estimate to an accurate pose using a differentiable PnP
layer. We demonstrate that our method not only estimates object poses rapidly
but also outperforms existing methods by a large margin on the seven core
datasets of the BOP Challenge, achieving state-of-the-art accuracy.
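The "semi-dense correspondence" idea can be illustrated with a simple cosine-similarity match between patch features of the input image and a rendered template; Co-op's actual patch-level classification and offset-regression heads are not reproduced here, and the feature shapes are invented for the example.

```python
import numpy as np

def patch_correspondences(feat_img, feat_tmpl, sim_thresh=0.5):
    """Cosine-similarity matching between patch features of the input image
    and of a rendered template, both shaped (num_patches, channels); returns
    (image_patch, template_patch) index pairs above the threshold."""
    a = feat_img / np.linalg.norm(feat_img, axis=1, keepdims=True)
    b = feat_tmpl / np.linalg.norm(feat_tmpl, axis=1, keepdims=True)
    sim = a @ b.T                                   # (N_img, N_tmpl)
    best = sim.argmax(axis=1)
    keep = sim[np.arange(len(a)), best] > sim_thresh
    return np.stack([np.where(keep)[0], best[keep]], axis=1)

# Toy usage with random 32-d features for 64 patches on each side.
rng = np.random.default_rng(0)
pairs = patch_correspondences(rng.normal(size=(64, 32)), rng.normal(size=(64, 32)))
```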
|
2503.17739 | Bashar Alhafni | Chatrine Qwaider, Bashar Alhafni, Kirill Chirkunov, Nizar Habash, Ted
Briscoe | Enhancing Arabic Automated Essay Scoring with Synthetic Data and Error
Injection | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated Essay Scoring (AES) plays a crucial role in assessing language
learners' writing quality, reducing grading workload, and providing real-time
feedback. Arabic AES systems are particularly challenged by the lack of
annotated essay datasets. This paper presents a novel framework leveraging
Large Language Models (LLMs) and Transformers to generate synthetic Arabic
essay datasets for AES. We prompt an LLM to generate essays across CEFR
proficiency levels and introduce controlled error injection using a fine-tuned
Standard Arabic BERT model for error type prediction. Our approach produces
realistic human-like essays, contributing a dataset of 3,040 annotated essays.
Additionally, we develop a BERT-based auto-marking system for accurate and
scalable Arabic essay evaluation. Experimental results demonstrate the
effectiveness of our framework in improving Arabic AES performance.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 11:54:10 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Qwaider",
"Chatrine",
""
],
[
"Alhafni",
"Bashar",
""
],
[
"Chirkunov",
"Kirill",
""
],
[
"Habash",
"Nizar",
""
],
[
"Briscoe",
"Ted",
""
]
] | TITLE: Enhancing Arabic Automated Essay Scoring with Synthetic Data and Error
Injection
ABSTRACT: Automated Essay Scoring (AES) plays a crucial role in assessing language
learners' writing quality, reducing grading workload, and providing real-time
feedback. Arabic AES systems are particularly challenged by the lack of
annotated essay datasets. This paper presents a novel framework leveraging
Large Language Models (LLMs) and Transformers to generate synthetic Arabic
essay datasets for AES. We prompt an LLM to generate essays across CEFR
proficiency levels and introduce controlled error injection using a fine-tuned
Standard Arabic BERT model for error type prediction. Our approach produces
realistic human-like essays, contributing a dataset of 3,040 annotated essays.
Additionally, we develop a BERT-based auto-marking system for accurate and
scalable Arabic essay evaluation. Experimental results demonstrate the
effectiveness of our framework in improving Arabic AES performance.
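A minimal sketch of the controlled error-injection step follows; the error-type predictor is stubbed with a random choice where the paper uses a fine-tuned Arabic BERT model, and the error taxonomy shown is hypothetical.

```python
import random

def predict_error_type(word):
    """Stub for the error-type predictor; the paper fine-tunes an Arabic BERT
    model for this, here a random choice over a hypothetical taxonomy."""
    return random.choice(["swap_letters", "drop_word", "none", "none"])

def inject_errors(essay, seed=0):
    """Apply controlled word-level error injection to a clean synthetic essay."""
    random.seed(seed)
    out = []
    for word in essay.split():
        kind = predict_error_type(word)
        if kind == "swap_letters" and len(word) > 3:
            i = random.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]  # transpose letters
        elif kind == "drop_word":
            continue
        out.append(word)
    return " ".join(out)

noisy = inject_errors("this stands in for a clean synthetic essay from the llm")
```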
|
2503.17752 | R.D. Lin | R.D. Lin, Pengcheng Weng, Yinqiao Wang, Han Ding, Jinsong Han, Fei
Wang | HiLoTs: High-Low Temporal Sensitive Representation Learning for
Semi-Supervised LiDAR Segmentation in Autonomous Driving | accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LiDAR point cloud semantic segmentation plays a crucial role in autonomous
driving. In recent years, semi-supervised methods have gained popularity due to
their significant reduction in annotation labor and time costs. Current
semi-supervised methods typically focus on point cloud spatial distribution or
consider short-term temporal representations, e.g., only two adjacent frames,
often overlooking the rich long-term temporal properties inherent in autonomous
driving scenarios. In driving experience, we observe that nearby objects, such
as roads and vehicles, remain stable while driving, whereas distant objects
exhibit greater variability in category and shape. This natural phenomenon is
also captured by LiDAR, which reflects lower temporal sensitivity for nearby
objects and higher sensitivity for distant ones. To leverage these
characteristics, we propose HiLoTs, which learns high-temporal sensitivity and
low-temporal sensitivity representations from continuous LiDAR frames. These
representations are further enhanced and fused using a cross-attention
mechanism. Additionally, we employ a teacher-student framework to align the
representations learned by the labeled and unlabeled branches, effectively
utilizing the large amounts of unlabeled data. Experimental results on the
SemanticKITTI and nuScenes datasets demonstrate that our proposed HiLoTs
outperforms state-of-the-art semi-supervised methods, and achieves performance
close to LiDAR+Camera multimodal approaches. Code is available at
https://github.com/rdlin118/HiLoTs
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 12:29:15 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lin",
"R. D.",
""
],
[
"Weng",
"Pengcheng",
""
],
[
"Wang",
"Yinqiao",
""
],
[
"Ding",
"Han",
""
],
[
"Han",
"Jinsong",
""
],
[
"Wang",
"Fei",
""
]
] | TITLE: HiLoTs: High-Low Temporal Sensitive Representation Learning for
Semi-Supervised LiDAR Segmentation in Autonomous Driving
ABSTRACT: LiDAR point cloud semantic segmentation plays a crucial role in autonomous
driving. In recent years, semi-supervised methods have gained popularity due to
their significant reduction in annotation labor and time costs. Current
semi-supervised methods typically focus on point cloud spatial distribution or
consider short-term temporal representations, e.g., only two adjacent frames,
often overlooking the rich long-term temporal properties inherent in autonomous
driving scenarios. In driving experience, we observe that nearby objects, such
as roads and vehicles, remain stable while driving, whereas distant objects
exhibit greater variability in category and shape. This natural phenomenon is
also captured by LiDAR, which reflects lower temporal sensitivity for nearby
objects and higher sensitivity for distant ones. To leverage these
characteristics, we propose HiLoTs, which learns high-temporal sensitivity and
low-temporal sensitivity representations from continuous LiDAR frames. These
representations are further enhanced and fused using a cross-attention
mechanism. Additionally, we employ a teacher-student framework to align the
representations learned by the labeled and unlabeled branches, effectively
utilizing the large amounts of unlabeled data. Experimental results on the
SemanticKITTI and nuScenes datasets demonstrate that our proposed HiLoTs
outperforms state-of-the-art semi-supervised methods, and achieves performance
close to LiDAR+Camera multimodal approaches. Code is available at
https://github.com/rdlin118/HiLoTs
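The cross-attention fusion of the two temporal-sensitivity branches can be sketched with a single scaled dot-product attention step, as below; the learned projections, masking, and HiLoTs' exact branch roles are omitted, and the tensor shapes are illustrative.

```python
import torch

def cross_attention_fuse(query_feats, context_feats):
    """Fuse the two temporal-sensitivity branches with one scaled dot-product
    cross-attention step (queries from one branch, keys/values from the other);
    learned projections are omitted for brevity."""
    d = query_feats.shape[-1]
    attn = torch.softmax(query_feats @ context_feats.transpose(-2, -1) / d ** 0.5, dim=-1)
    return query_feats + attn @ context_feats       # residual fusion

high_ts = torch.randn(2, 1024, 64)   # (batch, points, channels), one branch
low_ts = torch.randn(2, 1024, 64)    # the other branch
fused = cross_attention_fuse(high_ts, low_ts)
```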
|
2503.17755 | Sharan Maiya | Sharan Maiya, Yinhong Liu, Ramit Debnath, Anna Korhonen | Improving Preference Extraction In LLMs By Identifying Latent Knowledge
Through Classifying Probes | preprint, submitted to ACL ARR 2025, 21 pages, 23 figures | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are often used as automated judges to evaluate
text, but their effectiveness can be hindered by various unintentional biases.
We propose using linear classifying probes, trained by leveraging differences
between contrasting pairs of prompts, to directly access LLMs' latent knowledge
and extract more accurate preferences. Through extensive experiments using
models of varying size from four different families and six diverse datasets
assessing text quality evaluation and common sense reasoning, we demonstrate
that both supervised and unsupervised probing approaches consistently
outperform traditional generation-based judgement while maintaining similar
computational costs. These probes generalise under domain shifts and can even
outperform finetuned evaluators with the same training data size. Our results
suggest linear probing offers an accurate, robust and computationally efficient
approach for LLM-as-judge tasks while providing interpretable insights into how
models encode judgement-relevant knowledge. Our data and code will be openly
released in the future.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 12:35:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Maiya",
"Sharan",
""
],
[
"Liu",
"Yinhong",
""
],
[
"Debnath",
"Ramit",
""
],
[
"Korhonen",
"Anna",
""
]
] | TITLE: Improving Preference Extraction In LLMs By Identifying Latent Knowledge
Through Classifying Probes
ABSTRACT: Large Language Models (LLMs) are often used as automated judges to evaluate
text, but their effectiveness can be hindered by various unintentional biases.
We propose using linear classifying probes, trained by leveraging differences
between contrasting pairs of prompts, to directly access LLMs' latent knowledge
and extract more accurate preferences. Through extensive experiments using
models of varying size from four different families and six diverse datasets
assessing text quality evaluation and common sense reasoning, we demonstrate
that both supervised and unsupervised probing approaches consistently
outperform traditional generation-based judgement while maintaining similar
computational costs. These probes generalise under domain shifts and can even
outperform finetuned evaluators with the same training data size. Our results
suggest linear probing offers an accurate, robust and computationally efficient
approach for LLM-as-judge tasks while providing interpretable insights into how
models encode judgement-relevant knowledge. Our data and code will be openly
released in the future.
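A minimal version of the supervised probing idea, assuming hidden states for contrasting prompt pairs have already been extracted, is sketched below with synthetic data standing in for LLM activations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for LLM hidden states of contrasting prompt pairs:
# h_pos[i] and h_neg[i] come from the two prompts of pair i, and y[i] is the
# preference label the probe should recover.
rng = np.random.default_rng(0)
direction = rng.normal(size=256)                   # hidden "preference" axis
y = rng.integers(0, 2, size=400)
h_pos = rng.normal(size=(400, 256)) + np.outer(y, direction) * 0.5
h_neg = rng.normal(size=(400, 256)) - np.outer(y, direction) * 0.5

# Supervised linear probe trained on the difference of each pair's states.
probe = LogisticRegression(max_iter=1000).fit(h_pos - h_neg, y)
print("probe accuracy on the synthetic pairs:", probe.score(h_pos - h_neg, y))
```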
|
2503.17770 | Jianhua Pei | Yixiang Huang, Jianhua Pei, Luocheng Chen, Zhenchang Du, Jinfu Chen,
Zirui Peng | Probabilistic Net Load Forecasting for High-Penetration RES Grids
Utilizing Enhanced Conditional Diffusion Model | null | null | null | null | eess.SY cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of intermittent distributed renewable energy sources (RES)
in modern power systems has fundamentally compromised the reliability and
accuracy of deterministic net load forecasting. Generative models, particularly
diffusion models, demonstrate exceptional potential in uncertainty
quantification for scenario forecasting. Nevertheless, their probabilistic
predictive capabilities and conditional bootstrapping mechanisms still remain
underexplored. In this paper, a day-ahead probabilistic net load forecasting
framework is developed by systematically quantifying epistemic uncertainty and
aleatoric variability using the feature-informed enhanced conditional diffusion
model (ECDM). The ECDM architecture implements the net load distribution
generation process using an imputation-based conditional diffusion model, where
multi-modal conditional inputs, such as weather and calendar data, are fused
via cross-attention mechanisms. Specifically, historical net load profiles are
utilized to guide the reverse diffusion trajectory through non-parametric
imputation operators preserving spatial-temporal integrity. To capture periodic
characteristics, a novel weekly arrangement method is also introduced, while an
unconditional model is integrated to ensure diversity in the generated
scenarios. Subsequently, the maximum probabilistic points and probability
intervals of predicted net load are obtained by the adaptive kernel density
estimation under RES intermittency. Moreover, ECDM is extended to a multi-energy
forecasting framework, attempting to increase the interpretability of the net load
predictions. Numerical experiments on a publicly available dataset demonstrate
the superior forecasting performance of the proposed method compared to
existing state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 13:40:08 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Huang",
"Yixiang",
""
],
[
"Pei",
"Jianhua",
""
],
[
"Chen",
"Luocheng",
""
],
[
"Du",
"Zhenchang",
""
],
[
"Chen",
"Jinfu",
""
],
[
"Peng",
"Zirui",
""
]
] | TITLE: Probabilistic Net Load Forecasting for High-Penetration RES Grids
Utilizing Enhanced Conditional Diffusion Model
ABSTRACT: The proliferation of intermittent distributed renewable energy sources (RES)
in modern power systems has fundamentally compromised the reliability and
accuracy of deterministic net load forecasting. Generative models, particularly
diffusion models, demonstrate exceptional potential in uncertainty
quantification for scenario forecasting. Nevertheless, their probabilistic
predictive capabilities and conditional bootstrapping mechanisms still remain
underexplored. In this paper, a day-ahead probabilistic net load forecasting
framework is developed by systematically quantifying epistemic uncertainty and
aleatoric variability using the feature-informed enhanced conditional diffusion
model (ECDM). The ECDM architecture implements the net load distribution
generation process using an imputation-based conditional diffusion model, where
multi-modal conditional inputs, such as weather and calendar data, are fused
via cross-attention mechanisms. Specifically, historical net load profiles are
utilized to guide the reverse diffusion trajectory through non-parametric
imputation operators preserving spatial-temporal integrity. To capture periodic
characteristics, a novel weekly arrangement method is also introduced, while an
unconditional model is integrated to ensure diversity in the generated
scenarios. Subsequently, the maximum probabilistic points and probability
intervals of predicted net load are obtained by the adaptive kernel density
estimation under RES intermittency. Moreover, ECDM is extended to a multi-energy
forecasting framework, attempting to increase the interpretability of the net load
predictions. Numerical experiments on a publicly available dataset demonstrate
the superior forecasting performance of the proposed method compared to
existing state-of-the-art approaches.
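The final step, turning generated scenarios into a maximum-probability point and a probability interval per hour, can be sketched as follows; scipy's default KDE bandwidth is used in place of the paper's adaptive kernel density estimation, and the scenario data are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

def summarize_scenarios(scenarios, alpha=0.1):
    """From generated net-load scenarios (n_scenarios, n_hours), return the
    per-hour density mode (maximum-probability point) and a central
    (1 - alpha) probability interval."""
    grid = np.linspace(scenarios.min(), scenarios.max(), 512)
    modes, lo, hi = [], [], []
    for hour in scenarios.T:
        kde = gaussian_kde(hour)                 # default bandwidth, not adaptive
        modes.append(grid[np.argmax(kde(grid))])
        lo.append(np.quantile(hour, alpha / 2))
        hi.append(np.quantile(hour, 1 - alpha / 2))
    return np.array(modes), np.array(lo), np.array(hi)

rng = np.random.default_rng(0)
fake = rng.normal(loc=np.sin(np.linspace(0, 2 * np.pi, 24)), scale=0.2, size=(200, 24))
point, low, high = summarize_scenarios(fake)
```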
|
2503.17777 | Lei Guo | Lei Guo, Wei Chen, Yuxuan Sun, Bo Ai, Nikolaos Pappas, Tony Quek | Hierarchy-Aware and Channel-Adaptive Semantic Communication for
Bandwidth-Limited Data Fusion | Accepted by the WCL | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obtaining high-resolution hyperspectral images (HR-HSI) is costly and
data-intensive, making it necessary to fuse low-resolution hyperspectral images
(LR-HSI) with high-resolution RGB images (HR-RGB) for practical applications.
However, traditional fusion techniques, which integrate detailed information
into the reconstruction, significantly increase bandwidth consumption compared
to directly transmitting raw data. To overcome these challenges, we propose a
hierarchy-aware and channel-adaptive semantic communication approach for
bandwidth-limited data fusion. A hierarchical correlation module is proposed to
preserve both the overall structural information and the details of the image
required for super-resolution. This module efficiently combines deep semantic
and shallow features from LR-HSI and HR-RGB. To further reduce bandwidth usage
while preserving reconstruction quality, a channel-adaptive attention mechanism
based on Transformer is proposed to dynamically integrate and transmit the deep
and shallow features, enabling efficient data transmission and high-quality
HR-HSI reconstruction. Experimental results on the CAVE and Washington DC Mall
datasets demonstrate that our method outperforms single-source transmission,
achieving up to a 2 dB improvement in peak signal-to-noise ratio (PSNR).
Additionally, it reduces bandwidth consumption by two-thirds, confirming its
effectiveness in bandwidth-constrained environments for HR-HSI reconstruction
tasks.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 14:02:52 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Guo",
"Lei",
""
],
[
"Chen",
"Wei",
""
],
[
"Sun",
"Yuxuan",
""
],
[
"Ai",
"Bo",
""
],
[
"Pappas",
"Nikolaos",
""
],
[
"Quek",
"Tony",
""
]
] | TITLE: Hierarchy-Aware and Channel-Adaptive Semantic Communication for
Bandwidth-Limited Data Fusion
ABSTRACT: Obtaining high-resolution hyperspectral images (HR-HSI) is costly and
data-intensive, making it necessary to fuse low-resolution hyperspectral images
(LR-HSI) with high-resolution RGB images (HR-RGB) for practical applications.
However, traditional fusion techniques, which integrate detailed information
into the reconstruction, significantly increase bandwidth consumption compared
to directly transmitting raw data. To overcome these challenges, we propose a
hierarchy-aware and channel-adaptive semantic communication approach for
bandwidth-limited data fusion. A hierarchical correlation module is proposed to
preserve both the overall structural information and the details of the image
required for super-resolution. This module efficiently combines deep semantic
and shallow features from LR-HSI and HR-RGB. To further reduce bandwidth usage
while preserving reconstruction quality, a channel-adaptive attention mechanism
based on Transformer is proposed to dynamically integrate and transmit the deep
and shallow features, enabling efficient data transmission and high-quality
HR-HSI reconstruction. Experimental results on the CAVE and Washington DC Mall
datasets demonstrate that our method outperforms single-source transmission,
achieving up to a 2 dB improvement in peak signal-to-noise ratio (PSNR).
Additionally, it reduces bandwidth consumption by two-thirds, confirming its
effectiveness in bandwidth-constrained environments for HR-HSI reconstruction
tasks.
|
2503.17783 | Nguyen Phuc Tran | Nguyen Phuc Tran, Brigitte Jaumard, Oscar Delgado | Energy-Aware LLMs: A step towards sustainable AI for downstream
applications | This work has been submitted to V. International Conference on
Electrical, Computer and Energy Technologies (ICECET 2025) for possible
publication | null | null | null | cs.PF cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advanced Large Language Models (LLMs) have revolutionized various fields,
including communication networks, sparking an innovation wave that has led to
new applications and services, and significantly enhanced solution schemes.
Despite all these impressive developments, most LLMs typically require huge
computational resources, resulting in terribly high energy consumption. Thus,
this research study proposes an end-to-end pipeline that investigates the
trade-off between energy efficiency and model performance for an LLM during
fault ticket analysis in communication networks. It further evaluates the
pipeline performance using two real-world datasets for the tasks of root cause
analysis and response feedback in a communication network. Our results show
that an appropriate combination of quantization and pruning techniques is able
to reduce energy consumption while significantly improving model performance.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 14:28:29 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Tran",
"Nguyen Phuc",
""
],
[
"Jaumard",
"Brigitte",
""
],
[
"Delgado",
"Oscar",
""
]
] | TITLE: Energy-Aware LLMs: A step towards sustainable AI for downstream
applications
ABSTRACT: Advanced Large Language Models (LLMs) have revolutionized various fields,
including communication networks, sparking an innovation wave that has led to
new applications and services, and significantly enhanced solution schemes.
Despite all these impressive developments, most LLMs typically require huge
computational resources, resulting in terribly high energy consumption. Thus,
this research study proposes an end-to-end pipeline that investigates the
trade-off between energy efficiency and model performance for an LLM during
fault ticket analysis in communication networks. It further evaluates the
pipeline performance using two real-world datasets for the tasks of root cause
analysis and response feedback in a communication network. Our results show
that an appropriate combination of quantization and pruning techniques is able
to reduce energy consumption while significantly improving model performance.
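The quantization-plus-pruning recipe can be illustrated on a small stand-in torch module (the study applies it to an LLM); the sparsity amount and int8 dynamic quantization below are generic choices, not the pipeline's tuned settings.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small stand-in classifier; the paper applies the recipe to an LLM.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 8))

# 1) Unstructured magnitude pruning of 30% of each Linear layer's weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")           # bake the sparsity in

# 2) Post-training dynamic quantization of the Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
out = quantized(torch.randn(4, 512))             # CPU inference on the compressed model
```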
|
2503.17786 | Tommaso Di Noto | Tommaso Di Noto, Sofyan Jankowski, Francesco Puccinelli, Guillaume
Marie, Sebastien Tourbier, Yasser Aleman-Gomez, Oscar Esteban, Ricardo
Corredor-Jerez, Guillaume Saliou, Patric Hagmann, Meritxell Bach Cuadra,
Jonas Richiardi | Assessing workflow impact and clinical utility of AI-assisted brain
aneurysm detection: a multi-reader study | Paper under review with a Journal in the medical imaging field | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Despite the plethora of AI-based algorithms developed for anomaly detection
in radiology, subsequent integration into clinical setting is rarely evaluated.
In this work, we assess the applicability and utility of an AI-based model for
brain aneurysm detection comparing the performance of two readers with
different levels of experience (2 and 13 years). We aim to answer the following
questions: 1) Do the readers improve their performance when assisted by the AI
algorithm? 2) How much does the AI algorithm impact routine clinical workflow?
We reuse and enlarge our open-access, Time-Of-Flight Magnetic Resonance
Angiography dataset (N=460). We use 360 subjects for training/validating our
algorithm and 100 as unseen test set for the reading session. Even though our
model reaches state-of-the-art results on the test set (sensitivity=74%, false
positive rate=1.6), we show that neither the junior nor the senior reader
significantly increases their sensitivity (p=0.59, p=1, respectively). In
addition, we find that reading time for both readers is significantly higher in
the "AI-assisted" setting than in the "Unassisted" (+15 seconds, on average;
p=3x10^(-4) junior, p=3x10^(-5) senior). The confidence reported by the readers
is unchanged across the two settings, indicating that the AI assistance does
not influence the certainty of the diagnosis. Our findings highlight the
importance of clinical validation of AI algorithms in a clinical setting
involving radiologists. This study should serve as a reminder to the community
to always examine the real-world effectiveness and workflow impact of proposed
algorithms.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 14:32:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Di Noto",
"Tommaso",
""
],
[
"Jankowski",
"Sofyan",
""
],
[
"Puccinelli",
"Francesco",
""
],
[
"Marie",
"Guillaume",
""
],
[
"Tourbier",
"Sebastien",
""
],
[
"Aleman-Gomez",
"Yasser",
""
],
[
"Esteban",
"Oscar",
""
],
[
"Corredor-Jerez",
"Ricardo",
""
],
[
"Saliou",
"Guillaume",
""
],
[
"Hagmann",
"Patric",
""
],
[
"Cuadra",
"Meritxell Bach",
""
],
[
"Richiardi",
"Jonas",
""
]
] | TITLE: Assessing workflow impact and clinical utility of AI-assisted brain
aneurysm detection: a multi-reader study
ABSTRACT: Despite the plethora of AI-based algorithms developed for anomaly detection
in radiology, subsequent integration into clinical setting is rarely evaluated.
In this work, we assess the applicability and utility of an AI-based model for
brain aneurysm detection comparing the performance of two readers with
different levels of experience (2 and 13 years). We aim to answer the following
questions: 1) Do the readers improve their performance when assisted by the AI
algorithm? 2) How much does the AI algorithm impact routine clinical workflow?
We reuse and enlarge our open-access, Time-Of-Flight Magnetic Resonance
Angiography dataset (N=460). We use 360 subjects for training/validating our
algorithm and 100 as unseen test set for the reading session. Even though our
model reaches state-of-the-art results on the test set (sensitivity=74%, false
positive rate=1.6), we show that neither the junior nor the senior reader
significantly increases their sensitivity (p=0.59, p=1, respectively). In
addition, we find that reading time for both readers is significantly higher in
the "AI-assisted" setting than in the "Unassisted" (+15 seconds, on average;
p=3x10^(-4) junior, p=3x10^(-5) senior). The confidence reported by the readers
is unchanged across the two settings, indicating that the AI assistance does
not influence the certainty of the diagnosis. Our findings highlight the
importance of clinical validation of AI algorithms in a clinical setting
involving radiologists. This study should serve as a reminder to the community
to always examine the real-world effectiveness and workflow impact of proposed
algorithms.
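The reading-time comparison can be reproduced in spirit with a paired non-parametric test; the abstract does not state which test was used, so the Wilcoxon signed-rank test below, run on simulated per-case times, is only one reasonable choice.

```python
import numpy as np
from scipy.stats import wilcoxon

# Simulated per-case reading times (seconds) for one reader on 100 test cases,
# roughly matching the reported +15 s average in the AI-assisted setting.
rng = np.random.default_rng(0)
unassisted = rng.normal(loc=60, scale=15, size=100)
assisted = unassisted + rng.normal(loc=15, scale=10, size=100)

stat, p_value = wilcoxon(assisted, unassisted)   # paired, non-parametric test
print(f"median increase: {np.median(assisted - unassisted):.1f} s, p = {p_value:.1e}")
```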
|
2503.17788 | Gaoge Han | Gaoge Han, Yongkang Cheng, Zhe Chen, Shaoli Huang, Tongliang Liu | Aligning Foundation Model Priors and Diffusion-Based Hand Interactions
for Occlusion-Resistant Two-Hand Reconstruction | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Two-hand reconstruction from monocular images faces persistent challenges due
to complex and dynamic hand postures and occlusions, causing significant
difficulty in achieving plausible interaction alignment. Existing approaches
struggle with such alignment issues, often resulting in misalignment and
penetration artifacts. To tackle this, we propose a novel framework that
attempts to precisely align hand poses and interactions by synergistically
integrating foundation model-driven 2D priors with diffusion-based interaction
refinement for occlusion-resistant two-hand reconstruction. First, we introduce
a Fusion Alignment Encoder that learns to align fused multimodal priors
(keypoints, segmentation maps, and depth cues) from foundation models during
training. This provides robust structured guidance, further enabling efficient
inference without foundation models at test time while maintaining high
reconstruction accuracy. Second, we employ a two-hand diffusion model
explicitly trained to transform interpenetrated poses into plausible,
non-penetrated interactions, leveraging gradient-guided denoising to correct
artifacts and ensure realistic spatial relations. Extensive evaluations
demonstrate that our method achieves state-of-the-art performance on
InterHand2.6M, FreiHAND, and HIC datasets, significantly advancing occlusion
handling and interaction robustness.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 14:42:27 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Han",
"Gaoge",
""
],
[
"Cheng",
"Yongkang",
""
],
[
"Chen",
"Zhe",
""
],
[
"Huang",
"Shaoli",
""
],
[
"Liu",
"Tongliang",
""
]
] | TITLE: Aligning Foundation Model Priors and Diffusion-Based Hand Interactions
for Occlusion-Resistant Two-Hand Reconstruction
ABSTRACT: Two-hand reconstruction from monocular images faces persistent challenges due
to complex and dynamic hand postures and occlusions, causing significant
difficulty in achieving plausible interaction alignment. Existing approaches
struggle with such alignment issues, often resulting in misalignment and
penetration artifacts. To tackle this, we propose a novel framework that
attempts to precisely align hand poses and interactions by synergistically
integrating foundation model-driven 2D priors with diffusion-based interaction
refinement for occlusion-resistant two-hand reconstruction. First, we introduce
a Fusion Alignment Encoder that learns to align fused multimodal priors
(keypoints, segmentation maps, and depth cues) from foundation models during
training. This provides robust structured guidance, further enabling efficient
inference without foundation models at test time while maintaining high
reconstruction accuracy. Second, we employ a two-hand diffusion model
explicitly trained to transform interpenetrated poses into plausible,
non-penetrated interactions, leveraging gradient-guided denoising to correct
artifacts and ensure realistic spatial relations. Extensive evaluations
demonstrate that our method achieves state-of-the-art performance on
InterHand2.6M, FreiHAND, and HIC datasets, significantly advancing occlusion
handling and interaction robustness.
|
2503.17799 | Ramakanth Kavuluru | Yuhang Jiang and Ramakanth Kavuluru | Relation Extraction with Instance-Adapted Predicate Descriptions | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Relation extraction (RE) is a standard information extraction task playing a
major role in downstream applications such as knowledge discovery and question
answering. Although decoder-only large language models are excelling in
generative tasks, smaller encoder models are still the go-to architecture for
RE. In this paper, we revisit fine-tuning such smaller models using a novel
dual-encoder architecture with a joint contrastive and cross-entropy loss.
Unlike previous methods that employ a fixed linear layer for predicate
representations, our approach uses a second encoder to compute
instance-specific predicate representations by infusing them with real entity
spans from corresponding input instances. We conducted experiments on two
biomedical RE datasets and two general domain datasets. Our approach achieved
F1 score improvements ranging from 1% to 2% over state-of-the-art methods with
a simple but elegant formulation. Ablation studies justify the importance of
various components built into the proposed architecture.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 15:36:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jiang",
"Yuhang",
""
],
[
"Kavuluru",
"Ramakanth",
""
]
] | TITLE: Relation Extraction with Instance-Adapted Predicate Descriptions
ABSTRACT: Relation extraction (RE) is a standard information extraction task playing a
major role in downstream applications such as knowledge discovery and question
answering. Although decoder-only large language models are excelling in
generative tasks, smaller encoder models are still the go-to architecture for
RE. In this paper, we revisit fine-tuning such smaller models using a novel
dual-encoder architecture with a joint contrastive and cross-entropy loss.
Unlike previous methods that employ a fixed linear layer for predicate
representations, our approach uses a second encoder to compute
instance-specific predicate representations by infusing them with real entity
spans from corresponding input instances. We conducted experiments on two
biomedical RE datasets and two general domain datasets. Our approach achieved
F1 score improvements ranging from 1% to 2% over state-of-the-art methods with
a simple but elegant formulation. Ablation studies justify the importance of
various components built into the proposed architecture.
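A sketch of a joint contrastive and cross-entropy objective over instance embeddings and instance-adapted predicate representations is given below; the weighting, temperature, and the in-batch form of the contrastive term are assumptions and may differ from the paper's formulation.

```python
import torch
import torch.nn.functional as F

def joint_loss(instance_emb, predicate_emb, labels, tau=0.07, alpha=0.5):
    """Joint cross-entropy + in-batch contrastive objective.
    instance_emb:  (B, D) sentence-level embeddings
    predicate_emb: (B, K, D) instance-adapted predicate representations
    labels:        (B,) gold predicate indices"""
    inst = F.normalize(instance_emb, dim=-1)
    preds = F.normalize(predicate_emb, dim=-1)

    # Cross-entropy over each instance's own K predicate representations.
    logits = torch.einsum("bd,bkd->bk", inst, preds) / tau
    ce = F.cross_entropy(logits, labels)

    # Contrastive term: pull each instance towards its gold predicate
    # representation and away from other instances' gold predicates.
    gold = preds[torch.arange(len(labels)), labels]       # (B, D)
    sim = inst @ gold.T / tau
    contrastive = F.cross_entropy(sim, torch.arange(len(labels)))
    return alpha * ce + (1 - alpha) * contrastive

B, K, D = 8, 5, 128
loss = joint_loss(torch.randn(B, D), torch.randn(B, K, D), torch.randint(0, K, (B,)))
```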
|
2503.17809 | Zheng Tracy Ke | Morgane Austern, Yuanchuan Guo, Zheng Tracy Ke, Tianle Liu | Poisson-Process Topic Model for Integrating Knowledge from Pre-trained
Language Models | 35 pages, 9 figures, 3 tables | null | null | null | stat.ML cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topic modeling is traditionally applied to word counts without accounting for
the context in which words appear. Recent advancements in large language models
(LLMs) offer contextualized word embeddings, which capture deeper meaning and
relationships between words. We aim to leverage such embeddings to improve
topic modeling.
We use a pre-trained LLM to convert each document into a sequence of word
embeddings. This sequence is then modeled as a Poisson point process, with its
intensity measure expressed as a convex combination of $K$ base measures, each
corresponding to a topic. To estimate these topics, we propose a flexible
algorithm that integrates traditional topic modeling methods, enhanced by
net-rounding applied before and kernel smoothing applied after. One advantage
of this framework is that it treats the LLM as a black box, requiring no
fine-tuning of its parameters. Another advantage is its ability to seamlessly
integrate any traditional topic modeling approach as a plug-in module, without
the need for modifications.
Assuming each topic is a $\beta$-H\"{o}lder smooth intensity measure on the
embedded space, we establish the rate of convergence of our method. We also
provide a minimax lower bound and show that the rate of our method matches
the lower bound when $\beta\leq 1$. Additionally, we apply our method to
several datasets, providing evidence that it offers an advantage over
traditional topic modeling approaches.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 16:19:04 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Austern",
"Morgane",
""
],
[
"Guo",
"Yuanchuan",
""
],
[
"Ke",
"Zheng Tracy",
""
],
[
"Liu",
"Tianle",
""
]
] | TITLE: Poisson-Process Topic Model for Integrating Knowledge from Pre-trained
Language Models
ABSTRACT: Topic modeling is traditionally applied to word counts without accounting for
the context in which words appear. Recent advancements in large language models
(LLMs) offer contextualized word embeddings, which capture deeper meaning and
relationships between words. We aim to leverage such embeddings to improve
topic modeling.
We use a pre-trained LLM to convert each document into a sequence of word
embeddings. This sequence is then modeled as a Poisson point process, with its
intensity measure expressed as a convex combination of $K$ base measures, each
corresponding to a topic. To estimate these topics, we propose a flexible
algorithm that integrates traditional topic modeling methods, enhanced by
net-rounding applied before and kernel smoothing applied after. One advantage
of this framework is that it treats the LLM as a black box, requiring no
fine-tuning of its parameters. Another advantage is its ability to seamlessly
integrate any traditional topic modeling approach as a plug-in module, without
the need for modifications.
Assuming each topic is a $\beta$-H\"{o}lder smooth intensity measure on the
embedded space, we establish the rate of convergence of our method. We also
provide a minimax lower bound and show that the rate of our method matches
the lower bound when $\beta\leq 1$. Additionally, we apply our method to
several datasets, providing evidence that it offers an advantage over
traditional topic modeling approaches.
|
2503.17814 | Wen Li | Wen Li, Chen Liu, Shangshu Yu, Dunqiang Liu, Yin Zhou, Siqi Shen,
Chenglu Wen and Cheng Wang | LightLoc: Learning Outdoor LiDAR Localization at Light Speed | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene coordinate regression achieves impressive results in outdoor LiDAR
localization but requires days of training. Since training needs to be repeated
for each new scene, long training times make these methods impractical for
time-sensitive applications, such as autonomous driving, drones, and robotics.
We identify large coverage areas and vast data in large-scale outdoor scenes as
key challenges that limit fast training. In this paper, we propose LightLoc,
the first method capable of efficiently learning localization in a new scene at
light speed. LightLoc introduces two novel techniques to address these
challenges. First, we introduce sample classification guidance to assist
regression learning, reducing ambiguity from similar samples and improving
training efficiency. Second, we propose redundant sample downsampling to remove
well-learned frames during training, reducing training time without
compromising accuracy. Additionally, the fast training and confidence
estimation capabilities of sample classification enable its integration into
SLAM, effectively eliminating error accumulation. Extensive experiments on
large-scale outdoor datasets demonstrate that LightLoc achieves
state-of-the-art performance with a 50x reduction in training time compared to
existing methods. Our code is available at https://github.com/liw95/LightLoc.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 16:33:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Li",
"Wen",
""
],
[
"Liu",
"Chen",
""
],
[
"Yu",
"Shangshu",
""
],
[
"Liu",
"Dunqiang",
""
],
[
"Zhou",
"Yin",
""
],
[
"Shen",
"Siqi",
""
],
[
"Wen",
"Chenglu",
""
],
[
"Wang",
"Cheng",
""
]
] | TITLE: LightLoc: Learning Outdoor LiDAR Localization at Light Speed
ABSTRACT: Scene coordinate regression achieves impressive results in outdoor LiDAR
localization but requires days of training. Since training needs to be repeated
for each new scene, long training times make these methods impractical for
time-sensitive applications, such as autonomous driving, drones, and robotics.
We identify large coverage areas and vast data in large-scale outdoor scenes as
key challenges that limit fast training. In this paper, we propose LightLoc,
the first method capable of efficiently learning localization in a new scene at
light speed. LightLoc introduces two novel techniques to address these
challenges. First, we introduce sample classification guidance to assist
regression learning, reducing ambiguity from similar samples and improving
training efficiency. Second, we propose redundant sample downsampling to remove
well-learned frames during training, reducing training time without
compromising accuracy. Additionally, the fast training and confidence
estimation capabilities of sample classification enable its integration into
SLAM, effectively eliminating error accumulation. Extensive experiments on
large-scale outdoor datasets demonstrate that LightLoc achieves
state-of-the-art performance with a 50x reduction in training time compared to
existing methods. Our code is available at https://github.com/liw95/LightLoc.
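The redundant-sample downsampling idea reduces, in its simplest form, to dropping frames the model already fits well; the loss-ranking criterion and keep ratio below are illustrative, not LightLoc's exact schedule.

```python
import numpy as np

def downsample_well_learned(frame_losses, keep_ratio=0.4):
    """Keep only the fraction of training frames with the highest current loss,
    dropping frames the regressor already fits well."""
    k = max(1, int(len(frame_losses) * keep_ratio))
    return np.argsort(frame_losses)[-k:]             # indices of frames to keep

losses = np.random.default_rng(0).random(10_000)     # stand-in per-frame losses
active_indices = downsample_well_learned(losses)
```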
|
2503.17820 | Zheng Lin | Zheng Lin, Nan Zhou, Chen-Xi Du, Deng-Ping Fan, Shi-Min Hu | RefCut: Interactive Segmentation with Reference Guidance | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactive segmentation aims to segment the specified target on the image
with positive and negative clicks from users. Interactive ambiguity is a
crucial issue in this field, which refers to the possibility of multiple
compliant outcomes with the same clicks, such as selecting a part of an object
versus the entire object, a single object versus a combination of multiple
objects, and so on. The existing methods cannot provide intuitive guidance to
the model, which leads to unstable output results and makes it difficult to
meet the large-scale and efficient annotation requirements for specific targets
in some scenarios. To bridge this gap, we introduce RefCut, a reference-based
interactive segmentation framework designed to address part ambiguity and
object ambiguity in segmenting specific targets. Users only need to provide a
reference image and corresponding reference masks, and the model will be
optimized based on them, which greatly reduces the interactive burden on users
when annotating a large number of such targets. In addition, to enrich these
two kinds of ambiguous data, we propose a new Target Disassembly Dataset which
contains two subsets of part disassembly and object disassembly for evaluation.
In the combination evaluation of multiple datasets, our RefCut achieved
state-of-the-art performance. Extensive experiments and visualized results
demonstrate that RefCut advances the field of intuitive and controllable
interactive segmentation. Our code will be publicly available and the demo
video is in https://www.lin-zheng.com/refcut.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 17:14:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lin",
"Zheng",
""
],
[
"Zhou",
"Nan",
""
],
[
"Du",
"Chen-Xi",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Hu",
"Shi-Min",
""
]
] | TITLE: RefCut: Interactive Segmentation with Reference Guidance
ABSTRACT: Interactive segmentation aims to segment the specified target on the image
with positive and negative clicks from users. Interactive ambiguity is a
crucial issue in this field, which refers to the possibility of multiple
compliant outcomes with the same clicks, such as selecting a part of an object
versus the entire object, a single object versus a combination of multiple
objects, and so on. The existing methods cannot provide intuitive guidance to
the model, which leads to unstable output results and makes it difficult to
meet the large-scale and efficient annotation requirements for specific targets
in some scenarios. To bridge this gap, we introduce RefCut, a reference-based
interactive segmentation framework designed to address part ambiguity and
object ambiguity in segmenting specific targets. Users only need to provide a
reference image and corresponding reference masks, and the model will be
optimized based on them, which greatly reduces the interactive burden on users
when annotating a large number of such targets. In addition, to enrich these
two kinds of ambiguous data, we propose a new Target Disassembly Dataset which
contains two subsets of part disassembly and object disassembly for evaluation.
In the combination evaluation of multiple datasets, our RefCut achieved
state-of-the-art performance. Extensive experiments and visualized results
demonstrate that RefCut advances the field of intuitive and controllable
interactive segmentation. Our code will be publicly available and the demo
video is in https://www.lin-zheng.com/refcut.
|
2503.17831 | Qingshan Hou | Qingshan Hou, Meng Wang, Peng Cao, Zou Ke, Xiaoli Liu, Huazhu Fu,
Osmar R. Zaiane | FundusGAN: A Hierarchical Feature-Aware Generative Framework for
High-Fidelity Fundus Image Generation | null | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in ophthalmology foundation models such as RetFound have
demonstrated remarkable diagnostic capabilities but require massive datasets
for effective pre-training, creating significant barriers for development and
deployment. To address this critical challenge, we propose FundusGAN, a novel
hierarchical feature-aware generative framework specifically designed for
high-fidelity fundus image synthesis. Our approach leverages a Feature Pyramid
Network within its encoder to comprehensively extract multi-scale information,
capturing both large anatomical structures and subtle pathological features.
The framework incorporates a modified StyleGAN-based generator with dilated
convolutions and strategic upsampling adjustments to preserve critical retinal
structures while enhancing pathological detail representation. Comprehensive
evaluations on the DDR, DRIVE, and IDRiD datasets demonstrate that FundusGAN
consistently outperforms state-of-the-art methods across multiple metrics
(SSIM: 0.8863, FID: 54.2, KID: 0.0436 on DDR). Furthermore, disease
classification experiments reveal that augmenting training data with
FundusGAN-generated images significantly improves diagnostic accuracy across
multiple CNN architectures (up to 6.49\% improvement with ResNet50). These
results establish FundusGAN as a valuable foundation model component that
effectively addresses data scarcity challenges in ophthalmological AI research,
enabling more robust and generalizable diagnostic systems while reducing
dependency on large-scale clinical data collection.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 18:08:07 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hou",
"Qingshan",
""
],
[
"Wang",
"Meng",
""
],
[
"Cao",
"Peng",
""
],
[
"Ke",
"Zou",
""
],
[
"Liu",
"Xiaoli",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Zaiane",
"Osmar R.",
""
]
] | TITLE: FundusGAN: A Hierarchical Feature-Aware Generative Framework for
High-Fidelity Fundus Image Generation
ABSTRACT: Recent advancements in ophthalmology foundation models such as RetFound have
demonstrated remarkable diagnostic capabilities but require massive datasets
for effective pre-training, creating significant barriers for development and
deployment. To address this critical challenge, we propose FundusGAN, a novel
hierarchical feature-aware generative framework specifically designed for
high-fidelity fundus image synthesis. Our approach leverages a Feature Pyramid
Network within its encoder to comprehensively extract multi-scale information,
capturing both large anatomical structures and subtle pathological features.
The framework incorporates a modified StyleGAN-based generator with dilated
convolutions and strategic upsampling adjustments to preserve critical retinal
structures while enhancing pathological detail representation. Comprehensive
evaluations on the DDR, DRIVE, and IDRiD datasets demonstrate that FundusGAN
consistently outperforms state-of-the-art methods across multiple metrics
(SSIM: 0.8863, FID: 54.2, KID: 0.0436 on DDR). Furthermore, disease
classification experiments reveal that augmenting training data with
FundusGAN-generated images significantly improves diagnostic accuracy across
multiple CNN architectures (up to 6.49\% improvement with ResNet50). These
results establish FundusGAN as a valuable foundation model component that
effectively addresses data scarcity challenges in ophthalmological AI research,
enabling more robust and generalizable diagnostic systems while reducing
dependency on large-scale clinical data collection.
|
2503.17842 | Maryam Abdolali | Maryam Abdolali, Romina Zakerian, Behnam Roshanfekr, Fardin Ayar,
Mohammad Rahmati | Adapt, Agree, Aggregate: Semi-Supervised Ensemble Labeling for Graph
Convolutional Networks | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a novel framework that combines ensemble learning
with augmented graph structures to improve the performance and robustness of
semi-supervised node classification in graphs. By creating multiple augmented
views of the same graph, our approach harnesses the "wisdom of a diverse
crowd", mitigating the challenges posed by noisy graph structures. Leveraging
ensemble learning allows us to simultaneously achieve three key goals: adaptive
confidence threshold selection based on model agreement, dynamic determination
of the number of high-confidence samples for training, and robust extraction of
pseudo-labels to mitigate confirmation bias. Our approach uniquely integrates
adaptive ensemble consensus to flexibly guide pseudo-label extraction and
sample selection, reducing the risks of error accumulation and improving
robustness. Furthermore, the use of ensemble-driven consensus for
pseudo-labeling captures subtle patterns that individual models often overlook,
enabling the model to generalize better. Experiments on several real-world
datasets demonstrate the effectiveness of our proposed method.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 19:10:54 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Abdolali",
"Maryam",
""
],
[
"Zakerian",
"Romina",
""
],
[
"Roshanfekr",
"Behnam",
""
],
[
"Ayar",
"Fardin",
""
],
[
"Rahmati",
"Mohammad",
""
]
] | TITLE: Adapt, Agree, Aggregate: Semi-Supervised Ensemble Labeling for Graph
Convolutional Networks
ABSTRACT: In this paper, we propose a novel framework that combines ensemble learning
with augmented graph structures to improve the performance and robustness of
semi-supervised node classification in graphs. By creating multiple augmented
views of the same graph, our approach harnesses the "wisdom of a diverse
crowd", mitigating the challenges posed by noisy graph structures. Leveraging
ensemble learning allows us to simultaneously achieve three key goals: adaptive
confidence threshold selection based on model agreement, dynamic determination
of the number of high-confidence samples for training, and robust extraction of
pseudo-labels to mitigate confirmation bias. Our approach uniquely integrates
adaptive ensemble consensus to flexibly guide pseudo-label extraction and
sample selection, reducing the risks of error accumulation and improving
robustness. Furthermore, the use of ensemble-driven consensus for
pseudo-labeling captures subtle patterns that individual models often overlook,
enabling the model to generalize better. Experiments on several real-world
datasets demonstrate the effectiveness of our proposed method.
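The agreement-based pseudo-label selection can be sketched as follows; the fixed agreement count and confidence threshold stand in for the paper's adaptive, consensus-driven values.

```python
import numpy as np

def consensus_pseudo_labels(prob_views, min_agree=3, conf_thresh=0.9):
    """Aggregate class probabilities from several models trained on augmented
    graph views: a node is pseudo-labeled only if at least `min_agree` views
    vote for the same class and the averaged confidence clears `conf_thresh`."""
    prob_views = np.asarray(prob_views)        # (n_views, n_nodes, n_classes)
    votes = prob_views.argmax(axis=2)
    mean_prob = prob_views.mean(axis=0)
    label = mean_prob.argmax(axis=1)
    agree = (votes == label[None, :]).sum(axis=0)
    mask = (agree >= min_agree) & (mean_prob.max(axis=1) >= conf_thresh)
    return label, mask                          # pseudo-labels + selection mask

rng = np.random.default_rng(0)
views = rng.dirichlet(np.ones(4), size=(5, 100))   # 5 views, 100 nodes, 4 classes
pseudo, selected = consensus_pseudo_labels(views)
```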
|
2503.17855 | Lev Utkin | Andrei V. Konstantinov and Lev V. Utkin | A novel gradient-based method for decision trees optimizing arbitrary
differential loss functions | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are many approaches for training decision trees. This work introduces a
novel gradient-based method for constructing decision trees that optimize
arbitrary differentiable loss functions, overcoming the limitations of
heuristic splitting rules. Unlike traditional approaches that rely on heuristic
splitting rules, the proposed method refines predictions using the first and
second derivatives of the loss function, enabling the optimization of complex
tasks such as classification, regression, and survival analysis. We demonstrate
the method's applicability to classification, regression, and survival analysis
tasks, including those with censored data. Numerical experiments on both real
and synthetic datasets compare the proposed method with traditional decision
tree algorithms, such as CART, Extremely Randomized Trees, and SurvTree. The
implementation of the method is publicly available, providing a practical tool
for researchers and practitioners. This work advances the field of decision
tree-based modeling, offering a more flexible and accurate approach for
handling structured data and complex tasks. By leveraging gradient-based
optimization, the proposed method bridges the gap between traditional decision
trees and modern machine learning techniques, paving the way for further
innovations in interpretable and high-performing models.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 20:25:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Konstantinov",
"Andrei V.",
""
],
[
"Utkin",
"Lev V.",
""
]
] | TITLE: A novel gradient-based method for decision trees optimizing arbitrary
differential loss functions
ABSTRACT: There are many approaches for training decision trees. This work introduces a
novel gradient-based method for constructing decision trees that optimize
arbitrary differentiable loss functions, overcoming the limitations of
heuristic splitting rules. Unlike traditional approaches that rely on heuristic
splitting rules, the proposed method refines predictions using the first and
second derivatives of the loss function, enabling the optimization of complex
tasks such as classification, regression, and survival analysis. We demonstrate
the method's applicability to classification, regression, and survival analysis
tasks, including those with censored data. Numerical experiments on both real
and synthetic datasets compare the proposed method with traditional decision
tree algorithms, such as CART, Extremely Randomized Trees, and SurvTree. The
implementation of the method is publicly available, providing a practical tool
for researchers and practitioners. This work advances the field of decision
tree-based modeling, offering a more flexible and accurate approach for
handling structured data and complex tasks. By leveraging gradient-based
optimization, the proposed method bridges the gap between traditional decision
trees and modern machine learning techniques, paving the way for further
innovations in interpretable and high-performing models.
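The abstract says predictions are refined with first and second derivatives of the loss, but does not spell out the update rule. A minimal sketch, assuming the familiar second-order (Newton-step) leaf update used in gradient boosting, is given below; the paper's actual tree construction may differ.

```python
import numpy as np

def newton_leaf_value(grad, hess, reg=1.0):
    """Second-order (Newton) leaf update: minimizes the local quadratic
    approximation of an arbitrary twice-differentiable loss."""
    return -grad.sum() / (hess.sum() + reg)

# Example: squared error L = 0.5*(y - f)^2  ->  g = f - y, h = 1.
y = np.array([3.0, 2.5, 4.0])
f = np.zeros_like(y)            # current predictions before the update
g, h = f - y, np.ones_like(y)
print(newton_leaf_value(g, h, reg=0.0))  # equals mean(y) for squared error
```

The same leaf formula works for any loss with computable first and second derivatives, which is what makes such methods applicable to regression, classification, and survival objectives alike.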
|
2503.17856 | Radu Beche | Radu Beche, Sergiu Nedevschi | ClaraVid: A Holistic Scene Reconstruction Benchmark From Aerial
Perspective With Delentropy-Based Complexity Profiling | Currently under review | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | The development of aerial holistic scene understanding algorithms is hindered
by the scarcity of comprehensive datasets that enable both semantic and
geometric reconstruction. While synthetic datasets offer an alternative,
existing options exhibit task-specific limitations, unrealistic scene
compositions, and rendering artifacts that compromise real-world applicability.
We introduce ClaraVid, a synthetic aerial dataset specifically designed to
overcome these limitations. Comprising 16,917 high-resolution images captured
at 4032x3024 from multiple viewpoints across diverse landscapes, ClaraVid
provides dense depth maps, panoptic segmentation, sparse point clouds, and
dynamic object masks, while mitigating common rendering artifacts. To further
advance neural reconstruction, we introduce the Delentropic Scene Profile
(DSP), a novel complexity metric derived from differential entropy analysis,
designed to quantitatively assess scene difficulty and inform reconstruction
tasks. Utilizing DSP, we systematically benchmark neural reconstruction
methods, uncovering a consistent, measurable correlation between scene
complexity and reconstruction accuracy. Empirical results indicate that higher
delentropy strongly correlates with increased reconstruction errors, validating
DSP as a reliable complexity prior. Currently under review; upon acceptance, the
data and code will be available at
$\href{https://rdbch.github.io/claravid}{rdbch.github.io/ClaraVid}$.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 20:26:20 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Beche",
"Radu",
""
],
[
"Nedevschi",
"Sergiu",
""
]
] | TITLE: ClaraVid: A Holistic Scene Reconstruction Benchmark From Aerial
Perspective With Delentropy-Based Complexity Profiling
ABSTRACT: The development of aerial holistic scene understanding algorithms is hindered
by the scarcity of comprehensive datasets that enable both semantic and
geometric reconstruction. While synthetic datasets offer an alternative,
existing options exhibit task-specific limitations, unrealistic scene
compositions, and rendering artifacts that compromise real-world applicability.
We introduce ClaraVid, a synthetic aerial dataset specifically designed to
overcome these limitations. Comprising 16,917 high-resolution images captured
at 4032x3024 from multiple viewpoints across diverse landscapes, ClaraVid
provides dense depth maps, panoptic segmentation, sparse point clouds, and
dynamic object masks, while mitigating common rendering artifacts. To further
advance neural reconstruction, we introduce the Delentropic Scene Profile
(DSP), a novel complexity metric derived from differential entropy analysis,
designed to quantitatively assess scene difficulty and inform reconstruction
tasks. Utilizing DSP, we systematically benchmark neural reconstruction
methods, uncovering a consistent, measurable correlation between scene
complexity and reconstruction accuracy. Empirical results indicate that higher
delentropy strongly correlates with increased reconstruction errors, validating
DSP as a reliable complexity prior. Currently under review; upon acceptance, the
data and code will be available at
$\href{https://rdbch.github.io/claravid}{rdbch.github.io/ClaraVid}$.
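The abstract does not give the exact formula of the Delentropic Scene Profile, so the sketch below only illustrates plain delentropy in its usual form: the Shannon entropy of the joint distribution of horizontal and vertical image gradients. The bin count and the toy images are illustrative choices, not values from the paper.

```python
import numpy as np

def delentropy(img, bins=64):
    """Approximate delentropy: Shannon entropy of the joint distribution
    of horizontal/vertical image gradients (the 'deldensity')."""
    gx, gy = np.gradient(img.astype(np.float64))
    hist, _, _ = np.histogram2d(gx.ravel(), gy.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# A flat image has near-zero delentropy; textured noise scores much higher.
flat = np.zeros((128, 128))
noisy = np.random.default_rng(0).normal(size=(128, 128))
print(delentropy(flat), delentropy(noisy))
```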
|
2503.17860 | Felix Faltings | Felix Faltings, Wei Wei, Yujia Bao | Enhancing Retrieval Systems with Inference-Time Logical Reasoning | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Traditional retrieval methods rely on transforming user queries into vector
representations and retrieving documents based on cosine similarity within an
embedding space. While efficient and scalable, this approach often fails to
handle complex queries involving logical constructs such as negations,
conjunctions, and disjunctions. In this paper, we propose a novel
inference-time logical reasoning framework that explicitly incorporates logical
reasoning into the retrieval process. Our method extracts logical reasoning
structures from natural language queries and then composes the individual
cosine similarity scores to formulate the final document scores. This approach
enables the retrieval process to handle complex logical reasoning without
compromising computational efficiency. Our results on both synthetic and
real-world benchmarks demonstrate that the proposed method consistently
outperforms traditional retrieval methods across different models and datasets,
significantly improving retrieval performance for complex queries.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 20:40:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Faltings",
"Felix",
""
],
[
"Wei",
"Wei",
""
],
[
"Bao",
"Yujia",
""
]
] | TITLE: Enhancing Retrieval Systems with Inference-Time Logical Reasoning
ABSTRACT: Traditional retrieval methods rely on transforming user queries into vector
representations and retrieving documents based on cosine similarity within an
embedding space. While efficient and scalable, this approach often fails to
handle complex queries involving logical constructs such as negations,
conjunctions, and disjunctions. In this paper, we propose a novel
inference-time logical reasoning framework that explicitly incorporates logical
reasoning into the retrieval process. Our method extracts logical reasoning
structures from natural language queries and then composes the individual
cosine similarity scores to formulate the final document scores. This approach
enables the retrieval process to handle complex logical reasoning without
compromising computational efficiency. Our results on both synthetic and
real-world benchmarks demonstrate that the proposed method consistently
outperforms traditional retrieval methods across different models and datasets,
significantly improving retrieval performance for complex queries.
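The abstract states that per-term cosine similarities are composed according to the query's logical structure, without specifying the composition operators. A minimal sketch, assuming a parsed query tree and simple fuzzy-logic operators (AND as min, OR as max, NOT as complement of a [0, 1]-mapped score), follows; the toy embedder and vocabulary are placeholders for a real text encoder.

```python
import numpy as np

def score(doc_emb, query_emb):
    """Cosine similarity mapped to [0, 1]."""
    cos = doc_emb @ query_emb / (np.linalg.norm(doc_emb) * np.linalg.norm(query_emb))
    return 0.5 * (cos + 1.0)

def evaluate(node, doc_emb, embed):
    """Recursively score a document against a logical query tree.
    node is ('AND'|'OR'|'NOT', children) or ('TERM', text)."""
    op, args = node
    if op == "TERM":
        return score(doc_emb, embed(args))
    vals = [evaluate(a, doc_emb, embed) for a in args]
    if op == "AND":
        return min(vals)
    if op == "OR":
        return max(vals)
    if op == "NOT":
        return 1.0 - vals[0]
    raise ValueError(op)

# Toy embedder standing in for a real text encoder.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=8) for w in ["cats", "dogs", "indoor"]}
embed = lambda w: vocab[w]
query = ("AND", [("TERM", "cats"), ("NOT", [("TERM", "dogs")])])
doc = vocab["cats"] + 0.1 * rng.normal(size=8)
print(evaluate(query, doc, embed))
```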
|
2503.17867 | Paul Irofti | Alexandru Apostu, Silviu Gheorghe, Andrei H\^iji, Nicolae Cleju,
Andrei P\u{a}tra\c{s}cu, Cristian Rusu, Radu Ionescu, Paul Irofti | Detecting and Mitigating DDoS Attacks with AI: A Survey | null | null | null | null | cs.CR cs.AI cs.LG cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed Denial of Service attacks represent an active cybersecurity
research problem. Recent research shifted from static rule-based defenses
towards AI-based detection and mitigation. This comprehensive survey covers
several key topics. Preeminently, state-of-the-art AI detection methods are
discussed. An in-depth taxonomy based on manual expert hierarchies and an
AI-generated dendrogram are provided, thus settling DDoS categorization
ambiguities. An important discussion on available datasets follows, covering
data format options and their role in training AI detection methods together
with adversarial training and example augmentation. Beyond detection, AI-based
mitigation techniques are surveyed as well. Finally, multiple open research
directions are proposed.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 21:54:23 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Apostu",
"Alexandru",
""
],
[
"Gheorghe",
"Silviu",
""
],
[
"Hîji",
"Andrei",
""
],
[
"Cleju",
"Nicolae",
""
],
[
"Pătraşcu",
"Andrei",
""
],
[
"Rusu",
"Cristian",
""
],
[
"Ionescu",
"Radu",
""
],
[
"Irofti",
"Paul",
""
]
] | TITLE: Detecting and Mitigating DDoS Attacks with AI: A Survey
ABSTRACT: Distributed Denial of Service attacks represent an active cybersecurity
research problem. Recent research shifted from static rule-based defenses
towards AI-based detection and mitigation. This comprehensive survey covers
several key topics. Preeminently, state-of-the-art AI detection methods are
discussed. An in-depth taxonomy based on manual expert hierarchies and an
AI-generated dendrogram are provided, thus settling DDoS categorization
ambiguities. An important discussion on available datasets follows, covering
data format options and their role in training AI detection methods together
with adversarial training and example augmentation. Beyond detection, AI-based
mitigation techniques are surveyed as well. Finally, multiple open research
directions are proposed.
|
2503.17871 | Abby Stylianou | Pranavi Kolouju, Eric Xing, Robert Pless, Nathan Jacobs, Abby
Stylianou | good4cir: Generating Detailed Synthetic Captions for Composed Image
Retrieval | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composed image retrieval (CIR) enables users to search images using a
reference image combined with textual modifications. Recent advances in
vision-language models have improved CIR, but dataset limitations remain a
barrier. Existing datasets often rely on simplistic, ambiguous, or insufficient
manual annotations, hindering fine-grained retrieval. We introduce good4cir, a
structured pipeline leveraging vision-language models to generate high-quality
synthetic annotations. Our method involves: (1) extracting fine-grained object
descriptions from query images, (2) generating comparable descriptions for
target images, and (3) synthesizing textual instructions capturing meaningful
transformations between images. This reduces hallucination, enhances
modification diversity, and ensures object-level consistency. Applying our
method improves existing datasets and enables creating new datasets across
diverse domains. Results demonstrate improved retrieval accuracy for CIR models
trained on our pipeline-generated datasets. We release our dataset construction
framework to support further research in CIR and multi-modal retrieval.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 22:33:56 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Kolouju",
"Pranavi",
""
],
[
"Xing",
"Eric",
""
],
[
"Pless",
"Robert",
""
],
[
"Jacobs",
"Nathan",
""
],
[
"Stylianou",
"Abby",
""
]
] | TITLE: good4cir: Generating Detailed Synthetic Captions for Composed Image
Retrieval
ABSTRACT: Composed image retrieval (CIR) enables users to search images using a
reference image combined with textual modifications. Recent advances in
vision-language models have improved CIR, but dataset limitations remain a
barrier. Existing datasets often rely on simplistic, ambiguous, or insufficient
manual annotations, hindering fine-grained retrieval. We introduce good4cir, a
structured pipeline leveraging vision-language models to generate high-quality
synthetic annotations. Our method involves: (1) extracting fine-grained object
descriptions from query images, (2) generating comparable descriptions for
target images, and (3) synthesizing textual instructions capturing meaningful
transformations between images. This reduces hallucination, enhances
modification diversity, and ensures object-level consistency. Applying our
method improves existing datasets and enables creating new datasets across
diverse domains. Results demonstrate improved retrieval accuracy for CIR models
trained on our pipeline-generated datasets. We release our dataset construction
framework to support further research in CIR and multi-modal retrieval.
|
2503.17876 | Kaiwen Zuo | Kaiwen Zuo, Jing Tang, Hanbing Qin, Binli Luo, Ligang He, Shiyan Tang | Satisfactory Medical Consultation based on Terminology-Enhanced
Information Retrieval and Emotional In-Context Learning | The 46th European Conference on Information Retrieval Workshop | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Large Language Models (LLMs) have marked significant
progress in understanding and responding to medical inquiries. However, their
performance still falls short of the standards set by professional
consultations. This paper introduces a novel framework for medical
consultation, comprising two main modules: Terminology-Enhanced Information
Retrieval (TEIR) and Emotional In-Context Learning (EICL). TEIR ensures
implicit reasoning through the utilization of inductive knowledge and key
terminology retrieval, overcoming the limitations of restricted domain
knowledge in public databases. Additionally, this module features capabilities
for processing long context. The EICL module aids in generating sentences with
high attribute relevance by memorizing semantic and attribute information from
unlabelled corpora and applying controlled retrieval for the required
information. Furthermore, a dataset comprising 803,564 consultation records was
compiled in China, significantly enhancing the model's capability for complex
dialogues and proactive inquiry initiation. Comprehensive experiments
demonstrate the proposed method's effectiveness in extending the context window
length of existing LLMs. The experimental outcomes and extensive data validate
the framework's superiority over five baseline models in terms of BLEU and
ROUGE performance metrics, with substantial leads in certain capabilities.
Notably, ablation studies confirm the significance of the TEIR and EICL
components. In addition, our new framework has the potential to significantly
improve patient satisfaction in real clinical consulting situations.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 23:01:07 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zuo",
"Kaiwen",
""
],
[
"Tang",
"Jing",
""
],
[
"Qin",
"Hanbing",
""
],
[
"Luo",
"Binli",
""
],
[
"He",
"Ligang",
""
],
[
"Tang",
"Shiyan",
""
]
] | TITLE: Satisfactory Medical Consultation based on Terminology-Enhanced
Information Retrieval and Emotional In-Context Learning
ABSTRACT: Recent advancements in Large Language Models (LLMs) have marked significant
progress in understanding and responding to medical inquiries. However, their
performance still falls short of the standards set by professional
consultations. This paper introduces a novel framework for medical
consultation, comprising two main modules: Terminology-Enhanced Information
Retrieval (TEIR) and Emotional In-Context Learning (EICL). TEIR ensures
implicit reasoning through the utilization of inductive knowledge and key
terminology retrieval, overcoming the limitations of restricted domain
knowledge in public databases. Additionally, this module features capabilities
for processing long context. The EICL module aids in generating sentences with
high attribute relevance by memorizing semantic and attribute information from
unlabelled corpora and applying controlled retrieval for the required
information. Furthermore, a dataset comprising 803,564 consultation records was
compiled in China, significantly enhancing the model's capability for complex
dialogues and proactive inquiry initiation. Comprehensive experiments
demonstrate the proposed method's effectiveness in extending the context window
length of existing LLMs. The experimental outcomes and extensive data validate
the framework's superiority over five baseline models in terms of BLEU and
ROUGE performance metrics, with substantial leads in certain capabilities.
Notably, ablation studies confirm the significance of the TEIR and EICL
components. In addition, our new framework has the potential to significantly
improve patient satisfaction in real clinical consulting situations.
|
2503.17877 | Samira Alkaee Taleghan | Samira Alkaee Taleghan, Andrew P. Barrett, Walter N. Meier, Farnoush
Banaei-Kashani | IceBench: A Benchmark for Deep Learning based Sea Ice Type
Classification | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Sea ice plays a critical role in the global climate system and maritime
operations, making timely and accurate classification essential. However,
traditional manual methods are time-consuming, costly, and have inherent
biases. Automating sea ice type classification addresses these challenges by
enabling faster, more consistent, and scalable analysis. While both traditional
and deep learning approaches have been explored, deep learning models offer a
promising direction for improving efficiency and consistency in sea ice
classification. However, the absence of a standardized benchmark and
comparative study prevents a clear consensus on the best-performing models. To
bridge this gap, we introduce \textit{IceBench}, a comprehensive benchmarking
framework for sea ice type classification. Our key contributions are threefold:
First, we establish the IceBench benchmarking framework which leverages the
existing AI4Arctic Sea Ice Challenge dataset as a standardized dataset,
incorporates a comprehensive set of evaluation metrics, and includes
representative models from the entire spectrum of sea ice type classification
methods categorized in two distinct groups, namely, pixel-based classification
methods and patch-based classification methods. IceBench is open-source and
allows for convenient integration and evaluation of other sea ice type
classification methods; hence, facilitating comparative evaluation of new
methods and improving reproducibility in the field. Second, we conduct an
in-depth comparative study on representative models to assess their strengths
and limitations, providing insights for both practitioners and researchers.
Third, we leverage IceBench for systematic experiments addressing key research
questions on model transferability across seasons (time) and locations (space),
data downscaling, and preprocessing strategies.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 23:14:50 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Taleghan",
"Samira Alkaee",
""
],
[
"Barrett",
"Andrew P.",
""
],
[
"Meier",
"Walter N.",
""
],
[
"Banaei-Kashani",
"Farnoush",
""
]
] | TITLE: IceBench: A Benchmark for Deep Learning based Sea Ice Type
Classification
ABSTRACT: Sea ice plays a critical role in the global climate system and maritime
operations, making timely and accurate classification essential. However,
traditional manual methods are time-consuming, costly, and have inherent
biases. Automating sea ice type classification addresses these challenges by
enabling faster, more consistent, and scalable analysis. While both traditional
and deep learning approaches have been explored, deep learning models offer a
promising direction for improving efficiency and consistency in sea ice
classification. However, the absence of a standardized benchmark and
comparative study prevents a clear consensus on the best-performing models. To
bridge this gap, we introduce \textit{IceBench}, a comprehensive benchmarking
framework for sea ice type classification. Our key contributions are threefold:
First, we establish the IceBench benchmarking framework which leverages the
existing AI4Arctic Sea Ice Challenge dataset as a standardized dataset,
incorporates a comprehensive set of evaluation metrics, and includes
representative models from the entire spectrum of sea ice type classification
methods categorized in two distinct groups, namely, pixel-based classification
methods and patch-based classification methods. IceBench is open-source and
allows for convenient integration and evaluation of other sea ice type
classification methods; hence, facilitating comparative evaluation of new
methods and improving reproducibility in the field. Second, we conduct an
in-depth comparative study on representative models to assess their strengths
and limitations, providing insights for both practitioners and researchers.
Third, we leverage IceBench for systematic experiments addressing key research
questions on model transferability across seasons (time) and locations (space),
data downscaling, and preprocessing strategies.
|
2503.17885 | Arastoo Zibaeirad | Arastoo Zibaeirad, Marco Vieira | Reasoning with LLMs for Zero-Shot Vulnerability Detection | null | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automating software vulnerability detection (SVD) remains a critical
challenge in an era of increasingly complex and interdependent software
systems. Despite significant advances in Large Language Models (LLMs) for code
analysis, prevailing evaluation methodologies often lack the
\textbf{context-aware robustness} necessary to capture real-world intricacies
and cross-component interactions. To address these limitations, we present
\textbf{VulnSage}, a comprehensive evaluation framework and a dataset curated
from diverse, large-scale open-source system software projects developed in
C/C++. Unlike prior datasets, it leverages a heuristic noise pre-filtering
approach combined with LLM-based reasoning to ensure a representative and
minimally noisy spectrum of vulnerabilities. The framework supports
multi-granular analysis across function, file, and inter-function levels and
employs four diverse zero-shot prompt strategies: Baseline, Chain-of-Thought,
Think, and Think & Verify. Through this evaluation, we uncover that structured
reasoning prompts substantially improve LLM performance, with Think & Verify
reducing ambiguous responses from 20.3% to 9.1% while increasing accuracy. We
further demonstrate that code-specialized models consistently outperform
general-purpose alternatives, with performance varying significantly across
vulnerability types, revealing that no single approach universally excels
across all security contexts. Link to dataset and codes:
https://github.com/Erroristotle/VulnSage.git
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 23:59:17 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zibaeirad",
"Arastoo",
""
],
[
"Vieira",
"Marco",
""
]
] | TITLE: Reasoning with LLMs for Zero-Shot Vulnerability Detection
ABSTRACT: Automating software vulnerability detection (SVD) remains a critical
challenge in an era of increasingly complex and interdependent software
systems. Despite significant advances in Large Language Models (LLMs) for code
analysis, prevailing evaluation methodologies often lack the
\textbf{context-aware robustness} necessary to capture real-world intricacies
and cross-component interactions. To address these limitations, we present
\textbf{VulnSage}, a comprehensive evaluation framework and a dataset curated
from diverse, large-scale open-source system software projects developed in
C/C++. Unlike prior datasets, it leverages a heuristic noise pre-filtering
approach combined with LLM-based reasoning to ensure a representative and
minimally noisy spectrum of vulnerabilities. The framework supports
multi-granular analysis across function, file, and inter-function levels and
employs four diverse zero-shot prompt strategies: Baseline, Chain-of-Thought,
Think, and Think & Verify. Through this evaluation, we uncover that structured
reasoning prompts substantially improve LLM performance, with Think & Verify
reducing ambiguous responses from 20.3% to 9.1% while increasing accuracy. We
further demonstrate that code-specialized models consistently outperform
general-purpose alternatives, with performance varying significantly across
vulnerability types, revealing that no single approach universally excels
across all security contexts. Link to dataset and codes:
https://github.com/Erroristotle/VulnSage.git
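The four prompt strategies are only named in the abstract, so the templates below are hypothetical stand-ins meant to show how such a zero-shot prompting harness could be organized; the actual VulnSage prompts may be worded quite differently.

```python
# Hypothetical zero-shot prompt templates in the spirit of the four
# strategies named in the abstract; the exact VulnSage prompts may differ.
BASELINE = "Is the following C/C++ function vulnerable? Answer yes or no.\n{code}"
COT = ("Analyze the following C/C++ function step by step, then state "
       "whether it is vulnerable.\n{code}")
THINK = ("Reason carefully about memory safety, input validation and integer "
         "handling in the code below.\nCode:\n{code}\n"
         "Final answer (vulnerable / not vulnerable):")
THINK_VERIFY = THINK + ("\nThen re-examine your reasoning and either confirm "
                        "or revise your answer before responding.")

def build_prompt(strategy: str, code: str) -> str:
    templates = {"baseline": BASELINE, "cot": COT,
                 "think": THINK, "think_verify": THINK_VERIFY}
    return templates[strategy].format(code=code)

print(build_prompt("cot", "int f(char *s){ char b[8]; strcpy(b, s); return 0; }"))
```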
|
2503.17899 | Dongheng Lin | Dongheng Lin, Han Hu, Jianbo Jiao | What Time Tells Us? An Explorative Study of Time Awareness Learned from
Static Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Time becomes visible through illumination changes in what we see. Inspired by
this, in this paper we explore the potential to learn time awareness from
static images, trying to answer: what time tells us? To this end, we first
introduce a Time-Oriented Collection (TOC) dataset, which contains 130,906
images with reliable timestamps. Leveraging this dataset, we propose a
Time-Image Contrastive Learning (TICL) approach to jointly model timestamps and
related visual representations through cross-modal contrastive learning. We
found that the proposed TICL, 1) not only achieves state-of-the-art performance
on the timestamp estimation task, over various benchmark metrics, 2) but also,
interestingly, though only seeing static images, the time-aware embeddings
learned from TICL show strong capability in several time-aware downstream tasks
such as time-based image retrieval, video scene classification, and time-aware
image editing. Our findings suggest that time-related visual cues can be
learned from static images and are beneficial for various vision tasks, laying
a foundation for future research on understanding time-related visual context.
Project page:https://rathgrith.github.io/timetells/.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 01:56:35 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Lin",
"Dongheng",
""
],
[
"Hu",
"Han",
""
],
[
"Jiao",
"Jianbo",
""
]
] | TITLE: What Time Tells Us? An Explorative Study of Time Awareness Learned from
Static Images
ABSTRACT: Time becomes visible through illumination changes in what we see. Inspired by
this, in this paper we explore the potential to learn time awareness from
static images, trying to answer: what time tells us? To this end, we first
introduce a Time-Oriented Collection (TOC) dataset, which contains 130,906
images with reliable timestamps. Leveraging this dataset, we propose a
Time-Image Contrastive Learning (TICL) approach to jointly model timestamps and
related visual representations through cross-modal contrastive learning. We
found that the proposed TICL, 1) not only achieves state-of-the-art performance
on the timestamp estimation task, over various benchmark metrics, 2) but also,
interestingly, though only seeing static images, the time-aware embeddings
learned from TICL show strong capability in several time-aware downstream tasks
such as time-based image retrieval, video scene classification, and time-aware
image editing. Our findings suggest that time-related visual cues can be
learned from static images and are beneficial for various vision tasks, laying
a foundation for future research on understanding time-related visual context.
Project page: https://rathgrith.github.io/timetells/.
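Since TICL is described as cross-modal contrastive learning over paired timestamps and images, a standard symmetric InfoNCE objective is the natural reading. The sketch below assumes such a CLIP-style loss on precomputed embeddings; the temperature and embedding sizes are illustrative, not the paper's settings.

```python
import numpy as np

def info_nce(img_emb, time_emb, temperature=0.07):
    """Symmetric cross-modal InfoNCE over a batch of paired
    (image, timestamp) embeddings, CLIP-style."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    tim = time_emb / np.linalg.norm(time_emb, axis=1, keepdims=True)
    logits = img @ tim.T / temperature            # (B, B) similarity matrix
    labels = np.arange(len(img))                  # matching pairs on the diagonal
    log_sm_i = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_i2t = -log_sm_i[labels, labels].mean()
    log_sm_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_t2i = -log_sm_t[labels, labels].mean()
    return 0.5 * (loss_i2t + loss_t2i)

rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(8, 32)), rng.normal(size=(8, 32))))
```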
|
2503.17903 | Yali Fu | Yali Fu, Jindong Li, Qi Wang, Qianli Xing | GLADMamba: Unsupervised Graph-Level Anomaly Detection Powered by
Selective State Space Model | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Unsupervised graph-level anomaly detection (UGLAD) is a critical and
challenging task across various domains, such as social network analysis,
anti-cancer drug discovery, and toxic molecule identification. However,
existing methods often struggle to capture the long-range dependencies
efficiently and neglect the spectral information. Recently, selective State
Space Models (SSMs), particularly Mamba, have demonstrated remarkable
advantages in capturing long-range dependencies with linear complexity and a
selection mechanism. Motivated by their success across various domains, we
propose GLADMamba, a novel framework that adapts the selective state space
model into the UGLAD field. We design View-Fused Mamba (VFM) with a
Mamba-Transformer-style architecture to efficiently fuse information from
different views with a selective state mechanism. We also design
Spectrum-Guided Mamba (SGM) with a Mamba-Transformer-style architecture to
leverage the Rayleigh quotient to guide the embedding refining process.
GLADMamba can dynamically focus on anomaly-related information while discarding
irrelevant information for anomaly detection. To the best of our knowledge,
this is the first work to introduce Mamba and explicit spectral information to
UGLAD. Extensive experiments on 12 real-world datasets demonstrate that
GLADMamba outperforms existing state-of-the-art methods, achieving superior
performance in UGLAD. The code is available at
https://github.com/Yali-F/GLADMamba.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 02:40:17 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Fu",
"Yali",
""
],
[
"Li",
"Jindong",
""
],
[
"Wang",
"Qi",
""
],
[
"Xing",
"Qianli",
""
]
] | TITLE: GLADMamba: Unsupervised Graph-Level Anomaly Detection Powered by
Selective State Space Model
ABSTRACT: Unsupervised graph-level anomaly detection (UGLAD) is a critical and
challenging task across various domains, such as social network analysis,
anti-cancer drug discovery, and toxic molecule identification. However,
existing methods often struggle to capture the long-range dependencies
efficiently and neglect the spectral information. Recently, selective State
Space Models (SSMs), particularly Mamba, have demonstrated remarkable
advantages in capturing long-range dependencies with linear complexity and a
selection mechanism. Motivated by their success across various domains, we
propose GLADMamba, a novel framework that adapts the selective state space
model into the UGLAD field. We design View-Fused Mamba (VFM) with a
Mamba-Transformer-style architecture to efficiently fuse information from
different views with a selective state mechanism. We also design
Spectrum-Guided Mamba (SGM) with a Mamba-Transformer-style architecture to
leverage the Rayleigh quotient to guide the embedding refining process.
GLADMamba can dynamically focus on anomaly-related information while discarding
irrelevant information for anomaly detection. To the best of our knowledge,
this is the first work to introduce Mamba and explicit spectral information to
UGLAD. Extensive experiments on 12 real-world datasets demonstrate that
GLADMamba outperforms existing state-of-the-art methods, achieving superior
performance in UGLAD. The code is available at
https://github.com/Yali-F/GLADMamba.
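The Rayleigh quotient mentioned in the abstract is a standard spectral quantity; the sketch below shows how it is computed for a graph signal with the unnormalized Laplacian and why it can serve as a spectral guide (smooth signals give small values, spiky ones give large values). The toy graph and signals are illustrative only.

```python
import numpy as np

def rayleigh_quotient(adj, x):
    """Rayleigh quotient x^T L x / x^T x of a graph signal x with respect to
    the (unnormalized) graph Laplacian L = D - A. Low values indicate smooth
    signals; high values indicate high-frequency (often anomalous) structure."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return float(x @ lap @ x) / float(x @ x)

# Toy 4-node path graph with a smooth and a spiky node signal.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
smooth = np.array([1.0, 1.1, 1.2, 1.3])
spiky = np.array([1.0, -1.0, 1.0, -1.0])
print(rayleigh_quotient(adj, smooth), rayleigh_quotient(adj, spiky))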
|
2503.17905 | Luke McDermott | Luke McDermott, Rahul Parhi | Finding Stable Subnetworks at Initialization with Dataset Distillation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent works have shown that Dataset Distillation, the process for
summarizing the training data, can be leveraged to accelerate the training of
deep learning models. However, its impact on training dynamics, particularly in
neural network pruning, remains largely unexplored. In our work, we use
distilled data in the inner loop of iterative magnitude pruning to produce
sparse, trainable subnetworks at initialization -- more commonly known as
lottery tickets. While using 150x fewer training points, our algorithm matches
the performance of traditional lottery ticket rewinding on ResNet-18 &
CIFAR-10. Previous work highlights that lottery tickets can be found when the
dense initialization is stable to SGD noise (i.e. training across different
ordering of the data converges to the same minima). We extend this discovery,
demonstrating that stable subnetworks can exist even within an unstable dense
initialization. In our linear mode connectivity studies, we find that pruning
with distilled data discards parameters that contribute to the sharpness of the
loss landscape. Lastly, we show that by first generating a stable sparsity mask
at initialization, we can find lottery tickets at significantly higher
sparsities than traditional iterative magnitude pruning.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 02:55:57 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"McDermott",
"Luke",
""
],
[
"Parhi",
"Rahul",
""
]
] | TITLE: Finding Stable Subnetworks at Initialization with Dataset Distillation
ABSTRACT: Recent works have shown that Dataset Distillation, the process for
summarizing the training data, can be leveraged to accelerate the training of
deep learning models. However, its impact on training dynamics, particularly in
neural network pruning, remains largely unexplored. In our work, we use
distilled data in the inner loop of iterative magnitude pruning to produce
sparse, trainable subnetworks at initialization -- more commonly known as
lottery tickets. While using 150x fewer training points, our algorithm matches
the performance of traditional lottery ticket rewinding on ResNet-18 &
CIFAR-10. Previous work highlights that lottery tickets can be found when the
dense initialization is stable to SGD noise (i.e. training across different
ordering of the data converges to the same minima). We extend this discovery,
demonstrating that stable subnetworks can exist even within an unstable dense
initialization. In our linear mode connectivity studies, we find that pruning
with distilled data discards parameters that contribute to the sharpness of the
loss landscape. Lastly, we show that by first generating a stable sparsity mask
at initialization, we can find lottery tickets at significantly higher
sparsities than traditional iterative magnitude pruning.
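To make the "distilled data in the inner loop of iterative magnitude pruning" idea concrete, here is a toy sketch on a linear model: train on a small distilled set, prune the smallest-magnitude surviving weights, rewind to the (masked) initialization, and repeat. This is a deliberately simplified stand-in for the paper's ResNet-scale pipeline; the model, pruning rate, and data are illustrative.

```python
import numpy as np

def train(w, mask, X, y, lr=0.1, steps=200):
    """Gradient descent on squared error for a linear model, with pruned
    weights held at zero via the mask."""
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w = (w - lr * grad) * mask
    return w

rng = np.random.default_rng(0)
X_distilled = rng.normal(size=(20, 10))          # stand-in for distilled data
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 0.5]
y_distilled = X_distilled @ true_w

w0 = rng.normal(size=10) * 0.1                   # initialization to rewind to
mask = np.ones(10)
w = w0.copy()
for _ in range(3):                               # iterative magnitude pruning
    w = train(w, mask, X_distilled, y_distilled)
    thresh = np.quantile(np.abs(w[mask == 1]), 0.3)   # prune ~30% per round
    mask = mask * (np.abs(w) > thresh)
    w = w0 * mask                                # rewind to (masked) init
print("surviving weights:", int(mask.sum()), "->", np.nonzero(mask)[0])
```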
|
2503.17911 | Peng Cheng | Xiaoyao Zhong, Haotian Li, Jiabao Jin, Mingyu Yang, Deming Chu,
Xiangyu Wang, Zhitao Shen, Wei Jia, George Gu, Yi Xie, Xuemin Lin, Heng Tao
Shen, Jingkuan Song, Peng Cheng | VSAG: An Optimized Search Framework for Graph-based Approximate Nearest
Neighbor Search | 16 pages, the report of open-source library VSAG
(https://github.com/antgroup/vsag) | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Approximate nearest neighbor search (ANNS) is a fundamental problem in vector
databases and AI infrastructures. Recent graph-based ANNS algorithms have
achieved high search accuracy with practical efficiency. Despite the
advancements, these algorithms still face performance bottlenecks in
production, due to the random memory access patterns of graph-based search and
the high computational overheads of vector distance. In addition, the
performance of a graph-based ANNS algorithm is highly sensitive to parameters,
while selecting the optimal parameters is cost-prohibitive, e.g., manual tuning
requires repeatedly re-building the index.
This paper introduces VSAG, an open-source framework that aims to enhance the
in-production performance of graph-based ANNS algorithms. VSAG has been
deployed at scale in the services of Ant Group, and it incorporates three key
optimizations: (i) efficient memory access: it reduces L3 cache misses with
pre-fetching and cache-friendly vector organization; (ii) automated parameter
tuning: it automatically selects performance-optimal parameters without
requiring index rebuilding; (iii) efficient distance computation: it leverages
modern hardware, scalar quantization, and smartly switches to low-precision
representation to dramatically reduce the distance computation costs. We
evaluate VSAG on real-world datasets. The experimental results show that VSAG
achieves the state-of-the-art performance and provides up to 4x speedup over
HNSWlib (an industry-standard library) while ensuring the same accuracy.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 03:16:50 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhong",
"Xiaoyao",
""
],
[
"Li",
"Haotian",
""
],
[
"Jin",
"Jiabao",
""
],
[
"Yang",
"Mingyu",
""
],
[
"Chu",
"Deming",
""
],
[
"Wang",
"Xiangyu",
""
],
[
"Shen",
"Zhitao",
""
],
[
"Jia",
"Wei",
""
],
[
"Gu",
"George",
""
],
[
"Xie",
"Yi",
""
],
[
"Lin",
"Xuemin",
""
],
[
"Shen",
"Heng Tao",
""
],
[
"Song",
"Jingkuan",
""
],
[
"Cheng",
"Peng",
""
]
] | TITLE: VSAG: An Optimized Search Framework for Graph-based Approximate Nearest
Neighbor Search
ABSTRACT: Approximate nearest neighbor search (ANNS) is a fundamental problem in vector
databases and AI infrastructures. Recent graph-based ANNS algorithms have
achieved high search accuracy with practical efficiency. Despite the
advancements, these algorithms still face performance bottlenecks in
production, due to the random memory access patterns of graph-based search and
the high computational overheads of vector distance. In addition, the
performance of a graph-based ANNS algorithm is highly sensitive to parameters,
while selecting the optimal parameters is cost-prohibitive, e.g., manual tuning
requires repeatedly re-building the index.
This paper introduces VSAG, an open-source framework that aims to enhance the
in-production performance of graph-based ANNS algorithms. VSAG has been
deployed at scale in the services of Ant Group, and it incorporates three key
optimizations: (i) efficient memory access: it reduces L3 cache misses with
pre-fetching and cache-friendly vector organization; (ii) automated parameter
tuning: it automatically selects performance-optimal parameters without
requiring index rebuilding; (iii) efficient distance computation: it leverages
modern hardware, scalar quantization, and smartly switches to low-precision
representation to dramatically reduce the distance computation costs. We
evaluate VSAG on real-world datasets. The experimental results show that VSAG
achieves the state-of-the-art performance and provides up to 4x speedup over
HNSWlib (an industry-standard library) while ensuring the same accuracy.
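Of the three optimizations, scalar quantization is the easiest to illustrate in isolation. The sketch below shows one common form of it (per-dimension uint8 codes with an approximate L2 distance on the codes); it is not VSAG's actual implementation, and the dimensions and dataset are toy values.

```python
import numpy as np

def fit_quantizer(vectors):
    """Per-dimension scalar quantization to uint8 (a common ANNS trick)."""
    lo, hi = vectors.min(axis=0), vectors.max(axis=0)
    scale = np.where(hi > lo, (hi - lo) / 255.0, 1.0)
    return lo, scale

def quantize(vectors, lo, scale):
    return np.clip(np.round((vectors - lo) / scale), 0, 255).astype(np.uint8)

def approx_l2(query_q, base_q, scale):
    """Approximate squared L2 distance computed on the quantized codes."""
    diff = (query_q.astype(np.int32) - base_q.astype(np.int32)) * scale
    return (diff * diff).sum(axis=1)

rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 64)).astype(np.float32)
query = rng.normal(size=64).astype(np.float32)
lo, scale = fit_quantizer(base)
base_q = quantize(base, lo, scale)
query_q = quantize(query[None], lo, scale)[0]
exact = ((base - query) ** 2).sum(axis=1)
approx = approx_l2(query_q, base_q, scale)
print(np.argmin(exact), np.argmin(approx))       # nearest neighbor usually agrees
```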
|
2503.17914 | Yazhou Yao | Jianjian Yin, Tao Chen, Gensheng Pei, Yazhou Yao, Liqiang Nie,
Xiansheng Hua | Semi-supervised Semantic Segmentation with Multi-Constraint Consistency
Learning | accepted by IEEE Transactions on Multimedia | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consistency regularization has prevailed in semi-supervised semantic
segmentation and achieved promising performance. However, existing methods
typically concentrate on enhancing the Image-augmentation based Prediction
consistency and optimizing the segmentation network as a whole, resulting in
insufficient utilization of potential supervisory information. In this paper,
we propose a Multi-Constraint Consistency Learning (MCCL) approach to
facilitate the staged enhancement of the encoder and decoder. Specifically, we
first design a feature knowledge alignment (FKA) strategy to promote the
feature consistency learning of the encoder from image-augmentation. Our FKA
encourages the encoder to derive consistent features for strongly and weakly
augmented views from the perspectives of point-to-point alignment and
prototype-based intra-class compactness. Moreover, we propose a self-adaptive
intervention (SAI) module to increase the discrepancy of aligned intermediate
feature representations, promoting Feature-perturbation based Prediction
consistency learning. Self-adaptive feature masking and noise injection are
designed in an instance-specific manner to perturb the features for robust
learning of the decoder. Experimental results on Pascal VOC2012 and Cityscapes
datasets demonstrate that our proposed MCCL achieves new state-of-the-art
performance. The source code and models are made available at
https://github.com/NUST-Machine-Intelligence-Laboratory/MCCL.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 03:21:33 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yin",
"Jianjian",
""
],
[
"Chen",
"Tao",
""
],
[
"Pei",
"Gensheng",
""
],
[
"Yao",
"Yazhou",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Hua",
"Xiansheng",
""
]
] | TITLE: Semi-supervised Semantic Segmentation with Multi-Constraint Consistency
Learning
ABSTRACT: Consistency regularization has prevailed in semi-supervised semantic
segmentation and achieved promising performance. However, existing methods
typically concentrate on enhancing the Image-augmentation based Prediction
consistency and optimizing the segmentation network as a whole, resulting in
insufficient utilization of potential supervisory information. In this paper,
we propose a Multi-Constraint Consistency Learning (MCCL) approach to
facilitate the staged enhancement of the encoder and decoder. Specifically, we
first design a feature knowledge alignment (FKA) strategy to promote the
feature consistency learning of the encoder from image-augmentation. Our FKA
encourages the encoder to derive consistent features for strongly and weakly
augmented views from the perspectives of point-to-point alignment and
prototype-based intra-class compactness. Moreover, we propose a self-adaptive
intervention (SAI) module to increase the discrepancy of aligned intermediate
feature representations, promoting Feature-perturbation based Prediction
consistency learning. Self-adaptive feature masking and noise injection are
designed in an instance-specific manner to perturb the features for robust
learning of the decoder. Experimental results on Pascal VOC2012 and Cityscapes
datasets demonstrate that our proposed MCCL achieves new state-of-the-art
performance. The source code and models are made available at
https://github.com/NUST-Machine-Intelligence-Laboratory/MCCL.
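The feature knowledge alignment idea (point-to-point alignment plus prototype-based intra-class compactness between weak and strong views) can be sketched as a simple cosine-based loss on feature batches. The formulation below is an assumption about how such terms are typically written, not the exact MCCL loss; all shapes and labels are toy values.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(feat_weak, feat_strong, labels, num_classes):
    """Point-to-point alignment between weakly/strongly augmented features
    plus a prototype-compactness term (prototypes = per-class means of the
    weak-view features)."""
    fw, fs = l2_normalize(feat_weak), l2_normalize(feat_strong)
    point = (1.0 - (fw * fs).sum(axis=1)).mean()           # cosine alignment
    protos = np.stack([fw[labels == c].mean(axis=0) for c in range(num_classes)])
    protos = l2_normalize(protos)
    compact = (1.0 - (fs * protos[labels]).sum(axis=1)).mean()
    return point + compact

rng = np.random.default_rng(0)
labels = np.arange(32) % 3                        # toy pseudo-labels, 3 classes
fw = rng.normal(size=(32, 16))                    # weak-view features
fs = fw + 0.1 * rng.normal(size=(32, 16))         # perturbed strong-view features
print(alignment_loss(fw, fs, labels, num_classes=3))
```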
|
2503.17928 | Zefeng Zhang | Zefeng Zhang, Hengzhu Tang, Jiawei Sheng, Zhenyu Zhang, Yiming Ren,
Zhenyang Li, Dawei Yin, Duohe Ma, Tingwen Liu | Debiasing Multimodal Large Language Models via Noise-Aware Preference
Optimization | CVPR 2025 | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models excel in various tasks, yet often struggle
with modality bias, where the model tends to rely heavily on a single modality
and overlook critical information in other modalities, which leads to incorrect
focus and generating irrelevant responses. In this paper, we propose using the
paradigm of preference optimization to solve the modality bias problem,
including RLAIFVBias, a debiased preference optimization dataset, and a Noise
Aware Preference Optimization algorithm. Specifically, we first construct the
dataset by introducing perturbations to reduce the informational content of
certain modalities, compelling the model to rely on a specific modality when
generating negative responses. To address the inevitable noise in automatically
constructed data, we combine the noise robust Mean Absolute Error with the
Binary Cross Entropy in Direct Preference Optimization by a negative Box Cox
transformation, and dynamically adjust the algorithm noise robustness based on
the evaluated noise levels in the data. Extensive experiments validate our
approach, demonstrating not only its effectiveness in mitigating modality bias
but also its significant role in minimizing hallucinations.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 04:00:11 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Zefeng",
""
],
[
"Tang",
"Hengzhu",
""
],
[
"Sheng",
"Jiawei",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Ren",
"Yiming",
""
],
[
"Li",
"Zhenyang",
""
],
[
"Yin",
"Dawei",
""
],
[
"Ma",
"Duohe",
""
],
[
"Liu",
"Tingwen",
""
]
] | TITLE: Debiasing Multimodal Large Language Models via Noise-Aware Preference
Optimization
ABSTRACT: Multimodal Large Language Models excel in various tasks, yet often struggle
with modality bias, where the model tends to rely heavily on a single modality
and overlook critical information in other modalities, which leads to incorrect
focus and generating irrelevant responses. In this paper, we propose using the
paradigm of preference optimization to solve the modality bias problem,
including RLAIFVBias, a debiased preference optimization dataset, and a Noise
Aware Preference Optimization algorithm. Specifically, we first construct the
dataset by introducing perturbations to reduce the informational content of
certain modalities, compelling the model to rely on a specific modality when
generating negative responses. To address the inevitable noise in automatically
constructed data, we combine the noise robust Mean Absolute Error with the
Binary Cross Entropy in Direct Preference Optimization by a negative Box Cox
transformation, and dynamically adjust the algorithm noise robustness based on
the evaluated noise levels in the data. Extensive experiments validate our
approach, demonstrating not only its effectiveness in mitigating modality bias
but also its significant role in minimizing hallucinations.
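The abstract combines the MAE and BCE terms through a negative Box-Cox transformation of the preference probability; one standard way this interpolation is written (the generalized cross-entropy form) is sketched below on top of a DPO-style margin. This is an approximation of the idea, with a fixed robustness parameter q rather than the paper's dynamic adjustment, and all inputs are toy log-probabilities.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noise_aware_pref_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                          beta=0.1, q=0.5):
    """DPO-style preference loss with a negative Box-Cox (generalized
    cross-entropy) transform: q -> 0 recovers the usual -log sigmoid
    (BCE-like) DPO loss, q -> 1 approaches the noise-robust MAE form 1 - p."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    p = sigmoid(margin)                   # probability the preferred response wins
    if q == 0.0:
        return -np.log(p).mean()
    return ((1.0 - p ** q) / q).mean()

# Toy log-probabilities for preferred (w) and rejected (l) responses.
rng = np.random.default_rng(0)
lw, ll = rng.normal(-10, 1, 8), rng.normal(-12, 1, 8)
print(noise_aware_pref_loss(lw, ll, lw - 0.5, ll - 0.2, q=0.7))
```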
|
2503.17933 | Zhengyi Ou | Justice Ou, Tinglin Huang, Yilun Zhao, Ziyang Yu, Peiqing Lu, Rex Ying | Experience Retrieval-Augmentation with Electronic Health Records Enables
Accurate Discharge QA | null | null | null | null | cs.CL cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To improve the reliability of Large Language Models (LLMs) in clinical
applications, retrieval-augmented generation (RAG) is extensively applied to
provide factual medical knowledge. However, beyond general medical knowledge
from open-ended datasets, clinical case-based knowledge is also critical for
effective medical reasoning, as it provides context grounded in real-world
patient experiences. Motivated by this, we propose the Experience Retrieval
Augmentation (ExpRAG) framework, based on Electronic Health Records (EHR), aiming
to offer the relevant context from other patients' discharge reports. ExpRAG
performs retrieval through a coarse-to-fine process, utilizing an EHR-based
report ranker to efficiently identify similar patients, followed by an
experience retriever to extract task-relevant content for enhanced medical
reasoning. To evaluate ExpRAG, we introduce DischargeQA, a clinical QA dataset
with 1,280 discharge-related questions across diagnosis, medication, and
instruction tasks. Each problem is generated using EHR data to ensure realistic
and challenging scenarios. Experimental results demonstrate that ExpRAG
consistently outperforms a text-based ranker, achieving an average relative
improvement of 5.2%, highlighting the importance of case-based knowledge for
medical reasoning.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 04:26:06 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Ou",
"Justice",
""
],
[
"Huang",
"Tinglin",
""
],
[
"Zhao",
"Yilun",
""
],
[
"Yu",
"Ziyang",
""
],
[
"Lu",
"Peiqing",
""
],
[
"Ying",
"Rex",
""
]
] | TITLE: Experience Retrieval-Augmentation with Electronic Health Records Enables
Accurate Discharge QA
ABSTRACT: To improve the reliability of Large Language Models (LLMs) in clinical
applications, retrieval-augmented generation (RAG) is extensively applied to
provide factual medical knowledge. However, beyond general medical knowledge
from open-ended datasets, clinical case-based knowledge is also critical for
effective medical reasoning, as it provides context grounded in real-world
patient experiences. Motivated by this, we propose the Experience Retrieval
Augmentation (ExpRAG) framework, based on Electronic Health Records (EHR), aiming
to offer the relevant context from other patients' discharge reports. ExpRAG
performs retrieval through a coarse-to-fine process, utilizing an EHR-based
report ranker to efficiently identify similar patients, followed by an
experience retriever to extract task-relevant content for enhanced medical
reasoning. To evaluate ExpRAG, we introduce DischargeQA, a clinical QA dataset
with 1,280 discharge-related questions across diagnosis, medication, and
instruction tasks. Each problem is generated using EHR data to ensure realistic
and challenging scenarios. Experimental results demonstrate that ExpRAG
consistently outperforms a text-based ranker, achieving an average relative
improvement of 5.2%, highlighting the importance of case-based knowledge for
medical reasoning.
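The coarse-to-fine retrieval described above (an EHR-based patient ranker followed by a chunk-level experience retriever) can be sketched as a two-stage pipeline. Everything below is illustrative: the bag-of-words "embedder", vocabulary, and patient records are placeholders for a real encoder and real discharge reports.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def coarse_to_fine(query_ehr, query_text, patients, embed, k=2, top_chunks=3):
    """Two-stage retrieval: (1) rank patients by structured-EHR similarity,
    (2) within the top-k patients, rank report chunks against the question."""
    ranked = sorted(patients, key=lambda p: cosine(query_ehr, p["ehr"]),
                    reverse=True)[:k]
    chunks = [(cosine(embed(query_text), embed(c)), c)
              for p in ranked for c in p["report_chunks"]]
    return [c for _, c in sorted(chunks, key=lambda t: t[0], reverse=True)[:top_chunks]]

# Toy bag-of-words "embedder" standing in for a real text encoder.
VOCAB = ["fever", "insulin", "metformin", "warfarin", "discharge", "diet"]
def embed(text):
    toks = text.lower().split()
    return np.array([toks.count(w) for w in VOCAB], dtype=float) + 1e-3

rng = np.random.default_rng(0)
patients = [{"ehr": rng.normal(size=5),
             "report_chunks": ["started metformin at discharge",
                               "continue warfarin and monitor INR"]}
            for _ in range(4)]
print(coarse_to_fine(rng.normal(size=5), "which discharge medication was started",
                     patients, embed))
```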
|
2503.17934 | Zhimin Chen | Xuewei Chen, Zhimin Chen, Yiren Song | TransAnimate: Taming Layer Diffusion to Generate RGBA Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-video generative models have made remarkable advancements in recent
years. However, generating RGBA videos with alpha channels for transparency and
visual effects remains a significant challenge due to the scarcity of suitable
datasets and the complexity of adapting existing models for this purpose. To
address these limitations, we present TransAnimate, an innovative framework
that integrates RGBA image generation techniques with video generation modules,
enabling the creation of dynamic and transparent videos. TransAnimate
efficiently leverages pre-trained text-to-transparent image model weights and
combines them with temporal models and controllability plugins trained on RGB
videos, adapting them for controllable RGBA video generation tasks.
Additionally, we introduce an interactive motion-guided control mechanism,
where directional arrows define movement and colors adjust scaling, offering
precise and intuitive control for designing game effects. To further alleviate
data scarcity, we have developed a pipeline for creating an RGBA video dataset,
incorporating high-quality game effect videos, extracted foreground objects,
and synthetic transparent videos. Comprehensive experiments demonstrate that
TransAnimate generates high-quality RGBA videos, establishing it as a practical
and effective tool for applications in gaming and visual effects.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 04:27:46 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Chen",
"Xuewei",
""
],
[
"Chen",
"Zhimin",
""
],
[
"Song",
"Yiren",
""
]
] | TITLE: TransAnimate: Taming Layer Diffusion to Generate RGBA Video
ABSTRACT: Text-to-video generative models have made remarkable advancements in recent
years. However, generating RGBA videos with alpha channels for transparency and
visual effects remains a significant challenge due to the scarcity of suitable
datasets and the complexity of adapting existing models for this purpose. To
address these limitations, we present TransAnimate, an innovative framework
that integrates RGBA image generation techniques with video generation modules,
enabling the creation of dynamic and transparent videos. TransAnimate
efficiently leverages pre-trained text-to-transparent image model weights and
combines them with temporal models and controllability plugins trained on RGB
videos, adapting them for controllable RGBA video generation tasks.
Additionally, we introduce an interactive motion-guided control mechanism,
where directional arrows define movement and colors adjust scaling, offering
precise and intuitive control for designing game effects. To further alleviate
data scarcity, we have developed a pipeline for creating an RGBA video dataset,
incorporating high-quality game effect videos, extracted foreground objects,
and synthetic transparent videos. Comprehensive experiments demonstrate that
TransAnimate generates high-quality RGBA videos, establishing it as a practical
and effective tool for applications in gaming and visual effects.
|
2503.17936 | Riya Naik | Riya Naik, Ashwin Srinivasan, Estrid He, and Swati Agarwal | An Empirical Study of the Role of Incompleteness and Ambiguity in
Interactions with Large Language Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural language as a medium for human-computer interaction has long been
anticipated and has been undergoing a sea-change with the advent of Large Language
Models (LLMs) with startling capacities for processing and generating language.
Many of us now treat LLMs as modern-day oracles, asking them almost any kind of
question. Unlike its Delphic predecessor, consulting an LLM does not have to be
a single-turn activity (ask a question, receive an answer, leave); and -- also
unlike the Pythia -- it is widely acknowledged that answers from LLMs can be
improved with additional context. In this paper, we aim to study when we need
multi-turn interactions with LLMs to successfully get a question answered; or
conclude that a question is unanswerable. We present a neural symbolic
framework that models the interactions between human and LLM agents. Through
the proposed framework, we define incompleteness and ambiguity in the questions
as properties deducible from the messages exchanged in the interaction, and
provide results from benchmark problems, in which the answer-correctness is
shown to depend on whether or not questions demonstrate the presence of
incompleteness or ambiguity (according to the properties we identify). Our
results show multi-turn interactions are usually required for datasets which
have a high proportion of incomplete or ambiguous questions; and that
increasing interaction length has the effect of reducing incompleteness or
ambiguity. The results also suggest that our measures of incompleteness and
ambiguity can be useful tools for characterising interactions with an LLM on
question-answering problems.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 04:34:30 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Naik",
"Riya",
""
],
[
"Srinivasan",
"Ashwin",
""
],
[
"He",
"Estrid",
""
],
[
"Agarwal",
"Swati",
""
]
] | TITLE: An Empirical Study of the Role of Incompleteness and Ambiguity in
Interactions with Large Language Models
ABSTRACT: Natural language as a medium for human-computer interaction has long been
anticipated and has been undergoing a sea-change with the advent of Large Language
Models (LLMs) with startling capacities for processing and generating language.
Many of us now treat LLMs as modern-day oracles, asking them almost any kind of
question. Unlike its Delphic predecessor, consulting an LLM does not have to be
a single-turn activity (ask a question, receive an answer, leave); and -- also
unlike the Pythia -- it is widely acknowledged that answers from LLMs can be
improved with additional context. In this paper, we aim to study when we need
multi-turn interactions with LLMs to successfully get a question answered; or
conclude that a question is unanswerable. We present a neural symbolic
framework that models the interactions between human and LLM agents. Through
the proposed framework, we define incompleteness and ambiguity in the questions
as properties deducible from the messages exchanged in the interaction, and
provide results from benchmark problems, in which the answer-correctness is
shown to depend on whether or not questions demonstrate the presence of
incompleteness or ambiguity (according to the properties we identify). Our
results show multi-turn interactions are usually required for datasets which
have a high proportion of incomplete or ambiguous questions; and that
increasing interaction length has the effect of reducing incompleteness or
ambiguity. The results also suggest that our measures of incompleteness and
ambiguity can be useful tools for characterising interactions with an LLM on
question-answering problems.
|
2503.17937 | Zhi Zhang | Zhi Zhang, Daoyi Chen | Cross-Domain Underwater Image Enhancement Guided by No-Reference Image
Quality Assessment: A Transfer Learning Approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single underwater image enhancement (UIE) is a challenging ill-posed problem,
but its development is hindered by two major issues: (1) The labels in
underwater reference datasets are pseudo labels, relying on these pseudo ground
truths in supervised learning leads to domain discrepancy. (2) Underwater
reference datasets are scarce, making training on such small datasets prone to
overfitting and distribution shift. To address these challenges, we propose
Trans-UIE, a transfer learning-based UIE model that captures the fundamental
paradigms of UIE through pretraining and utilizes a dataset composed of both
reference and non-reference datasets for fine-tuning. However, fine-tuning the
model using only reconstruction loss may introduce confirmation bias. To
mitigate this, our method leverages no-reference image quality assessment
(NR-IQA) metrics from above-water scenes to guide the transfer learning process
across domains while generating enhanced images with the style of the
above-water image domain. Additionally, to reduce the risk of overfitting
during the pretraining stage, we introduce Pearson correlation loss.
Experimental results on both full-reference and no-reference underwater
benchmark datasets demonstrate that Trans-UIE significantly outperforms
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 04:40:07 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhang",
"Zhi",
""
],
[
"Chen",
"Daoyi",
""
]
] | TITLE: Cross-Domain Underwater Image Enhancement Guided by No-Reference Image
Quality Assessment: A Transfer Learning Approach
ABSTRACT: Single underwater image enhancement (UIE) is a challenging ill-posed problem,
but its development is hindered by two major issues: (1) The labels in
underwater reference datasets are pseudo labels, relying on these pseudo ground
truths in supervised learning leads to domain discrepancy. (2) Underwater
reference datasets are scarce, making training on such small datasets prone to
overfitting and distribution shift. To address these challenges, we propose
Trans-UIE, a transfer learning-based UIE model that captures the fundamental
paradigms of UIE through pretraining and utilizes a dataset composed of both
reference and non-reference datasets for fine-tuning. However, fine-tuning the
model using only reconstruction loss may introduce confirmation bias. To
mitigate this, our method leverages no-reference image quality assessment
(NR-IQA) metrics from above-water scenes to guide the transfer learning process
across domains while generating enhanced images with the style of the
above-water image domain. Additionally, to reduce the risk of overfitting
during the pretraining stage, we introduce Pearson correlation loss.
Experimental results on both full-reference and no-reference underwater
benchmark datasets demonstrate that Trans-UIE significantly outperforms
state-of-the-art methods.
|
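Editor's illustration (not part of the dataset record above): the Trans-UIE entry mentions a Pearson correlation loss used during pretraining but does not define it here. The following is a minimal, hypothetical Python sketch of such a loss, assuming it is one minus the Pearson correlation of flattened pixel values between an enhanced image and its reference; the function name and toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def pearson_correlation_loss(pred, target, eps=1e-8):
    """Hypothetical sketch: 1 - Pearson correlation between two images.

    pred, target: float arrays of identical shape (e.g. H x W x C).
    Perfectly correlated outputs give loss ~0; uncorrelated outputs ~1.
    """
    p = pred.ravel().astype(np.float64)
    t = target.ravel().astype(np.float64)
    p_centered = p - p.mean()
    t_centered = t - t.mean()
    denom = np.sqrt((p_centered ** 2).sum()) * np.sqrt((t_centered ** 2).sum()) + eps
    corr = (p_centered * t_centered).sum() / denom
    return 1.0 - corr

# Toy usage with random images standing in for an enhanced/reference pair.
rng = np.random.default_rng(0)
img_a = rng.random((64, 64, 3))
img_b = img_a * 0.9 + 0.05          # strongly correlated -> small loss
print(pearson_correlation_loss(img_a, img_b))
```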
2503.17941 | Sunwoong Yang | Sunwoong Yang, Youngkyu Lee, Namwoo Kang | Physics-Guided Multi-Fidelity DeepONet for Data-Efficient Flow Field
Prediction | null | null | null | null | physics.flu-dyn cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study presents an enhanced multi-fidelity deep operator network
(DeepONet) framework for efficient spatio-temporal flow field prediction, with
particular emphasis on practical scenarios where high-fidelity data is scarce.
We introduce several key innovations to improve the framework's efficiency and
accuracy. First, we enhance the DeepONet architecture by incorporating a merge
network that enables more complex feature interactions between operator and
coordinate spaces, achieving a 50.4% reduction in prediction error compared to
traditional dot-product operations. We further optimize the architecture
through temporal positional encoding and point-based sampling strategies,
achieving a 7.57% improvement in prediction accuracy while reducing training
time by 96% through efficient sampling and automatic mixed precision training.
Building upon this foundation, we develop a transfer learning-based
multi-fidelity framework that leverages knowledge from pre-trained low-fidelity
models to guide high-fidelity predictions. Our approach freezes the pre-trained
branch and trunk networks while making only the merge network trainable during
high-fidelity training, preserving valuable low-fidelity representations while
efficiently adapting to high-fidelity features. Through systematic
investigation, we demonstrate that this fine-tuning strategy not only
significantly outperforms linear probing and full-tuning alternatives but also
surpasses conventional multi-fidelity frameworks by up to 76%, while achieving
up to 43.7% improvement in prediction accuracy compared to single-fidelity
training. The core contribution lies in our novel time-derivative guided
sampling approach: it maintains prediction accuracy equivalent to models
trained with the full dataset while requiring only 60% of the original
high-fidelity samples.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 04:48:18 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yang",
"Sunwoong",
""
],
[
"Lee",
"Youngkyu",
""
],
[
"Kang",
"Namwoo",
""
]
] | TITLE: Physics-Guided Multi-Fidelity DeepONet for Data-Efficient Flow Field
Prediction
ABSTRACT: This study presents an enhanced multi-fidelity deep operator network
(DeepONet) framework for efficient spatio-temporal flow field prediction, with
particular emphasis on practical scenarios where high-fidelity data is scarce.
We introduce several key innovations to improve the framework's efficiency and
accuracy. First, we enhance the DeepONet architecture by incorporating a merge
network that enables more complex feature interactions between operator and
coordinate spaces, achieving a 50.4% reduction in prediction error compared to
traditional dot-product operations. We further optimize the architecture
through temporal positional encoding and point-based sampling strategies,
achieving a 7.57% improvement in prediction accuracy while reducing training
time by 96% through efficient sampling and automatic mixed precision training.
Building upon this foundation, we develop a transfer learning-based
multi-fidelity framework that leverages knowledge from pre-trained low-fidelity
models to guide high-fidelity predictions. Our approach freezes the pre-trained
branch and trunk networks while making only the merge network trainable during
high-fidelity training, preserving valuable low-fidelity representations while
efficiently adapting to high-fidelity features. Through systematic
investigation, we demonstrate that this fine-tuning strategy not only
significantly outperforms linear probing and full-tuning alternatives but also
surpasses conventional multi-fidelity frameworks by up to 76%, while achieving
up to 43.7% improvement in prediction accuracy compared to single-fidelity
training. The core contribution lies in our novel time-derivative guided
sampling approach: it maintains prediction accuracy equivalent to models
trained with the full dataset while requiring only 60% of the original
high-fidelity samples.
|
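Editor's illustration (not part of the dataset record above): the multi-fidelity DeepONet entry describes replacing the usual branch-trunk dot product with a trainable merge network. Below is a minimal Python/PyTorch sketch of that general idea only; the layer sizes, the concatenation-based merge, and the sensor/coordinate dimensions are assumptions and do not reproduce the authors' architecture.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Small fully connected network with tanh activations between layers."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

class MergeDeepONet(nn.Module):
    """Sketch: branch encodes sampled input functions, trunk encodes
    space-time coordinates, and a merge MLP replaces the dot product."""

    def __init__(self, n_sensors=100, coord_dim=3, latent=64):
        super().__init__()
        self.branch = mlp([n_sensors, 128, latent])
        self.trunk = mlp([coord_dim, 128, latent])
        self.merge = mlp([2 * latent, 128, 1])  # assumed concatenation merge

    def forward(self, u, y):
        # u: (batch, n_sensors) sampled input function
        # y: (batch, coord_dim) query coordinates, e.g. (x, y, t)
        b = self.branch(u)
        t = self.trunk(y)
        return self.merge(torch.cat([b, t], dim=-1)).squeeze(-1)

model = MergeDeepONet()
u = torch.randn(8, 100)
y = torch.rand(8, 3)
print(model(u, y).shape)  # torch.Size([8])
```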
2503.17949 | Zeeshan Ahmad | Moin Uddin Maruf and Sungmin Kim and Zeeshan Ahmad | Equivariant Machine Learning Interatomic Potentials with Global Charge
Redistribution | 24 pages, 5 figures, 1 table + 12 pages of Supporting Information | null | null | null | physics.chem-ph cond-mat.mtrl-sci cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning interatomic potentials (MLIPs) provide a computationally
efficient alternative to quantum mechanical simulations for predicting material
properties. Message-passing graph neural networks, commonly used in these
MLIPs, rely on local descriptor-based symmetry functions to model atomic
interactions. However, such local descriptor-based approaches struggle with
systems exhibiting long-range interactions, charge transfer, and compositional
heterogeneity. In this work, we develop a new equivariant MLIP incorporating
long-range Coulomb interactions through explicit treatment of electronic
degrees of freedom, specifically global charge distribution within the system.
This is achieved using a charge equilibration scheme based on predicted atomic
electronegativities. We systematically evaluate our model across a range of
benchmark periodic and non-periodic datasets, demonstrating that it outperforms
both short-range equivariant and long-range invariant MLIPs in energy and force
predictions. Our approach enables more accurate and efficient simulations of
systems with long-range interactions and charge heterogeneity, expanding the
applicability of MLIPs in computational materials science.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 05:26:55 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Maruf",
"Moin Uddin",
""
],
[
"Kim",
"Sungmin",
""
],
[
"Ahmad",
"Zeeshan",
""
]
] | TITLE: Equivariant Machine Learning Interatomic Potentials with Global Charge
Redistribution
ABSTRACT: Machine learning interatomic potentials (MLIPs) provide a computationally
efficient alternative to quantum mechanical simulations for predicting material
properties. Message-passing graph neural networks, commonly used in these
MLIPs, rely on local descriptor-based symmetry functions to model atomic
interactions. However, such local descriptor-based approaches struggle with
systems exhibiting long-range interactions, charge transfer, and compositional
heterogeneity. In this work, we develop a new equivariant MLIP incorporating
long-range Coulomb interactions through explicit treatment of electronic
degrees of freedom, specifically global charge distribution within the system.
This is achieved using a charge equilibration scheme based on predicted atomic
electronegativities. We systematically evaluate our model across a range of
benchmark periodic and non-periodic datasets, demonstrating that it outperforms
both short-range equivariant and long-range invariant MLIPs in energy and force
predictions. Our approach enables more accurate and efficient simulations of
systems with long-range interactions and charge heterogeneity, expanding the
applicability of MLIPs in computational materials science.
|
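Editor's illustration (not part of the dataset record above): the MLIP entry assigns atomic charges via a charge-equilibration scheme driven by predicted electronegativities. In its classical Qeq form this step is a constrained linear solve; the Python sketch below shows that generic step under simplifying assumptions (constant hardness, bare 1/r Coulomb kernel, no periodic images), and is not the paper's implementation.

```python
import numpy as np

def charge_equilibration(chi, hardness, positions, total_charge=0.0):
    """Solve for atomic charges q minimising
        E(q) = sum_i chi_i q_i + 0.5 * q^T A q,  subject to  sum_i q_i = Q,
    where A has atomic hardness on the diagonal and 1/r_ij off-diagonal.
    Simplified sketch: vacuum Coulomb kernel, no screening, no periodicity.
    """
    n = len(chi)
    a_matrix = np.zeros((n, n))
    for i in range(n):
        a_matrix[i, i] = hardness[i]
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            a_matrix[i, j] = a_matrix[j, i] = 1.0 / r
    # A Lagrange multiplier enforces the total-charge constraint.
    lhs = np.block([[a_matrix, np.ones((n, 1))],
                    [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([-np.asarray(chi), [total_charge]])
    solution = np.linalg.solve(lhs, rhs)
    return solution[:n]  # charges; solution[n] is the multiplier

# Toy three-atom example with made-up electronegativities and hardness.
chi = np.array([0.2, -0.1, 0.3])
hardness = np.array([1.0, 1.2, 0.9])
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.5, 0.0]])
q = charge_equilibration(chi, hardness, positions)
print(q, q.sum())  # charges sum to ~0
```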
2503.17956 | Sami Zhioua | Sami Zhioua, Ruta Binkyte, Ayoub Ouni, Farah Barika Ktata | On the Origins of Sampling Bias: Implications on Fairness Measurement
and Mitigation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurately measuring discrimination is crucial to faithfully assessing
fairness of trained machine learning (ML) models. Any bias in measuring
discrimination leads to either amplification or underestimation of the existing
disparity. Several sources of bias exist and it is assumed that bias resulting
from machine learning is born equally by different groups (e.g. females vs
males, whites vs blacks, etc.). If, however, bias is born differently by
different groups, it may exacerbate discrimination against specific
sub-populations. Sampling bias, in particular, is inconsistently used in the
literature to describe bias due to the sampling procedure. In this paper, we
attempt to disambiguate this term by introducing clearly defined variants of
sampling bias, namely, sample size bias (SSB) and underrepresentation bias
(URB). Through an extensive set of experiments on benchmark datasets and using
mainstream learning algorithms, we expose relevant observations in several
model training scenarios. The observations are finally framed as actionable
recommendations for practitioners.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 06:23:07 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhioua",
"Sami",
""
],
[
"Binkyte",
"Ruta",
""
],
[
"Ouni",
"Ayoub",
""
],
[
"Ktata",
"Farah Barika",
""
]
] | TITLE: On the Origins of Sampling Bias: Implications on Fairness Measurement
and Mitigation
ABSTRACT: Accurately measuring discrimination is crucial to faithfully assessing
fairness of trained machine learning (ML) models. Any bias in measuring
discrimination leads to either amplification or underestimation of the existing
disparity. Several sources of bias exist and it is assumed that bias resulting
from machine learning is borne equally by different groups (e.g. females vs
males, whites vs blacks, etc.). If, however, bias is borne differently by
different groups, it may exacerbate discrimination against specific
sub-populations. Sampling bias, in particular, is inconsistently used in the
literature to describe bias due to the sampling procedure. In this paper, we
attempt to disambiguate this term by introducing clearly defined variants of
sampling bias, namely, sample size bias (SSB) and underrepresentation bias
(URB). Through an extensive set of experiments on benchmark datasets and using
mainstream learning algorithms, we expose relevant observations in several
model training scenarios. The observations are finally framed as actionable
recommendations for practitioners.
|
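Editor's illustration (not part of the dataset record above): the sampling-bias entry distinguishes sample size bias (a uniformly smaller sample) from underrepresentation bias (one group disproportionately subsampled). A toy Python sketch of how such biased training sets might be simulated from a labelled dataset is given below; the group encoding, keep fractions, and function names are illustrative assumptions, not the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_size_bias(X, y, group, keep_fraction=0.2):
    """SSB sketch: uniformly subsample the whole dataset, all groups alike."""
    n = len(y)
    idx = rng.choice(n, size=int(keep_fraction * n), replace=False)
    return X[idx], y[idx], group[idx]

def underrepresentation_bias(X, y, group, target_group=1, keep_fraction=0.2):
    """URB sketch: keep only a fraction of one group, leave the rest intact."""
    keep = np.ones(len(y), dtype=bool)
    members = np.where(group == target_group)[0]
    dropped = rng.choice(members, size=int((1 - keep_fraction) * len(members)),
                         replace=False)
    keep[dropped] = False
    return X[keep], y[keep], group[keep]

# Synthetic data: 1000 rows, two groups, binary labels.
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)
y = rng.integers(0, 2, size=1000)
print(sample_size_bias(X, y, group)[1].shape)            # smaller overall
print(np.bincount(underrepresentation_bias(X, y, group)[2]))  # skewed groups
```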
2503.17963 | Hyunwoo Ko | Guijin Son, Hyunwoo Ko, Haneral Jung, Chami Hwang | Won: Establishing Best Practices for Korean Financial NLP | The training dataset is uploaded here:
https://huggingface.co/datasets/KRX-Data/Won-Instruct. The model will be
updated shortly | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this work, we present the first open leaderboard for evaluating Korean
large language models focused on finance. Operated for about eight weeks, the
leaderboard evaluated 1,119 submissions on a closed benchmark covering five
MCQA categories: finance and accounting, stock price prediction, domestic
company analysis, financial markets, and financial agent tasks, and one
open-ended QA task. Building on insights from these evaluations, we release an
open instruction dataset of 80k instances and summarize widely used training
strategies observed among top-performing models. Finally, we introduce Won, a
fully open and transparent LLM built using these best practices. We hope our
contributions help advance the development of better and safer financial LLMs
for Korean and other languages.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 06:52:38 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Son",
"Guijin",
""
],
[
"Ko",
"Hyunwoo",
""
],
[
"Jung",
"Haneral",
""
],
[
"Hwang",
"Chami",
""
]
] | TITLE: Won: Establishing Best Practices for Korean Financial NLP
ABSTRACT: In this work, we present the first open leaderboard for evaluating Korean
large language models focused on finance. Operated for about eight weeks, the
leaderboard evaluated 1,119 submissions on a closed benchmark covering five
MCQA categories: finance and accounting, stock price prediction, domestic
company analysis, financial markets, and financial agent tasks, and one
open-ended QA task. Building on insights from these evaluations, we release an
open instruction dataset of 80k instances and summarize widely used training
strategies observed among top-performing models. Finally, we introduce Won, a
fully open and transparent LLM built using these best practices. We hope our
contributions help advance the development of better and safer financial LLMs
for Korean and other languages.
|
2503.17966 | Wei Lu | Zeng-Hui Zhu, Wei Lu, Si-Bao Chen, Chris H. Q. Ding, Jin Tang, and Bin
Luo | Real-World Remote Sensing Image Dehazing: Benchmark and Baseline | 11 pages, 9 figures, real-world remote sensing image dehazing dataset | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote Sensing Image Dehazing (RSID) poses significant challenges in
real-world scenarios due to the complex atmospheric conditions and severe color
distortions that degrade image quality. The scarcity of real-world remote
sensing hazy image pairs has compelled existing methods to rely primarily on
synthetic datasets. However, these methods struggle with real-world
applications due to the inherent domain gap between synthetic and real data. To
address this, we introduce Real-World Remote Sensing Hazy Image Dataset
(RRSHID), the first large-scale dataset featuring real-world hazy and dehazed
image pairs across diverse atmospheric conditions. Based on this, we propose
MCAF-Net, a novel framework tailored for real-world RSID. Its effectiveness
arises from three innovative components: Multi-branch Feature Integration Block
Aggregator (MFIBA), which enables robust feature extraction through cascaded
integration blocks and parallel multi-branch processing; Color-Calibrated
Self-Supervised Attention Module (CSAM), which mitigates complex color
distortions via self-supervised learning and attention-guided refinement; and
Multi-Scale Feature Adaptive Fusion Module (MFAFM), which integrates features
effectively while preserving local details and global context. Extensive
experiments validate that MCAF-Net demonstrates state-of-the-art performance in
real-world RSID, while maintaining competitive performance on synthetic
datasets. The introduction of RRSHID and MCAF-Net sets new benchmarks for
real-world RSID research, advancing practical solutions for this complex task.
The code and dataset are publicly available at
\url{https://github.com/lwCVer/RRSHID}.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 07:15:46 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Zhu",
"Zeng-Hui",
""
],
[
"Lu",
"Wei",
""
],
[
"Chen",
"Si-Bao",
""
],
[
"Ding",
"Chris H. Q.",
""
],
[
"Tang",
"Jin",
""
],
[
"Luo",
"Bin",
""
]
] | TITLE: Real-World Remote Sensing Image Dehazing: Benchmark and Baseline
ABSTRACT: Remote Sensing Image Dehazing (RSID) poses significant challenges in
real-world scenarios due to the complex atmospheric conditions and severe color
distortions that degrade image quality. The scarcity of real-world remote
sensing hazy image pairs has compelled existing methods to rely primarily on
synthetic datasets. However, these methods struggle with real-world
applications due to the inherent domain gap between synthetic and real data. To
address this, we introduce Real-World Remote Sensing Hazy Image Dataset
(RRSHID), the first large-scale dataset featuring real-world hazy and dehazed
image pairs across diverse atmospheric conditions. Based on this, we propose
MCAF-Net, a novel framework tailored for real-world RSID. Its effectiveness
arises from three innovative components: Multi-branch Feature Integration Block
Aggregator (MFIBA), which enables robust feature extraction through cascaded
integration blocks and parallel multi-branch processing; Color-Calibrated
Self-Supervised Attention Module (CSAM), which mitigates complex color
distortions via self-supervised learning and attention-guided refinement; and
Multi-Scale Feature Adaptive Fusion Module (MFAFM), which integrates features
effectively while preserving local details and global context. Extensive
experiments validate that MCAF-Net demonstrates state-of-the-art performance in
real-world RSID, while maintaining competitive performance on synthetic
datasets. The introduction of RRSHID and MCAF-Net sets new benchmarks for
real-world RSID research, advancing practical solutions for this complex task.
The code and dataset are publicly available at
\url{https://github.com/lwCVer/RRSHID}.
|
2503.17978 | Dominique Nshimyimana | Dominique Nshimyimana, Vitor Fortes Rey, Sungho Suh, Bo Zhou, Paul
Lukowicz | PIM: Physics-Informed Multi-task Pre-training for Improving Inertial
Sensor-Based Human Activity Recognition | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human activity recognition (HAR) with deep learning models relies on large
amounts of labeled data, often challenging to obtain due to associated cost,
time, and labor. Self-supervised learning (SSL) has emerged as an effective
approach to leverage unlabeled data through pretext tasks, such as masked
reconstruction and multitask learning with signal processing-based data
augmentations, to pre-train encoder models. However, such methods are often
derived from computer vision approaches that disregard physical mechanisms and
constraints that govern wearable sensor data and the phenomena they reflect. In
this paper, we propose a physics-informed multi-task pre-training (PIM)
framework for IMU-based HAR. PIM generates pretext tasks based on the
understanding of basic physical aspects of human motion, including movement
speed, angles of movement, and symmetry between sensor placements. Given a
sensor signal, we calculate corresponding features using physics-based
equations and use them as pretext tasks for SSL. This enables the model to
capture fundamental physical characteristics of human activities, which is
especially relevant for multi-sensor systems. Experimental evaluations on four
HAR benchmark datasets demonstrate that the proposed method outperforms
existing state-of-the-art methods, including data augmentation and masked
reconstruction, in terms of accuracy and F1 score. We have observed gains of
almost 10\% in macro F1 score and accuracy with only 2 to 8 labeled examples
per class and up to 3% when there is no reduction in the amount of training
data.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:16:01 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Nshimyimana",
"Dominique",
""
],
[
"Rey",
"Vitor Fortes",
""
],
[
"Suh",
"Sungho",
""
],
[
"Zhou",
"Bo",
""
],
[
"Lukowicz",
"Paul",
""
]
] | TITLE: PIM: Physics-Informed Multi-task Pre-training for Improving Inertial
Sensor-Based Human Activity Recognition
ABSTRACT: Human activity recognition (HAR) with deep learning models relies on large
amounts of labeled data, often challenging to obtain due to associated cost,
time, and labor. Self-supervised learning (SSL) has emerged as an effective
approach to leverage unlabeled data through pretext tasks, such as masked
reconstruction and multitask learning with signal processing-based data
augmentations, to pre-train encoder models. However, such methods are often
derived from computer vision approaches that disregard physical mechanisms and
constraints that govern wearable sensor data and the phenomena they reflect. In
this paper, we propose a physics-informed multi-task pre-training (PIM)
framework for IMU-based HAR. PIM generates pretext tasks based on the
understanding of basic physical aspects of human motion, including movement
speed, angles of movement, and symmetry between sensor placements. Given a
sensor signal, we calculate corresponding features using physics-based
equations and use them as pretext tasks for SSL. This enables the model to
capture fundamental physical characteristics of human activities, which is
especially relevant for multi-sensor systems. Experimental evaluations on four
HAR benchmark datasets demonstrate that the proposed method outperforms
existing state-of-the-art methods, including data augmentation and masked
reconstruction, in terms of accuracy and F1 score. We have observed gains of
almost 10\% in macro F1 score and accuracy with only 2 to 8 labeled examples
per class and up to 3% when there is no reduction in the amount of training
data.
|
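Editor's illustration (not part of the dataset record above): the PIM entry derives pretext targets such as movement speed, movement angles, and symmetry between sensor placements from physics-based equations, without giving the exact formulas here. The Python sketch below shows one plausible way to compute rough proxies for these quantities from raw accelerometer windows; all definitions, names, and constants are illustrative assumptions rather than the paper's method.

```python
import numpy as np

def pretext_targets(acc_left, acc_right, dt=0.02):
    """Toy physics-based pretext features for one window of tri-axial
    accelerometer data from two body-worn sensors (shape: T x 3, in m/s^2).

    Returns crude proxies for movement speed, sensor tilt angle, and
    left/right symmetry -- illustrative stand-ins for the pretext tasks.
    """
    # Speed proxy: integrate the magnitude of linear acceleration over time,
    # removing gravity with a crude per-window mean subtraction.
    lin = acc_left - acc_left.mean(axis=0, keepdims=True)
    speed = np.abs(np.cumsum(np.linalg.norm(lin, axis=1) * dt)).mean()

    # Angle proxy: mean tilt relative to vertical, estimated from the
    # gravity direction present in the raw signal.
    g = acc_left.mean(axis=0)
    tilt = np.degrees(np.arccos(np.clip(g[2] / (np.linalg.norm(g) + 1e-8), -1, 1)))

    # Symmetry proxy: correlation of acceleration magnitudes across placements.
    mag_l = np.linalg.norm(acc_left, axis=1)
    mag_r = np.linalg.norm(acc_right, axis=1)
    symmetry = np.corrcoef(mag_l, mag_r)[0, 1]

    return speed, tilt, symmetry

window = np.random.default_rng(1).normal(0, 1, size=(200, 3)) + [0, 0, 9.81]
print(pretext_targets(window, window))  # identical sensors -> symmetry ~1
```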
2503.17982 | Yara AlaaEldin | Yara AlaaEldin and Francesca Odone | Co-SemDepth: Fast Joint Semantic Segmentation and Depth Estimation on
Aerial Images | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Understanding the geometric and semantic properties of the scene is crucial
in autonomous navigation and particularly challenging in the case of Unmanned
Aerial Vehicle (UAV) navigation. Such information may be obtained by
estimating depth and semantic segmentation maps of the surrounding environment,
and for their practical use in autonomous navigation, the procedure must be
performed as close to real-time as possible. In this paper, we leverage
monocular cameras on aerial robots to predict depth and semantic maps in
low-altitude unstructured environments. We propose a joint deep-learning
architecture that can perform the two tasks accurately and rapidly, and
validate its effectiveness on MidAir and Aeroscapes benchmark datasets. Our
joint-architecture proves to be competitive or superior to the other single and
joint architecture methods while performing its task fast, predicting 20.2 FPS
on a single NVIDIA Quadro P5000 GPU, and it has a low memory footprint. All
codes for training and prediction can be found on this link:
https://github.com/Malga-Vision/Co-SemDepth
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:25:07 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"AlaaEldin",
"Yara",
""
],
[
"Odone",
"Francesca",
""
]
] | TITLE: Co-SemDepth: Fast Joint Semantic Segmentation and Depth Estimation on
Aerial Images
ABSTRACT: Understanding the geometric and semantic properties of the scene is crucial
in autonomous navigation and particularly challenging in the case of Unmanned
Aerial Vehicle (UAV) navigation. Such information may be obtained by
estimating depth and semantic segmentation maps of the surrounding environment,
and for their practical use in autonomous navigation, the procedure must be
performed as close to real-time as possible. In this paper, we leverage
monocular cameras on aerial robots to predict depth and semantic maps in
low-altitude unstructured environments. We propose a joint deep-learning
architecture that can perform the two tasks accurately and rapidly, and
validate its effectiveness on MidAir and Aeroscapes benchmark datasets. Our
joint-architecture proves to be competitive or superior to the other single and
joint architecture methods while performing its task fast, predicting 20.2 FPS
on a single NVIDIA Quadro P5000 GPU, and it has a low memory footprint. All
codes for training and prediction can be found on this link:
https://github.com/Malga-Vision/Co-SemDepth
|
2503.17984 | Maochen Yang | Maochen Yang, Zekun Li, Jian Zhang, Lei Qi, Yinghuan Shi | Taste More, Taste Better: Diverse Data and Strong Model Boost
Semi-Supervised Crowd Counting | Accepted by CVPR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised crowd counting is crucial for addressing the high annotation
costs of densely populated scenes. Although several methods based on
pseudo-labeling have been proposed, it remains challenging to effectively and
accurately utilize unlabeled data. In this paper, we propose a novel framework
called Taste More Taste Better (TMTB), which emphasizes both data and model
aspects. Firstly, we explore a data augmentation technique well-suited for the
crowd counting task. By inpainting the background regions, this technique can
effectively enhance data diversity while preserving the fidelity of the entire
scenes. Secondly, we introduce the Visual State Space Model as backbone to
capture the global context information from crowd scenes, which is crucial for
extremely crowded, low-light, and adverse weather scenarios. In addition to the
traditional regression head for exact prediction, we employ an Anti-Noise
classification head to provide less exact but more accurate supervision, since
the regression head is sensitive to noise in manual annotations. We conduct
extensive experiments on four benchmark datasets and show that our method
outperforms state-of-the-art methods by a large margin. Code is publicly
available on https://github.com/syhien/taste_more_taste_better.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:38:01 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yang",
"Maochen",
""
],
[
"Li",
"Zekun",
""
],
[
"Zhang",
"Jian",
""
],
[
"Qi",
"Lei",
""
],
[
"Shi",
"Yinghuan",
""
]
] | TITLE: Taste More, Taste Better: Diverse Data and Strong Model Boost
Semi-Supervised Crowd Counting
ABSTRACT: Semi-supervised crowd counting is crucial for addressing the high annotation
costs of densely populated scenes. Although several methods based on
pseudo-labeling have been proposed, it remains challenging to effectively and
accurately utilize unlabeled data. In this paper, we propose a novel framework
called Taste More Taste Better (TMTB), which emphasizes both data and model
aspects. Firstly, we explore a data augmentation technique well-suited for the
crowd counting task. By inpainting the background regions, this technique can
effectively enhance data diversity while preserving the fidelity of the entire
scenes. Secondly, we introduce the Visual State Space Model as backbone to
capture the global context information from crowd scenes, which is crucial for
extremely crowded, low-light, and adverse weather scenarios. In addition to the
traditional regression head for exact prediction, we employ an Anti-Noise
classification head to provide less exact but more accurate supervision, since
the regression head is sensitive to noise in manual annotations. We conduct
extensive experiments on four benchmark datasets and show that our method
outperforms state-of-the-art methods by a large margin. Code is publicly
available on https://github.com/syhien/taste_more_taste_better.
|
2503.17990 | Venktesh V | V Venktesh, Mandeep Rathee, Avishek Anand | SUNAR: Semantic Uncertainty based Neighborhood Aware Retrieval for
Complex QA | Accepted at NAACL 2025 Main Conference | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex question-answering (QA) systems face significant challenges in
retrieving and reasoning over information that addresses multi-faceted queries.
While large language models (LLMs) have advanced the reasoning capabilities of
these systems, the bounded-recall problem persists, where procuring all
relevant documents in first-stage retrieval remains a challenge. Missing
pertinent documents at this stage leads to performance degradation that cannot
be remedied in later stages, especially given the limited context windows of
LLMs which necessitate high recall at smaller retrieval depths. In this paper,
we introduce SUNAR, a novel approach that leverages LLMs to guide a
Neighborhood Aware Retrieval process. SUNAR iteratively explores a neighborhood
graph of documents, dynamically promoting or penalizing documents based on
uncertainty estimates from interim LLM-generated answer candidates. We validate
our approach through extensive experiments on two complex QA datasets. Our
results show that SUNAR significantly outperforms existing retrieve-and-reason
baselines, achieving up to a 31.84% improvement in performance over existing
state-of-the-art methods for complex QA.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:50:44 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Venktesh",
"V",
""
],
[
"Rathee",
"Mandeep",
""
],
[
"Anand",
"Avishek",
""
]
] | TITLE: SUNAR: Semantic Uncertainty based Neighborhood Aware Retrieval for
Complex QA
ABSTRACT: Complex question-answering (QA) systems face significant challenges in
retrieving and reasoning over information that addresses multi-faceted queries.
While large language models (LLMs) have advanced the reasoning capabilities of
these systems, the bounded-recall problem persists, where procuring all
relevant documents in first-stage retrieval remains a challenge. Missing
pertinent documents at this stage leads to performance degradation that cannot
be remedied in later stages, especially given the limited context windows of
LLMs which necessitate high recall at smaller retrieval depths. In this paper,
we introduce SUNAR, a novel approach that leverages LLMs to guide a
Neighborhood Aware Retrieval process. SUNAR iteratively explores a neighborhood
graph of documents, dynamically promoting or penalizing documents based on
uncertainty estimates from interim LLM-generated answer candidates. We validate
our approach through extensive experiments on two complex QA datasets. Our
results show that SUNAR significantly outperforms existing retrieve-and-reason
baselines, achieving up to a 31.84% improvement in performance over existing
state-of-the-art methods for complex QA.
|
2503.17992 | Yuping Duan | Xueying Liu, Lianfang Wang, Jun Liu, Yong Wang and Yuping Duan | Geometric Constrained Non-Line-of-Sight Imaging | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/publicdomain/zero/1.0/ | Normal reconstruction is crucial in non-line-of-sight (NLOS) imaging, as it
provides key geometric and lighting information about hidden objects, which
significantly improves reconstruction accuracy and scene understanding.
However, jointly estimating normals and albedo expands the problem from
matrix-valued functions to tensor-valued functions, substantially
increasing complexity and computational difficulty. In this paper, we propose a
novel joint albedo-surface reconstruction method, which utilizes the Frobenius
norm of the shape operator to control the variation rate of the normal field.
It is the first attempt to apply regularization methods to the reconstruction
of surface normals for hidden objects. By improving the accuracy of the normal
field, it enhances detail representation and achieves high-precision
reconstruction of hidden object geometry. The proposed method demonstrates
robustness and effectiveness on both synthetic and experimental datasets. On
transient data captured within 15 seconds, our surface normal-regularized
reconstruction model produces more accurate surfaces than recently proposed
methods and is 30 times faster than the existing surface reconstruction
approach.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:56:00 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Liu",
"Xueying",
""
],
[
"Wang",
"Lianfang",
""
],
[
"Liu",
"Jun",
""
],
[
"Wang",
"Yong",
""
],
[
"Duan",
"Yuping",
""
]
] | TITLE: Geometric Constrained Non-Line-of-Sight Imaging
ABSTRACT: Normal reconstruction is crucial in non-line-of-sight (NLOS) imaging, as it
provides key geometric and lighting information about hidden objects, which
significantly improves reconstruction accuracy and scene understanding.
However, jointly estimating normals and albedo expands the problem from
matrix-valued functions to tensor-valued functions, substantially
increasing complexity and computational difficulty. In this paper, we propose a
novel joint albedo-surface reconstruction method, which utilizes the Frobenius
norm of the shape operator to control the variation rate of the normal field.
It is the first attempt to apply regularization methods to the reconstruction
of surface normals for hidden objects. By improving the accuracy of the normal
field, it enhances detail representation and achieves high-precision
reconstruction of hidden object geometry. The proposed method demonstrates
robustness and effectiveness on both synthetic and experimental datasets. On
transient data captured within 15 seconds, our surface normal-regularized
reconstruction model produces more accurate surfaces than recently proposed
methods and is 30 times faster than the existing surface reconstruction
approach.
|
2503.17993 | Patrick Ebel | Jussi Jokinen, Patrick Ebel, Tuomo Kujala | Predicting Multitasking in Manual and Automated Driving with Optimal
Supervisory Control | null | null | null | null | cs.HC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Modern driving involves interactive technologies that can divert attention,
increasing the risk of accidents. This paper presents a computational cognitive
model that simulates human multitasking while driving. Based on optimal
supervisory control theory, the model predicts how multitasking adapts to
variations in driving demands, interactive tasks, and automation levels. Unlike
previous models, it accounts for context-dependent multitasking across
different degrees of driving automation. The model predicts longer in-car
glances on straight roads and shorter glances during curves. It also
anticipates increased glance durations with driver aids such as lane-centering
assistance and their interaction with environmental demands. Validated against
two empirical datasets, the model offers insights into driver multitasking amid
evolving in-car technologies and automation.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 08:56:53 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Jokinen",
"Jussi",
""
],
[
"Ebel",
"Patrick",
""
],
[
"Kujala",
"Tuomo",
""
]
] | TITLE: Predicting Multitasking in Manual and Automated Driving with Optimal
Supervisory Control
ABSTRACT: Modern driving involves interactive technologies that can divert attention,
increasing the risk of accidents. This paper presents a computational cognitive
model that simulates human multitasking while driving. Based on optimal
supervisory control theory, the model predicts how multitasking adapts to
variations in driving demands, interactive tasks, and automation levels. Unlike
previous models, it accounts for context-dependent multitasking across
different degrees of driving automation. The model predicts longer in-car
glances on straight roads and shorter glances during curves. It also
anticipates increased glance durations with driver aids such as lane-centering
assistance and their interaction with environmental demands. Validated against
two empirical datasets, the model offers insights into driver multitasking amid
evolving in-car technologies and automation.
|
2503.17998 | Junaed Younus Khan | Navid Bin Hasan, Md. Ashraful Islam, Junaed Younus Khan, Sanjida
Senjik, Anindya Iqbal | Automatic High-Level Test Case Generation using Large Language Models | Accepted at International Conference on Mining Software Repositories
(MSR) 2025 | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explored the challenges practitioners face in software testing and
proposed automated solutions to address these obstacles. We began with a survey
of local software companies and 26 practitioners, revealing that the primary
challenge is not writing test scripts but aligning testing efforts with
business requirements. Based on these insights, we constructed a use-case
$\rightarrow$ (high-level) test-cases dataset to train/fine-tune models for
generating high-level test cases. High-level test cases specify what aspects of
the software's functionality need to be tested, along with the expected
outcomes. We evaluated large language models, such as GPT-4o, Gemini, LLaMA 3.1
8B, and Mistral 7B, where fine-tuning (the latter two) yields improved
performance. A final (human evaluation) survey confirmed the effectiveness of
these generated test cases. Our proactive approach strengthens
requirement-testing alignment and facilitates early test case generation to
streamline development.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 09:14:41 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Hasan",
"Navid Bin",
""
],
[
"Islam",
"Md. Ashraful",
""
],
[
"Khan",
"Junaed Younus",
""
],
[
"Senjik",
"Sanjida",
""
],
[
"Iqbal",
"Anindya",
""
]
] | TITLE: Automatic High-Level Test Case Generation using Large Language Models
ABSTRACT: We explored the challenges practitioners face in software testing and
proposed automated solutions to address these obstacles. We began with a survey
of local software companies and 26 practitioners, revealing that the primary
challenge is not writing test scripts but aligning testing efforts with
business requirements. Based on these insights, we constructed a use-case
$\rightarrow$ (high-level) test-cases dataset to train/fine-tune models for
generating high-level test cases. High-level test cases specify what aspects of
the software's functionality need to be tested, along with the expected
outcomes. We evaluated large language models, such as GPT-4o, Gemini, LLaMA 3.1
8B, and Mistral 7B, where fine-tuning (the latter two) yields improved
performance. A final (human evaluation) survey confirmed the effectiveness of
these generated test cases. Our proactive approach strengthens
requirement-testing alignment and facilitates early test case generation to
streamline development.
|
2503.18001 | Kunal Mukherjee | Kunal Mukherjee, Zachary Harrison, Saeid Balaneshin | Z-REx: Human-Interpretable GNN Explanations for Real Estate
Recommendations | null | null | null | null | cs.IR cs.LG cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Transparency and interpretability are crucial for enhancing customer
confidence and user engagement, especially when dealing with black-box Machine
Learning (ML)-based recommendation systems. Modern recommendation systems
leverage Graph Neural Networks (GNNs) due to their ability to produce
high-quality recommendations in terms of both relevance and diversity.
Therefore, the explainability of GNN is especially important for Link
Prediction (LP) tasks since recommending relevant items can be viewed as
predicting links between users and items. GNN explainability has been a
well-studied field, existing methods primarily focus on node or graph-level
tasks, leaving a gap in LP explanation techniques.
This work introduces Z-REx, a GNN explanation framework designed explicitly
for heterogeneous link prediction tasks. Z-REx utilizes structural and
attribute perturbation to identify critical sub-structures and important
features while reducing the search space by leveraging domain-specific
knowledge. In our experimentation, we show the efficacy of Z-REx in generating
contextually relevant and human-interpretable explanations for ZiGNN, a
GNN-based recommendation engine, using a real-world real-estate dataset from
Zillow Group, Inc. We also compare Z-REx to State-of-The-Art (SOTA) GNN
explainers to show Z-REx's superiority in producing high-quality
human-interpretable explanations.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 02:42:25 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Mukherjee",
"Kunal",
""
],
[
"Harrison",
"Zachary",
""
],
[
"Balaneshin",
"Saeid",
""
]
] | TITLE: Z-REx: Human-Interpretable GNN Explanations for Real Estate
Recommendations
ABSTRACT: Transparency and interpretability are crucial for enhancing customer
confidence and user engagement, especially when dealing with black-box Machine
Learning (ML)-based recommendation systems. Modern recommendation systems
leverage Graph Neural Networks (GNNs) due to their ability to produce
high-quality recommendations in terms of both relevance and diversity.
Therefore, the explainability of GNN is especially important for Link
Prediction (LP) tasks since recommending relevant items can be viewed as
predicting links between users and items. GNN explainability has been a
well-studied field; however, existing methods primarily focus on node or graph-level
tasks, leaving a gap in LP explanation techniques.
This work introduces Z-REx, a GNN explanation framework designed explicitly
for heterogeneous link prediction tasks. Z-REx utilizes structural and
attribute perturbation to identify critical sub-structures and important
features while reducing the search space by leveraging domain-specific
knowledge. In our experimentation, we show the efficacy of Z-REx in generating
contextually relevant and human-interpretable explanations for ZiGNN, a
GNN-based recommendation engine, using a real-world real-estate dataset from
Zillow Group, Inc. We also compare Z-REx to State-of-The-Art (SOTA) GNN
explainers to show Z-REx's superiority in producing high-quality
human-interpretable explanations.
|
2503.18007 | Hongyu Yan | Hongyu Yan, Zijun Li, Kunming Luo, Li Lu, Ping Tan | SymmCompletion: High-Fidelity and High-Consistency Point Cloud
Completion with Symmetry Guidance | Accepted by AAAI 2025 (Oral presentation), Code:
https://github.com/HongyuYann/SymmCompletion | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Point cloud completion aims to recover a complete point shape from a partial
point cloud. Although existing methods can form satisfactory point clouds in
global completeness, they often lose the original geometry details and face the
problem of geometric inconsistency between existing point clouds and
reconstructed missing parts. To tackle this problem, we introduce
SymmCompletion, a highly effective completion method based on symmetry
guidance. Our method comprises two primary components: a Local Symmetry
Transformation Network (LSTNet) and a Symmetry-Guidance Transformer (SGFormer).
First, LSTNet efficiently estimates point-wise local symmetry transformation to
transform key geometries of partial inputs into missing regions, thereby
generating geometry-aligned partial-missing pairs and initial point clouds.
Second, SGFormer leverages the geometric features of partial-missing pairs as
the explicit symmetric guidance that can constrain the refinement process for
initial point clouds. As a result, SGFormer can exploit provided priors to form
high-fidelity and geometry-consistent final point clouds. Qualitative and
quantitative evaluations on several benchmark datasets demonstrate that our
method outperforms state-of-the-art completion networks.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 09:45:37 GMT"
}
] | 2025-03-25T00:00:00 | [
[
"Yan",
"Hongyu",
""
],
[
"Li",
"Zijun",
""
],
[
"Luo",
"Kunming",
""
],
[
"Lu",
"Li",
""
],
[
"Tan",
"Ping",
""
]
] | TITLE: SymmCompletion: High-Fidelity and High-Consistency Point Cloud
Completion with Symmetry Guidance
ABSTRACT: Point cloud completion aims to recover a complete point shape from a partial
point cloud. Although existing methods can form satisfactory point clouds in
global completeness, they often lose the original geometry details and face the
problem of geometric inconsistency between existing point clouds and
reconstructed missing parts. To tackle this problem, we introduce
SymmCompletion, a highly effective completion method based on symmetry
guidance. Our method comprises two primary components: a Local Symmetry
Transformation Network (LSTNet) and a Symmetry-Guidance Transformer (SGFormer).
First, LSTNet efficiently estimates point-wise local symmetry transformation to
transform key geometries of partial inputs into missing regions, thereby
generating geometry-aligned partial-missing pairs and initial point clouds.
Second, SGFormer leverages the geometric features of partial-missing pairs as
the explicit symmetric guidance that can constrain the refinement process for
initial point clouds. As a result, SGFormer can exploit provided priors to form
high-fidelity and geometry-consistent final point clouds. Qualitative and
quantitative evaluations on several benchmark datasets demonstrate that our
method outperforms state-of-the-art completion networks.
|