Dataset columns (one record per paper):

- id: string (9–16 chars)
- submitter: string (3–64 chars, nullable)
- authors: string (5–6.63k chars)
- title: string (7–245 chars)
- comments: string (1–482 chars, nullable)
- journal-ref: string (4–382 chars, nullable)
- doi: string (9–151 chars, nullable)
- report-no: string (984 distinct values)
- categories: string (5–108 chars)
- license: string (9 distinct values)
- abstract: string (83–3.41k chars)
- versions: list (1–20 items)
- update_date: timestamp[s] (2007-05-23 to 2025-04-11)
- authors_parsed: list (1–427 items)
- prompt: string (166–3.49k chars)
- label: string (2 classes)
- prob: float64 (0.5–0.98)
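Each record below lists these fields in order, separated by " | ", with the title and abstract repeated inside the prompt field and a binary label (new_dataset / no_new_dataset) plus a confidence score at the end. As a rough sketch of how a dump with this schema might be consumed programmatically, the snippet below loads it via the Hugging Face `datasets` library and filters on the label and probability columns; the repository path is a hypothetical placeholder, not the actual dataset name.

```python
# Minimal sketch, assuming the records below are published as a Hugging Face
# dataset with the schema listed above; "user/arxiv-new-dataset-labels" is a
# hypothetical placeholder path, not the real repository name.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

# Inspect one record: arXiv id, label, and classifier confidence.
example = ds[0]
print(example["id"], example["label"], example["prob"])

# Keep only papers labeled as introducing a new dataset with high confidence.
new_dataset_papers = ds.filter(
    lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9
)
print(f"{len(new_dataset_papers)} of {len(ds)} records labeled new_dataset at prob >= 0.9")
```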
2503.00040 | Dehao Zhang | Dehao Zhang and Shuai Wang and Yichen Xiao and Wenjie Wei and Yimeng
Shan and Malu Zhang and Yang Yang | Memory-Free and Parallel Computation for Quantized Spiking Neural
Networks | null | null | null | null | cs.NE cs.CV | http://creativecommons.org/licenses/by/4.0/ | Quantized Spiking Neural Networks (QSNNs) offer superior energy efficiency
and are well-suited for deployment on resource-limited edge devices. However,
limited bit-width weight and membrane potential result in a notable performance
decline. In this study, we first identify a new underlying cause for this
decline: the loss of historical information due to the quantized membrane
potential. To tackle this issue, we introduce a memory-free quantization method
that captures all historical information without directly storing membrane
potentials, resulting in better performance with lower memory requirements. To
further improve the computational efficiency, we propose a parallel training
and asynchronous inference framework that greatly increases training speed and
energy efficiency. We combine the proposed memory-free quantization and
parallel computation methods to develop a high-performance and efficient QSNN,
named MFP-QSNN. Extensive experiments show that our MFP-QSNN achieves
state-of-the-art performance on various static and neuromorphic image datasets,
requiring less memory and faster training speeds. The efficiency and efficacy
of the MFP-QSNN highlight its potential for energy-efficient neuromorphic
computing.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 10:34:25 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Dehao",
""
],
[
"Wang",
"Shuai",
""
],
[
"Xiao",
"Yichen",
""
],
[
"Wei",
"Wenjie",
""
],
[
"Shan",
"Yimeng",
""
],
[
"Zhang",
"Malu",
""
],
[
"Yang",
"Yang",
""
]
]
| TITLE: Memory-Free and Parallel Computation for Quantized Spiking Neural
Networks
ABSTRACT: Quantized Spiking Neural Networks (QSNNs) offer superior energy efficiency
and are well-suited for deployment on resource-limited edge devices. However,
limited bit-width weight and membrane potential result in a notable performance
decline. In this study, we first identify a new underlying cause for this
decline: the loss of historical information due to the quantized membrane
potential. To tackle this issue, we introduce a memory-free quantization method
that captures all historical information without directly storing membrane
potentials, resulting in better performance with lower memory requirements. To
further improve the computational efficiency, we propose a parallel training
and asynchronous inference framework that greatly increases training speed and
energy efficiency. We combine the proposed memory-free quantization and
parallel computation methods to develop a high-performance and efficient QSNN,
named MFP-QSNN. Extensive experiments show that our MFP-QSNN achieves
state-of-the-art performance on various static and neuromorphic image datasets,
requiring less memory and faster training speeds. The efficiency and efficacy
of the MFP-QSNN highlight its potential for energy-efficient neuromorphic
computing.
| no_new_dataset | 0.950041 |
2503.00042 | Clayton Bromley | Clayton Bromley, Alexander Moore, Amar Saini, Doug Poland and Carmen
Carrano | An Analysis of Segment Anything 2 | 19 pages, 30 figures | null | null | LLNL-JRNL-2002970 | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Video object segmentation (VOS) is a critical task in the development of
video perception and understanding. The Segment-Anything Model 2 (SAM 2),
released by Meta AI, is the current state-of-the-art architecture for
end-to-end VOS. SAM 2 performs very well on both clean video data and augmented
data, and completely intelligent video perception requires an understanding of
how this architecture is capable of achieving such quality results. To better
understand how each step within the SAM 2 architecture permits high-quality
video segmentation, we pass a variety of complex video transformations through
the architecture and measure the impact at each stage of the process. We
observe that each progressive stage enables the filtering of complex
transformation noise and the emphasis of the object of interest. Our
contributions include the creation of complex transformation video datasets, an
analysis of how each stage of the SAM 2 architecture interprets these
transformations, and visualizations of segmented objects through each stage. By
better understanding how each model structure impacts overall video
understanding, VOS development can work to improve real-world applicability and
performance tracking, localizing, and segmenting objects despite complex
cluttered scenes and obscurations.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 22:58:13 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bromley",
"Clayton",
""
],
[
"Moore",
"Alexander",
""
],
[
"Saini",
"Amar",
""
],
[
"Poland",
"Doug",
""
],
[
"Carrano",
"Carmen",
""
]
]
| TITLE: An Analysis of Segment Anything 2
ABSTRACT: Video object segmentation (VOS) is a critical task in the development of
video perception and understanding. The Segment-Anything Model 2 (SAM 2),
released by Meta AI, is the current state-of-the-art architecture for
end-to-end VOS. SAM 2 performs very well on both clean video data and augmented
data, and completely intelligent video perception requires an understanding of
how this architecture is capable of achieving such quality results. To better
understand how each step within the SAM 2 architecture permits high-quality
video segmentation, we pass a variety of complex video transformations through
the architecture and measure the impact at each stage of the process. We
observe that each progressive stage enables the filtering of complex
transformation noise and the emphasis of the object of interest. Our
contributions include the creation of complex transformation video datasets, an
analysis of how each stage of the SAM 2 architecture interprets these
transformations, and visualizations of segmented objects through each stage. By
better understanding how each model structure impacts overall video
understanding, VOS development can work to improve real-world applicability and
performance tracking, localizing, and segmenting objects despite complex
cluttered scenes and obscurations.
| no_new_dataset | 0.947137 |
2503.00044 | Shuaiang Rong | Shuaiang Rong, Lina He, Salih Furkan Atici, Ahmet Enis Cetin | Advanced YOLO-based Real-time Power Line Detection for Vegetation
Management | 13 pages. Revised version submitted to IEEE Transaction on Power
Delivery | Journal name: IEEE Transaction on Power Delivery; Paper submission
ID: TPWRD-00142-2025; Version: first revision | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Power line infrastructure is a key component of the power system, and it is
rapidly expanding to meet growing energy demands. Vegetation encroachment is a
significant threat to the safe operation of power lines, requiring reliable and
timely management to enhance the resilience and reliability of the power
network. Integrating smart grid technology, especially Unmanned Aerial Vehicles
(UAVs), provides substantial potential to revolutionize the management of
extensive power line networks with advanced imaging techniques. However,
processing the vast quantity of images captured by UAV patrols remains a
significant challenge. This paper introduces an intelligent real-time
monitoring framework for detecting power lines and adjacent vegetation. It is
developed based on the deep-learning Convolutional Neural Network (CNN), You
Only Look Once (YOLO), renowned for its high-speed object detection
capabilities. Unlike existing deep learning-based methods, this framework
enhances accuracy by integrating YOLOv8 with directional filters. They can
extract directional features and textures of power lines and their vicinity,
generating Oriented Bounding Boxes (OBB) for more precise localization.
Additionally, a post-processing algorithm is developed to create a vegetation
encroachment metric for power lines, allowing for a quantitative assessment of
the surrounding vegetation distribution. The effectiveness of the proposed
framework is demonstrated using a widely used power line dataset.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 01:21:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Rong",
"Shuaiang",
""
],
[
"He",
"Lina",
""
],
[
"Atici",
"Salih Furkan",
""
],
[
"Cetin",
"Ahmet Enis",
""
]
]
| TITLE: Advanced YOLO-based Real-time Power Line Detection for Vegetation
Management
ABSTRACT: Power line infrastructure is a key component of the power system, and it is
rapidly expanding to meet growing energy demands. Vegetation encroachment is a
significant threat to the safe operation of power lines, requiring reliable and
timely management to enhance the resilience and reliability of the power
network. Integrating smart grid technology, especially Unmanned Aerial Vehicles
(UAVs), provides substantial potential to revolutionize the management of
extensive power line networks with advanced imaging techniques. However,
processing the vast quantity of images captured by UAV patrols remains a
significant challenge. This paper introduces an intelligent real-time
monitoring framework for detecting power lines and adjacent vegetation. It is
developed based on the deep-learning Convolutional Neural Network (CNN), You
Only Look Once (YOLO), renowned for its high-speed object detection
capabilities. Unlike existing deep learning-based methods, this framework
enhances accuracy by integrating YOLOv8 with directional filters. They can
extract directional features and textures of power lines and their vicinity,
generating Oriented Bounding Boxes (OBB) for more precise localization.
Additionally, a post-processing algorithm is developed to create a vegetation
encroachment metric for power lines, allowing for a quantitative assessment of
the surrounding vegetation distribution. The effectiveness of the proposed
framework is demonstrated using a widely used power line dataset.
| no_new_dataset | 0.946151 |
2503.00045 | Xie Bin | Bin Xie, Yingfei Liu, Tiancai Wang, Jiale Cao, Xiangyu Zhang | Glad: A Streaming Scene Generator for Autonomous Driving | Accepted by ICLR2025 | null | null | null | cs.RO cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generation and simulation of diverse real-world scenes have significant
application value in the field of autonomous driving, especially for the corner
cases. Recently, researchers have explored employing neural radiance fields or
diffusion models to generate novel views or synthetic data under driving
scenes. However, these approaches suffer from unseen scenes or restricted video
length, thus lacking sufficient adaptability for data generation and
simulation. To address these issues, we propose a simple yet effective
framework, named Glad, to generate video data in a frame-by-frame style. To
ensure the temporal consistency of synthetic video, we introduce a latent
variable propagation module, which views the latent features of previous frame
as noise prior and injects it into the latent features of current frame. In
addition, we design a streaming data sampler to orderly sample the original
image in a video clip at continuous iterations. Given the reference frame, our
Glad can be viewed as a streaming simulator by generating the videos for
specific scenes. Extensive experiments are performed on the widely-used
nuScenes dataset. Experimental results demonstrate that our proposed Glad
achieves promising performance, serving as a strong baseline for online video
generation. We will release the source code and models publicly.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 04:17:59 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xie",
"Bin",
""
],
[
"Liu",
"Yingfei",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Cao",
"Jiale",
""
],
[
"Zhang",
"Xiangyu",
""
]
]
| TITLE: Glad: A Streaming Scene Generator for Autonomous Driving
ABSTRACT: The generation and simulation of diverse real-world scenes have significant
application value in the field of autonomous driving, especially for the corner
cases. Recently, researchers have explored employing neural radiance fields or
diffusion models to generate novel views or synthetic data under driving
scenes. However, these approaches suffer from unseen scenes or restricted video
length, thus lacking sufficient adaptability for data generation and
simulation. To address these issues, we propose a simple yet effective
framework, named Glad, to generate video data in a frame-by-frame style. To
ensure the temporal consistency of synthetic video, we introduce a latent
variable propagation module, which views the latent features of previous frame
as noise prior and injects it into the latent features of current frame. In
addition, we design a streaming data sampler to orderly sample the original
image in a video clip at continuous iterations. Given the reference frame, our
Glad can be viewed as a streaming simulator by generating the videos for
specific scenes. Extensive experiments are performed on the widely-used
nuScenes dataset. Experimental results demonstrate that our proposed Glad
achieves promising performance, serving as a strong baseline for online video
generation. We will release the source code and models publicly.
| no_new_dataset | 0.946843 |
2503.00049 | Jiamin Luo | Jiamin Luo, Jingjing Wang, Junxiao Ma, Yujie Jin, Shoushan Li, Guodong
Zhou | Omni-SILA: Towards Omni-scene Driven Visual Sentiment Identifying,
Locating and Attributing in Videos | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Prior studies on Visual Sentiment Understanding (VSU) primarily rely on the
explicit scene information (e.g., facial expression) to judge visual
sentiments, which largely ignore implicit scene information (e.g., human
action, objection relation and visual background), while such information is
critical for precisely discovering visual sentiments. Motivated by this, this
paper proposes a new Omni-scene driven visual Sentiment Identifying, Locating
and Attributing in videos (Omni-SILA) task, aiming to interactively and
precisely identify, locate and attribute visual sentiments through both
explicit and implicit scene information. Furthermore, this paper believes that
this Omni-SILA task faces two key challenges: modeling scene and highlighting
implicit scene beyond explicit. To this end, this paper proposes an
Implicit-enhanced Causal MoE (ICM) approach for addressing the Omni-SILA task.
Specifically, a Scene-Balanced MoE (SBM) and an Implicit-Enhanced Causal (IEC)
blocks are tailored to model scene information and highlight the implicit scene
information beyond explicit, respectively. Extensive experimental results on
our constructed explicit and implicit Omni-SILA datasets demonstrate the great
advantage of the proposed ICM approach over advanced Video-LLMs.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 12:05:07 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Luo",
"Jiamin",
""
],
[
"Wang",
"Jingjing",
""
],
[
"Ma",
"Junxiao",
""
],
[
"Jin",
"Yujie",
""
],
[
"Li",
"Shoushan",
""
],
[
"Zhou",
"Guodong",
""
]
]
| TITLE: Omni-SILA: Towards Omni-scene Driven Visual Sentiment Identifying,
Locating and Attributing in Videos
ABSTRACT: Prior studies on Visual Sentiment Understanding (VSU) primarily rely on the
explicit scene information (e.g., facial expression) to judge visual
sentiments, which largely ignore implicit scene information (e.g., human
action, objection relation and visual background), while such information is
critical for precisely discovering visual sentiments. Motivated by this, this
paper proposes a new Omni-scene driven visual Sentiment Identifying, Locating
and Attributing in videos (Omni-SILA) task, aiming to interactively and
precisely identify, locate and attribute visual sentiments through both
explicit and implicit scene information. Furthermore, this paper believes that
this Omni-SILA task faces two key challenges: modeling scene and highlighting
implicit scene beyond explicit. To this end, this paper proposes an
Implicit-enhanced Causal MoE (ICM) approach for addressing the Omni-SILA task.
Specifically, a Scene-Balanced MoE (SBM) and an Implicit-Enhanced Causal (IEC)
blocks are tailored to model scene information and highlight the implicit scene
information beyond explicit, respectively. Extensive experimental results on
our constructed explicit and implicit Omni-SILA datasets demonstrate the great
advantage of the proposed ICM approach over advanced Video-LLMs.
| no_new_dataset | 0.949435 |
2503.00052 | Yan Su | Yan Su, Qiulin Wu, Weizhen Li, Chengchang Pan, Honggang Qi | RURA-Net: A general disease diagnosis method based on Zero-Shot Learning | 10 pages, 3 figures, 6 tables, submitted to The 28th International
Conference on Medical Image Computing and Computer Assisted Intervention
(MICCAI 2025) | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The training of deep learning models relies on a large amount of labeled
data. However, the high cost of medical labeling seriously hinders the
development of deep learning in the medical field. Our study proposes a general
disease diagnosis approach based on Zero-Shot Learning. The Siamese neural
network is used to find similar diseases for the target diseases, and the U-Net
segmentation model is used to accurately segment the key lesions of the
disease. Finally, based on the ResNet-Agglomerative clustering algorithm, a
clustering model is trained on a large number of sample data of similar
diseases to obtain an approximate diagnosis of the target disease. Zero-Shot
Learning of the target disease is then successfully achieved. To evaluate the
validity of the model, we validated our method on a dataset of ophthalmic
diseases in CFP modality. The external dataset was used to test its
performance, and the accuracy=0.8395, precision=0.8094, recall=0.8463, F1
Score=0.8274, AUC=0.9226, which exceeded the indexes of most Few-Shot Learning
and One-Shot Learning models. It proves that our method has great potential and
reference value in the medical field, where annotation data is usually scarce
and expensive to obtain.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 16:41:32 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Su",
"Yan",
""
],
[
"Wu",
"Qiulin",
""
],
[
"Li",
"Weizhen",
""
],
[
"Pan",
"Chengchang",
""
],
[
"Qi",
"Honggang",
""
]
]
| TITLE: RURA-Net: A general disease diagnosis method based on Zero-Shot Learning
ABSTRACT: The training of deep learning models relies on a large amount of labeled
data. However, the high cost of medical labeling seriously hinders the
development of deep learning in the medical field. Our study proposes a general
disease diagnosis approach based on Zero-Shot Learning. The Siamese neural
network is used to find similar diseases for the target diseases, and the U-Net
segmentation model is used to accurately segment the key lesions of the
disease. Finally, based on the ResNet-Agglomerative clustering algorithm, a
clustering model is trained on a large number of sample data of similar
diseases to obtain an approximate diagnosis of the target disease. Zero-Shot
Learning of the target disease is then successfully achieved. To evaluate the
validity of the model, we validated our method on a dataset of ophthalmic
diseases in CFP modality. The external dataset was used to test its
performance, and the accuracy=0.8395, precision=0.8094, recall=0.8463, F1
Score=0.8274, AUC=0.9226, which exceeded the indexes of most Few-Shot Learning
and One-Shot Learning models. It proves that our method has great potential and
reference value in the medical field, where annotation data is usually scarce
and expensive to obtain.
| no_new_dataset | 0.94256 |
2503.00054 | Sarmistha Das | Sarmistha Das, Basha Mujavarsheik, R E Zera Lyngkhoi, Sriparna Saha
and Alka Maurya | Deciphering the complaint aspects: Towards an aspect-based complaint
identification model with video complaint dataset in finance | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In today's competitive marketing landscape, effective complaint management is
crucial for customer service and business success. Video complaints,
integrating text and image content, offer invaluable insights by addressing
customer grievances and delineating product benefits and drawbacks. However,
comprehending nuanced complaint aspects within vast daily multimodal financial
data remains a formidable challenge. Addressing this gap, we have curated a
proprietary multimodal video complaint dataset comprising 433 publicly
accessible instances. Each instance is meticulously annotated at the utterance
level, encompassing five distinct categories of financial aspects and their
associated complaint labels. To support this endeavour, we introduce Solution
3.0, a model designed for multimodal aspect-based complaint identification
task. Solution 3.0 is tailored to perform three key tasks: 1) handling
multimodal features ( audio and video), 2) facilitating multilabel aspect
classification, and 3) conducting multitasking for aspect classifications and
complaint identification parallelly. Solution 3.0 utilizes a CLIP-based dual
frozen encoder with an integrated image segment encoder for global feature
fusion, enhanced by contextual attention (ISEC) to improve accuracy and
efficiency. Our proposed framework surpasses current multimodal baselines,
exhibiting superior performance across nearly all metrics by opening new ways
to strengthen appropriate customer care initiatives and effectively assisting
individuals in resolving their problems.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 18:56:07 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Das",
"Sarmistha",
""
],
[
"Mujavarsheik",
"Basha",
""
],
[
"Lyngkhoi",
"R E Zera",
""
],
[
"Saha",
"Sriparna",
""
],
[
"Maurya",
"Alka",
""
]
]
| TITLE: Deciphering the complaint aspects: Towards an aspect-based complaint
identification model with video complaint dataset in finance
ABSTRACT: In today's competitive marketing landscape, effective complaint management is
crucial for customer service and business success. Video complaints,
integrating text and image content, offer invaluable insights by addressing
customer grievances and delineating product benefits and drawbacks. However,
comprehending nuanced complaint aspects within vast daily multimodal financial
data remains a formidable challenge. Addressing this gap, we have curated a
proprietary multimodal video complaint dataset comprising 433 publicly
accessible instances. Each instance is meticulously annotated at the utterance
level, encompassing five distinct categories of financial aspects and their
associated complaint labels. To support this endeavour, we introduce Solution
3.0, a model designed for multimodal aspect-based complaint identification
task. Solution 3.0 is tailored to perform three key tasks: 1) handling
multimodal features ( audio and video), 2) facilitating multilabel aspect
classification, and 3) conducting multitasking for aspect classifications and
complaint identification parallelly. Solution 3.0 utilizes a CLIP-based dual
frozen encoder with an integrated image segment encoder for global feature
fusion, enhanced by contextual attention (ISEC) to improve accuracy and
efficiency. Our proposed framework surpasses current multimodal baselines,
exhibiting superior performance across nearly all metrics by opening new ways
to strengthen appropriate customer care initiatives and effectively assisting
individuals in resolving their problems.
| new_dataset | 0.958343 |
2503.00058 | Samuel Ozechi | Samuel Ozechi | African Gender Classification Using Clothing Identification Via Deep
Learning | 3 Pages, 10 Figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human attribute identification and classification are crucial in computer
vision, driving the development of innovative recognition systems. Traditional
gender classification methods primarily rely on facial recognition, which,
while effective, struggles under non-ideal conditions such as blurriness, side
views, or partial occlusions. This study explores an alternative approach by
leveraging clothing identification, specifically focusing on African
traditional attire, which carries culturally significant and gender-specific
features.
We use the AFRIFASHION1600 dataset, a curated collection of 1,600 images of
African traditional clothing labeled into two gender classes: male and female.
A deep learning model, based on a modified VGG16 architecture and trained using
transfer learning, was developed for classification. Data augmentation was
applied to address the challenges posed by the relatively small dataset and to
mitigate overfitting. The model achieved an accuracy of 87% on the test set,
demonstrating strong predictive capability despite dataset imbalances favoring
female samples.
These findings highlight the potential of clothing-based identification as a
complementary technique to facial recognition for gender classification in
African contexts. Future research should focus on expanding and balancing
datasets to enhance classification robustness and improve the applicability of
clothing-based gender recognition systems.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 20:59:59 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ozechi",
"Samuel",
""
]
]
| TITLE: African Gender Classification Using Clothing Identification Via Deep
Learning
ABSTRACT: Human attribute identification and classification are crucial in computer
vision, driving the development of innovative recognition systems. Traditional
gender classification methods primarily rely on facial recognition, which,
while effective, struggles under non-ideal conditions such as blurriness, side
views, or partial occlusions. This study explores an alternative approach by
leveraging clothing identification, specifically focusing on African
traditional attire, which carries culturally significant and gender-specific
features.
We use the AFRIFASHION1600 dataset, a curated collection of 1,600 images of
African traditional clothing labeled into two gender classes: male and female.
A deep learning model, based on a modified VGG16 architecture and trained using
transfer learning, was developed for classification. Data augmentation was
applied to address the challenges posed by the relatively small dataset and to
mitigate overfitting. The model achieved an accuracy of 87% on the test set,
demonstrating strong predictive capability despite dataset imbalances favoring
female samples.
These findings highlight the potential of clothing-based identification as a
complementary technique to facial recognition for gender classification in
African contexts. Future research should focus on expanding and balancing
datasets to enhance classification robustness and improve the applicability of
clothing-based gender recognition systems.
| new_dataset | 0.960805 |
2503.00072 | Alireza Gharahighehi | Alireza Gharahighehi, Achilleas Ghinis, Michela Venturini, Frederik
Cornillie, Celine Vens | Enhancing Collaborative Filtering-Based Course Recommendations by
Exploiting Time-to-Event Information with Survival Analysis | 19 pages, 1 figure | null | null | null | cs.CY cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Massive Open Online Courses (MOOCs) are emerging as a popular alternative to
traditional education, offering learners the flexibility to access a wide range
of courses from various disciplines, anytime and anywhere. Despite this
accessibility, a significant number of enrollments in MOOCs result in dropouts.
To enhance learner engagement, it is crucial to recommend courses that align
with their preferences and needs. Course Recommender Systems (RSs) can play an
important role in this by modeling learners' preferences based on their
previous interactions within the MOOC platform. Time-to-dropout and
time-to-completion in MOOCs, like other time-to-event prediction tasks, can be
effectively modeled using survival analysis (SA) methods. In this study, we
apply SA methods to improve collaborative filtering recommendation performance
by considering time-to-event in the context of MOOCs. Our proposed approach
demonstrates superior performance compared to collaborative filtering methods
trained based on learners' interactions with MOOCs, as evidenced by two
performance measures on three publicly available datasets. The findings
underscore the potential of integrating SA methods with RSs to enhance
personalization in MOOCs.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 17:29:53 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gharahighehi",
"Alireza",
""
],
[
"Ghinis",
"Achilleas",
""
],
[
"Venturini",
"Michela",
""
],
[
"Cornillie",
"Frederik",
""
],
[
"Vens",
"Celine",
""
]
]
| TITLE: Enhancing Collaborative Filtering-Based Course Recommendations by
Exploiting Time-to-Event Information with Survival Analysis
ABSTRACT: Massive Open Online Courses (MOOCs) are emerging as a popular alternative to
traditional education, offering learners the flexibility to access a wide range
of courses from various disciplines, anytime and anywhere. Despite this
accessibility, a significant number of enrollments in MOOCs result in dropouts.
To enhance learner engagement, it is crucial to recommend courses that align
with their preferences and needs. Course Recommender Systems (RSs) can play an
important role in this by modeling learners' preferences based on their
previous interactions within the MOOC platform. Time-to-dropout and
time-to-completion in MOOCs, like other time-to-event prediction tasks, can be
effectively modeled using survival analysis (SA) methods. In this study, we
apply SA methods to improve collaborative filtering recommendation performance
by considering time-to-event in the context of MOOCs. Our proposed approach
demonstrates superior performance compared to collaborative filtering methods
trained based on learners' interactions with MOOCs, as evidenced by two
performance measures on three publicly available datasets. The findings
underscore the potential of integrating SA methods with RSs to enhance
personalization in MOOCs.
| no_new_dataset | 0.953057 |
2503.00128 | Magnus Sesodia | Magnus Sesodia, Alina Petrova, John Armour, Thomas Lukasiewicz,
Oana-Maria Camburu, Puneet K. Dokania, Philip Torr, Christian Schroeder de
Witt | AnnoCaseLaw: A Richly-Annotated Dataset For Benchmarking Explainable
Legal Judgment Prediction | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Legal systems worldwide continue to struggle with overwhelming caseloads,
limited judicial resources, and growing complexities in legal proceedings.
Artificial intelligence (AI) offers a promising solution, with Legal Judgment
Prediction (LJP) -- the practice of predicting a court's decision from the case
facts -- emerging as a key research area. However, existing datasets often
formulate the task of LJP unrealistically, not reflecting its true difficulty.
They also lack high-quality annotation essential for legal reasoning and
explainability. To address these shortcomings, we introduce AnnoCaseLaw, a
first-of-its-kind dataset of 471 meticulously annotated U.S. Appeals Court
negligence cases. Each case is enriched with comprehensive, expert-labeled
annotations that highlight key components of judicial decision making, along
with relevant legal concepts. Our dataset lays the groundwork for more
human-aligned, explainable LJP models. We define three legally relevant tasks:
(1) judgment prediction; (2) concept identification; and (3) automated case
annotation, and establish a performance baseline using industry-leading large
language models (LLMs). Our results demonstrate that LJP remains a formidable
task, with application of legal precedent proving particularly difficult. Code
and data are available at https://github.com/anonymouspolar1/annocaselaw.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 19:14:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Sesodia",
"Magnus",
""
],
[
"Petrova",
"Alina",
""
],
[
"Armour",
"John",
""
],
[
"Lukasiewicz",
"Thomas",
""
],
[
"Camburu",
"Oana-Maria",
""
],
[
"Dokania",
"Puneet K.",
""
],
[
"Torr",
"Philip",
""
],
[
"de Witt",
"Christian Schroeder",
""
]
]
| TITLE: AnnoCaseLaw: A Richly-Annotated Dataset For Benchmarking Explainable
Legal Judgment Prediction
ABSTRACT: Legal systems worldwide continue to struggle with overwhelming caseloads,
limited judicial resources, and growing complexities in legal proceedings.
Artificial intelligence (AI) offers a promising solution, with Legal Judgment
Prediction (LJP) -- the practice of predicting a court's decision from the case
facts -- emerging as a key research area. However, existing datasets often
formulate the task of LJP unrealistically, not reflecting its true difficulty.
They also lack high-quality annotation essential for legal reasoning and
explainability. To address these shortcomings, we introduce AnnoCaseLaw, a
first-of-its-kind dataset of 471 meticulously annotated U.S. Appeals Court
negligence cases. Each case is enriched with comprehensive, expert-labeled
annotations that highlight key components of judicial decision making, along
with relevant legal concepts. Our dataset lays the groundwork for more
human-aligned, explainable LJP models. We define three legally relevant tasks:
(1) judgment prediction; (2) concept identification; and (3) automated case
annotation, and establish a performance baseline using industry-leading large
language models (LLMs). Our results demonstrate that LJP remains a formidable
task, with application of legal precedent proving particularly difficult. Code
and data are available at https://github.com/anonymouspolar1/annocaselaw.
| new_dataset | 0.956877 |
2503.00137 | Grigor Nalbandyan | Grigor Nalbandyan, Rima Shahbazyan, Evelina Bakhturina | SCORE: Systematic COnsistency and Robustness Evaluation for Large
Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Typical evaluations of Large Language Models (LLMs) report a single metric
per dataset, often representing the model's best-case performance under
carefully selected settings. Unfortunately, this approach overlooks model
robustness and reliability in real-world applications. For instance, simple
paraphrasing of prompts on the MMLU-Pro dataset causes accuracy fluctuations of
up to 10\%, while reordering answer choices in the AGIEval dataset results in
accuracy differences of up to 6.1\%. While some studies discuss issues with LLM
robustness, there is no unified or centralized framework for evaluating the
robustness of language models. To address this gap and consolidate existing
research on model robustness, we present SCORE ($\mathbf{S}$ystematic
$\mathbf{CO}$nsistency and $\mathbf{R}$obustness $\mathbf{E}$valuation), a
comprehensive framework for non-adversarial evaluation of LLMs. The SCORE
framework evaluates models by repeatedly testing them on the same benchmarks in
various setups to give a realistic estimate of their accuracy and consistency.
We release the code publicly and start an LLM robustness leaderboard to
facilitate further development and research.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 19:27:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Nalbandyan",
"Grigor",
""
],
[
"Shahbazyan",
"Rima",
""
],
[
"Bakhturina",
"Evelina",
""
]
]
| TITLE: SCORE: Systematic COnsistency and Robustness Evaluation for Large
Language Models
ABSTRACT: Typical evaluations of Large Language Models (LLMs) report a single metric
per dataset, often representing the model's best-case performance under
carefully selected settings. Unfortunately, this approach overlooks model
robustness and reliability in real-world applications. For instance, simple
paraphrasing of prompts on the MMLU-Pro dataset causes accuracy fluctuations of
up to 10\%, while reordering answer choices in the AGIEval dataset results in
accuracy differences of up to 6.1\%. While some studies discuss issues with LLM
robustness, there is no unified or centralized framework for evaluating the
robustness of language models. To address this gap and consolidate existing
research on model robustness, we present SCORE ($\mathbf{S}$ystematic
$\mathbf{CO}$nsistency and $\mathbf{R}$obustness $\mathbf{E}$valuation), a
comprehensive framework for non-adversarial evaluation of LLMs. The SCORE
framework evaluates models by repeatedly testing them on the same benchmarks in
various setups to give a realistic estimate of their accuracy and consistency.
We release the code publicly and start an LLM robustness leaderboard to
facilitate further development and research.
| no_new_dataset | 0.931338 |
2503.00143 | Qiutai Pan | Tom Pan, Evan Dramko, Mitchell D. Miller, George N. Phillips Jr.,
Anastasios Kyrillidis | RecCrysFormer: Refined Protein Structural Prediction from 3D Patterson
Maps via Recycling Training Runs | 16 pages, 9 figures. To be published in Proceedings of CPAL 2025 | null | null | null | q-bio.QM cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | Determining protein structures at an atomic level remains a significant
challenge in structural biology. We introduce $\texttt{RecCrysFormer}$, a
hybrid model that exploits the strengths of transformers with the aim of
integrating experimental and ML approaches to protein structure determination
from crystallographic data. $\texttt{RecCrysFormer}$ leverages Patterson maps
and incorporates known standardized partial structures of amino acid residues
to directly predict electron density maps, which are essential for constructing
detailed atomic models through crystallographic refinement processes.
$\texttt{RecCrysFormer}$ benefits from a ``recycling'' training regimen that
iteratively incorporates results from crystallographic refinements and previous
training runs as additional inputs in the form of template maps. Using a
preliminary dataset of synthetic peptide fragments based on Protein Data Bank,
$\texttt{RecCrysFormer}$ achieves good accuracy in structural predictions and
shows robustness against variations in crystal parameters, such as unit cell
dimensions and angles.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 19:40:09 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pan",
"Tom",
""
],
[
"Dramko",
"Evan",
""
],
[
"Miller",
"Mitchell D.",
""
],
[
"Phillips",
"George N.",
"Jr."
],
[
"Kyrillidis",
"Anastasios",
""
]
]
| TITLE: RecCrysFormer: Refined Protein Structural Prediction from 3D Patterson
Maps via Recycling Training Runs
ABSTRACT: Determining protein structures at an atomic level remains a significant
challenge in structural biology. We introduce $\texttt{RecCrysFormer}$, a
hybrid model that exploits the strengths of transformers with the aim of
integrating experimental and ML approaches to protein structure determination
from crystallographic data. $\texttt{RecCrysFormer}$ leverages Patterson maps
and incorporates known standardized partial structures of amino acid residues
to directly predict electron density maps, which are essential for constructing
detailed atomic models through crystallographic refinement processes.
$\texttt{RecCrysFormer}$ benefits from a ``recycling'' training regimen that
iteratively incorporates results from crystallographic refinements and previous
training runs as additional inputs in the form of template maps. Using a
preliminary dataset of synthetic peptide fragments based on Protein Data Bank,
$\texttt{RecCrysFormer}$ achieves good accuracy in structural predictions and
shows robustness against variations in crystal parameters, such as unit cell
dimensions and angles.
| no_new_dataset | 0.949389 |
2503.00151 | Fakhraddin Alwajih | Fakhraddin Alwajih, Abdellah El Mekki, Samar Mohamed Magdy, Abdelrahim
A. Elmadany, Omer Nacar, El Moatez Billah Nagoudi, Reem Abdel-Salam, Hanin
Atwany, Youssef Nafea, Abdulfattah Mohammed Yahya, Rahaf Alhamouri, Hamzah A.
Alsayadi, Hiba Zayed, Sara Shatnawi, Serry Sibaee, Yasir Ech-Chammakhy, Walid
Al-Dhabyani, Marwa Mohamed Ali, Imen Jarraya, Ahmed Oumar El-Shangiti, Aisha
Alraeesi, Mohammed Anwar Al-Ghrawi, Abdulrahman S. Al-Batati, Elgizouli
Mohamed, Noha Taha Elgindi, Muhammed Saeed, Houdaifa Atou, Issam Ait Yahia,
Abdelhak Bouayad, Mohammed Machrouh, Amal Makouar, Dania Alkawi, Mukhtar
Mohamed, Safaa Taher Abdelfadil, Amine Ziad Ounnoughene, Rouabhia Anfel, Rwaa
Assi, Ahmed Sorkatti, Mohamedou Cheikh Tourad, Anis Koubaa, Ismail Berrada,
Mustafa Jarrar, Shady Shehata and Muhammad Abdul-Mageed | Palm: A Culturally Inclusive and Linguistically Diverse Dataset for
Arabic LLMs | More information about our dataset is available at our project page:
https://github.com/UBC-NLP/palm | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As large language models (LLMs) become increasingly integrated into daily
life, ensuring their cultural sensitivity and inclusivity is paramount. We
introduce our dataset, a year-long community-driven project covering all 22
Arab countries. The dataset includes instructions (input, response pairs) in
both Modern Standard Arabic (MSA) and dialectal Arabic (DA), spanning 20
diverse topics. Built by a team of 44 researchers across the Arab world, all of
whom are authors of this paper, our dataset offers a broad, inclusive
perspective. We use our dataset to evaluate the cultural and dialectal
capabilities of several frontier LLMs, revealing notable limitations. For
instance, while closed-source LLMs generally exhibit strong performance, they
are not without flaws, and smaller open-source models face greater challenges.
Moreover, certain countries (e.g., Egypt, the UAE) appear better represented
than others (e.g., Iraq, Mauritania, Yemen). Our annotation guidelines, code,
and data for reproducibility are publicly available.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 19:59:13 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Alwajih",
"Fakhraddin",
""
],
[
"Mekki",
"Abdellah El",
""
],
[
"Magdy",
"Samar Mohamed",
""
],
[
"Elmadany",
"Abdelrahim A.",
""
],
[
"Nacar",
"Omer",
""
],
[
"Nagoudi",
"El Moatez Billah",
""
],
[
"Abdel-Salam",
"Reem",
""
],
[
"Atwany",
"Hanin",
""
],
[
"Nafea",
"Youssef",
""
],
[
"Yahya",
"Abdulfattah Mohammed",
""
],
[
"Alhamouri",
"Rahaf",
""
],
[
"Alsayadi",
"Hamzah A.",
""
],
[
"Zayed",
"Hiba",
""
],
[
"Shatnawi",
"Sara",
""
],
[
"Sibaee",
"Serry",
""
],
[
"Ech-Chammakhy",
"Yasir",
""
],
[
"Al-Dhabyani",
"Walid",
""
],
[
"Ali",
"Marwa Mohamed",
""
],
[
"Jarraya",
"Imen",
""
],
[
"El-Shangiti",
"Ahmed Oumar",
""
],
[
"Alraeesi",
"Aisha",
""
],
[
"Al-Ghrawi",
"Mohammed Anwar",
""
],
[
"Al-Batati",
"Abdulrahman S.",
""
],
[
"Mohamed",
"Elgizouli",
""
],
[
"Elgindi",
"Noha Taha",
""
],
[
"Saeed",
"Muhammed",
""
],
[
"Atou",
"Houdaifa",
""
],
[
"Yahia",
"Issam Ait",
""
],
[
"Bouayad",
"Abdelhak",
""
],
[
"Machrouh",
"Mohammed",
""
],
[
"Makouar",
"Amal",
""
],
[
"Alkawi",
"Dania",
""
],
[
"Mohamed",
"Mukhtar",
""
],
[
"Abdelfadil",
"Safaa Taher",
""
],
[
"Ounnoughene",
"Amine Ziad",
""
],
[
"Anfel",
"Rouabhia",
""
],
[
"Assi",
"Rwaa",
""
],
[
"Sorkatti",
"Ahmed",
""
],
[
"Tourad",
"Mohamedou Cheikh",
""
],
[
"Koubaa",
"Anis",
""
],
[
"Berrada",
"Ismail",
""
],
[
"Jarrar",
"Mustafa",
""
],
[
"Shehata",
"Shady",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
]
| TITLE: Palm: A Culturally Inclusive and Linguistically Diverse Dataset for
Arabic LLMs
ABSTRACT: As large language models (LLMs) become increasingly integrated into daily
life, ensuring their cultural sensitivity and inclusivity is paramount. We
introduce our dataset, a year-long community-driven project covering all 22
Arab countries. The dataset includes instructions (input, response pairs) in
both Modern Standard Arabic (MSA) and dialectal Arabic (DA), spanning 20
diverse topics. Built by a team of 44 researchers across the Arab world, all of
whom are authors of this paper, our dataset offers a broad, inclusive
perspective. We use our dataset to evaluate the cultural and dialectal
capabilities of several frontier LLMs, revealing notable limitations. For
instance, while closed-source LLMs generally exhibit strong performance, they
are not without flaws, and smaller open-source models face greater challenges.
Moreover, certain countries (e.g., Egypt, the UAE) appear better represented
than others (e.g., Iraq, Mauritania, Yemen). Our annotation guidelines, code,
and data for reproducibility are publicly available.
| new_dataset | 0.958226 |
2503.00154 | Engin Zeydan | Engin Zeydan, Cristian J. Vaca-Rubio, Luis Blanco, Roberto Pereira,
Marius Caus, Kapal Dev | Fed-KAN: Federated Learning with Kolmogorov-Arnold Networks for Traffic
Prediction | This work has been submitted to the IEEE for possible publication | null | null | null | cs.NI cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-Terrestrial Networks (NTNs) are becoming a critical component of modern
communication infrastructures, especially with the advent of Low Earth Orbit
(LEO) satellite systems. Traditional centralized learning approaches face major
challenges in such networks due to high latency, intermittent connectivity and
limited bandwidth. Federated Learning (FL) is a promising alternative as it
enables decentralized training while maintaining data privacy. However,
existing FL models, such as Federated Learning with Multi-Layer Perceptrons
(Fed-MLP), can struggle with high computational complexity and poor
adaptability to dynamic NTN environments. This paper provides a detailed
analysis for Federated Learning with Kolmogorov-Arnold Networks (Fed-KAN), its
implementation and performance improvements over traditional FL models in NTN
environments for traffic forecasting. The proposed Fed-KAN is a novel approach
that utilises the functional approximation capabilities of KANs in a FL
framework. We evaluate Fed-KAN compared to Fed-MLP on a traffic dataset of real
satellite operator and show a significant reduction in training and test loss.
Our results show that Fed-KAN can achieve a 77.39% reduction in average test
loss compared to Fed-MLP, highlighting its improved performance and better
generalization ability. At the end of the paper, we also discuss some potential
applications of Fed-KAN within O-RAN and Fed-KAN usage for split
functionalities in NTN architecture.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 20:04:53 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zeydan",
"Engin",
""
],
[
"Vaca-Rubio",
"Cristian J.",
""
],
[
"Blanco",
"Luis",
""
],
[
"Pereira",
"Roberto",
""
],
[
"Caus",
"Marius",
""
],
[
"Dev",
"Kapal",
""
]
]
| TITLE: Fed-KAN: Federated Learning with Kolmogorov-Arnold Networks for Traffic
Prediction
ABSTRACT: Non-Terrestrial Networks (NTNs) are becoming a critical component of modern
communication infrastructures, especially with the advent of Low Earth Orbit
(LEO) satellite systems. Traditional centralized learning approaches face major
challenges in such networks due to high latency, intermittent connectivity and
limited bandwidth. Federated Learning (FL) is a promising alternative as it
enables decentralized training while maintaining data privacy. However,
existing FL models, such as Federated Learning with Multi-Layer Perceptrons
(Fed-MLP), can struggle with high computational complexity and poor
adaptability to dynamic NTN environments. This paper provides a detailed
analysis for Federated Learning with Kolmogorov-Arnold Networks (Fed-KAN), its
implementation and performance improvements over traditional FL models in NTN
environments for traffic forecasting. The proposed Fed-KAN is a novel approach
that utilises the functional approximation capabilities of KANs in a FL
framework. We evaluate Fed-KAN compared to Fed-MLP on a traffic dataset of real
satellite operator and show a significant reduction in training and test loss.
Our results show that Fed-KAN can achieve a 77.39% reduction in average test
loss compared to Fed-MLP, highlighting its improved performance and better
generalization ability. At the end of the paper, we also discuss some potential
applications of Fed-KAN within O-RAN and Fed-KAN usage for split
functionalities in NTN architecture.
| no_new_dataset | 0.950365 |
2503.00162 | Kangda Wei | Kangda Wei, Zhengyu Zhou, Bingqing Wang, Jun Araki, Lukas Lange,
Ruihong Huang, Zhe Feng | PreMind: Multi-Agent Video Understanding for Advanced Indexing of
Presentation-style Videos | null | null | null | null | cs.CV cs.AI cs.CL cs.MA | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years, online lecture videos have become an increasingly popular
resource for acquiring new knowledge. Systems capable of effectively
understanding/indexing lecture videos are thus highly desirable, enabling
downstream tasks like question answering to help users efficiently locate
specific information within videos. This work proposes PreMind, a novel
multi-agent multimodal framework that leverages various large models for
advanced understanding/indexing of presentation-style videos. PreMind first
segments videos into slide-presentation segments using a Vision-Language Model
(VLM) to enhance modern shot-detection techniques. Each segment is then
analyzed to generate multimodal indexes through three key steps: (1) extracting
slide visual content, (2) transcribing speech narratives, and (3) consolidating
these visual and speech contents into an integrated understanding. Three
innovative mechanisms are also proposed to improve performance: leveraging
prior lecture knowledge to refine visual understanding, detecting/correcting
speech transcription errors using a VLM, and utilizing a critic agent for
dynamic iterative self-reflection in vision analysis. Compared to traditional
video indexing methods, PreMind captures rich, reliable multimodal information,
allowing users to search for details like abbreviations shown only on slides.
Systematic evaluations on the public LPM dataset and an internal enterprise
dataset are conducted to validate PreMind's effectiveness, supported by
detailed analyses.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 20:17:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wei",
"Kangda",
""
],
[
"Zhou",
"Zhengyu",
""
],
[
"Wang",
"Bingqing",
""
],
[
"Araki",
"Jun",
""
],
[
"Lange",
"Lukas",
""
],
[
"Huang",
"Ruihong",
""
],
[
"Feng",
"Zhe",
""
]
]
| TITLE: PreMind: Multi-Agent Video Understanding for Advanced Indexing of
Presentation-style Videos
ABSTRACT: In recent years, online lecture videos have become an increasingly popular
resource for acquiring new knowledge. Systems capable of effectively
understanding/indexing lecture videos are thus highly desirable, enabling
downstream tasks like question answering to help users efficiently locate
specific information within videos. This work proposes PreMind, a novel
multi-agent multimodal framework that leverages various large models for
advanced understanding/indexing of presentation-style videos. PreMind first
segments videos into slide-presentation segments using a Vision-Language Model
(VLM) to enhance modern shot-detection techniques. Each segment is then
analyzed to generate multimodal indexes through three key steps: (1) extracting
slide visual content, (2) transcribing speech narratives, and (3) consolidating
these visual and speech contents into an integrated understanding. Three
innovative mechanisms are also proposed to improve performance: leveraging
prior lecture knowledge to refine visual understanding, detecting/correcting
speech transcription errors using a VLM, and utilizing a critic agent for
dynamic iterative self-reflection in vision analysis. Compared to traditional
video indexing methods, PreMind captures rich, reliable multimodal information,
allowing users to search for details like abbreviations shown only on slides.
Systematic evaluations on the public LPM dataset and an internal enterprise
dataset are conducted to validate PreMind's effectiveness, supported by
detailed analyses.
| no_new_dataset | 0.94625 |
2503.00167 | Kuangyi Chen | Kuangyi Chen and Jun Zhang and Friedrich Fraundorfer | EVLoc: Event-based Visual Localization in LiDAR Maps via Event-Depth
Registration | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras are bio-inspired sensors with some notable features, including
high dynamic range and low latency, which makes them exceptionally suitable for
perception in challenging scenarios such as high-speed motion and extreme
lighting conditions. In this paper, we explore their potential for localization
within pre-existing LiDAR maps, a critical task for applications that require
precise navigation and mobile manipulation. Our framework follows a paradigm
based on the refinement of an initial pose. Specifically, we first project
LiDAR points into 2D space based on a rough initial pose to obtain depth maps,
and then employ an optical flow estimation network to align events with LiDAR
points in 2D space, followed by camera pose estimation using a PnP solver. To
enhance geometric consistency between these two inherently different
modalities, we develop a novel frame-based event representation that improves
structural clarity. Additionally, given the varying degrees of bias observed in
the ground truth poses, we design a module that predicts an auxiliary variable
as a regularization term to mitigate the impact of this bias on network
convergence. Experimental results on several public datasets demonstrate the
effectiveness of our proposed method. To facilitate future research, both the
code and the pre-trained models are made available online.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 20:27:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Kuangyi",
""
],
[
"Zhang",
"Jun",
""
],
[
"Fraundorfer",
"Friedrich",
""
]
]
| TITLE: EVLoc: Event-based Visual Localization in LiDAR Maps via Event-Depth
Registration
ABSTRACT: Event cameras are bio-inspired sensors with some notable features, including
high dynamic range and low latency, which makes them exceptionally suitable for
perception in challenging scenarios such as high-speed motion and extreme
lighting conditions. In this paper, we explore their potential for localization
within pre-existing LiDAR maps, a critical task for applications that require
precise navigation and mobile manipulation. Our framework follows a paradigm
based on the refinement of an initial pose. Specifically, we first project
LiDAR points into 2D space based on a rough initial pose to obtain depth maps,
and then employ an optical flow estimation network to align events with LiDAR
points in 2D space, followed by camera pose estimation using a PnP solver. To
enhance geometric consistency between these two inherently different
modalities, we develop a novel frame-based event representation that improves
structural clarity. Additionally, given the varying degrees of bias observed in
the ground truth poses, we design a module that predicts an auxiliary variable
as a regularization term to mitigate the impact of this bias on network
convergence. Experimental results on several public datasets demonstrate the
effectiveness of our proposed method. To facilitate future research, both the
code and the pre-trained models are made available online.
| no_new_dataset | 0.950457 |
2503.00171 | Denis Musinguzi | Denis Musinguzi, Andrew Katumba, Sudi Murindanyi | PaliGemma-CXR: A Multi-task Multimodal Model for TB Chest X-ray
Interpretation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Tuberculosis (TB) is an infectious global health challenge. Chest X-rays are a
standard method for TB screening, yet many countries face a critical shortage
of radiologists capable of interpreting these images. Machine learning offers
an alternative, as it can automate tasks such as disease diagnosis, and report
generation. However, traditional approaches rely on task-specific models, which
cannot utilize the interdependence between tasks. Building a multi-task model
capable of performing multiple tasks poses additional challenges such as
scarcity of multimodal data, dataset imbalance, and negative transfer. To
address these challenges, we propose PaliGemma-CXR, a multi-task multimodal
model capable of performing TB diagnosis, object detection, segmentation,
report generation, and VQA. Starting with a dataset of chest X-ray images
annotated with TB diagnosis labels and segmentation masks, we curated a
multimodal dataset to support additional tasks. By finetuning PaliGemma on this
dataset and sampling data using ratios of the inverse of the size of task
datasets, we achieved the following results across all tasks: 90.32% accuracy
on TB diagnosis and 98.95% on close-ended VQA, 41.3 BLEU score on report
generation, and a mAP of 19.4 and 16.0 on object detection and segmentation,
respectively. These results demonstrate that PaliGemma-CXR effectively
leverages the interdependence between multiple image interpretation tasks to
enhance performance.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 20:34:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Musinguzi",
"Denis",
""
],
[
"Katumba",
"Andrew",
""
],
[
"Murindanyi",
"Sudi",
""
]
]
| TITLE: PaliGemma-CXR: A Multi-task Multimodal Model for TB Chest X-ray
Interpretation
ABSTRACT: Tuberculosis (TB) is an infectious global health challenge. Chest X-rays are a
standard method for TB screening, yet many countries face a critical shortage
of radiologists capable of interpreting these images. Machine learning offers
an alternative, as it can automate tasks such as disease diagnosis and report
generation. However, traditional approaches rely on task-specific models, which
cannot utilize the interdependence between tasks. Building a multi-task model
capable of performing multiple tasks poses additional challenges such as
scarcity of multimodal data, dataset imbalance, and negative transfer. To
address these challenges, we propose PaliGemma-CXR, a multi-task multimodal
model capable of performing TB diagnosis, object detection, segmentation,
report generation, and VQA. Starting with a dataset of chest X-ray images
annotated with TB diagnosis labels and segmentation masks, we curated a
multimodal dataset to support additional tasks. By finetuning PaliGemma on this
dataset and sampling data using ratios of the inverse of the size of task
datasets, we achieved the following results across all tasks: 90.32% accuracy
on TB diagnosis and 98.95% on close-ended VQA, 41.3 BLEU score on report
generation, and a mAP of 19.4 and 16.0 on object detection and segmentation,
respectively. These results demonstrate that PaliGemma-CXR effectively
leverages the interdependence between multiple image interpretation tasks to
enhance performance.
| new_dataset | 0.889241 |
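The PaliGemma-CXR record mentions sampling training data "using ratios of the inverse of the size of task datasets". A minimal sketch of that mixing rule follows; the task names and example counts are hypothetical, and the paper's actual curriculum may differ in detail.

```python
# Illustrative inverse-dataset-size task sampling; counts below are made up.
import random

def task_sampling_weights(task_sizes):
    """Weight each task proportionally to the inverse of its dataset size."""
    inv = {task: 1.0 / n for task, n in task_sizes.items()}
    total = sum(inv.values())
    return {task: w / total for task, w in inv.items()}

sizes = {"diagnosis": 12000, "detection": 3000, "segmentation": 3000,
         "report": 800, "vqa": 5000}                 # hypothetical example counts
weights = task_sampling_weights(sizes)
task = random.choices(list(weights), weights=list(weights.values()), k=1)[0]
```

Smaller tasks receive proportionally more sampling mass, which counteracts the dataset imbalance the abstract identifies as a challenge.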
2503.00172 | Zhiqiu Xia | Zhiqiu Xia, Jinxuan Xu, Yuqian Zhang, Hang Liu | A Survey of Uncertainty Estimation Methods on Large Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have demonstrated remarkable capabilities across
various tasks. However, these models could offer biased, hallucinated, or
non-factual responses camouflaged by their fluency and realistic appearance.
Uncertainty estimation is the key method to address this challenge. While
research efforts in uncertainty estimation are ramping up, there is a lack of
comprehensive and dedicated surveys on LLM uncertainty estimation. This survey
presents four major avenues of LLM uncertainty estimation. Furthermore, we
perform extensive experimental evaluations across multiple methods and
datasets. Finally, we provide critical and promising future directions for LLM
uncertainty estimation.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 20:38:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xia",
"Zhiqiu",
""
],
[
"Xu",
"Jinxuan",
""
],
[
"Zhang",
"Yuqian",
""
],
[
"Liu",
"Hang",
""
]
]
| TITLE: A Survey of Uncertainty Estimation Methods on Large Language Models
ABSTRACT: Large language models (LLMs) have demonstrated remarkable capabilities across
various tasks. However, these models could offer biased, hallucinated, or
non-factual responses camouflaged by their fluency and realistic appearance.
Uncertainty estimation is the key method to address this challenge. While
research efforts in uncertainty estimation are ramping up, there is a lack of
comprehensive and dedicated surveys on LLM uncertainty estimation. This survey
presents four major avenues of LLM uncertainty estimation. Furthermore, we
perform extensive experimental evaluations across multiple methods and
datasets. Finally, we provide critical and promising future directions for LLM
uncertainty estimation.
| no_new_dataset | 0.937726 |
2503.00174 | Akhil Jalan | Akhil Jalan, Yassir Jedra, Arya Mazumdar, Soumendu Sundar Mukherjee,
Purnamrita Sarkar | Optimal Transfer Learning for Missing Not-at-Random Matrix Completion | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We study transfer learning for matrix completion in a Missing Not-at-Random
(MNAR) setting that is motivated by biological problems. The target matrix $Q$
has entire rows and columns missing, making estimation impossible without side
information. To address this, we use a noisy and incomplete source matrix $P$,
which relates to $Q$ via a feature shift in latent space. We consider both the
active and passive sampling of rows and columns. We establish minimax lower
bounds for entrywise estimation error in each setting. Our computationally
efficient estimation framework achieves this lower bound for the active
setting, which leverages the source data to query the most informative rows and
columns of $Q$. This avoids the need for incoherence assumptions required for
rate optimality in the passive sampling setting. We demonstrate the
effectiveness of our approach through comparisons with existing algorithms on
real-world biological datasets.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 20:40:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jalan",
"Akhil",
""
],
[
"Jedra",
"Yassir",
""
],
[
"Mazumdar",
"Arya",
""
],
[
"Mukherjee",
"Soumendu Sundar",
""
],
[
"Sarkar",
"Purnamrita",
""
]
]
| TITLE: Optimal Transfer Learning for Missing Not-at-Random Matrix Completion
ABSTRACT: We study transfer learning for matrix completion in a Missing Not-at-Random
(MNAR) setting that is motivated by biological problems. The target matrix $Q$
has entire rows and columns missing, making estimation impossible without side
information. To address this, we use a noisy and incomplete source matrix $P$,
which relates to $Q$ via a feature shift in latent space. We consider both the
active and passive sampling of rows and columns. We establish minimax lower
bounds for entrywise estimation error in each setting. Our computationally
efficient estimation framework achieves this lower bound for the active
setting, which leverages the source data to query the most informative rows and
columns of $Q$. This avoids the need for incoherence assumptions required for
rate optimality in the passive sampling setting. We demonstrate the
effectiveness of our approach through comparisons with existing algorithms on
real-world biological datasets.
| no_new_dataset | 0.944689 |
2503.00175 | Xiang Liu | Xiang Liu, Zhe Su, Yongyi Shi, Yiying Tong, Ge Wang and Guo-Wei Wei | Manifold Topological Deep Learning for Biomedical Data | null | null | null | null | eess.IV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, topological deep learning (TDL), which integrates algebraic
topology with deep neural networks, has achieved tremendous success in
processing point-cloud data, emerging as a promising paradigm in data science.
However, TDL has not been developed for data on differentiable manifolds,
including images, due to the challenges posed by differential topology. We
address this challenge by introducing manifold topological deep learning (MTDL)
for the first time. To highlight the power of Hodge theory rooted in
differential topology, we consider a simple convolutional neural network (CNN)
in MTDL. In this novel framework, original images are represented as smooth
manifolds with vector fields that are decomposed into three orthogonal
components based on Hodge theory. These components are then concatenated to
form an input image for the CNN architecture. The performance of MTDL is
evaluated using the MedMNIST v2 benchmark database, which comprises 717,287
biomedical images from eleven 2D and six 3D datasets. MTDL significantly
outperforms other competing methods, extending TDL to a wide range of data on
smooth manifolds.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 20:41:23 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Xiang",
""
],
[
"Su",
"Zhe",
""
],
[
"Shi",
"Yongyi",
""
],
[
"Tong",
"Yiying",
""
],
[
"Wang",
"Ge",
""
],
[
"Wei",
"Guo-Wei",
""
]
]
| TITLE: Manifold Topological Deep Learning for Biomedical Data
ABSTRACT: Recently, topological deep learning (TDL), which integrates algebraic
topology with deep neural networks, has achieved tremendous success in
processing point-cloud data, emerging as a promising paradigm in data science.
However, TDL has not been developed for data on differentiable manifolds,
including images, due to the challenges posed by differential topology. We
address this challenge by introducing manifold topological deep learning (MTDL)
for the first time. To highlight the power of Hodge theory rooted in
differential topology, we consider a simple convolutional neural network (CNN)
in MTDL. In this novel framework, original images are represented as smooth
manifolds with vector fields that are decomposed into three orthogonal
components based on Hodge theory. These components are then concatenated to
form an input image for the CNN architecture. The performance of MTDL is
evaluated using the MedMNIST v2 benchmark database, which comprises 717,287
biomedical images from eleven 2D and six 3D datasets. MTDL significantly
outperforms other competing methods, extending TDL to a wide range of data on
smooth manifolds.
| no_new_dataset | 0.946794 |
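The MTDL record relies on the Hodge decomposition of image-derived vector fields into three orthogonal components. For reference, the textbook statement on a closed Riemannian manifold is given below; this is the standard theorem, not the authors' exact construction.

```latex
% Hodge decomposition of a differential form \omega into exact, co-exact,
% and harmonic parts; the three components are L^2-orthogonal.
\[
  \omega \;=\; d\alpha \;+\; \delta\beta \;+\; h,
  \qquad \Delta h = (d\delta + \delta d)\,h = 0,
\]
\[
  \langle d\alpha, \delta\beta \rangle_{L^2}
  = \langle d\alpha, h \rangle_{L^2}
  = \langle \delta\beta, h \rangle_{L^2} = 0 .
\]
```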
2503.00184 | Russell Funk | Michael Park, Erin Leahey, Russell J. Funk | Robust Evidence for Declining Disruptiveness: Assessing the Role of
Zero-Backward-Citation Works | null | null | null | null | cs.SI cs.DL | http://creativecommons.org/licenses/by/4.0/ | We respond to Holst et al.'s (HATWG) critique that the observed decline in
scientific disruptiveness demonstrated in Park et al. (PLF) stems from
including works with zero backward citations (0-bcites). Applying their own
advocated dataset, metric, and exclusion criteria, we demonstrate statistically
and practically significant declines in disruptiveness that equal major
benchmark transformations in science. Notably, we show that HATWG's own
regression model -- designed specifically to address their concerns about
0-bcite works -- reveals highly significant declines for both papers (p<0.001)
and patents (p<0.001), a finding they neither acknowledge nor interpret. Their
critique is undermined by methodological deficiencies, including reliance on
visual inspection without statistical assessment, and severe data quality
issues in their SciSciNet dataset, which contains nearly three times more
0-bcite papers than our original data. HATWG's departure from established
scientometric practices -- notably their inclusion of document types and fields
known for poor metadata quality -- invalidates their conclusions. Monte Carlo
simulations and additional analyses using multiple disruptiveness measures
across datasets further validate the robustness of the declining trend. Our
findings collectively demonstrate that the observed decline in disruptiveness
is not an artifact of 0-bcite works but represents a substantive change in
scientific and technological innovation patterns.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:02:21 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Park",
"Michael",
""
],
[
"Leahey",
"Erin",
""
],
[
"Funk",
"Russell J.",
""
]
]
| TITLE: Robust Evidence for Declining Disruptiveness: Assessing the Role of
Zero-Backward-Citation Works
ABSTRACT: We respond to Holst et al.'s (HATWG) critique that the observed decline in
scientific disruptiveness demonstrated in Park et al. (PLF) stems from
including works with zero backward citations (0-bcites). Applying their own
advocated dataset, metric, and exclusion criteria, we demonstrate statistically
and practically significant declines in disruptiveness that equal major
benchmark transformations in science. Notably, we show that HATWG's own
regression model -- designed specifically to address their concerns about
0-bcite works -- reveals highly significant declines for both papers (p<0.001)
and patents (p<0.001), a finding they neither acknowledge nor interpret. Their
critique is undermined by methodological deficiencies, including reliance on
visual inspection without statistical assessment, and severe data quality
issues in their SciSciNet dataset, which contains nearly three times more
0-bcite papers than our original data. HATWG's departure from established
scientometric practices -- notably their inclusion of document types and fields
known for poor metadata quality -- invalidates their conclusions. Monte Carlo
simulations and additional analyses using multiple disruptiveness measures
across datasets further validate the robustness of the declining trend. Our
findings collectively demonstrate that the observed decline in disruptiveness
is not an artifact of 0-bcite works but represents a substantive change in
scientific and technological innovation patterns.
| no_new_dataset | 0.94256 |
2503.00196 | Amar Kumar | Amar Kumar, Anita Kriz, Mohammad Havaei, Tal Arbel | PRISM: High-Resolution & Precise Counterfactual Medical Image Generation
using Language-guided Stable Diffusion | Under Review for MIDL 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Developing reliable and generalizable deep learning systems for medical
imaging faces significant obstacles due to spurious correlations, data
imbalances, and limited text annotations in datasets. Addressing these
challenges requires architectures robust to the unique complexities posed by
medical imaging data. The rapid advancements in vision-language foundation
models within the natural image domain prompt the question of how they can be
adapted for medical imaging tasks. In this work, we present PRISM, a framework
that leverages foundation models to generate high-resolution, language-guided
medical image counterfactuals using Stable Diffusion. Our approach demonstrates
unprecedented precision in selectively modifying spurious correlations (the
medical devices) and disease features, enabling the removal and addition of
specific attributes while preserving other image characteristics. Through
extensive evaluation, we show how PRISM advances counterfactual generation and
enables the development of more robust downstream classifiers for clinically
deployable solutions. To facilitate broader adoption and research, we make our
code publicly available at https://github.com/Amarkr1/PRISM.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:32:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kumar",
"Amar",
""
],
[
"Kriz",
"Anita",
""
],
[
"Havaei",
"Mohammad",
""
],
[
"Arbel",
"Tal",
""
]
]
| TITLE: PRISM: High-Resolution & Precise Counterfactual Medical Image Generation
using Language-guided Stable Diffusion
ABSTRACT: Developing reliable and generalizable deep learning systems for medical
imaging faces significant obstacles due to spurious correlations, data
imbalances, and limited text annotations in datasets. Addressing these
challenges requires architectures robust to the unique complexities posed by
medical imaging data. The rapid advancements in vision-language foundation
models within the natural image domain prompt the question of how they can be
adapted for medical imaging tasks. In this work, we present PRISM, a framework
that leverages foundation models to generate high-resolution, language-guided
medical image counterfactuals using Stable Diffusion. Our approach demonstrates
unprecedented precision in selectively modifying spurious correlations (the
medical devices) and disease features, enabling the removal and addition of
specific attributes while preserving other image characteristics. Through
extensive evaluation, we show how PRISM advances counterfactual generation and
enables the development of more robust downstream classifiers for clinically
deployable solutions. To facilitate broader adoption and research, we make our
code publicly available at https://github.com/Amarkr1/PRISM.
| no_new_dataset | 0.949059 |
2503.00202 | Ryosuke Kawamura | Ryosuke Kawamura, Hideaki Hayashi, Noriko Takemura, Hajime Nagahara | MIDAS: Mixing Ambiguous Data with Soft Labels for Dynamic Facial
Expression Recognition | Accepted at WACV2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic facial expression recognition (DFER) is an important task in the
field of computer vision. To apply automatic DFER in practice, it is necessary
to accurately recognize ambiguous facial expressions, which often appear in
data in the wild. In this paper, we propose MIDAS, a data augmentation method
for DFER, which augments ambiguous facial expression data with soft labels
consisting of probabilities for multiple emotion classes. In MIDAS, the
training data are augmented by convexly combining pairs of video frames and
their corresponding emotion class labels, which can also be regarded as an
extension of mixup to soft-labeled video data. This simple extension is
remarkably effective in DFER with ambiguous facial expression data. To evaluate
MIDAS, we conducted experiments on the DFEW dataset. The results demonstrate
that the model trained on the data augmented by MIDAS outperforms the existing
state-of-the-art method trained on the original dataset.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:39:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kawamura",
"Ryosuke",
""
],
[
"Hayashi",
"Hideaki",
""
],
[
"Takemura",
"Noriko",
""
],
[
"Nagahara",
"Hajime",
""
]
]
| TITLE: MIDAS: Mixing Ambiguous Data with Soft Labels for Dynamic Facial
Expression Recognition
ABSTRACT: Dynamic facial expression recognition (DFER) is an important task in the
field of computer vision. To apply automatic DFER in practice, it is necessary
to accurately recognize ambiguous facial expressions, which often appear in
data in the wild. In this paper, we propose MIDAS, a data augmentation method
for DFER, which augments ambiguous facial expression data with soft labels
consisting of probabilities for multiple emotion classes. In MIDAS, the
training data are augmented by convexly combining pairs of video frames and
their corresponding emotion class labels, which can also be regarded as an
extension of mixup to soft-labeled video data. This simple extension is
remarkably effective in DFER with ambiguous facial expression data. To evaluate
MIDAS, we conducted experiments on the DFEW dataset. The results demonstrate
that the model trained on the data augmented by MIDAS outperforms the existing
state-of-the-art method trained on the original dataset.
| no_new_dataset | 0.948728 |
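The MIDAS record describes a mixup-style augmentation that convexly combines pairs of video frames and their soft emotion labels. A minimal sketch of that mixing step is shown below; the Beta-distributed mixing coefficient and its hyperparameter are assumptions borrowed from standard mixup, not necessarily the paper's exact choice.

```python
# Minimal mixup-with-soft-labels sketch in the spirit of the MIDAS abstract.
import numpy as np

def midas_style_mix(frames_a, soft_label_a, frames_b, soft_label_b, alpha=0.2):
    """Convexly combine two video clips and their soft emotion-class labels."""
    lam = np.random.beta(alpha, alpha)                 # mixing coefficient in [0, 1]
    mixed_frames = lam * frames_a + (1.0 - lam) * frames_b
    mixed_label = lam * soft_label_a + (1.0 - lam) * soft_label_b
    return mixed_frames, mixed_label
```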
2503.00205 | Jian Gao | Jian Gao, Weidong Cao, Junyi Yang, Xuan Zhang | AnalogGenie: A Generative Engine for Automatic Discovery of Analog
Circuit Topologies | ICLR 2025 camera ready | null | null | null | cs.LG cs.AR | http://creativecommons.org/licenses/by/4.0/ | The massive and large-scale design of foundational semiconductor integrated
circuits (ICs) is crucial to sustaining the advancement of many emerging and
future technologies, such as generative AI, 5G/6G, and quantum computing.
Excitingly, recent studies have shown the great capabilities of foundational
models in expediting the design of digital ICs. Yet, applying generative AI
techniques to accelerate the design of analog ICs remains a significant
challenge due to critical domain-specific issues, such as the lack of a
comprehensive dataset and effective representation methods for analog circuits.
This paper proposes, $\textbf{AnalogGenie}$, a
$\underline{\textbf{Gen}}$erat$\underline{\textbf{i}}$ve
$\underline{\textbf{e}}$ngine for automatic design/discovery of
$\underline{\textbf{Analog}}$ circuit topologies--the most challenging and
creative task in the conventional manual design flow of analog ICs. AnalogGenie
addresses two key gaps in the field: building a foundational comprehensive
dataset of analog circuit topology and developing a scalable sequence-based
graph representation universal to analog circuits. Experimental results show
the remarkable generation performance of AnalogGenie in broadening the variety
of analog ICs, increasing the number of devices within a single design, and
discovering unseen circuit topologies far beyond any prior art. Our work paves
the way to transform the longstanding, time-consuming manual design flow of
analog ICs into an automated, large-scale process powered by generative AI. Our
source code is available at https://github.com/xz-group/AnalogGenie.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:41:20 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gao",
"Jian",
""
],
[
"Cao",
"Weidong",
""
],
[
"Yang",
"Junyi",
""
],
[
"Zhang",
"Xuan",
""
]
]
| TITLE: AnalogGenie: A Generative Engine for Automatic Discovery of Analog
Circuit Topologies
ABSTRACT: The massive and large-scale design of foundational semiconductor integrated
circuits (ICs) is crucial to sustaining the advancement of many emerging and
future technologies, such as generative AI, 5G/6G, and quantum computing.
Excitingly, recent studies have shown the great capabilities of foundational
models in expediting the design of digital ICs. Yet, applying generative AI
techniques to accelerate the design of analog ICs remains a significant
challenge due to critical domain-specific issues, such as the lack of a
comprehensive dataset and effective representation methods for analog circuits.
This paper proposes, $\textbf{AnalogGenie}$, a
$\underline{\textbf{Gen}}$erat$\underline{\textbf{i}}$ve
$\underline{\textbf{e}}$ngine for automatic design/discovery of
$\underline{\textbf{Analog}}$ circuit topologies--the most challenging and
creative task in the conventional manual design flow of analog ICs. AnalogGenie
addresses two key gaps in the field: building a foundational comprehensive
dataset of analog circuit topology and developing a scalable sequence-based
graph representation universal to analog circuits. Experimental results show
the remarkable generation performance of AnalogGenie in broadening the variety
of analog ICs, increasing the number of devices within a single design, and
discovering unseen circuit topologies far beyond any prior art. Our work paves
the way to transform the longstanding, time-consuming manual design flow of
analog ICs into an automated, large-scale process powered by generative AI. Our
source code is available at https://github.com/xz-group/AnalogGenie.
| no_new_dataset | 0.937669 |
2503.00209 | Vu Minh Hoang Dang | Vu Minh Hoang Dang, Rakesh M. Verma | Autoencoder-Based Framework to Capture Vocabulary Quality in NLP | Extended version of "Vocabulary Quality in NLP Datasets: An
Autoencoder-Based Framework Across Domains and Languages" in IDA 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Linguistic richness is essential for advancing natural language processing
(NLP), as dataset characteristics often directly influence model performance.
However, traditional metrics such as Type-Token Ratio (TTR), Vocabulary
Diversity (VOCD), and Measure of Lexical Text Diversity (MTLD) do not
adequately capture contextual relationships, semantic richness, and structural
complexity. In this paper, we introduce an autoencoder-based framework that
uses neural network capacity as a proxy for vocabulary richness, diversity, and
complexity, enabling a dynamic assessment of the interplay between vocabulary
size, sentence structure, and contextual depth. We validate our approach on two
distinct datasets: the DIFrauD dataset, which spans multiple domains of
deceptive and fraudulent text, and the Project Gutenberg dataset, representing
diverse languages, genres, and historical periods. Experimental results
highlight the robustness and adaptability of our method, offering practical
guidance for dataset curation and NLP model design. By enhancing traditional
vocabulary evaluation, our work fosters the development of more context-aware,
linguistically adaptive NLP systems.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:45:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dang",
"Vu Minh Hoang",
""
],
[
"Verma",
"Rakesh M.",
""
]
]
| TITLE: Autoencoder-Based Framework to Capture Vocabulary Quality in NLP
ABSTRACT: Linguistic richness is essential for advancing natural language processing
(NLP), as dataset characteristics often directly influence model performance.
However, traditional metrics such as Type-Token Ratio (TTR), Vocabulary
Diversity (VOCD), and Measure of Lexical Text Diversity (MTLD) do not
adequately capture contextual relationships, semantic richness, and structural
complexity. In this paper, we introduce an autoencoder-based framework that
uses neural network capacity as a proxy for vocabulary richness, diversity, and
complexity, enabling a dynamic assessment of the interplay between vocabulary
size, sentence structure, and contextual depth. We validate our approach on two
distinct datasets: the DIFrauD dataset, which spans multiple domains of
deceptive and fraudulent text, and the Project Gutenberg dataset, representing
diverse languages, genres, and historical periods. Experimental results
highlight the robustness and adaptability of our method, offering practical
guidance for dataset curation and NLP model design. By enhancing traditional
vocabulary evaluation, our work fosters the development of more context-aware,
linguistically adaptive NLP systems.
| no_new_dataset | 0.904987 |
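Among the traditional baselines the vocabulary-quality record contrasts against, the Type-Token Ratio (TTR) is the simplest. A reference implementation is given below for comparison purposes; it is not the paper's autoencoder-based measure.

```python
# Type-Token Ratio: unique word types divided by total tokens.
def type_token_ratio(tokens):
    """Higher values indicate a richer (less repetitive) vocabulary."""
    return len(set(tokens)) / max(len(tokens), 1)

print(type_token_ratio("the quick brown fox jumps over the lazy dog".split()))  # 8/9
```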
2503.00210 | Wenrui Fan | Wenrui Fan and L. M. Riza Rizky and Jiayang Zhang and Chen Chen and
Haiping Lu and Kevin Teh and Dinesh Selvarajah and Shuo Zhou | Foundation-Model-Boosted Multimodal Learning for fMRI-based Neuropathic
Pain Drug Response Prediction | null | null | null | null | cs.LG cs.AI cs.CV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuropathic pain, affecting up to 10% of adults, remains difficult to treat
due to limited therapeutic efficacy and tolerability. Although resting-state
functional MRI (rs-fMRI) is a promising non-invasive measurement of brain
biomarkers to predict drug response in therapeutic development, the complexity
of fMRI demands machine learning models with substantial capacity. However,
extreme data scarcity in neuropathic pain research limits the application of
high-capacity models. To address the challenge of data scarcity, we propose
FMM$_{TC}$, a Foundation-Model-boosted Multimodal learning framework for
fMRI-based neuropathic pain drug response prediction, which leverages both
internal multimodal information in pain-specific data and external knowledge
from large pain-agnostic data. Specifically, to maximize the value of limited
pain-specific data, FMM$_{TC}$ integrates complementary information from two
rs-fMRI modalities: Time series and functional Connectivity. FMM$_{TC}$ is
further boosted by an fMRI foundation model with its external knowledge from
extensive pain-agnostic fMRI datasets enriching limited pain-specific
information. Evaluations with an in-house dataset and a public dataset from
OpenNeuro demonstrate FMM$_{TC}$'s superior representation ability,
generalizability, and cross-dataset adaptability over existing unimodal fMRI
models that only consider one of the rs-fMRI modalities. The ablation study
validates the effectiveness of multimodal learning and foundation-model-powered
external knowledge transfer in FMM$_{TC}$. An integrated gradient-based
interpretation study explains how FMM$_{TC}$'s cross-dataset dynamic behaviors
enhance its adaptability. In conclusion, FMM$_{TC}$ boosts clinical trials in
neuropathic pain therapeutic development by accurately predicting drug
responses to improve participant stratification efficiency.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:50:03 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Fan",
"Wenrui",
""
],
[
"Rizky",
"L. M. Riza",
""
],
[
"Zhang",
"Jiayang",
""
],
[
"Chen",
"Chen",
""
],
[
"Lu",
"Haiping",
""
],
[
"Teh",
"Kevin",
""
],
[
"Selvarajah",
"Dinesh",
""
],
[
"Zhou",
"Shuo",
""
]
]
| TITLE: Foundation-Model-Boosted Multimodal Learning for fMRI-based Neuropathic
Pain Drug Response Prediction
ABSTRACT: Neuropathic pain, affecting up to 10% of adults, remains difficult to treat
due to limited therapeutic efficacy and tolerability. Although resting-state
functional MRI (rs-fMRI) is a promising non-invasive measurement of brain
biomarkers to predict drug response in therapeutic development, the complexity
of fMRI demands machine learning models with substantial capacity. However,
extreme data scarcity in neuropathic pain research limits the application of
high-capacity models. To address the challenge of data scarcity, we propose
FMM$_{TC}$, a Foundation-Model-boosted Multimodal learning framework for
fMRI-based neuropathic pain drug response prediction, which leverages both
internal multimodal information in pain-specific data and external knowledge
from large pain-agnostic data. Specifically, to maximize the value of limited
pain-specific data, FMM$_{TC}$ integrates complementary information from two
rs-fMRI modalities: Time series and functional Connectivity. FMM$_{TC}$ is
further boosted by an fMRI foundation model with its external knowledge from
extensive pain-agnostic fMRI datasets enriching limited pain-specific
information. Evaluations with an in-house dataset and a public dataset from
OpenNeuro demonstrate FMM$_{TC}$'s superior representation ability,
generalizability, and cross-dataset adaptability over existing unimodal fMRI
models that only consider one of the rs-fMRI modalities. The ablation study
validates the effectiveness of multimodal learning and foundation-model-powered
external knowledge transfer in FMM$_{TC}$. An integrated gradient-based
interpretation study explains how FMM$_{TC}$'s cross-dataset dynamic behaviors
enhance its adaptability. In conclusion, FMM$_{TC}$ boosts clinical trials in
neuropathic pain therapeutic development by accurately predicting drug
responses to improve participant stratification efficiency.
| no_new_dataset | 0.951414 |
2503.00211 | Jiawei Zhang | Jiawei Zhang, Xuan Yang, Taiqi Wang, Yu Yao, Aleksandr Petiushko, Bo
Li | SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal
Foundation Models | null | null | null | null | cs.RO cs.AI cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Traditional autonomous driving systems often struggle to integrate high-level
reasoning with low-level control, resulting in suboptimal and sometimes unsafe
driving behaviors. The emergence of Multimodal Large Language Models (MLLMs),
which can process both visual and textual data, presents an opportunity to
unify perception and reasoning tasks within a single framework. However,
effectively embedding precise safety knowledge into MLLMs for autonomous
driving remains a significant challenge. To address this, we propose SafeAuto,
a novel framework that enhances MLLM-based autonomous driving systems by
incorporating both unstructured and structured knowledge. Specifically, we
first introduce the Position-Dependent Cross-Entropy (PDCE) loss function,
designed to improve the accuracy of low-level control signal predictions when
numerical values are represented as text. Second, to ensure safe autonomous
driving by explicitly integrating precise safety knowledge into the MLLM, we
develop a reasoning component for SafeAuto. This component translates driving
safety regulations into first-order logic rules (e.g., "red light => stop") and
incorporates these rules into a probabilistic graphical model, such as a Markov
Logic Network (MLN). The MLN is trained to verify the predicted next actions
using environmental attributes identified by attribute recognition models
(e.g., detecting a red light) to form the predicates. Additionally, we
construct a Multimodal RAG model that leverages video data, control signals,
and environmental attributes to learn more effectively from past similar
driving experiences. By integrating PDCE, MLN, and Multimodal RAG, SafeAuto
significantly outperforms existing baselines across multiple datasets. This
advancement enables more accurate, reliable, and safer autonomous driving
systems that learn from experience, obey traffic laws, and perform precise
control actions.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:53:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Jiawei",
""
],
[
"Yang",
"Xuan",
""
],
[
"Wang",
"Taiqi",
""
],
[
"Yao",
"Yu",
""
],
[
"Petiushko",
"Aleksandr",
""
],
[
"Li",
"Bo",
""
]
]
| TITLE: SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal
Foundation Models
ABSTRACT: Traditional autonomous driving systems often struggle to integrate high-level
reasoning with low-level control, resulting in suboptimal and sometimes unsafe
driving behaviors. The emergence of Multimodal Large Language Models (MLLMs),
which can process both visual and textual data, presents an opportunity to
unify perception and reasoning tasks within a single framework. However,
effectively embedding precise safety knowledge into MLLMs for autonomous
driving remains a significant challenge. To address this, we propose SafeAuto,
a novel framework that enhances MLLM-based autonomous driving systems by
incorporating both unstructured and structured knowledge. Specifically, we
first introduce the Position-Dependent Cross-Entropy (PDCE) loss function,
designed to improve the accuracy of low-level control signal predictions when
numerical values are represented as text. Second, to ensure safe autonomous
driving by explicitly integrating precise safety knowledge into the MLLM, we
develop a reasoning component for SafeAuto. This component translates driving
safety regulations into first-order logic rules (e.g., "red light => stop") and
incorporates these rules into a probabilistic graphical model, such as a Markov
Logic Network (MLN). The MLN is trained to verify the predicted next actions
using environmental attributes identified by attribute recognition models
(e.g., detecting a red light) to form the predicates. Additionally, we
construct a Multimodal RAG model that leverages video data, control signals,
and environmental attributes to learn more effectively from past similar
driving experiences. By integrating PDCE, MLN, and Multimodal RAG, SafeAuto
significantly outperforms existing baselines across multiple datasets. This
advancement enables more accurate, reliable, and safer autonomous driving
systems that learn from experience, obey traffic laws, and perform precise
control actions.
| no_new_dataset | 0.941007 |
2503.00231 | Fakhraddin Alwajih | Samar M. Magdy, Sang Yun Kwon, Fakhraddin Alwajih, Safaa Abdelfadil,
Shady Shehata, and Muhammad Abdul-Mageed | Jawaher: A Multidialectal Dataset of Arabic Proverbs for LLM
Benchmarking | Project GitHub page is accessible at:
https://github.com/UBC-NLP/jawaher | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in instruction fine-tuning, alignment methods such as
reinforcement learning from human feedback (RLHF), and optimization techniques
like direct preference optimization (DPO) have significantly enhanced the
adaptability of large language models (LLMs) to user preferences. However,
despite these innovations, many LLMs continue to exhibit biases toward Western,
Anglo-centric, or American cultures, with performance on English data
consistently surpassing that of other languages. This reveals a persistent
cultural gap in LLMs, which complicates their ability to accurately process
culturally rich and diverse figurative language such as proverbs. To address
this, we introduce Jawaher, a benchmark designed to assess LLMs' capacity to
comprehend and interpret Arabic proverbs. Jawaher includes proverbs from
various Arabic dialects, along with idiomatic translations and explanations.
Through extensive evaluations of both open- and closed-source models, we find
that while LLMs can generate idiomatically accurate translations, they struggle
with producing culturally nuanced and contextually relevant explanations. These
findings highlight the need for ongoing model refinement and dataset expansion
to bridge the cultural gap in figurative language processing.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 22:28:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Magdy",
"Samar M.",
""
],
[
"Kwon",
"Sang Yun",
""
],
[
"Alwajih",
"Fakhraddin",
""
],
[
"Abdelfadil",
"Safaa",
""
],
[
"Shehata",
"Shady",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
]
| TITLE: Jawaher: A Multidialectal Dataset of Arabic Proverbs for LLM
Benchmarking
ABSTRACT: Recent advancements in instruction fine-tuning, alignment methods such as
reinforcement learning from human feedback (RLHF), and optimization techniques
like direct preference optimization (DPO) have significantly enhanced the
adaptability of large language models (LLMs) to user preferences. However,
despite these innovations, many LLMs continue to exhibit biases toward Western,
Anglo-centric, or American cultures, with performance on English data
consistently surpassing that of other languages. This reveals a persistent
cultural gap in LLMs, which complicates their ability to accurately process
culturally rich and diverse figurative language such as proverbs. To address
this, we introduce Jawaher, a benchmark designed to assess LLMs' capacity to
comprehend and interpret Arabic proverbs. Jawaher includes proverbs from
various Arabic dialects, along with idiomatic translations and explanations.
Through extensive evaluations of both open- and closed-source models, we find
that while LLMs can generate idiomatically accurate translations, they struggle
with producing culturally nuanced and contextually relevant explanations. These
findings highlight the need for ongoing model refinement and dataset expansion
to bridge the cultural gap in figurative language processing.
| new_dataset | 0.964954 |
2503.00232 | Kaleab A. Kinfu | Kaleab A. Kinfu and Ren\'e Vidal | Transformers with Joint Tokens and Local-Global Attention for Efficient
Human Pose Estimation | This work has been submitted to the IEEE for possible publication | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have led
to significant progress in 2D body pose estimation. However, achieving a good
balance between accuracy, efficiency, and robustness remains a challenge. For
instance, CNNs are computationally efficient but struggle with long-range
dependencies, while ViTs excel in capturing such dependencies but suffer from
quadratic computational complexity. This paper proposes two ViT-based models
for accurate, efficient, and robust 2D pose estimation. The first one,
EViTPose, operates in a computationally efficient manner without sacrificing
accuracy by utilizing learnable joint tokens to select and process a subset of
the most important body patches, enabling us to control the trade-off between
accuracy and efficiency by changing the number of patches to be processed. The
second one, UniTransPose, while not allowing for the same level of direct
control over the trade-off, efficiently handles multiple scales by combining
(1) an efficient multi-scale transformer encoder that uses both local and
global attention with (2) an efficient sub-pixel CNN decoder for better speed
and accuracy. Moreover, by incorporating all joints from different benchmarks
into a unified skeletal representation, we train robust methods that learn from
multiple datasets simultaneously and perform well across a range of scenarios
-- including pose variations, lighting conditions, and occlusions. Experiments
on six benchmarks demonstrate that the proposed methods significantly
outperform state-of-the-art methods while improving computational efficiency.
EViTPose exhibits a significant decrease in computational complexity (30% to
44% less in GFLOPs) with a minimal drop of accuracy (0% to 3.5% less), and
UniTransPose achieves accuracy improvements ranging from 0.9% to 43.8% across
these benchmarks.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 22:34:22 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kinfu",
"Kaleab A.",
""
],
[
"Vidal",
"René",
""
]
]
| TITLE: Transformers with Joint Tokens and Local-Global Attention for Efficient
Human Pose Estimation
ABSTRACT: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have led
to significant progress in 2D body pose estimation. However, achieving a good
balance between accuracy, efficiency, and robustness remains a challenge. For
instance, CNNs are computationally efficient but struggle with long-range
dependencies, while ViTs excel in capturing such dependencies but suffer from
quadratic computational complexity. This paper proposes two ViT-based models
for accurate, efficient, and robust 2D pose estimation. The first one,
EViTPose, operates in a computationally efficient manner without sacrificing
accuracy by utilizing learnable joint tokens to select and process a subset of
the most important body patches, enabling us to control the trade-off between
accuracy and efficiency by changing the number of patches to be processed. The
second one, UniTransPose, while not allowing for the same level of direct
control over the trade-off, efficiently handles multiple scales by combining
(1) an efficient multi-scale transformer encoder that uses both local and
global attention with (2) an efficient sub-pixel CNN decoder for better speed
and accuracy. Moreover, by incorporating all joints from different benchmarks
into a unified skeletal representation, we train robust methods that learn from
multiple datasets simultaneously and perform well across a range of scenarios
-- including pose variations, lighting conditions, and occlusions. Experiments
on six benchmarks demonstrate that the proposed methods significantly
outperform state-of-the-art methods while improving computational efficiency.
EViTPose exhibits a significant decrease in computational complexity (30% to
44% less in GFLOPs) with a minimal drop of accuracy (0% to 3.5% less), and
UniTransPose achieves accuracy improvements ranging from 0.9% to 43.8% across
these benchmarks.
| no_new_dataset | 0.948537 |
2503.00266 | Milad Yazdani | Milad Yazdani, Yasamin Medghalchi, Pooria Ashrafian, Ilker
Hacihaliloglu, and Dena Shahriari | Flow Matching for Medical Image Synthesis: Bridging the Gap Between
Speed and Quality | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | Deep learning models have emerged as a powerful tool for various medical
applications. However, their success depends on large, high-quality datasets
that are challenging to obtain due to privacy concerns and costly annotation.
Generative models, such as diffusion models, offer a potential solution by
synthesizing medical images, but their practical adoption is hindered by long
inference times. In this paper, we propose the use of an optimal transport flow
matching approach to accelerate image generation. By introducing a straighter
mapping between the source and target distribution, our method significantly
reduces inference time while preserving and further enhancing the quality of
the outputs. Furthermore, this approach is highly adaptable, supporting various
medical imaging modalities, conditioning mechanisms (such as class labels and
masks), and different spatial dimensions, including 2D and 3D. Beyond image
generation, it can also be applied to related tasks such as image enhancement.
Our results demonstrate the efficiency and versatility of this framework,
making it a promising advancement for medical imaging applications. Code with
checkpoints and a synthetic dataset (beneficial for classification and
segmentation) is now available on: https://github.com/milad1378yz/MOTFM.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 00:49:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yazdani",
"Milad",
""
],
[
"Medghalchi",
"Yasamin",
""
],
[
"Ashrafian",
"Pooria",
""
],
[
"Hacihaliloglu",
"Ilker",
""
],
[
"Shahriari",
"Dena",
""
]
]
| TITLE: Flow Matching for Medical Image Synthesis: Bridging the Gap Between
Speed and Quality
ABSTRACT: Deep learning models have emerged as a powerful tool for various medical
applications. However, their success depends on large, high-quality datasets
that are challenging to obtain due to privacy concerns and costly annotation.
Generative models, such as diffusion models, offer a potential solution by
synthesizing medical images, but their practical adoption is hindered by long
inference times. In this paper, we propose the use of an optimal transport flow
matching approach to accelerate image generation. By introducing a straighter
mapping between the source and target distribution, our method significantly
reduces inference time while preserving and further enhancing the quality of
the outputs. Furthermore, this approach is highly adaptable, supporting various
medical imaging modalities, conditioning mechanisms (such as class labels and
masks), and different spatial dimensions, including 2D and 3D. Beyond image
generation, it can also be applied to related tasks such as image enhancement.
Our results demonstrate the efficiency and versatility of this framework,
making it a promising advancement for medical imaging applications. Code with
checkpoints and a synthetic dataset (beneficial for classification and
segmentation) is now available on: https://github.com/milad1378yz/MOTFM.
| no_new_dataset | 0.946843 |
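The flow-matching record builds on straight source-to-target interpolation paths. Below is a hedged sketch of a generic conditional flow-matching training loss with straight (rectified) paths; it omits the minibatch optimal-transport coupling of noise and data that an OT variant would add, and `model` is a hypothetical velocity-field network with an assumed call signature.

```python
# Generic conditional flow-matching loss with straight interpolation paths.
import torch

def flow_matching_loss(model, x1, cond=None):
    """x1: a batch of real images; the network regresses the constant velocity x1 - x0."""
    x0 = torch.randn_like(x1)                                   # source (noise) samples
    t = torch.rand(x1.shape[0], device=x1.device).view(-1, *([1] * (x1.dim() - 1)))
    xt = (1.0 - t) * x0 + t * x1                                # straight-line interpolation
    target_velocity = x1 - x0
    pred = model(xt, t.flatten(), cond)                         # assumed signature
    return torch.mean((pred - target_velocity) ** 2)
```

At inference, integrating the learned velocity field from noise to data in a few ODE steps is what yields the speed advantage the abstract emphasizes.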
2503.00267 | Xinwei Luo | Xinwei Luo, Songlin Zhao, Yun Zong, Yong Chen, Gui-shuang Ying, Lifang
He | SegImgNet: Segmentation-Guided Dual-Branch Network for Retinal Disease
Diagnoses | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Retinal images play a crucial role in diagnosing various diseases, as retinal
structures provide essential diagnostic information. However, effectively
capturing structural features while integrating them with contextual
information from retinal images remains a challenge. In this work, we propose
segmentation-guided dual-branch network for retinal disease diagnosis using
retinal images and their segmentation maps, named SegImgNet. SegImgNet
incorporates a segmentation module to generate multi-scale retinal structural
feature maps from retinal images. The classification module employs two
encoders to independently extract features from segmented images and retinal
images for disease classification. To further enhance feature extraction, we
introduce the Segmentation-Guided Attention (SGA) block, which leverages
feature maps from the segmentation module to refine the classification process.
We evaluate SegImgNet on the public AIROGS dataset and the private e-ROP
dataset. Experimental results demonstrate that SegImgNet consistently
outperforms existing methods, underscoring its effectiveness in retinal disease
diagnosis. The code is publicly available at
https://github.com/hawk-sudo/SegImgNet.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 00:56:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Luo",
"Xinwei",
""
],
[
"Zhao",
"Songlin",
""
],
[
"Zong",
"Yun",
""
],
[
"Chen",
"Yong",
""
],
[
"Ying",
"Gui-shuang",
""
],
[
"He",
"Lifang",
""
]
]
| TITLE: SegImgNet: Segmentation-Guided Dual-Branch Network for Retinal Disease
Diagnoses
ABSTRACT: Retinal images play a crucial role in diagnosing various diseases, as retinal
structures provide essential diagnostic information. However, effectively
capturing structural features while integrating them with contextual
information from retinal images remains a challenge. In this work, we propose
a segmentation-guided dual-branch network for retinal disease diagnosis using
retinal images and their segmentation maps, named SegImgNet. SegImgNet
incorporates a segmentation module to generate multi-scale retinal structural
feature maps from retinal images. The classification module employs two
encoders to independently extract features from segmented images and retinal
images for disease classification. To further enhance feature extraction, we
introduce the Segmentation-Guided Attention (SGA) block, which leverages
feature maps from the segmentation module to refine the classification process.
We evaluate SegImgNet on the public AIROGS dataset and the private e-ROP
dataset. Experimental results demonstrate that SegImgNet consistently
outperforms existing methods, underscoring its effectiveness in retinal disease
diagnosis. The code is publicly available at
https://github.com/hawk-sudo/SegImgNet.
| no_new_dataset | 0.949482 |
2503.00269 | Gabriel Davis Jones | Jahan C. Penny-Dimri, Magdalena Bachmann, William R. Cooke, Sam
Mathewlynn, Samuel Dockree, John Tolladay, Jannik Kossen, Lin Li, Yarin Gal,
Gabriel Davis Jones | Reducing Large Language Model Safety Risks in Women's Health using
Semantic Entropy | 15 pages, 6 tables | null | null | null | cs.LG cs.AI cs.CL cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large language models (LLMs) hold substantial promise for clinical decision
support. However, their widespread adoption in medicine, particularly in
healthcare, is hindered by their propensity to generate false or misleading
outputs, known as hallucinations. In high-stakes domains such as women's health
(obstetrics & gynaecology), where errors in clinical reasoning can have
profound consequences for maternal and neonatal outcomes, ensuring the
reliability of AI-generated responses is critical. Traditional methods for
quantifying uncertainty, such as perplexity, fail to capture meaning-level
inconsistencies that lead to misinformation. Here, we evaluate semantic entropy
(SE), a novel uncertainty metric that assesses meaning-level variation, to
detect hallucinations in AI-generated medical content. Using a clinically
validated dataset derived from UK RCOG MRCOG examinations, we compared SE with
perplexity in identifying uncertain responses. SE demonstrated superior
performance, achieving an AUROC of 0.76 (95% CI: 0.75-0.78), compared to 0.62
(0.60-0.65) for perplexity. Clinical expert validation further confirmed its
effectiveness, with SE achieving near-perfect uncertainty discrimination
(AUROC: 0.97). While semantic clustering was successful in only 30% of cases,
SE remains a valuable tool for improving AI safety in women's health. These
findings suggest that SE could enable more reliable AI integration into
clinical practice, particularly in resource-limited settings where LLMs could
augment care. This study highlights the potential of SE as a key safeguard in
the responsible deployment of AI-driven tools in women's health, leading to
safer and more effective digital health interventions.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 00:57:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Penny-Dimri",
"Jahan C.",
""
],
[
"Bachmann",
"Magdalena",
""
],
[
"Cooke",
"William R.",
""
],
[
"Mathewlynn",
"Sam",
""
],
[
"Dockree",
"Samuel",
""
],
[
"Tolladay",
"John",
""
],
[
"Kossen",
"Jannik",
""
],
[
"Li",
"Lin",
""
],
[
"Gal",
"Yarin",
""
],
[
"Jones",
"Gabriel Davis",
""
]
]
| TITLE: Reducing Large Language Model Safety Risks in Women's Health using
Semantic Entropy
ABSTRACT: Large language models (LLMs) hold substantial promise for clinical decision
support. However, their widespread adoption in medicine, particularly in
healthcare, is hindered by their propensity to generate false or misleading
outputs, known as hallucinations. In high-stakes domains such as women's health
(obstetrics & gynaecology), where errors in clinical reasoning can have
profound consequences for maternal and neonatal outcomes, ensuring the
reliability of AI-generated responses is critical. Traditional methods for
quantifying uncertainty, such as perplexity, fail to capture meaning-level
inconsistencies that lead to misinformation. Here, we evaluate semantic entropy
(SE), a novel uncertainty metric that assesses meaning-level variation, to
detect hallucinations in AI-generated medical content. Using a clinically
validated dataset derived from UK RCOG MRCOG examinations, we compared SE with
perplexity in identifying uncertain responses. SE demonstrated superior
performance, achieving an AUROC of 0.76 (95% CI: 0.75-0.78), compared to 0.62
(0.60-0.65) for perplexity. Clinical expert validation further confirmed its
effectiveness, with SE achieving near-perfect uncertainty discrimination
(AUROC: 0.97). While semantic clustering was successful in only 30% of cases,
SE remains a valuable tool for improving AI safety in women's health. These
findings suggest that SE could enable more reliable AI integration into
clinical practice, particularly in resource-limited settings where LLMs could
augment care. This study highlights the potential of SE as a key safeguard in
the responsible deployment of AI-driven tools in women's health, leading to
safer and more effective digital health interventions.
| no_new_dataset | 0.942692 |
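The semantic-entropy record evaluates an uncertainty measure computed over meaning clusters rather than token likelihoods. A simplified illustration of the generic recipe follows; the clustering predicate (e.g., a bidirectional-entailment check between answers) is left abstract, and this is not the paper's exact pipeline.

```python
# Simplified semantic entropy: group sampled answers into meaning clusters and
# compute the entropy of the cluster probability mass.
import math

def semantic_entropy(answers, probs, same_meaning):
    """answers: sampled responses; probs: their probabilities; same_meaning: pairwise test."""
    clusters = []                                   # each cluster is a list of answer indices
    for i, a in enumerate(answers):
        for c in clusters:
            if same_meaning(answers[c[0]], a):
                c.append(i)
                break
        else:
            clusters.append([i])
    mass = [sum(probs[i] for i in c) for c in clusters]
    z = sum(mass)
    return -sum((m / z) * math.log(m / z) for m in mass if m > 0)
```

High semantic entropy flags responses whose sampled variants disagree in meaning, which is the signal the study uses to detect likely hallucinations.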
2503.00289 | Kisan Khatri | Kisan Khatri, Ronald M. Levy, Allan Haldane | Phylogenetic Corrections and Higher-Order Sequence Statistics in Protein
Families: The Potts Model vs MSA Transformer | 7 pages, 5 figures, Also presented in BPS2025 Annual Meeting, Los
Angeles, California | null | null | null | physics.bio-ph q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Recent generative learning models applied to protein multiple sequence
alignment (MSA) datasets include simple and interpretable physics-based Potts
covariation models and other machine learning models such as MSA-Transformer
(MSA-T). The best models accurately reproduce MSA statistics induced by the
biophysical constraints within proteins, raising the question of which
functional forms best model the underlying physics. The Potts model is usually
specified by an effective potential including pairwise residue-residue
interaction terms, but it has been suggested that MSA-T can capture the effects
induced by effective potentials which include more than pairwise interactions
and implicitly account for phylogenetic structure in the MSA. Here we compare
the ability of the Potts model and MSA-T to reconstruct higher-order sequence
statistics reflecting complex biological sequence constraints. We find that the
model performance depends greatly on the treatment of phylogenetic
relationships between the sequences, which can induce non-biophysical
mutational covariation in MSAs. When using explicit corrections for
phylogenetic dependencies, we find the Potts model outperforms MSA-T in
detecting epistatic interactions of biophysical origin.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 01:43:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Khatri",
"Kisan",
""
],
[
"Levy",
"Ronald M.",
""
],
[
"Haldane",
"Allan",
""
]
]
| TITLE: Phylogenetic Corrections and Higher-Order Sequence Statistics in Protein
Families: The Potts Model vs MSA Transformer
ABSTRACT: Recent generative learning models applied to protein multiple sequence
alignment (MSA) datasets include simple and interpretable physics-based Potts
covariation models and other machine learning models such as MSA-Transformer
(MSA-T). The best models accurately reproduce MSA statistics induced by the
biophysical constraints within proteins, raising the question of which
functional forms best model the underlying physics. The Potts model is usually
specified by an effective potential including pairwise residue-residue
interaction terms, but it has been suggested that MSA-T can capture the effects
induced by effective potentials which include more than pairwise interactions
and implicitly account for phylogenetic structure in the MSA. Here we compare
the ability of the Potts model and MSA-T to reconstruct higher-order sequence
statistics reflecting complex biological sequence constraints. We find that the
model performance depends greatly on the treatment of phylogenetic
relationships between the sequences, which can induce non-biophysical
mutational covariation in MSAs. When using explicit corrections for
phylogenetic dependencies, we find the Potts model outperforms MSA-T in
detecting epistatic interactions of biophysical origin.
| no_new_dataset | 0.944536 |
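The Potts-model record refers to the standard pairwise effective potential for a protein sequence $S=(s_1,\dots,s_L)$. For reference, the common statistical form is shown below (sign conventions vary between papers).

```latex
% Pairwise Potts effective potential with field terms h_i and coupling terms J_ij.
\[
  E(S) \;=\; -\sum_{i=1}^{L} h_i(s_i) \;-\; \sum_{i<j} J_{ij}(s_i, s_j),
  \qquad
  P(S) \;=\; \frac{e^{-E(S)}}{Z},
  \quad
  Z \;=\; \sum_{S'} e^{-E(S')} .
\]
```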
2503.00292 | Zhiguo Wang | Hui Li, Zhiguo Wang, Bohui Chen, and Li Sheng | Generalization Bounds for Equivariant Networks on Markov Data | Submitted for possible publication | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Equivariant neural networks play a pivotal role in analyzing datasets with
symmetry properties, particularly in complex data structures. However,
integrating equivariance with Markov properties presents notable challenges due
to the inherent dependencies within such data. Previous research has primarily
concentrated on establishing generalization bounds under the assumption of
independently and identically distributed data, frequently neglecting the
influence of Markov dependencies. In this study, we investigate the impact of
Markov properties on generalization performance alongside the role of
equivariance within this context. We begin by applying a new McDiarmid's
inequality to derive a generalization bound for neural networks trained on
Markov datasets, using Rademacher complexity as a central measure of model
capacity. Subsequently, we utilize group theory to compute the covering number
under equivariant constraints, enabling us to obtain an upper bound on the
Rademacher complexity based on this covering number. This bound provides
practical insights into selecting low-dimensional irreducible representations,
enhancing generalization performance for fixed-width equivariant neural
networks.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 01:53:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Hui",
""
],
[
"Wang",
"Zhiguo",
""
],
[
"Chen",
"Bohui",
""
],
[
"Sheng",
"Li",
""
]
]
| TITLE: Generalization Bounds for Equivariant Networks on Markov Data
ABSTRACT: Equivariant neural networks play a pivotal role in analyzing datasets with
symmetry properties, particularly in complex data structures. However,
integrating equivariance with Markov properties presents notable challenges due
to the inherent dependencies within such data. Previous research has primarily
concentrated on establishing generalization bounds under the assumption of
independently and identically distributed data, frequently neglecting the
influence of Markov dependencies. In this study, we investigate the impact of
Markov properties on generalization performance alongside the role of
equivariance within this context. We begin by applying a new McDiarmid's
inequality to derive a generalization bound for neural networks trained on
Markov datasets, using Rademacher complexity as a central measure of model
capacity. Subsequently, we utilize group theory to compute the covering number
under equivariant constraints, enabling us to obtain an upper bound on the
Rademacher complexity based on this covering number. This bound provides
practical insights into selecting low-dimensional irreducible representations,
enhancing generalization performance for fixed-width equivariant neural
networks.
| no_new_dataset | 0.9455 |
2503.00299 | Junhui Shen | Junhui Shen, Aaron J. Davis, Ding Lu, and Zhaojun Bai | Hidden Convexity of Fair PCA and Fast Solver via Eigenvalue Optimization | null | null | null | null | cs.LG cs.AI math.OC stat.ML | http://creativecommons.org/licenses/by/4.0/ | Principal Component Analysis (PCA) is a foundational technique in machine
learning for dimensionality reduction of high-dimensional datasets. However,
PCA could lead to biased outcomes that disadvantage certain subgroups of the
underlying datasets. To address the bias issue, a Fair PCA (FPCA) model was
introduced by Samadi et al. (2018) for equalizing the reconstruction loss
between subgroups. The semidefinite relaxation (SDR) based approach proposed by
Samadi et al. (2018) is computationally expensive even for suboptimal
solutions. To improve efficiency, several alternative variants of the FPCA
model have been developed. These variants often shift the focus away from
equalizing the reconstruction loss. In this paper, we identify a hidden
convexity in the FPCA model and introduce an algorithm for convex optimization
via eigenvalue optimization. Our approach achieves the desired fairness in
reconstruction loss without sacrificing performance. As demonstrated in
real-world datasets, the proposed FPCA algorithm runs $8\times$ faster than the
SDR-based algorithm, and only at most 85% slower than the standard PCA.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 02:13:20 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shen",
"Junhui",
""
],
[
"Davis",
"Aaron J.",
""
],
[
"Lu",
"Ding",
""
],
[
"Bai",
"Zhaojun",
""
]
]
| TITLE: Hidden Convexity of Fair PCA and Fast Solver via Eigenvalue Optimization
ABSTRACT: Principal Component Analysis (PCA) is a foundational technique in machine
learning for dimensionality reduction of high-dimensional datasets. However,
PCA could lead to biased outcomes that disadvantage certain subgroups of the
underlying datasets. To address the bias issue, a Fair PCA (FPCA) model was
introduced by Samadi et al. (2018) for equalizing the reconstruction loss
between subgroups. The semidefinite relaxation (SDR) based approach proposed by
Samadi et al. (2018) is computationally expensive even for suboptimal
solutions. To improve efficiency, several alternative variants of the FPCA
model have been developed. These variants often shift the focus away from
equalizing the reconstruction loss. In this paper, we identify a hidden
convexity in the FPCA model and introduce an algorithm for convex optimization
via eigenvalue optimization. Our approach achieves the desired fairness in
reconstruction loss without sacrificing performance. As demonstrated in
real-world datasets, the proposed FPCA algorithm runs $8\times$ faster than the
SDR-based algorithm, and only at most 85% slower than the standard PCA.
| no_new_dataset | 0.95297 |
2503.00309 | Yuxin Yang | Yuxin Yang, Haoyang Wu, Tao Wang, Jia Yang, Hao Ma, Guojie Luo | Pseudo-Knowledge Graph: Meta-Path Guided Retrieval and In-Graph Text for
RAG-Equipped LLM | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of Large Language Models (LLMs) has revolutionized natural
language processing. However, these models face challenges in retrieving
precise information from vast datasets. Retrieval-Augmented Generation (RAG)
was developed to combining LLMs with external information retrieval systems to
enhance the accuracy and context of responses. Despite improvements, RAG still
struggles with comprehensive retrieval in high-volume, low-information-density
databases and lacks relational awareness, leading to fragmented answers.
To address this, this paper introduces the Pseudo-Knowledge Graph (PKG)
framework, designed to overcome these limitations by integrating Meta-path
Retrieval, In-graph Text and Vector Retrieval into LLMs. By preserving natural
language text and leveraging various retrieval techniques, the PKG offers a
richer knowledge representation and improves accuracy in information retrieval.
Extensive evaluations using Open Compass and MultiHop-RAG benchmarks
demonstrate the framework's effectiveness in managing large volumes of data and
complex relationships.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 02:39:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yang",
"Yuxin",
""
],
[
"Wu",
"Haoyang",
""
],
[
"Wang",
"Tao",
""
],
[
"Yang",
"Jia",
""
],
[
"Ma",
"Hao",
""
],
[
"Luo",
"Guojie",
""
]
]
| TITLE: Pseudo-Knowledge Graph: Meta-Path Guided Retrieval and In-Graph Text for
RAG-Equipped LLM
ABSTRACT: The advent of Large Language Models (LLMs) has revolutionized natural
language processing. However, these models face challenges in retrieving
precise information from vast datasets. Retrieval-Augmented Generation (RAG)
was developed to combine LLMs with external information retrieval systems to
enhance the accuracy and context of responses. Despite improvements, RAG still
struggles with comprehensive retrieval in high-volume, low-information-density
databases and lacks relational awareness, leading to fragmented answers.
To address this, this paper introduces the Pseudo-Knowledge Graph (PKG)
framework, designed to overcome these limitations by integrating Meta-path
Retrieval, In-graph Text and Vector Retrieval into LLMs. By preserving natural
language text and leveraging various retrieval techniques, the PKG offers a
richer knowledge representation and improves accuracy in information retrieval.
Extensive evaluations using Open Compass and MultiHop-RAG benchmarks
demonstrate the framework's effectiveness in managing large volumes of data and
complex relationships.
| no_new_dataset | 0.943504 |
2503.00315 | Bahadir Kocer | Chit Yuen Lam and Ronald Clark and Basaran Bahadir Kocer | XIRVIO: Critic-guided Iterative Refinement for Visual-Inertial Odometry
with Explainable Adaptive Weighting | 7 pages, 6 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce XIRVIO, a transformer-based Generative Adversarial Network (GAN)
framework for monocular visual inertial odometry (VIO). By taking sequences of
images and 6-DoF inertial measurements as inputs, XIRVIO's generator predicts
pose trajectories through an iterative refinement process which are then
evaluated by the critic to select the iteration with the optimised prediction.
Additionally, the self-emergent adaptive sensor weighting reveals how XIRVIO
attends to each sensory input based on contextual cues in the data, making it a
promising approach for achieving explainability in safety-critical VIO
applications. Evaluations on the KITTI dataset demonstrate that XIRVIO matches
well-known state-of-the-art learning-based methods in terms of both translation
and rotation errors.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 03:01:22 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lam",
"Chit Yuen",
""
],
[
"Clark",
"Ronald",
""
],
[
"Kocer",
"Basaran Bahadir",
""
]
]
| TITLE: XIRVIO: Critic-guided Iterative Refinement for Visual-Inertial Odometry
with Explainable Adaptive Weighting
ABSTRACT: We introduce XIRVIO, a transformer-based Generative Adversarial Network (GAN)
framework for monocular visual inertial odometry (VIO). By taking sequences of
images and 6-DoF inertial measurements as inputs, XIRVIO's generator predicts
pose trajectories through an iterative refinement process which are then
evaluated by the critic to select the iteration with the optimised prediction.
Additionally, the self-emergent adaptive sensor weighting reveals how XIRVIO
attends to each sensory input based on contextual cues in the data, making it a
promising approach for achieving explainability in safety-critical VIO
applications. Evaluations on the KITTI dataset demonstrate that XIRVIO matches
well-known state-of-the-art learning-based methods in terms of both translation
and rotation errors.
| no_new_dataset | 0.943034 |
2503.00324 | Sk Tanzir Mehedi | Sk Tanzir Mehedi, Chadni Islam, Gowri Ramachandran, and Raja Jurdak | DySec: A Machine Learning-based Dynamic Analysis for Detecting Malicious
Packages in PyPI Ecosystem | null | null | null | null | cs.CR cs.SE | http://creativecommons.org/licenses/by/4.0/ | Malicious Python packages make software supply chains vulnerable by
exploiting trust in open-source repositories like Python Package Index (PyPI).
Lack of real-time behavioral monitoring makes metadata inspection and static
code analysis inadequate against advanced attack strategies such as
typosquatting, covert remote access activation, and dynamic payload generation.
To address these challenges, we introduce DySec, a machine learning (ML)-based
dynamic analysis framework for PyPI that uses eBPF kernel and user-level probes
to monitor behaviors during package installation. By capturing 36 real-time
features-including system calls, network traffic, resource usage, directory
access, and installation patterns-DySec detects threats like typosquatting,
covert remote access activation, dynamic payload generation, and multiphase
attack malware. We developed a comprehensive dataset of 14,271 Python packages,
including 7,127 malicious sample traces, by executing them in a controlled
isolated environment. Experimental results demonstrate that DySec achieves a
95.99\% detection accuracy with a latency of <0.5s, reducing false negatives by
78.65\% compared to static analysis and 82.24\% compared to metadata analysis.
During the evaluation, DySec flagged 11 packages that PyPI classified as
benign. A manual analysis, including installation behavior inspection,
confirmed six of them as malicious. These findings were reported to PyPI
maintainers, resulting in the removal of four packages. DySec bridges the gap
between reactive traditional methods and proactive, scalable threat mitigation
in open-source ecosystems by uniquely detecting malicious install-time
behaviors.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 03:20:42 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Mehedi",
"Sk Tanzir",
""
],
[
"Islam",
"Chadni",
""
],
[
"Ramachandran",
"Gowri",
""
],
[
"Jurdak",
"Raja",
""
]
]
| TITLE: DySec: A Machine Learning-based Dynamic Analysis for Detecting Malicious
Packages in PyPI Ecosystem
ABSTRACT: Malicious Python packages make software supply chains vulnerable by
exploiting trust in open-source repositories like Python Package Index (PyPI).
Lack of real-time behavioral monitoring makes metadata inspection and static
code analysis inadequate against advanced attack strategies such as
typosquatting, covert remote access activation, and dynamic payload generation.
To address these challenges, we introduce DySec, a machine learning (ML)-based
dynamic analysis framework for PyPI that uses eBPF kernel and user-level probes
to monitor behaviors during package installation. By capturing 36 real-time
features-including system calls, network traffic, resource usage, directory
access, and installation patterns-DySec detects threats like typosquatting,
covert remote access activation, dynamic payload generation, and multiphase
attack malware. We developed a comprehensive dataset of 14,271 Python packages,
including 7,127 malicious sample traces, by executing them in a controlled
isolated environment. Experimental results demonstrate that DySec achieves a
95.99\% detection accuracy with a latency of <0.5s, reducing false negatives by
78.65\% compared to static analysis and 82.24\% compared to metadata analysis.
During the evaluation, DySec flagged 11 packages that PyPI classified as
benign. A manual analysis, including installation behavior inspection,
confirmed six of them as malicious. These findings were reported to PyPI
maintainers, resulting in the removal of four packages. DySec bridges the gap
between reactive traditional methods and proactive, scalable threat mitigation
in open-source ecosystems by uniquely detecting malicious install-time
behaviors.
| new_dataset | 0.955651 |
2503.00329 | Benjamin Schneider | Benjamin Schneider, Florian Kerschbaum, Wenhu Chen | ABC: Achieving Better Control of Multimodal Embeddings using VLMs | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Visual embedding models excel at zero-shot tasks like visual retrieval and
classification. However, these models cannot be used for tasks that contain
ambiguity or require user instruction. These tasks necessitate a multimodal
embedding model, which outputs embeddings that combine visual and natural
language input. Existing CLIP-based approaches embed images and text
independently, and fuse the result. We find that this results in weak
interactions between modalities, and poor user control over the representation.
We introduce ABC, an open-source multimodal embedding model that uses a
vision-language model backbone to deeply integrate image features with natural
language instructions. ABC achieves best-for-size performance on MSCOCO
image-to-text retrieval and is the top performing model on classification and
VQA tasks in the Massive Multimodal Embedding Benchmark. With a strongly
unified vision-language representation, ABC can use natural language to solve
subtle and potentially ambiguous visual retrieval problems. To evaluate this
capability, we design CtrlBench, a benchmark that requires interleaving textual
instructions with image content for correct retrieval. ABC advances the state
of multimodal embeddings by offering high-quality representations and flexible
natural language control. Our model and datasets are available at our project
page.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 03:29:02 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Schneider",
"Benjamin",
""
],
[
"Kerschbaum",
"Florian",
""
],
[
"Chen",
"Wenhu",
""
]
]
| TITLE: ABC: Achieving Better Control of Multimodal Embeddings using VLMs
ABSTRACT: Visual embedding models excel at zero-shot tasks like visual retrieval and
classification. However, these models cannot be used for tasks that contain
ambiguity or require user instruction. These tasks necessitate a multimodal
embedding model, which outputs embeddings that combine visual and natural
language input. Existing CLIP-based approaches embed images and text
independently, and fuse the result. We find that this results in weak
interactions between modalities, and poor user control over the representation.
We introduce ABC, an open-source multimodal embedding model that uses a
vision-language model backbone to deeply integrate image features with natural
language instructions. ABC achieves best-for-size performance on MSCOCO
image-to-text retrieval and is the top performing model on classification and
VQA tasks in the Massive Multimodal Embedding Benchmark. With a strongly
unified vision-language representation, ABC can use natural language to solve
subtle and potentially ambiguous visual retrieval problems. To evaluate this
capability, we design CtrlBench, a benchmark that requires interleaving textual
instructions with image content for correct retrieval. ABC advances the state
of multimodal embeddings by offering high-quality representations and flexible
natural language control. Our model and datasets are available at our project
page.
| no_new_dataset | 0.768299 |
2503.00331 | Ahmad Gholizadeh Lonbar Mr. | Hajar Kazemi Naeini, Roya Shomali, Abolhassan Pishahang, Hamidreza
Hasanzadeh, Mahdieh Mohammadi, Saeid Asadi, Ahmad Gholizadeh Lonbar | PINN-DT: Optimizing Energy Consumption in Smart Building Using Hybrid
Physics-Informed Neural Networks and Digital Twin Framework with Blockchain
Security | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advancement of smart grid technologies necessitates the integration of
cutting-edge computational methods to enhance predictive energy optimization.
This study proposes a multi-faceted approach by incorporating (1) Deep
Reinforcement Learning (DRL) agents trained using data from Digital Twins (DTs)
to optimize energy consumption in real time, (2) Physics-Informed Neural
Networks (PINNs) to seamlessly embed physical laws within the optimization
process, ensuring model accuracy and interpretability, and (3) Blockchain (BC)
technology to facilitate secure and transparent communication across the smart
grid infrastructure. The model was trained and validated using comprehensive
datasets, including smart meter energy consumption data, renewable energy
outputs, dynamic pricing, and user preferences collected from IoT devices. The
proposed framework achieved superior predictive performance with a Mean
Absolute Error (MAE) of 0.237 kWh, Root Mean Square Error (RMSE) of 0.298 kWh,
and an R-squared (R2) value of 0.978, indicating a 97.8% explanation of data
variance. Classification metrics further demonstrated the model's robustness,
achieving 97.7% accuracy, 97.8% precision, 97.6% recall, and an F1 Score of
97.7%. Comparative analysis with traditional models like Linear Regression,
Random Forest, SVM, LSTM, and XGBoost revealed the superior accuracy and
real-time adaptability of the proposed method. In addition to enhancing energy
efficiency, the model reduced energy costs by 35%, maintained a 96% user
comfort index, and increased renewable energy utilization to 40%. This study
demonstrates the transformative potential of integrating PINNs, DT, and
Blockchain technologies to optimize energy consumption in smart grids, paving
the way for sustainable, secure, and efficient energy management systems.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 03:37:09 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Naeini",
"Hajar Kazemi",
""
],
[
"Shomali",
"Roya",
""
],
[
"Pishahang",
"Abolhassan",
""
],
[
"Hasanzadeh",
"Hamidreza",
""
],
[
"Mohammadi",
"Mahdieh",
""
],
[
"Asadi",
"Saeid",
""
],
[
"Lonbar",
"Ahmad Gholizadeh",
""
]
]
| TITLE: PINN-DT: Optimizing Energy Consumption in Smart Building Using Hybrid
Physics-Informed Neural Networks and Digital Twin Framework with Blockchain
Security
ABSTRACT: The advancement of smart grid technologies necessitates the integration of
cutting-edge computational methods to enhance predictive energy optimization.
This study proposes a multi-faceted approach by incorporating (1) Deep
Reinforcement Learning (DRL) agents trained using data from Digital Twins (DTs)
to optimize energy consumption in real time, (2) Physics-Informed Neural
Networks (PINNs) to seamlessly embed physical laws within the optimization
process, ensuring model accuracy and interpretability, and (3) Blockchain (BC)
technology to facilitate secure and transparent communication across the smart
grid infrastructure. The model was trained and validated using comprehensive
datasets, including smart meter energy consumption data, renewable energy
outputs, dynamic pricing, and user preferences collected from IoT devices. The
proposed framework achieved superior predictive performance with a Mean
Absolute Error (MAE) of 0.237 kWh, Root Mean Square Error (RMSE) of 0.298 kWh,
and an R-squared (R2) value of 0.978, indicating a 97.8% explanation of data
variance. Classification metrics further demonstrated the model's robustness,
achieving 97.7% accuracy, 97.8% precision, 97.6% recall, and an F1 Score of
97.7%. Comparative analysis with traditional models like Linear Regression,
Random Forest, SVM, LSTM, and XGBoost revealed the superior accuracy and
real-time adaptability of the proposed method. In addition to enhancing energy
efficiency, the model reduced energy costs by 35%, maintained a 96% user
comfort index, and increased renewable energy utilization to 40%. This study
demonstrates the transformative potential of integrating PINNs, DT, and
Blockchain technologies to optimize energy consumption in smart grids, paving
the way for sustainable, secure, and efficient energy management systems.
| no_new_dataset | 0.950411 |
2503.00334 | Quanyu Dai | Quanyu Dai and Jiaren Xiao and Zhaocheng Du and Jieming Zhu and
Chengxiao Luo and Xiao-Ming Wu and Zhenhua Dong | MCNet: Monotonic Calibration Networks for Expressive Uncertainty
Calibration in Online Advertising | Accepted by WWW2025 | THE ACM WEB CONFERENCE 2025 | 10.1145/3696410.3714802 | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | In online advertising, uncertainty calibration aims to adjust a ranking
model's probability predictions to better approximate the true likelihood of an
event, e.g., a click or a conversion. However, existing calibration approaches
may lack the ability to effectively model complex nonlinear relations, consider
context features, and achieve balanced performance across different data
subsets. To tackle these challenges, we introduce a novel model called
Monotonic Calibration Networks, featuring three key designs: a monotonic
calibration function (MCF), an order-preserving regularizer, and a
field-balance regularizer. The nonlinear MCF is capable of naturally modeling
and universally approximating the intricate relations between uncalibrated
predictions and the posterior probabilities, thus being much more expressive
than existing methods. MCF can also integrate context features using a flexible
model architecture, thereby achieving context awareness. The order-preserving
and field-balance regularizers promote the monotonic relationship between
adjacent bins and the balanced calibration performance on data subsets,
respectively. Experimental results on both public and industrial datasets
demonstrate the superior performance of our method in generating
well-calibrated probability predictions.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 03:54:58 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dai",
"Quanyu",
""
],
[
"Xiao",
"Jiaren",
""
],
[
"Du",
"Zhaocheng",
""
],
[
"Zhu",
"Jieming",
""
],
[
"Luo",
"Chengxiao",
""
],
[
"Wu",
"Xiao-Ming",
""
],
[
"Dong",
"Zhenhua",
""
]
]
| TITLE: MCNet: Monotonic Calibration Networks for Expressive Uncertainty
Calibration in Online Advertising
ABSTRACT: In online advertising, uncertainty calibration aims to adjust a ranking
model's probability predictions to better approximate the true likelihood of an
event, e.g., a click or a conversion. However, existing calibration approaches
may lack the ability to effectively model complex nonlinear relations, consider
context features, and achieve balanced performance across different data
subsets. To tackle these challenges, we introduce a novel model called
Monotonic Calibration Networks, featuring three key designs: a monotonic
calibration function (MCF), an order-preserving regularizer, and a
field-balance regularizer. The nonlinear MCF is capable of naturally modeling
and universally approximating the intricate relations between uncalibrated
predictions and the posterior probabilities, thus being much more expressive
than existing methods. MCF can also integrate context features using a flexible
model architecture, thereby achieving context awareness. The order-preserving
and field-balance regularizers promote the monotonic relationship between
adjacent bins and the balanced calibration performance on data subsets,
respectively. Experimental results on both public and industrial datasets
demonstrate the superior performance of our method in generating
well-calibrated probability predictions.
| no_new_dataset | 0.946001 |
2503.00348 | Samuel Garske Mr | Samuel Garske, Konrad Heidler, Bradley Evans, KC Wong, Xiao Xiang Zhu | SHAZAM: Self-Supervised Change Monitoring for Hazard Detection and
Mapping | 20 pages, 9 figures, 3 tables, code available at:
https://github.com/WiseGamgee/SHAZAM | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing frequency of environmental hazards due to climate change
underscores the urgent need for effective monitoring systems. Current
approaches either rely on expensive labelled datasets, struggle with seasonal
variations, or require multiple observations for confirmation (which delays
detection). To address these challenges, this work presents SHAZAM -
Self-Supervised Change Monitoring for Hazard Detection and Mapping. SHAZAM uses
a lightweight conditional UNet to generate expected images of a region of
interest (ROI) for any day of the year, allowing for the direct modelling of
normal seasonal changes and the ability to distinguish potential hazards. A
modified structural similarity measure compares the generated images with
actual satellite observations to compute region-level anomaly scores and
pixel-level hazard maps. Additionally, a theoretically grounded seasonal
threshold eliminates the need for dataset-specific optimisation. Evaluated on
four diverse datasets that contain bushfires (wildfires), burned regions,
extreme and out-of-season snowfall, floods, droughts, algal blooms, and
deforestation, SHAZAM achieved F1 score improvements of between 0.066 and 0.234
over existing methods. This was achieved primarily through more effective
hazard detection (higher recall) while using only 473K parameters. SHAZAM
demonstrated superior mapping capabilities through higher spatial resolution
and improved ability to suppress background features while accentuating both
immediate and gradual hazards. SHAZAM has been established as an effective and
generalisable solution for hazard detection and mapping across different
geographical regions and a diverse range of hazards. The Python code is
available at: https://github.com/WiseGamgee/SHAZAM
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 04:45:46 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Garske",
"Samuel",
""
],
[
"Heidler",
"Konrad",
""
],
[
"Evans",
"Bradley",
""
],
[
"Wong",
"KC",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
]
| TITLE: SHAZAM: Self-Supervised Change Monitoring for Hazard Detection and
Mapping
ABSTRACT: The increasing frequency of environmental hazards due to climate change
underscores the urgent need for effective monitoring systems. Current
approaches either rely on expensive labelled datasets, struggle with seasonal
variations, or require multiple observations for confirmation (which delays
detection). To address these challenges, this work presents SHAZAM -
Self-Supervised Change Monitoring for Hazard Detection and Mapping. SHAZAM uses
a lightweight conditional UNet to generate expected images of a region of
interest (ROI) for any day of the year, allowing for the direct modelling of
normal seasonal changes and the ability to distinguish potential hazards. A
modified structural similarity measure compares the generated images with
actual satellite observations to compute region-level anomaly scores and
pixel-level hazard maps. Additionally, a theoretically grounded seasonal
threshold eliminates the need for dataset-specific optimisation. Evaluated on
four diverse datasets that contain bushfires (wildfires), burned regions,
extreme and out-of-season snowfall, floods, droughts, algal blooms, and
deforestation, SHAZAM achieved F1 score improvements of between 0.066 and 0.234
over existing methods. This was achieved primarily through more effective
hazard detection (higher recall) while using only 473K parameters. SHAZAM
demonstrated superior mapping capabilities through higher spatial resolution
and improved ability to suppress background features while accentuating both
immediate and gradual hazards. SHAZAM has been established as an effective and
generalisable solution for hazard detection and mapping across different
geographical regions and a diverse range of hazards. The Python code is
available at: https://github.com/WiseGamgee/SHAZAM
| no_new_dataset | 0.948585 |
2503.00353 | Yunfan Gao | Yunfan Gao, Yun Xiong, Wenlong Wu, Zijing Huang, Bohan Li, Haofen Wang | U-NIAH: Unified RAG and LLM Evaluation for Long Context
Needle-In-A-Haystack | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Large Language Models (LLMs) have expanded their
context windows to unprecedented lengths, sparking debates about the necessity
of Retrieval-Augmented Generation (RAG). To address the fragmented evaluation
paradigms and limited cases in existing Needle-in-a-Haystack (NIAH), this paper
introduces U-NIAH, a unified framework that systematically compares LLMs and
RAG methods in controlled long context settings. Our framework extends beyond
traditional NIAH by incorporating multi-needle, long-needle, and
needle-in-needle configurations, along with different retrieval settings, while
leveraging the synthetic Starlight Academy dataset-a fictional magical
universe-to eliminate biases from pre-trained knowledge. Through extensive
experiments, we investigate three research questions: (1) performance
trade-offs between LLMs and RAG, (2) error patterns in RAG, and (3) RAG's
limitations in complex settings. Our findings show that RAG significantly
enhances smaller LLMs by mitigating the "lost-in-the-middle" effect and
improving robustness, achieving an 82.58% win-rate over LLMs. However, we
observe that retrieval noise and reverse chunk ordering degrade performance,
while surprisingly, advanced reasoning LLMs exhibit reduced RAG compatibility
due to sensitivity to semantic distractors. We identify typical error patterns
including omission due to noise, hallucination under high noise critical
condition, and self-doubt behaviors. Our work not only highlights the
complementary roles of RAG and LLMs, but also provides actionable insights for
optimizing deployments. Code: https://github.com/Tongji-KGLLM/U-NIAH.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 05:05:24 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gao",
"Yunfan",
""
],
[
"Xiong",
"Yun",
""
],
[
"Wu",
"Wenlong",
""
],
[
"Huang",
"Zijing",
""
],
[
"Li",
"Bohan",
""
],
[
"Wang",
"Haofen",
""
]
]
| TITLE: U-NIAH: Unified RAG and LLM Evaluation for Long Context
Needle-In-A-Haystack
ABSTRACT: Recent advancements in Large Language Models (LLMs) have expanded their
context windows to unprecedented lengths, sparking debates about the necessity
of Retrieval-Augmented Generation (RAG). To address the fragmented evaluation
paradigms and limited cases in existing Needle-in-a-Haystack (NIAH), this paper
introduces U-NIAH, a unified framework that systematically compares LLMs and
RAG methods in controlled long context settings. Our framework extends beyond
traditional NIAH by incorporating multi-needle, long-needle, and
needle-in-needle configurations, along with different retrieval settings, while
leveraging the synthetic Starlight Academy dataset-a fictional magical
universe-to eliminate biases from pre-trained knowledge. Through extensive
experiments, we investigate three research questions: (1) performance
trade-offs between LLMs and RAG, (2) error patterns in RAG, and (3) RAG's
limitations in complex settings. Our findings show that RAG significantly
enhances smaller LLMs by mitigating the "lost-in-the-middle" effect and
improving robustness, achieving an 82.58% win-rate over LLMs. However, we
observe that retrieval noise and reverse chunk ordering degrade performance,
while surprisingly, advanced reasoning LLMs exhibit reduced RAG compatibility
due to sensitivity to semantic distractors. We identify typical error patterns
including omission due to noise, hallucination under high noise critical
condition, and self-doubt behaviors. Our work not only highlights the
complementary roles of RAG and LLMs, but also provides actionable insights for
optimizing deployments. Code: https://github.com/Tongji-KGLLM/U-NIAH.
| no_new_dataset | 0.941761 |
2503.00355 | Tianyi Huang | Tianyi Huang, Elsa Fan | Structured Reasoning for Fairness: A Multi-Agent Approach to Bias
Detection in Textual Data | Accepted Paper (Oral Presentation) in the Workshop on the Social
Impact of AI: Research, Diversity and Inclusion Frameworks at AAAI 2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | From disinformation spread by AI chatbots to AI recommendations that
inadvertently reinforce stereotypes, textual bias poses a significant challenge
to the trustworthiness of large language models (LLMs). In this paper, we
propose a multi-agent framework that systematically identifies biases by
disentangling each statement as fact or opinion, assigning a bias intensity
score, and providing concise, factual justifications. Evaluated on 1,500
samples from the WikiNPOV dataset, the framework achieves 84.9%
accuracy (an improvement of 13.0% over the zero-shot
baseline), demonstrating the efficacy of explicitly modeling fact
versus opinion prior to quantifying bias intensity. By combining enhanced
detection accuracy with interpretable explanations, this approach sets a
foundation for promoting fairness and accountability in modern language models.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 05:27:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Huang",
"Tianyi",
""
],
[
"Fan",
"Elsa",
""
]
]
| TITLE: Structured Reasoning for Fairness: A Multi-Agent Approach to Bias
Detection in Textual Data
ABSTRACT: From disinformation spread by AI chatbots to AI recommendations that
inadvertently reinforce stereotypes, textual bias poses a significant challenge
to the trustworthiness of large language models (LLMs). In this paper, we
propose a multi-agent framework that systematically identifies biases by
disentangling each statement as fact or opinion, assigning a bias intensity
score, and providing concise, factual justifications. Evaluated on 1,500
samples from the WikiNPOV dataset, the framework achieves 84.9%
accuracy (an improvement of 13.0% over the zero-shot
baseline), demonstrating the efficacy of explicitly modeling fact
versus opinion prior to quantifying bias intensity. By combining enhanced
detection accuracy with interpretable explanations, this approach sets a
foundation for promoting fairness and accountability in modern language models.
| no_new_dataset | 0.95297 |
2503.00356 | Kh\'anh Tran | Bao Tran, T. N. Khanh, Khang Nguyen Tuong, Thien Dang, Quang Nguyen,
Nguyen T. Thinh, Vo T. Hung | BERT-based model for Vietnamese Fact Verification Dataset | accepted for Oral Presentation in CITA 2024 (The 13th Conference on
Information Technology and Its Applications) and will be published in VOLUME
1 OF CITA 2024 (Volume of the Lecture Notes in Network and Systems, Springer) | CITA 2024, LNNS, vol. 882, Springer, 2024 | 10.1007/978-3-031-74127-2_19 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The rapid advancement of information and communication technology has
facilitated easier access to information. However, this progress has also
necessitated more stringent verification measures to ensure the accuracy of
information, particularly within the context of Vietnam. This paper introduces
an approach to address the challenges of Fact Verification using the Vietnamese
dataset by integrating both sentence selection and classification modules into
a unified network architecture. The proposed approach leverages the power of
large language models by utilizing pre-trained PhoBERT and XLM-RoBERTa as the
backbone of the network. The proposed model was trained on a Vietnamese
dataset, named ISE-DSC01, and demonstrated superior performance compared to the
baseline model across all three metrics. Notably, we achieved a Strict Accuracy
level of 75.11\%, indicating a remarkable 28.83\% improvement over the baseline
model.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 05:31:04 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Tran",
"Bao",
""
],
[
"Khanh",
"T. N.",
""
],
[
"Tuong",
"Khang Nguyen",
""
],
[
"Dang",
"Thien",
""
],
[
"Nguyen",
"Quang",
""
],
[
"Thinh",
"Nguyen T.",
""
],
[
"Hung",
"Vo T.",
""
]
]
| TITLE: BERT-based model for Vietnamese Fact Verification Dataset
ABSTRACT: The rapid advancement of information and communication technology has
facilitated easier access to information. However, this progress has also
necessitated more stringent verification measures to ensure the accuracy of
information, particularly within the context of Vietnam. This paper introduces
an approach to address the challenges of Fact Verification using the Vietnamese
dataset by integrating both sentence selection and classification modules into
a unified network architecture. The proposed approach leverages the power of
large language models by utilizing pre-trained PhoBERT and XLM-RoBERTa as the
backbone of the network. The proposed model was trained on a Vietnamese
dataset, named ISE-DSC01, and demonstrated superior performance compared to the
baseline model across all three metrics. Notably, we achieved a Strict Accuracy
level of 75.11\%, indicating a remarkable 28.83\% improvement over the baseline
model.
| no_new_dataset | 0.940681 |
2503.00358 | Smruti Dash | Smruti P. Dash, Kedar V. Khandeparkar, Nipun Agrawal | CRUPL: A Semi-Supervised Cyber Attack Detection with Consistency
Regularization and Uncertainty-aware Pseudo-Labeling in Smart Grid | 20 pages, 5 figures | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The modern power grids are integrated with digital technologies and
automation systems. The inclusion of digital technologies has made the smart
grids vulnerable to cyber-attacks. Cyberattacks on smart grids can compromise
data integrity and jeopardize the reliability of the power supply. Traditional
intrusion detection systems often struggle to effectively detect novel and
sophisticated attacks due to their reliance on labeled training data, which may
only encompass part of the spectrum of potential threats. This work proposes a
semi-supervised method for cyber-attack detection in smart grids by leveraging
the labeled and unlabeled measurement data. We implement consistency
regularization and pseudo-labeling to identify deviations from expected
behavior and predict the attack classes. We use a curriculum learning approach
to improve pseudo-labeling performance, capturing the model uncertainty. We
demonstrate the efficiency of the proposed method in detecting different types
of cyberattacks, minimizing the false positives by implementing them on
publicly available datasets. The method provides a promising solution by
improving the detection accuracy to 99% in the presence of unknown samples and
significantly reducing false positives.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 05:49:23 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dash",
"Smruti P.",
""
],
[
"Khandeparkar",
"Kedar V.",
""
],
[
"Agrawal",
"Nipun",
""
]
]
| TITLE: CRUPL: A Semi-Supervised Cyber Attack Detection with Consistency
Regularization and Uncertainty-aware Pseudo-Labeling in Smart Grid
ABSTRACT: The modern power grids are integrated with digital technologies and
automation systems. The inclusion of digital technologies has made the smart
grids vulnerable to cyber-attacks. Cyberattacks on smart grids can compromise
data integrity and jeopardize the reliability of the power supply. Traditional
intrusion detection systems often struggle to effectively detect novel and
sophisticated attacks due to their reliance on labeled training data, which may
only encompass part of the spectrum of potential threats. This work proposes a
semi-supervised method for cyber-attack detection in smart grids by leveraging
the labeled and unlabeled measurement data. We implement consistency
regularization and pseudo-labeling to identify deviations from expected
behavior and predict the attack classes. We use a curriculum learning approach
to improve pseudo-labeling performance, capturing the model uncertainty. We
demonstrate the efficiency of the proposed method in detecting different types
of cyberattacks, minimizing the false positives by implementing them on
publicly available datasets. The method provides a promising solution by
improving the detection accuracy to 99% in the presence of unknown samples and
significantly reducing false positives.
| no_new_dataset | 0.944485 |
2503.00364 | Yaowei Guo | Yaowei Guo, Jiazheng Xing, Xiaojun Hou, Shuo Xin, Juntao Jiang,
Demetri Terzopoulos, Chenfanfu Jiang, Yong Liu | CFSum: A Transformer-Based Multi-Modal Video Summarization Framework
With Coarse-Fine Fusion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video summarization, by selecting the most informative and/or user-relevant
parts of original videos to create concise summary videos, has high research
value and consumer demand in today's video proliferation era. Multi-modal video
summarization that accommodates user input has become a research hotspot.
However, current multi-modal video summarization methods suffer from two
limitations. First, existing methods inadequately fuse information from
different modalities and cannot effectively utilize modality-unique features.
Second, most multi-modal methods focus on video and text modalities, neglecting
the audio modality, despite the fact that audio information can be very useful
in certain types of videos. In this paper we propose CFSum, a transformer-based
multi-modal video summarization framework with coarse-fine fusion. CFSum
exploits video, text, and audio modal features as input, and incorporates a
two-stage transformer-based feature fusion framework to fully utilize
modality-unique information. In the first stage, multi-modal features are fused
simultaneously to perform initial coarse-grained feature fusion, then, in the
second stage, video and audio features are explicitly attended with the text
representation yielding more fine-grained information interaction. The CFSum
architecture gives equal importance to each modality, ensuring that each modal
feature interacts deeply with the other modalities. Our extensive comparative
experiments against prior methods and ablation studies on various datasets
confirm the effectiveness and superiority of CFSum.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 06:13:13 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guo",
"Yaowei",
""
],
[
"Xing",
"Jiazheng",
""
],
[
"Hou",
"Xiaojun",
""
],
[
"Xin",
"Shuo",
""
],
[
"Jiang",
"Juntao",
""
],
[
"Terzopoulos",
"Demetri",
""
],
[
"Jiang",
"Chenfanfu",
""
],
[
"Liu",
"Yong",
""
]
]
| TITLE: CFSum: A Transformer-Based Multi-Modal Video Summarization Framework
With Coarse-Fine Fusion
ABSTRACT: Video summarization, by selecting the most informative and/or user-relevant
parts of original videos to create concise summary videos, has high research
value and consumer demand in today's video proliferation era. Multi-modal video
summarization that accommodates user input has become a research hotspot.
However, current multi-modal video summarization methods suffer from two
limitations. First, existing methods inadequately fuse information from
different modalities and cannot effectively utilize modality-unique features.
Second, most multi-modal methods focus on video and text modalities, neglecting
the audio modality, despite the fact that audio information can be very useful
in certain types of videos. In this paper we propose CFSum, a transformer-based
multi-modal video summarization framework with coarse-fine fusion. CFSum
exploits video, text, and audio modal features as input, and incorporates a
two-stage transformer-based feature fusion framework to fully utilize
modality-unique information. In the first stage, multi-modal features are fused
simultaneously to perform initial coarse-grained feature fusion, then, in the
second stage, video and audio features are explicitly attended with the text
representation yielding more fine-grained information interaction. The CFSum
architecture gives equal importance to each modality, ensuring that each modal
feature interacts deeply with the other modalities. Our extensive comparative
experiments against prior methods and ablation studies on various datasets
confirm the effectiveness and superiority of CFSum.
| no_new_dataset | 0.950595 |
2503.00366 | Maziar Sabouri | Maziar Sabouri, Ghasem Hajianfar, Alireza Rafiei Sardouei, Milad
Yazdani, Azin Asadzadeh, Soroush Bagheri, Mohsen Arabi, Seyed Rasoul Zakavi,
Emran Askari, Atena Aghaee, Dena Shahriari, Habib Zaidi, Arman Rahmim | AI-Augmented Thyroid Scintigraphy for Robust Classification | null | null | null | null | physics.med-ph cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Thyroid scintigraphy is a key imaging modality for diagnosing thyroid
disorders. Deep learning models for thyroid scintigraphy classification often
face challenges due to limited and imbalanced datasets, leading to suboptimal
generalization. In this study, we investigate the effectiveness of different
data augmentation techniques including Stable Diffusion (SD), Flow Matching
(FM), and Conventional Augmentation (CA) to enhance the performance of a
ResNet18 classifier for thyroid condition classification. Our results showed
that FM-based augmentation consistently outperforms SD-based approaches,
particularly when combined with original (O) data and CA (O+FM+CA), achieving
both high accuracy and fair classification across Diffuse Goiter (DG), Nodular
Goiter (NG), Normal (NL), and Thyroiditis (TI) cases. The Wilcoxon statistical
analysis further validated the superiority of O+FM and its variants (O+FM+CA)
over SD-based augmentations in most scenarios. These findings highlight the
potential of FM-based augmentation as a superior approach for generating
high-quality synthetic thyroid scintigraphy images and improving model
generalization in medical image classification.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 06:21:46 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Sabouri",
"Maziar",
""
],
[
"Hajianfar",
"Ghasem",
""
],
[
"Sardouei",
"Alireza Rafiei",
""
],
[
"Yazdani",
"Milad",
""
],
[
"Asadzadeh",
"Azin",
""
],
[
"Bagheri",
"Soroush",
""
],
[
"Arabi",
"Mohsen",
""
],
[
"Zakavi",
"Seyed Rasoul",
""
],
[
"Askari",
"Emran",
""
],
[
"Aghaee",
"Atena",
""
],
[
"Shahriari",
"Dena",
""
],
[
"Zaidi",
"Habib",
""
],
[
"Rahmim",
"Arman",
""
]
]
| TITLE: AI-Augmented Thyroid Scintigraphy for Robust Classification
ABSTRACT: Thyroid scintigraphy is a key imaging modality for diagnosing thyroid
disorders. Deep learning models for thyroid scintigraphy classification often
face challenges due to limited and imbalanced datasets, leading to suboptimal
generalization. In this study, we investigate the effectiveness of different
data augmentation techniques including Stable Diffusion (SD), Flow Matching
(FM), and Conventional Augmentation (CA) to enhance the performance of a
ResNet18 classifier for thyroid condition classification. Our results showed
that FM-based augmentation consistently outperforms SD-based approaches,
particularly when combined with original (O) data and CA (O+FM+CA), achieving
both high accuracy and fair classification across Diffuse Goiter (DG), Nodular
Goiter (NG), Normal (NL), and Thyroiditis (TI) cases. The Wilcoxon statistical
analysis further validated the superiority of O+FM and its variants (O+FM+CA)
over SD-based augmentations in most scenarios. These findings highlight the
potential of FM-based augmentation as a superior approach for generating
high-quality synthetic thyroid scintigraphy images and improving model
generalization in medical image classification.
| no_new_dataset | 0.94699 |
2503.00376 | Yingchao Zhang | Yingchao Zhang and Cheng Liu | Few-shot crack image classification using clip based on bayesian
optimization | 5 pages, 5 figures, 3 tables, submit to the 1st International
Workshop on Bayesian Approach in Civil Engineering (IWOBA 2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study proposes a novel few-shot crack image classification model based
on CLIP and Bayesian optimization. By combining multimodal information and
Bayesian approach, the model achieves efficient classification of crack images
in a small number of training samples. The CLIP model employs its robust
feature extraction capabilities to facilitate precise classification with a
limited number of samples. In contrast, Bayesian optimisation enhances the
robustness and generalization of the model, while reducing the reliance on
extensive labelled data. The results demonstrate that the model exhibits robust
performance across a diverse range of dataset scales, particularly in the
context of small sample sets. The study validates the potential of the method
in civil engineering crack classification.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 07:04:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Yingchao",
""
],
[
"Liu",
"Cheng",
""
]
]
| TITLE: Few-shot crack image classification using clip based on bayesian
optimization
ABSTRACT: This study proposes a novel few-shot crack image classification model based
on CLIP and Bayesian optimization. By combining multimodal information and
Bayesian approach, the model achieves efficient classification of crack images
in a small number of training samples. The CLIP model employs its robust
feature extraction capabilities to facilitate precise classification with a
limited number of samples. In contrast, Bayesian optimisation enhances the
robustness and generalization of the model, while reducing the reliance on
extensive labelled data. The results demonstrate that the model exhibits robust
performance across a diverse range of dataset scales, particularly in the
context of small sample sets. The study validates the potential of the method
in civil engineering crack classification.
| no_new_dataset | 0.955236 |
2503.00378 | Rickard Br\"annvall | Rickard Br\"annvall | Conditioning on Local Statistics for Scalable Heterogeneous Federated
Learning | 7 pages, 2 figures, 7 tables | null | null | null | cs.LG cs.AI cs.CR cs.DC | http://creativecommons.org/licenses/by/4.0/ | Federated learning is a distributed machine learning approach where multiple
clients collaboratively train a model without sharing their local data, which
contributes to preserving privacy. A challenge in federated learning is
managing heterogeneous data distributions across clients, which can hinder
model convergence and performance due to the need for the global model to
generalize well across diverse local datasets. We propose to use local
characteristic statistics, by which we mean some statistical properties
calculated independently by each client using only their local training
dataset. These statistics, such as means, covariances, and higher moments, are
used to capture the characteristics of the local data distribution. They are
not shared with other clients or a central node. During training, these local
statistics help the model learn how to condition on the local data
distribution, and during inference, they guide the client's predictions. Our
experiments show that this approach allows for efficient handling of
heterogeneous data across the federation, has favorable scaling compared to
approaches that directly try to identify peer nodes that share distribution
characteristics, and maintains privacy as no additional information needs to be
communicated.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 07:10:58 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Brännvall",
"Rickard",
""
]
]
| TITLE: Conditioning on Local Statistics for Scalable Heterogeneous Federated
Learning
ABSTRACT: Federated learning is a distributed machine learning approach where multiple
clients collaboratively train a model without sharing their local data, which
contributes to preserving privacy. A challenge in federated learning is
managing heterogeneous data distributions across clients, which can hinder
model convergence and performance due to the need for the global model to
generalize well across diverse local datasets. We propose to use local
characteristic statistics, by which we mean some statistical properties
calculated independently by each client using only their local training
dataset. These statistics, such as means, covariances, and higher moments, are
used to capture the characteristics of the local data distribution. They are
not shared with other clients or a central node. During training, these local
statistics help the model learn how to condition on the local data
distribution, and during inference, they guide the client's predictions. Our
experiments show that this approach allows for efficient handling of
heterogeneous data across the federation, has favorable scaling compared to
approaches that directly try to identify peer nodes that share distribution
characteristics, and maintains privacy as no additional information needs to be
communicated.
| no_new_dataset | 0.948298 |
2503.00384 | Nandish Chattopadhyay | Nandish Chattopadhyay, Abdul Basit, Bassem Ouni, Muhammad Shafique | A Survey of Adversarial Defenses in Vision-based Systems:
Categorization, Methods and Challenges | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial attacks have emerged as a major challenge to the trustworthy
deployment of machine learning models, particularly in computer vision
applications. These attacks have a varied level of potency and can be
implemented in both white box and black box approaches. Practical attacks
include methods to manipulate the physical world and enforce adversarial
behaviour by the corresponding target neural network models. Multiple different
approaches to mitigate different kinds of such attacks are available in the
literature, each with their own advantages and limitations. In this survey, we
present a comprehensive systematization of knowledge on adversarial defenses,
focusing on two key computer vision tasks: image classification and object
detection. We review the state-of-the-art adversarial defense techniques and
categorize them for easier comparison. In addition, we provide a schematic
representation of these categories within the context of the overall machine
learning pipeline, facilitating clearer understanding and benchmarking of
defenses. Furthermore, we map these defenses to the types of adversarial
attacks and datasets where they are most effective, offering practical insights
for researchers and practitioners. This study is necessary for understanding
the scope of how the available defenses are able to address the adversarial
threats, and their shortcomings as well, which is necessary for driving the
research in this area in the most appropriate direction, with the aim of
building trustworthy AI systems for regular practical use-cases.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 07:17:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chattopadhyay",
"Nandish",
""
],
[
"Basit",
"Abdul",
""
],
[
"Ouni",
"Bassem",
""
],
[
"Shafique",
"Muhammad",
""
]
]
| TITLE: A Survey of Adversarial Defenses in Vision-based Systems:
Categorization, Methods and Challenges
ABSTRACT: Adversarial attacks have emerged as a major challenge to the trustworthy
deployment of machine learning models, particularly in computer vision
applications. These attacks have a varied level of potency and can be
implemented in both white box and black box approaches. Practical attacks
include methods to manipulate the physical world and enforce adversarial
behaviour by the corresponding target neural network models. Multiple different
approaches to mitigate different kinds of such attacks are available in the
literature, each with their own advantages and limitations. In this survey, we
present a comprehensive systematization of knowledge on adversarial defenses,
focusing on two key computer vision tasks: image classification and object
detection. We review the state-of-the-art adversarial defense techniques and
categorize them for easier comparison. In addition, we provide a schematic
representation of these categories within the context of the overall machine
learning pipeline, facilitating clearer understanding and benchmarking of
defenses. Furthermore, we map these defenses to the types of adversarial
attacks and datasets where they are most effective, offering practical insights
for researchers and practitioners. This study is necessary for understanding
the extent to which the available defenses are able to address adversarial
threats, as well as their shortcomings, which in turn is needed for driving the
research in this area in the most appropriate direction, with the aim of
building trustworthy AI systems for regular practical use-cases.
| no_new_dataset | 0.938857 |
2503.00389 | Yuto Shibata | Yuto Shibata, Yusuke Oumi, Go Irie, Akisato Kimura, Yoshimitsu Aoki,
Mariko Isogawa | BGM2Pose: Active 3D Human Pose Estimation with Non-Stationary Sounds | null | null | null | null | cs.CV cs.AI cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | We propose BGM2Pose, a non-invasive 3D human pose estimation method using
arbitrary music (e.g., background music) as active sensing signals. Unlike
existing approaches that significantly limit practicality by employing
intrusive chirp signals within the audible range, our method utilizes natural
music that causes minimal discomfort to humans. Estimating human poses from
standard music presents significant challenges. In contrast to sound sources
specifically designed for measurement, regular music varies in both volume and
pitch. These dynamic changes in signals caused by music are inevitably mixed
with alterations in the sound field resulting from human motion, making it hard
to extract reliable cues for pose estimation. To address these challenges,
BGM2Pose introduces a Contrastive Pose Extraction Module that employs
contrastive learning and hard negative sampling to eliminate musical components
from the recorded data, isolating the pose information. Additionally, we
propose a Frequency-wise Attention Module that enables the model to focus on
subtle acoustic variations attributable to human movement by dynamically
computing attention across frequency bands. Experiments suggest that our method
outperforms the existing methods, demonstrating substantial potential for
real-world applications. Our datasets and code will be made publicly available.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 07:32:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shibata",
"Yuto",
""
],
[
"Oumi",
"Yusuke",
""
],
[
"Irie",
"Go",
""
],
[
"Kimura",
"Akisato",
""
],
[
"Aoki",
"Yoshimitsu",
""
],
[
"Isogawa",
"Mariko",
""
]
]
| TITLE: BGM2Pose: Active 3D Human Pose Estimation with Non-Stationary Sounds
ABSTRACT: We propose BGM2Pose, a non-invasive 3D human pose estimation method using
arbitrary music (e.g., background music) as active sensing signals. Unlike
existing approaches that significantly limit practicality by employing
intrusive chirp signals within the audible range, our method utilizes natural
music that causes minimal discomfort to humans. Estimating human poses from
standard music presents significant challenges. In contrast to sound sources
specifically designed for measurement, regular music varies in both volume and
pitch. These dynamic changes in signals caused by music are inevitably mixed
with alterations in the sound field resulting from human motion, making it hard
to extract reliable cues for pose estimation. To address these challenges,
BGM2Pose introduces a Contrastive Pose Extraction Module that employs
contrastive learning and hard negative sampling to eliminate musical components
from the recorded data, isolating the pose information. Additionally, we
propose a Frequency-wise Attention Module that enables the model to focus on
subtle acoustic variations attributable to human movement by dynamically
computing attention across frequency bands. Experiments suggest that our method
outperforms the existing methods, demonstrating substantial potential for
real-world applications. Our datasets and code will be made publicly available.
| no_new_dataset | 0.939969 |
2503.00393 | Abdullah Zyarah | Abdullah M. Zyarah, Alaa M. Abdul-Hadi, and Dhireesha Kudithipudi | Reservoir Network with Structural Plasticity for Human Activity
Recognition | null | null | 10.1109/TETCI.2023.3330422 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The unprecedented dissemination of edge devices is accompanied by a growing
demand for neuromorphic chips that can process time-series data natively
without cloud support. Echo state network (ESN) is a class of recurrent neural
networks that can be used to identify unique patterns in time-series data and
predict future events. It is known for minimal computing resource requirements
and fast training, owing to the use of linear optimization solely at the
readout stage. In this work, a custom-design neuromorphic chip based on ESN
targeting edge devices is proposed. The proposed system supports various
learning mechanisms, including structural plasticity and synaptic plasticity,
locally on-chip. This provides the network with an additional degree of freedom
to continuously learn, adapt, and alter its structure and sparsity level,
ensuring high performance and continuous stability. We demonstrate the
performance of the proposed system, as well as its robustness to noise, on
real-world time-series datasets while considering various topologies of data
movement. Average accuracies of 95.95% and 85.24% are achieved on human
activity recognition and prosthetic finger control, respectively. We also
illustrate that the proposed system offers a throughput of 6x10^4 samples/sec
with a power consumption of 47.7mW on a 65nm IBM process.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 07:57:22 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zyarah",
"Abdullah M.",
""
],
[
"Abdul-Hadi",
"Alaa M.",
""
],
[
"Kudithipudi",
"Dhireesha",
""
]
]
| TITLE: Reservoir Network with Structural Plasticity for Human Activity
Recognition
ABSTRACT: The unprecedented dissemination of edge devices is accompanied by a growing
demand for neuromorphic chips that can process time-series data natively
without cloud support. Echo state network (ESN) is a class of recurrent neural
networks that can be used to identify unique patterns in time-series data and
predict future events. It is known for minimal computing resource requirements
and fast training, owing to the use of linear optimization solely at the
readout stage. In this work, a custom-design neuromorphic chip based on ESN
targeting edge devices is proposed. The proposed system supports various
learning mechanisms, including structural plasticity and synaptic plasticity,
locally on-chip. This provides the network with an additional degree of freedom
to continuously learn, adapt, and alter its structure and sparsity level,
ensuring high performance and continuous stability. We demonstrate the
performance of the proposed system, as well as its robustness to noise, on
real-world time-series datasets while considering various topologies of data
movement. Average accuracies of 95.95% and 85.24% are achieved on human
activity recognition and prosthetic finger control, respectively. We also
illustrate that the proposed system offers a throughput of 6x10^4 samples/sec
with a power consumption of 47.7mW on a 65nm IBM process.
| no_new_dataset | 0.947137 |
2503.00407 | Yuchen Li Durham.ac.uk | Fan Wan, Yuchen Li, Xueqi Qiu, Rui Sun, Leyuan Zhang, Xingyu Miao,
Tianyu Zhang, Haoran Duan, Yang Long | Asynchronous Personalized Federated Learning through Global Memorization | null | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | The proliferation of Internet of Things devices and advances in communication
technology have unleashed an explosion of personal data, amplifying privacy
concerns amid stringent regulations like GDPR and CCPA. Federated Learning
offers a privacy preserving solution by enabling collaborative model training
across decentralized devices without centralizing sensitive data. However,
statistical heterogeneity from non-independent and identically distributed
datasets and system heterogeneity due to client dropouts particularly those
with monopolistic classes severely degrade the global model's performance. To
address these challenges, we propose the Asynchronous Personalized Federated
Learning framework, which empowers clients to develop personalized models using
a server-side semantic generator. This generator, trained via data-free
knowledge transfer under global model supervision, enhances client data
diversity by producing both seen and unseen samples, the latter enabled by
Zero-Shot Learning to mitigate dropout-induced data loss. To counter the risks
of synthetic data impairing training, we introduce a decoupled model
interpolation method, ensuring robust personalization. Extensive experiments
demonstrate that AP FL significantly outperforms state-of-the-art FL methods in
tackling non-IID distributions and client dropouts, achieving superior accuracy
and resilience across diverse real-world scenarios.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 09:00:33 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wan",
"Fan",
""
],
[
"Li",
"Yuchen",
""
],
[
"Qiu",
"Xueqi",
""
],
[
"Sun",
"Rui",
""
],
[
"Zhang",
"Leyuan",
""
],
[
"Miao",
"Xingyu",
""
],
[
"Zhang",
"Tianyu",
""
],
[
"Duan",
"Haoran",
""
],
[
"Long",
"Yang",
""
]
]
| TITLE: Asynchronous Personalized Federated Learning through Global Memorization
ABSTRACT: The proliferation of Internet of Things devices and advances in communication
technology have unleashed an explosion of personal data, amplifying privacy
concerns amid stringent regulations like GDPR and CCPA. Federated Learning
offers a privacy preserving solution by enabling collaborative model training
across decentralized devices without centralizing sensitive data. However,
statistical heterogeneity from non-independent and identically distributed
datasets and system heterogeneity due to client dropouts particularly those
with monopolistic classes severely degrade the global model's performance. To
address these challenges, we propose the Asynchronous Personalized Federated
Learning framework, which empowers clients to develop personalized models using
a server-side semantic generator. This generator, trained via data-free
knowledge transfer under global model supervision, enhances client data
diversity by producing both seen and unseen samples, the latter enabled by
Zero-Shot Learning to mitigate dropout-induced data loss. To counter the risks
of synthetic data impairing training, we introduce a decoupled model
interpolation method, ensuring robust personalization. Extensive experiments
demonstrate that AP FL significantly outperforms state-of-the-art FL methods in
tackling non-IID distributions and client dropouts, achieving superior accuracy
and resilience across diverse real-world scenarios.
| no_new_dataset | 0.946001 |
2503.00410 | Zhaoyi Tian | Zhaoyi Tian, Feifeng Wang, Shiwei Wang, Zihao Zhou, Yao Zhu, Liquan
Shen | High Dynamic Range Video Compression: A Large-Scale Benchmark Dataset
and A Learned Bit-depth Scalable Compression Algorithm | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, learned video compression (LVC) is undergoing a period of rapid
development. However, due to the absence of large and high-quality high dynamic
range (HDR) video training data, LVC on HDR video is still unexplored. In this
paper, we are the first to collect a large-scale HDR video benchmark dataset,
named HDRVD2K, featuring huge quantity, diverse scenes and multiple motion
types. HDRVD2K fills gaps in video training data and facilitates the development
of LVC on HDR videos. Based on HDRVD2K, we further propose the first learned
bit-depth scalable video compression (LBSVC) network for HDR videos by
effectively exploiting bit-depth redundancy between videos of multiple dynamic
ranges. To achieve this, we first propose a compression-friendly bit-depth
enhancement module (BEM) to effectively predict original HDR videos based on
compressed tone-mapped low dynamic range (LDR) videos and dynamic range prior,
instead of reducing redundancy only through spatio-temporal predictions. Our
method greatly improves the reconstruction quality and compression performance
on HDR videos. Extensive experiments demonstrate the effectiveness of HDRVD2K
on learned HDR video compression and great compression performance of our
proposed LBSVC network. Code and dataset will be released in
https://github.com/sdkinda/HDR-Learned-Video-Coding.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 09:13:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Tian",
"Zhaoyi",
""
],
[
"Wang",
"Feifeng",
""
],
[
"Wang",
"Shiwei",
""
],
[
"Zhou",
"Zihao",
""
],
[
"Zhu",
"Yao",
""
],
[
"Shen",
"Liquan",
""
]
]
| TITLE: High Dynamic Range Video Compression: A Large-Scale Benchmark Dataset
and A Learned Bit-depth Scalable Compression Algorithm
ABSTRACT: Recently, learned video compression (LVC) is undergoing a period of rapid
development. However, due to the absence of large and high-quality high dynamic
range (HDR) video training data, LVC on HDR video is still unexplored. In this
paper, we are the first to collect a large-scale HDR video benchmark dataset,
named HDRVD2K, featuring huge quantity, diverse scenes and multiple motion
types. HDRVD2K fills gaps in video training data and facilitates the development
of LVC on HDR videos. Based on HDRVD2K, we further propose the first learned
bit-depth scalable video compression (LBSVC) network for HDR videos by
effectively exploiting bit-depth redundancy between videos of multiple dynamic
ranges. To achieve this, we first propose a compression-friendly bit-depth
enhancement module (BEM) to effectively predict original HDR videos based on
compressed tone-mapped low dynamic range (LDR) videos and dynamic range prior,
instead of reducing redundancy only through spatio-temporal predictions. Our
method greatly improves the reconstruction quality and compression performance
on HDR videos. Extensive experiments demonstrate the effectiveness of HDRVD2K
on learned HDR video compression and great compression performance of our
proposed LBSVC network. Code and dataset will be released in
https://github.com/sdkinda/HDR-Learned-Video-Coding.
| new_dataset | 0.959649 |
2503.00414 | Xin Lin | Xin Lin, Chong Shi, Zuopeng Yang, Haojin Tang, Zhili Zhou | SGC-Net: Stratified Granular Comparison Network for Open-Vocabulary HOI
Detection | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Recent open-vocabulary human-object interaction (OV-HOI) detection methods
primarily rely on large language models (LLMs) for generating auxiliary
descriptions and leverage knowledge distilled from CLIP to detect unseen
interaction categories. Despite their effectiveness, these methods face two
challenges: (1) feature granularity deficiency, due to reliance on last layer
visual features for text alignment, leading to the neglect of crucial
object-level details from intermediate layers; (2) semantic similarity
confusion, resulting from CLIP's inherent biases toward certain classes, while
LLM-generated descriptions based solely on labels fail to adequately capture
inter-class similarities. To address these challenges, we propose a stratified
granular comparison network. First, we introduce a granularity sensing
alignment module that aggregates global semantic features with local details,
refining interaction representations and ensuring robust alignment between
intermediate visual features and text embeddings. Second, we develop a
hierarchical group comparison module that recursively compares and groups
classes using LLMs, generating fine-grained and discriminative descriptions for
each interaction category. Experimental results on two widely-used benchmark
datasets, SWIG-HOI and HICO-DET, demonstrate that our method achieves
state-of-the-art results in OV-HOI detection. Codes will be released on
https://github.com/Phil0212/SGC-Net.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 09:26:05 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lin",
"Xin",
""
],
[
"Shi",
"Chong",
""
],
[
"Yang",
"Zuopeng",
""
],
[
"Tang",
"Haojin",
""
],
[
"Zhou",
"Zhili",
""
]
]
| TITLE: SGC-Net: Stratified Granular Comparison Network for Open-Vocabulary HOI
Detection
ABSTRACT: Recent open-vocabulary human-object interaction (OV-HOI) detection methods
primarily rely on large language models (LLMs) for generating auxiliary
descriptions and leverage knowledge distilled from CLIP to detect unseen
interaction categories. Despite their effectiveness, these methods face two
challenges: (1) feature granularity deficiency, due to reliance on last layer
visual features for text alignment, leading to the neglect of crucial
object-level details from intermediate layers; (2) semantic similarity
confusion, resulting from CLIP's inherent biases toward certain classes, while
LLM-generated descriptions based solely on labels fail to adequately capture
inter-class similarities. To address these challenges, we propose a stratified
granular comparison network. First, we introduce a granularity sensing
alignment module that aggregates global semantic features with local details,
refining interaction representations and ensuring robust alignment between
intermediate visual features and text embeddings. Second, we develop a
hierarchical group comparison module that recursively compares and groups
classes using LLMs, generating fine-grained and discriminative descriptions for
each interaction category. Experimental results on two widely-used benchmark
datasets, SWIG-HOI and HICO-DET, demonstrate that our method achieves
state-of-the-art results in OV-HOI detection. Codes will be released on
https://github.com/Phil0212/SGC-Net.
| no_new_dataset | 0.951908 |
2503.00417 | Lucky Susanto | Lucky Susanto, Musa Wijanarko, Prasetia Pratama, Zilu Tang, Fariz
Akyas, Traci Hong, Ika Idris, Alham Aji, Derry Wijaya | A Multi-Labeled Dataset for Indonesian Discourse: Examining Toxicity,
Polarization, and Demographics Information | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Polarization is defined as divisive opinions held by two or more groups on
substantive issues. As the world's third-largest democracy, Indonesia faces
growing concerns about the interplay between political polarization and online
toxicity, which is often directed at vulnerable minority groups. Despite the
importance of this issue, previous NLP research has not fully explored the
relationship between toxicity and polarization. To bridge this gap, we present
a novel multi-label Indonesian dataset that incorporates toxicity,
polarization, and annotator demographic information. Benchmarking this dataset
using BERT-base models and large language models (LLMs) shows that polarization
information enhances toxicity classification, and vice versa. Furthermore,
providing demographic information significantly improves the performance of
polarization classification.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 09:33:10 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Susanto",
"Lucky",
""
],
[
"Wijanarko",
"Musa",
""
],
[
"Pratama",
"Prasetia",
""
],
[
"Tang",
"Zilu",
""
],
[
"Akyas",
"Fariz",
""
],
[
"Hong",
"Traci",
""
],
[
"Idris",
"Ika",
""
],
[
"Aji",
"Alham",
""
],
[
"Wijaya",
"Derry",
""
]
]
| TITLE: A Multi-Labeled Dataset for Indonesian Discourse: Examining Toxicity,
Polarization, and Demographics Information
ABSTRACT: Polarization is defined as divisive opinions held by two or more groups on
substantive issues. As the world's third-largest democracy, Indonesia faces
growing concerns about the interplay between political polarization and online
toxicity, which is often directed at vulnerable minority groups. Despite the
importance of this issue, previous NLP research has not fully explored the
relationship between toxicity and polarization. To bridge this gap, we present
a novel multi-label Indonesian dataset that incorporates toxicity,
polarization, and annotator demographic information. Benchmarking this dataset
using BERT-base models and large language models (LLMs) shows that polarization
information enhances toxicity classification, and vice versa. Furthermore,
providing demographic information significantly improves the performance of
polarization classification.
| new_dataset | 0.959421 |
2503.00428 | Deepti Rawat | Deepti Rawat, Keshav Gupta, Aryamaan Basu Roy, Ravi Kiran
Sarvadevabhatla | DashCop: Automated E-ticket Generation for Two-Wheeler Traffic
Violations Using Dashcam Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Motorized two-wheelers are a prevalent and economical means of
transportation, particularly in the Asia-Pacific region. However, hazardous
driving practices such as triple riding and non-compliance with helmet
regulations contribute significantly to accident rates. Addressing these
violations through automated enforcement mechanisms can enhance traffic safety.
In this paper, we propose DashCop, an end-to-end system for automated E-ticket
generation. The system processes vehicle-mounted dashcam videos to detect
two-wheeler traffic violations. Our contributions include: (1) a novel
Segmentation and Cross-Association (SAC) module to accurately associate riders
with their motorcycles, (2) a robust cross-association-based tracking algorithm
optimized for the simultaneous presence of riders and motorcycles, and (3) the
RideSafe-400 dataset, a comprehensive annotated dashcam video dataset for
triple riding and helmet rule violations. Our system demonstrates significant
improvements in violation detection, validated through extensive evaluations on
the RideSafe-400 dataset.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 10:10:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Rawat",
"Deepti",
""
],
[
"Gupta",
"Keshav",
""
],
[
"Roy",
"Aryamaan Basu",
""
],
[
"Sarvadevabhatla",
"Ravi Kiran",
""
]
]
| TITLE: DashCop: Automated E-ticket Generation for Two-Wheeler Traffic
Violations Using Dashcam Videos
ABSTRACT: Motorized two-wheelers are a prevalent and economical means of
transportation, particularly in the Asia-Pacific region. However, hazardous
driving practices such as triple riding and non-compliance with helmet
regulations contribute significantly to accident rates. Addressing these
violations through automated enforcement mechanisms can enhance traffic safety.
In this paper, we propose DashCop, an end-to-end system for automated E-ticket
generation. The system processes vehicle-mounted dashcam videos to detect
two-wheeler traffic violations. Our contributions include: (1) a novel
Segmentation and Cross-Association (SAC) module to accurately associate riders
with their motorcycles, (2) a robust cross-association-based tracking algorithm
optimized for the simultaneous presence of riders and motorcycles, and (3) the
RideSafe-400 dataset, a comprehensive annotated dashcam video dataset for
triple riding and helmet rule violations. Our system demonstrates significant
improvements in violation detection, validated through extensive evaluations on
the RideSafe-400 dataset.
| new_dataset | 0.965218 |
2503.00433 | Paraskevi Fragopoulou | Emmanouela Kokolaki, Paraskevi Fragopoulou | Unveiling AI's Threats to Child Protection: Regulatory efforts to
Criminalize AI-Generated CSAM and Emerging Children's Rights Violations | null | null | null | null | cs.CY cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper aims to present new alarming trends in the field of child sexual
abuse through imagery, as part of SafeLine's research activities in the field
of cybercrime, child sexual abuse material and the protection of children's
rights to safe online experiences. It focuses primarily on the phenomenon of
AI-generated CSAM, sophisticated ways employed for its production which are
discussed in dark web forums and the crucial role that the open-source AI
models play in the evolution of this overwhelming phenomenon. The paper's main
contribution is a correlation analysis between the hotline's reports and domain
names identified in dark web forums, where users' discussions focus on
exchanging information specifically related to the generation of AI-CSAM. The
objective was to reveal the close connection of clear net and dark web content,
which was accomplished through the use of the ATLAS dataset of the Voyager
system. Furthermore, through the analysis of a set of posts' content drilled
from the above dataset, valuable conclusions on forum members' techniques
employed for the production of AI-generated CSAM are also drawn, while users'
views on this type of content and routes followed in order to overcome
technological barriers set with the aim of preventing malicious purposes are
also presented. As the ultimate contribution of this research, an overview of
the current legislative developments in all country members of the INHOPE
organization and the issues arising in the process of regulating AI-CSAM
is presented, shedding light on the legal challenges regarding the regulation
and limitation of the phenomenon.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 10:18:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kokolaki",
"Emmanouela",
""
],
[
"Fragopoulou",
"Paraskevi",
""
]
]
| TITLE: Unveiling AI's Threats to Child Protection: Regulatory efforts to
Criminalize AI-Generated CSAM and Emerging Children's Rights Violations
ABSTRACT: This paper aims to present new alarming trends in the field of child sexual
abuse through imagery, as part of SafeLine's research activities in the field
of cybercrime, child sexual abuse material and the protection of children's
rights to safe online experiences. It focuses primarily on the phenomenon of
AI-generated CSAM, sophisticated ways employed for its production which are
discussed in dark web forums and the crucial role that the open-source AI
models play in the evolution of this overwhelming phenomenon. The paper's main
contribution is a correlation analysis between the hotline's reports and domain
names identified in dark web forums, where users' discussions focus on
exchanging information specifically related to the generation of AI-CSAM. The
objective was to reveal the close connection of clear net and dark web content,
which was accomplished through the use of the ATLAS dataset of the Voyager
system. Furthermore, through the analysis of a set of posts' content drilled
from the above dataset, valuable conclusions on forum members' techniques
employed for the production of AI-generated CSAM are also drawn, while users'
views on this type of content and routes followed in order to overcome
technological barriers set with the aim of preventing malicious purposes are
also presented. As the ultimate contribution of this research, an overview of
the current legislative developments in all country members of the INHOPE
organization and the issues arising in the process of regulating AI-CSAM
is presented, shedding light on the legal challenges regarding the regulation
and limitation of the phenomenon.
| no_new_dataset | 0.939748 |
2503.00441 | Lixu Wang | Lixu Wang, Bingqi Shang, Yi Li, Payal Mohapatra, Wei Dong, Xiao Wang,
Qi Zhu | Split Adaptation for Pre-trained Vision Transformers | This paper has been accepted by CVPR 2025. The first two authors
contributed equally | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformers (ViTs), extensively pre-trained on large-scale datasets,
have become essential to foundation models, allowing excellent performance on
diverse downstream tasks with minimal adaptation. Consequently, there is
growing interest in adapting pre-trained ViTs across various fields, including
privacy-sensitive domains where clients are often reluctant to share their
data. Existing adaptation methods typically require direct data access,
rendering them infeasible under these constraints. A straightforward solution
may be sending the pre-trained ViT to clients for local adaptation, which poses
issues of model intellectual property protection and incurs heavy client
computation overhead. To address these issues, we propose a novel split
adaptation (SA) method that enables effective downstream adaptation while
protecting data and models. SA, inspired by split learning (SL), segments the
pre-trained ViT into a frontend and a backend, with only the frontend shared
with the client for data representation extraction. But unlike regular SL, SA
replaces frontend parameters with low-bit quantized values, preventing direct
exposure of the model. SA allows the client to add bi-level noise to the
frontend and the extracted data representations, ensuring data protection.
Accordingly, SA incorporates data-level and model-level out-of-distribution
enhancements to mitigate noise injection's impact on adaptation performance.
Our SA focuses on the challenging few-shot adaptation and adopts patch
retrieval augmentation for overfitting alleviation. Extensive experiments on
multiple datasets validate SA's superiority over state-of-the-art methods and
demonstrate its defense against advanced data reconstruction attacks while
preventing model leakage with minimal computation cost on the client side. The
source codes can be found at https://github.com/conditionWang/Split_Adaptation.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 10:38:53 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Lixu",
""
],
[
"Shang",
"Bingqi",
""
],
[
"Li",
"Yi",
""
],
[
"Mohapatra",
"Payal",
""
],
[
"Dong",
"Wei",
""
],
[
"Wang",
"Xiao",
""
],
[
"Zhu",
"Qi",
""
]
]
| TITLE: Split Adaptation for Pre-trained Vision Transformers
ABSTRACT: Vision Transformers (ViTs), extensively pre-trained on large-scale datasets,
have become essential to foundation models, allowing excellent performance on
diverse downstream tasks with minimal adaptation. Consequently, there is
growing interest in adapting pre-trained ViTs across various fields, including
privacy-sensitive domains where clients are often reluctant to share their
data. Existing adaptation methods typically require direct data access,
rendering them infeasible under these constraints. A straightforward solution
may be sending the pre-trained ViT to clients for local adaptation, which poses
issues of model intellectual property protection and incurs heavy client
computation overhead. To address these issues, we propose a novel split
adaptation (SA) method that enables effective downstream adaptation while
protecting data and models. SA, inspired by split learning (SL), segments the
pre-trained ViT into a frontend and a backend, with only the frontend shared
with the client for data representation extraction. But unlike regular SL, SA
replaces frontend parameters with low-bit quantized values, preventing direct
exposure of the model. SA allows the client to add bi-level noise to the
frontend and the extracted data representations, ensuring data protection.
Accordingly, SA incorporates data-level and model-level out-of-distribution
enhancements to mitigate noise injection's impact on adaptation performance.
Our SA focuses on the challenging few-shot adaptation and adopts patch
retrieval augmentation for overfitting alleviation. Extensive experiments on
multiple datasets validate SA's superiority over state-of-the-art methods and
demonstrate its defense against advanced data reconstruction attacks while
preventing model leakage with minimal computation cost on the client side. The
source codes can be found at https://github.com/conditionWang/Split_Adaptation.
| no_new_dataset | 0.945349 |
2503.00442 | Aniruddha Srinivas Joshi | Earnest Paul Ijjina, Aniruddha Srinivas Joshi and Goutham Kanahasabai | Detection of Customer Interested Garments in Surveillance Video using
Computer Vision | null | Proceedings of the 2020 11th International Conference on
Computing, Communication and Networking Technologies (ICCCNT) | 10.1109/ICCCNT49239.2020.9225571 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the basic requirements of humans is clothing and this approach aims to
identify the garments selected by a customer during shopping from surveillance
video. The existing approaches to detect garments were developed on western
wear using datasets of western clothing. They do not address Indian garments
due to the increased complexity. In this work, we propose a computer vision
based framework to address this problem through video surveillance. The
proposed framework uses the Mixture of Gaussians background subtraction
algorithm to identify the foreground present in a video frame. The visual
information present in this foreground is analysed using computer vision
techniques such as image segmentation to detect the various garments the
customer is interested in. The framework was tested on a dataset that
comprises CCTV videos from a garments store. When presented with raw
surveillance footage, the proposed framework demonstrated its effectiveness in
detecting the interest of the customer in choosing their garments, achieving
high precision and recall.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 10:39:50 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ijjina",
"Earnest Paul",
""
],
[
"Joshi",
"Aniruddha Srinivas",
""
],
[
"Kanahasabai",
"Goutham",
""
]
]
| TITLE: Detection of Customer Interested Garments in Surveillance Video using
Computer Vision
ABSTRACT: One of the basic requirements of humans is clothing and this approach aims to
identify the garments selected by a customer during shopping from surveillance
video. The existing approaches to detect garments were developed on western
wear using datasets of western clothing. They do not address Indian garments
due to the increased complexity. In this work, we propose a computer vision
based framework to address this problem through video surveillance. The
proposed framework uses the Mixture of Gaussians background subtraction
algorithm to identify the foreground present in a video frame. The visual
information present in this foreground is analysed using computer vision
techniques such as image segmentation to detect the various garments the
customer is interested in. The framework was tested on a dataset that
comprises CCTV videos from a garments store. When presented with raw
surveillance footage, the proposed framework demonstrated its effectiveness in
detecting the interest of the customer in choosing their garments, achieving
high precision and recall.
| new_dataset | 0.963575 |
2503.00444 | Luca Bischetti | Maddalena Bressler, Veronica Mangiaterra, Paolo Canal, Federico Frau,
Fabrizio Luciani, Biagio Scalingi, Chiara Barattieri di San Pietro, Chiara
Battaglini, Chiara Pompei, Fortunata Romeo, Luca Bischetti, and Valentina
Bambini | Figurative Archive: an open dataset and web-based application for the
study of metaphor | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Research on metaphor has steadily increased over the last decades, as this
phenomenon opens a window into a range of processes in language and cognition,
from pragmatic inference to abstraction and embodied simulation. At the same
time, the demand for rigorously constructed and extensively normed experimental
materials increased as well. Here, we present the Figurative Archive, an open
database of 997 metaphors in Italian enriched with rating and corpus-based
measures (from familiarity to lexical frequency), derived by collecting stimuli
used across 11 studies. It includes both everyday and literary metaphors,
varying in structure and semantic domains. Dataset validation comprised
correlations between familiarity and other measures. The Figurative Archive has
several aspects of novelty: it is increased in size compared to previous
resources; it includes a novel measure of inclusiveness, to comply with current
recommendations for non-discriminatory language use; it is displayed in a
web-based interface, with features for a flexible and customized consultation.
We provide guidelines for using the Archive in future metaphor studies, in the
spirit of open science.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 10:47:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bressler",
"Maddalena",
""
],
[
"Mangiaterra",
"Veronica",
""
],
[
"Canal",
"Paolo",
""
],
[
"Frau",
"Federico",
""
],
[
"Luciani",
"Fabrizio",
""
],
[
"Scalingi",
"Biagio",
""
],
[
"Pietro",
"Chiara Barattieri di San",
""
],
[
"Battaglini",
"Chiara",
""
],
[
"Pompei",
"Chiara",
""
],
[
"Romeo",
"Fortunata",
""
],
[
"Bischetti",
"Luca",
""
],
[
"Bambini",
"Valentina",
""
]
]
| TITLE: Figurative Archive: an open dataset and web-based application for the
study of metaphor
ABSTRACT: Research on metaphor has steadily increased over the last decades, as this
phenomenon opens a window into a range of processes in language and cognition,
from pragmatic inference to abstraction and embodied simulation. At the same
time, the demand for rigorously constructed and extensively normed experimental
materials increased as well. Here, we present the Figurative Archive, an open
database of 997 metaphors in Italian enriched with rating and corpus-based
measures (from familiarity to lexical frequency), derived by collecting stimuli
used across 11 studies. It includes both everyday and literary metaphors,
varying in structure and semantic domains. Dataset validation comprised
correlations between familiarity and other measures. The Figurative Archive has
several aspects of novelty: it is increased in size compared to previous
resources; it includes a novel measure of inclusiveness, to comply with current
recommendations for non-discriminatory language use; it is displayed in a
web-based interface, with features for a flexible and customized consultation.
We provide guidelines for using the Archive in future metaphor studies, in the
spirit of open science.
| new_dataset | 0.946151 |
2503.00450 | Joshua Talks | Joshua Talks, Anna Kreshuk | Ranking pre-trained segmentation models for zero-shot transferability | 11 pages, 3 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Model transfer presents a solution to the challenges of segmentation in the
microscopy community, where the immense cost of labelling sufficient training
data is a major bottleneck in the use of deep learning. With large quantities
of imaging data produced across a wide range of imaging conditions, institutes
also produce many bespoke models trained on specific source data which then get
collected in model banks or zoos. As the number of available models grows, so
does the need for an efficient and reliable model selection method for a
specific target dataset of interest. We focus on the unsupervised regime where
no labels are available for the target dataset. Building on previous work
linking model generalisation and consistency under perturbation, we propose the
first unsupervised transferability estimator for semantic and instance
segmentation tasks which doesn't require access to source training data or
target domain labels. We evaluate the method on multiple segmentation problems
across microscopy modalities, finding a strong correlation between the rankings
based on our estimator and rankings based on target dataset performance.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 11:11:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Talks",
"Joshua",
""
],
[
"Kreshuk",
"Anna",
""
]
]
| TITLE: Ranking pre-trained segmentation models for zero-shot transferability
ABSTRACT: Model transfer presents a solution to the challenges of segmentation in the
microscopy community, where the immense cost of labelling sufficient training
data is a major bottleneck in the use of deep learning. With large quantities
of imaging data produced across a wide range of imaging conditions, institutes
also produce many bespoke models trained on specific source data which then get
collected in model banks or zoos. As the number of available models grows, so
does the need for an efficient and reliable model selection method for a
specific target dataset of interest. We focus on the unsupervised regime where
no labels are available for the target dataset. Building on previous work
linking model generalisation and consistency under perturbation, we propose the
first unsupervised transferability estimator for semantic and instance
segmentation tasks which doesn't require access to source training data or
target domain labels. We evaluate the method on multiple segmentation problems
across microscopy modalities, finding a strong correlation between the rankings
based on our estimator and rankings based on target dataset performance.
| no_new_dataset | 0.950503 |
2503.00464 | David Snee | David Snee, Luca Ciucci, Arne Rubehn, Kellen Parker van Dam,
Johann-Mattis List | Unstable Grounds for Beautiful Trees? Testing the Robustness of Concept
Translations in the Compilation of Multilingual Wordlists | Submitted to the 7th Workshop on Research in Computational Linguistic
Typology and Multilingual NLP (SIGTYP) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Multilingual wordlists play a crucial role in comparative linguistics. While
many studies have been carried out to test the power of computational methods
for language subgrouping or divergence time estimation, few studies have put
the data upon which these studies are based to a rigorous test. Here, we
conduct a first experiment that tests the robustness of concept translation as
an integral part of the compilation of multilingual wordlists. Investigating
the variation in concept translations in independently compiled wordlists from
10 dataset pairs covering 9 different language families, we find that on
average, only 83% of all translations yield the same word form, while identical
forms in terms of phonetic transcriptions can only be found in 23% of all
cases. Our findings can prove important when trying to assess the uncertainty
of phylogenetic studies and the conclusions derived from them.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 12:16:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Snee",
"David",
""
],
[
"Ciucci",
"Luca",
""
],
[
"Rubehn",
"Arne",
""
],
[
"van Dam",
"Kellen Parker",
""
],
[
"List",
"Johann-Mattis",
""
]
]
| TITLE: Unstable Grounds for Beautiful Trees? Testing the Robustness of Concept
Translations in the Compilation of Multilingual Wordlists
ABSTRACT: Multilingual wordlists play a crucial role in comparative linguistics. While
many studies have been carried out to test the power of computational methods
for language subgrouping or divergence time estimation, few studies have put
the data upon which these studies are based to a rigorous test. Here, we
conduct a first experiment that tests the robustness of concept translation as
an integral part of the compilation of multilingual wordlists. Investigating
the variation in concept translations in independently compiled wordlists from
10 dataset pairs covering 9 different language families, we find that on
average, only 83% of all translations yield the same word form, while identical
forms in terms of phonetic transcriptions can only be found in 23% of all
cases. Our findings can prove important when trying to assess the uncertainty
of phylogenetic studies and the conclusions derived from them.
| no_new_dataset | 0.754463 |
2503.00467 | Xueyang Wang | Xueyang Wang, Zhixin Zheng, Jiandong Shao, Yule Duan, Liang-Jian Deng | Adaptive Rectangular Convolution for Remote Sensing Pansharpening | 8 pages, 6 figures, Accepted by CVPR | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in convolutional neural network (CNN)-based techniques
for remote sensing pansharpening have markedly enhanced image quality. However,
conventional convolutional modules in these methods have two critical
drawbacks. First, the sampling positions in convolution operations are confined
to a fixed square window. Second, the number of sampling points is preset and
remains unchanged. Given the diverse object sizes in remote sensing images,
these rigid parameters lead to suboptimal feature extraction. To overcome these
limitations, we introduce an innovative convolutional module, Adaptive
Rectangular Convolution (ARConv). ARConv adaptively learns both the height and
width of the convolutional kernel and dynamically adjusts the number of
sampling points based on the learned scale. This approach enables ARConv to
effectively capture scale-specific features of various objects within an image,
optimizing kernel sizes and sampling locations. Additionally, we propose ARNet,
a network architecture in which ARConv is the primary convolutional module.
Extensive evaluations across multiple datasets reveal the superiority of our
method in enhancing pansharpening performance over previous techniques.
Ablation studies and visualization further confirm the efficacy of ARConv.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 12:40:42 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Xueyang",
""
],
[
"Zheng",
"Zhixin",
""
],
[
"Shao",
"Jiandong",
""
],
[
"Duan",
"Yule",
""
],
[
"Deng",
"Liang-Jian",
""
]
]
| TITLE: Adaptive Rectangular Convolution for Remote Sensing Pansharpening
ABSTRACT: Recent advancements in convolutional neural network (CNN)-based techniques
for remote sensing pansharpening have markedly enhanced image quality. However,
conventional convolutional modules in these methods have two critical
drawbacks. First, the sampling positions in convolution operations are confined
to a fixed square window. Second, the number of sampling points is preset and
remains unchanged. Given the diverse object sizes in remote sensing images,
these rigid parameters lead to suboptimal feature extraction. To overcome these
limitations, we introduce an innovative convolutional module, Adaptive
Rectangular Convolution (ARConv). ARConv adaptively learns both the height and
width of the convolutional kernel and dynamically adjusts the number of
sampling points based on the learned scale. This approach enables ARConv to
effectively capture scale-specific features of various objects within an image,
optimizing kernel sizes and sampling locations. Additionally, we propose ARNet,
a network architecture in which ARConv is the primary convolutional module.
Extensive evaluations across multiple datasets reveal the superiority of our
method in enhancing pansharpening performance over previous techniques.
Ablation studies and visualization further confirm the efficacy of ARConv.
| no_new_dataset | 0.951684 |
2503.00476 | Yicong Dong | Yicong Dong, Rundong He, Guangyao Chen, Wentao Zhang, Zhongyi Han,
Jieming Shi, and Yilong Yin | G-OSR: A Comprehensive Benchmark for Graph Open-Set Recognition | 10 pages,2 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) have achieved significant success in machine
learning, with wide applications in social networks, bioinformatics, knowledge
graphs, and other fields. Most research assumes ideal closed-set environments.
However, in real-world open-set environments, graph learning models face
challenges in robustness and reliability due to unseen classes. This highlights
the need for Graph Open-Set Recognition (GOSR) methods to address these issues
and ensure effective GNN application in practical scenarios. Research in GOSR
is in its early stages, with a lack of a comprehensive benchmark spanning
diverse tasks and datasets to evaluate methods. Moreover, traditional methods,
Graph Out-of-Distribution Detection (GOODD), GOSR, and Graph Anomaly Detection
(GAD) have mostly evolved in isolation, with little exploration of their
interconnections or potential applications to GOSR. To fill these gaps, we
introduce \textbf{G-OSR}, a comprehensive benchmark for evaluating GOSR methods
at both the node and graph levels, using datasets from multiple domains to
ensure fair and standardized comparisons of effectiveness and efficiency across
traditional, GOODD, GOSR, and GAD methods. The results offer critical insights
into the generalizability and limitations of current GOSR methods and provide
valuable resources for advancing research in this field through systematic
analysis of diverse approaches.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 13:02:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dong",
"Yicong",
""
],
[
"He",
"Rundong",
""
],
[
"Chen",
"Guangyao",
""
],
[
"Zhang",
"Wentao",
""
],
[
"Han",
"Zhongyi",
""
],
[
"Shi",
"Jieming",
""
],
[
"Yin",
"Yilong",
""
]
]
| TITLE: G-OSR: A Comprehensive Benchmark for Graph Open-Set Recognition
ABSTRACT: Graph Neural Networks (GNNs) have achieved significant success in machine
learning, with wide applications in social networks, bioinformatics, knowledge
graphs, and other fields. Most research assumes ideal closed-set environments.
However, in real-world open-set environments, graph learning models face
challenges in robustness and reliability due to unseen classes. This highlights
the need for Graph Open-Set Recognition (GOSR) methods to address these issues
and ensure effective GNN application in practical scenarios. Research in GOSR
is in its early stages, with a lack of a comprehensive benchmark spanning
diverse tasks and datasets to evaluate methods. Moreover, traditional methods,
Graph Out-of-Distribution Detection (GOODD), GOSR, and Graph Anomaly Detection
(GAD) have mostly evolved in isolation, with little exploration of their
interconnections or potential applications to GOSR. To fill these gaps, we
introduce \textbf{G-OSR}, a comprehensive benchmark for evaluating GOSR methods
at both the node and graph levels, using datasets from multiple domains to
ensure fair and standardized comparisons of effectiveness and efficiency across
traditional, GOODD, GOSR, and GAD methods. The results offer critical insights
into the generalizability and limitations of current GOSR methods and provide
valuable resources for advancing research in this field through systematic
analysis of diverse approaches.
| no_new_dataset | 0.929504 |
2503.00477 | RuiQi He | Ruiqi He, Zihan Wang, Xiang Zhou | TSDW: A Tri-Stream Dynamic Weight Network for Cloth-Changing Person
Re-Identification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloth-Changing Person Re-identification (CC-ReID) aims to solve the challenge
of identifying individuals across different temporal-spatial scenarios,
viewpoints, and clothing variations. This field is gaining increasing attention
in big data research and public security domains. Existing ReID research
primarily relies on face recognition, gait semantic recognition, and
clothing-irrelevant feature identification, which perform relatively well in
scenarios with high-quality clothing change videos and images. However, these
approaches depend on either single features or simple combinations of multiple
features, making further performance improvements difficult. Additionally,
limitations such as missing facial information, challenges in gait extraction,
and inconsistent camera parameters restrict the broader application of CC-ReID.
To address the above limitations, we innovatively propose a Tri-Stream Dynamic
Weight Network (TSDW) that requires only images. This dynamic weighting network
consists of three parallel feature streams: facial features, head-limb
features, and global features. Each stream specializes in extracting its
designated features, after which a gating network dynamically fuses confidence
levels. The three parallel feature streams enhance recognition performance and
reduce the impact of any single feature failure, thereby improving model
robustness. Extensive experiments on benchmark datasets (e.g., PRCC,
Celeb-reID, VC-Clothes) demonstrate that our method significantly outperforms
existing state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 13:04:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"He",
"Ruiqi",
""
],
[
"Wang",
"Zihan",
""
],
[
"Zhou",
"Xiang",
""
]
]
| TITLE: TSDW: A Tri-Stream Dynamic Weight Network for Cloth-Changing Person
Re-Identification
ABSTRACT: Cloth-Changing Person Re-identification (CC-ReID) aims to solve the challenge
of identifying individuals across different temporal-spatial scenarios,
viewpoints, and clothing variations. This field is gaining increasing attention
in big data research and public security domains. Existing ReID research
primarily relies on face recognition, gait semantic recognition, and
clothing-irrelevant feature identification, which perform relatively well in
scenarios with high-quality clothing change videos and images. However, these
approaches depend on either single features or simple combinations of multiple
features, making further performance improvements difficult. Additionally,
limitations such as missing facial information, challenges in gait extraction,
and inconsistent camera parameters restrict the broader application of CC-ReID.
To address the above limitations, we innovatively propose a Tri-Stream Dynamic
Weight Network (TSDW) that requires only images. This dynamic weighting network
consists of three parallel feature streams: facial features, head-limb
features, and global features. Each stream specializes in extracting its
designated features, after which a gating network dynamically fuses confidence
levels. The three parallel feature streams enhance recognition performance and
reduce the impact of any single feature failure, thereby improving model
robustness. Extensive experiments on benchmark datasets (e.g., PRCC,
Celeb-reID, VC-Clothes) demonstrate that our method significantly outperforms
existing state-of-the-art approaches.
| no_new_dataset | 0.954137 |
2503.00481 | Felix Dobslaw | Felix Dobslaw, Robert Feldt, Juyeon Yoon, Shin Yoo | Challenges in Testing Large Language Model Based Software: A Faceted
Taxonomy | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Large Language Models (LLMs) and Multi-Agent LLMs (MALLMs) introduce
non-determinism unlike traditional or machine learning software, requiring new
approaches to verifying correctness beyond simple output comparisons or
statistical accuracy over test datasets.
This paper presents a taxonomy for LLM test case design, informed by both the
research literature, our experience, and open-source tools that represent the
state of practice. We identify key variation points that impact test
correctness and highlight open challenges that the research, industry, and
open-source communities must address as LLMs become integral to software
systems.
Our taxonomy defines four facets of LLM test case design, addressing
ambiguity in both inputs and outputs while establishing best practices. It
distinguishes variability in goals, the system under test, and inputs, and
introduces two key oracle types: atomic and aggregated. Our mapping indicates
that current tools insufficiently account for these variability points,
highlighting the need for closer collaboration between academia and
practitioners to improve the reliability and reproducibility of LLM testing.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 13:15:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dobslaw",
"Felix",
""
],
[
"Feldt",
"Robert",
""
],
[
"Yoon",
"Juyeon",
""
],
[
"Yoo",
"Shin",
""
]
]
| TITLE: Challenges in Testing Large Language Model Based Software: A Faceted
Taxonomy
ABSTRACT: Large Language Models (LLMs) and Multi-Agent LLMs (MALLMs) introduce
non-determinism unlike traditional or machine learning software, requiring new
approaches to verifying correctness beyond simple output comparisons or
statistical accuracy over test datasets.
This paper presents a taxonomy for LLM test case design, informed by the
research literature, our experience, and open-source tools that represent the
state of practice. We identify key variation points that impact test
correctness and highlight open challenges that the research, industry, and
open-source communities must address as LLMs become integral to software
systems.
Our taxonomy defines four facets of LLM test case design, addressing
ambiguity in both inputs and outputs while establishing best practices. It
distinguishes variability in goals, the system under test, and inputs, and
introduces two key oracle types: atomic and aggregated. Our mapping indicates
that current tools insufficiently account for these variability points,
highlighting the need for closer collaboration between academia and
practitioners to improve the reliability and reproducibility of LLM testing.
| no_new_dataset | 0.945601 |
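The distinction between atomic and aggregated oracles in the taxonomy above can be made concrete with a small sketch. The citation-marker check and the pass-rate threshold are invented examples for illustration; the taxonomy itself does not prescribe them.

```python
from statistics import mean

def atomic_oracle(output: str) -> bool:
    """Atomic oracle (illustrative): a verdict on a single LLM output."""
    return "[source:" in output

def aggregated_oracle(outputs: list[str], threshold: float = 0.9) -> bool:
    """Aggregated oracle (illustrative): a verdict over repeated samples of a
    non-deterministic system, passing if enough individual runs pass."""
    pass_rate = mean(atomic_oracle(o) for o in outputs)
    return pass_rate >= threshold

if __name__ == "__main__":
    samples = ["Paris is the capital [source: atlas]", "Paris is the capital"]
    print(aggregated_oracle(samples, threshold=0.5))  # True
```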
2503.00489 | Benedetta Muscato - | Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro,
Fosca Giannotti, Tommaso Cucinotta | Embracing Diversity: A Multi-Perspective Approach with Soft Labels | null | null | null | null | cs.CL cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Prior studies show that adopting the annotation diversity shaped by different
backgrounds and life experiences and incorporating them into the model
learning, i.e., the multi-perspective approach, contributes to the development of
more responsible models. Thus, in this paper we propose a new framework for
designing and further evaluating perspective-aware models on the stance detection
task, in which multiple annotators assign stances based on a controversial
topic. We also share a new dataset established through obtaining both human and
LLM annotations. Results show that the multi-perspective approach yields better
classification performance (higher F1-scores), outperforming the traditional
approaches that use a single ground-truth, while displaying lower model
confidence scores, probably due to the high level of subjectivity of the stance
detection task.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 13:33:38 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Muscato",
"Benedetta",
""
],
[
"Bushipaka",
"Praveen",
""
],
[
"Gezici",
"Gizem",
""
],
[
"Passaro",
"Lucia",
""
],
[
"Giannotti",
"Fosca",
""
],
[
"Cucinotta",
"Tommaso",
""
]
]
| TITLE: Embracing Diversity: A Multi-Perspective Approach with Soft Labels
ABSTRACT: Prior studies show that adopting the annotation diversity shaped by different
backgrounds and life experiences and incorporating them into the model
learning, i.e., the multi-perspective approach, contributes to the development of
more responsible models. Thus, in this paper we propose a new framework for
designing and further evaluating perspective-aware models on the stance detection
task, in which multiple annotators assign stances based on a controversial
topic. We also share a new dataset established through obtaining both human and
LLM annotations. Results show that the multi-perspective approach yields better
classification performance (higher F1-scores), outperforming the traditional
approaches that use a single ground-truth, while displaying lower model
confidence scores, probably due to the high level of subjectivity of the stance
detection task.
| new_dataset | 0.958847 |
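A minimal sketch of how multiple annotators' stances can be turned into the soft labels discussed above, using simple vote frequencies. The class names and the frequency-based estimate are assumptions for illustration; the paper's exact aggregation may differ.

```python
import numpy as np

def soft_labels(annotations: list[list[str]], classes: list[str]) -> np.ndarray:
    """Turn per-item annotator votes into soft label distributions."""
    dist = np.zeros((len(annotations), len(classes)))
    index = {c: i for i, c in enumerate(classes)}
    for row, votes in enumerate(annotations):
        for v in votes:
            dist[row, index[v]] += 1
        dist[row] /= dist[row].sum()  # normalize vote counts to a distribution
    return dist

if __name__ == "__main__":
    votes = [["favor", "favor", "against"], ["against", "none", "against"]]
    print(soft_labels(votes, ["favor", "against", "none"]))
```

Training then typically minimizes cross-entropy (equivalently, KL divergence) against these distributions instead of a single ground-truth label.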
2503.00495 | Xuanchen Li | Xuanchen Li, Jianyu Wang, Yuhao Cheng, Yikun Zeng, Xingyu Ren, Wenhan
Zhu, Weiming Zhao, Yichao Yan | Towards High-fidelity 3D Talking Avatar with Personalized Dynamic
Texture | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Significant progress has been made for speech-driven 3D face animation, but
most works focus on learning the motion of mesh/geometry, ignoring the impact
of dynamic texture. In this work, we reveal that dynamic texture plays a key
role in rendering high-fidelity talking avatars, and introduce a
high-resolution 4D dataset \textbf{TexTalk4D}, consisting of 100 minutes of
audio-synced scan-level meshes with detailed 8K dynamic textures from 100
subjects. Based on the dataset, we explore the inherent correlation between
motion and texture, and propose a diffusion-based framework \textbf{TexTalker}
to simultaneously generate facial motions and dynamic textures from speech.
Furthermore, we propose a novel pivot-based style injection strategy to capture
the complexity of different texture and motion styles, which allows
disentangled control. TexTalker, as the first method to generate audio-synced
facial motion with dynamic texture, not only outperforms the prior arts in
synthesising facial motions, but also produces realistic textures that are
consistent with the underlying facial movements. Project page:
https://xuanchenli.github.io/TexTalk/.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 13:51:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Xuanchen",
""
],
[
"Wang",
"Jianyu",
""
],
[
"Cheng",
"Yuhao",
""
],
[
"Zeng",
"Yikun",
""
],
[
"Ren",
"Xingyu",
""
],
[
"Zhu",
"Wenhan",
""
],
[
"Zhao",
"Weiming",
""
],
[
"Yan",
"Yichao",
""
]
]
| TITLE: Towards High-fidelity 3D Talking Avatar with Personalized Dynamic
Texture
ABSTRACT: Significant progress has been made for speech-driven 3D face animation, but
most works focus on learning the motion of mesh/geometry, ignoring the impact
of dynamic texture. In this work, we reveal that dynamic texture plays a key
role in rendering high-fidelity talking avatars, and introduce a
high-resolution 4D dataset \textbf{TexTalk4D}, consisting of 100 minutes of
audio-synced scan-level meshes with detailed 8K dynamic textures from 100
subjects. Based on the dataset, we explore the inherent correlation between
motion and texture, and propose a diffusion-based framework \textbf{TexTalker}
to simultaneously generate facial motions and dynamic textures from speech.
Furthermore, we propose a novel pivot-based style injection strategy to capture
the complexity of different texture and motion styles, which allows
disentangled control. TexTalker, as the first method to generate audio-synced
facial motion with dynamic texture, not only outperforms the prior arts in
synthesising facial motions, but also produces realistic textures that are
consistent with the underlying facial movements. Project page:
https://xuanchenli.github.io/TexTalk/.
| new_dataset | 0.960805 |
2503.00501 | Haitao Li | Jia Chen, Qian Dong, Haitao Li, Xiaohui He, Yan Gao, Shaosheng Cao, Yi
Wu, Ping Yang, Chen Xu, Yao Hu, Qingyao Ai, Yiqun Liu | Qilin: A Multimodal Information Retrieval Dataset with APP-level User
Sessions | 11 pages | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User-generated content (UGC) communities, especially those featuring
multimodal content, improve user experiences by integrating visual and textual
information into results (or items). The challenge of improving user
experiences in complex systems with search and recommendation (S\&R) services
has drawn significant attention from both academia and industry in recent years.
However, the lack of high-quality datasets has limited the research progress on
multimodal S\&R. To address the growing need for developing better S\&R
services, we present a novel multimodal information retrieval dataset in this
paper, namely Qilin. The dataset is collected from Xiaohongshu, a popular
social platform with over 300 million monthly active users and an average
search penetration rate of over 70\%. In contrast to existing datasets,
\textsf{Qilin} offers a comprehensive collection of user sessions with
heterogeneous results like image-text notes, video notes, commercial notes, and
direct answers, facilitating the development of advanced multimodal neural
retrieval models across diverse task settings. To better model user
satisfaction and support the analysis of heterogeneous user behaviors, we also
collect extensive APP-level contextual signals and genuine user feedback.
Notably, Qilin contains user-favored answers and their referred results for
search requests triggering the Deep Query Answering (DQA) module. This allows
not only the training \& evaluation of a Retrieval-augmented Generation (RAG)
pipeline, but also the exploration of how such a module would affect users'
search behavior. Through comprehensive analysis and experiments, we provide
interesting findings and insights for further improving S\&R systems. We hope
that \textsf{Qilin} will significantly contribute to the advancement of
multimodal content platforms with S\&R services in the future.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:15:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Jia",
""
],
[
"Dong",
"Qian",
""
],
[
"Li",
"Haitao",
""
],
[
"He",
"Xiaohui",
""
],
[
"Gao",
"Yan",
""
],
[
"Cao",
"Shaosheng",
""
],
[
"Wu",
"Yi",
""
],
[
"Yang",
"Ping",
""
],
[
"Xu",
"Chen",
""
],
[
"Hu",
"Yao",
""
],
[
"Ai",
"Qingyao",
""
],
[
"Liu",
"Yiqun",
""
]
]
| TITLE: Qilin: A Multimodal Information Retrieval Dataset with APP-level User
Sessions
ABSTRACT: User-generated content (UGC) communities, especially those featuring
multimodal content, improve user experiences by integrating visual and textual
information into results (or items). The challenge of improving user
experiences in complex systems with search and recommendation (S\&R) services
has drawn significant attention from both academia and industry in recent years.
However, the lack of high-quality datasets has limited the research progress on
multimodal S\&R. To address the growing need for developing better S\&R
services, we present a novel multimodal information retrieval dataset in this
paper, namely Qilin. The dataset is collected from Xiaohongshu, a popular
social platform with over 300 million monthly active users and an average
search penetration rate of over 70\%. In contrast to existing datasets,
\textsf{Qilin} offers a comprehensive collection of user sessions with
heterogeneous results like image-text notes, video notes, commercial notes, and
direct answers, facilitating the development of advanced multimodal neural
retrieval models across diverse task settings. To better model user
satisfaction and support the analysis of heterogeneous user behaviors, we also
collect extensive APP-level contextual signals and genuine user feedback.
Notably, Qilin contains user-favored answers and their referred results for
search requests triggering the Deep Query Answering (DQA) module. This allows
not only the training \& evaluation of a Retrieval-augmented Generation (RAG)
pipeline, but also the exploration of how such a module would affect users'
search behavior. Through comprehensive analysis and experiments, we provide
interesting findings and insights for further improving S\&R systems. We hope
that \textsf{Qilin} will significantly contribute to the advancement of
multimodal content platforms with S\&R services in the future.
| new_dataset | 0.975693 |
2503.00503 | Paolo Giannitrapani | Paolo Giannitrapani, Elio D. Di Claudio and Giovanni Jacovitti | BELE: Blur Equivalent Linearized Estimator | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the Full-Reference Image Quality Assessment context, Mean Opinion Score
values represent subjective evaluations based on retinal perception, while
objective metrics assess the reproduced image on the display. Bridging these
subjective and objective domains requires parametric mapping functions, which
are sensitive to the observer's viewing distance. This paper introduces a novel
parametric model that separates perceptual effects due to strong edge
degradations from those caused by texture distortions. These effects are
quantified using two distinct quality indices. The first is the Blur Equivalent
Linearized Estimator, designed to measure blur on strong and isolated edges
while accounting for variations in viewing distance. The second is a Complex
Peak Signal-to-Noise Ratio, which evaluates distortions affecting texture
regions. The first-order effects of the estimator are directly tied to the
first index, for which we introduce the concept of \emph{focalization},
interpreted as a linearization term. Starting from a Positional Fisher
Information loss model applied to Gaussian blur distortion in natural images,
we demonstrate how this model can generalize to linearize all types of
distortions. Finally, we validate our theoretical findings by comparing them
with several state-of-the-art classical and deep-learning-based full-reference
image quality assessment methods on widely used benchmark datasets.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:19:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Giannitrapani",
"Paolo",
""
],
[
"Di Claudio",
"Elio D.",
""
],
[
"Jacovitti",
"Giovanni",
""
]
]
| TITLE: BELE: Blur Equivalent Linearized Estimator
ABSTRACT: In the Full-Reference Image Quality Assessment context, Mean Opinion Score
values represent subjective evaluations based on retinal perception, while
objective metrics assess the reproduced image on the display. Bridging these
subjective and objective domains requires parametric mapping functions, which
are sensitive to the observer's viewing distance. This paper introduces a novel
parametric model that separates perceptual effects due to strong edge
degradations from those caused by texture distortions. These effects are
quantified using two distinct quality indices. The first is the Blur Equivalent
Linearized Estimator, designed to measure blur on strong and isolated edges
while accounting for variations in viewing distance. The second is a Complex
Peak Signal-to-Noise Ratio, which evaluates distortions affecting texture
regions. The first-order effects of the estimator are directly tied to the
first index, for which we introduce the concept of \emph{focalization},
interpreted as a linearization term. Starting from a Positional Fisher
Information loss model applied to Gaussian blur distortion in natural images,
we demonstrate how this model can generalize to linearize all types of
distortions. Finally, we validate our theoretical findings by comparing them
with several state-of-the-art classical and deep-learning-based full-reference
image quality assessment methods on widely used benchmark datasets.
| no_new_dataset | 0.950503 |
2503.00510 | Yexiao He | Yexiao He, Ziyao Wang, Yuning Zhang, Tingting Dan, Tianlong Chen,
Guorong Wu, Ang Li | NeuroSymAD: A Neuro-Symbolic Framework for Interpretable Alzheimer's
Disease Diagnosis | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alzheimer's disease (AD) diagnosis is complex, requiring the integration of
imaging and clinical data for accurate assessment. While deep learning has
shown promise in brain MRI analysis, it often functions as a black box,
limiting interpretability and lacking mechanisms to effectively integrate
critical clinical data such as biomarkers, medical history, and demographic
information. To bridge this gap, we propose NeuroSymAD, a neuro-symbolic
framework that synergizes neural networks with symbolic reasoning. A neural
network perceives brain MRI scans, while a large language model (LLM) distills
medical rules to guide a symbolic system in reasoning over biomarkers and
medical history. This structured integration enhances both diagnostic accuracy
and explainability. Experiments on the ADNI dataset demonstrate that NeuroSymAD
outperforms state-of-the-art methods by up to 2.91% in accuracy and 3.43% in
F1-score while providing transparent and interpretable diagnosis.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:29:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"He",
"Yexiao",
""
],
[
"Wang",
"Ziyao",
""
],
[
"Zhang",
"Yuning",
""
],
[
"Dan",
"Tingting",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Wu",
"Guorong",
""
],
[
"Li",
"Ang",
""
]
]
| TITLE: NeuroSymAD: A Neuro-Symbolic Framework for Interpretable Alzheimer's
Disease Diagnosis
ABSTRACT: Alzheimer's disease (AD) diagnosis is complex, requiring the integration of
imaging and clinical data for accurate assessment. While deep learning has
shown promise in brain MRI analysis, it often functions as a black box,
limiting interpretability and lacking mechanisms to effectively integrate
critical clinical data such as biomarkers, medical history, and demographic
information. To bridge this gap, we propose NeuroSymAD, a neuro-symbolic
framework that synergizes neural networks with symbolic reasoning. A neural
network perceives brain MRI scans, while a large language model (LLM) distills
medical rules to guide a symbolic system in reasoning over biomarkers and
medical history. This structured integration enhances both diagnostic accuracy
and explainability. Experiments on the ADNI dataset demonstrate that NeuroSymAD
outperforms state-of-the-art methods by up to 2.91% in accuracy and 3.43% in
F1-score while providing transparent and interpretable diagnosis.
| no_new_dataset | 0.947672 |
2503.00515 | Songlin Dong | Songlin Dong, Yuhang He, Zhengdong Zhou, Haoyu Luo, Xing Wei, Alex C.
Kot, Yihong Gong | Class-Independent Increment: An Efficient Approach for Multi-label
Class-Incremental Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current research on class-incremental learning primarily focuses on
single-label classification tasks. However, real-world applications often
involve multi-label scenarios, such as image retrieval and medical imaging.
Therefore, this paper focuses on the challenging yet practical multi-label
class-incremental learning (MLCIL) problem. In addition to the challenge of
catastrophic forgetting, MLCIL encounters issues related to feature confusion,
encompassing inter-session and intra-feature confusion. To address these
problems, we propose a novel MLCIL approach called class-independent increment
(CLIN). Specifically, in contrast to existing methods that extract image-level
features, we propose a class-independent incremental network (CINet) to extract
multiple class-level embeddings for multi-label samples. It learns and
preserves the knowledge of different classes by constructing class-specific
tokens. On this basis, we develop two novel loss functions, optimizing the
learning of class-specific tokens and class-level embeddings, respectively.
These losses aim to distinguish between new and old classes, further
alleviating the problem of feature confusion. Extensive experiments on MS-COCO
and PASCAL VOC datasets demonstrate the effectiveness of our method for
improving recognition performance and mitigating forgetting on various MLCIL
tasks.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:40:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dong",
"Songlin",
""
],
[
"He",
"Yuhang",
""
],
[
"Zhou",
"Zhengdong",
""
],
[
"Luo",
"Haoyu",
""
],
[
"Wei",
"Xing",
""
],
[
"Kot",
"Alex C.",
""
],
[
"Gong",
"Yihong",
""
]
]
| TITLE: Class-Independent Increment: An Efficient Approach for Multi-label
Class-Incremental Learning
ABSTRACT: Current research on class-incremental learning primarily focuses on
single-label classification tasks. However, real-world applications often
involve multi-label scenarios, such as image retrieval and medical imaging.
Therefore, this paper focuses on the challenging yet practical multi-label
class-incremental learning (MLCIL) problem. In addition to the challenge of
catastrophic forgetting, MLCIL encounters issues related to feature confusion,
encompassing inter-session and intra-feature confusion. To address these
problems, we propose a novel MLCIL approach called class-independent increment
(CLIN). Specifically, in contrast to existing methods that extract image-level
features, we propose a class-independent incremental network (CINet) to extract
multiple class-level embeddings for multi-label samples. It learns and
preserves the knowledge of different classes by constructing class-specific
tokens. On this basis, we develop two novel loss functions, optimizing the
learning of class-specific tokens and class-level embeddings, respectively.
These losses aim to distinguish between new and old classes, further
alleviating the problem of feature confusion. Extensive experiments on MS-COCO
and PASCAL VOC datasets demonstrate the effectiveness of our method for
improving recognition performance and mitigating forgetting on various MLCIL
tasks.
| no_new_dataset | 0.946101 |
2503.00521 | Junyao Kuang | JunYao Kaung, HongWei Ge | 2DMCG: 2DMamba with Change Flow Guidance for Change Detection in Remote
Sensing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing change detection (CD) has made significant advancements with
the adoption of Convolutional Neural Networks (CNNs) and Transformers. While
CNNs offer powerful feature extraction, they are constrained by receptive field
limitations, and Transformers suffer from quadratic complexity when processing
long sequences, restricting scalability. The Mamba architecture provides an
appealing alternative, offering linear complexity and high parallelism.
However, its inherent 1D processing structure causes a loss of spatial
information in 2D vision tasks. This paper addresses this limitation by
proposing an efficient framework based on a Vision Mamba variant that enhances
its ability to capture 2D spatial information while maintaining the linear
complexity characteristic of Mamba. The framework employs a 2DMamba encoder to
effectively learn global spatial contextual information from multi-temporal
images. For feature fusion, we introduce a 2D scan-based, channel-parallel
scanning strategy combined with a spatio-temporal feature fusion method, which
adeptly captures both local and global change information, alleviating spatial
discontinuity issues during fusion. In the decoding stage, we present a feature
change flow-based decoding method that improves the mapping of feature change
information from low-resolution to high-resolution feature maps, mitigating
feature shift and misalignment. Extensive experiments on benchmark datasets
such as LEVIR-CD+ and WHU-CD demonstrate the superior performance of our
framework compared to state-of-the-art methods, showcasing the potential of
Vision Mamba for efficient and accurate remote sensing change detection.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:55:13 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kaung",
"JunYao",
""
],
[
"Ge",
"HongWei",
""
]
]
| TITLE: 2DMCG: 2DMamba with Change Flow Guidance for Change Detection in Remote
Sensing
ABSTRACT: Remote sensing change detection (CD) has made significant advancements with
the adoption of Convolutional Neural Networks (CNNs) and Transformers. While
CNNs offer powerful feature extraction, they are constrained by receptive field
limitations, and Transformers suffer from quadratic complexity when processing
long sequences, restricting scalability. The Mamba architecture provides an
appealing alternative, offering linear complexity and high parallelism.
However, its inherent 1D processing structure causes a loss of spatial
information in 2D vision tasks. This paper addresses this limitation by
proposing an efficient framework based on a Vision Mamba variant that enhances
its ability to capture 2D spatial information while maintaining the linear
complexity characteristic of Mamba. The framework employs a 2DMamba encoder to
effectively learn global spatial contextual information from multi-temporal
images. For feature fusion, we introduce a 2D scan-based, channel-parallel
scanning strategy combined with a spatio-temporal feature fusion method, which
adeptly captures both local and global change information, alleviating spatial
discontinuity issues during fusion. In the decoding stage, we present a feature
change flow-based decoding method that improves the mapping of feature change
information from low-resolution to high-resolution feature maps, mitigating
feature shift and misalignment. Extensive experiments on benchmark datasets
such as LEVIR-CD+ and WHU-CD demonstrate the superior performance of our
framework compared to state-of-the-art methods, showcasing the potential of
Vision Mamba for efficient and accurate remote sensing change detection.
| no_new_dataset | 0.949949 |
2503.00522 | Kishalay Das | Kishalay Das, Subhojyoti Khastagir, Pawan Goyal, Seung-Cheol Lee,
Satadeep Bhattacharjee, Niloy Ganguly | Periodic Materials Generation using Text-Guided Joint Diffusion Model | ICLR 2025 | null | null | null | cs.LG cond-mat.mtrl-sci | http://creativecommons.org/licenses/by/4.0/ | Equivariant diffusion models have emerged as the prevailing approach for
generating novel crystal materials due to their ability to leverage the
physical symmetries of periodic material structures. However, current models do
not effectively learn the joint distribution of atom types, fractional
coordinates, and lattice structure of the crystal material in a cohesive
end-to-end diffusion framework. Also, none of these models work under realistic
setups, where users specify the desired characteristics that the generated
structures must match. In this work, we introduce TGDMat, a novel text-guided
diffusion model designed for 3D periodic material generation. Our approach
integrates global structural knowledge through textual descriptions at each
denoising step while jointly generating atom coordinates, types, and lattice
structure using a periodic-E(3)-equivariant graph neural network (GNN).
Extensive experiments using popular datasets on benchmark tasks reveal that
TGDMat outperforms existing baseline methods by a good margin. Notably, for the
structure prediction task, with just one generated sample, TGDMat outperforms
all baseline models, highlighting the importance of text-guided diffusion.
Further, in the generation task, TGDMat surpasses all baselines and their
text-fusion variants, showcasing the effectiveness of the joint diffusion
paradigm. Additionally, incorporating textual knowledge reduces overall
training and sampling computational overhead while enhancing generative
performance when utilizing real-world textual prompts from experts.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:56:44 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Das",
"Kishalay",
""
],
[
"Khastagir",
"Subhojyoti",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Lee",
"Seung-Cheol",
""
],
[
"Bhattacharjee",
"Satadeep",
""
],
[
"Ganguly",
"Niloy",
""
]
]
| TITLE: Periodic Materials Generation using Text-Guided Joint Diffusion Model
ABSTRACT: Equivariant diffusion models have emerged as the prevailing approach for
generating novel crystal materials due to their ability to leverage the
physical symmetries of periodic material structures. However, current models do
not effectively learn the joint distribution of atom types, fractional
coordinates, and lattice structure of the crystal material in a cohesive
end-to-end diffusion framework. Also, none of these models work under realistic
setups, where users specify the desired characteristics that the generated
structures must match. In this work, we introduce TGDMat, a novel text-guided
diffusion model designed for 3D periodic material generation. Our approach
integrates global structural knowledge through textual descriptions at each
denoising step while jointly generating atom coordinates, types, and lattice
structure using a periodic-E(3)-equivariant graph neural network (GNN).
Extensive experiments using popular datasets on benchmark tasks reveal that
TGDMat outperforms existing baseline methods by a good margin. Notably, for the
structure prediction task, with just one generated sample, TGDMat outperforms
all baseline models, highlighting the importance of text-guided diffusion.
Further, in the generation task, TGDMat surpasses all baselines and their
text-fusion variants, showcasing the effectiveness of the joint diffusion
paradigm. Additionally, incorporating textual knowledge reduces overall
training and sampling computational overhead while enhancing generative
performance when utilizing real-world textual prompts from experts.
| no_new_dataset | 0.949623 |
2503.00528 | Zirun Guo | Zirun Guo, Shulei Wang, Wang Lin, Weicai Yan, Yangyang Wu, Tao Jin | Efficient Prompting for Continual Adaptation to Missing Modalities | Accepted to NAACL 2025 Main | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Missing modality issues are common in real-world applications, arising from
factors such as equipment failures and privacy concerns. When fine-tuning
pre-trained models on downstream datasets with missing modalities, performance
can degrade significantly. Current methods often aggregate various missing
cases to train recovery modules or align multimodal features, resulting in
suboptimal performance, high computational costs, and the risk of catastrophic
forgetting in continual environments where data arrives sequentially. In this
paper, we formulate the dynamic missing modality problem as a continual
learning task and introduce the continual multimodal missing modality task. To
address this challenge efficiently, we introduce three types of prompts:
modality-specific, task-aware, and task-specific prompts. These prompts enable
the model to learn intra-modality, inter-modality, intra-task, and inter-task
features. Furthermore, we propose a contrastive task interaction strategy to
explicitly learn prompts correlating different modalities. We conduct extensive
experiments on three public datasets, where our method consistently outperforms
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 15:09:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guo",
"Zirun",
""
],
[
"Wang",
"Shulei",
""
],
[
"Lin",
"Wang",
""
],
[
"Yan",
"Weicai",
""
],
[
"Wu",
"Yangyang",
""
],
[
"Jin",
"Tao",
""
]
]
| TITLE: Efficient Prompting for Continual Adaptation to Missing Modalities
ABSTRACT: Missing modality issues are common in real-world applications, arising from
factors such as equipment failures and privacy concerns. When fine-tuning
pre-trained models on downstream datasets with missing modalities, performance
can degrade significantly. Current methods often aggregate various missing
cases to train recovery modules or align multimodal features, resulting in
suboptimal performance, high computational costs, and the risk of catastrophic
forgetting in continual environments where data arrives sequentially. In this
paper, we formulate the dynamic missing modality problem as a continual
learning task and introduce the continual multimodal missing modality task. To
address this challenge efficiently, we introduce three types of prompts:
modality-specific, task-aware, and task-specific prompts. These prompts enable
the model to learn intra-modality, inter-modality, intra-task, and inter-task
features. Furthermore, we propose a contrastive task interaction strategy to
explicitly learn prompts correlating different modalities. We conduct extensive
experiments on three public datasets, where our method consistently outperforms
state-of-the-art approaches.
| no_new_dataset | 0.943243 |
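A toy sketch of the three prompt types described above, kept in separate pools and prepended to the token sequence depending on which modalities are present and which task is active. Prompt lengths, dimensions, and the prepending scheme are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PromptPools(nn.Module):
    """Illustrative pools of modality-specific, task-aware, and task-specific prompts."""
    def __init__(self, dim=768, length=4, num_modalities=2, num_tasks=3):
        super().__init__()
        self.modality = nn.Parameter(torch.randn(num_modalities, length, dim))
        self.task_aware = nn.Parameter(torch.randn(1, length, dim))
        self.task_specific = nn.Parameter(torch.randn(num_tasks, length, dim))

    def forward(self, tokens, present_modalities, task_id):
        # Prepend only the prompts for modalities that are actually present.
        parts = [self.modality[m] for m in present_modalities]
        parts += [self.task_aware[0], self.task_specific[task_id]]
        prompts = torch.cat(parts, dim=0).unsqueeze(0).expand(tokens.size(0), -1, -1)
        return torch.cat([prompts, tokens], dim=1)

if __name__ == "__main__":
    pools = PromptPools()
    x = torch.randn(2, 10, 768)
    print(pools(x, present_modalities=[0], task_id=1).shape)  # (2, 22, 768)
```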
2503.00530 | Yuliang Shi | Wanli Hong, Yuliang Shi, Jonathan Niles-Weed | Trajectory Inference with Smooth Schr\"odinger Bridges | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by applications in trajectory inference and particle tracking, we
introduce Smooth Schr\"odinger Bridges. Our proposal generalizes prior work by
allowing the reference process in the Schr\"odinger Bridge problem to be a
smooth Gaussian process, leading to more regular and interpretable trajectories
in applications. Though na\"ively smoothing the reference process leads to a
computationally intractable problem, we identify a class of processes
(including the Mat\'ern processes) for which the resulting Smooth Schr\"odinger
Bridge problem can be lifted to a simpler problem on phase space, which can be
solved in polynomial time. We develop a practical approximation of this
algorithm that outperforms existing methods on numerous simulated and real
single-cell RNAseq datasets. The code can be found at
https://github.com/WanliHongC/Smooth_SB
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 15:12:01 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hong",
"Wanli",
""
],
[
"Shi",
"Yuliang",
""
],
[
"Niles-Weed",
"Jonathan",
""
]
]
| TITLE: Trajectory Inference with Smooth Schr\"odinger Bridges
ABSTRACT: Motivated by applications in trajectory inference and particle tracking, we
introduce Smooth Schr\"odinger Bridges. Our proposal generalizes prior work by
allowing the reference process in the Schr\"odinger Bridge problem to be a
smooth Gaussian process, leading to more regular and interpretable trajectories
in applications. Though na\"ively smoothing the reference process leads to a
computationally intractable problem, we identify a class of processes
(including the Mat\'ern processes) for which the resulting Smooth Schr\"odinger
Bridge problem can be lifted to a simpler problem on phase space, which can be
solved in polynomial time. We develop a practical approximation of this
algorithm that outperforms existing methods on numerous simulated and real
single-cell RNAseq datasets. The code can be found at
https://github.com/WanliHongC/Smooth_SB
| no_new_dataset | 0.953057 |
2503.00539 | Debmalya Mandal | Debmalya Mandal, Paulius Sasnauskas, Goran Radanovic | Distributionally Robust Reinforcement Learning with Human Feedback | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Reinforcement learning from human feedback (RLHF) has evolved to be one of
the main methods for fine-tuning large language models (LLMs). However,
existing RLHF methods are non-robust, and their performance deteriorates if the
downstream task differs significantly from the preference dataset used in
fine-tuning. In order to mitigate this problem, we introduce a distributionally
robust RLHF for fine-tuning LLMs. In particular, our goal is to ensure that a
fine-tuned model retains its performance even when the distribution of prompts
significantly differs from the distribution encountered during fine-tuning. We
formulate distributionally robust optimization (DRO) versions of two popular
fine-tuning methods -- (1) reward-based RLHF and (2) reward-free DPO (direct
preference optimization). We propose minibatch gradient descent based
algorithms for both of them, and theoretically prove convergence guarantees for
the algorithms. Subsequently, we evaluate our algorithms on an
out-of-distribution (OOD) task by first training the model on the
Unified-Feedback dataset and evaluating its performance on two different
datasets. The experimental results show that our robust training improves the
accuracy of the learned reward models on average, and markedly on some tasks,
such as reasoning. Furthermore, we show that the robust versions of policy
optimization methods similarly improve performance on OOD tasks.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 15:43:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Mandal",
"Debmalya",
""
],
[
"Sasnauskas",
"Paulius",
""
],
[
"Radanovic",
"Goran",
""
]
]
| TITLE: Distributionally Robust Reinforcement Learning with Human Feedback
ABSTRACT: Reinforcement learning from human feedback (RLHF) has evolved to be one of
the main methods for fine-tuning large language models (LLMs). However,
existing RLHF methods are non-robust, and their performance deteriorates if the
downstream task differs significantly from the preference dataset used in
fine-tuning. In order to mitigate this problem, we introduce a distributionally
robust RLHF for fine-tuning LLMs. In particular, our goal is to ensure that a
fine-tuned model retains its performance even when the distribution of prompts
significantly differs from the distribution encountered during fine-tuning. We
formulate distributionally robust optimization (DRO) versions of two popular
fine-tuning methods -- (1) reward-based RLHF and (2) reward-free DPO (direct
preference optimization). We propose minibatch gradient descent based
algorithms for both of them, and theoretically prove convergence guarantees for
the algorithms. Subsequently, we evaluate our algorithms on an
out-of-distribution (OOD) task by first training the model on the
Unified-Feedback dataset and evaluating its performance on two different
datasets. The experimental results show that our robust training improves the
accuracy of the learned reward models on average, and markedly on some tasks,
such as reasoning. Furthermore, we show that the robust versions of policy
optimization methods similarly improve performance on OOD tasks.
| no_new_dataset | 0.945651 |
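To make the DRO formulation above concrete, the sketch below combines the standard DPO loss with a generic group-DRO reweighting step over prompt groups (an exponentiated-gradient update on group weights). The prompt grouping and the update rule are common DRO heuristics used here for illustration, not necessarily the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logratios, ref_logratios, beta=0.1):
    """Per-example DPO loss from (chosen - rejected) log-probability ratios."""
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios))

def group_dro_step(per_example_loss, group_ids, group_weights, eta=0.01):
    """One illustrative group-DRO reweighting step over prompt groups."""
    num_groups = group_weights.numel()
    group_losses = torch.zeros(num_groups)
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            group_losses[g] = per_example_loss[mask].mean()
    # Up-weight the worst-performing groups (exponentiated-gradient update).
    new_weights = group_weights * torch.exp(eta * group_losses)
    new_weights = new_weights / new_weights.sum()
    robust_loss = (new_weights * group_losses).sum()
    return robust_loss, new_weights

if __name__ == "__main__":
    losses = dpo_loss(torch.tensor([1.2, -0.3, 0.8]), torch.tensor([0.5, 0.1, 0.2]))
    weights = torch.full((2,), 0.5)
    robust, weights = group_dro_step(losses, torch.tensor([0, 1, 1]), weights)
    print(robust.item(), weights)
```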
2503.00545 | Yujie Lei | Yujie Lei, Wenjie Sun, Sen Jia, Qingquan Li, Jie Zhang | RFWNet: A Lightweight Remote Sensing Object Detector Integrating
Multi-Scale Receptive Fields and Foreground Focus Mechanism | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Challenges in remote sensing object detection (RSOD), such as high
inter-class similarity, imbalanced foreground-background distribution, and the
small size of objects in remote sensing images significantly hinder detection
accuracy. Moreover, the trade-off between model accuracy and computational
complexity poses additional constraints on the application of RSOD algorithms.
To address these issues, this study proposes an efficient and lightweight RSOD
algorithm integrating multi-scale receptive fields and a foreground focus
mechanism, named RFWNet. Specifically, we proposed a lightweight backbone
network, Receptive Field Adaptive Selection Network (RFASNet), leveraging the
rich context information of remote sensing images to enhance class
separability. Additionally, we developed a Foreground Background Separation
Module (FBSM) consisting of a background redundant information filtering module
and a foreground information enhancement module to emphasize critical regions
within images while filtering redundant background information. Finally, we
designed a loss function, the Weighted CIoU-Wasserstein (WCW) loss, which
weights the IoU-based loss by using the Normalized Wasserstein Distance to
mitigate model sensitivity to small object position deviations. Experimental
evaluations on the DOTA V1.0 and NWPU VHR-10 datasets demonstrate that RFWNet
achieves advanced performance with 6.0M parameters and can achieve 52 FPS.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 16:02:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lei",
"Yujie",
""
],
[
"Sun",
"Wenjie",
""
],
[
"Jia",
"Sen",
""
],
[
"Li",
"Qingquan",
""
],
[
"Zhang",
"Jie",
""
]
]
| TITLE: RFWNet: A Lightweight Remote Sensing Object Detector Integrating
Multi-Scale Receptive Fields and Foreground Focus Mechanism
ABSTRACT: Challenges in remote sensing object detection (RSOD), such as high
inter-class similarity, imbalanced foreground-background distribution, and the
small size of objects in remote sensing images significantly hinder detection
accuracy. Moreover, the trade-off between model accuracy and computational
complexity poses additional constraints on the application of RSOD algorithms.
To address these issues, this study proposes an efficient and lightweight RSOD
algorithm integrating multi-scale receptive fields and a foreground focus
mechanism, named RFWNet. Specifically, we proposed a lightweight backbone
network, Receptive Field Adaptive Selection Network (RFASNet), leveraging the
rich context information of remote sensing images to enhance class
separability. Additionally, we developed a Foreground Background Separation
Module (FBSM) consisting of a background redundant information filtering module
and a foreground information enhancement module to emphasize critical regions
within images while filtering redundant background information. Finally, we
designed a loss function, the Weighted CIoU-Wasserstein (WCW) loss, which
weights the IoU-based loss by using the Normalized Wasserstein Distance to
mitigate model sensitivity to small object position deviations. Experimental
evaluations on the DOTA V1.0 and NWPU VHR-10 datasets demonstrate that RFWNet
achieves advanced performance with 6.0M parameters and can achieve 52 FPS.
| no_new_dataset | 0.949342 |
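The WCW loss above weights an IoU-based loss by a Normalized Wasserstein Distance (NWD). The sketch below uses the common Gaussian-box NWD formulation; the constant c and the exact way the weight enters the loss are assumptions, since the abstract does not give the precise combination rule.

```python
import math

def nwd(box_a, box_b, c: float = 12.8) -> float:
    """Normalized Wasserstein distance between boxes (cx, cy, w, h), following the
    common Gaussian-box formulation; the constant c is dataset-dependent."""
    (cxa, cya, wa, ha), (cxb, cyb, wb, hb) = box_a, box_b
    w2_sq = (cxa - cxb) ** 2 + (cya - cyb) ** 2 + ((wa - wb) / 2) ** 2 + ((ha - hb) / 2) ** 2
    return math.exp(-math.sqrt(w2_sq) / c)

def wcw_style_loss(ciou_loss: float, box_pred, box_gt) -> float:
    """Illustrative NWD weighting of an IoU-based loss, in the spirit of the WCW loss."""
    weight = 1.0 - nwd(box_pred, box_gt)  # assumed weighting rule, for illustration only
    return weight * ciou_loss

if __name__ == "__main__":
    pred, gt = (10.0, 10.0, 8.0, 6.0), (11.0, 10.5, 8.0, 7.0)
    print(nwd(pred, gt), wcw_style_loss(0.4, pred, gt))
```

Because NWD varies smoothly with center offsets, such a weighting reduces the penalty for small positional deviations of tiny objects, which is the sensitivity issue the abstract points to.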
2503.00550 | Ronaldo Menezes | Ricardo de S Alencar, Fabiano L. Ribeiro, Horacio Samaniego, Ronaldo
Menezes, Alexandre G. Evsukoff | Validating Urban Scaling Laws through Mobile Phone Data: A
Continental-Scale Analysis of Brazil's Largest Cities | 23 pages, 5 figures, 2 Tables, 1 Algorithm | null | null | null | physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Urban scaling theories posit that larger cities exhibit
disproportionately higher levels of socioeconomic activity and human
interactions. Yet, evidence from developing contexts (especially those marked
by stark socioeconomic disparities) remains limited. To address this gap, we
analyse a month-long dataset of 3.1~billion voice-call records from Brazil's
100 most populous cities, providing a continental-scale test of urban scaling
laws. We measure interactions using two complementary proxies: the number of
phone-based contacts (voice-call degrees) and the number of trips inferred from
consecutive calls in distinct locations. Our findings reveal clear superlinear
relationships in both metrics, indicating that larger urban centres exhibit
intensified remote communication and physical mobility. We further observe that
gross domestic product (GDP) also scales superlinearly with population,
consistent with broader claims that economic output grows faster than city
size. Conversely, the number of antennas required per user scales sublinearly,
suggesting economies of scale in telecommunications infrastructure. Although
the dataset covers a single provider, its widespread coverage in major cities
supports the robustness of the results. We nonetheless discuss potential
biases, including city-specific marketing campaigns and predominantly prepaid
users, as well as the open question of whether higher interaction drives wealth
or vice versa. Overall, this study enriches our understanding of urban scaling,
emphasising how communication and mobility jointly shape the socioeconomic
landscapes of rapidly growing cities.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 16:34:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Alencar",
"Ricardo de S",
""
],
[
"Ribeiro",
"Fabiano L.",
""
],
[
"Samaniego",
"Horacio",
""
],
[
"Menezes",
"Ronaldo",
""
],
[
"Evsukoff",
"Alexandre G.",
""
]
]
| TITLE: Validating Urban Scaling Laws through Mobile Phone Data: A
Continental-Scale Analysis of Brazil's Largest Cities
ABSTRACT: Urban scaling theories posit that larger cities exhibit
disproportionately higher levels of socioeconomic activity and human
interactions. Yet, evidence from developing contexts (especially those marked
by stark socioeconomic disparities) remains limited. To address this gap, we
analyse a month-long dataset of 3.1~billion voice-call records from Brazil's
100 most populous cities, providing a continental-scale test of urban scaling
laws. We measure interactions using two complementary proxies: the number of
phone-based contacts (voice-call degrees) and the number of trips inferred from
consecutive calls in distinct locations. Our findings reveal clear superlinear
relationships in both metrics, indicating that larger urban centres exhibit
intensified remote communication and physical mobility. We further observe that
gross domestic product (GDP) also scales superlinearly with population,
consistent with broader claims that economic output grows faster than city
size. Conversely, the number of antennas required per user scales sublinearly,
suggesting economies of scale in telecommunications infrastructure. Although
the dataset covers a single provider, its widespread coverage in major cities
supports the robustness of the results. We nonetheless discuss potential
biases, including city-specific marketing campaigns and predominantly prepaid
users, as well as the open question of whether higher interaction drives wealth
or vice versa. Overall, this study enriches our understanding of urban scaling,
emphasising how communication and mobility jointly shape the socioeconomic
landscapes of rapidly growing cities.
| no_new_dataset | 0.883638 |
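The superlinear relationships reported above are scaling laws of the form Y = Y0 * N^beta with beta > 1, conventionally estimated by ordinary least squares on log-transformed data. The sketch below fits such an exponent on synthetic numbers; it is illustrative only and uses no data from the study.

```python
import numpy as np

def scaling_exponent(population, indicator):
    """Estimate beta and Y0 in Y ~ Y0 * N**beta by OLS on log-log data."""
    slope, intercept = np.polyfit(np.log(population), np.log(indicator), 1)
    return slope, np.exp(intercept)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = np.logspace(5, 7, 100)                            # synthetic city populations
    y = 2.0 * n ** 1.15 * rng.lognormal(0, 0.1, n.size)   # synthetic superlinear indicator
    beta, y0 = scaling_exponent(n, y)
    print(f"beta = {beta:.2f}")                           # recovers a value close to 1.15
```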
2503.00551 | Zhixin Zhang | Zhixin Zhang, Wenzhi Bai, Liang Zhao, Pawel Ladosz | PL-VIWO: A Lightweight and Robust Point-Line Monocular Visual Inertial
Wheel Odometry | 8 pages conference | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel tightly coupled Filter-based monocular
visual-inertial-wheel odometry (VIWO) system for ground robots, designed to
deliver accurate and robust localization in long-term complex outdoor
navigation scenarios. As an external sensor, the camera enhances localization
performance by introducing visual constraints. However, obtaining a sufficient
number of effective visual features is often challenging, particularly in
dynamic or low-texture environments. To address this issue, we incorporate the
line features for additional geometric constraints. Unlike traditional
approaches that treat point and line features independently, our method
exploits the geometric relationships between points and lines in 2D images,
enabling fast and robust line matching and triangulation. Additionally, we
introduce Motion Consistency Check (MCC) to filter out potential dynamic
points, ensuring the effectiveness of point feature updates. The proposed
system was evaluated on publicly available datasets and benchmarked against
state-of-the-art methods. Experimental results demonstrate superior performance
in terms of accuracy, robustness, and efficiency. The source code is publicly
available at: https://github.com/Happy-ZZX/PL-VIWO
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 16:37:12 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Zhixin",
""
],
[
"Bai",
"Wenzhi",
""
],
[
"Zhao",
"Liang",
""
],
[
"Ladosz",
"Pawel",
""
]
]
| TITLE: PL-VIWO: A Lightweight and Robust Point-Line Monocular Visual Inertial
Wheel Odometry
ABSTRACT: This paper presents a novel tightly coupled Filter-based monocular
visual-inertial-wheel odometry (VIWO) system for ground robots, designed to
deliver accurate and robust localization in long-term complex outdoor
navigation scenarios. As an external sensor, the camera enhances localization
performance by introducing visual constraints. However, obtaining a sufficient
number of effective visual features is often challenging, particularly in
dynamic or low-texture environments. To address this issue, we incorporate the
line features for additional geometric constraints. Unlike traditional
approaches that treat point and line features independently, our method
exploits the geometric relationships between points and lines in 2D images,
enabling fast and robust line matching and triangulation. Additionally, we
introduce Motion Consistency Check (MCC) to filter out potential dynamic
points, ensuring the effectiveness of point feature updates. The proposed
system was evaluated on publicly available datasets and benchmarked against
state-of-the-art methods. Experimental results demonstrate superior performance
in terms of accuracy, robustness, and efficiency. The source code is publicly
available at: https://github.com/Happy-ZZX/PL-VIWO
| no_new_dataset | 0.953405 |
2503.00555 | Tiansheng Huang | Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Zachary
Yahn, Yichang Xu, Ling Liu | Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less
Reasonable | null | null | null | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Safety alignment is an important procedure before the official deployment of
a Large Language Model (LLM). While safety alignment has been extensively
studied for LLM, there is still a large research gap for Large Reasoning Models
(LRMs) that are equipped with improved reasoning capability. In this paper, we
systematically examine a simplified pipeline for producing safety-aligned LRMs.
With our evaluation of various LRMs, we deliver two main findings: i) Safety
alignment can be performed on the LRM to restore its safety capability. ii) Safety
alignment leads to a degradation of the reasoning capability of LRMs. The two
findings show that there exists a trade-off between reasoning and safety
capability with the sequential LRM production pipeline. The discovered
trade-off, which we name Safety Tax, should shed light on future endeavors of
safety research on LRMs. As a by-product, we curate a dataset called
DirectRefusal, which might serve as an alternative dataset for safety
alignment. Our source code is available at
https://github.com/git-disl/Safety-Tax.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 16:42:01 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Huang",
"Tiansheng",
""
],
[
"Hu",
"Sihao",
""
],
[
"Ilhan",
"Fatih",
""
],
[
"Tekin",
"Selim Furkan",
""
],
[
"Yahn",
"Zachary",
""
],
[
"Xu",
"Yichang",
""
],
[
"Liu",
"Ling",
""
]
]
| TITLE: Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less
Reasonable
ABSTRACT: Safety alignment is an important procedure before the official deployment of
a Large Language Model (LLM). While safety alignment has been extensively
studied for LLM, there is still a large research gap for Large Reasoning Models
(LRMs) that are equipped with improved reasoning capability. In this paper, we
systematically examine a simplified pipeline for producing safety-aligned LRMs.
With our evaluation of various LRMs, we deliver two main findings: i) Safety
alignment can be performed on the LRM to restore its safety capability. ii) Safety
alignment leads to a degradation of the reasoning capability of LRMs. The two
findings show that there exists a trade-off between reasoning and safety
capability with the sequential LRM production pipeline. The discovered
trade-off, which we name Safety Tax, should shed light on future endeavors of
safety research on LRMs. As a by-product, we curate a dataset called
DirectRefusal, which might serve as an alternative dataset for safety
alignment. Our source code is available at
https://github.com/git-disl/Safety-Tax.
| new_dataset | 0.959762 |
2503.00564 | Jeonghoon Shim | Jeonghoon Shim, Gyuhyeon Seo, Cheongsu Lim, Yohan Jo | ToolDial: Multi-turn Dialogue Generation Method for Tool-Augmented
Language Models | Accepted to ICLR 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tool-Augmented Language Models (TALMs) leverage external APIs to answer user
queries across various domains. However, existing benchmark datasets for TALM
research often feature simplistic dialogues that do not reflect real-world
scenarios, such as the need for models to ask clarifying questions or
proactively call additional APIs when essential information is missing. To
address these limitations, we construct and release ToolDial, a dataset
comprising 11,111 multi-turn dialogues, with an average of 8.95 turns per
dialogue, based on APIs from RapidAPI. ToolDial has two key characteristics.
First, the dialogues incorporate 16 user and system actions (e.g., "Request",
"Clarify", "Fail inform") to capture the rich dynamics of real-world
interactions. Second, we simulate dialogues where the system requests necessary
information from the user based on API documentation and seeks additional APIs
if the user fails to provide the required information. To facilitate this
process, we introduce a method for generating an API graph that represents
input and output compatibility between APIs. Using ToolDial, we evaluate a
suite of language models on their ability to predict correct actions and
extract input parameter values for API calls from the dialogue history. Modern
language models achieve accuracy scores below 70%, indicating substantial room
for improvement. We release our dataset and code at
https://github.com/holi-lab/ToolDial.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 17:23:51 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shim",
"Jeonghoon",
""
],
[
"Seo",
"Gyuhyeon",
""
],
[
"Lim",
"Cheongsu",
""
],
[
"Jo",
"Yohan",
""
]
]
| TITLE: ToolDial: Multi-turn Dialogue Generation Method for Tool-Augmented
Language Models
ABSTRACT: Tool-Augmented Language Models (TALMs) leverage external APIs to answer user
queries across various domains. However, existing benchmark datasets for TALM
research often feature simplistic dialogues that do not reflect real-world
scenarios, such as the need for models to ask clarifying questions or
proactively call additional APIs when essential information is missing. To
address these limitations, we construct and release ToolDial, a dataset
comprising 11,111 multi-turn dialogues, with an average of 8.95 turns per
dialogue, based on APIs from RapidAPI. ToolDial has two key characteristics.
First, the dialogues incorporate 16 user and system actions (e.g., "Request",
"Clarify", "Fail inform") to capture the rich dynamics of real-world
interactions. Second, we simulate dialogues where the system requests necessary
information from the user based on API documentation and seeks additional APIs
if the user fails to provide the required information. To facilitate this
process, we introduce a method for generating an API graph that represents
input and output compatibility between APIs. Using ToolDial, we evaluate a
suite of language models on their ability to predict correct actions and
extract input parameter values for API calls from the dialogue history. Modern
language models achieve accuracy scores below 70%, indicating substantial room
for improvement. We release our dataset and code at
https://github.com/holi-lab/ToolDial.
| new_dataset | 0.959724 |
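A minimal sketch of the API-graph construction mentioned above: an edge A -> B is added when some output field of API A matches an input field of API B. The toy API schemas are invented; ToolDial derives the real graph from RapidAPI documentation.

```python
def build_api_graph(apis: dict[str, dict[str, set[str]]]) -> dict[str, list[str]]:
    """Directed adjacency list: A -> B if an output of A can serve as an input of B."""
    edges = {name: [] for name in apis}
    for a, spec_a in apis.items():
        for b, spec_b in apis.items():
            if a != b and spec_a["outputs"] & spec_b["inputs"]:
                edges[a].append(b)
    return edges

if __name__ == "__main__":
    apis = {
        "geocode": {"inputs": {"address"}, "outputs": {"lat", "lon"}},
        "weather": {"inputs": {"lat", "lon"}, "outputs": {"forecast"}},
    }
    print(build_api_graph(apis))  # {'geocode': ['weather'], 'weather': []}
```

Such a graph lets a dialogue system look up which additional API to call when the user cannot supply a required input directly.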
2503.00565 | Sakshi Arya | Sakshi Arya and Hyebin Song | Semi-Parametric Batched Global Multi-Armed Bandits with Covariates | null | null | null | null | stat.ML cs.LG math.ST stat.ME stat.TH | http://creativecommons.org/licenses/by/4.0/ | The multi-armed bandits (MAB) framework is a widely used approach for
sequential decision-making, where a decision-maker selects an arm in each round
with the goal of maximizing long-term rewards. Moreover, in many practical
applications, such as personalized medicine and recommendation systems,
feedback is provided in batches, contextual information is available at the
time of decision-making, and rewards from different arms are related rather
than independent. We propose a novel semi-parametric framework for batched
bandits with covariates and a shared parameter across arms, leveraging the
single-index regression (SIR) model to capture relationships between arm
rewards while balancing interpretability and flexibility. Our algorithm,
Batched single-Index Dynamic binning and Successive arm elimination (BIDS),
employs a batched successive arm elimination strategy with a dynamic binning
mechanism guided by the single-index direction. We consider two settings: one
where a pilot direction is available and another where the direction is
estimated from data, deriving theoretical regret bounds for both cases. When a
pilot direction is available with sufficient accuracy, our approach achieves
minimax-optimal rates (with $d = 1$) for nonparametric batched bandits,
circumventing the curse of dimensionality. Extensive experiments on simulated
and real-world datasets demonstrate the effectiveness of our algorithm compared
to the nonparametric batched bandit method introduced by
\cite{jiang2024batched}.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 17:23:55 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Arya",
"Sakshi",
""
],
[
"Song",
"Hyebin",
""
]
]
| TITLE: Semi-Parametric Batched Global Multi-Armed Bandits with Covariates
ABSTRACT: The multi-armed bandits (MAB) framework is a widely used approach for
sequential decision-making, where a decision-maker selects an arm in each round
with the goal of maximizing long-term rewards. Moreover, in many practical
applications, such as personalized medicine and recommendation systems,
feedback is provided in batches, contextual information is available at the
time of decision-making, and rewards from different arms are related rather
than independent. We propose a novel semi-parametric framework for batched
bandits with covariates and a shared parameter across arms, leveraging the
single-index regression (SIR) model to capture relationships between arm
rewards while balancing interpretability and flexibility. Our algorithm,
Batched single-Index Dynamic binning and Successive arm elimination (BIDS),
employs a batched successive arm elimination strategy with a dynamic binning
mechanism guided by the single-index direction. We consider two settings: one
where a pilot direction is available and another where the direction is
estimated from data, deriving theoretical regret bounds for both cases. When a
pilot direction is available with sufficient accuracy, our approach achieves
minimax-optimal rates (with $d = 1$) for nonparametric batched bandits,
circumventing the curse of dimensionality. Extensive experiments on simulated
and real-world datasets demonstrate the effectiveness of our algorithm compared
to the nonparametric batched bandit method introduced by
\cite{jiang2024batched}.
| no_new_dataset | 0.946794 |
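The record above combines three ingredients: projecting covariates onto a single-index direction, dynamically binning the projected scores, and batched successive arm elimination. The sketch below is a heavily simplified, hypothetical illustration of those ingredients (quantile binning plus Hoeffding-style elimination); it is not the BIDS algorithm, and the pilot direction, confidence widths, and binning rule are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def project_and_bin(X, direction, n_bins):
        """Project covariates onto the (pilot) single-index direction and bin the scores."""
        scores = X @ direction
        edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
        return np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, n_bins - 1)

    def eliminate_arms(rewards_by_arm, delta=0.05):
        """Successive-elimination step inside one bin: drop arms whose upper confidence
        bound falls below the best arm's lower bound (Hoeffding-style widths)."""
        stats = {a: (np.mean(r), np.sqrt(np.log(2 / delta) / (2 * len(r))))
                 for a, r in rewards_by_arm.items() if len(r) > 0}
        best_lower = max(m - w for m, w in stats.values())
        return [a for a, (m, w) in stats.items() if m + w >= best_lower]

    # Toy batch: 2 arms, a 1-dimensional index of 3-dimensional covariates.
    X = rng.normal(size=(200, 3))
    direction = np.array([0.6, 0.8, 0.0])          # assumed pilot direction
    bins = project_and_bin(X, direction, n_bins=4)
    rewards = {0: rng.normal(0.2, 1, 50), 1: rng.normal(0.5, 1, 50)}
    print("active arms in a bin:", eliminate_arms(rewards))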
2503.00569 | Jake Perazzone | Jake B. Perazzone, Shiqiang Wang, Mingyue Ji, Kevin Chan | Communication-Efficient Device Scheduling for Federated Learning Using
Lyapunov Optimization | Accepted in IEEE/ACM Transactions on Networking | null | null | null | cs.LG cs.DC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) is a useful tool that enables the training of machine
learning models over distributed data without having to collect data centrally.
When deploying FL in constrained wireless environments, however, intermittent
connectivity of devices, heterogeneous connection quality, and non-i.i.d. data
can severely slow convergence. In this paper, we consider FL with arbitrary
device participation probabilities for each round and show that by weighing
each device's update by the reciprocal of their per-round participation
probability, we can guarantee convergence to a stationary point. Our bound
applies to non-convex loss functions and non-i.i.d. datasets and recovers
state-of-the-art convergence rates for both full and uniform partial
participation, including linear speedup, with only a single-sided learning
rate. Then, using the derived convergence bound, we develop a new online client
selection and power allocation algorithm that utilizes the Lyapunov
drift-plus-penalty framework to opportunistically minimize a function of the
convergence bound and the average communication time under a transmit power
constraint. We use optimization over manifold techniques to obtain a solution
to the minimization problem. Thanks to the Lyapunov framework, one key feature
of the algorithm is that knowledge of the channel distribution is not required
and only the instantaneous channel state information needs to be known. Using
the CIFAR-10 dataset with varying levels of data heterogeneity, we show through
simulations that the communication time can be significantly decreased using
our algorithm compared to uniformly random participation, especially for
heterogeneous channel conditions.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 17:30:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Perazzone",
"Jake B.",
""
],
[
"Wang",
"Shiqiang",
""
],
[
"Ji",
"Mingyue",
""
],
[
"Chan",
"Kevin",
""
]
]
| TITLE: Communication-Efficient Device Scheduling for Federated Learning Using
Lyapunov Optimization
ABSTRACT: Federated learning (FL) is a useful tool that enables the training of machine
learning models over distributed data without having to collect data centrally.
When deploying FL in constrained wireless environments, however, intermittent
connectivity of devices, heterogeneous connection quality, and non-i.i.d. data
can severely slow convergence. In this paper, we consider FL with arbitrary
device participation probabilities for each round and show that by weighing
each device's update by the reciprocal of their per-round participation
probability, we can guarantee convergence to a stationary point. Our bound
applies to non-convex loss functions and non-i.i.d. datasets and recovers
state-of-the-art convergence rates for both full and uniform partial
participation, including linear speedup, with only a single-sided learning
rate. Then, using the derived convergence bound, we develop a new online client
selection and power allocation algorithm that utilizes the Lyapunov
drift-plus-penalty framework to opportunistically minimize a function of the
convergence bound and the average communication time under a transmit power
constraint. We use optimization over manifold techniques to obtain a solution
to the minimization problem. Thanks to the Lyapunov framework, one key feature
of the algorithm is that knowledge of the channel distribution is not required
and only the instantaneous channel state information needs to be known. Using
the CIFAR-10 dataset with varying levels of data heterogeneity, we show through
simulations that the communication time can be significantly decreased using
our algorithm compared to uniformly random participation, especially for
heterogeneous channel conditions.
| no_new_dataset | 0.942823 |
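The aggregation rule described in the record above — scaling each participating device's update by the reciprocal of its per-round participation probability so the round update is unbiased — can be sketched in a few lines. The toy probabilities and updates below are assumptions; the Lyapunov-based client selection and power allocation are not shown.

    import numpy as np

    rng = np.random.default_rng(1)

    def aggregate(updates, participated, probs):
        """Unbiased aggregation: each participating client's update is scaled by 1/p_i,
        then averaged over ALL clients, so the expectation equals the full-participation mean."""
        n_clients = len(probs)
        total = np.zeros_like(updates[0])
        for i in participated:
            total += updates[i] / probs[i]
        return total / n_clients

    # Toy round: 4 clients with heterogeneous participation probabilities.
    probs = np.array([0.9, 0.5, 0.3, 0.8])
    updates = [rng.normal(size=5) for _ in range(4)]
    participated = [i for i in range(4) if rng.random() < probs[i]]
    print("round update:", aggregate(updates, participated, probs))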
2503.00586 | Xiyu Ding | Shijia Zhang, Xiyu Ding, Brian Caffo, Junyu Chen, Cindy Zhang, Hadi
Kharrazi, and Zheyu Wang | Cross-Attention Fusion of MRI and Jacobian Maps for Alzheimer's Disease
Diagnosis | Submitted to MICCAI 2025 | null | null | null | eess.IV cs.CV q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Early diagnosis of Alzheimer's disease (AD) is critical for intervention
before irreversible neurodegeneration occurs. Structural MRI (sMRI) is widely
used for AD diagnosis, but conventional deep learning approaches primarily rely
on intensity-based features, which require large datasets to capture subtle
structural changes. Jacobian determinant maps (JSM) provide complementary
information by encoding localized brain deformations, yet existing multimodal
fusion strategies fail to fully integrate these features with sMRI. We propose
a cross-attention fusion framework to model the intrinsic relationship between
sMRI intensity and JSM-derived deformations for AD classification. Using the
Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we compare
cross-attention, pairwise self-attention, and bottleneck attention with four
pre-trained 3D image encoders. Cross-attention fusion achieves superior
performance, with mean ROC-AUC scores of 0.903 (+/-0.033) for AD vs.
cognitively normal (CN) and 0.692 (+/-0.061) for mild cognitive impairment
(MCI) vs. CN. Despite its strong performance, our model remains highly
efficient, with only 1.56 million parameters--over 40 times fewer than
ResNet-34 (63M) and Swin UNETR (61.98M). These findings demonstrate the
potential of cross-attention fusion for improving AD diagnosis while
maintaining computational efficiency.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 18:50:46 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Shijia",
""
],
[
"Ding",
"Xiyu",
""
],
[
"Caffo",
"Brian",
""
],
[
"Chen",
"Junyu",
""
],
[
"Zhang",
"Cindy",
""
],
[
"Kharrazi",
"Hadi",
""
],
[
"Wang",
"Zheyu",
""
]
]
| TITLE: Cross-Attention Fusion of MRI and Jacobian Maps for Alzheimer's Disease
Diagnosis
ABSTRACT: Early diagnosis of Alzheimer's disease (AD) is critical for intervention
before irreversible neurodegeneration occurs. Structural MRI (sMRI) is widely
used for AD diagnosis, but conventional deep learning approaches primarily rely
on intensity-based features, which require large datasets to capture subtle
structural changes. Jacobian determinant maps (JSM) provide complementary
information by encoding localized brain deformations, yet existing multimodal
fusion strategies fail to fully integrate these features with sMRI. We propose
a cross-attention fusion framework to model the intrinsic relationship between
sMRI intensity and JSM-derived deformations for AD classification. Using the
Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we compare
cross-attention, pairwise self-attention, and bottleneck attention with four
pre-trained 3D image encoders. Cross-attention fusion achieves superior
performance, with mean ROC-AUC scores of 0.903 (+/-0.033) for AD vs.
cognitively normal (CN) and 0.692 (+/-0.061) for mild cognitive impairment
(MCI) vs. CN. Despite its strong performance, our model remains highly
efficient, with only 1.56 million parameters--over 40 times fewer than
ResNet-34 (63M) and Swin UNETR (61.98M). These findings demonstrate the
potential of cross-attention fusion for improving AD diagnosis while
maintaining computational efficiency.
| no_new_dataset | 0.947088 |
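A minimal sketch of cross-attention fusion between two token streams as described above, with sMRI features as queries and JSM features as keys/values. PyTorch, the placeholder encoder outputs, the dimensions, the pooling, and the two-class head are illustrative choices, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class CrossAttentionFusion(nn.Module):
        """Fuse two feature streams: tokens from one modality attend to the other."""
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)
            self.head = nn.Linear(dim, 2)           # e.g. AD vs. CN logits

        def forward(self, smri_tokens, jsm_tokens):
            # Queries come from the sMRI stream, keys/values from the JSM stream.
            fused, _ = self.attn(query=smri_tokens, key=jsm_tokens, value=jsm_tokens)
            fused = self.norm(fused + smri_tokens)  # residual connection
            return self.head(fused.mean(dim=1))     # pool tokens, then classify

    # Toy tensors standing in for encoder outputs (batch=2, 16 tokens, dim=64).
    smri = torch.randn(2, 16, 64)
    jsm = torch.randn(2, 16, 64)
    print(CrossAttentionFusion()(smri, jsm).shape)  # torch.Size([2, 2])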
2503.00592 | Aniket Kriplani | Nicky Kriplani, Minh Pham, Gowthami Somepalli, Chinmay Hegde, Niv
Cohen | SolidMark: Evaluating Image Memorization in Generative Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent works have shown that diffusion models are able to memorize training
images and emit them at generation time. However, the metrics used to evaluate
memorization and its mitigation techniques suffer from dataset-dependent biases
and struggle to detect whether a given specific image has been memorized or
not.
This paper begins with a comprehensive exploration of issues surrounding
memorization metrics in diffusion models. Then, to mitigate these issues, we
introduce $\rm \style{font-variant: small-caps}{SolidMark}$, a novel evaluation
method that provides a per-image memorization score. We then re-evaluate
existing memorization mitigation techniques. We also show that $\rm
\style{font-variant: small-caps}{SolidMark}$ is capable of evaluating
fine-grained pixel-level memorization. Finally, we release a variety of models
based on $\rm \style{font-variant: small-caps}{SolidMark}$ to facilitate
further research for understanding memorization phenomena in generative models.
All of our code is available at https://github.com/NickyDCFP/SolidMark.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 19:14:51 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kriplani",
"Nicky",
""
],
[
"Pham",
"Minh",
""
],
[
"Somepalli",
"Gowthami",
""
],
[
"Hegde",
"Chinmay",
""
],
[
"Cohen",
"Niv",
""
]
]
| TITLE: SolidMark: Evaluating Image Memorization in Generative Models
ABSTRACT: Recent works have shown that diffusion models are able to memorize training
images and emit them at generation time. However, the metrics used to evaluate
memorization and its mitigation techniques suffer from dataset-dependent biases
and struggle to detect whether a given specific image has been memorized or
not.
This paper begins with a comprehensive exploration of issues surrounding
memorization metrics in diffusion models. Then, to mitigate these issues, we
introduce $\rm \style{font-variant: small-caps}{SolidMark}$, a novel evaluation
method that provides a per-image memorization score. We then re-evaluate
existing memorization mitigation techniques. We also show that $\rm
\style{font-variant: small-caps}{SolidMark}$ is capable of evaluating
fine-grained pixel-level memorization. Finally, we release a variety of models
based on $\rm \style{font-variant: small-caps}{SolidMark}$ to facilitate
further research for understanding memorization phenomena in generative models.
All of our code is available at https://github.com/NickyDCFP/SolidMark.
| no_new_dataset | 0.942135 |
2503.00594 | Omar Costilla Reyes | Jose-Manuel Mu\~noz and Odin Mor\'on-Garc\'ia and J. Ignacio Hidalgo
and Omar Costilla-Reyes | Estimation of total body fat using symbolic regression and evolutionary
algorithms | Accepted at Evostar 2025 | null | null | null | cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Body fat percentage is an increasingly popular alternative to Body Mass Index
to measure overweight and obesity, offering a more accurate representation of
body composition. In this work, we evaluate three evolutionary computation
techniques, Grammatical Evolution, Context-Free Grammar Genetic Programming,
and Dynamic Structured Grammatical Evolution, to derive an interpretable
mathematical expression to estimate the percentage of body fat that is also
accurate. Our primary objective is to obtain a model that balances accuracy
with explainability, making it useful for clinical and health applications. We
compare the performance of the three variants on a public anthropometric
dataset and compare the results obtained with the QLattice framework.
Experimental results show that grammatical evolution techniques can obtain
competitive results in performance and interpretability.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 19:23:33 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Muñoz",
"Jose-Manuel",
""
],
[
"Morón-García",
"Odin",
""
],
[
"Hidalgo",
"J. Ignacio",
""
],
[
"Costilla-Reyes",
"Omar",
""
]
]
| TITLE: Estimation of total body fat using symbolic regression and evolutionary
algorithms
ABSTRACT: Body fat percentage is an increasingly popular alternative to Body Mass Index
to measure overweight and obesity, offering a more accurate representation of
body composition. In this work, we evaluate three evolutionary computation
techniques, Grammatical Evolution, Context-Free Grammar Genetic Programming,
and Dynamic Structured Grammatical Evolution, to derive an interpretable
mathematical expression to estimate the percentage of body fat that is also
accurate. Our primary objective is to obtain a model that balances accuracy
with explainability, making it useful for clinical and health applications. We
compare the performance of the three variants on a public anthropometric
dataset and compare the results obtained with the QLattice framework.
Experimental results show that grammatical evolution techniques can obtain
competitive results in performance and interpretability.
| no_new_dataset | 0.95222 |
2503.00608 | Lin An | Lin An, Andrew A. Li, Vaisnavi Nemala, Gabriel Visotsky | Real-Time Personalization with Simple Transformers | null | null | null | null | math.OC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time personalization has advanced significantly in recent years, with
platforms utilizing machine learning models to predict user preferences based
on rich behavioral data on each individual user. Traditional approaches usually
rely on embedding-based machine learning models to capture user preferences,
and then reduce the final optimization task to nearest-neighbors, which can be
performed extremely fast. However, these models struggle to capture complex
user behaviors, which are essential for making accurate recommendations.
Transformer-based models, on the other hand, are known for their practical
ability to model sequential behaviors, and hence have been intensively used in
personalization recently to overcome these limitations. However, optimizing
recommendations under transformer-based models is challenging due to their
complicated architectures. In this paper, we address this challenge by
considering a specific class of transformers, showing its ability to represent
complex user preferences, and developing efficient algorithms for real-time
personalization.
We focus on a particular set of transformers, called simple transformers,
which contain a single self-attention layer. We show that simple transformers
are capable of capturing complex user preferences. We then develop an algorithm
that enables fast optimization of recommendation tasks based on simple
transformers. Our algorithm achieves near-optimal performance in sub-linear
time. Finally, we demonstrate the effectiveness of our approach through an
empirical study on datasets from Spotify and Trivago. Our experiment results
show that (1) simple transformers can model/predict user preferences
substantially more accurately than non-transformer models and nearly as
accurately as more complex transformers, and (2) our algorithm completes
simple-transformer-based recommendation tasks quickly and effectively.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 20:29:33 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"An",
"Lin",
""
],
[
"Li",
"Andrew A.",
""
],
[
"Nemala",
"Vaisnavi",
""
],
[
"Visotsky",
"Gabriel",
""
]
]
| TITLE: Real-Time Personalization with Simple Transformers
ABSTRACT: Real-time personalization has advanced significantly in recent years, with
platforms utilizing machine learning models to predict user preferences based
on rich behavioral data on each individual user. Traditional approaches usually
rely on embedding-based machine learning models to capture user preferences,
and then reduce the final optimization task to nearest-neighbors, which can be
performed extremely fast. However, these models struggle to capture complex
user behaviors, which are essential for making accurate recommendations.
Transformer-based models, on the other hand, are known for their practical
ability to model sequential behaviors, and hence have been intensively used in
personalization recently to overcome these limitations. However, optimizing
recommendations under transformer-based models is challenging due to their
complicated architectures. In this paper, we address this challenge by
considering a specific class of transformers, showing its ability to represent
complex user preferences, and developing efficient algorithms for real-time
personalization.
We focus on a particular set of transformers, called simple transformers,
which contain a single self-attention layer. We show that simple transformers
are capable of capturing complex user preferences. We then develop an algorithm
that enables fast optimization of recommendation tasks based on simple
transformers. Our algorithm achieves near-optimal performance in sub-linear
time. Finally, we demonstrate the effectiveness of our approach through an
empirical study on datasets from Spotify and Trivago. Our experiment results
show that (1) simple transformers can model/predict user preferences
substantially more accurately than non-transformer models and nearly as
accurately as more complex transformers, and (2) our algorithm completes
simple-transformer-based recommendation tasks quickly and effectively.
| no_new_dataset | 0.945901 |
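The record above defines "simple transformers" as models containing a single self-attention layer. Below is a hedged sketch of such a model scoring a candidate item against a user's interaction history; the embedding size, mean pooling, and bilinear scoring head are assumptions, and the paper's sub-linear-time recommendation algorithm is not reproduced.

    import torch
    import torch.nn as nn

    class SimpleTransformer(nn.Module):
        """A 'simple transformer': one self-attention layer over the user's item history,
        followed by a scoring head for a candidate item."""
        def __init__(self, n_items=1000, dim=32, heads=2):
            super().__init__()
            self.embed = nn.Embedding(n_items, dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.score = nn.Bilinear(dim, dim, 1)

        def forward(self, history, candidate):
            h = self.embed(history)                  # (batch, seq, dim)
            h, _ = self.attn(h, h, h)                # single self-attention layer
            user = h.mean(dim=1)                     # pooled user representation
            return self.score(user, self.embed(candidate)).squeeze(-1)

    model = SimpleTransformer()
    history = torch.randint(0, 1000, (4, 20))        # 4 users, 20 past items each
    candidate = torch.randint(0, 1000, (4,))
    print(model(history, candidate).shape)           # torch.Size([4])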
2503.00615 | Muhammad Adil | Muhammad Adil, Mian Ahmad Jan, Safayat Bin Hakim, Houbing Herbert
Song, Zhanpeng Jin | xIDS-EnsembleGuard: An Explainable Ensemble Learning-based Intrusion
Detection System | Accepted in, 23rd IEEE International Conference on Trust, Security
and Privacy in Computing and Communications (TrustCom-2024) | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we focus on addressing the challenges of detecting malicious
attacks in networks by designing an advanced Explainable Intrusion Detection
System (xIDS). The existing machine learning and deep learning approaches have
invisible limitations, such as potential biases in predictions, a lack of
interpretability, and the risk of overfitting to training data. These issues
can create doubt about their usefulness, transparency, and a decrease in trust
among stakeholders. To overcome these challenges, we propose an ensemble
learning technique called "EnsembleGuard." This approach uses the predicted
outputs of multiple models, including tree-based methods (LightGBM, GBM,
Bagging, XGBoost, CatBoost) and deep learning models such as LSTM (long
short-term memory) and GRU (gated recurrent unit), to maintain a balance and
achieve trustworthy results. Our work is unique because it combines both
tree-based and deep learning models to design an interpretable and explainable
meta-model through model distillation. By considering the predictions of all
individual models, our meta-model effectively addresses key challenges and
ensures both explainable and reliable results. We evaluate our model using
well-known datasets, including UNSW-NB15, NSL-KDD, and CIC-IDS-2017, to assess
its reliability against various types of attacks. During analysis, we found
that our model outperforms both tree-based models and other comparative
approaches in different attack scenarios.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 20:49:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Adil",
"Muhammad",
""
],
[
"Jan",
"Mian Ahmad",
""
],
[
"Hakim",
"Safayat Bin",
""
],
[
"Song",
"Houbing Herbert",
""
],
[
"Jin",
"Zhanpeng",
""
]
]
| TITLE: xIDS-EnsembleGuard: An Explainable Ensemble Learning-based Intrusion
Detection System
ABSTRACT: In this paper, we focus on addressing the challenges of detecting malicious
attacks in networks by designing an advanced Explainable Intrusion Detection
System (xIDS). The existing machine learning and deep learning approaches have
invisible limitations, such as potential biases in predictions, a lack of
interpretability, and the risk of overfitting to training data. These issues
can create doubt about their usefulness, transparency, and a decrease in trust
among stakeholders. To overcome these challenges, we propose an ensemble
learning technique called "EnsembleGuard." This approach uses the predicted
outputs of multiple models, including tree-based methods (LightGBM, GBM,
Bagging, XGBoost, CatBoost) and deep learning models such as LSTM (long
short-term memory) and GRU (gated recurrent unit), to maintain a balance and
achieve trustworthy results. Our work is unique because it combines both
tree-based and deep learning models to design an interpretable and explainable
meta-model through model distillation. By considering the predictions of all
individual models, our meta-model effectively addresses key challenges and
ensures both explainable and reliable results. We evaluate our model using
well-known datasets, including UNSW-NB15, NSL-KDD, and CIC-IDS-2017, to assess
its reliability against various types of attacks. During analysis, we found
that our model outperforms both tree-based models and other comparative
approaches in different attack scenarios.
| no_new_dataset | 0.943712 |
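A small stacking sketch in the spirit of EnsembleGuard: base learners whose predicted probabilities feed an interpretable meta-model. Synthetic data stands in for UNSW-NB15/NSL-KDD/CIC-IDS-2017, two scikit-learn ensembles stand in for the paper's LightGBM/XGBoost/CatBoost and LSTM/GRU models, and the distillation step is reduced to simple stacking.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for flow-level intrusion-detection features.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Base learners.
    bases = [RandomForestClassifier(n_estimators=100, random_state=0),
             GradientBoostingClassifier(random_state=0)]
    for b in bases:
        b.fit(X_tr, y_tr)

    # Meta-model trained on the base models' predicted probabilities.
    meta_tr = np.column_stack([b.predict_proba(X_tr)[:, 1] for b in bases])
    meta_te = np.column_stack([b.predict_proba(X_te)[:, 1] for b in bases])
    meta = LogisticRegression().fit(meta_tr, y_tr)
    print("meta-model accuracy:", meta.score(meta_te, y_te))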
2503.00624 | Zaifu Zhan | Zaifu Zhan, Shuang Zhou, Huixue Zhou, Jiawen Deng, Yu Hou, Jeremy
Yeung and Rui Zhang | An evaluation of DeepSeek Models in Biomedical Natural Language
Processing | Plan to submit to AMIA 2025 Annual Symposium. 10 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advancement of Large Language Models (LLMs) has significantly impacted
biomedical Natural Language Processing (NLP), enhancing tasks such as named
entity recognition, relation extraction, event extraction, and text
classification. In this context, the DeepSeek series of models have shown
promising potential in general NLP tasks, yet their capabilities in the
biomedical domain remain underexplored. This study evaluates multiple DeepSeek
models (Distilled-DeepSeek-R1 series and Deepseek-LLMs) across four key
biomedical NLP tasks using 12 datasets, benchmarking them against
state-of-the-art alternatives (Llama3-8B, Qwen2.5-7B, Mistral-7B, Phi-4-14B,
Gemma-2-9B). Our results reveal that while DeepSeek models perform
competitively in named entity recognition and text classification, challenges
persist in event and relation extraction due to precision-recall trade-offs. We
provide task-specific model recommendations and highlight future research
directions. This evaluation underscores the strengths and limitations of
DeepSeek models in biomedical NLP, guiding their future deployment and
optimization.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 21:26:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhan",
"Zaifu",
""
],
[
"Zhou",
"Shuang",
""
],
[
"Zhou",
"Huixue",
""
],
[
"Deng",
"Jiawen",
""
],
[
"Hou",
"Yu",
""
],
[
"Yeung",
"Jeremy",
""
],
[
"Zhang",
"Rui",
""
]
]
| TITLE: An evaluation of DeepSeek Models in Biomedical Natural Language
Processing
ABSTRACT: The advancement of Large Language Models (LLMs) has significantly impacted
biomedical Natural Language Processing (NLP), enhancing tasks such as named
entity recognition, relation extraction, event extraction, and text
classification. In this context, the DeepSeek series of models have shown
promising potential in general NLP tasks, yet their capabilities in the
biomedical domain remain underexplored. This study evaluates multiple DeepSeek
models (Distilled-DeepSeek-R1 series and Deepseek-LLMs) across four key
biomedical NLP tasks using 12 datasets, benchmarking them against
state-of-the-art alternatives (Llama3-8B, Qwen2.5-7B, Mistral-7B, Phi-4-14B,
Gemma-2-9B). Our results reveal that while DeepSeek models perform
competitively in named entity recognition and text classification, challenges
persist in event and relation extraction due to precision-recall trade-offs. We
provide task-specific model recommendations and highlight future research
directions. This evaluation underscores the strengths and limitations of
DeepSeek models in biomedical NLP, guiding their future deployment and
optimization.
| no_new_dataset | 0.942665 |
2503.00639 | Zijian Li | Zijian Li, Shunxing Fan, Yujia Zheng, Ignavier Ng, Shaoan Xie, Guangyi
Chen, Xinshuai Dong, Ruichu Cai, Kun Zhang | Synergy Between Sufficient Changes and Sparse Mixing Procedure for
Disentangled Representation Learning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disentangled representation learning aims to uncover latent variables
underlying the observed data, and generally speaking, rather strong assumptions
are needed to ensure identifiability. Some approaches rely on sufficient
changes on the distribution of latent variables indicated by auxiliary
variables such as domain indices, but acquiring enough domains is often
challenging. Alternative approaches exploit structural sparsity assumptions on
the mixing procedure, but such constraints are usually (partially) violated in
practice. Interestingly, we find that these two seemingly unrelated assumptions
can actually complement each other to achieve identifiability. Specifically,
when conditioned on auxiliary variables, the sparse mixing procedure assumption
provides structural constraints on the mapping from estimated to true latent
variables and hence compensates for potentially insufficient distribution
changes. Building on this insight, we propose an identifiability theory with
less restrictive constraints regarding distribution changes and the sparse
mixing procedure, enhancing applicability to real-world scenarios.
Additionally, we develop an estimation framework incorporating a domain
encoding network and a sparse mixing constraint and provide two implementations
based on variational autoencoders and generative adversarial networks,
respectively. Experiment results on synthetic and real-world datasets support
our theoretical results.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 22:21:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Zijian",
""
],
[
"Fan",
"Shunxing",
""
],
[
"Zheng",
"Yujia",
""
],
[
"Ng",
"Ignavier",
""
],
[
"Xie",
"Shaoan",
""
],
[
"Chen",
"Guangyi",
""
],
[
"Dong",
"Xinshuai",
""
],
[
"Cai",
"Ruichu",
""
],
[
"Zhang",
"Kun",
""
]
]
| TITLE: Synergy Between Sufficient Changes and Sparse Mixing Procedure for
Disentangled Representation Learning
ABSTRACT: Disentangled representation learning aims to uncover latent variables
underlying the observed data, and generally speaking, rather strong assumptions
are needed to ensure identifiability. Some approaches rely on sufficient
changes on the distribution of latent variables indicated by auxiliary
variables such as domain indices, but acquiring enough domains is often
challenging. Alternative approaches exploit structural sparsity assumptions on
the mixing procedure, but such constraints are usually (partially) violated in
practice. Interestingly, we find that these two seemingly unrelated assumptions
can actually complement each other to achieve identifiability. Specifically,
when conditioned on auxiliary variables, the sparse mixing procedure assumption
provides structural constraints on the mapping from estimated to true latent
variables and hence compensates for potentially insufficient distribution
changes. Building on this insight, we propose an identifiability theory with
less restrictive constraints regarding distribution changes and the sparse
mixing procedure, enhancing applicability to real-world scenarios.
Additionally, we develop an estimation framework incorporating a domain
encoding network and a sparse mixing constraint and provide two implementations
based on variational autoencoders and generative adversarial networks,
respectively. Experiment results on synthetic and real-world datasets support
our theoretical results.
| no_new_dataset | 0.947284 |
2503.00642 | Debashis Sen | Aupendu Kar, Sobhan K. Dhara, Debashis Sen, and Prabir K. Biswas | Self-supervision via Controlled Transformation and Unpaired
Self-conditioning for Low-light Image Enhancement | Copyright 2024 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works | IEEE Transactions on Instrumentation and Measurement, vol. 73, pp.
1-13, 2024, Art no. 5013113 | 10.1109/TIM.2024.3370779 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Real-world low-light images captured by imaging devices suffer from poor
visibility and require a domain-specific enhancement to produce artifact-free
outputs that reveal details. In this paper, we propose an unpaired low-light
image enhancement network leveraging novel controlled transformation-based
self-supervision and unpaired self-conditioning strategies. The model
determines the required degrees of enhancement at the input image pixels, which
are learned from the unpaired low-lit and well-lit images without any direct
supervision. The self-supervision is based on a controlled transformation of
the input image and subsequent maintenance of its enhancement in spite of the
transformation. The self-conditioning performs training of the model on
unpaired images such that it does not enhance an already-enhanced image or a
well-lit input image. The inherent noise in the input low-light images is
handled by employing low gradient magnitude suppression in a detail-preserving
manner. In addition, our noise handling is self-conditioned by preventing the
denoising of noise-free well-lit images. The training based on low-light image
enhancement-specific attributes allows our model to avoid paired supervision
without compromising significantly in performance. While our proposed
self-supervision aids consistent enhancement, our novel self-conditioning
facilitates adequate enhancement. Extensive experiments on multiple standard
datasets demonstrate that our model, in general, outperforms the
state-of-the-art both quantitatively and subjectively. Ablation studies show
the effectiveness of our self-supervision and self-conditioning strategies, and
the related loss functions.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 22:25:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kar",
"Aupendu",
""
],
[
"Dhara",
"Sobhan K.",
""
],
[
"Sen",
"Debashis",
""
],
[
"Biswas",
"Prabir K.",
""
]
]
| TITLE: Self-supervision via Controlled Transformation and Unpaired
Self-conditioning for Low-light Image Enhancement
ABSTRACT: Real-world low-light images captured by imaging devices suffer from poor
visibility and require a domain-specific enhancement to produce artifact-free
outputs that reveal details. In this paper, we propose an unpaired low-light
image enhancement network leveraging novel controlled transformation-based
self-supervision and unpaired self-conditioning strategies. The model
determines the required degrees of enhancement at the input image pixels, which
are learned from the unpaired low-lit and well-lit images without any direct
supervision. The self-supervision is based on a controlled transformation of
the input image and subsequent maintenance of its enhancement in spite of the
transformation. The self-conditioning performs training of the model on
unpaired images such that it does not enhance an already-enhanced image or a
well-lit input image. The inherent noise in the input low-light images is
handled by employing low gradient magnitude suppression in a detail-preserving
manner. In addition, our noise handling is self-conditioned by preventing the
denoising of noise-free well-lit images. The training based on low-light image
enhancement-specific attributes allows our model to avoid paired supervision
without compromising significantly in performance. While our proposed
self-supervision aids consistent enhancement, our novel self-conditioning
facilitates adequate enhancement. Extensive experiments on multiple standard
datasets demonstrate that our model, in general, outperforms the
state-of-the-art both quantitatively and subjectively. Ablation studies show
the effectiveness of our self-supervision and self-conditioning strategies, and
the related loss functions.
| no_new_dataset | 0.949248 |
2503.00643 | Yante Li | Yante Li and Hanwen Qi and Haoyu Chen and Xinlian Liang and Guoying
Zhao | Deep Change Monitoring: A Hyperbolic Representative Learning Framework
and a Dataset for Long-term Fine-grained Tree Change Detection | 10 pages, 6 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In environmental protection, tree monitoring plays an essential role in
maintaining and improving ecosystem health. However, precise monitoring is
challenging because existing datasets fail to capture continuous fine-grained
changes in trees due to low-resolution images and high acquisition costs. In
this paper, we introduce UAVTC, a large-scale, long-term, high-resolution
dataset collected using UAVs equipped with cameras, specifically designed to
detect individual Tree Changes (TCs). UAVTC includes rich annotations and
statistics based on biological knowledge, offering a fine-grained view for tree
monitoring. To address environmental influences and effectively model the
hierarchical diversity of physiological TCs, we propose a novel Hyperbolic
Siamese Network (HSN) for TC detection, enabling compact and hierarchical
representations of dynamic tree changes.
Extensive experiments show that HSN can effectively capture complex
hierarchical changes and provide a robust solution for fine-grained TC
detection. In addition, HSN generalizes well to cross-domain face anti-spoofing
task, highlighting its broader significance in AI. We believe our work,
combining ecological insights and interdisciplinary expertise, will benefit the
community by offering a new benchmark and innovative AI technologies.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 22:29:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Yante",
""
],
[
"Qi",
"Hanwen",
""
],
[
"Chen",
"Haoyu",
""
],
[
"Liang",
"Xinlian",
""
],
[
"Zhao",
"Guoying",
""
]
]
| TITLE: Deep Change Monitoring: A Hyperbolic Representative Learning Framework
and a Dataset for Long-term Fine-grained Tree Change Detection
ABSTRACT: In environmental protection, tree monitoring plays an essential role in
maintaining and improving ecosystem health. However, precise monitoring is
challenging because existing datasets fail to capture continuous fine-grained
changes in trees due to low-resolution images and high acquisition costs. In
this paper, we introduce UAVTC, a large-scale, long-term, high-resolution
dataset collected using UAVs equipped with cameras, specifically designed to
detect individual Tree Changes (TCs). UAVTC includes rich annotations and
statistics based on biological knowledge, offering a fine-grained view for tree
monitoring. To address environmental influences and effectively model the
hierarchical diversity of physiological TCs, we propose a novel Hyperbolic
Siamese Network (HSN) for TC detection, enabling compact and hierarchical
representations of dynamic tree changes.
Extensive experiments show that HSN can effectively capture complex
hierarchical changes and provide a robust solution for fine-grained TC
detection. In addition, HSN generalizes well to cross-domain face anti-spoofing
task, highlighting its broader significance in AI. We believe our work,
combining ecological insights and interdisciplinary expertise, will benefit the
community by offering a new benchmark and innovative AI technologies.
| new_dataset | 0.964987 |
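The abstract above does not spell out the Hyperbolic Siamese Network, but hyperbolic representations are commonly realized in the Poincare ball. The sketch below shows the standard Poincare distance that a Siamese comparison of tree embeddings at two acquisition dates could use; it is an assumption about the general machinery, not the HSN itself.

    import torch

    def poincare_distance(u, v, eps=1e-5):
        """Geodesic distance in the Poincare ball:
        d(u, v) = arcosh(1 + 2*|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
        sq = lambda x: x.pow(2).sum(dim=-1)
        num = 2 * sq(u - v)
        den = (1 - sq(u)).clamp_min(eps) * (1 - sq(v)).clamp_min(eps)
        return torch.acosh(1 + num / den)

    # Toy Siamese comparison: embeddings of the same tree at two dates,
    # scaled so they lie inside the unit ball.
    z_t0 = torch.randn(8, 16) * 0.1
    z_t1 = torch.randn(8, 16) * 0.1
    change_score = poincare_distance(z_t0, z_t1)     # larger distance = larger change
    print(change_score.shape)                        # torch.Size([8])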
2503.00646 | Zeeshan Memon | Zeeshan Memon, Chen Ling, Ruochen Kong, Vishwanath Seshagiri, Andreas
Zufle and Liang Zhao | Deep Identification of Propagation Trees | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding propagation structures in graph diffusion processes, such as
epidemic spread or misinformation diffusion, is a fundamental yet challenging
problem. While existing methods primarily focus on source localization, they
cannot reconstruct the underlying propagation trees, i.e., "who infected whom",
which are essential for tracking the propagation pathways and investigating
diffusion mechanisms. In this work, we propose Deep Identification of
Propagation Trees (DIPT), a probabilistic framework that infers propagation
trees from observed diffused states. DIPT models local influence strengths
between nodes and leverages an alternating optimization strategy to jointly
learn the diffusion mechanism and reconstruct the propagation structure.
Extensive experiments on five real-world datasets demonstrate the effectiveness
of DIPT in accurately reconstructing propagation trees.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 22:31:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Memon",
"Zeeshan",
""
],
[
"Ling",
"Chen",
""
],
[
"Kong",
"Ruochen",
""
],
[
"Seshagiri",
"Vishwanath",
""
],
[
"Zufle",
"Andreas",
""
],
[
"Zhao",
"Liang",
""
]
]
| TITLE: Deep Identification of Propagation Trees
ABSTRACT: Understanding propagation structures in graph diffusion processes, such as
epidemic spread or misinformation diffusion, is a fundamental yet challenging
problem. While existing methods primarily focus on source localization, they
cannot reconstruct the underlying propagation trees, i.e., "who infected whom",
which are essential for tracking the propagation pathways and investigating
diffusion mechanisms. In this work, we propose Deep Identification of
Propagation Trees (DIPT), a probabilistic framework that infers propagation
trees from observed diffused states. DIPT models local influence strengths
between nodes and leverages an alternating optimization strategy to jointly
learn the diffusion mechanism and reconstruct the propagation structure.
Extensive experiments on five real-world datasets demonstrate the effectiveness
of DIPT in accurately reconstructing propagation trees.
| no_new_dataset | 0.951459 |
2503.00657 | Debashis Sen | Ashish Verma, Aupendu Kar, Krishnendu Ghosh, Sobhan Kanti Dhara,
Debashis Sen, and Prabir Kumar Biswas | Artificially Generated Visual Scanpath Improves Multi-label Thoracic
Disease Classification in Chest X-Ray Images | Copyright 2024 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works | vol. 73, pp. 1-11, 2024, Art no. 4507311 | 10.1109/TIM.2024.3428591 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Expert radiologists visually scan Chest X-Ray (CXR) images, sequentially
fixating on anatomical structures to perform disease diagnosis. An automatic
multi-label classifier of diseases in CXR images can benefit by incorporating
aspects of the radiologists' approach. Recorded visual scanpaths of
radiologists on CXR images can be used for the said purpose. But, such
scanpaths are not available for most CXR images, which creates a gap even for
modern deep learning based classifiers. This paper proposes to mitigate this
gap by generating effective artificial visual scanpaths using a visual scanpath
prediction model for CXR images. Further, a multi-class multi-label classifier
framework is proposed that uses a generated scanpath and visual image features
to classify diseases in CXR images. While the scanpath predictor is based on a
recurrent neural network, the multi-label classifier involves a novel iterative
sequential model with an attention module. We show that our scanpath predictor
generates human-like visual scanpaths. We also demonstrate that the use of
artificial visual scanpaths improves multi-class multi-label disease
classification results on CXR images. The above observations are made from
experiments involving around 0.2 million CXR images from 2 widely-used datasets
considering the multi-label classification of 14 pathological findings. Code
link: https://github.com/ashishverma03/SDC
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 23:13:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Verma",
"Ashish",
""
],
[
"Kar",
"Aupendu",
""
],
[
"Ghosh",
"Krishnendu",
""
],
[
"Dhara",
"Sobhan Kanti",
""
],
[
"Sen",
"Debashis",
""
],
[
"Biswas",
"Prabir Kumar",
""
]
]
| TITLE: Artificially Generated Visual Scanpath Improves Multi-label Thoracic
Disease Classification in Chest X-Ray Images
ABSTRACT: Expert radiologists visually scan Chest X-Ray (CXR) images, sequentially
fixating on anatomical structures to perform disease diagnosis. An automatic
multi-label classifier of diseases in CXR images can benefit by incorporating
aspects of the radiologists' approach. Recorded visual scanpaths of
radiologists on CXR images can be used for the said purpose. But, such
scanpaths are not available for most CXR images, which creates a gap even for
modern deep learning based classifiers. This paper proposes to mitigate this
gap by generating effective artificial visual scanpaths using a visual scanpath
prediction model for CXR images. Further, a multi-class multi-label classifier
framework is proposed that uses a generated scanpath and visual image features
to classify diseases in CXR images. While the scanpath predictor is based on a
recurrent neural network, the multi-label classifier involves a novel iterative
sequential model with an attention module. We show that our scanpath predictor
generates human-like visual scanpaths. We also demonstrate that the use of
artificial visual scanpaths improves multi-class multi-label disease
classification results on CXR images. The above observations are made from
experiments involving around 0.2 million CXR images from 2 widely-used datasets
considering the multi-label classification of 14 pathological findings. Code
link: https://github.com/ashishverma03/SDC
| no_new_dataset | 0.951323 |
2503.00658 | Chao Song | Chao Song, Tariq Alkhalifah, Umair Bin Waheed, Silin Wang, Cai Liu | A new practical and effective source-independent full-waveform inversion
with a velocity-distribution supported deep image prior: Applications to two
real datasets | 23 pages, 25 figures | null | null | null | physics.geo-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Full-waveform inversion (FWI) is an advanced technique for reconstructing
high-resolution subsurface physical parameters by progressively minimizing the
discrepancy between observed and predicted seismic data. However, conventional
FWI encounters challenges in real data applications, primarily due to its
conventional objective of direct measurements of the data misfit. Accurate
estimation of the source wavelet is essential for effective data fitting,
alongside the need for low-frequency data and a reasonable initial model to
prevent cycle skipping. Additionally, wave equation solvers often struggle to
accurately simulate the amplitude of observed data in real applications. To
address these challenges, we introduce a correlation-based source-independent
objective function for FWI that aims to mitigate source uncertainty and
amplitude dependency, which effectively enhances its practicality for real data
applications. We develop a deep-learning framework constrained by this new
objective function with a velocity-distribution supported deep image prior,
which reparameterizes velocity inversion into trainable parameters within an
autoencoder, thereby reducing the nonlinearity in the conventional FWI's
objective function. We demonstrate the superiority of our proposed method using
synthetic data from benchmark velocity models and, more importantly, two real
datasets. These examples highlight its effectiveness and practicality even
under challenging conditions, such as missing low frequencies, a crude initial
velocity model, and an incorrect source wavelet.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 23:15:43 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Song",
"Chao",
""
],
[
"Alkhalifah",
"Tariq",
""
],
[
"Waheed",
"Umair Bin",
""
],
[
"Wang",
"Silin",
""
],
[
"Liu",
"Cai",
""
]
]
| TITLE: A new practical and effective source-independent full-waveform inversion
with a velocity-distribution supported deep image prior: Applications to two
real datasets
ABSTRACT: Full-waveform inversion (FWI) is an advanced technique for reconstructing
high-resolution subsurface physical parameters by progressively minimizing the
discrepancy between observed and predicted seismic data. However, conventional
FWI encounters challenges in real data applications, primarily due to its
conventional objective of direct measurements of the data misfit. Accurate
estimation of the source wavelet is essential for effective data fitting,
alongside the need for low-frequency data and a reasonable initial model to
prevent cycle skipping. Additionally, wave equation solvers often struggle to
accurately simulate the amplitude of observed data in real applications. To
address these challenges, we introduce a correlation-based source-independent
objective function for FWI that aims to mitigate source uncertainty and
amplitude dependency, which effectively enhances its practicality for real data
applications. We develop a deep-learning framework constrained by this new
objective function with a velocity-distribution supported deep image prior,
which reparameterizes velocity inversion into trainable parameters within an
autoencoder, thereby reducing the nonlinearity in the conventional FWI's
objective function. We demonstrate the superiority of our proposed method using
synthetic data from benchmark velocity models and, more importantly, two real
datasets. These examples highlight its effectiveness and practicality even
under challenging conditions, such as missing low frequencies, a crude initial
velocity model, and an incorrect source wavelet.
| no_new_dataset | 0.945551 |
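The exact form of the correlation-based source-independent objective is not given in the abstract above; the sketch below only illustrates the general idea that a normalized zero-lag correlation misfit is insensitive to the amplitude scale of each trace, and hence less sensitive to source and amplitude errors than a plain L2 data misfit. The trace-wise formulation is an assumption.

    import numpy as np

    def correlation_misfit(d_obs, d_syn, eps=1e-12):
        """Trace-wise normalized zero-lag correlation misfit:
        J = sum_i (1 - <d_obs_i, d_syn_i> / (|d_obs_i| * |d_syn_i|))."""
        misfit = 0.0
        for o, s in zip(d_obs, d_syn):
            denom = np.linalg.norm(o) * np.linalg.norm(s) + eps
            misfit += 1.0 - float(o @ s) / denom
        return misfit

    # Toy shot gather: 5 traces, 200 samples; the synthetic differs by a scale factor,
    # which a correlation-based misfit largely ignores.
    rng = np.random.default_rng(2)
    obs = rng.normal(size=(5, 200))
    print(correlation_misfit(obs, 3.0 * obs))        # ~0: insensitive to amplitude scaling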
2503.00660 | Nuno Laranjeiro | Renato Andrade, C\'esar Teixeira, Nuno Laranjeiro, Marco Vieira | An Empirical Study on the Classification of Bug Reports with Machine
Learning | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Software defects are a major threat to the reliability of computer systems.
The literature shows that more than 30% of bug reports submitted in large
software projects are misclassified (i.e., are feature requests, or mistakes
made by the bug reporter), leading developers to place great effort in manually
inspecting them. Machine Learning algorithms can be used for the automatic
classification of issue reports. Still, little is known regarding key aspects
of training models, such as the influence of programming languages and issue
tracking systems. In this paper, we use a dataset containing more than 660,000
issue reports, collected from heterogeneous projects hosted in different issue
tracking systems, to study how different factors (e.g., project language,
report content) can influence the performance of models in handling
classification of issue reports. Results show that using the report title or
description does not significantly differ; Support Vector Machine, Logistic
Regression, and Random Forest are effective in classifying issue reports;
programming languages and issue tracking systems influence classification
outcomes; and models based on heterogeneous projects can classify reports from
projects not present during training. Based on findings, we propose guidelines
for future research, including recommendations for using heterogeneous data and
selecting high-performing algorithms.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 23:19:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Andrade",
"Renato",
""
],
[
"Teixeira",
"César",
""
],
[
"Laranjeiro",
"Nuno",
""
],
[
"Vieira",
"Marco",
""
]
]
| TITLE: An Empirical Study on the Classification of Bug Reports with Machine
Learning
ABSTRACT: Software defects are a major threat to the reliability of computer systems.
The literature shows that more than 30% of bug reports submitted in large
software projects are misclassified (i.e., are feature requests, or mistakes
made by the bug reporter), leading developers to place great effort in manually
inspecting them. Machine Learning algorithms can be used for the automatic
classification of issue reports. Still, little is known regarding key aspects
of training models, such as the influence of programming languages and issue
tracking systems. In this paper, we use a dataset containing more than 660,000
issue reports, collected from heterogeneous projects hosted in different issue
tracking systems, to study how different factors (e.g., project language,
report content) can influence the performance of models in handling
classification of issue reports. Results show that using the report title or
description does not significantly differ; Support Vector Machine, Logistic
Regression, and Random Forest are effective in classifying issue reports;
programming languages and issue tracking systems influence classification
outcomes; and models based on heterogeneous projects can classify reports from
projects not present during training. Based on findings, we propose guidelines
for future research, including recommendations for using heterogeneous data and
selecting high-performing algorithms.
| new_dataset | 0.962285 |
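A minimal sketch of the kind of pipeline evaluated in the record above: TF-IDF features from a report's title or description fed to Support Vector Machine, Logistic Regression, and Random Forest classifiers. The toy reports and labels are invented for illustration; the study's 660,000-report dataset is not reproduced here.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny hypothetical issue reports: 1 = real bug, 0 = non-bug (feature request, question).
    titles = ["App crashes when saving file",
              "Please add dark mode",
              "NullPointerException on login",
              "How do I change the language?"]
    labels = [1, 0, 1, 0]

    for clf in (LogisticRegression(max_iter=1000), LinearSVC(), RandomForestClassifier()):
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(titles, labels)
        print(type(clf).__name__, model.predict(["Crash while opening settings"]))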
2503.00665 | Shinichiro Mori | Chisako Hayashi, Shinichiro Mori, Yasukuni Mori, Lim Taehyeung, Hiroki
Suyari, Hitoshi Ishikawa | Development of an Unpaired Deep Neural Network for Synthesizing X-ray
Fluoroscopic Images from Digitally Reconstructed Tomography in Image Guided
Radiotherapy | null | null | null | null | cs.CV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Purpose The purpose of this study was to develop and evaluate a deep neural
network (DNN) capable of generating flat-panel detector (FPD) images from
digitally reconstructed radiography (DRR) images in lung cancer treatment, with
the aim of improving clinical workflows in image-guided radiotherapy.
Methods A modified CycleGAN architecture was trained on paired DRR-FPD image
data obtained from patients with lung tumors. The training dataset consisted of
over 400 DRR-FPD image pairs, and the final model was evaluated on an
independent set of 100 FPD images. Mean absolute error (MAE), peak
signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and
Kernel Inception Distance (KID) were used to quantify the similarity between
synthetic and ground-truth FPD images. Computation time for generating
synthetic images was also measured.
Results Despite some positional mismatches in the DRR-FPD pairs, the
synthetic FPD images closely resembled the ground-truth FPD images. The
proposed DNN achieved notable improvements over both input DRR images and a
U-Net-based method in terms of MAE, PSNR, SSIM, and KID. The average image
generation time was on the order of milliseconds per image, indicating its
potential for real-time application. Qualitative evaluations showed that the
DNN successfully reproduced image noise patterns akin to real FPD images,
reducing the need for manual noise adjustments.
Conclusions The proposed DNN effectively converted DRR images into realistic
FPD images for thoracic cases, offering a fast and practical method that could
streamline patient setup verification and enhance overall clinical workflow.
Future work should validate the model across different imaging systems and
address remaining challenges in marker visualization, thereby fostering broader
clinical adoption.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 23:34:43 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hayashi",
"Chisako",
""
],
[
"Mori",
"Shinichiro",
""
],
[
"Mori",
"Yasukuni",
""
],
[
"Taehyeung",
"Lim",
""
],
[
"Suyari",
"Hiroki",
""
],
[
"Ishikawa",
"Hitoshi",
""
]
]
| TITLE: Development of an Unpaired Deep Neural Network for Synthesizing X-ray
Fluoroscopic Images from Digitally Reconstructed Tomography in Image Guided
Radiotherapy
ABSTRACT: Purpose The purpose of this study was to develop and evaluate a deep neural
network (DNN) capable of generating flat-panel detector (FPD) images from
digitally reconstructed radiography (DRR) images in lung cancer treatment, with
the aim of improving clinical workflows in image-guided radiotherapy.
Methods A modified CycleGAN architecture was trained on paired DRR-FPD image
data obtained from patients with lung tumors. The training dataset consisted of
over 400 DRR-FPD image pairs, and the final model was evaluated on an
independent set of 100 FPD images. Mean absolute error (MAE), peak
signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and
Kernel Inception Distance (KID) were used to quantify the similarity between
synthetic and ground-truth FPD images. Computation time for generating
synthetic images was also measured.
Results Despite some positional mismatches in the DRR-FPD pairs, the
synthetic FPD images closely resembled the ground-truth FPD images. The
proposed DNN achieved notable improvements over both input DRR images and a
U-Net-based method in terms of MAE, PSNR, SSIM, and KID. The average image
generation time was on the order of milliseconds per image, indicating its
potential for real-time application. Qualitative evaluations showed that the
DNN successfully reproduced image noise patterns akin to real FPD images,
reducing the need for manual noise adjustments.
Conclusions The proposed DNN effectively converted DRR images into realistic
FPD images for thoracic cases, offering a fast and practical method that could
streamline patient setup verification and enhance overall clinical workflow.
Future work should validate the model across different imaging systems and
address remaining challenges in marker visualization, thereby fostering broader
clinical adoption.
| no_new_dataset | 0.94887 |
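A small helper, assuming scikit-image and NumPy, for the image-similarity metrics reported in the record above (MAE, PSNR, SSIM); KID requires a pretrained feature extractor and is omitted. The toy arrays stand in for a ground-truth and a synthetic FPD image.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_pair(real, synthetic, data_range=1.0):
        """Compute MAE, PSNR and SSIM between a real and a generated image."""
        mae = float(np.abs(real - synthetic).mean())
        psnr = peak_signal_noise_ratio(real, synthetic, data_range=data_range)
        ssim = structural_similarity(real, synthetic, data_range=data_range)
        return {"MAE": mae, "PSNR": psnr, "SSIM": ssim}

    rng = np.random.default_rng(3)
    real = rng.random((256, 256)).astype(np.float32)
    fake = np.clip(real + rng.normal(scale=0.05, size=real.shape), 0, 1).astype(np.float32)
    print(evaluate_pair(real, fake))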