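The table below gives the dataset's schema (column names and types), followed by the first records. A minimal sketch, assuming the records have been exported to a local JSON Lines file (the `arxiv_metadata.jsonl` path is hypothetical, not part of the dataset), of how the rows, the nested `versions` and `authors_parsed` fields, and the `prompt` column (which simply concatenates `title` and `abstract`) might be read back:

```python
import json

def load_records(path="arxiv_metadata.jsonl"):
    """Yield one record dict per line from a local JSON Lines export of this dataset."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def build_prompt(record):
    """Recreate the `prompt` column from `title` and `abstract` (TITLE/ABSTRACT layout)."""
    return f"TITLE: {record['title']}\nABSTRACT: {record['abstract']}"

if __name__ == "__main__":
    # NOTE: the file name above is an assumption for illustration only.
    for record in load_records():
        # `versions` is a list of {"version", "created"} dicts; the last entry is the latest revision.
        latest = record["versions"][-1]["created"] if record["versions"] else None
        # `authors_parsed` is a sequence of [last, first, suffix] triples.
        authors = ", ".join(
            f"{first} {last}".strip() for last, first, _suffix in record["authors_parsed"]
        )
        print(record["id"], "|", latest, "|", authors)
```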
id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2401.09002 | Mingyu Jin | Dong Shu, Chong Zhang, Mingyu Jin, Zihao Zhou, Lingyao Li, Yongfeng
Zhang | AttackEval: How to Evaluate the Effectiveness of Jailbreak Attacking on
Large Language Models | Accepted by ACM SIGKDD Explorations 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Jailbreak attacks represent one of the most sophisticated threats to the
security of large language models (LLMs). To deal with such risks, we introduce
an innovative framework that can help evaluate the effectiveness of jailbreak
attacks on LLMs. Unlike traditional binary evaluations focusing solely on the
robustness of LLMs, our method assesses the attacking prompts' effectiveness.
We present two distinct evaluation frameworks: a coarse-grained evaluation and
a fine-grained evaluation. Each framework uses a scoring range from 0 to 1,
offering unique perspectives and allowing for the assessment of attack
effectiveness in different scenarios. Additionally, we develop a comprehensive
ground truth dataset specifically tailored for jailbreak prompts. This dataset
is a crucial benchmark for our current study and provides a foundational
resource for future research. By comparing with traditional evaluation methods,
our study shows that the current results align with baseline metrics while
offering a more nuanced and fine-grained assessment. It also helps identify
potentially harmful attack prompts that might appear harmless in traditional
evaluations. Overall, our work establishes a solid foundation for assessing a
broader range of attack prompts in prompt injection.
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2024 06:42:44 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Feb 2024 02:20:31 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Mar 2024 14:08:39 GMT"
},
{
"version": "v4",
"created": "Wed, 31 Jul 2024 06:46:44 GMT"
},
{
"version": "v5",
"created": "Sat, 3 Aug 2024 06:39:25 GMT"
},
{
"version": "v6",
"created": "Tue, 18 Mar 2025 01:50:42 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Shu",
"Dong",
""
],
[
"Zhang",
"Chong",
""
],
[
"Jin",
"Mingyu",
""
],
[
"Zhou",
"Zihao",
""
],
[
"Li",
"Lingyao",
""
],
[
"Zhang",
"Yongfeng",
""
]
] | TITLE: AttackEval: How to Evaluate the Effectiveness of Jailbreak Attacking on
Large Language Models
ABSTRACT: Jailbreak attacks represent one of the most sophisticated threats to the
security of large language models (LLMs). To deal with such risks, we introduce
an innovative framework that can help evaluate the effectiveness of jailbreak
attacks on LLMs. Unlike traditional binary evaluations focusing solely on the
robustness of LLMs, our method assesses the attacking prompts' effectiveness.
We present two distinct evaluation frameworks: a coarse-grained evaluation and
a fine-grained evaluation. Each framework uses a scoring range from 0 to 1,
offering unique perspectives and allowing for the assessment of attack
effectiveness in different scenarios. Additionally, we develop a comprehensive
ground truth dataset specifically tailored for jailbreak prompts. This dataset
is a crucial benchmark for our current study and provides a foundational
resource for future research. By comparing with traditional evaluation methods,
our study shows that the current results align with baseline metrics while
offering a more nuanced and fine-grained assessment. It also helps identify
potentially harmful attack prompts that might appear harmless in traditional
evaluations. Overall, our work establishes a solid foundation for assessing a
broader range of attack prompts in prompt injection.
|
2401.15378 | Enis Karaarslan Dr. | Ahmet Yusuf Alan, Enis Karaarslan, \"Omer Aydin | A RAG-based Question Answering System Proposal for Understanding Islam:
MufassirQAS LLM | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Challenges exist in learning and understanding religions, such as the
complexity and depth of religious doctrines and teachings. Chatbots as
question-answering systems can help in solving these challenges. LLM chatbots
use NLP techniques to establish connections between topics and accurately
respond to complex questions. These capabilities make them well suited to serve
as question-answering chatbots for religious enlightenment. However, LLMs also
tend to generate false information, known as hallucination. Chatbot responses
can also include content that insults personal religious beliefs or touches on
interfaith conflicts and controversial or sensitive topics. The system must avoid such
cases without promoting hate speech or offending certain groups of people or
their beliefs. This study uses a vector database-based Retrieval Augmented
Generation (RAG) approach to enhance the accuracy and transparency of LLMs. Our
question-answering system is called "MufassirQAS". We created a database
consisting of several open-access books that include Turkish context. These
books contain Turkish translations and interpretations of Islam. This database
is utilized to answer religion-related questions and ensure our answers are
trustworthy. The relevant part of the dataset, which LLM also uses, is
presented along with the answer. We have put careful effort into creating
system prompts that give instructions to prevent harmful, offensive, or
disrespectful responses to respect people's values and provide reliable
results. The system answers and shares additional information, such as the page
number from the respective book and the articles referenced for obtaining the
information. MufassirQAS and ChatGPT are also tested with sensitive questions.
We got better performance with our system. Study and enhancements are still in
progress. Results and future works are given.
| [
{
"version": "v1",
"created": "Sat, 27 Jan 2024 10:50:11 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jan 2024 05:36:32 GMT"
},
{
"version": "v3",
"created": "Wed, 31 Jan 2024 12:39:06 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Feb 2024 20:28:11 GMT"
},
{
"version": "v5",
"created": "Tue, 18 Mar 2025 17:14:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Alan",
"Ahmet Yusuf",
""
],
[
"Karaarslan",
"Enis",
""
],
[
"Aydin",
"Ömer",
""
]
] | TITLE: A RAG-based Question Answering System Proposal for Understanding Islam:
MufassirQAS LLM
ABSTRACT: Challenges exist in learning and understanding religions, such as the
complexity and depth of religious doctrines and teachings. Chatbots as
question-answering systems can help in solving these challenges. LLM chatbots
use NLP techniques to establish connections between topics and accurately
respond to complex questions. These capabilities make them well suited to serve
as question-answering chatbots for religious enlightenment. However, LLMs also
tend to generate false information, known as hallucination. Chatbot responses
can also include content that insults personal religious beliefs or touches on
interfaith conflicts and controversial or sensitive topics. The system must avoid such
cases without promoting hate speech or offending certain groups of people or
their beliefs. This study uses a vector database-based Retrieval Augmented
Generation (RAG) approach to enhance the accuracy and transparency of LLMs. Our
question-answering system is called "MufassirQAS". We created a database
consisting of several open-access books that include Turkish context. These
books contain Turkish translations and interpretations of Islam. This database
is utilized to answer religion-related questions and ensure our answers are
trustworthy. The relevant part of the dataset, which LLM also uses, is
presented along with the answer. We have put careful effort into creating
system prompts that give instructions to prevent harmful, offensive, or
disrespectful responses to respect people's values and provide reliable
results. The system answers and shares additional information, such as the page
number from the respective book and the articles referenced for obtaining the
information. MufassirQAS and ChatGPT are also tested with sensitive questions.
We got better performance with our system. Study and enhancements are still in
progress. Results and future works are given.
|
2403.02784 | Lushuang Wang | Lingyan Ran and Lushuang Wang and Tao Zhuo and Yinghui Xing | DDF: A Novel Dual-Domain Image Fusion Strategy for Remote Sensing Image
Semantic Segmentation with Unsupervised Domain Adaptation | Accepted to IEEE Transactions on Geoscience and Remote Sensing | null | 10.1109/TGRS.2024.3433564 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation of remote sensing images is a challenging and hot issue
due to the large amount of unlabeled data. Unsupervised domain adaptation (UDA)
has proven to be advantageous in incorporating unclassified information from
the target domain. However, independently fine-tuning UDA models on the source
and target domains has a limited effect on the outcome. This paper proposes a
hybrid training strategy as well as a novel dual-domain image fusion strategy
that effectively utilizes the original image, transformation image, and
intermediate domain information. Moreover, to enhance the precision of
pseudo-labels, we present a pseudo-label region-specific weight strategy. The
efficacy of our approach is substantiated by extensive benchmark experiments
and ablation studies conducted on the ISPRS Vaihingen and Potsdam datasets.
| [
{
"version": "v1",
"created": "Tue, 5 Mar 2024 08:57:28 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 13:01:09 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 08:08:21 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ran",
"Lingyan",
""
],
[
"Wang",
"Lushuang",
""
],
[
"Zhuo",
"Tao",
""
],
[
"Xing",
"Yinghui",
""
]
] | TITLE: DDF: A Novel Dual-Domain Image Fusion Strategy for Remote Sensing Image
Semantic Segmentation with Unsupervised Domain Adaptation
ABSTRACT: Semantic segmentation of remote sensing images is a challenging and hot issue
due to the large amount of unlabeled data. Unsupervised domain adaptation (UDA)
has proven to be advantageous in incorporating unclassified information from
the target domain. However, independently fine-tuning UDA models on the source
and target domains has a limited effect on the outcome. This paper proposes a
hybrid training strategy as well as a novel dual-domain image fusion strategy
that effectively utilizes the original image, transformation image, and
intermediate domain information. Moreover, to enhance the precision of
pseudo-labels, we present a pseudo-label region-specific weight strategy. The
efficacy of our approach is substantiated by extensive benchmark experiments
and ablation studies conducted on the ISPRS Vaihingen and Potsdam datasets.
|
2403.10344 | Hala Djeghim | Hala Djeghim, Nathan Piasco, Moussab Bennehar, Luis Rold\~ao, Dzmitry
Tsishkou and D\'esir\'e Sidib\'e | ViiNeuS: Volumetric Initialization for Implicit Neural Surface
reconstruction of urban scenes with limited image overlap | CVPR2025. Project page: https://hala-djeghim.github.io/ViiNeuS/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Neural implicit surface representation methods have recently shown impressive
3D reconstruction results. However, existing solutions struggle to reconstruct
driving scenes due to their large size, highly complex nature and their limited
visual observation overlap. Hence, to achieve accurate reconstructions,
additional supervision data such as LiDAR, strong geometric priors, and long
training times are required. To tackle such limitations, we present ViiNeuS, a
new hybrid implicit surface learning method that efficiently initializes the
signed distance field to reconstruct large driving scenes from 2D street view
images. ViiNeuS's hybrid architecture models two separate implicit fields: one
representing the volumetric density of the scene, and another one representing
the signed distance to the surface. To accurately reconstruct urban outdoor
driving scenarios, we introduce a novel volume-rendering strategy that relies
on self-supervised probabilistic density estimation to sample points near the
surface and transition progressively from volumetric to surface representation.
Our solution permits a proper and fast initialization of the signed distance
field without relying on any geometric prior on the scene, compared to
concurrent methods. By conducting extensive experiments on four outdoor driving
datasets, we show that ViiNeuS can learn an accurate and detailed 3D surface
representation of various urban scenes while being two times faster to train
compared to previous state-of-the-art solutions.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 14:31:17 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Apr 2024 12:14:15 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Oct 2024 10:52:15 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Jan 2025 15:18:36 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Mar 2025 14:49:10 GMT"
},
{
"version": "v6",
"created": "Tue, 18 Mar 2025 09:00:34 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Djeghim",
"Hala",
""
],
[
"Piasco",
"Nathan",
""
],
[
"Bennehar",
"Moussab",
""
],
[
"Roldão",
"Luis",
""
],
[
"Tsishkou",
"Dzmitry",
""
],
[
"Sidibé",
"Désiré",
""
]
] | TITLE: ViiNeuS: Volumetric Initialization for Implicit Neural Surface
reconstruction of urban scenes with limited image overlap
ABSTRACT: Neural implicit surface representation methods have recently shown impressive
3D reconstruction results. However, existing solutions struggle to reconstruct
driving scenes due to their large size, highly complex nature and their limited
visual observation overlap. Hence, to achieve accurate reconstructions,
additional supervision data such as LiDAR, strong geometric priors, and long
training times are required. To tackle such limitations, we present ViiNeuS, a
new hybrid implicit surface learning method that efficiently initializes the
signed distance field to reconstruct large driving scenes from 2D street view
images. ViiNeuS's hybrid architecture models two separate implicit fields: one
representing the volumetric density of the scene, and another one representing
the signed distance to the surface. To accurately reconstruct urban outdoor
driving scenarios, we introduce a novel volume-rendering strategy that relies
on self-supervised probabilistic density estimation to sample points near the
surface and transition progressively from volumetric to surface representation.
Our solution permits a proper and fast initialization of the signed distance
field without relying on any geometric prior on the scene, compared to
concurrent methods. By conducting extensive experiments on four outdoor driving
datasets, we show that ViiNeuS can learn an accurate and detailed 3D surface
representation of various urban scenes while being two times faster to train
compared to previous state-of-the-art solutions.
|
2403.12029 | Justin Kay | Justin Kay, Timm Haucke, Suzanne Stathatos, Siqi Deng, Erik Young,
Pietro Perona, Sara Beery, Grant Van Horn | Align and Distill: Unifying and Improving Domain Adaptive Object
Detection | TMLR camera ready (Featured Certification). 33 pages, 15 figures | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Object detectors often perform poorly on data that differs from their
training set. Domain adaptive object detection (DAOD) methods have recently
demonstrated strong results on addressing this challenge. Unfortunately, we
identify systemic benchmarking pitfalls that call past results into question
and hamper further progress: (a) Overestimation of performance due to
underpowered baselines, (b) Inconsistent implementation practices preventing
transparent comparisons of methods, and (c) Lack of generality due to outdated
backbones and lack of diversity in benchmarks. We address these problems by
introducing: (1) A unified benchmarking and implementation framework, Align and
Distill (ALDI), enabling comparison of DAOD methods and supporting future
development, (2) A fair and modern training and evaluation protocol for DAOD
that addresses benchmarking pitfalls, (3) A new DAOD benchmark dataset,
CFC-DAOD, enabling evaluation on diverse real-world data, and (4) A new method,
ALDI++, that achieves state-of-the-art results by a large margin. ALDI++
outperforms the previous state-of-the-art by +3.5 AP50 on Cityscapes to Foggy
Cityscapes, +5.7 AP50 on Sim10k to Cityscapes (where ours is the only method to
outperform a fair baseline), and +0.6 AP50 on CFC Kenai to Channel. ALDI and
ALDI++ are architecture-agnostic, setting a new state-of-the-art for YOLO and
DETR-based DAOD as well without additional hyperparameter tuning. Our
framework, dataset, and state-of-the-art method offer a critical reset for DAOD
and provide a strong foundation for future research. Code and data are
available: https://github.com/justinkay/aldi and
https://github.com/visipedia/caltech-fish-counting.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2024 17:58:02 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Aug 2024 14:05:18 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 20:18:16 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kay",
"Justin",
""
],
[
"Haucke",
"Timm",
""
],
[
"Stathatos",
"Suzanne",
""
],
[
"Deng",
"Siqi",
""
],
[
"Young",
"Erik",
""
],
[
"Perona",
"Pietro",
""
],
[
"Beery",
"Sara",
""
],
[
"Van Horn",
"Grant",
""
]
] | TITLE: Align and Distill: Unifying and Improving Domain Adaptive Object
Detection
ABSTRACT: Object detectors often perform poorly on data that differs from their
training set. Domain adaptive object detection (DAOD) methods have recently
demonstrated strong results on addressing this challenge. Unfortunately, we
identify systemic benchmarking pitfalls that call past results into question
and hamper further progress: (a) Overestimation of performance due to
underpowered baselines, (b) Inconsistent implementation practices preventing
transparent comparisons of methods, and (c) Lack of generality due to outdated
backbones and lack of diversity in benchmarks. We address these problems by
introducing: (1) A unified benchmarking and implementation framework, Align and
Distill (ALDI), enabling comparison of DAOD methods and supporting future
development, (2) A fair and modern training and evaluation protocol for DAOD
that addresses benchmarking pitfalls, (3) A new DAOD benchmark dataset,
CFC-DAOD, enabling evaluation on diverse real-world data, and (4) A new method,
ALDI++, that achieves state-of-the-art results by a large margin. ALDI++
outperforms the previous state-of-the-art by +3.5 AP50 on Cityscapes to Foggy
Cityscapes, +5.7 AP50 on Sim10k to Cityscapes (where ours is the only method to
outperform a fair baseline), and +0.6 AP50 on CFC Kenai to Channel. ALDI and
ALDI++ are architecture-agnostic, setting a new state-of-the-art for YOLO and
DETR-based DAOD as well without additional hyperparameter tuning. Our
framework, dataset, and state-of-the-art method offer a critical reset for DAOD
and provide a strong foundation for future research. Code and data are
available: https://github.com/justinkay/aldi and
https://github.com/visipedia/caltech-fish-counting.
|
2405.04476 | Wang Lijun | Lijun Wang, Yixian Lu, Ziyan Gao, Kai Li, Jianqiang Huang, Yuntao
Kong, Shogo Okada | BERP: A Blind Estimator of Room Parameters for Single-Channel Noisy
Speech Signals | 16-page with supplementary materials, Submitted to IEEE/ACM
Transaction on Audio Speech and Language Processing (TASLP) | null | null | null | eess.AS cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Room acoustical parameters (RAPs), room geometrical parameters (RGPs) and
instantaneous occupancy level are essential metrics for parameterizing the room
acoustical characteristics (RACs) of a sound field around a listener's local
environment, offering comprehensive indications for various applications.
Current blind estimation methods either fail to cover a broad range of
real-world acoustic environments in the context of real background noise or
estimate only a few RAPs and RGPs from noisy single-channel speech signals. In
addition, they are limited in their ability to estimate the instantaneous
occupancy level. In this paper, we propose a new universal blind estimation
framework called the blind estimator of room parameters (BERP) to estimate
RAPs, RGPs and occupancy level via a unified methodology. It consists of two
modules: a unified room feature encoder that combines attention mechanisms with
convolutional layers to learn common features across room parameters, and
multiple separate parametric predictors for continuous estimation of each
parameter in parallel. The combination of attention and convolutions enables
the model to capture acoustic features locally and globally from speech,
yielding more robust and multitask generalizable common features. Separate
predictors allow the model to independently optimize for each room parameter to
reduce task learning conflict and improve per-task performance. This estimation
framework enables universal and efficient estimation of room parameters while
maintaining satisfactory performance. To evaluate the effectiveness of the
proposed framework, we compile a task-specific dataset from several publicly
available datasets, including synthetic and real reverberant recordings. The
results reveal that BERP achieves state-of-the-art (SOTA) performance and
excellent adaptability to real-world scenarios. The code and weights are
available on GitHub.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 16:41:41 GMT"
},
{
"version": "v2",
"created": "Thu, 16 May 2024 10:17:12 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Oct 2024 12:44:24 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Oct 2024 11:01:59 GMT"
},
{
"version": "v5",
"created": "Thu, 24 Oct 2024 01:59:56 GMT"
},
{
"version": "v6",
"created": "Tue, 18 Mar 2025 15:08:12 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Lijun",
""
],
[
"Lu",
"Yixian",
""
],
[
"Gao",
"Ziyan",
""
],
[
"Li",
"Kai",
""
],
[
"Huang",
"Jianqiang",
""
],
[
"Kong",
"Yuntao",
""
],
[
"Okada",
"Shogo",
""
]
] | TITLE: BERP: A Blind Estimator of Room Parameters for Single-Channel Noisy
Speech Signals
ABSTRACT: Room acoustical parameters (RAPs), room geometrical parameters (RGPs) and
instantaneous occupancy level are essential metrics for parameterizing the room
acoustical characteristics (RACs) of a sound field around a listener's local
environment, offering comprehensive indications for various applications.
Current blind estimation methods either fail to cover a broad range of
real-world acoustic environments in the context of real background noise or
estimate only a few RAPs and RGPs from noisy single-channel speech signals. In
addition, they are limited in their ability to estimate the instantaneous
occupancy level. In this paper, we propose a new universal blind estimation
framework called the blind estimator of room parameters (BERP) to estimate
RAPs, RGPs and occupancy level via a unified methodology. It consists of two
modules: a unified room feature encoder that combines attention mechanisms with
convolutional layers to learn common features across room parameters, and
multiple separate parametric predictors for continuous estimation of each
parameter in parallel. The combination of attention and convolutions enables
the model to capture acoustic features locally and globally from speech,
yielding more robust and multitask generalizable common features. Separate
predictors allow the model to independently optimize for each room parameter to
reduce task learning conflict and improve per-task performance. This estimation
framework enables universal and efficient estimation of room parameters while
maintaining satisfactory performance. To evaluate the effectiveness of the
proposed framework, we compile a task-specific dataset from several publicly
available datasets, including synthetic and real reverberant recordings. The
results reveal that BERP achieves state-of-the-art (SOTA) performance and
excellent adaptability to real-world scenarios. The code and weights are
available on GitHub.
|
2405.14304 | Mojtaba Bemana | Mojtaba Bemana, Thomas Leimk\"uhler, Karol Myszkowski, Hans-Peter
Seidel, Tobias Ritschel | Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising | 11 pages, 14 figures, Accepted to Eurographics 2025, see
https://bracketdiffusion.mpi-inf.mpg.de | null | null | null | cs.GR cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate generating HDR images using the concerted action of multiple
black-box, pre-trained LDR image diffusion models. Relying on pre-trained LDR
generative diffusion models is vital as, first, there is no sufficiently large
HDR image dataset available to re-train them, and, second, even if there were,
re-training such models is impossible for most compute budgets. Instead, we
seek inspiration from the HDR image capture literature that traditionally fuses
sets of LDR images, called "exposure brackets'', to produce a single HDR image.
We operate multiple denoising processes to generate multiple LDR brackets that
together form a valid HDR result. The key to making this work is to introduce a
consistency term into the diffusion process to couple the brackets such that
they agree across the exposure range they share while accounting for possible
differences due to the quantization error. We demonstrate state-of-the-art
unconditional and conditional or restoration-type (LDR2HDR) generative modeling
results, yet in HDR.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 08:24:22 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:54:28 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Bemana",
"Mojtaba",
""
],
[
"Leimkühler",
"Thomas",
""
],
[
"Myszkowski",
"Karol",
""
],
[
"Seidel",
"Hans-Peter",
""
],
[
"Ritschel",
"Tobias",
""
]
] | TITLE: Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising
ABSTRACT: We demonstrate generating HDR images using the concerted action of multiple
black-box, pre-trained LDR image diffusion models. Relying on pre-trained LDR
generative diffusion models is vital as, first, there is no sufficiently large
HDR image dataset available to re-train them, and, second, even if there were,
re-training such models is impossible for most compute budgets. Instead, we
seek inspiration from the HDR image capture literature that traditionally fuses
sets of LDR images, called "exposure brackets'', to produce a single HDR image.
We operate multiple denoising processes to generate multiple LDR brackets that
together form a valid HDR result. The key to making this work is to introduce a
consistency term into the diffusion process to couple the brackets such that
they agree across the exposure range they share while accounting for possible
differences due to the quantization error. We demonstrate state-of-the-art
unconditional and conditional or restoration-type (LDR2HDR) generative modeling
results, yet in HDR.
|
2405.17465 | Kathairiya Aashu | Aashu Katharria, Kanchan Rajwar, Millie Pant, Juan D. Vel\'asquez,
V\'aclav Sn\'a\v{s}el and Kusum Deep | Information Fusion in Smart Agriculture: Machine Learning Applications
and Future Research Directions | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning (ML) is a rapidly evolving technology with expanding
applications across various fields. This paper presents a comprehensive survey
of recent ML applications in agriculture for sustainability and efficiency.
Existing reviews mainly focus on narrow subdomains or lack a fusion-driven
perspective. This study provides a combined analysis of ML applications in
agriculture, structured around five key objectives: (i) Analyzing ML techniques
across pre-harvesting, harvesting, and post-harvesting phases. (ii)
Demonstrating how ML can be used with agricultural data and data fusion. (iii)
Conducting a bibliometric and statistical analysis to reveal research trends
and activity. (iv) Investigating real-world case studies of leading artificial
intelligence (AI)-driven agricultural companies that use different types of
multisensors and multisource data. (v) Compiling publicly available datasets to
support ML model training. Going beyond previous reviews, this review
focuses on how machine learning (ML) techniques, combined with multi-source
data fusion (integrating remote sensing, IoT, and climate analytics), enhance
precision agriculture by improving predictive accuracy and decision-making.
Case studies and statistical insights illustrate the evolving landscape of
AI-driven smart farming, while the discussion of future research directions
addresses challenges associated with data fusion for heterogeneous datasets. This review
bridges the gap between AI research and agricultural applications, offering a
roadmap for researchers, industry professionals, and policymakers to harness
information fusion and ML for advancing precision agriculture.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 17:53:31 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 17:32:09 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Katharria",
"Aashu",
""
],
[
"Rajwar",
"Kanchan",
""
],
[
"Pant",
"Millie",
""
],
[
"Velásquez",
"Juan D.",
""
],
[
"Snášel",
"Václav",
""
],
[
"Deep",
"Kusum",
""
]
] | TITLE: Information Fusion in Smart Agriculture: Machine Learning Applications
and Future Research Directions
ABSTRACT: Machine learning (ML) is a rapidly evolving technology with expanding
applications across various fields. This paper presents a comprehensive survey
of recent ML applications in agriculture for sustainability and efficiency.
Existing reviews mainly focus on narrow subdomains or lack a fusion-driven
perspective. This study provides a combined analysis of ML applications in
agriculture, structured around five key objectives: (i) Analyzing ML techniques
across pre-harvesting, harvesting, and post-harvesting phases. (ii)
Demonstrating how ML can be used with agricultural data and data fusion. (iii)
Conducting a bibliometric and statistical analysis to reveal research trends
and activity. (iv) Investigating real-world case studies of leading artificial
intelligence (AI)-driven agricultural companies that use different types of
multisensors and multisource data. (v) Compiling publicly available datasets to
support ML model training. Going beyond previous reviews, this review
focuses on how machine learning (ML) techniques, combined with multi-source
data fusion (integrating remote sensing, IoT, and climate analytics), enhance
precision agriculture by improving predictive accuracy and decision-making.
Case studies and statistical insights illustrate the evolving landscape of
AI-driven smart farming, while the discussion of future research directions
addresses challenges associated with data fusion for heterogeneous datasets. This review
bridges the gap between AI research and agricultural applications, offering a
roadmap for researchers, industry professionals, and policymakers to harness
information fusion and ML for advancing precision agriculture.
|
2406.04746 | Radu Tudor Ionescu | Eduard Poesina, Adriana Valentina Costache, Adrian-Gabriel Chifu,
Josiane Mothe, Radu Tudor Ionescu | PQPP: A Joint Benchmark for Text-to-Image Prompt and Query Performance
Prediction | Accepted at CVPR 2025 | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Text-to-image generation has recently emerged as a viable alternative to
text-to-image retrieval, driven by the visually impressive results of
generative diffusion models. Although query performance prediction is an active
research topic in information retrieval, to the best of our knowledge, there is
no prior study that analyzes the difficulty of queries (referred to as prompts)
in text-to-image generation, based on human judgments. To this end, we
introduce the first dataset of prompts which are manually annotated in terms of
image generation performance. Additionally, we extend these evaluations to
text-to-image retrieval by collecting manual annotations that represent
retrieval performance. We thus establish the first joint benchmark for prompt
and query performance prediction (PQPP) across both tasks, comprising over 10K
queries. Our benchmark enables (i) the comparative assessment of prompt/query
difficulty in both image generation and image retrieval, and (ii) the
evaluation of prompt/query performance predictors addressing both generation
and retrieval. We evaluate several pre- and post-generation/retrieval
performance predictors, thus providing competitive baselines for future
research. Our benchmark and code are publicly available at
https://github.com/Eduard6421/PQPP.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2024 08:46:19 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 16:45:09 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Poesina",
"Eduard",
""
],
[
"Costache",
"Adriana Valentina",
""
],
[
"Chifu",
"Adrian-Gabriel",
""
],
[
"Mothe",
"Josiane",
""
],
[
"Ionescu",
"Radu Tudor",
""
]
] | TITLE: PQPP: A Joint Benchmark for Text-to-Image Prompt and Query Performance
Prediction
ABSTRACT: Text-to-image generation has recently emerged as a viable alternative to
text-to-image retrieval, driven by the visually impressive results of
generative diffusion models. Although query performance prediction is an active
research topic in information retrieval, to the best of our knowledge, there is
no prior study that analyzes the difficulty of queries (referred to as prompts)
in text-to-image generation, based on human judgments. To this end, we
introduce the first dataset of prompts which are manually annotated in terms of
image generation performance. Additionally, we extend these evaluations to
text-to-image retrieval by collecting manual annotations that represent
retrieval performance. We thus establish the first joint benchmark for prompt
and query performance prediction (PQPP) across both tasks, comprising over 10K
queries. Our benchmark enables (i) the comparative assessment of prompt/query
difficulty in both image generation and image retrieval, and (ii) the
evaluation of prompt/query performance predictors addressing both generation
and retrieval. We evaluate several pre- and post-generation/retrieval
performance predictors, thus providing competitive baselines for future
research. Our benchmark and code are publicly available at
https://github.com/Eduard6421/PQPP.
|
2406.06652 | Yubin Xiao | Yubin Xiao, Di Wang, Xuan Wu, Yuesong Wu, Boyang Li, Wei Du, Liupu
Wang, You Zhou | Improving Generalization of Neural Vehicle Routing Problem Solvers
Through the Lens of Model Architecture | This work has been accepted by Neural Networks | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural models produce promising results when solving Vehicle Routing Problems
(VRPs), but often fall short in generalization. Recent attempts to enhance
model generalization often incur unnecessarily large training cost or cannot be
directly applied to other models solving different VRP variants. To address
these issues, we take a novel perspective on model architecture in this study.
Specifically, we propose a plug-and-play Entropy-based Scaling Factor (ESF) and
a Distribution-Specific (DS) decoder to enhance the size and distribution
generalization, respectively. ESF adjusts the attention weight pattern of the
model towards familiar ones discovered during training when solving VRPs of
varying sizes. The DS decoder explicitly models VRPs of multiple training
distribution patterns through multiple auxiliary light decoders, expanding the
model representation space to encompass a broader range of distributional
scenarios. We conduct extensive experiments on both synthetic and widely
recognized real-world benchmarking datasets and compare the performance with
seven baseline models. The results demonstrate the effectiveness of using ESF
and DS decoder to obtain a more generalizable model and showcase their
applicability to solve different VRP variants, i.e., travelling salesman
problem and capacitated VRP. Notably, our proposed generic components require
minimal computational resources, and can be effortlessly integrated into
conventional generalization strategies to further elevate model generalization.
| [
{
"version": "v1",
"created": "Mon, 10 Jun 2024 09:03:17 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jun 2024 14:02:57 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 08:40:04 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xiao",
"Yubin",
""
],
[
"Wang",
"Di",
""
],
[
"Wu",
"Xuan",
""
],
[
"Wu",
"Yuesong",
""
],
[
"Li",
"Boyang",
""
],
[
"Du",
"Wei",
""
],
[
"Wang",
"Liupu",
""
],
[
"Zhou",
"You",
""
]
] | TITLE: Improving Generalization of Neural Vehicle Routing Problem Solvers
Through the Lens of Model Architecture
ABSTRACT: Neural models produce promising results when solving Vehicle Routing Problems
(VRPs), but often fall short in generalization. Recent attempts to enhance
model generalization often incur unnecessarily large training cost or cannot be
directly applied to other models solving different VRP variants. To address
these issues, we take a novel perspective on model architecture in this study.
Specifically, we propose a plug-and-play Entropy-based Scaling Factor (ESF) and
a Distribution-Specific (DS) decoder to enhance the size and distribution
generalization, respectively. ESF adjusts the attention weight pattern of the
model towards familiar ones discovered during training when solving VRPs of
varying sizes. The DS decoder explicitly models VRPs of multiple training
distribution patterns through multiple auxiliary light decoders, expanding the
model representation space to encompass a broader range of distributional
scenarios. We conduct extensive experiments on both synthetic and widely
recognized real-world benchmarking datasets and compare the performance with
seven baseline models. The results demonstrate the effectiveness of using ESF
and DS decoder to obtain a more generalizable model and showcase their
applicability to solve different VRP variants, i.e., travelling salesman
problem and capacitated VRP. Notably, our proposed generic components require
minimal computational resources, and can be effortlessly integrated into
conventional generalization strategies to further elevate model generalization.
|
2406.06918 | Dewu Zheng | Dewu Zheng, Yanlin Wang, Ensheng Shi, Ruikai Zhang, Yuchi Ma, Hongyu
Zhang, Zibin Zheng | HumanEvo: An Evolution-aware Benchmark for More Realistic Evaluation of
Repository-level Code Generation | To appear at ICSE 2025 | 47th International Conference on Software Engineering (ICSE 2025) | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To evaluate the repository-level code generation capabilities of Large
Language Models (LLMs) in complex real-world software development scenarios,
many evaluation methods have been developed. These methods typically leverage
contextual code from the latest version of a project to assist LLMs in
accurately generating the desired function. However, such evaluation methods
fail to consider the dynamic evolution of software projects over time, which we
refer to as evolution-ignored settings. This in turn results in inaccurate
evaluation of LLMs' performance. In this paper, we conduct an empirical study
to deeply understand LLMs' code generation performance within settings that
reflect the evolution nature of software development. To achieve this, we first
construct an evolution-aware repository-level code generation dataset, namely
HumanEvo, equipped with an automated execution-based evaluation tool. Second,
we manually categorize HumanEvo according to dependency levels to more
comprehensively analyze the model's performance in generating functions with
different dependency levels. Third, we conduct extensive experiments on
HumanEvo with seven representative and diverse LLMs to verify the effectiveness
of the proposed benchmark. We obtain several important findings through our
experimental study. For example, we find that previous evolution-ignored
evaluation methods result in inflated performance of LLMs, with performance
overestimations ranging from 10.0% to 61.1% under different context acquisition
methods, compared to the evolution-aware evaluation approach. Based on the
findings, we give actionable suggestions for more realistic evaluation of LLMs
on code generation. We also build a shared evolution-aware code generation
toolbox to facilitate future research.
| [
{
"version": "v1",
"created": "Tue, 11 Jun 2024 03:19:18 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 04:58:23 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zheng",
"Dewu",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Shi",
"Ensheng",
""
],
[
"Zhang",
"Ruikai",
""
],
[
"Ma",
"Yuchi",
""
],
[
"Zhang",
"Hongyu",
""
],
[
"Zheng",
"Zibin",
""
]
] | TITLE: HumanEvo: An Evolution-aware Benchmark for More Realistic Evaluation of
Repository-level Code Generation
ABSTRACT: To evaluate the repository-level code generation capabilities of Large
Language Models (LLMs) in complex real-world software development scenarios,
many evaluation methods have been developed. These methods typically leverage
contextual code from the latest version of a project to assist LLMs in
accurately generating the desired function. However, such evaluation methods
fail to consider the dynamic evolution of software projects over time, which we
refer to as evolution-ignored settings. This in turn results in inaccurate
evaluation of LLMs' performance. In this paper, we conduct an empirical study
to deeply understand LLMs' code generation performance within settings that
reflect the evolutionary nature of software development. To achieve this, we first
construct an evolution-aware repository-level code generation dataset, namely
HumanEvo, equipped with an automated execution-based evaluation tool. Second,
we manually categorize HumanEvo according to dependency levels to more
comprehensively analyze the model's performance in generating functions with
different dependency levels. Third, we conduct extensive experiments on
HumanEvo with seven representative and diverse LLMs to verify the effectiveness
of the proposed benchmark. We obtain several important findings through our
experimental study. For example, we find that previous evolution-ignored
evaluation methods result in inflated performance of LLMs, with performance
overestimations ranging from 10.0% to 61.1% under different context acquisition
methods, compared to the evolution-aware evaluation approach. Based on the
findings, we give actionable suggestions for more realistic evaluation of LLMs
on code generation. We also build a shared evolution-aware code generation
toolbox to facilitate future research.
|
2406.09123 | Hamidreza Saffari | Hamidreza Saffari, Mohammadamin Shafiei, Donya Rooein, Francesco
Pierri, Debora Nozza | Can I introduce my boyfriend to my grandmother? Evaluating Large
Language Models Capabilities on Iranian Social Norm Classification | 15 pages, 1 figure, 9 tables | null | null | null | cs.SI | http://creativecommons.org/licenses/by-sa/4.0/ | Creating globally inclusive AI systems demands datasets reflecting diverse
social norms. Iran, with its unique cultural blend, offers an ideal case study,
with Farsi adding linguistic complexity. In this work, we introduce the Iranian
Social Norms (ISN) dataset, a novel collection of 1,699 Iranian social norms,
including environments, demographic features, and scope annotation, alongside
English translations. Our evaluation of 6 Large Language Models (LLMs) in
classifying Iranian social norms, using a variety of prompts, uncovered
critical insights into the impact of geographic and linguistic context. Results
revealed a substantial performance gap in LLMs' comprehension of Iranian norms.
Notably, while the geographic context in English prompts enhanced the
performance, this effect was absent in Farsi, pointing to nuanced linguistic
challenges. Particularly, performance was significantly worse for Iran-specific
norms, emphasizing the importance of culturally tailored datasets. As the first
Farsi dataset for social norm classification, ISN will facilitate crucial
cross-cultural analyses, shedding light on how values differ across contexts
and cultures.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 13:56:55 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Jun 2024 15:19:23 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 10:28:01 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Saffari",
"Hamidreza",
""
],
[
"Shafiei",
"Mohammadamin",
""
],
[
"Rooein",
"Donya",
""
],
[
"Pierri",
"Francesco",
""
],
[
"Nozza",
"Debora",
""
]
] | TITLE: Can I introduce my boyfriend to my grandmother? Evaluating Large
Language Models Capabilities on Iranian Social Norm Classification
ABSTRACT: Creating globally inclusive AI systems demands datasets reflecting diverse
social norms. Iran, with its unique cultural blend, offers an ideal case study,
with Farsi adding linguistic complexity. In this work, we introduce the Iranian
Social Norms (ISN) dataset, a novel collection of 1,699 Iranian social norms,
including environments, demographic features, and scope annotation, alongside
English translations. Our evaluation of 6 Large Language Models (LLMs) in
classifying Iranian social norms, using a variety of prompts, uncovered
critical insights into the impact of geographic and linguistic context. Results
revealed a substantial performance gap in LLMs' comprehension of Iranian norms.
Notably, while the geographic context in English prompts enhanced the
performance, this effect was absent in Farsi, pointing to nuanced linguistic
challenges. Particularly, performance was significantly worse for Iran-specific
norms, emphasizing the importance of culturally tailored datasets. As the first
Farsi dataset for social norm classification, ISN will facilitate crucial
cross-cultural analyses, shedding light on how values differ across contexts
and cultures.
|
2406.09891 | Adish Singla | Victor-Alexandru P\u{a}durean, Adish Singla | Benchmarking Generative Models on Computational Thinking Tests in
Elementary Visual Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative models have demonstrated human-level proficiency in various
benchmarks across domains like programming, natural sciences, and general
knowledge. Despite these promising results on competitive benchmarks, they
still struggle with seemingly simple problem-solving tasks typically carried
out by elementary-level students. How do state-of-the-art models perform on
standardized programming-related tests designed to assess computational
thinking and problem-solving skills at schools? In this paper, we curate a
novel benchmark involving computational thinking tests grounded in elementary
visual programming domains. Our initial results show that state-of-the-art
models like GPT-4o and Llama3 barely match the performance of an average school
student. To further boost the performance of these models, we fine-tune them
using a novel synthetic data generation methodology. The key idea is to develop
a comprehensive dataset using symbolic methods that capture different skill
levels, ranging from recognition of visual elements to multi-choice quizzes to
synthesis-style tasks. We showcase how various aspects of symbolic information
in synthetic data help improve fine-tuned models' performance. We will release
the full implementation and datasets to facilitate further research on
enhancing computational thinking in generative models.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2024 10:02:52 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:03:15 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Pădurean",
"Victor-Alexandru",
""
],
[
"Singla",
"Adish",
""
]
] | TITLE: Benchmarking Generative Models on Computational Thinking Tests in
Elementary Visual Programming
ABSTRACT: Generative models have demonstrated human-level proficiency in various
benchmarks across domains like programming, natural sciences, and general
knowledge. Despite these promising results on competitive benchmarks, they
still struggle with seemingly simple problem-solving tasks typically carried
out by elementary-level students. How do state-of-the-art models perform on
standardized programming-related tests designed to assess computational
thinking and problem-solving skills at schools? In this paper, we curate a
novel benchmark involving computational thinking tests grounded in elementary
visual programming domains. Our initial results show that state-of-the-art
models like GPT-4o and Llama3 barely match the performance of an average school
student. To further boost the performance of these models, we fine-tune them
using a novel synthetic data generation methodology. The key idea is to develop
a comprehensive dataset using symbolic methods that capture different skill
levels, ranging from recognition of visual elements to multi-choice quizzes to
synthesis-style tasks. We showcase how various aspects of symbolic information
in synthetic data help improve fine-tuned models' performance. We will release
the full implementation and datasets to facilitate further research on
enhancing computational thinking in generative models.
|
2406.12757 | Shuo Xu | Shuo Xu and Sai Wang and Xinyue Hu and Yutian Lin and Sibei Yang and
Yu Wu | MAC: A Benchmark for Multiple Attributes Compositional Zero-Shot
Learning | 13pages,5figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Compositional Zero-Shot Learning (CZSL) aims to learn semantic primitives
(attributes and objects) from seen compositions and recognize unseen
attribute-object compositions. Existing CZSL datasets focus on single
attributes, neglecting the fact that objects naturally exhibit multiple
interrelated attributes. Their narrow attribute scope and single attribute
labeling introduce annotation biases, misleading the learning of attributes and
causing inaccurate evaluation. To address these issues, we introduce the
Multi-Attribute Composition (MAC) dataset, encompassing 22,838 images and
17,627 compositions with comprehensive and representative attribute
annotations. MAC shows a complex relationship between attributes and objects,
with each attribute type linked to an average of 82.2 object types, and each
object type associated with 31.4 attribute types. Based on MAC, we propose
multi-attribute compositional zero-shot learning that requires deeper semantic
understanding and advanced attribute associations, establishing a more
realistic and challenging benchmark for CZSL. We also propose Multi-attribute
Visual-Primitive Integrator (MVP-Integrator), a robust baseline for
multi-attribute CZSL, which disentangles semantic primitives and performs
effective visual-primitive association. Experimental results demonstrate that
MVP-Integrator significantly outperforms existing CZSL methods on MAC with
improved inference efficiency.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2024 16:24:48 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 16:51:43 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 06:49:14 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xu",
"Shuo",
""
],
[
"Wang",
"Sai",
""
],
[
"Hu",
"Xinyue",
""
],
[
"Lin",
"Yutian",
""
],
[
"Yang",
"Sibei",
""
],
[
"Wu",
"Yu",
""
]
] | TITLE: MAC: A Benchmark for Multiple Attributes Compositional Zero-Shot
Learning
ABSTRACT: Compositional Zero-Shot Learning (CZSL) aims to learn semantic primitives
(attributes and objects) from seen compositions and recognize unseen
attribute-object compositions. Existing CZSL datasets focus on single
attributes, neglecting the fact that objects naturally exhibit multiple
interrelated attributes. Their narrow attribute scope and single attribute
labeling introduce annotation biases, misleading the learning of attributes and
causing inaccurate evaluation. To address these issues, we introduce the
Multi-Attribute Composition (MAC) dataset, encompassing 22,838 images and
17,627 compositions with comprehensive and representative attribute
annotations. MAC shows a complex relationship between attributes and objects,
with each attribute type linked to an average of 82.2 object types, and each
object type associated with 31.4 attribute types. Based on MAC, we propose
multi-attribute compositional zero-shot learning that requires deeper semantic
understanding and advanced attribute associations, establishing a more
realistic and challenging benchmark for CZSL. We also propose Multi-attribute
Visual-Primitive Integrator (MVP-Integrator), a robust baseline for
multi-attribute CZSL, which disentangles semantic primitives and performs
effective visual-primitive association. Experimental results demonstrate that
MVP-Integrator significantly outperforms existing CZSL methods on MAC with
improved inference efficiency.
|
2406.13839 | Chaitanya K. Joshi | Rishabh Anand, Chaitanya K. Joshi, Alex Morehead, Arian R. Jamasb,
Charles Harris, Simon V. Mathis, Kieran Didi, Rex Ying, Bryan Hooi, Pietro
Li\`o | RNA-FrameFlow: Flow Matching for de novo 3D RNA Backbone Design | Oral presentation at Machine Learning in Computational Biology
(MLCB), 2024. Also presented as an Oral at ICML 2024 Structured Probabilistic
Inference & Generative Modeling Workshop, and a Spotlight at ICML 2024
AI4Science Workshop | null | null | null | q-bio.BM cs.LG q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | We introduce RNA-FrameFlow, the first generative model for 3D RNA backbone
design. We build upon SE(3) flow matching for protein backbone generation and
establish protocols for data preparation and evaluation to address unique
challenges posed by RNA modeling. We formulate RNA structures as a set of
rigid-body frames and associated loss functions which account for larger, more
conformationally flexible RNA backbones (13 atoms per nucleotide) vs. proteins
(4 atoms per residue). Toward tackling the lack of diversity in 3D RNA
datasets, we explore training with structural clustering and cropping
augmentations. Additionally, we define a suite of evaluation metrics to measure
whether the generated RNA structures are globally self-consistent (via inverse
folding followed by forward folding) and locally recover RNA-specific
structural descriptors. The most performant version of RNA-FrameFlow generates
locally realistic RNA backbones of 40-150 nucleotides, over 40% of which pass
our validity criteria as measured by a self-consistency TM-score >= 0.45, at
which two RNAs have the same global fold. Open-source code:
https://github.com/rish-16/rna-backbone-design
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 21:06:44 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 20:59:58 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 10:25:10 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Anand",
"Rishabh",
""
],
[
"Joshi",
"Chaitanya K.",
""
],
[
"Morehead",
"Alex",
""
],
[
"Jamasb",
"Arian R.",
""
],
[
"Harris",
"Charles",
""
],
[
"Mathis",
"Simon V.",
""
],
[
"Didi",
"Kieran",
""
],
[
"Ying",
"Rex",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Liò",
"Pietro",
""
]
] | TITLE: RNA-FrameFlow: Flow Matching for de novo 3D RNA Backbone Design
ABSTRACT: We introduce RNA-FrameFlow, the first generative model for 3D RNA backbone
design. We build upon SE(3) flow matching for protein backbone generation and
establish protocols for data preparation and evaluation to address unique
challenges posed by RNA modeling. We formulate RNA structures as a set of
rigid-body frames and associated loss functions which account for larger, more
conformationally flexible RNA backbones (13 atoms per nucleotide) vs. proteins
(4 atoms per residue). Toward tackling the lack of diversity in 3D RNA
datasets, we explore training with structural clustering and cropping
augmentations. Additionally, we define a suite of evaluation metrics to measure
whether the generated RNA structures are globally self-consistent (via inverse
folding followed by forward folding) and locally recover RNA-specific
structural descriptors. The most performant version of RNA-FrameFlow generates
locally realistic RNA backbones of 40-150 nucleotides, over 40% of which pass
our validity criteria as measured by a self-consistency TM-score >= 0.45, at
which two RNAs have the same global fold. Open-source code:
https://github.com/rish-16/rna-backbone-design
|
2406.18414 | Kemiao Huang | Kemiao Huang, Yinqi Chen, Meiying Zhang, and Qi Hao | BiTrack: Bidirectional Offline 3D Multi-Object Tracking Using
Camera-LiDAR Data | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Compared with real-time multi-object tracking (MOT), offline multi-object
tracking (OMOT) has the advantage of performing 2D-3D detection fusion,
erroneous link correction, and full track optimization, but it must contend with
the challenges of bounding box misalignment and track evaluation, editing, and
refinement. This paper proposes "BiTrack", a 3D OMOT framework that includes
modules of 2D-3D detection fusion, initial trajectory generation, and
bidirectional trajectory re-optimization to achieve optimal tracking results
from camera-LiDAR data. The novelty of this paper is threefold: (1)
development of a point-level object registration technique that employs a
density-based similarity metric to achieve accurate fusion of 2D-3D detection
results; (2) development of a set of data association and track management
skills that utilizes a vertex-based similarity metric as well as false alarm
rejection and track recovery mechanisms to generate reliable bidirectional
object trajectories; (3) development of a trajectory re-optimization scheme
that re-organizes track fragments of different fidelities in a greedy fashion,
as well as refines each trajectory with completion and smoothing techniques.
The experiment results on the KITTI dataset demonstrate that BiTrack achieves
the state-of-the-art performance for 3D OMOT tasks in terms of accuracy and
efficiency.
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2024 15:09:54 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:57:30 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Huang",
"Kemiao",
""
],
[
"Chen",
"Yinqi",
""
],
[
"Zhang",
"Meiying",
""
],
[
"Hao",
"Qi",
""
]
] | TITLE: BiTrack: Bidirectional Offline 3D Multi-Object Tracking Using
Camera-LiDAR Data
ABSTRACT: Compared with real-time multi-object tracking (MOT), offline multi-object
tracking (OMOT) has the advantage of performing 2D-3D detection fusion,
erroneous link correction, and full track optimization, but it must contend with
the challenges of bounding box misalignment and track evaluation, editing, and
refinement. This paper proposes "BiTrack", a 3D OMOT framework that includes
modules of 2D-3D detection fusion, initial trajectory generation, and
bidirectional trajectory re-optimization to achieve optimal tracking results
from camera-LiDAR data. The novelty of this paper is threefold: (1)
development of a point-level object registration technique that employs a
density-based similarity metric to achieve accurate fusion of 2D-3D detection
results; (2) development of a set of data association and track management
skills that utilizes a vertex-based similarity metric as well as false alarm
rejection and track recovery mechanisms to generate reliable bidirectional
object trajectories; (3) development of a trajectory re-optimization scheme
that re-organizes track fragments of different fidelities in a greedy fashion,
as well as refines each trajectory with completion and smoothing techniques.
The experiment results on the KITTI dataset demonstrate that BiTrack achieves
the state-of-the-art performance for 3D OMOT tasks in terms of accuracy and
efficiency.
|
2407.03786 | Stefan Scholz | Stefan Scholz, Nils B. Weidmann, Zachary C. Steinert-Threlkeld, Eda
Keremo\u{g}lu, Bastian Goldl\"ucke | Improving Computer Vision Interpretability: Transparent Two-level
Classification for Complex Scenes | null | Polit. Anal. 33 (2025) 107-121 | 10.1017/pan.2024.18 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Treating images as data has become increasingly popular in political science.
While existing classifiers for images reach high levels of accuracy, it is
difficult to systematically assess the visual features on which they base their
classification. This paper presents a two-level classification method that
addresses this transparency problem. At the first stage, an image segmenter
detects the objects present in the image and a feature vector is created from
those objects. In the second stage, this feature vector is used as input for
standard machine learning classifiers to discriminate between images. We apply
this method to a new dataset of more than 140,000 images to detect which ones
display political protest. This analysis demonstrates three advantages to this
paper's approach. First, identifying objects in images improves transparency by
providing human-understandable labels for the objects shown on an image.
Second, knowing these objects enables analysis of which objects distinguish protest
images from non-protest ones. Third, comparing the importance of objects across
countries reveals how protest behavior varies. These insights are not available
using conventional computer vision classifiers and provide new opportunities
for comparative research.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2024 09:48:58 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Scholz",
"Stefan",
""
],
[
"Weidmann",
"Nils B.",
""
],
[
"Steinert-Threlkeld",
"Zachary C.",
""
],
[
"Keremoğlu",
"Eda",
""
],
[
"Goldlücke",
"Bastian",
""
]
] | TITLE: Improving Computer Vision Interpretability: Transparent Two-level
Classification for Complex Scenes
ABSTRACT: Treating images as data has become increasingly popular in political science.
While existing classifiers for images reach high levels of accuracy, it is
difficult to systematically assess the visual features on which they base their
classification. This paper presents a two-level classification method that
addresses this transparency problem. At the first stage, an image segmenter
detects the objects present in the image and a feature vector is created from
those objects. In the second stage, this feature vector is used as input for
standard machine learning classifiers to discriminate between images. We apply
this method to a new dataset of more than 140,000 images to detect which ones
display political protest. This analysis demonstrates three advantages to this
paper's approach. First, identifying objects in images improves transparency by
providing human-understandable labels for the objects shown on an image.
Second, knowing these objects enables analysis of which objects distinguish protest
images from non-protest ones. Third, comparing the importance of objects across
countries reveals how protest behavior varies. These insights are not available
using conventional computer vision classifiers and provide new opportunities
for comparative research.
|
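The two-level method in the record above turns detected objects into an interpretable feature vector before applying a standard classifier. The following is a minimal sketch of that second stage, assuming the object detector has already produced label lists per image; the toy label lists and the choice of logistic regression are illustrative assumptions, not the paper's exact setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Stage 1 (assumed already done): a segmenter/detector returns, per image,
    # the list of object labels it found. These toy lists stand in for that.
    detected_objects = [
        ["person", "person", "flag", "banner"],   # protest-like image
        ["person", "sign", "crowd"],              # protest-like image
        ["car", "tree", "building"],              # non-protest image
        ["dog", "person", "bench"],               # non-protest image
    ]
    labels = np.array([1, 1, 0, 0])  # 1 = protest, 0 = non-protest

    # Stage 2: turn each label list into a transparent count-based feature vector.
    vocab = sorted({obj for objs in detected_objects for obj in objs})
    def to_counts(objs):
        return [objs.count(word) for word in vocab]

    X = np.array([to_counts(objs) for objs in detected_objects])

    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    # The coefficients are directly interpretable: one weight per object category.
    for word, weight in zip(vocab, clf.coef_[0]):
        print(f"{word:10s} {weight:+.3f}")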
2407.19310 | Michal Kawulok | Patryk Kuban and Michal Kawulok | Ensembling convolutional neural networks for human skin segmentation | Paper accepted for IBERAMIA 2024 | Lecture Notes in Computer Science, vol 15277. Springer, Cham, 2025 | 10.1007/978-3-031-80366-6_16 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Detecting and segmenting human skin regions in digital images is an
intensively explored topic of computer vision with a variety of approaches
proposed over the years that have been found useful in numerous practical
applications. The first methods were based on pixel-wise skin color modeling
and they were later enhanced with context-based analysis to include the
textural and geometrical features, recently extracted using deep convolutional
neural networks. It has also been demonstrated that skin regions can be
segmented from grayscale images without using color information at all.
However, the possibility to combine these two sources of information has not
been explored so far and we address this research gap with the contribution
reported in this paper. We propose to train a convolutional network using the
datasets focused on different features to create an ensemble whose individual
outcomes are effectively combined using yet another convolutional network
trained to produce the final segmentation map. The experimental results clearly
indicate that the proposed approach outperforms the basic classifiers, as well
as an ensemble based on the voting scheme. We expect that this study will help
in developing new ensemble-based techniques that will improve the performance
of semantic segmentation systems, reaching beyond the problem of detecting
human skin.
| [
{
"version": "v1",
"created": "Sat, 27 Jul 2024 17:55:28 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 10:58:47 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kuban",
"Patryk",
""
],
[
"Kawulok",
"Michal",
""
]
] | TITLE: Ensembling convolutional neural networks for human skin segmentation
ABSTRACT: Detecting and segmenting human skin regions in digital images is an
intensively explored topic of computer vision with a variety of approaches
proposed over the years that have been found useful in numerous practical
applications. The first methods were based on pixel-wise skin color modeling
and they were later enhanced with context-based analysis to include the
textural and geometrical features, recently extracted using deep convolutional
neural networks. It has also been demonstrated that skin regions can be
segmented from grayscale images without using color information at all.
However, the possibility to combine these two sources of information has not
been explored so far and we address this research gap with the contribution
reported in this paper. We propose to train a convolutional network using the
datasets focused on different features to create an ensemble whose individual
outcomes are effectively combined using yet another convolutional network
trained to produce the final segmentation map. The experimental results clearly
indicate that the proposed approach outperforms the basic classifiers, as well
as an ensemble based on the voting scheme. We expect that this study will help
in developing new ensemble-based techniques that will improve the performance
of semantic segmentation systems, reaching beyond the problem of detecting
human skin.
|
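The record above describes combining the outputs of several base segmentation networks with another learned network. The PyTorch sketch below shows one plausible realization: a small convolutional fusion head that maps stacked per-model skin-probability maps to a final segmentation map. The layer sizes are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class FusionNet(nn.Module):
        """Combines K per-model skin-probability maps into one segmentation map."""
        def __init__(self, num_models: int):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Conv2d(num_models, 8, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(8, 1, kernel_size=1),
            )

        def forward(self, prob_maps: torch.Tensor) -> torch.Tensor:
            # prob_maps: (batch, num_models, H, W), one channel per base CNN
            return torch.sigmoid(self.fuse(prob_maps))

    # Toy usage: three base classifiers, one 64x64 image.
    maps = torch.rand(1, 3, 64, 64)
    fused = FusionNet(num_models=3)(maps)
    print(fused.shape)  # torch.Size([1, 1, 64, 64])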
2408.06663 | Kaiser Sun | Kaiser Sun, Mark Dredze | Amuro and Char: Analyzing the Relationship between Pre-Training and
Fine-Tuning of Large Language Models | Rep4NLP Camera Ready | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The development of large language models leads to the formation of a
pre-train-then-align paradigm, in which the model is typically pre-trained on a
large text corpus and undergoes a tuning stage to align the model with human
preference or downstream tasks. In this work, we investigate the relationship
between pre-training and fine-tuning by fine-tuning multiple intermediate
pre-trained model checkpoints. Our results on 18 datasets suggest that i)
continual pre-training improves the model in a latent way that is only revealed
after fine-tuning; ii) with extra fine-tuning, the datasets on which the model
does not demonstrate capability during pre-training gain much more than those on
which the model already performs well; iii) although the model benefits
significantly from supervised fine-tuning, it may forget previously known domain
knowledge and tasks that are not seen during fine-tuning; iv) the model exhibits
high sensitivity to evaluation prompts after supervised fine-tuning, but this
sensitivity can be alleviated by more pre-training.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 06:28:43 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Aug 2024 15:23:38 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Feb 2025 22:07:55 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Feb 2025 16:57:29 GMT"
},
{
"version": "v5",
"created": "Tue, 18 Mar 2025 16:21:04 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Sun",
"Kaiser",
""
],
[
"Dredze",
"Mark",
""
]
] | TITLE: Amuro and Char: Analyzing the Relationship between Pre-Training and
Fine-Tuning of Large Language Models
ABSTRACT: The development of large language models leads to the formation of a
pre-train-then-align paradigm, in which the model is typically pre-trained on a
large text corpus and undergoes a tuning stage to align the model with human
preference or downstream tasks. In this work, we investigate the relationship
between pre-training and fine-tuning by fine-tuning multiple intermediate
pre-trained model checkpoints. Our results on 18 datasets suggest that i)
continual pre-training improves the model in a latent way that is only revealed
after fine-tuning; ii) with extra fine-tuning, the datasets on which the model
does not demonstrate capability during pre-training gain much more than those on
which the model already performs well; iii) although the model benefits
significantly from supervised fine-tuning, it may forget previously known domain
knowledge and tasks that are not seen during fine-tuning; iv) the model exhibits
high sensitivity to evaluation prompts after supervised fine-tuning, but this
sensitivity can be alleviated by more pre-training.
|
2408.10641 | Yuxiao Wang | Yuxiao Wang, Yu Lei, Li Cui, Weiying Xue, Qi Liu, Zhenao Wei | A Review of Human-Object Interaction Detection | Accepted by 2024 2nd International Conference on Computer, Vision and
Intelligent Technology (ICCVIT) | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human-object interaction (HOI) detection plays a key role in high-level
visual understanding, facilitating a deep comprehension of human activities.
Specifically, HOI detection aims to locate the humans and objects involved in
interactions within images or videos and classify the specific interactions
between them. The success of this task is influenced by several key factors,
including the accurate localization of human and object instances, as well as
the correct classification of object categories and interaction relationships.
This paper systematically summarizes and discusses the recent work in
image-based HOI detection. First, the mainstream datasets involved in HOI
relationship detection are introduced. Furthermore, starting with two-stage
methods and end-to-end one-stage detection approaches, this paper
comprehensively discusses the current developments in image-based HOI
detection, analyzing the strengths and weaknesses of these two methods.
Additionally, the advancements of zero-shot learning, weakly supervised
learning, and the application of large-scale language models in HOI detection
are discussed. Finally, the current challenges in HOI detection are outlined,
and potential research directions and future trends are explored.
| [
{
"version": "v1",
"created": "Tue, 20 Aug 2024 08:32:39 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Dec 2024 09:27:29 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 02:22:59 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Yuxiao",
""
],
[
"Lei",
"Yu",
""
],
[
"Cui",
"Li",
""
],
[
"Xue",
"Weiying",
""
],
[
"Liu",
"Qi",
""
],
[
"Wei",
"Zhenao",
""
]
] | TITLE: A Review of Human-Object Interaction Detection
ABSTRACT: Human-object interaction (HOI) detection plays a key role in high-level
visual understanding, facilitating a deep comprehension of human activities.
Specifically, HOI detection aims to locate the humans and objects involved in
interactions within images or videos and classify the specific interactions
between them. The success of this task is influenced by several key factors,
including the accurate localization of human and object instances, as well as
the correct classification of object categories and interaction relationships.
This paper systematically summarizes and discusses the recent work in
image-based HOI detection. First, the mainstream datasets involved in HOI
relationship detection are introduced. Furthermore, starting with two-stage
methods and end-to-end one-stage detection approaches, this paper
comprehensively discusses the current developments in image-based HOI
detection, analyzing the strengths and weaknesses of these two methods.
Additionally, the advancements of zero-shot learning, weakly supervised
learning, and the application of large-scale language models in HOI detection
are discussed. Finally, the current challenges in HOI detection are outlined,
and potential research directions and future trends are explored.
|
2409.14020 | Donghwi Jung | Donghwi Jung, Andres Pulido, Jane Shin, and Seong-Woo Kim | Point Cloud Structural Similarity-based Underwater Sonar Loop Detection | null | IEEE Robotics and Automation Letters, vol. 10, no. 4, pp.
3859-3866, April 2025 | 10.1109/LRA.2025.3547304 | null | cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | In this letter, we propose a point cloud structural similarity-based loop
detection method for underwater Simultaneous Localization and Mapping using
sonar sensors. Existing sonar-based loop detection approaches often rely on 2D
projection and keypoint extraction, which can lead to data loss and poor
performance in feature-scarce environments. Additionally, methods based on
neural networks or Bag-of-Words require extensive preprocessing, such as model
training or vocabulary creation, reducing adaptability to new environments. To
address these challenges, our method directly utilizes 3D sonar point clouds
without projection and computes point-wise structural feature maps based on
geometry, normals, and curvature. By leveraging rotation-invariant similarity
comparisons, the proposed approach eliminates the need for keypoint detection
and ensures robust loop detection across diverse underwater terrains. We
validate our method using two real-world datasets: the Antarctica dataset
obtained from deep underwater and the Seaward dataset collected from rivers and
lakes. Experimental results show that our method achieves the highest loop
detection performance compared to existing keypoint-based and learning-based
approaches while requiring no additional training or preprocessing. Our code is
available at
https://github.com/donghwijung/point_cloud_structural_similarity_based_underwater_sonar_loop_detection.
| [
{
"version": "v1",
"created": "Sat, 21 Sep 2024 05:15:21 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 05:07:35 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jung",
"Donghwi",
""
],
[
"Pulido",
"Andres",
""
],
[
"Shin",
"Jane",
""
],
[
"Kim",
"Seong-Woo",
""
]
] | TITLE: Point Cloud Structural Similarity-based Underwater Sonar Loop Detection
ABSTRACT: In this letter, we propose a point cloud structural similarity-based loop
detection method for underwater Simultaneous Localization and Mapping using
sonar sensors. Existing sonar-based loop detection approaches often rely on 2D
projection and keypoint extraction, which can lead to data loss and poor
performance in feature-scarce environments. Additionally, methods based on
neural networks or Bag-of-Words require extensive preprocessing, such as model
training or vocabulary creation, reducing adaptability to new environments. To
address these challenges, our method directly utilizes 3D sonar point clouds
without projection and computes point-wise structural feature maps based on
geometry, normals, and curvature. By leveraging rotation-invariant similarity
comparisons, the proposed approach eliminates the need for keypoint detection
and ensures robust loop detection across diverse underwater terrains. We
validate our method using two real-world datasets: the Antarctica dataset
obtained from deep underwater and the Seaward dataset collected from rivers and
lakes. Experimental results show that our method achieves the highest loop
detection performance compared to existing keypoint-based and learning-based
approaches while requiring no additional training or preprocessing. Our code is
available at
https://github.com/donghwijung/point_cloud_structural_similarity_based_underwater_sonar_loop_detection.
|
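The record above mentions point-wise structural feature maps built from geometry, normals, and curvature with rotation-invariant comparisons. As a loose illustration of one such descriptor, the sketch below computes a per-point surface-variation value from the eigenvalues of each point's k-nearest-neighbour covariance matrix (eigenvalues are invariant to rotation). This is a generic geometric feature, not the paper's exact feature map, and the brute-force neighbour search is only suitable for small clouds.

    import numpy as np

    def surface_variation(points: np.ndarray, k: int = 16) -> np.ndarray:
        """Per-point curvature proxy: lambda_min / (lambda_1 + lambda_2 + lambda_3)
        of the k-nearest-neighbour covariance matrix (rotation invariant)."""
        n = len(points)
        # Brute-force pairwise squared distances; fine for small clouds.
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        features = np.empty(n)
        for i in range(n):
            nbrs = points[np.argsort(d2[i])[:k]]     # includes the point itself
            cov = np.cov(nbrs.T)
            eigvals = np.sort(np.linalg.eigvalsh(cov))  # ascending
            features[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
        return features

    # Toy sonar-like cloud: a noisy plane, whose surface variation should be low.
    rng = np.random.default_rng(1)
    cloud = np.c_[rng.uniform(0, 10, 200), rng.uniform(0, 10, 200),
                  0.05 * rng.standard_normal(200)]
    print(surface_variation(cloud).mean())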
2410.01273 | Jaeyoon Jung | Suhwan Choi, Yongjun Cho, Minchan Kim, Jaeyoon Jung, Myunchul Joe,
Yubeen Park, Minseo Kim, Sungwoong Kim, Sungjae Lee, Hwiseong Park, Jiwan
Chung, Youngjae Yu | CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot
Interaction | Accepted to ICRA 2025, project page https://worv-ai.github.io/canvas | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-life robot navigation involves more than just reaching a destination; it
requires optimizing movements while addressing scenario-specific goals. An
intuitive way for humans to express these goals is through abstract cues like
verbal commands or rough sketches. Such human guidance may lack details or be
noisy. Nonetheless, we expect robots to navigate as intended. For robots to
interpret and execute these abstract instructions in line with human
expectations, they must share a common understanding of basic navigation
concepts with humans. To this end, we introduce CANVAS, a novel framework that
combines visual and linguistic instructions for commonsense-aware navigation.
Its success is driven by imitation learning, enabling the robot to learn from
human navigation behavior. We present COMMAND, a comprehensive dataset with
human-annotated navigation results, spanning over 48 hours and 219 km, designed
to train commonsense-aware navigation systems in simulated environments. Our
experiments show that CANVAS outperforms the strong rule-based system ROS
NavStack across all environments, demonstrating superior performance with noisy
instructions. Notably, in the orchard environment, where ROS NavStack records a
0% total success rate, CANVAS achieves a total success rate of 67%. CANVAS also
closely aligns with human demonstrations and commonsense constraints, even in
unseen environments. Furthermore, real-world deployment of CANVAS showcases
impressive Sim2Real transfer with a total success rate of 69%, highlighting the
potential of learning from human demonstrations in simulated environments for
real-world applications.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 06:34:45 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 12:44:59 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Choi",
"Suhwan",
""
],
[
"Cho",
"Yongjun",
""
],
[
"Kim",
"Minchan",
""
],
[
"Jung",
"Jaeyoon",
""
],
[
"Joe",
"Myunchul",
""
],
[
"Park",
"Yubeen",
""
],
[
"Kim",
"Minseo",
""
],
[
"Kim",
"Sungwoong",
""
],
[
"Lee",
"Sungjae",
""
],
[
"Park",
"Hwiseong",
""
],
[
"Chung",
"Jiwan",
""
],
[
"Yu",
"Youngjae",
""
]
] | TITLE: CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot
Interaction
ABSTRACT: Real-life robot navigation involves more than just reaching a destination; it
requires optimizing movements while addressing scenario-specific goals. An
intuitive way for humans to express these goals is through abstract cues like
verbal commands or rough sketches. Such human guidance may lack details or be
noisy. Nonetheless, we expect robots to navigate as intended. For robots to
interpret and execute these abstract instructions in line with human
expectations, they must share a common understanding of basic navigation
concepts with humans. To this end, we introduce CANVAS, a novel framework that
combines visual and linguistic instructions for commonsense-aware navigation.
Its success is driven by imitation learning, enabling the robot to learn from
human navigation behavior. We present COMMAND, a comprehensive dataset with
human-annotated navigation results, spanning over 48 hours and 219 km, designed
to train commonsense-aware navigation systems in simulated environments. Our
experiments show that CANVAS outperforms the strong rule-based system ROS
NavStack across all environments, demonstrating superior performance with noisy
instructions. Notably, in the orchard environment, where ROS NavStack records a
0% total success rate, CANVAS achieves a total success rate of 67%. CANVAS also
closely aligns with human demonstrations and commonsense constraints, even in
unseen environments. Furthermore, real-world deployment of CANVAS showcases
impressive Sim2Real transfer with a total success rate of 69%, highlighting the
potential of learning from human demonstrations in simulated environments for
real-world applications.
|
2410.07933 | Carolin Schmidt | Carolin Schmidt, Daniele Gammelli, James Harrison, Marco Pavone,
Filipe Rodrigues | Offline Hierarchical Reinforcement Learning via Inverse Optimization | null | null | null | null | cs.LG cs.SY eess.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical policies enable strong performance in many sequential
decision-making problems, such as those with high-dimensional action spaces,
those requiring long-horizon planning, and settings with sparse rewards.
However, learning hierarchical policies from static offline datasets presents a
significant challenge. Crucially, actions taken by higher-level policies may
not be directly observable within hierarchical controllers, and the offline
dataset might have been generated using a different policy structure, hindering
the use of standard offline learning algorithms. In this work, we propose OHIO:
a framework for offline reinforcement learning (RL) of hierarchical policies.
Our framework leverages knowledge of the policy structure to solve the
\textit{inverse problem}, recovering the unobservable high-level actions that
likely generated the observed data under our hierarchical policy. This approach
constructs a dataset suitable for off-the-shelf offline training. We
demonstrate our framework on robotic and network optimization problems and show
that it substantially outperforms end-to-end RL methods and improves
robustness. We investigate a variety of instantiations of our framework, both
in direct deployment of policies trained offline and when online fine-tuning is
performed. Code and data are available at
https://ohio-offline-hierarchical-rl.github.io
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 14:00:21 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 15:30:08 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Schmidt",
"Carolin",
""
],
[
"Gammelli",
"Daniele",
""
],
[
"Harrison",
"James",
""
],
[
"Pavone",
"Marco",
""
],
[
"Rodrigues",
"Filipe",
""
]
] | TITLE: Offline Hierarchical Reinforcement Learning via Inverse Optimization
ABSTRACT: Hierarchical policies enable strong performance in many sequential
decision-making problems, such as those with high-dimensional action spaces,
those requiring long-horizon planning, and settings with sparse rewards.
However, learning hierarchical policies from static offline datasets presents a
significant challenge. Crucially, actions taken by higher-level policies may
not be directly observable within hierarchical controllers, and the offline
dataset might have been generated using a different policy structure, hindering
the use of standard offline learning algorithms. In this work, we propose OHIO:
a framework for offline reinforcement learning (RL) of hierarchical policies.
Our framework leverages knowledge of the policy structure to solve the
\textit{inverse problem}, recovering the unobservable high-level actions that
likely generated the observed data under our hierarchical policy. This approach
constructs a dataset suitable for off-the-shelf offline training. We
demonstrate our framework on robotic and network optimization problems and show
that it substantially outperforms end-to-end RL methods and improves
robustness. We investigate a variety of instantiations of our framework, both
in direct deployment of policies trained offline and when online fine-tuning is
performed. Code and data are available at
https://ohio-offline-hierarchical-rl.github.io
|
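OHIO, per the record above, recovers unobserved high-level actions by solving an inverse problem through a known low-level policy. The toy numpy sketch below illustrates that idea under a strong simplifying assumption: the low-level policy is a linear proportional controller toward a goal, so the logged low-level action can be inverted by least squares to recover the goal that produced it. The controller form and dimensions are hypothetical.

    import numpy as np

    # Assumed low-level policy: a proportional controller u = K (g - x),
    # where g is the unobserved high-level action (a goal) chosen by the high level.
    K = np.array([[2.0, 0.0],
                  [0.5, 1.5]])

    def low_level(goal, state):
        return K @ (goal - state)

    # The offline log contains only (state, low-level action) pairs.
    true_goal = np.array([1.0, -2.0])
    state = np.array([0.2, 0.3])
    u_logged = low_level(true_goal, state)

    # Inverse step: recover the high-level action that explains the logged action.
    recovered_goal = state + np.linalg.lstsq(K, u_logged, rcond=None)[0]
    print(true_goal, recovered_goal)  # should match up to numerical precision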
2410.08437 | Daniel Bramblett | Rushang Karia, Daniel Bramblett, Daksh Dobhal, Siddharth Srivastava | AutoEval: Autonomous Evaluation of LLMs for Truth Maintenance and
Reasoning Tasks | null | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents AutoEval, a novel benchmark for scaling Large Language
Model (LLM) assessment in formal tasks with clear notions of correctness, such
as truth maintenance in translation and logical reasoning. AutoEval is the
first benchmarking paradigm that offers several key advantages necessary for
scaling objective evaluation of LLMs without human labeling: (a) ability to
evaluate LLMs of increasing sophistication by auto-generating tasks at
different levels of difficulty; (b) auto-generation of ground truth that
eliminates dependence on expensive and time-consuming human annotation; (c) the
use of automatically generated, randomized datasets that mitigate the ability
of successive LLMs to overfit to static datasets used in many contemporary
benchmarks. Empirical analysis shows that an LLM's performance on AutoEval is
highly indicative of its performance on a diverse array of other benchmarks
focusing on translation and reasoning tasks, making it a valuable autonomous
evaluation paradigm in settings where hand-curated datasets can be hard to
obtain and/or update.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 00:56:37 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 21:03:16 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Karia",
"Rushang",
""
],
[
"Bramblett",
"Daniel",
""
],
[
"Dobhal",
"Daksh",
""
],
[
"Srivastava",
"Siddharth",
""
]
] | TITLE: AutoEval: Autonomous Evaluation of LLMs for Truth Maintenance and
Reasoning Tasks
ABSTRACT: This paper presents AutoEval, a novel benchmark for scaling Large Language
Model (LLM) assessment in formal tasks with clear notions of correctness, such
as truth maintenance in translation and logical reasoning. AutoEval is the
first benchmarking paradigm that offers several key advantages necessary for
scaling objective evaluation of LLMs without human labeling: (a) ability to
evaluate LLMs of increasing sophistication by auto-generating tasks at
different levels of difficulty; (b) auto-generation of ground truth that
eliminates dependence on expensive and time-consuming human annotation; (c) the
use of automatically generated, randomized datasets that mitigate the ability
of successive LLMs to overfit to static datasets used in many contemporary
benchmarks. Empirical analysis shows that an LLM's performance on AutoEval is
highly indicative of its performance on a diverse array of other benchmarks
focusing on translation and reasoning tasks, making it a valuable autonomous
evaluation paradigm in settings where hand-curated datasets can be hard to
obtain and/or update.
|
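A key claim in the record above is that ground truth can be auto-generated without human labels and that task difficulty can be dialed up automatically. The sketch below shows that pattern for one simple formal task, propositional equivalence: formulas are generated at a chosen depth and the label comes from exhaustive truth-table enumeration. The grammar and the depth-as-difficulty knob are illustrative assumptions, not the benchmark's actual task set.

    import itertools
    import random

    VARS = ["p", "q", "r"]

    def random_formula(depth: int) -> str:
        """Auto-generate a random propositional formula; depth controls difficulty."""
        if depth == 0:
            return random.choice(VARS)
        op = random.choice(["and", "or", "not"])
        if op == "not":
            return f"(not {random_formula(depth - 1)})"
        return f"({random_formula(depth - 1)} {op} {random_formula(depth - 1)})"

    def equivalent(f1: str, f2: str) -> bool:
        """Ground truth by brute-force truth-table enumeration (no human labels)."""
        for values in itertools.product([False, True], repeat=len(VARS)):
            env = dict(zip(VARS, values))
            if eval(f1, {}, env) != eval(f2, {}, env):
                return False
        return True

    random.seed(0)
    f = random_formula(depth=3)
    g = f"(not (not {f}))"          # a formula that must be equivalent to f
    print(f, "<=>", g, ":", equivalent(f, g))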
2410.12360 | Qingren Yao | Qingren Yao, Chao-Han Huck Yang, Renhe Jiang, Yuxuan Liang, Ming Jin,
Shirui Pan | Towards Neural Scaling Laws for Time Series Foundation Models | Accepted by the 13th International Conference on Learning
Representations (ICLR 2025) | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Scaling laws offer valuable insights into the design of time series
foundation models (TSFMs). However, previous research has largely focused on
the scaling laws of TSFMs for in-distribution (ID) data, leaving their
out-of-distribution (OOD) scaling behavior and the influence of model
architectures less explored. In this work, we examine two common TSFM
architectures, encoder-only and decoder-only Transformers, and investigate
their scaling behavior on both ID and OOD data. These models are trained and
evaluated across varying parameter counts, compute budgets, and dataset sizes.
Our experiments reveal that the log-likelihood loss of TSFMs exhibits similar
scaling behavior in both OOD and ID settings. We further compare the scaling
properties across different architectures, incorporating two state-of-the-art
TSFMs as case studies, showing that model architecture plays a significant role
in scaling. The encoder-only Transformers demonstrate better scalability than
the decoder-only Transformers, while the architectural enhancements in the two
advanced TSFMs primarily improve ID performance but reduce OOD scalability.
While scaling up TSFMs is expected to drive performance breakthroughs, the lack
of a comprehensive understanding of TSFM scaling laws has hindered the
development of a robust framework to guide model scaling. We fill this gap in
this work by synthesizing our findings and providing practical guidelines for
designing and scaling larger TSFMs with enhanced model capabilities.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 08:23:39 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2025 02:35:14 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 06:54:45 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yao",
"Qingren",
""
],
[
"Yang",
"Chao-Han Huck",
""
],
[
"Jiang",
"Renhe",
""
],
[
"Liang",
"Yuxuan",
""
],
[
"Jin",
"Ming",
""
],
[
"Pan",
"Shirui",
""
]
] | TITLE: Towards Neural Scaling Laws for Time Series Foundation Models
ABSTRACT: Scaling laws offer valuable insights into the design of time series
foundation models (TSFMs). However, previous research has largely focused on
the scaling laws of TSFMs for in-distribution (ID) data, leaving their
out-of-distribution (OOD) scaling behavior and the influence of model
architectures less explored. In this work, we examine two common TSFM
architectures, encoder-only and decoder-only Transformers, and investigate
their scaling behavior on both ID and OOD data. These models are trained and
evaluated across varying parameter counts, compute budgets, and dataset sizes.
Our experiments reveal that the log-likelihood loss of TSFMs exhibits similar
scaling behavior in both OOD and ID settings. We further compare the scaling
properties across different architectures, incorporating two state-of-the-art
TSFMs as case studies, showing that model architecture plays a significant role
in scaling. The encoder-only Transformers demonstrate better scalability than
the decoder-only Transformers, while the architectural enhancements in the two
advanced TSFMs primarily improve ID performance but reduce OOD scalability.
While scaling up TSFMs is expected to drive performance breakthroughs, the lack
of a comprehensive understanding of TSFM scaling laws has hindered the
development of a robust framework to guide model scaling. We fill this gap in
this work by synthesizing our findings and providing practical guidelines for
designing and scaling larger TSFMs with enhanced model capabilities.
|
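Scaling-law analyses of the kind summarized above typically fit a saturating power law to (model size, loss) pairs. The sketch below fits L(N) = a N^(-alpha) + c with scipy; the data points are synthetic placeholders and the functional form is a standard assumption in the scaling-law literature, not necessarily the paper's exact parameterisation.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(n, a, alpha, c):
        return a * n ** (-alpha) + c

    # Synthetic (parameter count, validation loss) pairs standing in for real runs.
    sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8, 3e8])
    losses = power_law(sizes, a=50.0, alpha=0.35, c=0.8) \
             + 0.01 * np.random.default_rng(0).standard_normal(len(sizes))

    params, _ = curve_fit(power_law, sizes, losses, p0=[10.0, 0.3, 0.5])
    a_hat, alpha_hat, c_hat = params
    print(f"fitted exponent alpha = {alpha_hat:.3f}, irreducible loss c = {c_hat:.3f}")

Comparing the fitted exponents between ID and OOD evaluation sets, or between encoder-only and decoder-only variants, is the kind of analysis such a fit enables.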
2410.12819 | Francisco M. Calatrava-Nicol\'as | Francisco M. Calatrava-Nicol\'as, Shoko Miyauchi, and Oscar Martinez
Mozos | Deep Adversarial Learning with Activity-Based User Discrimination Task
for Human Activity Recognition | null | null | null | null | eess.SP cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new adversarial deep learning framework for the problem of human
activity recognition (HAR) using inertial sensors worn by people. Our framework
incorporates a novel adversarial activity-based discrimination task that
addresses inter-person variability, i.e., the fact that different people perform
the same activity in different ways. Overall, our proposed framework
outperforms previous approaches on three HAR datasets using a
leave-one-(person)-out cross-validation (LOOCV) benchmark. Additional results
demonstrate that our discrimination task yields better classification results
compared to previous tasks within the same adversarial framework.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 11:58:33 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 02:56:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Calatrava-Nicolás",
"Francisco M.",
""
],
[
"Miyauchi",
"Shoko",
""
],
[
"Mozos",
"Oscar Martinez",
""
]
] | TITLE: Deep Adversarial Learning with Activity-Based User Discrimination Task
for Human Activity Recognition
ABSTRACT: We present a new adversarial deep learning framework for the problem of human
activity recognition (HAR) using inertial sensors worn by people. Our framework
incorporates a novel adversarial activity-based discrimination task that
addresses inter-person variability, i.e., the fact that different people perform
the same activity in different ways. Overall, our proposed framework
outperforms previous approaches on three HAR datasets using a
leave-one-(person)-out cross-validation (LOOCV) benchmark. Additional results
demonstrate that our discrimination task yields better classification results
compared to previous tasks within the same adversarial framework.
|
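Adversarial auxiliary tasks like the activity-based user discrimination described above are commonly implemented with a gradient reversal layer, so the shared encoder is pushed toward person-invariant features while an adversarial head tries to identify the user. The PyTorch sketch below shows that generic mechanism; it is a standard building block for adversarial feature learning, not necessarily the paper's exact discrimination loss, and the layer sizes and class counts are hypothetical.

    import torch
    from torch import nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips (and scales) gradients backward."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    def grad_reverse(x, lam=1.0):
        return GradReverse.apply(x, lam)

    # Toy adversarial head: features -> user identity, trained through reversal so
    # the shared encoder is pushed toward person-invariant activity features.
    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
    user_head = nn.Linear(64, 10)          # 10 hypothetical users
    activity_head = nn.Linear(64, 6)       # 6 hypothetical activities

    x = torch.randn(8, 32)                 # a batch of inertial feature windows
    feats = encoder(x)
    activity_logits = activity_head(feats)
    user_logits = user_head(grad_reverse(feats, lam=0.5))
    print(activity_logits.shape, user_logits.shape)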
2410.13788 | Michael J.Q. Zhang | Michael J.Q. Zhang, W. Bradley Knox, Eunsol Choi | Modeling Future Conversation Turns to Teach LLMs to Ask Clarifying
Questions | Presented at ICLR 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models (LLMs) must often respond to highly ambiguous user
requests. In such cases, the LLM's best response may be to ask a clarifying
question to elicit more information. Existing LLMs often respond by
presupposing a single interpretation of such ambiguous requests, frustrating
users who intended a different interpretation. We speculate this is caused by
current preference data labeling practice, where LLM responses are evaluated
only on their prior contexts. To address this, we assign preference labels by
simulating their expected outcomes in future turns. This allows LLMs to learn
to ask clarifying questions when it can generate responses that are tailored to
each user interpretation in future turns. On open-domain QA datasets with
multiple annotations, we evaluate systems based on their ability to ask
clarifying questions to recover each user's interpretation and expected answer.
We compare systems trained using our proposed preference labeling methods
against standard methods, which assign preferences based on only prior context.
Our method achieves a 5% improvement in F1 measured against the answer set from
different interpretations of each query, showing the value of modeling future
conversation turns. We further demonstrate that our method can be used to train
models to judiciously determine when to ask clarifying questions, directly
answering the question when clarification is unnecessary. In our experiments,
we find that our method achieves a 3% improvement in accuracy of such judgments
over existing methods.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 17:29:04 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:17:47 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Michael J. Q.",
""
],
[
"Knox",
"W. Bradley",
""
],
[
"Choi",
"Eunsol",
""
]
] | TITLE: Modeling Future Conversation Turns to Teach LLMs to Ask Clarifying
Questions
ABSTRACT: Large language models (LLMs) must often respond to highly ambiguous user
requests. In such cases, the LLM's best response may be to ask a clarifying
question to elicit more information. Existing LLMs often respond by
presupposing a single interpretation of such ambiguous requests, frustrating
users who intended a different interpretation. We speculate this is caused by
current preference data labeling practice, where LLM responses are evaluated
only on their prior contexts. To address this, we assign preference labels by
simulating their expected outcomes in future turns. This allows LLMs to learn
to ask clarifying questions when it can generate responses that are tailored to
each user interpretation in future turns. On open-domain QA datasets with
multiple annotations, we evaluate systems based on their ability to ask
clarifying questions to recover each user's interpretation and expected answer.
We compare systems trained using our proposed preference labeling methods
against standard methods, which assign preferences based on only prior context.
Our method achieves a 5% improvement in F1 measured against the answer set from
different interpretations of each query, showing the value of modeling future
conversation turns. We further demonstrate that our method can be used to train
models to judiciously determine when to ask clarifying questions, directly
answering the question when clarification is unnecessary. In our experiments,
we find that our method achieves a 3% improvement in accuracy of such judgments
over existing methods.
|
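The evaluation described above scores systems by F1 against the set of answers arising from a query's different interpretations. One plausible reading of that metric is exact-match F1 between the system's recovered answers and the gold answer set, sketched below; token-level F1 is another common variant, and the example strings are hypothetical.

    def set_f1(predicted: set, gold: set) -> float:
        """Exact-match F1 between a system's recovered answers and the gold
        answer set aggregated over the query's different interpretations."""
        if not predicted or not gold:
            return 0.0
        tp = len(predicted & gold)
        if tp == 0:
            return 0.0
        precision = tp / len(predicted)
        recall = tp / len(gold)
        return 2 * precision * recall / (precision + recall)

    gold_answers = {"Paris", "Paris, Texas"}   # two interpretations of the query
    system_answers = {"Paris"}                 # recovered only one of them
    print(f"F1 = {set_f1(system_answers, gold_answers):.2f}")  # 0.67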
2410.16713 | Joshua Kazdan | Joshua Kazdan, Rylan Schaeffer, Apratim Dey, Matthias Gerstgrasser,
Rafael Rafailov, David L. Donoho, Sanmi Koyejo | Collapse or Thrive? Perils and Promises of Synthetic Data in a
Self-Generating World | Accepted at NeurIPS 2024 Workshops: Mathematics of Modern Machine
Learning (M3L) and Attributing Model Behavior at Scale (ATTRIB) | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | What happens when generative machine learning models are pretrained on
web-scale datasets containing data generated by earlier models? Some prior work
warns of "model collapse" as the web is overwhelmed by synthetic data; other
work suggests the problem can be contained (i.e. collapse can be avoided) by
managing how available data are used in pretraining. In this paper, we report
experiments on three ways of using data (training-workflows), across three
generative model task-settings (multivariate Gaussian estimation, kernel
density estimation, and language-model fine-tuning) to further confirm the
possibility of containment: (a) we confirm that the training-workflow of {\it
replacing} all real data by successive generations of purely synthetic data
indeed suffers model collapse in all task-settings studied; (b) we consider the
training-workflow of {\it accumulating} synthetic data alongside real data and
training on all data combined, and confirm that, although the proportion of
real data eventually becomes zero, models remain stable and their test losses
do not diverge under this training-workflow; (c) we consider a
training-workflow where real and synthetic data accumulate together but
successive generations of pretraining are constrained to use fixed-size data
subsets each generation. In this workflow, we observe slow and gradual rather
than explosive degradation of test loss performance across generations. Our
insights are particularly important when forecasting whether future frontier
generative models will collapse or thrive, and our results open avenues for
empirically and mathematically studying the context-dependent value of
synthetic data.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 05:49:24 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Dec 2024 06:37:01 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Feb 2025 00:43:54 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 21:14:46 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kazdan",
"Joshua",
""
],
[
"Schaeffer",
"Rylan",
""
],
[
"Dey",
"Apratim",
""
],
[
"Gerstgrasser",
"Matthias",
""
],
[
"Rafailov",
"Rafael",
""
],
[
"Donoho",
"David L.",
""
],
[
"Koyejo",
"Sanmi",
""
]
] | TITLE: Collapse or Thrive? Perils and Promises of Synthetic Data in a
Self-Generating World
ABSTRACT: What happens when generative machine learning models are pretrained on
web-scale datasets containing data generated by earlier models? Some prior work
warns of "model collapse" as the web is overwhelmed by synthetic data; other
work suggests the problem can be contained (i.e. collapse can be avoided) by
managing how available data are used in pretraining. In this paper, we report
experiments on three ways of using data (training-workflows), across three
generative model task-settings (multivariate Gaussian estimation, kernel
density estimation, and language-model fine-tuning) to further confirm the
possibility of containment: (a) we confirm that the training-workflow of {\it
replacing} all real data by successive generations of purely synthetic data
indeed suffers model collapse in all task-settings studied; (b) we consider the
training-workflow of {\it accumulating} synthetic data alongside real data and
training on all data combined, and confirm that, although the proportion of
real data eventually becomes zero, models remain stable and their test losses
do not diverge under this training-workflow; (c) we consider a
training-workflow where real and synthetic data accumulate together but
successive generations of pretraining are constrained to use fixed-size data
subsets each generation. In this workflow, we observe slow and gradual rather
than explosive degradation of test loss performance across generations. Our
insights are particularly important when forecasting whether future frontier
generative models will collapse or thrive, and our results open avenues for
empirically and mathematically studying the context-dependent value of
synthetic data.
|
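The contrast between the "replace" and "accumulate" training-workflows above can be illustrated with a toy simulation of the Gaussian-estimation task-setting (simplified here to one dimension; the paper's setting is multivariate and the generation counts below are arbitrary).

    import numpy as np

    rng = np.random.default_rng(0)
    n_real, n_generations, sample_size = 1000, 20, 1000
    real_data = rng.standard_normal(n_real)  # true distribution: N(0, 1)

    def fit(data):
        return data.mean(), data.std()

    # Workflow (a): each generation trains only on the previous model's samples.
    mu, sigma = fit(real_data)
    for _ in range(n_generations):
        synthetic = rng.normal(mu, sigma, sample_size)
        mu, sigma = fit(synthetic)           # replace: real data discarded
    print(f"replace    -> sigma after {n_generations} generations: {sigma:.3f}")

    # Workflow (b): synthetic data accumulates alongside the original real data.
    pool = real_data.copy()
    mu_acc, sigma_acc = fit(pool)
    for _ in range(n_generations):
        synthetic = rng.normal(mu_acc, sigma_acc, sample_size)
        pool = np.concatenate([pool, synthetic])
        mu_acc, sigma_acc = fit(pool)        # accumulate: train on everything so far
    print(f"accumulate -> sigma after {n_generations} generations: {sigma_acc:.3f}")

Under the replace workflow the estimated scale tends to drift downward across generations, while accumulation keeps the estimate stable, mirroring the qualitative findings summarized above.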
2410.17263 | Arjun Subramonian | Arjun Subramonian, Samuel J. Bell, Levent Sagun, Elvis Dohmatob | An Effective Theory of Bias Amplification | Accepted to ICLR 2025 | null | null | null | cs.LG cs.CY stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning models can capture and amplify biases present in data,
leading to disparate test performance across social groups. To better
understand, evaluate, and mitigate these biases, a deeper theoretical
understanding of how model design choices and data distribution properties
contribute to bias is needed. In this work, we contribute a precise analytical
theory in the context of ridge regression, both with and without random
projections, where the former models feedforward neural networks in a
simplified regime. Our theory offers a unified and rigorous explanation of
machine learning bias, providing insights into phenomena such as bias
amplification and minority-group bias in various feature and parameter regimes.
For example, we observe that there may be an optimal regularization penalty or
training time to avoid bias amplification, and there can be differences in test
error between groups that are not alleviated with increased parameterization.
Importantly, our theoretical predictions align with empirical observations
reported in the literature on machine learning bias. We extensively empirically
validate our theory on synthetic and semi-synthetic datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 08:43:22 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Oct 2024 16:24:30 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Oct 2024 02:21:41 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 17:56:58 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Subramonian",
"Arjun",
""
],
[
"Bell",
"Samuel J.",
""
],
[
"Sagun",
"Levent",
""
],
[
"Dohmatob",
"Elvis",
""
]
] | TITLE: An Effective Theory of Bias Amplification
ABSTRACT: Machine learning models can capture and amplify biases present in data,
leading to disparate test performance across social groups. To better
understand, evaluate, and mitigate these biases, a deeper theoretical
understanding of how model design choices and data distribution properties
contribute to bias is needed. In this work, we contribute a precise analytical
theory in the context of ridge regression, both with and without random
projections, where the former models feedforward neural networks in a
simplified regime. Our theory offers a unified and rigorous explanation of
machine learning bias, providing insights into phenomena such as bias
amplification and minority-group bias in various feature and parameter regimes.
For example, we observe that there may be an optimal regularization penalty or
training time to avoid bias amplification, and there can be differences in test
error between groups that are not alleviated with increased parameterization.
Importantly, our theoretical predictions align with empirical observations
reported in the literature on machine learning bias. We extensively empirically
validate our theory on synthetic and semi-synthetic datasets.
|
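The ridge-regression setting analysed above can be probed empirically with a small synthetic experiment: fit a single ridge model on data pooled from a majority and a minority group and compare per-group test error. The data-generating process and the group shift below are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_major, n_minor = 20, 500, 50
    w_true = rng.standard_normal(d)

    def sample_group(n, shift):
        X = rng.standard_normal((n, d)) + shift
        y = X @ w_true + 0.1 * rng.standard_normal(n)
        return X, y

    # Majority and minority groups drawn from slightly different input distributions.
    Xa, ya = sample_group(n_major, shift=0.0)
    Xb, yb = sample_group(n_minor, shift=0.5)
    X = np.vstack([Xa, Xb]); y = np.concatenate([ya, yb])

    lam = 1.0  # ridge penalty; the theory predicts an optimal value may exist
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    for name, (Xg, yg) in {"majority": sample_group(200, 0.0),
                           "minority": sample_group(200, 0.5)}.items():
        err = np.mean((Xg @ w_hat - yg) ** 2)
        print(f"{name} test MSE: {err:.4f}")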
2410.21113 | Jo\~ao Pereira | Joao Pereira, Vasco Lopes, David Semedo, Joao Neves | Zero-Shot Action Recognition in Surveillance Videos | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing demand for surveillance in public spaces presents significant
challenges due to the shortage of human resources. Current AI-based video
surveillance systems heavily rely on core computer vision models that require
extensive finetuning, which is particularly difficult in surveillance settings
due to limited datasets and difficult conditions (viewpoint, low quality, etc.).
In this work, we propose leveraging Large Vision-Language Models (LVLMs), known
for their strong zero and few-shot generalization, to tackle video
understanding tasks in surveillance. Specifically, we explore VideoLLaMA2, a
state-of-the-art LVLM, and an improved token-level sampling method,
Self-Reflective Sampling (Self-ReS). Our experiments on the UCF-Crime dataset
show that VideoLLaMA2 represents a significant leap in zero-shot performance,
with a 20% boost over the baseline. Self-ReS additionally increases zero-shot
action recognition performance to 44.6%. These results highlight the potential
of LVLMs, paired with improved sampling techniques, for advancing surveillance
video analysis in diverse scenarios.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2024 15:13:53 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:30:27 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Pereira",
"Joao",
""
],
[
"Lopes",
"Vasco",
""
],
[
"Semedo",
"David",
""
],
[
"Neves",
"Joao",
""
]
] | TITLE: Zero-Shot Action Recognition in Surveillance Videos
ABSTRACT: The growing demand for surveillance in public spaces presents significant
challenges due to the shortage of human resources. Current AI-based video
surveillance systems heavily rely on core computer vision models that require
extensive finetuning, which is particularly difficult in surveillance settings
due to limited datasets and difficult conditions (viewpoint, low quality, etc.).
In this work, we propose leveraging Large Vision-Language Models (LVLMs), known
for their strong zero and few-shot generalization, to tackle video
understanding tasks in surveillance. Specifically, we explore VideoLLaMA2, a
state-of-the-art LVLM, and an improved token-level sampling method,
Self-Reflective Sampling (Self-ReS). Our experiments on the UCF-Crime dataset
show that VideoLLaMA2 represents a significant leap in zero-shot performance,
with a 20% boost over the baseline. Self-ReS additionally increases zero-shot
action recognition performance to 44.6%. These results highlight the potential
of LVLMs, paired with improved sampling techniques, for advancing surveillance
video analysis in diverse scenarios.
|
2410.21301 | Liam Moroy | Liam Moroy, Guillaume Bourmaud, Fr\'ed\'eric Champagnat,
Jean-Fran\c{c}ois Giovannelli | Evaluating the Posterior Sampling Ability of Plug&Play Diffusion Methods
in Sparse-View CT | null | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Plug&Play (PnP) diffusion models are state-of-the-art methods in computed
tomography (CT) reconstruction. Such methods usually consider applications
where the sinogram contains a sufficient amount of information for the
posterior distribution to be concentrated around a single mode, and
consequently are evaluated using image-to-image metrics such as PSNR/SSIM.
Instead, we are interested in reconstructing compressible flow images from
sinograms having a small number of projections, which results in a posterior
distribution that is no longer concentrated, or is even multimodal. Thus, in this
paper, we aim to evaluate the approximate posterior of PnP diffusion models and
introduce two posterior evaluation properties. We quantitatively evaluate three
PnP diffusion methods on three different datasets for several numbers of
projections. We surprisingly find that, for each method, the approximate
posterior deviates from the true posterior when the number of projections
decreases.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 11:39:03 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 09:00:53 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Moroy",
"Liam",
""
],
[
"Bourmaud",
"Guillaume",
""
],
[
"Champagnat",
"Frédéric",
""
],
[
"Giovannelli",
"Jean-François",
""
]
] | TITLE: Evaluating the Posterior Sampling Ability of Plug&Play Diffusion Methods
in Sparse-View CT
ABSTRACT: Plug&Play (PnP) diffusion models are state-of-the-art methods in computed
tomography (CT) reconstruction. Such methods usually consider applications
where the sinogram contains a sufficient amount of information for the
posterior distribution to be concentrated around a single mode, and
consequently are evaluated using image-to-image metrics such as PSNR/SSIM.
Instead, we are interested in reconstructing compressible flow images from
sinograms having a small number of projections, which results in a posterior
distribution that is no longer concentrated, or is even multimodal. Thus, in this
paper, we aim to evaluate the approximate posterior of PnP diffusion models and
introduce two posterior evaluation properties. We quantitatively evaluate three
PnP diffusion methods on three different datasets for several numbers of
projections. We surprisingly find that, for each method, the approximate
posterior deviates from the true posterior when the number of projections
decreases.
|
2410.21967 | Chengkai Huang | Hongtao Huang, Chengkai Huang, Tong Yu, Xiaojun Chang, Wen Hu, Julian
McAuley, Lina Yao | Dual Conditional Diffusion Models for Sequential Recommendation | null | null | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advancements in diffusion models have shown promising results in
sequential recommendation (SR). Existing approaches predominantly rely on
implicit conditional diffusion models, which compress user behaviors into a
single representation during the forward diffusion process. While effective to
some extent, this oversimplification often leads to the loss of sequential and
contextual information, which is critical for understanding user behavior.
Moreover, explicit information, such as user-item interactions or sequential
patterns, remains underutilized, despite its potential to directly guide the
recommendation process and improve precision. However, combining implicit and
explicit information is non-trivial, as it requires dynamically integrating
these complementary signals while avoiding noise and irrelevant patterns within
user behaviors. To address these challenges, we propose Dual Conditional
Diffusion Models for Sequential Recommendation (DCRec), which effectively
integrates implicit and explicit information by embedding dual conditions into
both the forward and reverse diffusion processes. This allows the model to
retain valuable sequential and contextual information while leveraging explicit
user-item interactions to guide the recommendation process. Specifically, we
introduce the Dual Conditional Diffusion Transformer (DCDT), which employs a
cross-attention mechanism to dynamically integrate explicit signals throughout
the diffusion stages, ensuring contextual understanding and minimizing the
influence of irrelevant patterns. This design enables precise and contextually
relevant recommendations. Extensive experiments on public benchmark datasets
demonstrate that DCRec significantly outperforms state-of-the-art methods in
both accuracy and computational efficiency.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 11:51:06 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 04:42:54 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Huang",
"Hongtao",
""
],
[
"Huang",
"Chengkai",
""
],
[
"Yu",
"Tong",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Hu",
"Wen",
""
],
[
"McAuley",
"Julian",
""
],
[
"Yao",
"Lina",
""
]
] | TITLE: Dual Conditional Diffusion Models for Sequential Recommendation
ABSTRACT: Recent advancements in diffusion models have shown promising results in
sequential recommendation (SR). Existing approaches predominantly rely on
implicit conditional diffusion models, which compress user behaviors into a
single representation during the forward diffusion process. While effective to
some extent, this oversimplification often leads to the loss of sequential and
contextual information, which is critical for understanding user behavior.
Moreover, explicit information, such as user-item interactions or sequential
patterns, remains underutilized, despite its potential to directly guide the
recommendation process and improve precision. However, combining implicit and
explicit information is non-trivial, as it requires dynamically integrating
these complementary signals while avoiding noise and irrelevant patterns within
user behaviors. To address these challenges, we propose Dual Conditional
Diffusion Models for Sequential Recommendation (DCRec), which effectively
integrates implicit and explicit information by embedding dual conditions into
both the forward and reverse diffusion processes. This allows the model to
retain valuable sequential and contextual information while leveraging explicit
user-item interactions to guide the recommendation process. Specifically, we
introduce the Dual Conditional Diffusion Transformer (DCDT), which employs a
cross-attention mechanism to dynamically integrate explicit signals throughout
the diffusion stages, ensuring contextual understanding and minimizing the
influence of irrelevant patterns. This design enables precise and contextually
relevant recommendations. Extensive experiments on public benchmark datasets
demonstrate that DCRec significantly outperforms state-of-the-art methods in
both accuracy and computational efficiency.
|
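The DCRec record above centers on a cross-attention mechanism that injects explicit user-item signals into the diffusion backbone. As a rough, generic sketch of single-head cross-attention in PyTorch (tensor shapes and module names are assumptions, not the paper's DCDT implementation):

```python
# Illustrative sketch only: generic single-head cross-attention, not the DCDT from the record above.
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # queries from the noised sequence representation
        self.k = nn.Linear(dim, dim)   # keys from explicit user-item interaction embeddings
        self.v = nn.Linear(dim, dim)   # values from the same explicit signals
        self.scale = dim ** -0.5

    def forward(self, seq_tokens, explicit_tokens):
        # seq_tokens: (batch, L, dim); explicit_tokens: (batch, M, dim)
        q = self.q(seq_tokens)
        k = self.k(explicit_tokens)
        v = self.v(explicit_tokens)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (batch, L, M)
        return seq_tokens + attn @ v  # residual fusion of the explicit guidance

x = torch.randn(2, 10, 64)      # hypothetical noised behavior sequence
cond = torch.randn(2, 5, 64)    # hypothetical explicit-interaction condition
print(CrossAttention(64)(x, cond).shape)  # torch.Size([2, 10, 64])
```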
2411.00201 | Marwan Abdelatti | Nidhal Jegham, Chan Young Koh, Marwan Abdelatti, and Abdeltawab
Hendawi | YOLO Evolution: A Comprehensive Benchmark and Architectural Review of
YOLOv12, YOLO11, and Their Previous Versions | 20 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This study presents a comprehensive benchmark analysis of various YOLO (You
Only Look Once) algorithms. It represents the first comprehensive experimental
evaluation of YOLOv3 to the latest version, YOLOv12, on various object
detection challenges. The challenges considered include varying object sizes,
diverse aspect ratios, and small-sized objects of a single class, ensuring a
comprehensive assessment across datasets with distinct challenges. To ensure a
robust evaluation, we employ a comprehensive set of metrics, including
Precision, Recall, Mean Average Precision (mAP), Processing Time, GFLOPs count,
and Model Size. Our analysis highlights the distinctive strengths and
limitations of each YOLO version. For example, YOLOv9 demonstrates substantial
accuracy but struggles with small-object detection and efficiency, whereas
YOLOv10 exhibits relatively lower accuracy due to architectural choices that
affect its performance on overlapping objects but excels in speed and
efficiency. Additionally, the YOLO11 family consistently shows superior
performance, maintaining a remarkable balance of accuracy and efficiency.
However, YOLOv12 delivered underwhelming results, with its complex architecture
introducing computational overhead without significant performance gains. These
results provide critical insights for both industry and academia, facilitating
the selection of the most suitable YOLO algorithm for diverse applications and
guiding future enhancements.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 20:45:00 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Feb 2025 18:54:09 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Feb 2025 19:00:29 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 19:27:13 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jegham",
"Nidhal",
""
],
[
"Koh",
"Chan Young",
""
],
[
"Abdelatti",
"Marwan",
""
],
[
"Hendawi",
"Abdeltawab",
""
]
] | TITLE: YOLO Evolution: A Comprehensive Benchmark and Architectural Review of
YOLOv12, YOLO11, and Their Previous Versions
ABSTRACT: This study presents a comprehensive benchmark analysis of various YOLO (You
Only Look Once) algorithms. It represents the first comprehensive experimental
evaluation of YOLOv3 to the latest version, YOLOv12, on various object
detection challenges. The challenges considered include varying object sizes,
diverse aspect ratios, and small-sized objects of a single class, ensuring a
comprehensive assessment across datasets with distinct challenges. To ensure a
robust evaluation, we employ a comprehensive set of metrics, including
Precision, Recall, Mean Average Precision (mAP), Processing Time, GFLOPs count,
and Model Size. Our analysis highlights the distinctive strengths and
limitations of each YOLO version. For example, YOLOv9 demonstrates substantial
accuracy but struggles with small-object detection and efficiency, whereas
YOLOv10 exhibits relatively lower accuracy due to architectural choices that
affect its performance on overlapping objects but excels in speed and
efficiency. Additionally, the YOLO11 family consistently shows superior
performance, maintaining a remarkable balance of accuracy and efficiency.
However, YOLOv12 delivered underwhelming results, with its complex architecture
introducing computational overhead without significant performance gains. These
results provide critical insights for both industry and academia, facilitating
the selection of the most suitable YOLO algorithm for diverse applications and
guiding future enhancements.
|
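The YOLO benchmark record above evaluates detectors with Precision, Recall, and mAP. As a minimal illustration of how precision and recall at a fixed IoU threshold can be computed by greedy matching (box format and threshold are assumptions; this is not the benchmark's evaluation code):

```python
# Illustrative sketch: precision/recall at a single IoU threshold via greedy matching.
import numpy as np

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(preds, gts, iou_thr=0.5):
    # preds: list of (box, confidence); gts: list of ground-truth boxes
    preds = sorted(preds, key=lambda p: -p[1])          # highest confidence first
    matched, tp, fp = set(), 0, 0
    for box, _ in preds:
        ious = [iou(box, g) for g in gts]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and best not in matched:
            matched.add(best); tp += 1
        else:
            fp += 1                                      # unmatched or duplicate detection
    fn = len(gts) - len(matched)
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)

p, r = precision_recall([([0, 0, 10, 10], 0.9), ([50, 50, 60, 60], 0.4)],
                        [[1, 1, 10, 10]])
print(p, r)  # 0.5 1.0
```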
2411.02210 | Deepayan Das | Deepayan Das, Davide Talon, Massimiliano Mancini, Yiming Wang, Elisa
Ricci | One VLM to Keep it Learning: Generation and Balancing for Data-free
Continual Visual Question Answering | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Models (VLMs) have shown significant promise in Visual
Question Answering (VQA) tasks by leveraging web-scale multimodal datasets.
However, these models often struggle with continual learning due to
catastrophic forgetting when adapting to new tasks. As an effective remedy to
mitigate catastrophic forgetting, the rehearsal strategy reuses data from past
tasks when learning a new task. However, such a strategy requires storing
past data, which might not be feasible due to hardware constraints or privacy
concerns. In this work, we propose the first data-free method that leverages
the language generation capability of a VLM, instead of relying on external
models, to produce pseudo-rehearsal data for addressing continual VQA. Our
proposal, named GaB, generates pseudo-rehearsal data by posing previous task
questions on new task data. Yet, despite being effective, the distribution of
generated questions skews towards the most frequently posed questions due to
the limited and task-specific training data. To mitigate this issue, we
introduce a pseudo-rehearsal balancing module that aligns the generated data
towards the ground-truth data distribution using either the question
meta-statistics or an unsupervised clustering method. We evaluate our proposed
method on two recent benchmarks, i.e., VQACL-VQAv2 and CLOVE-function benchmarks.
GaB outperforms all the data-free baselines with substantial improvement in
maintaining VQA performance across evolving tasks, while being on-par with
methods with access to the past data.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 16:04:59 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 09:50:15 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Das",
"Deepayan",
""
],
[
"Talon",
"Davide",
""
],
[
"Mancini",
"Massimiliano",
""
],
[
"Wang",
"Yiming",
""
],
[
"Ricci",
"Elisa",
""
]
] | TITLE: One VLM to Keep it Learning: Generation and Balancing for Data-free
Continual Visual Question Answering
ABSTRACT: Vision-Language Models (VLMs) have shown significant promise in Visual
Question Answering (VQA) tasks by leveraging web-scale multimodal datasets.
However, these models often struggle with continual learning due to
catastrophic forgetting when adapting to new tasks. As an effective remedy to
mitigate catastrophic forgetting, rehearsal strategy uses the data of past
tasks upon learning new task. However, such strategy incurs the need of storing
past data, which might not be feasible due to hardware constraints or privacy
concerns. In this work, we propose the first data-free method that leverages
the language generation capability of a VLM, instead of relying on external
models, to produce pseudo-rehearsal data for addressing continual VQA. Our
proposal, named GaB, generates pseudo-rehearsal data by posing previous task
questions on new task data. Yet, despite being effective, the distribution of
generated questions skews towards the most frequently posed questions due to
the limited and task-specific training data. To mitigate this issue, we
introduce a pseudo-rehearsal balancing module that aligns the generated data
towards the ground-truth data distribution using either the question
meta-statistics or an unsupervised clustering method. We evaluate our proposed
method on two recent benchmarks, i.e., VQACL-VQAv2 and CLOVE-function benchmarks.
GaB outperforms all the data-free baselines with substantial improvement in
maintaining VQA performance across evolving tasks, while being on-par with
methods with access to the past data.
|
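The GaB record above rebalances generated questions toward the ground-truth distribution using question meta-statistics. A rough sketch of such frequency-matched resampling, with made-up categories and counts rather than the paper's balancing module:

```python
# Illustrative sketch: resample generated questions so that question-type frequencies
# approximate a target (ground-truth) distribution. Categories and counts are made up.
import random
from collections import defaultdict

def rebalance(generated, target_dist, n_samples, seed=0):
    # generated: list of (question_type, question_text); target_dist: {type: probability}
    rng = random.Random(seed)
    by_type = defaultdict(list)
    for qtype, text in generated:
        by_type[qtype].append(text)
    balanced = []
    for qtype, prob in target_dist.items():
        pool = by_type.get(qtype, [])
        if not pool:
            continue  # nothing generated for this type; skip rather than fabricate
        k = round(prob * n_samples)
        balanced += [(qtype, rng.choice(pool)) for _ in range(k)]  # sample with replacement
    return balanced

generated = [("color", "What color is the car?")] * 80 + [("count", "How many dogs?")] * 20
sample = rebalance(generated, {"color": 0.5, "count": 0.5}, n_samples=40)
print(sum(1 for t, _ in sample if t == "count"))  # 20
```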
2411.06601 | Rohit Bokade | Rohit Bokade, Xiaoning Jin | OffLight: An Offline Multi-Agent Reinforcement Learning Framework for
Traffic Signal Control | null | null | null | null | cs.AI cs.LG cs.MA | http://creativecommons.org/licenses/by/4.0/ | Efficient traffic signal control (TSC) is essential for urban mobility, but
traditional systems struggle to handle the complexity of real-world traffic.
Multi-agent Reinforcement Learning (MARL) offers adaptive solutions, but online
MARL requires extensive interactions with the environment, making it costly and
impractical. Offline MARL mitigates these challenges by using historical
traffic data for training but faces significant difficulties with heterogeneous
behavior policies in real-world datasets, where mixed-quality data complicates
learning. We introduce OffLight, a novel offline MARL framework designed to
handle heterogeneous behavior policies in TSC datasets. To improve learning
efficiency, OffLight incorporates Importance Sampling (IS) to correct for
distributional shifts and Return-Based Prioritized Sampling (RBPS) to focus on
high-quality experiences. OffLight utilizes a Gaussian Mixture Variational
Graph Autoencoder (GMM-VGAE) to capture the diverse distribution of behavior
policies from local observations. Extensive experiments across real-world urban
traffic scenarios show that OffLight outperforms existing offline RL methods,
achieving up to a 7.8% reduction in average travel time and 11.2% decrease in
queue length. Ablation studies confirm the effectiveness of OffLight's
components in handling heterogeneous data and improving policy performance.
These results highlight OffLight's scalability and potential to improve urban
traffic management without the risks of online learning.
| [
{
"version": "v1",
"created": "Sun, 10 Nov 2024 21:26:17 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Nov 2024 15:17:30 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 01:22:42 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Bokade",
"Rohit",
""
],
[
"Jin",
"Xiaoning",
""
]
] | TITLE: OffLight: An Offline Multi-Agent Reinforcement Learning Framework for
Traffic Signal Control
ABSTRACT: Efficient traffic signal control (TSC) is essential for urban mobility, but
traditional systems struggle to handle the complexity of real-world traffic.
Multi-agent Reinforcement Learning (MARL) offers adaptive solutions, but online
MARL requires extensive interactions with the environment, making it costly and
impractical. Offline MARL mitigates these challenges by using historical
traffic data for training but faces significant difficulties with heterogeneous
behavior policies in real-world datasets, where mixed-quality data complicates
learning. We introduce OffLight, a novel offline MARL framework designed to
handle heterogeneous behavior policies in TSC datasets. To improve learning
efficiency, OffLight incorporates Importance Sampling (IS) to correct for
distributional shifts and Return-Based Prioritized Sampling (RBPS) to focus on
high-quality experiences. OffLight utilizes a Gaussian Mixture Variational
Graph Autoencoder (GMM-VGAE) to capture the diverse distribution of behavior
policies from local observations. Extensive experiments across real-world urban
traffic scenarios show that OffLight outperforms existing offline RL methods,
achieving up to a 7.8% reduction in average travel time and 11.2% decrease in
queue length. Ablation studies confirm the effectiveness of OffLight's
components in handling heterogeneous data and improving policy performance.
These results highlight OffLight's scalability and potential to improve urban
traffic management without the risks of online learning.
|
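The OffLight record above relies on importance sampling to correct for distributional shift between the behavior and target policies. A textbook-style sketch of clipped, self-normalized importance weights (not OffLight's implementation; the probabilities are made up):

```python
# Illustrative sketch: clipped per-step importance-sampling weights for offline RL,
# reweighting transitions collected by a behavior policy toward the target policy.
import numpy as np

def importance_weights(target_probs, behavior_probs, clip=10.0):
    # target_probs[i] = pi_target(a_i | s_i); behavior_probs[i] = pi_behavior(a_i | s_i)
    w = target_probs / np.clip(behavior_probs, 1e-8, None)
    w = np.clip(w, 0.0, clip)          # clip to control variance
    return w / w.mean()                # self-normalize so the average weight is 1

target = np.array([0.6, 0.2, 0.9, 0.1])
behavior = np.array([0.3, 0.4, 0.9, 0.5])
print(importance_weights(target, behavior))
```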
2411.07521 | Sina Bagheri Nezhad | Sina Bagheri Nezhad, Sayan Bandyapadhyay, Ameeta Agrawal | Fair Summarization: Bridging Quality and Diversity in Extractive
Summaries | Accepted at AFLME@NeurIPS 2024 (non-archival) & C3NLP@NAACL 2025
(publication) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Fairness in multi-document summarization of user-generated content remains a
critical challenge in natural language processing (NLP). Existing summarization
methods often fail to ensure equitable representation across different social
groups, leading to biased outputs. In this paper, we introduce two novel
methods for fair extractive summarization: FairExtract, a clustering-based
approach, and FairGPT, which leverages GPT-3.5-turbo with fairness constraints.
We evaluate these methods using the Divsumm summarization dataset of White-aligned,
Hispanic, and African-American dialect tweets and compare them against relevant
baselines. The results obtained using a comprehensive set of summarization
quality metrics such as SUPERT, BLANC, SummaQA, BARTScore, and UniEval, as well
as a fairness metric F, demonstrate that FairExtract and FairGPT achieve
superior fairness while maintaining competitive summarization quality.
Additionally, we introduce composite metrics (e.g., SUPERT+F, BLANC+F) that
integrate quality and fairness into a single evaluation framework, offering a
more nuanced understanding of the trade-offs between these objectives. Our code
is available online.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 03:37:53 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Nov 2024 04:03:54 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Feb 2025 23:34:44 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Mar 2025 16:55:48 GMT"
},
{
"version": "v5",
"created": "Tue, 18 Mar 2025 04:53:09 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Nezhad",
"Sina Bagheri",
""
],
[
"Bandyapadhyay",
"Sayan",
""
],
[
"Agrawal",
"Ameeta",
""
]
] | TITLE: Fair Summarization: Bridging Quality and Diversity in Extractive
Summaries
ABSTRACT: Fairness in multi-document summarization of user-generated content remains a
critical challenge in natural language processing (NLP). Existing summarization
methods often fail to ensure equitable representation across different social
groups, leading to biased outputs. In this paper, we introduce two novel
methods for fair extractive summarization: FairExtract, a clustering-based
approach, and FairGPT, which leverages GPT-3.5-turbo with fairness constraints.
We evaluate these methods using the Divsumm summarization dataset of White-aligned,
Hispanic, and African-American dialect tweets and compare them against relevant
baselines. The results obtained using a comprehensive set of summarization
quality metrics such as SUPERT, BLANC, SummaQA, BARTScore, and UniEval, as well
as a fairness metric F, demonstrate that FairExtract and FairGPT achieve
superior fairness while maintaining competitive summarization quality.
Additionally, we introduce composite metrics (e.g., SUPERT+F, BLANC+F) that
integrate quality and fairness into a single evaluation framework, offering a
more nuanced understanding of the trade-offs between these objectives. Our code
is available online.
|
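The record above introduces composite metrics such as SUPERT+F that merge quality and fairness. One straightforward way to combine a quality score with a fairness score, which may differ from the paper's exact definition:

```python
# Illustrative sketch: combine a summarization-quality score with a fairness score F
# into a single composite value. The normalization and equal weighting below are
# assumptions, not necessarily the paper's exact definition of SUPERT+F, BLANC+F, etc.
def composite(quality, fairness, q_min=0.0, q_max=1.0):
    q_norm = (quality - q_min) / (q_max - q_min + 1e-9)  # rescale quality to [0, 1]
    return 0.5 * q_norm + 0.5 * fairness                  # equal weight on quality and fairness

print(composite(quality=0.62, fairness=0.90))  # approximately 0.76
```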
2411.08553 | Abhishek Divekar | Suhas S Kowshik, Abhishek Divekar, Vijit Malik | CorrSynth -- A Correlated Sampling Method for Diverse Dataset Generation
from LLMs | Published as a main conference paper at EMNLP 2024; First two authors
contributed equally | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models (LLMs) have demonstrated remarkable performance in
diverse tasks using zero-shot and few-shot prompting. Even though their
capabilities of data synthesis have been studied well in recent years, the
generated data suffers from a lack of diversity, less adherence to the prompt,
and potential biases that creep into the data from the generator model. In this
work, we tackle the challenge of generating datasets with high diversity, upon
which a student model is trained for downstream tasks. Taking the route of
decoding-time guidance-based approaches, we propose CorrSynth, which generates
data that is more diverse and faithful to the input prompt using a correlated
sampling strategy. Further, our method overcomes the complexity drawbacks of
some other guidance-based techniques like classifier-based guidance. With
extensive experiments, we show the effectiveness of our approach and
substantiate our claims. In particular, we perform intrinsic evaluation to show
the improvements in diversity. Our experiments show that CorrSynth improves
both student metrics and intrinsic metrics upon competitive baselines across
four datasets, showing the innate advantage of our method.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 12:09:23 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kowshik",
"Suhas S",
""
],
[
"Divekar",
"Abhishek",
""
],
[
"Malik",
"Vijit",
""
]
] | TITLE: CorrSynth -- A Correlated Sampling Method for Diverse Dataset Generation
from LLMs
ABSTRACT: Large language models (LLMs) have demonstrated remarkable performance in
diverse tasks using zero-shot and few-shot prompting. Even though their
capabilities of data synthesis have been studied well in recent years, the
generated data suffers from a lack of diversity, less adherence to the prompt,
and potential biases that creep into the data from the generator model. In this
work, we tackle the challenge of generating datasets with high diversity, upon
which a student model is trained for downstream tasks. Taking the route of
decoding-time guidance-based approaches, we propose CorrSynth, which generates
data that is more diverse and faithful to the input prompt using a correlated
sampling strategy. Further, our method overcomes the complexity drawbacks of
some other guidance-based techniques like classifier-based guidance. With
extensive experiments, we show the effectiveness of our approach and
substantiate our claims. In particular, we perform intrinsic evaluation to show
the improvements in diversity. Our experiments show that CorrSynth improves
both student metrics and intrinsic metrics upon competitive baselines across
four datasets, showing the innate advantage of our method.
|
2411.08726 | Rui Liu | Rui Liu, Jiayou Liang, Haolong Chen and Yujia Hu | Analyst Reports and Stock Performance: Evidence from the Chinese Market | null | null | null | null | cs.CL q-fin.CP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This article applies natural language processing (NLP) to extract and
quantify textual information to predict stock performance. Using an extensive
dataset of Chinese analyst reports and employing a customized BERT deep
learning model for Chinese text, this study categorizes the sentiment of the
reports as positive, neutral, or negative. The findings underscore the
predictive capacity of this sentiment indicator for stock volatility, excess
returns, and trading volume. Specifically, analyst reports with strong positive
sentiment increase excess returns and intraday volatility, whereas
reports with strong negative sentiment also increase volatility and trading
volume but decrease future excess returns. The magnitude of this effect is
greater for positive sentiment reports than for negative sentiment reports.
This article contributes to the empirical literature on sentiment analysis and
the response of the stock market to news in the Chinese stock market.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 16:08:40 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 21:49:49 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Liu",
"Rui",
""
],
[
"Liang",
"Jiayou",
""
],
[
"Chen",
"Haolong",
""
],
[
"Hu",
"Yujia",
""
]
] | TITLE: Analyst Reports and Stock Performance: Evidence from the Chinese Market
ABSTRACT: This article applies natural language processing (NLP) to extract and
quantify textual information to predict stock performance. Using an extensive
dataset of Chinese analyst reports and employing a customized BERT deep
learning model for Chinese text, this study categorizes the sentiment of the
reports as positive, neutral, or negative. The findings underscore the
predictive capacity of this sentiment indicator for stock volatility, excess
returns, and trading volume. Specifically, analyst reports with strong positive
sentiment increase excess returns and intraday volatility, whereas
reports with strong negative sentiment also increase volatility and trading
volume but decrease future excess returns. The magnitude of this effect is
greater for positive sentiment reports than for negative sentiment reports.
This article contributes to the empirical literature on sentiment analysis and
the response of the stock market to news in the Chinese stock market.
|
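The record above classifies Chinese analyst reports into positive, neutral, or negative sentiment with a customized BERT model. A rough sketch of scoring text with a generic Hugging Face text-classification pipeline; the checkpoint path is a placeholder and this is not the study's customized model:

```python
# Illustrative sketch: scoring report sentences with a Chinese BERT-style classifier via
# Hugging Face transformers. The checkpoint path is a placeholder (assumption), so swap in
# a real fine-tuned sentiment model before running.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="path/to/some-chinese-finance-bert",  # placeholder checkpoint, not the study's model
)

sentences = ["公司业绩超出预期,维持买入评级。"]  # "Results beat expectations; maintain buy rating."
for result in classifier(sentences):
    print(result["label"], round(result["score"], 3))
```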
2411.10077 | Jiwoong Yang | Jiwoong Yang and Haejun Chung and Ikbeom Jang | Hierarchical Mutual Distillation for Multi-View Fusion: Learning from
All Possible View Combinations | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-view learning often faces challenges in effectively leveraging images
captured from different angles and locations. This challenge is particularly
pronounced when addressing inconsistencies and uncertainties between views. In
this paper, we propose a novel Multi-View Uncertainty-Weighted Mutual
Distillation (MV-UWMD) method. Our method enhances prediction consistency by
performing hierarchical mutual distillation across all possible view
combinations, including single-view, partial multi-view, and full multi-view
predictions. This introduces an uncertainty-based weighting mechanism through
mutual distillation, allowing effective exploitation of unique information from
each view while mitigating the impact of uncertain predictions. We extend a
CNN-Transformer hybrid architecture to facilitate robust feature learning and
integration across multiple view combinations. We conducted extensive
experiments using a large, unstructured dataset captured from diverse,
non-fixed viewpoints. The results demonstrate that MV-UWMD improves prediction
accuracy and consistency compared to existing multi-view learning approaches.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 09:45:32 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 10:17:16 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yang",
"Jiwoong",
""
],
[
"Chung",
"Haejun",
""
],
[
"Jang",
"Ikbeom",
""
]
] | TITLE: Hierarchical Mutual Distillation for Multi-View Fusion: Learning from
All Possible View Combinations
ABSTRACT: Multi-view learning often faces challenges in effectively leveraging images
captured from different angles and locations. This challenge is particularly
pronounced when addressing inconsistencies and uncertainties between views. In
this paper, we propose a novel Multi-View Uncertainty-Weighted Mutual
Distillation (MV-UWMD) method. Our method enhances prediction consistency by
performing hierarchical mutual distillation across all possible view
combinations, including single-view, partial multi-view, and full multi-view
predictions. This introduces an uncertainty-based weighting mechanism through
mutual distillation, allowing effective exploitation of unique information from
each view while mitigating the impact of uncertain predictions. We extend a
CNN-Transformer hybrid architecture to facilitate robust feature learning and
integration across multiple view combinations. We conducted extensive
experiments using a large, unstructured dataset captured from diverse,
non-fixed viewpoints. The results demonstrate that MV-UWMD improves prediction
accuracy and consistency compared to existing multi-view learning approaches.
|
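The record above weights per-view predictions by their uncertainty during mutual distillation. A generic sketch of inverse-entropy weighting of per-view class probabilities (not the paper's exact mechanism):

```python
# Illustrative sketch: fuse per-view class probabilities with weights that shrink as a
# view's predictive entropy (uncertainty) grows. A generic scheme, not MV-UWMD itself.
import numpy as np

def fuse(per_view_probs, eps=1e-9):
    # per_view_probs: (n_views, n_classes), each row a softmax distribution
    entropy = -np.sum(per_view_probs * np.log(per_view_probs + eps), axis=1)
    weights = 1.0 / (entropy + eps)
    weights /= weights.sum()
    return weights @ per_view_probs          # weighted average distribution

views = np.array([[0.8, 0.1, 0.1],    # confident view
                  [0.4, 0.3, 0.3]])   # uncertain view
print(fuse(views))  # pulled toward the confident view's prediction
```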
2411.16064 | Peihua Deng | Peihua Deng, Jiehua Zhang, Xichun Sheng, Chenggang Yan, Yaoqi Sun,
Ying Fu, Liang Li | Multi-Granularity Class Prototype Topology Distillation for
Class-Incremental Source-Free Unsupervised Domain Adaptation | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the Class-Incremental Source-Free Unsupervised Domain
Adaptation (CI-SFUDA) problem, where the unlabeled target data come
incrementally without access to labeled source instances. This problem poses
two challenges, the interference of similar source-class knowledge in
target-class representation learning and the shocks of new target knowledge to
old ones. To address them, we propose the Multi-Granularity Class Prototype
Topology Distillation (GROTO) algorithm, which effectively transfers the source
knowledge to the class-incremental target domain. Concretely, we design the
multi-granularity class prototype self-organization module and the prototype
topology distillation module. First, we mine the positive classes by modeling
accumulation distributions. Next, we introduce multi-granularity class
prototypes to generate reliable pseudo-labels, and exploit them to promote the
positive-class target feature self-organization. Second, the positive-class
prototypes are leveraged to construct the topological structures of source and
target feature spaces. Then, we perform the topology distillation to
continually mitigate the shocks of new target knowledge to old ones. Extensive
experiments demonstrate that our proposed method achieves state-of-the-art
performance on three public datasets.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 03:28:09 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 12:35:16 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 08:34:36 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Deng",
"Peihua",
""
],
[
"Zhang",
"Jiehua",
""
],
[
"Sheng",
"Xichun",
""
],
[
"Yan",
"Chenggang",
""
],
[
"Sun",
"Yaoqi",
""
],
[
"Fu",
"Ying",
""
],
[
"Li",
"Liang",
""
]
] | TITLE: Multi-Granularity Class Prototype Topology Distillation for
Class-Incremental Source-Free Unsupervised Domain Adaptation
ABSTRACT: This paper explores the Class-Incremental Source-Free Unsupervised Domain
Adaptation (CI-SFUDA) problem, where the unlabeled target data come
incrementally without access to labeled source instances. This problem poses
two challenges, the interference of similar source-class knowledge in
target-class representation learning and the shocks of new target knowledge to
old ones. To address them, we propose the Multi-Granularity Class Prototype
Topology Distillation (GROTO) algorithm, which effectively transfers the source
knowledge to the class-incremental target domain. Concretely, we design the
multi-granularity class prototype self-organization module and the prototype
topology distillation module. First, we mine the positive classes by modeling
accumulation distributions. Next, we introduce multi-granularity class
prototypes to generate reliable pseudo-labels, and exploit them to promote the
positive-class target feature self-organization. Second, the positive-class
prototypes are leveraged to construct the topological structures of source and
target feature spaces. Then, we perform the topology distillation to
continually mitigate the shocks of new target knowledge to old ones. Extensive
experiments demonstrate that our proposed method achieves state-of-the-art
performance on three public datasets.
|
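The record above generates reliable pseudo-labels from class prototypes. A generic sketch of nearest-prototype pseudo-labeling with a confidence threshold (not the multi-granularity module itself; the threshold is an assumption):

```python
# Illustrative sketch: assign pseudo-labels to unlabeled target features by cosine
# similarity to class prototypes, keeping only confident assignments.
import numpy as np

def prototype_pseudo_labels(features, prototypes, threshold=0.8):
    # features: (n, d) target features; prototypes: (c, d) one prototype per class
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-9)
    sims = f @ p.T                      # (n, c) cosine similarities
    labels = sims.argmax(axis=1)
    confident = sims.max(axis=1) >= threshold
    return labels, confident            # keep only rows where `confident` is True

feats = np.array([[1.0, 0.1], [0.2, 1.0], [0.7, 0.7]])
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
labels, keep = prototype_pseudo_labels(feats, protos)
print(labels, keep)  # [0 1 0] [ True  True False]
```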
2411.17595 | Lun Yu | Shuyi Jin, Lu Chen, Hongru Ding, Meijie Wang, Lun Yu | Can artificial intelligence predict clinical trial outcomes? | null | null | null | null | cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study evaluates the performance of large language models (LLMs) and the
HINT model in predicting clinical trial outcomes, focusing on metrics including
Balanced Accuracy, Matthews Correlation Coefficient (MCC), Recall, and
Specificity. Results show that GPT-4o achieves superior overall performance
among LLMs but, like its counterparts (GPT-3.5, GPT-4mini, Llama3), struggles
with identifying negative outcomes. In contrast, HINT excels in negative sample
recognition and demonstrates resilience to external factors (e.g., recruitment
challenges) but underperforms in oncology trials, a major component of the
dataset. LLMs exhibit strengths in early-phase trials and simpler endpoints
like Overall Survival (OS), while HINT shows consistency across trial phases
and excels in complex endpoints (e.g., Objective Response Rate). Trial duration
analysis reveals improved model performance for medium- to long-term trials,
with GPT-4o and HINT displaying stability and enhanced specificity,
respectively. We underscore the complementary potential of LLMs (e.g., GPT-4o,
Llama3) and HINT, advocating for hybrid approaches to leverage GPT-4o's
predictive power and HINT's specificity in clinical trial outcome forecasting.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 17:05:27 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 00:45:44 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jin",
"Shuyi",
""
],
[
"Chen",
"Lu",
""
],
[
"Ding",
"Hongru",
""
],
[
"Wang",
"Meijie",
""
],
[
"Yu",
"Lun",
""
]
] | TITLE: Can artificial intelligence predict clinical trial outcomes?
ABSTRACT: This study evaluates the performance of large language models (LLMs) and the
HINT model in predicting clinical trial outcomes, focusing on metrics including
Balanced Accuracy, Matthews Correlation Coefficient (MCC), Recall, and
Specificity. Results show that GPT-4o achieves superior overall performance
among LLMs but, like its counterparts (GPT-3.5, GPT-4mini, Llama3), struggles
with identifying negative outcomes. In contrast, HINT excels in negative sample
recognition and demonstrates resilience to external factors (e.g., recruitment
challenges) but underperforms in oncology trials, a major component of the
dataset. LLMs exhibit strengths in early-phase trials and simpler endpoints
like Overall Survival (OS), while HINT shows consistency across trial phases
and excels in complex endpoints (e.g., Objective Response Rate). Trial duration
analysis reveals improved model performance for medium- to long-term trials,
with GPT-4o and HINT displaying stability and enhanced specificity,
respectively. We underscore the complementary potential of LLMs (e.g., GPT-4o,
Llama3) and HINT, advocating for hybrid approaches to leverage GPT-4o's
predictive power and HINT's specificity in clinical trial outcome forecasting.
|
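The record above reports Balanced Accuracy, MCC, Recall, and Specificity. These can be computed with scikit-learn; the toy labels below are purely illustrative, not the study's data:

```python
# Illustrative sketch: the evaluation metrics named above on toy binary labels.
from sklearn.metrics import balanced_accuracy_score, matthews_corrcoef, recall_score

y_true = [1, 1, 0, 0, 0, 1, 0, 1]   # 1 = trial met its endpoint (toy labels)
y_pred = [1, 0, 0, 0, 1, 1, 0, 1]

print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
print("recall (sensitivity):", recall_score(y_true, y_pred))
print("specificity:", recall_score(y_true, y_pred, pos_label=0))  # recall of the negative class
```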
2411.19149 | Corentin Dumery | Corentin Dumery, Noa Ett\'e, Aoxiang Fan, Ren Li, Jingyi Xu, Hieu Le,
Pascal Fua | Counting Stacked Objects | 13 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual object counting is a fundamental computer vision task underpinning
numerous real-world applications, from cell counting in biomedicine to traffic
and wildlife monitoring. However, existing methods struggle to handle the
challenge of stacked 3D objects in which most objects are hidden by those above
them. To address this important yet underexplored problem, we propose a novel
3D counting approach that decomposes the task into two complementary
subproblems - estimating the 3D geometry of the object stack and the occupancy
ratio from multi-view images. By combining geometric reconstruction and deep
learning-based depth analysis, our method can accurately count identical
objects within containers, even when they are irregularly stacked. We validate
our 3D Counting pipeline on diverse real-world and large-scale synthetic
datasets, which we will release publicly to facilitate further research.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 13:51:16 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 10:46:27 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Dumery",
"Corentin",
""
],
[
"Etté",
"Noa",
""
],
[
"Fan",
"Aoxiang",
""
],
[
"Li",
"Ren",
""
],
[
"Xu",
"Jingyi",
""
],
[
"Le",
"Hieu",
""
],
[
"Fua",
"Pascal",
""
]
] | TITLE: Counting Stacked Objects
ABSTRACT: Visual object counting is a fundamental computer vision task underpinning
numerous real-world applications, from cell counting in biomedicine to traffic
and wildlife monitoring. However, existing methods struggle to handle the
challenge of stacked 3D objects in which most objects are hidden by those above
them. To address this important yet underexplored problem, we propose a novel
3D counting approach that decomposes the task into two complementary
subproblems - estimating the 3D geometry of the object stack and the occupancy
ratio from multi-view images. By combining geometric reconstruction and deep
learning-based depth analysis, our method can accurately count identical
objects within containers, even when they are irregularly stacked. We validate
our 3D Counting pipeline on diverse real-world and large-scale synthetic
datasets, which we will release publicly to facilitate further research.
|
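The record above decomposes counting into estimating the stack's 3D geometry and an occupancy ratio; the final count then follows from simple arithmetic. All numbers below are made up for illustration:

```python
# Illustrative sketch of the decomposition described above: once the stack's volume and an
# occupancy ratio are estimated, the count is a ratio of volumes.
def estimate_count(stack_volume_cm3, occupancy_ratio, object_volume_cm3):
    return (stack_volume_cm3 * occupancy_ratio) / object_volume_cm3

# e.g. a 2,000 cm^3 container region, 60% actually occupied, objects of 4 cm^3 each
print(round(estimate_count(2000.0, 0.6, 4.0)))  # 300
```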
2412.01250 | Francesco Taioli | Francesco Taioli, Edoardo Zorzi, Gianni Franchi, Alberto Castellini,
Alessandro Farinelli, Marco Cristani, Yiming Wang | Collaborative Instance Object Navigation: Leveraging
Uncertainty-Awareness to Minimize Human-Agent Dialogues | https://intelligolabs.github.io/CoIN/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language-driven instance object navigation assumes that human users initiate
the task by providing a detailed description of the target instance to the
embodied agent. While this description is crucial for distinguishing the target
from visually similar instances in a scene, providing it prior to navigation
can be demanding for humans. To bridge this gap, we introduce Collaborative
Instance object Navigation (CoIN), a new task setting where the agent actively
resolves uncertainties about the target instance during navigation in natural,
template-free, open-ended dialogues with humans. We propose a novel
training-free method, Agent-user Interaction with UncerTainty Awareness
(AIUTA), which operates independently from the navigation policy, and focuses
on the human-agent interaction reasoning with Vision-Language Models (VLMs) and
Large Language Models (LLMs). First, upon object detection, a Self-Questioner
model initiates a self-dialogue within the agent to obtain a complete and
accurate observation description with a novel uncertainty estimation technique.
Then, an Interaction Trigger module determines whether to ask a question to the
human, continue or halt navigation, minimizing user input. For evaluation, we
introduce CoIN-Bench, with a curated dataset designed for challenging
multi-instance scenarios. CoIN-Bench supports both online evaluation with
humans and reproducible experiments with simulated user-agent interactions. On
CoIN-Bench, we show that AIUTA serves as a competitive baseline, while existing
language-driven instance navigation methods struggle in complex multi-instance
scenes. Code and benchmark will be available upon acceptance at
https://intelligolabs.github.io/CoIN/
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 08:16:38 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 17:46:20 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 16:09:20 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Taioli",
"Francesco",
""
],
[
"Zorzi",
"Edoardo",
""
],
[
"Franchi",
"Gianni",
""
],
[
"Castellini",
"Alberto",
""
],
[
"Farinelli",
"Alessandro",
""
],
[
"Cristani",
"Marco",
""
],
[
"Wang",
"Yiming",
""
]
] | TITLE: Collaborative Instance Object Navigation: Leveraging
Uncertainty-Awareness to Minimize Human-Agent Dialogues
ABSTRACT: Language-driven instance object navigation assumes that human users initiate
the task by providing a detailed description of the target instance to the
embodied agent. While this description is crucial for distinguishing the target
from visually similar instances in a scene, providing it prior to navigation
can be demanding for humans. To bridge this gap, we introduce Collaborative
Instance object Navigation (CoIN), a new task setting where the agent actively
resolves uncertainties about the target instance during navigation in natural,
template-free, open-ended dialogues with humans. We propose a novel
training-free method, Agent-user Interaction with UncerTainty Awareness
(AIUTA), which operates independently from the navigation policy, and focuses
on the human-agent interaction reasoning with Vision-Language Models (VLMs) and
Large Language Models (LLMs). First, upon object detection, a Self-Questioner
model initiates a self-dialogue within the agent to obtain a complete and
accurate observation description with a novel uncertainty estimation technique.
Then, an Interaction Trigger module determines whether to ask a question to the
human, continue or halt navigation, minimizing user input. For evaluation, we
introduce CoIN-Bench, with a curated dataset designed for challenging
multi-instance scenarios. CoIN-Bench supports both online evaluation with
humans and reproducible experiments with simulated user-agent interactions. On
CoIN-Bench, we show that AIUTA serves as a competitive baseline, while existing
language-driven instance navigation methods struggle in complex multi-instance
scenes. Code and benchmark will be available upon acceptance at
https://intelligolabs.github.io/CoIN/
|
2412.03192 | Luca Ciampi | Luca Ciampi, Gabriele Lagani, Giuseppe Amato, Fabrizio Falchi | Biologically-inspired Semi-supervised Semantic Segmentation for
Biomedical Imaging | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose a novel bio-inspired semi-supervised learning approach for
training downsampling-upsampling semantic segmentation architectures. The first
stage does not use backpropagation. Rather, it exploits the Hebbian principle
``fire together, wire together'' as a local learning rule for updating the
weights of both convolutional and transpose-convolutional layers, allowing
unsupervised discovery of data features. In the second stage, the model is
fine-tuned with standard backpropagation on a small subset of labeled data. We
evaluate our methodology through experiments conducted on several widely used
biomedical datasets, deeming that this domain is paramount in computer vision
and is notably impacted by data scarcity. Results show that our proposed method
outperforms SOTA approaches across different levels of label availability.
Furthermore, we show that using our unsupervised stage to initialize the SOTA
approaches leads to performance improvements. The code to replicate our
experiments can be found at
https://github.com/ciampluca/hebbian-bootstraping-semi-supervised-medical-imaging
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 10:25:53 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:28:57 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ciampi",
"Luca",
""
],
[
"Lagani",
"Gabriele",
""
],
[
"Amato",
"Giuseppe",
""
],
[
"Falchi",
"Fabrizio",
""
]
] | TITLE: Biologically-inspired Semi-supervised Semantic Segmentation for
Biomedical Imaging
ABSTRACT: We propose a novel bio-inspired semi-supervised learning approach for
training downsampling-upsampling semantic segmentation architectures. The first
stage does not use backpropagation. Rather, it exploits the Hebbian principle
``fire together, wire together'' as a local learning rule for updating the
weights of both convolutional and transpose-convolutional layers, allowing
unsupervised discovery of data features. In the second stage, the model is
fine-tuned with standard backpropagation on a small subset of labeled data. We
evaluate our methodology through experiments conducted on several widely used
biomedical datasets, deeming that this domain is paramount in computer vision
and is notably impacted by data scarcity. Results show that our proposed method
outperforms SOTA approaches across different levels of label availability.
Furthermore, we show that using our unsupervised stage to initialize the SOTA
approaches leads to performance improvements. The code to replicate our
experiments can be found at
https://github.com/ciampluca/hebbian-bootstraping-semi-supervised-medical-imaging
|
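The record above trains its first stage with the Hebbian principle rather than backpropagation. To make the local rule concrete, here is a minimal Oja-style update for a single linear unit; the paper applies such rules to convolutional and transpose-convolutional layers, so this is not its implementation:

```python
# Illustrative sketch: Hebbian learning with Oja's normalization for one linear unit,
# shown only to make "fire together, wire together" concrete.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)          # weights of one unit with 3 inputs

def oja_step(w, x, lr=0.01):
    y = float(w @ x)                       # "fire together": post-synaptic activation
    return w + lr * y * (x - y * w)        # "wire together", with Oja's decay term

for _ in range(1000):
    x = rng.normal(size=3)
    x[0] *= 3.0                            # first input carries the most variance
    w = oja_step(w, x)

print(np.round(w, 2))  # drifts toward the dominant principal direction, roughly [+/-1, 0, 0]
```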
2412.03261 | Edoardo Daniele Cannas | Edoardo Daniele Cannas, Sara Mandelli, Nata\v{s}a Popovi\'c, Ayman
Alkhateeb, Alessandro Gnutti, Paolo Bestagini, Stefano Tubaro | Is JPEG AI going to change image forensics? | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the counter-forensic effects of the new JPEG AI
standard based on neural image compression, focusing on two critical areas:
deepfake image detection and image splicing localization. Neural image
compression leverages advanced neural network algorithms to achieve higher
compression rates while maintaining image quality. However, it introduces
artifacts that closely resemble those generated by image synthesis techniques
and image splicing pipelines, complicating the work of researchers when
discriminating pristine from manipulated content. We comprehensively analyze
JPEG AI's counter-forensic effects through extensive experiments on several
state-of-the-art detectors and datasets. Our results demonstrate a reduction in
the performance of leading forensic detectors when analyzing content processed
through JPEG AI. By exposing the vulnerabilities of the available forensic
tools, we aim to raise the urgent need for multimedia forensics researchers to
include JPEG AI images in their experimental setups and develop robust forensic
techniques to distinguish between neural compression artifacts and actual
manipulations.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 12:07:20 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:34:40 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Cannas",
"Edoardo Daniele",
""
],
[
"Mandelli",
"Sara",
""
],
[
"Popović",
"Nataša",
""
],
[
"Alkhateeb",
"Ayman",
""
],
[
"Gnutti",
"Alessandro",
""
],
[
"Bestagini",
"Paolo",
""
],
[
"Tubaro",
"Stefano",
""
]
] | TITLE: Is JPEG AI going to change image forensics?
ABSTRACT: In this paper, we investigate the counter-forensic effects of the new JPEG AI
standard based on neural image compression, focusing on two critical areas:
deepfake image detection and image splicing localization. Neural image
compression leverages advanced neural network algorithms to achieve higher
compression rates while maintaining image quality. However, it introduces
artifacts that closely resemble those generated by image synthesis techniques
and image splicing pipelines, complicating the work of researchers when
discriminating pristine from manipulated content. We comprehensively analyze
JPEG AI's counter-forensic effects through extensive experiments on several
state-of-the-art detectors and datasets. Our results demonstrate a reduction in
the performance of leading forensic detectors when analyzing content processed
through JPEG AI. By exposing the vulnerabilities of the available forensic
tools, we aim to raise the urgent need for multimedia forensics researchers to
include JPEG AI images in their experimental setups and develop robust forensic
techniques to distinguish between neural compression artifacts and actual
manipulations.
|
2412.04908 | Zhijin Meng | Mohammed Althubyani, Zhijin Meng, Shengyuan Xie, Cha Seung, Imran
Razzak, Eduardo B. Sandoval, Baki Kocaballi, Francisco Cruz | MERCI: Multimodal Emotional and peRsonal Conversational Interactions
Dataset | 9 pages, 5 Figures, Rejected from International Conference of Human
Robot Interaction 2025, Melbourne, Australia | null | null | null | cs.HC cs.ET cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The integration of conversational agents into our daily lives has become
increasingly common, yet many of these agents cannot engage in deep
interactions with humans. Despite this, there is a noticeable shortage of
datasets that capture multimodal information from human-robot interaction
dialogues. To address this gap, we have recorded a novel multimodal dataset
(MERCI) that encompasses rich embodied interaction data. The process involved
asking participants to complete a questionnaire and gathering their profiles on
ten topics, such as hobbies and favorite music. Subsequently, we initiated
conversations between the robot and the participants, leveraging GPT-4 to
generate contextually appropriate responses based on the participant's profile
and emotional state, as determined by facial expression recognition and
sentiment analysis. Automatic and user evaluations were conducted to assess the
overall quality of the collected data. The results of both evaluations
indicated a high level of naturalness, engagement, fluency, consistency, and
relevance in the conversation, as well as the robot's ability to provide
empathetic responses. It is worth noting that the dataset is derived from
genuine interactions with the robot, involving participants who provided
personal information and conveyed actual emotions.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2024 10:04:26 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 05:10:59 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Althubyani",
"Mohammed",
""
],
[
"Meng",
"Zhijin",
""
],
[
"Xie",
"Shengyuan",
""
],
[
"Seung",
"Cha",
""
],
[
"Razzak",
"Imran",
""
],
[
"Sandoval",
"Eduardo B.",
""
],
[
"Kocaballi",
"Baki",
""
],
[
"Cruz",
"Francisco",
""
]
] | TITLE: MERCI: Multimodal Emotional and peRsonal Conversational Interactions
Dataset
ABSTRACT: The integration of conversational agents into our daily lives has become
increasingly common, yet many of these agents cannot engage in deep
interactions with humans. Despite this, there is a noticeable shortage of
datasets that capture multimodal information from human-robot interaction
dialogues. To address this gap, we have recorded a novel multimodal dataset
(MERCI) that encompasses rich embodied interaction data. The process involved
asking participants to complete a questionnaire and gathering their profiles on
ten topics, such as hobbies and favorite music. Subsequently, we initiated
conversations between the robot and the participants, leveraging GPT-4 to
generate contextually appropriate responses based on the participant's profile
and emotional state, as determined by facial expression recognition and
sentiment analysis. Automatic and user evaluations were conducted to assess the
overall quality of the collected data. The results of both evaluations
indicated a high level of naturalness, engagement, fluency, consistency, and
relevance in the conversation, as well as the robot's ability to provide
empathetic responses. It is worth noting that the dataset is derived from
genuine interactions with the robot, involving participants who provided
personal information and conveyed actual emotions.
|
2412.06352 | Zengxi Zhang | Zeru Shi, Zengxi Zhang, Kemeng Cui, Ruizhe An, Jinyuan Liu, Zhiying
Jiang | SeFENet: Robust Deep Homography Estimation via Semantic-Driven Feature
Enhancement | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Images captured in harsh environments often exhibit blurred details, reduced
contrast, and color distortion, which hinder feature detection and matching,
thereby affecting the accuracy and robustness of homography estimation. While
visual enhancement can improve contrast and clarity, it may introduce
visual-tolerant artifacts that obscure the structural integrity of images.
Considering the resilience of semantic information against environmental
interference, we propose a semantic-driven feature enhancement network for
robust homography estimation, dubbed SeFENet. Concretely, we first introduce an
innovative hierarchical scale-aware module to expand the receptive field by
aggregating multi-scale information, thereby effectively extracting image
features under diverse harsh conditions. Subsequently, we propose a
semantic-guided constraint module combined with a high-level perceptual
framework to achieve degradation tolerance with semantic features. A
meta-learning-based training strategy is introduced to mitigate the disparity
between semantic and structural features. By internal-external alternating
optimization, the proposed network achieves implicit semantic-wise feature
enhancement, thereby improving the robustness of homography estimation in
adverse environments by strengthening the local feature comprehension and
context information extraction. Experimental results under both normal and
harsh conditions demonstrate that SeFENet significantly outperforms SOTA
methods, reducing point match error by at least 41% on the large-scale
datasets.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 10:04:14 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 06:34:37 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Shi",
"Zeru",
""
],
[
"Zhang",
"Zengxi",
""
],
[
"Cui",
"Kemeng",
""
],
[
"An",
"Ruizhe",
""
],
[
"Liu",
"Jinyuan",
""
],
[
"Jiang",
"Zhiying",
""
]
] | TITLE: SeFENet: Robust Deep Homography Estimation via Semantic-Driven Feature
Enhancement
ABSTRACT: Images captured in harsh environments often exhibit blurred details, reduced
contrast, and color distortion, which hinder feature detection and matching,
thereby affecting the accuracy and robustness of homography estimation. While
visual enhancement can improve contrast and clarity, it may introduce
visual-tolerant artifacts that obscure the structural integrity of images.
Considering the resilience of semantic information against environmental
interference, we propose a semantic-driven feature enhancement network for
robust homography estimation, dubbed SeFENet. Concretely, we first introduce an
innovative hierarchical scale-aware module to expand the receptive field by
aggregating multi-scale information, thereby effectively extracting image
features under diverse harsh conditions. Subsequently, we propose a
semantic-guided constraint module combined with a high-level perceptual
framework to achieve degradation tolerance with semantic features. A
meta-learning-based training strategy is introduced to mitigate the disparity
between semantic and structural features. By internal-external alternating
optimization, the proposed network achieves implicit semantic-wise feature
enhancement, thereby improving the robustness of homography estimation in
adverse environments by strengthening the local feature comprehension and
context information extraction. Experimental results under both normal and
harsh conditions demonstrate that SeFENet significantly outperforms SOTA
methods, reducing point match error by at least 41% on the large-scale
datasets.
|
2412.07612 | Subin Varghese | Subin Varghese, Joshua Gao, Vedhus Hoskere | ViewDelta: Text-Prompted Change Detection in Unaligned Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting changes between images is fundamental in applications such as
infrastructure assessment, environmental monitoring, and industrial automation.
Existing supervised models demonstrate strong performance but are inherently
limited by the scope of their training data, requiring retraining to recognize
novel changes. To overcome this limitation, we introduce a novel change
detection task utilizing textual prompts alongside two potentially unaligned
images to produce binary segmentations highlighting user-relevant changes. This
text-conditioned framework significantly broadens the scope of change
detection, enabling unparalleled flexibility and straightforward scalability by
incorporating diverse future datasets without restriction to specific change
types. As a first approach to address this challenge, we propose ViewDelta, a
multimodal architecture extending the vision transformer into the domain of
text-conditioned change detection. ViewDelta establishes a robust baseline,
demonstrating flexibility across various scenarios and achieving competitive
results compared to specialized, fine-tuned models trained on aligned images.
Moreover, we create and release the first text-prompt-conditioned change
detection dataset, comprising 501,153 image pairs with corresponding textual
prompts and annotated labels. Extensive experiments confirm the robustness and
versatility of our model across diverse environments, including indoor,
outdoor, street-level, synthetic, and satellite imagery.
https://joshuakgao.github.io/viewdelta/
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 15:51:17 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:47:36 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Varghese",
"Subin",
""
],
[
"Gao",
"Joshua",
""
],
[
"Hoskere",
"Vedhus",
""
]
] | TITLE: ViewDelta: Text-Prompted Change Detection in Unaligned Images
ABSTRACT: Detecting changes between images is fundamental in applications such as
infrastructure assessment, environmental monitoring, and industrial automation.
Existing supervised models demonstrate strong performance but are inherently
limited by the scope of their training data, requiring retraining to recognize
novel changes. To overcome this limitation, we introduce a novel change
detection task utilizing textual prompts alongside two potentially unaligned
images to produce binary segmentations highlighting user-relevant changes. This
text-conditioned framework significantly broadens the scope of change
detection, enabling unparalleled flexibility and straightforward scalability by
incorporating diverse future datasets without restriction to specific change
types. As a first approach to address this challenge, we propose ViewDelta, a
multimodal architecture extending the vision transformer into the domain of
text-conditioned change detection. ViewDelta establishes a robust baseline,
demonstrating flexibility across various scenarios and achieving competitive
results compared to specialized, fine-tuned models trained on aligned images.
Moreover, we create and release the first text-prompt-conditioned change
detection dataset, comprising 501,153 image pairs with corresponding textual
prompts and annotated labels. Extensive experiments confirm the robustness and
versatility of our model across diverse environments, including indoor,
outdoor, street-level, synthetic, and satellite imagery.
https://joshuakgao.github.io/viewdelta/
|
2412.08344 | Yushan Han | Yushan Han, Hui Zhang, Honglei Zhang, Jing Wang, Yidong Li | CoDTS: Enhancing Sparsely Supervised Collaborative Perception with a
Dual Teacher-Student Framework | AAAI 2025 (Oral) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Current collaborative perception methods often rely on fully annotated
datasets, which can be expensive to obtain in practical situations. To reduce
annotation costs, some works adopt sparsely supervised learning techniques and
generate pseudo labels for the missing instances. However, these methods fail
to achieve an optimal confidence threshold that harmonizes the quality and
quantity of pseudo labels. To address this issue, we propose an end-to-end
Collaborative perception Dual Teacher-Student framework (CoDTS), which employs
adaptive complementary learning to produce both high-quality and high-quantity
pseudo labels. Specifically, the Main Foreground Mining (MFM) module generates
high-quality pseudo labels based on the prediction of the static teacher.
Subsequently, the Supplement Foreground Mining (SFM) module ensures a balance
between the quality and quantity of pseudo labels by adaptively identifying
missing instances based on the prediction of the dynamic teacher. Additionally,
the Neighbor Anchor Sampling (NAS) module is incorporated to enhance the
representation of pseudo labels. To promote the adaptive complementary
learning, we implement a staged training strategy that trains the student and
dynamic teacher in a mutually beneficial manner. Extensive experiments
demonstrate that the CoDTS effectively ensures an optimal balance of pseudo
labels in both quality and quantity, establishing a new state-of-the-art in
sparsely supervised collaborative perception. The code is available at
https://github.com/CatOneTwo/CoDTS.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 12:34:37 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Dec 2024 09:52:55 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Jan 2025 12:30:57 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 14:41:58 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Han",
"Yushan",
""
],
[
"Zhang",
"Hui",
""
],
[
"Zhang",
"Honglei",
""
],
[
"Wang",
"Jing",
""
],
[
"Li",
"Yidong",
""
]
] | TITLE: CoDTS: Enhancing Sparsely Supervised Collaborative Perception with a
Dual Teacher-Student Framework
ABSTRACT: Current collaborative perception methods often rely on fully annotated
datasets, which can be expensive to obtain in practical situations. To reduce
annotation costs, some works adopt sparsely supervised learning techniques and
generate pseudo labels for the missing instances. However, these methods fail
to achieve an optimal confidence threshold that harmonizes the quality and
quantity of pseudo labels. To address this issue, we propose an end-to-end
Collaborative perception Dual Teacher-Student framework (CoDTS), which employs
adaptive complementary learning to produce both high-quality and high-quantity
pseudo labels. Specifically, the Main Foreground Mining (MFM) module generates
high-quality pseudo labels based on the prediction of the static teacher.
Subsequently, the Supplement Foreground Mining (SFM) module ensures a balance
between the quality and quantity of pseudo labels by adaptively identifying
missing instances based on the prediction of the dynamic teacher. Additionally,
the Neighbor Anchor Sampling (NAS) module is incorporated to enhance the
representation of pseudo labels. To promote the adaptive complementary
learning, we implement a staged training strategy that trains the student and
dynamic teacher in a mutually beneficial manner. Extensive experiments
demonstrate that CoDTS effectively ensures an optimal balance of pseudo
labels in both quality and quantity, establishing a new state-of-the-art in
sparsely supervised collaborative perception. The code is available at
https://github.com/CatOneTwo/CoDTS.
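To make the adaptive complementary mining above concrete, the sketch below shows a toy version of the idea: high-confidence candidates from a static teacher are kept with a fixed threshold, while a dynamic teacher supplies supplementary labels whose threshold adapts to its own score distribution. The function name, thresholds, and the boolean "already covered" mask are illustrative assumptions, not the CoDTS implementation (which is at the repository linked above).

```python
import numpy as np

def mine_pseudo_labels(static_scores, dynamic_scores, covered_by_static,
                       static_thresh=0.7, supplement_quantile=0.8):
    """Toy version of complementary pseudo-label mining.

    static_scores:     confidence of each candidate under the static teacher
    dynamic_scores:    confidence of the same candidates under the dynamic teacher
    covered_by_static: boolean mask marking candidates already matched to a
                       static-teacher pseudo label (stand-in for an IoU test)
    """
    # Main Foreground Mining: keep only very confident static-teacher candidates.
    main = static_scores >= static_thresh

    # Supplement Foreground Mining: the threshold adapts to the dynamic
    # teacher's own score distribution instead of being fixed a priori.
    adaptive_thresh = np.quantile(dynamic_scores, supplement_quantile)
    supplement = (dynamic_scores >= adaptive_thresh) & ~covered_by_static & ~main

    return main | supplement

# Example with random scores for 10 candidate boxes.
rng = np.random.default_rng(0)
static_scores = rng.uniform(size=10)
dynamic_scores = rng.uniform(size=10)
covered = static_scores >= 0.7          # pretend static picks "cover" themselves
print(mine_pseudo_labels(static_scores, dynamic_scores, covered))
```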
|
2412.09617 | Hung-Jui Huang | Hung-Jui Huang, Michael Kaess, and Wenzhen Yuan | NormalFlow: Fast, Robust, and Accurate Contact-based Object 6DoF Pose
Tracking with Vision-based Tactile Sensors | 8 pages, published in 2024 RA-L, website link:
https://joehjhuang.github.io/normalflow | IEEE Robotics and Automation Letters ( Volume: 10, Issue: 1,
January 2025) | 10.1109/LRA.2024.3505815 | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Tactile sensing is crucial for robots aiming to achieve human-level
dexterity. Among tactile-dependent skills, tactile-based object tracking serves
as the cornerstone for many tasks, including manipulation, in-hand
manipulation, and 3D reconstruction. In this work, we introduce NormalFlow, a
fast, robust, and real-time tactile-based 6DoF tracking algorithm. Leveraging
the precise surface normal estimation of vision-based tactile sensors,
NormalFlow determines object movements by minimizing discrepancies between the
tactile-derived surface normals. Our results show that NormalFlow consistently
outperforms competitive baselines and can track low-texture objects like table
surfaces. For long-horizon tracking, we demonstrate that when rolling the sensor
around a bead for 360 degrees, NormalFlow maintains a rotational tracking error
of 2.5 degrees. Additionally, we present state-of-the-art tactile-based 3D
reconstruction results, showcasing the high accuracy of NormalFlow. We believe
NormalFlow unlocks new possibilities for high-precision perception and
manipulation tasks that involve interacting with objects using hands. The video
demo, code, and dataset are available on our website:
https://joehjhuang.github.io/normalflow.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 18:59:46 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 04:31:12 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Huang",
"Hung-Jui",
""
],
[
"Kaess",
"Michael",
""
],
[
"Yuan",
"Wenzhen",
""
]
] | TITLE: NormalFlow: Fast, Robust, and Accurate Contact-based Object 6DoF Pose
Tracking with Vision-based Tactile Sensors
ABSTRACT: Tactile sensing is crucial for robots aiming to achieve human-level
dexterity. Among tactile-dependent skills, tactile-based object tracking serves
as the cornerstone for many tasks, including manipulation, in-hand
manipulation, and 3D reconstruction. In this work, we introduce NormalFlow, a
fast, robust, and real-time tactile-based 6DoF tracking algorithm. Leveraging
the precise surface normal estimation of vision-based tactile sensors,
NormalFlow determines object movements by minimizing discrepancies between the
tactile-derived surface normals. Our results show that NormalFlow consistently
outperforms competitive baselines and can track low-texture objects like table
surfaces. For long-horizon tracking, we demonstrate that when rolling the sensor
around a bead for 360 degrees, NormalFlow maintains a rotational tracking error
of 2.5 degrees. Additionally, we present state-of-the-art tactile-based 3D
reconstruction results, showcasing the high accuracy of NormalFlow. We believe
NormalFlow unlocks new possibilities for high-precision perception and
manipulation tasks that involve interacting with objects using hands. The video
demo, code, and dataset are available on our website:
https://joehjhuang.github.io/normalflow.
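The core step described above, recovering relative motion by aligning tactile-derived surface normals, can be illustrated with a standard Kabsch/SVD fit between two sets of corresponding normals. This is a simplified stand-in, not the NormalFlow algorithm itself, which additionally handles translation, correspondence, and iterative refinement.

```python
import numpy as np

def rotation_from_normals(normals_a, normals_b):
    """Estimate the rotation R that best maps normals_a onto normals_b.

    normals_a, normals_b: (N, 3) arrays of corresponding unit surface normals.
    Returns a 3x3 rotation matrix minimizing sum ||R a_i - b_i||^2 (Kabsch/SVD).
    """
    h = normals_a.T @ normals_b                 # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # keep a proper rotation (det = +1)
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

# Synthetic check: rotate random normals by a known small rotation and recover it.
rng = np.random.default_rng(1)
a = rng.normal(size=(200, 3))
a /= np.linalg.norm(a, axis=1, keepdims=True)
angle = np.deg2rad(5.0)
true_r = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
b = a @ true_r.T
print(np.allclose(rotation_from_normals(a, b), true_r, atol=1e-6))
```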
|
2412.09901 | Zhe Li | Zhe Li, Yisheng He, Lei Zhong, Weichao Shen, Qi Zuo, Lingteng Qiu,
Zilong Dong, Laurence Tianruo Yang, Weihao Yuan | MulSMo: Multimodal Stylized Motion Generation by Bidirectional Control
Flow | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating motion sequences conforming to a target style while adhering to
the given content prompts requires accommodating both the content and style. In
existing methods, the information usually only flows from style to content,
which may cause conflict between the style and content, harming the
integration. In contrast, in this work we build a bidirectional control flow
between the style and the content, also adjusting the style towards the
content, in which case the style-content collision is alleviated and the
dynamics of the style is better preserved in the integration. Moreover, we
extend the stylized motion generation from one modality, i.e. the style motion,
to multiple modalities including texts and images through contrastive learning,
leading to flexible style control on the motion generation. Extensive
experiments demonstrate that our method significantly outperforms previous
methods across different datasets, while also enabling multimodal signal
control. The code of our method will be made publicly available.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 06:40:26 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 18:18:23 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Li",
"Zhe",
""
],
[
"He",
"Yisheng",
""
],
[
"Zhong",
"Lei",
""
],
[
"Shen",
"Weichao",
""
],
[
"Zuo",
"Qi",
""
],
[
"Qiu",
"Lingteng",
""
],
[
"Dong",
"Zilong",
""
],
[
"Yang",
"Laurence Tianruo",
""
],
[
"Yuan",
"Weihao",
""
]
] | TITLE: MulSMo: Multimodal Stylized Motion Generation by Bidirectional Control
Flow
ABSTRACT: Generating motion sequences conforming to a target style while adhering to
the given content prompts requires accommodating both the content and style. In
existing methods, the information usually only flows from style to content,
which may cause conflict between the style and content, harming the
integration. In contrast, in this work we build a bidirectional control flow
between the style and the content, also adjusting the style towards the
content, in which case the style-content collision is alleviated and the
dynamics of the style is better preserved in the integration. Moreover, we
extend the stylized motion generation from one modality, i.e. the style motion,
to multiple modalities including texts and images through contrastive learning,
leading to flexible style control on the motion generation. Extensive
experiments demonstrate that our method significantly outperforms previous
methods across different datasets, while also enabling multimodal signal
control. The code of our method will be made publicly available.
|
2412.13769 | Hari Hara Suthan Chittoor | Hari Hara Suthan Chittoor, Paul Robert Griffin, Ariel Neufeld, Jayne
Thompson, Mile Gu | QuLTSF: Long-Term Time Series Forecasting with Quantum Machine Learning | Published in ICAART 2025 | null | 10.5220/0013395500003890 | null | quant-ph cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-term time series forecasting (LTSF) involves predicting a large number
of future values of a time series based on the past values. This is an
essential task in a wide range of domains including weather forecasting, stock
market analysis and disease outbreak prediction. Over the decades LTSF
algorithms have transitioned from statistical models to deep learning models
like transformer models. Despite the complex architecture of transformer based
LTSF models `Are Transformers Effective for Time Series Forecasting? (Zeng et
al., 2023)' showed that simple linear models can outperform the
state-of-the-art transformer based LTSF models. Recently, quantum machine
learning (QML) is evolving as a domain to enhance the capabilities of classical
machine learning models. In this paper we initiate the application of QML to
LTSF problems by proposing QuLTSF, a simple hybrid QML model for multivariate
LTSF. Through extensive experiments on a widely used weather dataset we show
the advantages of QuLTSF over the state-of-the-art classical linear models, in
terms of reduced mean squared error and mean absolute error.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 12:06:52 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 09:30:51 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chittoor",
"Hari Hara Suthan",
""
],
[
"Griffin",
"Paul Robert",
""
],
[
"Neufeld",
"Ariel",
""
],
[
"Thompson",
"Jayne",
""
],
[
"Gu",
"Mile",
""
]
] | TITLE: QuLTSF: Long-Term Time Series Forecasting with Quantum Machine Learning
ABSTRACT: Long-term time series forecasting (LTSF) involves predicting a large number
of future values of a time series based on the past values. This is an
essential task in a wide range of domains including weather forecasting, stock
market analysis and disease outbreak prediction. Over the decades LTSF
algorithms have transitioned from statistical models to deep learning models
like transformer models. Despite the complex architecture of transformer-based
LTSF models, `Are Transformers Effective for Time Series Forecasting?' (Zeng et
al., 2023) showed that simple linear models can outperform the
state-of-the-art transformer-based LTSF models. Recently, quantum machine
learning (QML) is evolving as a domain to enhance the capabilities of classical
machine learning models. In this paper we initiate the application of QML to
LTSF problems by proposing QuLTSF, a simple hybrid QML model for multivariate
LTSF. Through extensive experiments on a widely used weather dataset we show
the advantages of QuLTSF over the state-of-the-art classical linear models, in
terms of reduced mean squared error and mean absolute error.
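For reference, the "simple linear models" cited above (Zeng et al., 2023) amount to a single linear map from the look-back window to the forecast horizon. The sketch below implements that classical baseline in PyTorch; it is not the QuLTSF hybrid quantum model, whose circuit design is described in the paper.

```python
import torch
import torch.nn as nn

class LinearForecaster(nn.Module):
    """One linear map from a look-back window to the forecast horizon,
    applied independently to each variable of a multivariate series."""

    def __init__(self, lookback: int, horizon: int):
        super().__init__()
        self.proj = nn.Linear(lookback, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, n_vars) -> (batch, horizon, n_vars)
        return self.proj(x.transpose(1, 2)).transpose(1, 2)

# Tiny smoke test on random data.
model = LinearForecaster(lookback=96, horizon=24)
x = torch.randn(8, 96, 7)                      # e.g. 7 weather variables
print(model(x).shape)                          # torch.Size([8, 24, 7])
```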
|
2412.14295 | Anna Manasyan | Anna Manasyan, Maximilian Seitzer, Filip Radovic, Georg Martius,
Andrii Zadaianchuk | Temporally Consistent Object-Centric Learning by Contrasting Slots | Published at CVPR 2025 | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised object-centric learning from videos is a promising approach to
extract structured representations from large, unlabeled collections of videos.
To support downstream tasks like autonomous control, these representations must
be both compositional and temporally consistent. Existing approaches based on
recurrent processing often lack long-term stability across frames because their
training objective does not enforce temporal consistency. In this work, we
introduce a novel object-level temporal contrastive loss for video
object-centric models that explicitly promotes temporal consistency. Our method
significantly improves the temporal consistency of the learned object-centric
representations, yielding more reliable video decompositions that facilitate
challenging downstream tasks such as unsupervised object dynamics prediction.
Furthermore, the inductive bias added by our loss strongly improves object
discovery, leading to state-of-the-art results on both synthetic and real-world
datasets, outperforming even weakly-supervised methods that leverage motion
masks as additional cues.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 19:46:04 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:01:07 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Manasyan",
"Anna",
""
],
[
"Seitzer",
"Maximilian",
""
],
[
"Radovic",
"Filip",
""
],
[
"Martius",
"Georg",
""
],
[
"Zadaianchuk",
"Andrii",
""
]
] | TITLE: Temporally Consistent Object-Centric Learning by Contrasting Slots
ABSTRACT: Unsupervised object-centric learning from videos is a promising approach to
extract structured representations from large, unlabeled collections of videos.
To support downstream tasks like autonomous control, these representations must
be both compositional and temporally consistent. Existing approaches based on
recurrent processing often lack long-term stability across frames because their
training objective does not enforce temporal consistency. In this work, we
introduce a novel object-level temporal contrastive loss for video
object-centric models that explicitly promotes temporal consistency. Our method
significantly improves the temporal consistency of the learned object-centric
representations, yielding more reliable video decompositions that facilitate
challenging downstream tasks such as unsupervised object dynamics prediction.
Furthermore, the inductive bias added by our loss strongly improves object
discovery, leading to state-of-the-art results on both synthetic and real-world
datasets, outperforming even weakly-supervised methods that leverage motion
masks as additional cues.
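An object-level temporal contrastive loss of the kind described can be sketched as an InfoNCE objective in which each slot at frame t is pulled toward its counterpart at frame t+1 and pushed away from all other slots. Pairing slots by index and the temperature value are simplifying assumptions here, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def temporal_slot_contrastive_loss(slots_t, slots_tp1, temperature=0.1):
    """InfoNCE between slots of consecutive frames.

    slots_t, slots_tp1: (batch, n_slots, dim). The positive pair for slot i at
    frame t is assumed to be slot i at frame t+1; in practice a matching step
    would establish this correspondence.
    """
    b, n, d = slots_t.shape
    a = F.normalize(slots_t.reshape(b * n, d), dim=-1)
    p = F.normalize(slots_tp1.reshape(b * n, d), dim=-1)
    logits = a @ p.t() / temperature            # (b*n, b*n) similarity matrix
    targets = torch.arange(b * n, device=a.device)
    return F.cross_entropy(logits, targets)

loss = temporal_slot_contrastive_loss(torch.randn(4, 6, 64), torch.randn(4, 6, 64))
print(float(loss))
```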
|
2501.00959 | Saleh Afroogh | Junfeng Jiao, Saleh Afroogh, Kevin Chen, David Atkinson, Amit
Dhurandhar | IGGA: A Dataset of Industrial Guidelines and Policy Statements for
Generative AIs | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces IGGA, a dataset of 160 industry guidelines and policy
statements for the use of Generative AIs (GAIs) and Large Language Models
(LLMs) in industry and workplace settings, collected from official company
websites and trustworthy news sources. The dataset contains 104,565 words and
serves as a valuable resource for natural language processing tasks commonly
applied in requirements engineering, such as model synthesis, abstraction
identification, and document structure assessment. Additionally, IGGA can be
further annotated to function as a benchmark for various tasks, including
ambiguity detection, requirements categorization, and the identification of
equivalent requirements. Our methodologically rigorous approach ensured a
thorough examination, with a selection of reputable and influential companies
that represent a diverse range of global institutions across six continents.
The dataset captures perspectives from fourteen industry sectors, including
technology, finance, and both public and private institutions, offering a broad
spectrum of insights into the integration of GAIs and LLMs in industry.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 2025 21:31:47 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 19:17:56 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 16:44:15 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jiao",
"Junfeng",
""
],
[
"Afroogh",
"Saleh",
""
],
[
"Chen",
"Kevin",
""
],
[
"Atkinson",
"David",
""
],
[
"Dhurandhar",
"Amit",
""
]
] | TITLE: IGGA: A Dataset of Industrial Guidelines and Policy Statements for
Generative AIs
ABSTRACT: This paper introduces IGGA, a dataset of 160 industry guidelines and policy
statements for the use of Generative AIs (GAIs) and Large Language Models
(LLMs) in industry and workplace settings, collected from official company
websites and trustworthy news sources. The dataset contains 104,565 words and
serves as a valuable resource for natural language processing tasks commonly
applied in requirements engineering, such as model synthesis, abstraction
identification, and document structure assessment. Additionally, IGGA can be
further annotated to function as a benchmark for various tasks, including
ambiguity detection, requirements categorization, and the identification of
equivalent requirements. Our methodologically rigorous approach ensured a
thorough examination, with a selection of reputable and influential companies
that represent a diverse range of global institutions across six continents.
The dataset captures perspectives from fourteen industry sectors, including
technology, finance, and both public and private institutions, offering a broad
spectrum of insights into the integration of GAIs and LLMs in industry.
|
2501.01790 | Zhengcong Fei | Zhengcong Fei, Debang Li, Di Qiu, Changqian Yu, Mingyuan Fan | Ingredients: Blending Custom Photos with Video Diffusion Transformers | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a powerful framework to customize video creations by
incorporating multiple specific identity (ID) photos, with video diffusion
Transformers, referred to as Ingredients. Generally, our method consists of
three primary modules: (i) a facial extractor that captures versatile and
precise facial features for each human ID from both global and local
perspectives; (ii) a multi-scale projector that maps face embeddings into the
contextual space of the image query in video diffusion transformers; (iii) an ID
router that dynamically combines and allocates multiple ID embeddings to the
corresponding space-time regions. Leveraging a meticulously curated text-video
dataset and a multi-stage training protocol, Ingredients demonstrates superior
performance in turning custom photos into dynamic and personalized video
content. Qualitative evaluations highlight the advantages of the proposed method,
positioning it as a significant advancement toward more effective generative
video control tools in Transformer-based architecture, compared to existing
methods. The data, code, and model weights are publicly available at:
https://github.com/feizc/Ingredients.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2025 12:45:22 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 10:47:27 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Fei",
"Zhengcong",
""
],
[
"Li",
"Debang",
""
],
[
"Qiu",
"Di",
""
],
[
"Yu",
"Changqian",
""
],
[
"Fan",
"Mingyuan",
""
]
] | TITLE: Ingredients: Blending Custom Photos with Video Diffusion Transformers
ABSTRACT: This paper presents a powerful framework to customize video creations by
incorporating multiple specific identity (ID) photos, with video diffusion
Transformers, referred to as Ingredients. Generally, our method consists of
three primary modules: (i) a facial extractor that captures versatile and
precise facial features for each human ID from both global and local
perspectives; (ii) a multi-scale projector that maps face embeddings into the
contextual space of the image query in video diffusion transformers; (iii) an ID
router that dynamically combines and allocates multiple ID embeddings to the
corresponding space-time regions. Leveraging a meticulously curated text-video
dataset and a multi-stage training protocol, Ingredients demonstrates superior
performance in turning custom photos into dynamic and personalized video
content. Qualitative evaluations highlight the advantages of the proposed method,
positioning it as a significant advancement toward more effective generative
video control tools in Transformer-based architecture, compared to existing
methods. The data, code, and model weights are publicly available at:
https://github.com/feizc/Ingredients.
|
2501.02063 | Saleh Afroogh | Junfeng Jiao, Saleh Afroogh, Kevin Chen, David Atkinson, Amit
Dhurandhar | AGGA: A Dataset of Academic Guidelines for Generative AI and Large
Language Models | arXiv admin note: text overlap with arXiv:2406.18842,
arXiv:2501.00959 | null | null | null | cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study introduces AGGA, a dataset comprising 80 academic guidelines for
the use of Generative AIs (GAIs) and Large Language Models (LLMs) in academic
settings, meticulously collected from official university websites. The dataset
contains 188,674 words and serves as a valuable resource for natural language
processing tasks commonly applied in requirements engineering, such as model
synthesis, abstraction identification, and document structure assessment.
Additionally, AGGA can be further annotated to function as a benchmark for
various tasks, including ambiguity detection, requirements categorization, and
the identification of equivalent requirements. Our methodologically rigorous
approach ensured a thorough examination, with a selection of universities that
represent a diverse range of global institutions, including top-ranked
universities across six continents. The dataset captures perspectives from a
variety of academic fields, including humanities, technology, and both public
and private institutions, offering a broad spectrum of insights into the
integration of GAIs and LLMs in academia.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2025 19:16:36 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jan 2025 19:12:22 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 16:45:54 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jiao",
"Junfeng",
""
],
[
"Afroogh",
"Saleh",
""
],
[
"Chen",
"Kevin",
""
],
[
"Atkinson",
"David",
""
],
[
"Dhurandhar",
"Amit",
""
]
] | TITLE: AGGA: A Dataset of Academic Guidelines for Generative AI and Large
Language Models
ABSTRACT: This study introduces AGGA, a dataset comprising 80 academic guidelines for
the use of Generative AIs (GAIs) and Large Language Models (LLMs) in academic
settings, meticulously collected from official university websites. The dataset
contains 188,674 words and serves as a valuable resource for natural language
processing tasks commonly applied in requirements engineering, such as model
synthesis, abstraction identification, and document structure assessment.
Additionally, AGGA can be further annotated to function as a benchmark for
various tasks, including ambiguity detection, requirements categorization, and
the identification of equivalent requirements. Our methodologically rigorous
approach ensured a thorough examination, with a selection of universities that
represent a diverse range of global institutions, including top-ranked
universities across six continents. The dataset captures perspectives from a
variety of academic fields, including humanities, technology, and both public
and private institutions, offering a broad spectrum of insights into the
integration of GAIs and LLMs in academia.
|
2501.05661 | Xiaochen Zheng | Yinghao Zhu and Xiaochen Zheng and Ahmed Allam and Michael Krauthammer | TAMER: A Test-Time Adaptive MoE-Driven Framework for EHR Representation
Learning | 8 pages, 3 figures, 7 tables. Code is available at:
https://github.com/yhzhu99/TAMER | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose TAMER, a Test-time Adaptive MoE-driven framework for Electronic
Health Record (EHR) Representation learning. TAMER introduces a framework where
a Mixture-of-Experts (MoE) architecture is co-designed with Test-Time
Adaptation (TTA) to jointly mitigate the intertwined challenges of patient
heterogeneity and distribution shifts in EHR modeling. The MoE focuses on
latent patient subgroups through domain-aware expert specialization, while TTA
enables real-time adaptation to evolving health status distributions when new
patient samples are introduced. Extensive experiments across four real-world
EHR datasets demonstrate that TAMER consistently improves predictive
performance for both mortality and readmission risk tasks when combined with
diverse EHR modeling backbones. TAMER offers a promising approach for dynamic
and personalized EHR-based predictions in practical clinical settings.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2025 02:25:39 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:21:08 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhu",
"Yinghao",
""
],
[
"Zheng",
"Xiaochen",
""
],
[
"Allam",
"Ahmed",
""
],
[
"Krauthammer",
"Michael",
""
]
] | TITLE: TAMER: A Test-Time Adaptive MoE-Driven Framework for EHR Representation
Learning
ABSTRACT: We propose TAMER, a Test-time Adaptive MoE-driven framework for Electronic
Health Record (EHR) Representation learning. TAMER introduces a framework where
a Mixture-of-Experts (MoE) architecture is co-designed with Test-Time
Adaptation (TTA) to jointly mitigate the intertwined challenges of patient
heterogeneity and distribution shifts in EHR modeling. The MoE focuses on
latent patient subgroups through domain-aware expert specialization, while TTA
enables real-time adaptation to evolving health status distributions when new
patient samples are introduced. Extensive experiments across four real-world
EHR datasets demonstrate that TAMER consistently improves predictive
performance for both mortality and readmission risk tasks when combined with
diverse EHR modeling backbones. TAMER offers a promising approach for dynamic
and personalized EHR-based predictions in practical clinical settings.
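A minimal sketch of the two ingredients named above, a gated mixture-of-experts head and a test-time adaptation step, is given below. The entropy-minimization objective and the choice to adapt only the gate are common test-time-adaptation conventions assumed for illustration; they are not claimed to match TAMER's actual procedure (see the linked repository).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEHead(nn.Module):
    """Minimal mixture-of-experts classifier over patient representations."""

    def __init__(self, dim, n_experts=4, n_classes=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, n_classes) for _ in range(n_experts))

    def forward(self, h):
        weights = F.softmax(self.gate(h), dim=-1)                  # (B, E)
        outs = torch.stack([e(h) for e in self.experts], dim=1)    # (B, E, C)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)           # (B, C)

def tta_step(model, batch, lr=1e-4):
    """One entropy-minimization adaptation step on an unlabeled test batch
    (a common TTA objective, assumed here for illustration)."""
    opt = torch.optim.SGD(model.gate.parameters(), lr=lr)  # adapt the gate only
    probs = F.softmax(model(batch), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    model.zero_grad()
    entropy.backward()
    opt.step()
    return float(entropy)

model = MoEHead(dim=32)
print(tta_step(model, torch.randn(16, 32)))
```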
|
2501.08238 | Xuanjun Chen | Xuanjun Chen, Jiawei Du, Haibin Wu, Lin Zhang, I-Ming Lin, I-Hsiang
Chiu, Wenze Ren, Yuan Tseng, Yu Tsao, Jyh-Shing Roger Jang, Hung-yi Lee | CodecFake+: A Large-Scale Neural Audio Codec-Based Deepfake Speech
Dataset | Work in Progress | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | With the rapid advancement of neural audio codecs, codec-based speech
generation (CoSG) systems have become highly powerful. Unfortunately, CoSG also
enables the creation of highly realistic deepfake speech, making it easier to
mimic an individual's voice and spread misinformation. We refer to this
emerging deepfake speech generated by CoSG systems as CodecFake. Detecting such
CodecFake is an urgent challenge, yet most existing systems primarily focus on
detecting fake speech generated by traditional speech synthesis models. In this
paper, we introduce CodecFake+, a large-scale dataset designed to advance
CodecFake detection. To our knowledge, CodecFake+ is the largest dataset
encompassing the most diverse range of codec architectures. The training set is
generated through re-synthesis using 31 publicly available open-source codec
models, while the evaluation set includes web-sourced data from 17 advanced
CoSG models. We also propose a comprehensive taxonomy that categorizes codecs
by their root components: vector quantizer, auxiliary objectives, and decoder
types. Our proposed dataset and taxonomy enable detailed analysis at multiple
levels to discern the key factors for successful CodecFake detection. At the
individual codec level, we validate the effectiveness of using codec
re-synthesized speech (CoRS) as training data for large-scale CodecFake
detection. At the taxonomy level, we show that detection performance is
strongest when the re-synthesis model incorporates disentanglement auxiliary
objectives or a frequency-domain decoder. Furthermore, from the perspective of
using all the CoRS training data, we show that our proposed taxonomy can be
used to select better training data for improving detection performance.
Overall, we envision that CodecFake+ will be a valuable resource for both
general and fine-grained exploration to develop better anti-spoofing models
against CodecFake.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 16:26:14 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 22:22:05 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Xuanjun",
""
],
[
"Du",
"Jiawei",
""
],
[
"Wu",
"Haibin",
""
],
[
"Zhang",
"Lin",
""
],
[
"Lin",
"I-Ming",
""
],
[
"Chiu",
"I-Hsiang",
""
],
[
"Ren",
"Wenze",
""
],
[
"Tseng",
"Yuan",
""
],
[
"Tsao",
"Yu",
""
],
[
"Jang",
"Jyh-Shing Roger",
""
],
[
"Lee",
"Hung-yi",
""
]
] | TITLE: CodecFake+: A Large-Scale Neural Audio Codec-Based Deepfake Speech
Dataset
ABSTRACT: With the rapid advancement of neural audio codecs, codec-based speech
generation (CoSG) systems have become highly powerful. Unfortunately, CoSG also
enables the creation of highly realistic deepfake speech, making it easier to
mimic an individual's voice and spread misinformation. We refer to this
emerging deepfake speech generated by CoSG systems as CodecFake. Detecting such
CodecFake is an urgent challenge, yet most existing systems primarily focus on
detecting fake speech generated by traditional speech synthesis models. In this
paper, we introduce CodecFake+, a large-scale dataset designed to advance
CodecFake detection. To our knowledge, CodecFake+ is the largest dataset
encompassing the most diverse range of codec architectures. The training set is
generated through re-synthesis using 31 publicly available open-source codec
models, while the evaluation set includes web-sourced data from 17 advanced
CoSG models. We also propose a comprehensive taxonomy that categorizes codecs
by their root components: vector quantizer, auxiliary objectives, and decoder
types. Our proposed dataset and taxonomy enable detailed analysis at multiple
levels to discern the key factors for successful CodecFake detection. At the
individual codec level, we validate the effectiveness of using codec
re-synthesized speech (CoRS) as training data for large-scale CodecFake
detection. At the taxonomy level, we show that detection performance is
strongest when the re-synthesis model incorporates disentanglement auxiliary
objectives or a frequency-domain decoder. Furthermore, from the perspective of
using all the CoRS training data, we show that our proposed taxonomy can be
used to select better training data for improving detection performance.
Overall, we envision that CodecFake+ will be a valuable resource for both
general and fine-grained exploration to develop better anti-spoofing models
against CodecFake.
|
2501.08880 | Yuhang Ming | Yuhang Ming, Di Ma, Weichen Dai, Han Yang, Rui Fan, Guofeng Zhang,
Wanzeng Kong | SLC$^2$-SLAM: Semantic-guided Loop Closure using Shared Latent Code for
NeRF SLAM | Accepted to RAL. 8 pages, 5 figures, 5 tables | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Targeting the notorious cumulative drift errors in NeRF SLAM, we propose a
Semantic-guided Loop Closure using Shared Latent Code, dubbed SLC$^2$-SLAM. We
argue that latent codes stored in many NeRF SLAM systems are not fully
exploited, as they are only used for better reconstruction. In this paper, we
propose a simple yet effective way to detect potential loops using the same
latent codes as local features. To further improve the loop detection
performance, we use the semantic information, which is also decoded from the
same latent codes, to guide the aggregation of local features. Finally, with the
potential loops detected, we close them with a graph optimization followed by
bundle adjustment to refine both the estimated poses and the reconstructed
scene. To evaluate the performance of our SLC$^2$-SLAM, we conduct extensive
experiments on Replica and ScanNet datasets. Our proposed semantic-guided loop
closure significantly outperforms the pre-trained NetVLAD and ORB combined with
Bag-of-Words, which are used in all the other NeRF SLAM systems with loop closure. As a
result, our SLC$^2$-SLAM also demonstrated better tracking and reconstruction
performance, especially in larger scenes with more loops, like ScanNet.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 15:51:06 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 07:31:25 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ming",
"Yuhang",
""
],
[
"Ma",
"Di",
""
],
[
"Dai",
"Weichen",
""
],
[
"Yang",
"Han",
""
],
[
"Fan",
"Rui",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Kong",
"Wanzeng",
""
]
] | TITLE: SLC$^2$-SLAM: Semantic-guided Loop Closure using Shared Latent Code for
NeRF SLAM
ABSTRACT: Targeting the notorious cumulative drift errors in NeRF SLAM, we propose a
Semantic-guided Loop Closure using Shared Latent Code, dubbed SLC$^2$-SLAM. We
argue that latent codes stored in many NeRF SLAM systems are not fully
exploited, as they are only used for better reconstruction. In this paper, we
propose a simple yet effective way to detect potential loops using the same
latent codes as local features. To further improve the loop detection
performance, we use the semantic information, which is also decoded from the
same latent codes, to guide the aggregation of local features. Finally, with the
potential loops detected, we close them with a graph optimization followed by
bundle adjustment to refine both the estimated poses and the reconstructed
scene. To evaluate the performance of our SLC$^2$-SLAM, we conduct extensive
experiments on Replica and ScanNet datasets. Our proposed semantic-guided loop
closure significantly outperforms the pre-trained NetVLAD and ORB combined with
Bag-of-Words, which are used in all the other NeRF SLAM systems with loop closure. As a
result, our SLC$^2$-SLAM also demonstrated better tracking and reconstruction
performance, especially in larger scenes with more loops, like ScanNet.
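The loop-detection idea, reusing per-point latent codes as local features and letting the decoded semantics guide their aggregation into a global descriptor, can be sketched as below. The per-class mean pooling and the cosine-similarity threshold are illustrative assumptions, not the paper's aggregation scheme.

```python
import numpy as np

def global_descriptor(latent_codes, semantic_labels, n_classes):
    """Average latent codes per semantic class and concatenate the class means.

    latent_codes:    (N, D) local latent features of one keyframe
    semantic_labels: (N,) integer class ids decoded from the same latents
    """
    d = latent_codes.shape[1]
    desc = np.zeros(n_classes * d)
    for c in range(n_classes):
        mask = semantic_labels == c
        if mask.any():
            desc[c * d:(c + 1) * d] = latent_codes[mask].mean(axis=0)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def detect_loops(descriptors, min_gap=30, threshold=0.9):
    """Return (i, j) keyframe pairs whose descriptors are similar enough."""
    loops = []
    for i in range(len(descriptors)):
        for j in range(i + min_gap, len(descriptors)):
            if float(descriptors[i] @ descriptors[j]) > threshold:
                loops.append((i, j))
    return loops

rng = np.random.default_rng(0)
frames = [(rng.normal(size=(100, 8)), rng.integers(0, 3, 100)) for _ in range(4)]
frames.append(frames[0])                      # revisit the first place
descs = [global_descriptor(z, s, n_classes=3) for z, s in frames]
print(detect_loops(descs, min_gap=3, threshold=0.9))   # expect [(0, 4)]
```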
|
2501.09129 | Harris Hardiman-Mostow | Harris Hardiman-Mostow, Charles Marshak, Alexander L. Handwerger | Deep Self-Supervised Disturbance Mapping with the OPERA Sentinel-1
Radiometric Terrain Corrected SAR Backscatter Product | 19 pages, 18 figures, 5 tables. Preprint. Submitted to JSTARS.
Revised figures, clarifications, added references | null | null | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mapping land surface disturbances supports disaster response, resource and
ecosystem management, and climate adaptation efforts. Synthetic aperture radar
(SAR) is an invaluable tool for disturbance mapping, providing consistent
time-series images of the ground regardless of weather or illumination
conditions. Despite SAR's potential for disturbance mapping, processing SAR
data to an analysis-ready format requires expertise and significant compute
resources, particularly for large-scale global analysis. In October 2023,
NASA's Observational Products for End-Users from Remote Sensing Analysis
(OPERA) project released the near-global Radiometric Terrain Corrected SAR
backscatter from Sentinel-1 (RTC-S1) dataset, providing publicly available,
analysis-ready SAR imagery. In this work, we utilize this new dataset to
systematically analyze land surface disturbances. As labeling SAR data is often
prohibitively time-consuming, we train a self-supervised vision transformer -
which requires no labels to train - on OPERA RTC-S1 data to estimate a
per-pixel distribution from the set of baseline imagery and assess disturbances
when there is significant deviation from the modeled distribution. To test our
model's capability and generality, we evaluate three different natural
disasters - which represent high-intensity, abrupt disturbances - from three
different regions of the world. Across events, our approach yields high quality
delineations: F1 scores exceeding 0.6 and Areas Under the Precision-Recall
Curve exceeding 0.65, consistently outperforming existing SAR disturbance
methods. Our findings suggest that a self-supervised vision transformer is
well-suited for global disturbance mapping and can be a valuable tool for
operational, near-global disturbance monitoring, particularly when labeled data
does not exist.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 20:24:18 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 20:49:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Hardiman-Mostow",
"Harris",
""
],
[
"Marshak",
"Charles",
""
],
[
"Handwerger",
"Alexander L.",
""
]
] | TITLE: Deep Self-Supervised Disturbance Mapping with the OPERA Sentinel-1
Radiometric Terrain Corrected SAR Backscatter Product
ABSTRACT: Mapping land surface disturbances supports disaster response, resource and
ecosystem management, and climate adaptation efforts. Synthetic aperture radar
(SAR) is an invaluable tool for disturbance mapping, providing consistent
time-series images of the ground regardless of weather or illumination
conditions. Despite SAR's potential for disturbance mapping, processing SAR
data to an analysis-ready format requires expertise and significant compute
resources, particularly for large-scale global analysis. In October 2023,
NASA's Observational Products for End-Users from Remote Sensing Analysis
(OPERA) project released the near-global Radiometric Terrain Corrected SAR
backscatter from Sentinel-1 (RTC-S1) dataset, providing publicly available,
analysis-ready SAR imagery. In this work, we utilize this new dataset to
systematically analyze land surface disturbances. As labeling SAR data is often
prohibitively time-consuming, we train a self-supervised vision transformer -
which requires no labels to train - on OPERA RTC-S1 data to estimate a
per-pixel distribution from the set of baseline imagery and assess disturbances
when there is significant deviation from the modeled distribution. To test our
model's capability and generality, we evaluate three different natural
disasters - which represent high-intensity, abrupt disturbances - from three
different regions of the world. Across events, our approach yields high quality
delineations: F1 scores exceeding 0.6 and Areas Under the Precision-Recall
Curve exceeding 0.65, consistently outperforming existing SAR disturbance
methods. Our findings suggest that a self-supervised vision transformer is
well-suited for global disturbance mapping and can be a valuable tool for
operational, near-global disturbance monitoring, particularly when labeled data
does not exist.
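The detection principle described above, modeling a per-pixel baseline distribution and flagging significant deviations, can be illustrated with a simple per-pixel Gaussian stand-in. The paper estimates this distribution with a self-supervised vision transformer; the z-score rule below is only a toy surrogate for that step.

```python
import numpy as np

def disturbance_map(baseline_stack, new_image, z_thresh=3.0):
    """Flag pixels whose new backscatter deviates from the baseline distribution.

    baseline_stack: (T, H, W) pre-event backscatter images (e.g. in dB)
    new_image:      (H, W) post-event image
    """
    mu = baseline_stack.mean(axis=0)
    sigma = baseline_stack.std(axis=0) + 1e-6      # avoid division by zero
    z = np.abs(new_image - mu) / sigma
    return z > z_thresh                            # boolean disturbance mask

rng = np.random.default_rng(0)
baseline = rng.normal(-12.0, 1.0, size=(20, 64, 64))   # synthetic dB values
post = baseline.mean(axis=0).copy()
post[20:30, 20:30] -= 8.0                               # simulate a disturbance
print(disturbance_map(baseline, post).sum())            # ~100 flagged pixels
```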
|
2501.10266 | Xiangyuan Peng | Xiangyuan Peng, Huawei Sun, Kay Bierzynski, Anton Fischbacher, Lorenzo
Servadei and Robert Wille | MutualForce: Mutual-Aware Enhancement for 4D Radar-LiDAR 3D Object
Detection | Accepted by ICASSP 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Radar and LiDAR have been widely used in autonomous driving as LiDAR provides
rich structure information, and radar demonstrates high robustness under
adverse weather. Recent studies highlight the effectiveness of fusing radar and
LiDAR point clouds. However, challenges remain due to the modality misalignment
and information loss during feature extractions. To address these issues, we
propose a 4D radar-LiDAR framework to mutually enhance their representations.
Initially, the indicative features from radar are utilized to guide both radar
and LiDAR geometric feature learning. Subsequently, to mitigate their sparsity
gap, the shape information from LiDAR is used to enrich radar BEV features.
Extensive experiments on the View-of-Delft (VoD) dataset demonstrate our
approach's superiority over existing methods, achieving the highest mAP of
71.76% across the entire area and 86.36% within the driving corridor.
Especially for cars, we improve the AP by 4.17% and 4.20% due to the strong
indicative features and symmetric shapes.
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 15:48:37 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 09:28:00 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Peng",
"Xiangyuan",
""
],
[
"Sun",
"Huawei",
""
],
[
"Bierzynski",
"Kay",
""
],
[
"Fischbacher",
"Anton",
""
],
[
"Servadei",
"Lorenzo",
""
],
[
"Wille",
"Robert",
""
]
] | TITLE: MutualForce: Mutual-Aware Enhancement for 4D Radar-LiDAR 3D Object
Detection
ABSTRACT: Radar and LiDAR have been widely used in autonomous driving as LiDAR provides
rich structure information, and radar demonstrates high robustness under
adverse weather. Recent studies highlight the effectiveness of fusing radar and
LiDAR point clouds. However, challenges remain due to the modality misalignment
and information loss during feature extractions. To address these issues, we
propose a 4D radar-LiDAR framework to mutually enhance their representations.
Initially, the indicative features from radar are utilized to guide both radar
and LiDAR geometric feature learning. Subsequently, to mitigate their sparsity
gap, the shape information from LiDAR is used to enrich radar BEV features.
Extensive experiments on the View-of-Delft (VoD) dataset demonstrate our
approach's superiority over existing methods, achieving the highest mAP of
71.76% across the entire area and 86.36% within the driving corridor.
Especially for cars, we improve the AP by 4.17% and 4.20% due to the strong
indicative features and symmetric shapes.
|
2501.14009 | Aditya Parameshwaran | Aditya Parameshwaran and Yue Wang | Scalable and Interpretable Verification of Image-based Neural Network
Controllers for Autonomous Vehicles | 11 pages, 5 figures | null | 10.1145/3716550.3722037 | null | cs.LG cs.AI cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Existing formal verification methods for image-based neural network
controllers in autonomous vehicles often struggle with high-dimensional inputs,
computational inefficiency, and a lack of explainability. These challenges make
it difficult to ensure safety and reliability, as processing high-dimensional
image data is computationally intensive and neural networks are typically
treated as black boxes. To address these issues, we propose SEVIN (Scalable and
Explainable Verification of Image-Based Neural Network Controllers), a
framework that leverages a Variational Autoencoder (VAE) to encode
high-dimensional images into a lower-dimensional, explainable latent space. By
annotating latent variables with corresponding control actions, we generate
convex polytopes that serve as structured input spaces for verification,
significantly reducing computational complexity and enhancing scalability.
Integrating the VAE's decoder with the neural network controller allows for
formal and robustness verification using these explainable polytopes. Our
approach also incorporates robustness verification under real-world
perturbations by augmenting the dataset and retraining the VAE to capture
environmental variations. Experimental results demonstrate that SEVIN achieves
efficient and scalable verification while providing explainable insights into
controller behavior, bridging the gap between formal verification techniques
and practical applications in safety-critical systems.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 16:46:45 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 18:01:53 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Parameshwaran",
"Aditya",
""
],
[
"Wang",
"Yue",
""
]
] | TITLE: Scalable and Interpretable Verification of Image-based Neural Network
Controllers for Autonomous Vehicles
ABSTRACT: Existing formal verification methods for image-based neural network
controllers in autonomous vehicles often struggle with high-dimensional inputs,
computational inefficiency, and a lack of explainability. These challenges make
it difficult to ensure safety and reliability, as processing high-dimensional
image data is computationally intensive and neural networks are typically
treated as black boxes. To address these issues, we propose SEVIN (Scalable and
Explainable Verification of Image-Based Neural Network Controllers), a
framework that leverages a Variational Autoencoder (VAE) to encode
high-dimensional images into a lower-dimensional, explainable latent space. By
annotating latent variables with corresponding control actions, we generate
convex polytopes that serve as structured input spaces for verification,
significantly reducing computational complexity and enhancing scalability.
Integrating the VAE's decoder with the neural network controller allows for
formal and robustness verification using these explainable polytopes. Our
approach also incorporates robustness verification under real-world
perturbations by augmenting the dataset and retraining the VAE to capture
environmental variations. Experimental results demonstrate that SEVIN achieves
efficient and scalable verification while providing explainable insights into
controller behavior, bridging the gap between formal verification techniques
and practical applications in safety-critical systems.
|
2501.14894 | Qiaojie Zheng | Qiaojie Zheng, Jiucai Zhang, Xiaoli Zhang | Enhancing accuracy of uncertainty estimation in appearance-based gaze
tracking with probabilistic evaluation and calibration | 9 pages, 7 figures, 2 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurately knowing uncertainties in appearance-based gaze tracking is
critical for ensuring reliable downstream applications. Due to the lack of
individual uncertainty labels, current uncertainty-aware approaches adopt
probabilistic models to acquire uncertainties by following distributions in the
training dataset. Without regularization, this approach lets the uncertainty model
build biases and overfit the training data, leading to poor performance when
deployed. We first presented a strictly proper evaluation metric from the
probabilistic perspective, based on comparing the coverage probability between
prediction and observation, to provide a quantitative evaluation for better
assessment of the inferred uncertainties. We then proposed a correction
strategy based on probability calibration to mitigate biases in the estimated
uncertainties of the trained models. Finally, we demonstrated the effectiveness
of the correction strategy with experiments performed on two popular gaze
estimation datasets with distinctive image characteristics caused by data
collection settings.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2025 19:33:55 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Feb 2025 21:07:44 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 21:23:20 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zheng",
"Qiaojie",
""
],
[
"Zhang",
"Jiucai",
""
],
[
"Zhang",
"Xiaoli",
""
]
] | TITLE: Enhancing accuracy of uncertainty estimation in appearance-based gaze
tracking with probabilistic evaluation and calibration
ABSTRACT: Accurately knowing uncertainties in appearance-based gaze tracking is
critical for ensuring reliable downstream applications. Due to the lack of
individual uncertainty labels, current uncertainty-aware approaches adopt
probabilistic models to acquire uncertainties by following distributions in the
training dataset. Without regularization, this approach lets the uncertainty model
build biases and overfit the training data, leading to poor performance when
deployed. We first presented a strictly proper evaluation metric from the
probabilistic perspective, based on comparing the coverage probability between
prediction and observation, to provide a quantitative evaluation for better
assessment of the inferred uncertainties. We then proposed a correction
strategy based on probability calibration to mitigate biases in the estimated
uncertainties of the trained models. Finally, we demonstrated the effectiveness
of the correction strategy with experiments performed on two popular gaze
estimation datasets with distinctive image characteristics caused by data
collection settings.
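The coverage-probability idea above can be illustrated for Gaussian predictive distributions: compare how often observations fall inside central prediction intervals against the nominal level. The Gaussian form and the grid of levels are assumptions for this sketch, not the paper's exact metric.

```python
import numpy as np
from scipy.stats import norm

def empirical_coverage(y_true, mu, sigma, levels):
    """Fraction of observations inside the central interval at each nominal level."""
    coverages = []
    for level in levels:
        z = norm.ppf(0.5 + level / 2.0)            # interval half-width in std units
        inside = np.abs(y_true - mu) <= z * sigma
        coverages.append(inside.mean())
    return np.array(coverages)

rng = np.random.default_rng(0)
mu = rng.normal(size=5000)
y = mu + rng.normal(scale=1.0, size=5000)          # true noise std = 1.0
levels = np.array([0.5, 0.8, 0.9, 0.95])
# An overconfident model (predicted sigma too small) under-covers every level.
print(empirical_coverage(y, mu, 0.6 * np.ones_like(mu), levels))
print(levels)
```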
|
2502.04382 | Rajiv Movva | Rajiv Movva, Kenny Peng, Nikhil Garg, Jon Kleinberg, Emma Pierson | Sparse Autoencoders for Hypothesis Generation | First two authors contributed equally; working paper. Code is
available at https://github.com/rmovva/HypotheSAEs | null | null | null | cs.CL cs.AI cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We describe HypotheSAEs, a general method to hypothesize interpretable
relationships between text data (e.g., headlines) and a target variable (e.g.,
clicks). HypotheSAEs has three steps: (1) train a sparse autoencoder on text
embeddings to produce interpretable features describing the data distribution,
(2) select features that predict the target variable, and (3) generate a
natural language interpretation of each feature (e.g., "mentions being
surprised or shocked") using an LLM. Each interpretation serves as a hypothesis
about what predicts the target variable. Compared to baselines, our method
better identifies reference hypotheses on synthetic datasets (at least +0.06 in
F1) and produces more predictive hypotheses on real datasets (~twice as many
significant findings), despite requiring 1-2 orders of magnitude less compute
than recent LLM-based methods. HypotheSAEs also produces novel discoveries on
two well-studied tasks: explaining partisan differences in Congressional
speeches and identifying drivers of engagement with online headlines.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 18:58:02 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 17:51:56 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Movva",
"Rajiv",
""
],
[
"Peng",
"Kenny",
""
],
[
"Garg",
"Nikhil",
""
],
[
"Kleinberg",
"Jon",
""
],
[
"Pierson",
"Emma",
""
]
] | TITLE: Sparse Autoencoders for Hypothesis Generation
ABSTRACT: We describe HypotheSAEs, a general method to hypothesize interpretable
relationships between text data (e.g., headlines) and a target variable (e.g.,
clicks). HypotheSAEs has three steps: (1) train a sparse autoencoder on text
embeddings to produce interpretable features describing the data distribution,
(2) select features that predict the target variable, and (3) generate a
natural language interpretation of each feature (e.g., "mentions being
surprised or shocked") using an LLM. Each interpretation serves as a hypothesis
about what predicts the target variable. Compared to baselines, our method
better identifies reference hypotheses on synthetic datasets (at least +0.06 in
F1) and produces more predictive hypotheses on real datasets (~twice as many
significant findings), despite requiring 1-2 orders of magnitude less compute
than recent LLM-based methods. HypotheSAEs also produces novel discoveries on
two well-studied tasks: explaining partisan differences in Congressional
speeches and identifying drivers of engagement with online headlines.
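Steps (1) and (2) can be sketched in a few lines of PyTorch: a small autoencoder with an L1 activation penalty trained on text embeddings, followed by ranking features by their correlation with the target. The architecture, penalty weight, and correlation-based selection are illustrative choices; the authors' actual implementation is in the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_emb, d_hidden):
        super().__init__()
        self.enc = nn.Linear(d_emb, d_hidden)
        self.dec = nn.Linear(d_hidden, d_emb)

    def forward(self, x):
        h = F.relu(self.enc(x))          # non-negative feature activations
        return self.dec(h), h

def train_sae(embeddings, d_hidden=256, l1=1e-3, epochs=200, lr=1e-3):
    model = SparseAutoencoder(embeddings.shape[1], d_hidden)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, h = model(embeddings)
        loss = F.mse_loss(recon, embeddings) + l1 * h.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def select_features(model, embeddings, target, k=10):
    """Rank features by absolute correlation of their activations with the target."""
    with torch.no_grad():
        _, h = model(embeddings)
    h = (h - h.mean(0)) / (h.std(0) + 1e-8)
    t = (target - target.mean()) / (target.std() + 1e-8)
    corr = (h * t.unsqueeze(1)).mean(0)
    return torch.topk(corr.abs(), k).indices

emb = torch.randn(512, 64)               # stand-in for text embeddings
y = torch.randn(512)                     # stand-in for the target variable
sae = train_sae(emb, d_hidden=128, epochs=50)
print(select_features(sae, emb, y, k=5))
```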
|
2502.05092 | Rohit Saxena | Rohit Saxena, Aryo Pradipta Gema, Pasquale Minervini | Lost in Time: Clock and Calendar Understanding Challenges in Multimodal
LLMs | Accepted at the ICLR 2025 Workshop on Reasoning and Planning for
Large Language Models | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Understanding time from visual representations is a fundamental cognitive
skill, yet it remains a challenge for multimodal large language models (MLLMs).
In this work, we investigate the capabilities of MLLMs in interpreting time and
date through analogue clocks and yearly calendars. To facilitate this, we
curated a structured dataset comprising two subsets: 1) $\textit{ClockQA}$,
which comprises various types of clock styles$-$standard, black-dial,
no-second-hand, Roman numeral, and arrow-hand clocks$-$paired with time-related
questions; and 2) $\textit{CalendarQA}$, which consists of yearly calendar
images with questions ranging from commonly known dates (e.g., Christmas, New
Year's Day) to computationally derived ones (e.g., the 100th or 153rd day of
the year). We aim to analyse how MLLMs can perform visual recognition,
numerical reasoning, and temporal inference when presented with time-related
visual data. Our evaluations show that despite recent advancements, reliably
understanding time remains a significant challenge for MLLMs.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 17:11:23 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 11:43:52 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Saxena",
"Rohit",
""
],
[
"Gema",
"Aryo Pradipta",
""
],
[
"Minervini",
"Pasquale",
""
]
] | TITLE: Lost in Time: Clock and Calendar Understanding Challenges in Multimodal
LLMs
ABSTRACT: Understanding time from visual representations is a fundamental cognitive
skill, yet it remains a challenge for multimodal large language models (MLLMs).
In this work, we investigate the capabilities of MLLMs in interpreting time and
date through analogue clocks and yearly calendars. To facilitate this, we
curated a structured dataset comprising two subsets: 1) $\textit{ClockQA}$,
which comprises various types of clock styles$-$standard, black-dial,
no-second-hand, Roman numeral, and arrow-hand clocks$-$paired with time-related
questions; and 2) $\textit{CalendarQA}$, which consists of yearly calendar
images with questions ranging from commonly known dates (e.g., Christmas, New
Year's Day) to computationally derived ones (e.g., the 100th or 153rd day of
the year). We aim to analyse how MLLMs can perform visual recognition,
numerical reasoning, and temporal inference when presented with time-related
visual data. Our evaluations show that despite recent advancements, reliably
understanding time remains a significant challenge for MLLMs.
|
2502.16793 | Chen Yang | Yang Chen and Bin Zhou | VGFL-SA: Vertical Graph Federated Learning Structure Attack Based on
Contrastive Learning | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) have gained attention for their ability to learn
representations from graph data. Due to privacy concerns and conflicts of
interest that prevent clients from directly sharing graph data with one
another, Vertical Graph Federated Learning (VGFL) frameworks have been
developed. Recent studies have shown that VGFL is vulnerable to adversarial
attacks that degrade performance. However, it is a common problem that client
nodes are often unlabeled in the realm of VGFL. Consequently, the existing
attacks, which rely on the availability of labeling information to obtain
gradients, are inherently constrained in their applicability. This limitation
precludes their deployment in practical, real-world environments. To address
the above problems, we propose a novel graph adversarial attack against VGFL,
referred to as VGFL-SA, to degrade the performance of VGFL by modifying the
local clients' structure without using labels. Specifically, VGFL-SA uses a
contrastive learning method to complete the attack before the local clients are
trained. VGFL-SA first accesses the graph structure and node feature
information of the poisoned clients, and generates the contrastive views by
node-degree-based edge augmentation and feature shuffling augmentation. Then,
VGFL-SA uses the shared graph encoder to get the embedding of each view, and
the gradients of the adjacency matrices are obtained by the contrastive
function. Finally, perturbed edges are generated using gradient modification
rules. We validated the performance of VGFL-SA by performing a node
classification task on real-world datasets, and the results show that VGFL-SA
achieves good attack effectiveness and transferability.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 03:04:48 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 15:07:23 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Yang",
""
],
[
"Zhou",
"Bin",
""
]
] | TITLE: VGFL-SA: Vertical Graph Federated Learning Structure Attack Based on
Contrastive Learning
ABSTRACT: Graph Neural Networks (GNNs) have gained attention for their ability to learn
representations from graph data. Due to privacy concerns and conflicts of
interest that prevent clients from directly sharing graph data with one
another, Vertical Graph Federated Learning (VGFL) frameworks have been
developed. Recent studies have shown that VGFL is vulnerable to adversarial
attacks that degrade performance. However, it is a common problem that client
nodes are often unlabeled in the realm of VGFL. Consequently, the existing
attacks, which rely on the availability of labeling information to obtain
gradients, are inherently constrained in their applicability. This limitation
precludes their deployment in practical, real-world environments. To address
the above problems, we propose a novel graph adversarial attack against VGFL,
referred to as VGFL-SA, to degrade the performance of VGFL by modifying the
local clients' structure without using labels. Specifically, VGFL-SA uses a
contrastive learning method to complete the attack before the local clients are
trained. VGFL-SA first accesses the graph structure and node feature
information of the poisoned clients, and generates the contrastive views by
node-degree-based edge augmentation and feature shuffling augmentation. Then,
VGFL-SA uses the shared graph encoder to get the embedding of each view, and
the gradients of the adjacency matrices are obtained by the contrastive
function. Finally, perturbed edges are generated using gradient modification
rules. We validated the performance of VGFL-SA by performing a node
classification task on real-world datasets, and the results show that VGFL-SA
achieves good attack effectiveness and transferability.
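The two contrastive-view augmentations named above can be sketched on a dense adjacency matrix as follows. The inverse-degree edge-drop probability and the row permutation are illustrative guesses at "node-degree-based edge augmentation" and "feature shuffling augmentation"; they are not taken from the paper.

```python
import numpy as np

def degree_based_edge_augmentation(adj, drop_scale=0.3, rng=None):
    """Drop each edge with probability inversely related to its endpoints' degrees,
    so that edges attached to low-degree nodes are perturbed more often (an
    assumed scheme for illustration)."""
    if rng is None:
        rng = np.random.default_rng()
    deg = adj.sum(axis=1)
    aug = adj.copy()
    rows, cols = np.triu_indices_from(adj, k=1)
    for i, j in zip(rows, cols):
        if adj[i, j]:
            p_drop = drop_scale / (1.0 + min(deg[i], deg[j]))
            if rng.random() < p_drop:
                aug[i, j] = aug[j, i] = 0.0
    return aug

def feature_shuffling_augmentation(features, rng=None):
    """Shuffle rows of the feature matrix to break node-feature correspondence."""
    if rng is None:
        rng = np.random.default_rng()
    perm = rng.permutation(features.shape[0])
    return features[perm]

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.4).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                                   # symmetric 0/1 adjacency
feats = rng.normal(size=(6, 4))
print(degree_based_edge_augmentation(adj, rng=rng).sum(),
      feature_shuffling_augmentation(feats, rng=rng).shape)
```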
|
2502.19351 | Ricardo Rios | Ademir G. Costa Junior, F\'abio S. da Silva and Ricardo Rios | Deep Learning-Based Transfer Learning for Classification of Cassava
Disease | 12 pages, in Portuguese language, 3 figures | null | 10.5753/eniac.2024.244378 | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a performance comparison among four Convolutional Neural
Network architectures (EfficientNet-B3, InceptionV3, ResNet50, and VGG16) for
classifying cassava disease images. The images were sourced from an imbalanced
dataset from a competition. Appropriate metrics were employed to address class
imbalance. The results indicate that EfficientNet-B3 achieved on this task
accuracy of 87.7%, precision of 87.8%, revocation of 87.8% and F1-Score of
87.7%. These findings suggest that EfficientNet-B3 could be a valuable tool to
support Digital Agriculture.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 17:50:01 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Junior",
"Ademir G. Costa",
""
],
[
"da Silva",
"Fábio S.",
""
],
[
"Rios",
"Ricardo",
""
]
] | TITLE: Deep Learning-Based Transfer Learning for Classification of Cassava
Disease
ABSTRACT: This paper presents a performance comparison among four Convolutional Neural
Network architectures (EfficientNet-B3, InceptionV3, ResNet50, and VGG16) for
classifying cassava disease images. The images were sourced from an imbalanced
dataset from a competition. Appropriate metrics were employed to address class
imbalance. The results indicate that, on this task, EfficientNet-B3 achieved an
accuracy of 87.7%, a precision of 87.8%, a recall of 87.8%, and an F1-score of
87.7%. These findings suggest that EfficientNet-B3 could be a valuable tool to
support Digital Agriculture.
|
2502.20963 | Gerion Spielberger | Gerion Spielberger, Florian M. Artinger, Jochen Reb and Rudolf
Kerschreiter | Retrieval Augmented Generation for Topic Modeling in Organizational
Research: An Introduction with Empirical Demonstration | 30 pages, 4 figures | null | null | null | cs.LG cs.AI econ.GN q-fin.EC | http://creativecommons.org/licenses/by/4.0/ | Analyzing textual data is the cornerstone of qualitative research. While
traditional methods such as grounded theory and content analysis are widely
used, they are labor-intensive and time-consuming. Topic modeling offers an
automated complement. Yet, existing approaches, including LLM-based topic
modeling, still struggle with issues such as high data preprocessing
requirements, interpretability, and reliability. This paper introduces Agentic
Retrieval-Augmented Generation (Agentic RAG) as a method for topic modeling
with LLMs. It integrates three key components: (1) retrieval, enabling
automated access to external data beyond an LLM's pre-trained knowledge; (2)
generation, leveraging LLM capabilities for text synthesis; and (3)
agent-driven learning, iteratively refining retrieval and query formulation
processes. To empirically validate Agentic RAG for topic modeling, we reanalyze
a Twitter/X dataset, previously examined by Mu et al. (2024a). Our findings
demonstrate that the approach is more efficient and interpretable, while
achieving higher reliability and validity than both the standard machine
learning approach and LLM prompting for topic modeling. These results highlight
Agentic RAG's ability to generate
semantically relevant and reproducible topics, positioning it as a robust,
scalable, and transparent alternative for AI-driven qualitative research in
leadership, managerial, and organizational research.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 11:25:11 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 12:00:26 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Spielberger",
"Gerion",
""
],
[
"Artinger",
"Florian M.",
""
],
[
"Reb",
"Jochen",
""
],
[
"Kerschreiter",
"Rudolf",
""
]
] | TITLE: Retrieval Augmented Generation for Topic Modeling in Organizational
Research: An Introduction with Empirical Demonstration
ABSTRACT: Analyzing textual data is the cornerstone of qualitative research. While
traditional methods such as grounded theory and content analysis are widely
used, they are labor-intensive and time-consuming. Topic modeling offers an
automated complement. Yet, existing approaches, including LLM-based topic
modeling, still struggle with issues such as high data preprocessing
requirements, interpretability, and reliability. This paper introduces Agentic
Retrieval-Augmented Generation (Agentic RAG) as a method for topic modeling
with LLMs. It integrates three key components: (1) retrieval, enabling
automated access to external data beyond an LLM's pre-trained knowledge; (2)
generation, leveraging LLM capabilities for text synthesis; and (3)
agent-driven learning, iteratively refining retrieval and query formulation
processes. To empirically validate Agentic RAG for topic modeling, we reanalyze
a Twitter/X dataset, previously examined by Mu et al. (2024a). Our findings
demonstrate that the approach is more efficient and interpretable, while
achieving higher reliability and validity than both the standard machine
learning approach and LLM prompting for topic modeling. These results highlight
Agentic RAG's ability to generate
semantically relevant and reproducible topics, positioning it as a robust,
scalable, and transparent alternative for AI-driven qualitative research in
leadership, managerial, and organizational research.
|
2503.00402 | Song Yu | Song Yu, Shengyuan Lin, Shufeng Gong, Yongqing Xie, Ruicheng Liu,
Yijie Zhou, Ji Sun, Yanfeng Zhang, Guoliang Li, Ge Yu | A Topology-Aware Localized Update Strategy for Graph-Based ANN Index | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The graph-based index has been widely adopted to meet the demand for
approximate nearest neighbor search (ANNS) for high-dimensional vectors.
However, in dynamic scenarios involving frequent vector insertions and
deletions, existing systems improve update throughput by adopting a batch
update method, yet a large batch size leads to significant degradation in
search accuracy.
This work aims to improve the performance of graph-based ANNS systems in
small-batch update scenarios, while maintaining high search efficiency and
accuracy. We identify two key issues in existing batch update systems for
small-batch updates. First, the system needs to scan the entire index file to
identify and update the affected vertices, resulting in excessive unnecessary
I/O. Second, updating the affected vertices introduces many new neighbors,
frequently triggering neighbor pruning. To address these issues, we propose a
topology-aware localized update strategy for graph-based ANN index. We
introduce a lightweight index topology to identify affected vertices
efficiently and employ a localized update strategy that modifies only the
affected vertices in the index file. To mitigate frequent heavy neighbor
pruning, we propose a similar neighbor replacement strategy, which connects the
affected vertices to only a small number (typically one) of the most similar
outgoing neighbors of the deleted vertex during repair. Based on extensive
experiments on real-world datasets, our update strategy achieves 2.47X-6.45X
higher update throughput than the state-of-the-art system FreshDiskANN while
maintaining high search efficiency and accuracy.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 08:33:43 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:54:02 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yu",
"Song",
""
],
[
"Lin",
"Shengyuan",
""
],
[
"Gong",
"Shufeng",
""
],
[
"Xie",
"Yongqing",
""
],
[
"Liu",
"Ruicheng",
""
],
[
"Zhou",
"Yijie",
""
],
[
"Sun",
"Ji",
""
],
[
"Zhang",
"Yanfeng",
""
],
[
"Li",
"Guoliang",
""
],
[
"Yu",
"Ge",
""
]
] | TITLE: A Topology-Aware Localized Update Strategy for Graph-Based ANN Index
ABSTRACT: The graph-based index has been widely adopted to meet the demand for
approximate nearest neighbor search (ANNS) for high-dimensional vectors.
However, in dynamic scenarios involving frequent vector insertions and
deletions, existing systems improve update throughput by adopting a batch
update method, yet a large batch size leads to significant degradation in
search accuracy.
This work aims to improve the performance of graph-based ANNS systems in
small-batch update scenarios, while maintaining high search efficiency and
accuracy. We identify two key issues in existing batch update systems for
small-batch updates. First, the system needs to scan the entire index file to
identify and update the affected vertices, resulting in excessive unnecessary
I/O. Second, updating the affected vertices introduces many new neighbors,
frequently triggering neighbor pruning. To address these issues, we propose a
topology-aware localized update strategy for graph-based ANN index. We
introduce a lightweight index topology to identify affected vertices
efficiently and employ a localized update strategy that modifies only the
affected vertices in the index file. To mitigate frequent heavy neighbor
pruning, we propose a similar neighbor replacement strategy, which connects the
affected vertices to only a small number (typically one) of the most similar
outgoing neighbors of the deleted vertex during repair. Based on extensive
experiments on real-world datasets, our update strategy achieves 2.47X-6.45X
higher update throughput than the state-of-the-art system FreshDiskANN while
maintaining high search efficiency and accuracy.
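As a rough illustration of the similar-neighbor replacement idea, the sketch below reconnects every vertex that pointed to a deleted vertex to a single, most similar outgoing neighbor of that deleted vertex. The adjacency-list layout, cosine similarity, similarity target (the affected vertex), and one-replacement rule are assumptions for illustration, not the paper's on-disk procedure.
```python
import numpy as np

def repair_after_delete(graph: dict, vectors: dict, deleted: int) -> None:
    """graph: vertex id -> list of out-neighbor ids; vectors: vertex id -> np.ndarray.
    Reconnect affected vertices using the deleted vertex's out-neighbors instead of
    running a full neighbor-pruning pass."""
    candidates = [c for c in graph.get(deleted, []) if c != deleted]
    for v, neighbors in graph.items():
        if v == deleted or deleted not in neighbors:
            continue
        neighbors.remove(deleted)
        if not candidates:
            continue
        # choose the candidate most similar to the affected vertex (cosine similarity)
        sims = [np.dot(vectors[v], vectors[c]) /
                (np.linalg.norm(vectors[v]) * np.linalg.norm(vectors[c]) + 1e-12)
                for c in candidates]
        best = candidates[int(np.argmax(sims))]
        if best != v and best not in neighbors:
            neighbors.append(best)
    graph.pop(deleted, None)
```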
|
2503.00741 | WenHui Lei | Henrui Tian, Wenhui Lei, Linrui Dai, Hanyu Chen, Xiaofan Zhang | LesionDiffusion: Towards Text-controlled General Lesion Synthesis | 10 pages, 4 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Fully-supervised lesion recognition methods in medical imaging face
challenges due to the reliance on large annotated datasets, which are expensive
and difficult to collect. To address this, synthetic lesion generation has
become a promising approach. However, existing models struggle with
scalability, fine-grained control over lesion attributes, and the generation of
complex structures. We propose LesionDiffusion, a text-controllable lesion
synthesis framework for 3D CT imaging that generates both lesions and
corresponding masks. By utilizing a structured lesion report template, our
model provides greater control over lesion attributes and supports a wider
variety of lesion types. We introduce a dataset of 1,505 annotated CT scans
with paired lesion masks and structured reports, covering 14 lesion types
across 8 organs. LesionDiffusion consists of two components: a lesion mask
synthesis network (LMNet) and a lesion inpainting network (LINet), both guided
by lesion attributes and image features. Extensive experiments demonstrate that
LesionDiffusion significantly improves segmentation performance, with strong
generalization to unseen lesion types and organs, outperforming current
state-of-the-art models. Code will be available at
https://github.com/HengruiTianSJTU/LesionDiffusion.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 05:36:04 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 03:44:10 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 11:31:57 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Tian",
"Henrui",
""
],
[
"Lei",
"Wenhui",
""
],
[
"Dai",
"Linrui",
""
],
[
"Chen",
"Hanyu",
""
],
[
"Zhang",
"Xiaofan",
""
]
] | TITLE: LesionDiffusion: Towards Text-controlled General Lesion Synthesis
ABSTRACT: Fully-supervised lesion recognition methods in medical imaging face
challenges due to the reliance on large annotated datasets, which are expensive
and difficult to collect. To address this, synthetic lesion generation has
become a promising approach. However, existing models struggle with
scalability, fine-grained control over lesion attributes, and the generation of
complex structures. We propose LesionDiffusion, a text-controllable lesion
synthesis framework for 3D CT imaging that generates both lesions and
corresponding masks. By utilizing a structured lesion report template, our
model provides greater control over lesion attributes and supports a wider
variety of lesion types. We introduce a dataset of 1,505 annotated CT scans
with paired lesion masks and structured reports, covering 14 lesion types
across 8 organs. LesionDiffusion consists of two components: a lesion mask
synthesis network (LMNet) and a lesion inpainting network (LINet), both guided
by lesion attributes and image features. Extensive experiments demonstrate that
LesionDiffusion significantly improves segmentation performance, with strong
generalization to unseen lesion types and organs, outperforming current
state-of-the-art models. Code will be available at
https://github.com/HengruiTianSJTU/LesionDiffusion.
|
2503.00847 | Johannes Daxenberger | Moritz Altemeyer, Steffen Eger, Johannes Daxenberger, Tim Altendorf,
Philipp Cimiano, Benjamin Schiller | Argument Summarization and its Evaluation in the Era of Large Language
Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Large Language Models (LLMs) have revolutionized various Natural Language
Generation (NLG) tasks, including Argument Summarization (ArgSum), a key
subfield of Argument Mining (AM). This paper investigates the integration of
state-of-the-art LLMs into ArgSum, including for its evaluation. In particular,
we propose a novel prompt-based evaluation scheme, and validate it through a
novel human benchmark dataset. Our work makes three main contributions: (i) the
integration of LLMs into existing ArgSum frameworks, (ii) the development of a
new LLM-based ArgSum system, benchmarked against prior methods, and (iii) the
introduction of an advanced LLM-based evaluation scheme. We demonstrate that
the use of LLMs substantially improves both the generation and evaluation of
argument summaries, achieving state-of-the-art results and advancing the field
of ArgSum.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 10:49:10 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 20:25:48 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Altemeyer",
"Moritz",
""
],
[
"Eger",
"Steffen",
""
],
[
"Daxenberger",
"Johannes",
""
],
[
"Altendorf",
"Tim",
""
],
[
"Cimiano",
"Philipp",
""
],
[
"Schiller",
"Benjamin",
""
]
] | TITLE: Argument Summarization and its Evaluation in the Era of Large Language
Models
ABSTRACT: Large Language Models (LLMs) have revolutionized various Natural Language
Generation (NLG) tasks, including Argument Summarization (ArgSum), a key
subfield of Argument Mining (AM). This paper investigates the integration of
state-of-the-art LLMs into ArgSum, including for its evaluation. In particular,
we propose a novel prompt-based evaluation scheme, and validate it through a
novel human benchmark dataset. Our work makes three main contributions: (i) the
integration of LLMs into existing ArgSum frameworks, (ii) the development of a
new LLM-based ArgSum system, benchmarked against prior methods, and (iii) the
introduction of an advanced LLM-based evaluation scheme. We demonstrate that
the use of LLMs substantially improves both the generation and evaluation of
argument summaries, achieving state-of-the-art results and advancing the field
of ArgSum.
|
2503.01843 | Dayal Singh Kalra | Dayal Singh Kalra, John Kirchenbauer, Maissam Barkeshli, Tom Goldstein | When Can You Get Away with Low Memory Adam? | Acknowledgement updates and minor writing edits | null | null | null | cs.LG cond-mat.dis-nn stat.ML | http://creativecommons.org/licenses/by/4.0/ | Adam is the go-to optimizer for training modern machine learning models, but
it requires additional memory to maintain the moving averages of the gradients
and their squares. While various low-memory optimizers have been proposed that
sometimes match the performance of Adam, their lack of reliability has left
Adam as the default choice. In this work, we apply a simple layer-wise
Signal-to-Noise Ratio (SNR) analysis to quantify when second-moment tensors can
be effectively replaced by their means across different dimensions. Our SNR
analysis reveals how architecture, training hyperparameters, and dataset
properties impact compressibility along Adam's trajectory, naturally leading to
$\textit{SlimAdam}$, a memory-efficient Adam variant. $\textit{SlimAdam}$
compresses the second moments along dimensions with high SNR when feasible, and
leaves them uncompressed when compression would be detrimental. Through experiments across a
diverse set of architectures and training scenarios, we show that
$\textit{SlimAdam}$ matches Adam's performance and stability while saving up to
$98\%$ of total second moments. Code for $\textit{SlimAdam}$ is available at
https://github.com/dayal-kalra/low-memory-adam.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:59:40 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 18:38:33 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 18:55:25 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kalra",
"Dayal Singh",
""
],
[
"Kirchenbauer",
"John",
""
],
[
"Barkeshli",
"Maissam",
""
],
[
"Goldstein",
"Tom",
""
]
] | TITLE: When Can You Get Away with Low Memory Adam?
ABSTRACT: Adam is the go-to optimizer for training modern machine learning models, but
it requires additional memory to maintain the moving averages of the gradients
and their squares. While various low-memory optimizers have been proposed that
sometimes match the performance of Adam, their lack of reliability has left
Adam as the default choice. In this work, we apply a simple layer-wise
Signal-to-Noise Ratio (SNR) analysis to quantify when second-moment tensors can
be effectively replaced by their means across different dimensions. Our SNR
analysis reveals how architecture, training hyperparameters, and dataset
properties impact compressibility along Adam's trajectory, naturally leading to
$\textit{SlimAdam}$, a memory-efficient Adam variant. $\textit{SlimAdam}$
compresses the second moments along dimensions with high SNR when feasible, and
leaves them uncompressed when compression would be detrimental. Through experiments across a
diverse set of architectures and training scenarios, we show that
$\textit{SlimAdam}$ matches Adam's performance and stability while saving up to
$98\%$ of total second moments. Code for $\textit{SlimAdam}$ is available at
https://github.com/dayal-kalra/low-memory-adam.
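A minimal sketch of the kind of layer-wise SNR test described above, assuming the usual squared-mean-over-variance definition of SNR and a hypothetical threshold; the paper's exact estimator and compression rule may differ, and only the decision step is shown.
```python
import torch

def snr_along_dim(v: torch.Tensor, dim: int) -> torch.Tensor:
    """SNR of a second-moment tensor along one dimension: squared mean / variance."""
    mean = v.mean(dim=dim, keepdim=True)
    var = v.var(dim=dim, keepdim=True) + 1e-12
    return mean.pow(2) / var

def compress_if_high_snr(v: torch.Tensor, dim: int, threshold: float = 10.0) -> torch.Tensor:
    """Replace second moments by their mean along `dim` when the median SNR is high,
    i.e. the entries are nearly interchangeable; otherwise keep them untouched."""
    if snr_along_dim(v, dim).median() > threshold:
        return v.mean(dim=dim, keepdim=True).expand_as(v)
    return v
```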
|
2503.02321 | Haishan Huang | Pengchen Liang, Leijun Shi, Huiping Yao, Bin Pu, Jianguo Chen, Lei
Zhao, Haishan Huang, Zhuangzhuang Chen, Zhaozhao Xu, Lite Xu, Qing Chang,
Yiwei Li | Semantic Prior Distillation with Vision Foundation Model for Enhanced
Rapid Bone Scintigraphy Image Restoration | 12 pages, 9 figures, 8 tables | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rapid bone scintigraphy is an essential tool for diagnosing skeletal diseases
and tumor metastasis in pediatric patients, as it reduces scan time and
minimizes patient discomfort. However, rapid scans often result in poor image
quality, potentially affecting diagnosis due to reduced resolution and detail,
which make it challenging to identify and evaluate finer anatomical structures.
To address this issue, we propose the first application of SAM-based semantic
priors for medical image restoration, leveraging the Segment Anything Model
(SAM) to enhance rapid bone scintigraphy images in pediatric populations. Our
method comprises two cascaded networks, $f^{IR1}$ and $f^{IR2}$, augmented by
three key modules: a Semantic Prior Integration (SPI) module, a Semantic
Knowledge Distillation (SKD) module, and a Semantic Consistency Module (SCM).
The SPI and SKD modules incorporate domain-specific semantic information from a
fine-tuned SAM, while the SCM maintains consistent semantic feature
representation throughout the cascaded networks. In addition, we will release a
novel Rapid Bone Scintigraphy dataset called RBS, the first dataset dedicated
to rapid bone scintigraphy image restoration in pediatric patients. RBS
consists of 137 pediatric patients aged between 0.5 and 16 years who underwent
both standard and rapid bone scans. The dataset includes scans performed at 20
cm/min (standard) and 40 cm/min (rapid), representing a $2\times$ acceleration.
We conducted extensive experiments on both the publicly available endoscopic
dataset and RBS. The results demonstrate that our method outperforms all
existing methods across various metrics, including PSNR, SSIM, FID, and LPIPS.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:23:22 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 05:23:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Liang",
"Pengchen",
""
],
[
"Shi",
"Leijun",
""
],
[
"Yao",
"Huiping",
""
],
[
"Pu",
"Bin",
""
],
[
"Chen",
"Jianguo",
""
],
[
"Zhao",
"Lei",
""
],
[
"Huang",
"Haishan",
""
],
[
"Chen",
"Zhuangzhuang",
""
],
[
"Xu",
"Zhaozhao",
""
],
[
"Xu",
"Lite",
""
],
[
"Chang",
"Qing",
""
],
[
"Li",
"Yiwei",
""
]
] | TITLE: Semantic Prior Distillation with Vision Foundation Model for Enhanced
Rapid Bone Scintigraphy Image Restoration
ABSTRACT: Rapid bone scintigraphy is an essential tool for diagnosing skeletal diseases
and tumor metastasis in pediatric patients, as it reduces scan time and
minimizes patient discomfort. However, rapid scans often result in poor image
quality, potentially affecting diagnosis due to reduced resolution and detail,
which make it challenging to identify and evaluate finer anatomical structures.
To address this issue, we propose the first application of SAM-based semantic
priors for medical image restoration, leveraging the Segment Anything Model
(SAM) to enhance rapid bone scintigraphy images in pediatric populations. Our
method comprises two cascaded networks, $f^{IR1}$ and $f^{IR2}$, augmented by
three key modules: a Semantic Prior Integration (SPI) module, a Semantic
Knowledge Distillation (SKD) module, and a Semantic Consistency Module (SCM).
The SPI and SKD modules incorporate domain-specific semantic information from a
fine-tuned SAM, while the SCM maintains consistent semantic feature
representation throughout the cascaded networks. In addition, we will release a
novel Rapid Bone Scintigraphy dataset called RBS, the first dataset dedicated
to rapid bone scintigraphy image restoration in pediatric patients. RBS
consists of 137 pediatric patients aged between 0.5 and 16 years who underwent
both standard and rapid bone scans. The dataset includes scans performed at 20
cm/min (standard) and 40 cm/min (rapid), representing a $2\times$ acceleration.
We conducted extensive experiments on both the publicly available endoscopic
dataset and RBS. The results demonstrate that our method outperforms all
existing methods across various metrics, including PSNR, SSIM, FID, and LPIPS.
|
2503.04843 | Herv\'e Turlier | Alessandro Pasqui, Sajjad Mahdavi, Benoit Vianay, Alexandra Colin,
Alex McDougall, R\'emi Dumollard, Yekaterina A. Miroshnikova, Elsa Labrune
and Herv\'e Turlier | Self-Supervised Z-Slice Augmentation for 3D Bio-Imaging via Knowledge
Distillation | 25 pages, 5 figures, 1 table | null | null | null | cs.CV cs.AI eess.IV q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Three-dimensional biological microscopy has significantly advanced our
understanding of complex biological structures. However, limitations due to
microscopy techniques, sample properties or phototoxicity often result in poor
z-resolution, hindering accurate cellular measurements. Here, we introduce
ZAugNet, a fast, accurate, and self-supervised deep learning method for
enhancing z-resolution in biological images. By performing nonlinear
interpolation between consecutive slices, ZAugNet effectively doubles
resolution with each iteration. When compared across several microscopy modalities
and biological objects, it outperforms competing methods on most metrics. Our
method leverages a generative adversarial network (GAN) architecture combined
with knowledge distillation to maximize prediction speed without compromising
accuracy. We also developed ZAugNet+, an extended version enabling continuous
interpolation at arbitrary distances, making it particularly useful for
datasets with nonuniform slice spacing. Both ZAugNet and ZAugNet+ provide
high-performance, scalable z-slice augmentation solutions for large-scale 3D
imaging. They are available as open-source frameworks in PyTorch, with an
intuitive Colab notebook interface for easy access by the scientific community.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:50:35 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 21:52:46 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Pasqui",
"Alessandro",
""
],
[
"Mahdavi",
"Sajjad",
""
],
[
"Vianay",
"Benoit",
""
],
[
"Colin",
"Alexandra",
""
],
[
"McDougall",
"Alex",
""
],
[
"Dumollard",
"Rémi",
""
],
[
"Miroshnikova",
"Yekaterina A.",
""
],
[
"Labrune",
"Elsa",
""
],
[
"Turlier",
"Hervé",
""
]
] | TITLE: Self-Supervised Z-Slice Augmentation for 3D Bio-Imaging via Knowledge
Distillation
ABSTRACT: Three-dimensional biological microscopy has significantly advanced our
understanding of complex biological structures. However, limitations due to
microscopy techniques, sample properties or phototoxicity often result in poor
z-resolution, hindering accurate cellular measurements. Here, we introduce
ZAugNet, a fast, accurate, and self-supervised deep learning method for
enhancing z-resolution in biological images. By performing nonlinear
interpolation between consecutive slices, ZAugNet effectively doubles
resolution with each iteration. When compared across several microscopy modalities
and biological objects, it outperforms competing methods on most metrics. Our
method leverages a generative adversarial network (GAN) architecture combined
with knowledge distillation to maximize prediction speed without compromising
accuracy. We also developed ZAugNet+, an extended version enabling continuous
interpolation at arbitrary distances, making it particularly useful for
datasets with nonuniform slice spacing. Both ZAugNet and ZAugNet+ provide
high-performance, scalable z-slice augmentation solutions for large-scale 3D
imaging. They are available as open-source frameworks in PyTorch, with an
intuitive Colab notebook interface for easy access by the scientific community.
|
2503.05592 | Huatong Song | Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen,
Wayne Xin Zhao, Lei Fang, Ji-Rong Wen | R1-Searcher: Incentivizing the Search Capability in LLMs via
Reinforcement Learning | null | null | null | null | cs.AI cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing Large Reasoning Models (LRMs) have shown the potential of
reinforcement learning (RL) to enhance the complex reasoning capabilities of
Large Language Models~(LLMs). While they achieve remarkable performance on
challenging tasks such as mathematics and coding, they often rely on their
internal knowledge to solve problems, which can be inadequate for
time-sensitive or knowledge-intensive questions, leading to inaccuracies and
hallucinations. To address this, we propose \textbf{R1-Searcher}, a novel
two-stage outcome-based RL approach designed to enhance the search capabilities
of LLMs. This method allows LLMs to autonomously invoke external search systems
to access additional knowledge during the reasoning process. Our framework
relies exclusively on RL, without requiring process rewards or distillation for
a cold start, effectively generalizing to out-of-domain datasets and supporting
both Base and Instruct models. Our experiments demonstrate that our
method significantly outperforms previous strong RAG methods, even when
compared to the closed-source GPT-4o-mini.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 17:14:44 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 08:32:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Song",
"Huatong",
""
],
[
"Jiang",
"Jinhao",
""
],
[
"Min",
"Yingqian",
""
],
[
"Chen",
"Jie",
""
],
[
"Chen",
"Zhipeng",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Fang",
"Lei",
""
],
[
"Wen",
"Ji-Rong",
""
]
] | TITLE: R1-Searcher: Incentivizing the Search Capability in LLMs via
Reinforcement Learning
ABSTRACT: Existing Large Reasoning Models (LRMs) have shown the potential of
reinforcement learning (RL) to enhance the complex reasoning capabilities of
Large Language Models~(LLMs). While they achieve remarkable performance on
challenging tasks such as mathematics and coding, they often rely on their
internal knowledge to solve problems, which can be inadequate for
time-sensitive or knowledge-intensive questions, leading to inaccuracies and
hallucinations. To address this, we propose \textbf{R1-Searcher}, a novel
two-stage outcome-based RL approach designed to enhance the search capabilities
of LLMs. This method allows LLMs to autonomously invoke external search systems
to access additional knowledge during the reasoning process. Our framework
relies exclusively on RL, without requiring process rewards or distillation for
a cold start, effectively generalizing to out-of-domain datasets and supporting
both Base and Instruct models. Our experiments demonstrate that our
method significantly outperforms previous strong RAG methods, even when
compared to the closed-source GPT-4o-mini.
|
2503.07204 | Mona Sheikh Zeinoddin | Mona Sheikh Zeinoddin, Mobarakol Islam, Zafer Tandogdu, Greg Shaw,
Mathew J. Clarkson, Evangelos Mazomenos, Danail Stoyanov | Endo-FASt3r: Endoscopic Foundation model Adaptation for Structure from
motion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate depth and camera pose estimation is essential for achieving
high-quality 3D visualisations in robotic-assisted surgery. Despite recent
advancements in foundation model adaptation to monocular depth estimation of
endoscopic scenes via self-supervised learning (SSL), no prior work has
explored their use for pose estimation. These methods rely on low rank-based
adaptation approaches, which constrain model updates to a low-rank space. We
propose Endo-FASt3r, the first monocular SSL depth and pose estimation
framework that uses foundation models for both tasks. We extend the Reloc3r
relative pose estimation foundation model by designing Reloc3rX, introducing
modifications necessary for convergence in SSL. We also present DoMoRA, a novel
adaptation technique that enables higher-rank updates and faster convergence.
Experiments on the SCARED dataset show that Endo-FASt3r achieves a substantial
$10\%$ improvement in pose estimation and a $2\%$ improvement in depth
estimation over prior work. Similar performance gains on the Hamlyn and
StereoMIS datasets reinforce the generalisability of Endo-FASt3r across
different datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:42:37 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 12:43:19 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 10:21:53 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zeinoddin",
"Mona Sheikh",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Tandogdu",
"Zafer",
""
],
[
"Shaw",
"Greg",
""
],
[
"Clarkson",
"Mathew J.",
""
],
[
"Mazomenos",
"Evangelos",
""
],
[
"Stoyanov",
"Danail",
""
]
] | TITLE: Endo-FASt3r: Endoscopic Foundation model Adaptation for Structure from
motion
ABSTRACT: Accurate depth and camera pose estimation is essential for achieving
high-quality 3D visualisations in robotic-assisted surgery. Despite recent
advancements in foundation model adaptation to monocular depth estimation of
endoscopic scenes via self-supervised learning (SSL), no prior work has
explored their use for pose estimation. These methods rely on low rank-based
adaptation approaches, which constrain model updates to a low-rank space. We
propose Endo-FASt3r, the first monocular SSL depth and pose estimation
framework that uses foundation models for both tasks. We extend the Reloc3r
relative pose estimation foundation model by designing Reloc3rX, introducing
modifications necessary for convergence in SSL. We also present DoMoRA, a novel
adaptation technique that enables higher-rank updates and faster convergence.
Experiments on the SCARED dataset show that Endo-FASt3r achieves a substantial
$10\%$ improvement in pose estimation and a $2\%$ improvement in depth
estimation over prior work. Similar performance gains on the Hamlyn and
StereoMIS datasets reinforce the generalisability of Endo-FASt3r across
different datasets.
|
2503.07604 | Tianhe Lin | Tianhe Lin, Jian Xie, Siyu Yuan, Deqing Yang | Implicit Reasoning in Transformers is Reasoning through Shortcuts | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Test-time compute is emerging as a new paradigm for enhancing language
models' complex multi-step reasoning capabilities, as demonstrated by the
success of OpenAI's o1 and o3, as well as DeepSeek's R1. Compared to explicit
reasoning in test-time compute, implicit reasoning is more inference-efficient,
requiring fewer generated tokens. However, why does the advanced reasoning
capability fail to emerge in the implicit reasoning style? In this work, we
train GPT-2 from scratch on a curated multi-step mathematical reasoning dataset
and conduct analytical experiments to investigate how language models perform
implicit reasoning in multi-step tasks. Our findings reveal: 1) Language models
can perform step-by-step reasoning and achieve high accuracy in both in-domain
and out-of-domain tests via implicit reasoning. However, this capability only
emerges when trained on fixed-pattern data. 2) Conversely, implicit reasoning
abilities emerging from training on unfixed-pattern data tend to overfit a
specific pattern and fail to generalize further. Notably, this limitation is
also observed in state-of-the-art large language models. These findings suggest
that language models acquire implicit reasoning through shortcut learning,
enabling strong performance on tasks with similar patterns while lacking
generalization.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:58:31 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 12:08:17 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Lin",
"Tianhe",
""
],
[
"Xie",
"Jian",
""
],
[
"Yuan",
"Siyu",
""
],
[
"Yang",
"Deqing",
""
]
] | TITLE: Implicit Reasoning in Transformers is Reasoning through Shortcuts
ABSTRACT: Test-time compute is emerging as a new paradigm for enhancing language
models' complex multi-step reasoning capabilities, as demonstrated by the
success of OpenAI's o1 and o3, as well as DeepSeek's R1. Compared to explicit
reasoning in test-time compute, implicit reasoning is more inference-efficient,
requiring fewer generated tokens. However, why does the advanced reasoning
capability fail to emerge in the implicit reasoning style? In this work, we
train GPT-2 from scratch on a curated multi-step mathematical reasoning dataset
and conduct analytical experiments to investigate how language models perform
implicit reasoning in multi-step tasks. Our findings reveal: 1) Language models
can perform step-by-step reasoning and achieve high accuracy in both in-domain
and out-of-domain tests via implicit reasoning. However, this capability only
emerges when trained on fixed-pattern data. 2) Conversely, implicit reasoning
abilities emerging from training on unfixed-pattern data tend to overfit a
specific pattern and fail to generalize further. Notably, this limitation is
also observed in state-of-the-art large language models. These findings suggest
that language models acquire implicit reasoning through shortcut learning,
enabling strong performance on tasks with similar patterns while lacking
generalization.
|
2503.07920 | Holy Lovenia | Samuel Cahyawijaya, Holy Lovenia, Joel Ruben Antony Moniz, Tack Hwa
Wong, Mohammad Rifqi Farhansyah, Thant Thiri Maung, Frederikus Hudi, David
Anugraha, Muhammad Ravi Shulthan Habibi, Muhammad Reza Qorib, Amit Agarwal,
Joseph Marvin Imperial, Hitesh Laxmichand Patel, Vicky Feliren, Bahrul Ilmi
Nasution, Manuel Antonio Rufino, Genta Indra Winata, Rian Adam Rajagede,
Carlos Rafael Catalan, Mohamed Fazli Imam, Priyaranjan Pattnayak, Salsabila
Zahirah Pranida, Kevin Pratama, Yeshil Bangera, Adisai Na-Thalang, Patricia
Nicole Monderin, Yueqi Song, Christian Simon, Lynnette Hui Xian Ng, Richardy
Lobo' Sapan, Taki Hasan Rafi, Bin Wang, Supryadi, Kanyakorn Veerakanjana,
Piyalitt Ittichaiwong, Matthew Theodore Roque, Karissa Vincentio, Takdanai
Kreangphet, Phakphum Artkaew, Kadek Hendrawan Palgunadi, Yanzhi Yu, Rochana
Prih Hastuti, William Nixon, Mithil Bangera, Adrian Xuan Wei Lim, Aye Hninn
Khine, Hanif Muhammad Zhafran, Teddy Ferdinan, Audra Aurora Izzani, Ayushman
Singh, Evan, Jauza Akbar Krito, Michael Anugraha, Fenal Ashokbhai Ilasariya,
Haochen Li, John Amadeo Daniswara, Filbert Aurelian Tjiaranata, Eryawan
Presma Yulianrifat, Can Udomcharoenchaikit, Fadil Risdian Ansori, Mahardika
Krisna Ihsani, Giang Nguyen, Anab Maulana Barik, Dan John Velasco, Rifo Ahmad
Genadi, Saptarshi Saha, Chengwei Wei, Isaiah Flores, Kenneth Ko Han Chen,
Anjela Gail Santos, Wan Shen Lim, Kaung Si Phyo, Tim Santos, Meisyarah
Dwiastuti, Jiayun Luo, Jan Christian Blaise Cruz, Ming Shan Hee, Ikhlasul
Akmal Hanif, M.Alif Al Hakim, Muhammad Rizky Sya'ban, Kun Kerdthaisong,
Lester James V. Miranda, Fajri Koto, Tirana Noor Fatyanosa, Alham Fikri Aji,
Jostin Jerico Rosal, Jun Kevin, Robert Wijaya, Onno P. Kampman, Ruochen
Zhang, B\"orje F. Karlsson, Peerat Limkonchotiwat | Crowdsource, Crawl, or Generate? Creating SEA-VL, a Multicultural
Vision-Language Dataset for Southeast Asia | [SEA-VL Dataset]
https://huggingface.co/collections/SEACrowd/sea-vl-multicultural-vl-dataset-for-southeast-asia-67cf223d0c341d4ba2b236e7
[Appendix J]
https://github.com/SEACrowd/seacrowd.github.io/blob/master/docs/SEA_VL_Appendix_J.pdf | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Southeast Asia (SEA) is a region of extraordinary linguistic and cultural
diversity, yet it remains significantly underrepresented in vision-language
(VL) research. This often results in artificial intelligence (AI) models that
fail to capture SEA cultural nuances. To fill this gap, we present SEA-VL, an
open-source initiative dedicated to developing high-quality, culturally
relevant data for SEA languages. By involving contributors from SEA countries,
SEA-VL aims to ensure better cultural relevance and diversity, fostering
greater inclusivity of underrepresented languages in VL research. Beyond
crowdsourcing, our initiative goes one step further in the exploration of the
automatic collection of culturally relevant images through crawling and image
generation. First, we find that image crawling achieves approximately 85%
cultural relevance while being more cost- and time-efficient than
crowdsourcing. Second, despite the substantial progress in generative vision
models, synthetic images remain unreliable in accurately reflecting SEA
cultures. The generated images often fail to reflect the nuanced traditions and
cultural contexts of the region. Collectively, we gather 1.28M SEA
culturally-relevant images, more than 50 times larger than other existing
datasets. Through SEA-VL, we aim to bridge the representation gap in SEA,
fostering the development of more inclusive AI systems that authentically
represent diverse cultures across SEA.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 23:54:52 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 11:34:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Cahyawijaya",
"Samuel",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Moniz",
"Joel Ruben Antony",
""
],
[
"Wong",
"Tack Hwa",
""
],
[
"Farhansyah",
"Mohammad Rifqi",
""
],
[
"Maung",
"Thant Thiri",
""
],
[
"Hudi",
"Frederikus",
""
],
[
"Anugraha",
"David",
""
],
[
"Habibi",
"Muhammad Ravi Shulthan",
""
],
[
"Qorib",
"Muhammad Reza",
""
],
[
"Agarwal",
"Amit",
""
],
[
"Imperial",
"Joseph Marvin",
""
],
[
"Patel",
"Hitesh Laxmichand",
""
],
[
"Feliren",
"Vicky",
""
],
[
"Nasution",
"Bahrul Ilmi",
""
],
[
"Rufino",
"Manuel Antonio",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Rajagede",
"Rian Adam",
""
],
[
"Catalan",
"Carlos Rafael",
""
],
[
"Imam",
"Mohamed Fazli",
""
],
[
"Pattnayak",
"Priyaranjan",
""
],
[
"Pranida",
"Salsabila Zahirah",
""
],
[
"Pratama",
"Kevin",
""
],
[
"Bangera",
"Yeshil",
""
],
[
"Na-Thalang",
"Adisai",
""
],
[
"Monderin",
"Patricia Nicole",
""
],
[
"Song",
"Yueqi",
""
],
[
"Simon",
"Christian",
""
],
[
"Ng",
"Lynnette Hui Xian",
""
],
[
"Sapan",
"Richardy Lobo'",
""
],
[
"Rafi",
"Taki Hasan",
""
],
[
"Wang",
"Bin",
""
],
[
"Supryadi",
"",
""
],
[
"Veerakanjana",
"Kanyakorn",
""
],
[
"Ittichaiwong",
"Piyalitt",
""
],
[
"Roque",
"Matthew Theodore",
""
],
[
"Vincentio",
"Karissa",
""
],
[
"Kreangphet",
"Takdanai",
""
],
[
"Artkaew",
"Phakphum",
""
],
[
"Palgunadi",
"Kadek Hendrawan",
""
],
[
"Yu",
"Yanzhi",
""
],
[
"Hastuti",
"Rochana Prih",
""
],
[
"Nixon",
"William",
""
],
[
"Bangera",
"Mithil",
""
],
[
"Lim",
"Adrian Xuan Wei",
""
],
[
"Khine",
"Aye Hninn",
""
],
[
"Zhafran",
"Hanif Muhammad",
""
],
[
"Ferdinan",
"Teddy",
""
],
[
"Izzani",
"Audra Aurora",
""
],
[
"Singh",
"Ayushman",
""
],
[
"Evan",
"",
""
],
[
"Krito",
"Jauza Akbar",
""
],
[
"Anugraha",
"Michael",
""
],
[
"Ilasariya",
"Fenal Ashokbhai",
""
],
[
"Li",
"Haochen",
""
],
[
"Daniswara",
"John Amadeo",
""
],
[
"Tjiaranata",
"Filbert Aurelian",
""
],
[
"Yulianrifat",
"Eryawan Presma",
""
],
[
"Udomcharoenchaikit",
"Can",
""
],
[
"Ansori",
"Fadil Risdian",
""
],
[
"Ihsani",
"Mahardika Krisna",
""
],
[
"Nguyen",
"Giang",
""
],
[
"Barik",
"Anab Maulana",
""
],
[
"Velasco",
"Dan John",
""
],
[
"Genadi",
"Rifo Ahmad",
""
],
[
"Saha",
"Saptarshi",
""
],
[
"Wei",
"Chengwei",
""
],
[
"Flores",
"Isaiah",
""
],
[
"Chen",
"Kenneth Ko Han",
""
],
[
"Santos",
"Anjela Gail",
""
],
[
"Lim",
"Wan Shen",
""
],
[
"Phyo",
"Kaung Si",
""
],
[
"Santos",
"Tim",
""
],
[
"Dwiastuti",
"Meisyarah",
""
],
[
"Luo",
"Jiayun",
""
],
[
"Cruz",
"Jan Christian Blaise",
""
],
[
"Hee",
"Ming Shan",
""
],
[
"Hanif",
"Ikhlasul Akmal",
""
],
[
"Hakim",
"M. Alif Al",
""
],
[
"Sya'ban",
"Muhammad Rizky",
""
],
[
"Kerdthaisong",
"Kun",
""
],
[
"Miranda",
"Lester James V.",
""
],
[
"Koto",
"Fajri",
""
],
[
"Fatyanosa",
"Tirana Noor",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Rosal",
"Jostin Jerico",
""
],
[
"Kevin",
"Jun",
""
],
[
"Wijaya",
"Robert",
""
],
[
"Kampman",
"Onno P.",
""
],
[
"Zhang",
"Ruochen",
""
],
[
"Karlsson",
"Börje F.",
""
],
[
"Limkonchotiwat",
"Peerat",
""
]
] | TITLE: Crowdsource, Crawl, or Generate? Creating SEA-VL, a Multicultural
Vision-Language Dataset for Southeast Asia
ABSTRACT: Southeast Asia (SEA) is a region of extraordinary linguistic and cultural
diversity, yet it remains significantly underrepresented in vision-language
(VL) research. This often results in artificial intelligence (AI) models that
fail to capture SEA cultural nuances. To fill this gap, we present SEA-VL, an
open-source initiative dedicated to developing high-quality, culturally
relevant data for SEA languages. By involving contributors from SEA countries,
SEA-VL aims to ensure better cultural relevance and diversity, fostering
greater inclusivity of underrepresented languages in VL research. Beyond
crowdsourcing, our initiative goes one step further in the exploration of the
automatic collection of culturally relevant images through crawling and image
generation. First, we find that image crawling achieves approximately 85%
cultural relevance while being more cost- and time-efficient than
crowdsourcing. Second, despite the substantial progress in generative vision
models, synthetic images remain unreliable in accurately reflecting SEA
cultures. The generated images often fail to reflect the nuanced traditions and
cultural contexts of the region. Collectively, we gather 1.28M SEA
culturally-relevant images, more than 50 times larger than other existing
datasets. Through SEA-VL, we aim to bridge the representation gap in SEA,
fostering the development of more inclusive AI systems that authentically
represent diverse cultures across SEA.
|
2503.08516 | Yujie Gao | Jianfu Zhang and Yujie Gao and Jiahui Zhan and Wentao Wang and Yiyi
Zhang and Haohua Zhao and Liqing Zhang | High-Quality 3D Head Reconstruction from Any Single Portrait Image | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce a novel high-fidelity 3D head reconstruction
method from a single portrait image, regardless of perspective, expression, or
accessories. Despite significant efforts in adapting 2D generative models for
novel view synthesis and 3D optimization, most methods struggle to produce
high-quality 3D portraits. The lack of crucial information, such as identity,
expression, hair, and accessories, limits these approaches in generating
realistic 3D head models. To address these challenges, we construct a new
high-quality dataset containing 227 sequences of digital human portraits
captured from 96 different perspectives, totalling 21,792 frames, featuring
diverse expressions and accessories. To further improve performance, we
integrate identity and expression information into the multi-view diffusion
process to enhance facial consistency across views. Specifically, we apply
identity- and expression-aware guidance and supervision to extract accurate
facial representations, which guide the model and enforce objective functions
to ensure high identity and expression consistency during generation. Finally,
we generate an orbital video around the portrait consisting of 96 multi-view
frames, which can be used for 3D portrait model reconstruction. Our method
demonstrates robust performance across challenging scenarios, including
side-face angles and complex accessories.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 15:08:37 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 12:58:46 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Jianfu",
""
],
[
"Gao",
"Yujie",
""
],
[
"Zhan",
"Jiahui",
""
],
[
"Wang",
"Wentao",
""
],
[
"Zhang",
"Yiyi",
""
],
[
"Zhao",
"Haohua",
""
],
[
"Zhang",
"Liqing",
""
]
] | TITLE: High-Quality 3D Head Reconstruction from Any Single Portrait Image
ABSTRACT: In this work, we introduce a novel high-fidelity 3D head reconstruction
method from a single portrait image, regardless of perspective, expression, or
accessories. Despite significant efforts in adapting 2D generative models for
novel view synthesis and 3D optimization, most methods struggle to produce
high-quality 3D portraits. The lack of crucial information, such as identity,
expression, hair, and accessories, limits these approaches in generating
realistic 3D head models. To address these challenges, we construct a new
high-quality dataset containing 227 sequences of digital human portraits
captured from 96 different perspectives, totalling 21,792 frames, featuring
diverse expressions and accessories. To further improve performance, we
integrate identity and expression information into the multi-view diffusion
process to enhance facial consistency across views. Specifically, we apply
identity- and expression-aware guidance and supervision to extract accurate
facial representations, which guide the model and enforce objective functions
to ensure high identity and expression consistency during generation. Finally,
we generate an orbital video around the portrait consisting of 96 multi-view
frames, which can be used for 3D portrait model reconstruction. Our method
demonstrates robust performance across challenging scenarios, including
side-face angles and complex accessories.
|
2503.09033 | Rui Shi | Rui Shi, Xiaodong Yu, Shengming Wang, Yijia Zhang, Lu Xu, Peng Pan,
Chunlai Ma | RFUAV: A Benchmark Dataset for Unmanned Aerial Vehicle Detection and
Identification | 23 pages, 13 figures, conference | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we propose RFUAV as a new benchmark dataset for
radio-frequency based (RF-based) unmanned aerial vehicle (UAV) identification
and address the following challenges: Firstly, many existing datasets feature a
restricted variety of drone types and insufficient volumes of raw data, which
fail to meet the demands of practical applications. Secondly, existing datasets
often lack raw data covering a broad range of signal-to-noise ratios (SNR), or
do not provide tools for transforming raw data to different SNR levels. This
limitation undermines the validity of model training and evaluation. Lastly,
many existing datasets do not offer open-access evaluation tools, leading to a
lack of unified evaluation standards in current research within this field.
RFUAV comprises approximately 1.3 TB of raw frequency data collected from 37
distinct UAVs using the Universal Software Radio Peripheral (USRP) device in
real-world environments. Through in-depth analysis of the RF data in RFUAV, we
define a drone feature sequence called RF drone fingerprint, which aids in
distinguishing drone signals. In addition to the dataset, RFUAV provides a
baseline preprocessing method and model evaluation tools. Rigorous experiments
demonstrate that these preprocessing methods achieve state-of-the-art (SOTA)
performance using the provided evaluation tools. The RFUAV dataset and baseline
implementation are publicly available at https://github.com/kitoweeknd/RFUAV/.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:46:09 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 03:28:48 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Shi",
"Rui",
""
],
[
"Yu",
"Xiaodong",
""
],
[
"Wang",
"Shengming",
""
],
[
"Zhang",
"Yijia",
""
],
[
"Xu",
"Lu",
""
],
[
"Pan",
"Peng",
""
],
[
"Ma",
"Chunlai",
""
]
] | TITLE: RFUAV: A Benchmark Dataset for Unmanned Aerial Vehicle Detection and
Identification
ABSTRACT: In this paper, we propose RFUAV as a new benchmark dataset for
radio-frequency based (RF-based) unmanned aerial vehicle (UAV) identification
and address the following challenges: Firstly, many existing datasets feature a
restricted variety of drone types and insufficient volumes of raw data, which
fail to meet the demands of practical applications. Secondly, existing datasets
often lack raw data covering a broad range of signal-to-noise ratios (SNR), or
do not provide tools for transforming raw data to different SNR levels. This
limitation undermines the validity of model training and evaluation. Lastly,
many existing datasets do not offer open-access evaluation tools, leading to a
lack of unified evaluation standards in current research within this field.
RFUAV comprises approximately 1.3 TB of raw frequency data collected from 37
distinct UAVs using the Universal Software Radio Peripheral (USRP) device in
real-world environments. Through in-depth analysis of the RF data in RFUAV, we
define a drone feature sequence called RF drone fingerprint, which aids in
distinguishing drone signals. In addition to the dataset, RFUAV provides a
baseline preprocessing method and model evaluation tools. Rigorous experiments
demonstrate that these preprocessing methods achieve state-of-the-art (SOTA)
performance using the provided evaluation tools. The RFUAV dataset and baseline
implementation are publicly available at https://github.com/kitoweeknd/RFUAV/.
|
2503.09315 | Yihong Huang | Yihong Huang, Chen Chu, Fan Zhang, Fei Chen, Yu Lin, Ruiduan Li,
Zhihao Li | ShuffleGate: An Efficient and Self-Polarizing Feature Selection Method
for Large-Scale Deep Models in Industry | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep models in industrial applications rely on thousands of features for
accurate predictions, such as deep recommendation systems. While new features
are introduced to capture evolving user behavior, outdated or redundant
features often remain, significantly increasing storage and computational
costs. To address this issue, feature selection methods are widely adopted to
identify and remove less important features. However, existing approaches face
two major challenges: (1) they often require complex hyperparameter (Hp)
tuning, making them difficult to employ in practice, and (2) they fail to
produce well-separated feature importance scores, which complicates
straightforward feature removal. Moreover, the impact of removing unimportant
features can only be evaluated through retraining the model, a time-consuming
and resource-intensive process that severely hinders efficient feature
selection.
To solve these challenges, we propose a novel feature selection approach,
ShuffleGate. In particular, it shuffles all feature values across instances
simultaneously and uses a gating mechanism that allows the model to dynamically
learn the weights for combining the original and shuffled inputs. Notably, it
can generate well-separated feature importance scores and estimate the
performance without retraining the model, while introducing only a single Hp.
Experiments on four public datasets show that our approach outperforms
state-of-the-art methods in feature selection for model retraining. Moreover,
it has been successfully integrated into the daily iteration of Bilibili's
search models across various scenarios, where it significantly reduces feature
set size (up to 60%+) and computational resource usage (up to 20%+), while
maintaining comparable performance.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 12:05:03 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 12:35:52 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 05:06:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Huang",
"Yihong",
""
],
[
"Chu",
"Chen",
""
],
[
"Zhang",
"Fan",
""
],
[
"Chen",
"Fei",
""
],
[
"Lin",
"Yu",
""
],
[
"Li",
"Ruiduan",
""
],
[
"Li",
"Zhihao",
""
]
] | TITLE: ShuffleGate: An Efficient and Self-Polarizing Feature Selection Method
for Large-Scale Deep Models in Industry
ABSTRACT: Deep models in industrial applications rely on thousands of features for
accurate predictions, such as deep recommendation systems. While new features
are introduced to capture evolving user behavior, outdated or redundant
features often remain, significantly increasing storage and computational
costs. To address this issue, feature selection methods are widely adopted to
identify and remove less important features. However, existing approaches face
two major challenges: (1) they often require complex hyperparameter (Hp)
tuning, making them difficult to employ in practice, and (2) they fail to
produce well-separated feature importance scores, which complicates
straightforward feature removal. Moreover, the impact of removing unimportant
features can only be evaluated through retraining the model, a time-consuming
and resource-intensive process that severely hinders efficient feature
selection.
To solve these challenges, we propose a novel feature selection approach,
ShuffleGate. In particular, it shuffles all feature values across instances
simultaneously and uses a gating mechanism that allows the model to dynamically
learn the weights for combining the original and shuffled inputs. Notably, it
can generate well-separated feature importance scores and estimate the
performance without retraining the model, while introducing only a single Hp.
Experiments on four public datasets show that our approach outperforms
state-of-the-art methods in feature selection for model retraining. Moreover,
it has been successfully integrated into the daily iteration of Bilibili's
search models across various scenarios, where it significantly reduces feature
set size (up to 60%+) and computational resource usage (up to 20%+), while
maintaining comparable performance.
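The shuffling-plus-gating idea can be sketched as below; the initialization, the role of the single hyperparameter (e.g. a sparsity penalty weight), and other training details are assumptions rather than the authors' exact specification.
```python
import torch
import torch.nn as nn

class ShuffleGateSketch(nn.Module):
    """Per-feature gate that blends each feature with a batch-shuffled copy of
    itself; gate values near 1 mark features the downstream model relies on."""
    def __init__(self, num_features: int):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, num_features)
        # shuffle every feature column independently across the batch
        idx = torch.stack([torch.randperm(x.size(0)) for _ in range(x.size(1))], dim=1)
        shuffled = torch.gather(x, 0, idx.to(x.device))
        g = torch.sigmoid(self.gate_logits)               # per-feature gate in (0, 1)
        return g * x + (1.0 - g) * shuffled               # feed this blend to the model

    def importance(self) -> torch.Tensor:
        """Well-separated scores emerge as the gates polarize toward 0 or 1."""
        return torch.sigmoid(self.gate_logits).detach()
```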
|
2503.09496 | Junjie Zhou | Junjie Zhou, Jiao Tang, Yingli Zuo, Peng Wan, Daoqiang Zhang, Wei Shao | Robust Multimodal Survival Prediction with the Latent Differentiation
Conditional Variational AutoEncoder | Accepted by CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The integrative analysis of histopathological images and genomic data has
received increasing attention for survival prediction of human cancers.
However, the existing studies always hold the assumption that full modalities
are available. As a matter of fact, the cost for collecting genomic data is
high, which sometimes makes genomic data unavailable in testing samples. A
common way of tackling such incompleteness is to generate the genomic
representations from the pathology images. Nevertheless, such a strategy still
faces the following two challenges: (1) The gigapixel whole slide images (WSIs)
are huge and thus hard to represent. (2) It is difficult to generate the
genomic embeddings with diverse function categories in a unified generative
framework. To address the above challenges, we propose a Conditional Latent
Differentiation Variational AutoEncoder (LD-CVAE) for robust multimodal
survival prediction, even with missing genomic data. Specifically, a
Variational Information Bottleneck Transformer (VIB-Trans) module is proposed
to learn compressed pathological representations from the gigapixel WSIs. To
generate different functional genomic features, we develop a novel Latent
Differentiation Variational AutoEncoder (LD-VAE) to learn the common and
specific posteriors for the genomic embeddings with diverse functions. Finally,
we use the product-of-experts technique to integrate the genomic common
posterior and image posterior for the joint latent distribution estimation in
LD-CVAE. We test the effectiveness of our method on five different cancer
datasets, and the experimental results demonstrate its superiority in both
complete and missing modality scenarios.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 15:58:37 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 07:15:08 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhou",
"Junjie",
""
],
[
"Tang",
"Jiao",
""
],
[
"Zuo",
"Yingli",
""
],
[
"Wan",
"Peng",
""
],
[
"Zhang",
"Daoqiang",
""
],
[
"Shao",
"Wei",
""
]
] | TITLE: Robust Multimodal Survival Prediction with the Latent Differentiation
Conditional Variational AutoEncoder
ABSTRACT: The integrative analysis of histopathological images and genomic data has
received increasing attention for survival prediction of human cancers.
However, the existing studies always hold the assumption that full modalities
are available. As a matter of fact, the cost for collecting genomic data is
high, which sometimes makes genomic data unavailable in testing samples. A
common way of tackling such incompleteness is to generate the genomic
representations from the pathology images. Nevertheless, such a strategy still
faces the following two challenges: (1) The gigapixel whole slide images (WSIs)
are huge and thus hard for representation. (2) It is difficult to generate the
genomic embeddings with diverse function categories in a unified generative
framework. To address the above challenges, we propose a Conditional Latent
Differentiation Variational AutoEncoder (LD-CVAE) for robust multimodal
survival prediction, even with missing genomic data. Specifically, a
Variational Information Bottleneck Transformer (VIB-Trans) module is proposed
to learn compressed pathological representations from the gigapixel WSIs. To
generate different functional genomic features, we develop a novel Latent
Differentiation Variational AutoEncoder (LD-VAE) to learn the common and
specific posteriors for the genomic embeddings with diverse functions. Finally,
we use the product-of-experts technique to integrate the genomic common
posterior and image posterior for the joint latent distribution estimation in
LD-CVAE. We test the effectiveness of our method on five different cancer
datasets, and the experimental results demonstrate its superiority in both
complete and missing modality scenarios.
|
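The abstract above fuses the genomic common posterior and the image posterior with a product-of-experts. For reference, here is a minimal NumPy sketch of the standard product rule for diagonal Gaussian experts; the function name and the toy inputs are illustrative, and the paper's actual parameterization may differ.

```python
import numpy as np

def gaussian_poe(mus, logvars):
    """Fuse diagonal Gaussians N(mu_i, sigma_i^2) with the product-of-experts rule."""
    mus = np.stack(mus)                        # (num_experts, latent_dim)
    precisions = np.exp(-np.stack(logvars))    # 1 / sigma_i^2
    joint_var = 1.0 / precisions.sum(axis=0)   # precisions add under the product
    joint_mu = joint_var * (precisions * mus).sum(axis=0)
    return joint_mu, np.log(joint_var)

# toy usage with two hypothetical experts over a 4-dimensional latent space
mu_img, logvar_img = np.zeros(4), np.zeros(4)          # image posterior
mu_gen, logvar_gen = np.ones(4), np.full(4, -1.0)      # genomic posterior
print(gaussian_poe([mu_img, mu_gen], [logvar_img, logvar_gen]))
```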
2503.09829 | Joohwan Seo | Joohwan Seo, Soochul Yoo, Junwoo Chang, Hyunseok An, Hyunwoo Ryu,
Soomi Lee, Arvind Kruthiventy, Jongeun Choi, and Roberto Horowitz | SE(3)-Equivariant Robot Learning and Control: A Tutorial Survey | Submitted to International Journal of Control, Automation and
Systems (IJCAS), Under Review | null | null | null | cs.RO cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advances in deep learning and Transformers have driven major
breakthroughs in robotics by employing techniques such as imitation learning,
reinforcement learning, and LLM-based multimodal perception and
decision-making. However, conventional deep learning and Transformer models
often struggle to process data with inherent symmetries and invariances,
typically relying on large datasets or extensive data augmentation. Equivariant
neural networks overcome these limitations by explicitly integrating symmetry
and invariance into their architectures, leading to improved efficiency and
generalization. This tutorial survey reviews a wide range of equivariant deep
learning and control methods for robotics, from classic to state-of-the-art,
with a focus on SE(3)-equivariant models that leverage the natural 3D
rotational and translational symmetries in visual robotic manipulation and
control design. Using unified mathematical notation, we begin by reviewing key
concepts from group theory, along with matrix Lie groups and Lie algebras. We
then introduce foundational group-equivariant neural network design and show
how the group-equivariance can be obtained through their structure. Next, we
discuss the applications of SE(3)-equivariant neural networks in robotics in
terms of imitation learning and reinforcement learning. The SE(3)-equivariant
control design is also reviewed from the perspective of geometric control.
Finally, we highlight the challenges and future directions of equivariant
methods in developing more robust, sample-efficient, and multi-modal real-world
robotic systems.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:47:40 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 06:26:34 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Seo",
"Joohwan",
""
],
[
"Yoo",
"Soochul",
""
],
[
"Chang",
"Junwoo",
""
],
[
"An",
"Hyunseok",
""
],
[
"Ryu",
"Hyunwoo",
""
],
[
"Lee",
"Soomi",
""
],
[
"Kruthiventy",
"Arvind",
""
],
[
"Choi",
"Jongeun",
""
],
[
"Horowitz",
"Roberto",
""
]
] | TITLE: SE(3)-Equivariant Robot Learning and Control: A Tutorial Survey
ABSTRACT: Recent advances in deep learning and Transformers have driven major
breakthroughs in robotics by employing techniques such as imitation learning,
reinforcement learning, and LLM-based multimodal perception and
decision-making. However, conventional deep learning and Transformer models
often struggle to process data with inherent symmetries and invariances,
typically relying on large datasets or extensive data augmentation. Equivariant
neural networks overcome these limitations by explicitly integrating symmetry
and invariance into their architectures, leading to improved efficiency and
generalization. This tutorial survey reviews a wide range of equivariant deep
learning and control methods for robotics, from classic to state-of-the-art,
with a focus on SE(3)-equivariant models that leverage the natural 3D
rotational and translational symmetries in visual robotic manipulation and
control design. Using unified mathematical notation, we begin by reviewing key
concepts from group theory, along with matrix Lie groups and Lie algebras. We
then introduce foundational group-equivariant neural network design and show
how the group-equivariance can be obtained through their structure. Next, we
discuss the applications of SE(3)-equivariant neural networks in robotics in
terms of imitation learning and reinforcement learning. The SE(3)-equivariant
control design is also reviewed from the perspective of geometric control.
Finally, we highlight the challenges and future directions of equivariant
methods in developing more robust, sample-efficient, and multi-modal real-world
robotic systems.
|
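Since the survey above centers on SE(3)-equivariance, a tiny numeric check can make the property concrete: a map f is SE(3)-equivariant when f(Rx + t) = R f(x) + t for every rotation R and translation t. The sketch below verifies this for the centroid of a point cloud; it is a didactic example, not drawn from the survey itself.

```python
import numpy as np

def centroid(points: np.ndarray) -> np.ndarray:
    """A simple SE(3)-equivariant feature of a point cloud."""
    return points.mean(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))

theta = 0.7                                    # rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -2.0, 0.5])

lhs = centroid(pts @ R.T + t)                  # transform first, then featurize
rhs = R @ centroid(pts) + t                    # featurize first, then transform
assert np.allclose(lhs, rhs)                   # equivariance holds exactly here
```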
2503.10253 | Wan Han | Han Wan, Qi Wang, Yuan Mi and Hao Sun | PIMRL: Physics-Informed Multi-Scale Recurrent Learning for
Spatiotemporal Prediction | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Simulation of spatiotemporal systems governed by partial differential
equations is widely applied in fields such as biology, chemistry, aerospace
dynamics, and meteorology. Traditional numerical methods incur high
computational costs due to the requirement of small time steps for accurate
predictions. While machine learning has reduced these costs, long-term
predictions remain challenged by error accumulation, particularly in scenarios
with insufficient data or varying time scales, where stability and accuracy are
compromised. Existing methods often neglect the effective utilization of
multi-scale data, leading to suboptimal robustness in predictions. To address
these issues, we propose a novel multi-scale learning framework, namely, the
Physics-Informed Multi-Scale Recurrent Learning (PIMRL), to effectively
leverage multi-scale data for spatiotemporal dynamics prediction. The PIMRL
framework comprises two modules: the micro-scale module embeds physical
knowledge into neural networks via pretraining, and the macro-scale module
adopts a data-driven approach to learn the temporal evolution of physics in the
latent space. Experimental results demonstrate that the PIMRL framework
consistently achieves state-of-the-art performance across five benchmark
datasets ranging from one to three dimensions, showing average improvements of
over 9\% in both RMSE and MAE evaluation metrics, with maximum enhancements
reaching up to 80%.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:01:03 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 07:08:41 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wan",
"Han",
""
],
[
"Wang",
"Qi",
""
],
[
"Mi",
"Yuan",
""
],
[
"Sun",
"Hao",
""
]
] | TITLE: PIMRL: Physics-Informed Multi-Scale Recurrent Learning for
Spatiotemporal Prediction
ABSTRACT: Simulation of spatiotemporal systems governed by partial differential
equations is widely applied in fields such as biology, chemistry, aerospace
dynamics, and meteorology. Traditional numerical methods incur high
computational costs due to the requirement of small time steps for accurate
predictions. While machine learning has reduced these costs, long-term
predictions remain challenged by error accumulation, particularly in scenarios
with insufficient data or varying time scales, where stability and accuracy are
compromised. Existing methods often neglect the effective utilization of
multi-scale data, leading to suboptimal robustness in predictions. To address
these issues, we propose a novel multi-scale learning framework, namely, the
Physics-Informed Multi-Scale Recurrent Learning (PIMRL), to effectively
leverage multi-scale data for spatiotemporal dynamics prediction. The PIMRL
framework comprises two modules: the micro-scale module embeds physical
knowledge into neural networks via pretraining, and the macro-scale module
adopts a data-driven approach to learn the temporal evolution of physics in the
latent space. Experimental results demonstrate that the PIMRL framework
consistently achieves state-of-the-art performance across five benchmark
datasets ranging from one to three dimensions, showing average improvements of
over 9\% in both RMSE and MAE evaluation metrics, with maximum enhancements
reaching up to 80%.
|
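The PIMRL abstract above describes a physics-informed micro-scale module and a data-driven macro-scale recurrence in latent space. The sketch below is only a schematic reading of that two-scale rollout; the module sizes, the GRU cell, and the residual micro step are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MicroStepper(nn.Module):
    """Stand-in for a physics-informed network pretrained on fine-step data."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, u):
        return u + self.net(u)                 # residual update ~ one small time step

class MacroStepper(nn.Module):
    """Data-driven recurrence over a latent summary of the state."""
    def __init__(self, dim: int, latent: int):
        super().__init__()
        self.enc, self.dec = nn.Linear(dim, latent), nn.Linear(latent, dim)
        self.cell = nn.GRUCell(latent, latent)

    def forward(self, u, h):
        h = self.cell(self.enc(u), h)
        return self.dec(h), h

def rollout(u0, micro, macro, latent, macro_steps=5, micro_per_macro=4):
    u, h = u0, torch.zeros(u0.size(0), latent)
    for _ in range(macro_steps):
        for _ in range(micro_per_macro):       # fine, physics-informed updates
            u = micro(u)
        u, h = macro(u, h)                     # coarse, learned latent update
    return u

micro, macro = MicroStepper(dim=16), MacroStepper(dim=16, latent=32)
final_state = rollout(torch.randn(8, 16), micro, macro, latent=32)
```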
2503.10615 | Yang Yi | Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao
Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, Bo Zhang, Wei Chen | R1-Onevision: Advancing Generalized Multimodal Reasoning through
Cross-Modal Formalization | Code and Model: https://github.com/Fancy-MLLM/R1-onevision | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models have demonstrated remarkable reasoning capability in
complex textual tasks. However, multimodal reasoning, which requires
integrating visual and textual information, remains a significant challenge.
Existing visual-language models often struggle to effectively analyze and
reason visual content, resulting in suboptimal performance on complex reasoning
tasks. Moreover, the absence of comprehensive benchmarks hinders the accurate
assessment of multimodal reasoning capabilities. In this paper, we introduce
R1-Onevision, a multimodal reasoning model designed to bridge the gap between
visual perception and deep reasoning. To achieve this, we propose a cross-modal
reasoning pipeline that transforms images into formal textual representations,
enabling precise language-based reasoning. Leveraging this pipeline, we
construct the R1-Onevision dataset which provides detailed, step-by-step
multimodal reasoning annotations across diverse domains. We further develop the
R1-Onevision model through supervised fine-tuning and reinforcement learning to
cultivate advanced reasoning and robust generalization abilities. To
comprehensively evaluate multimodal reasoning performance across different
grades, we introduce R1-Onevision-Bench, a benchmark aligned with human
educational stages, covering exams from junior high school to university and
beyond. Experimental results show that R1-Onevision achieves state-of-the-art
performance, outperforming models such as GPT-4o and Qwen2.5-VL on multiple
challenging multimodal reasoning benchmarks.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:56:05 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 08:52:34 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yang",
"Yi",
""
],
[
"He",
"Xiaoxuan",
""
],
[
"Pan",
"Hongkun",
""
],
[
"Jiang",
"Xiyan",
""
],
[
"Deng",
"Yan",
""
],
[
"Yang",
"Xingtao",
""
],
[
"Lu",
"Haoyu",
""
],
[
"Yin",
"Dacheng",
""
],
[
"Rao",
"Fengyun",
""
],
[
"Zhu",
"Minfeng",
""
],
[
"Zhang",
"Bo",
""
],
[
"Chen",
"Wei",
""
]
] | TITLE: R1-Onevision: Advancing Generalized Multimodal Reasoning through
Cross-Modal Formalization
ABSTRACT: Large Language Models have demonstrated remarkable reasoning capability in
complex textual tasks. However, multimodal reasoning, which requires
integrating visual and textual information, remains a significant challenge.
Existing visual-language models often struggle to effectively analyze and
reason visual content, resulting in suboptimal performance on complex reasoning
tasks. Moreover, the absence of comprehensive benchmarks hinders the accurate
assessment of multimodal reasoning capabilities. In this paper, we introduce
R1-Onevision, a multimodal reasoning model designed to bridge the gap between
visual perception and deep reasoning. To achieve this, we propose a cross-modal
reasoning pipeline that transforms images into formal textual representations,
enabling precise language-based reasoning. Leveraging this pipeline, we
construct the R1-Onevision dataset which provides detailed, step-by-step
multimodal reasoning annotations across diverse domains. We further develop the
R1-Onevision model through supervised fine-tuning and reinforcement learning to
cultivate advanced reasoning and robust generalization abilities. To
comprehensively evaluate multimodal reasoning performance across different
grades, we introduce R1-Onevision-Bench, a benchmark aligned with human
educational stages, covering exams from junior high school to university and
beyond. Experimental results show that R1-Onevision achieves state-of-the-art
performance, outperforming models such as GPT-4o and Qwen2.5-VL on multiple
challenging multimodal reasoning benchmarks.
|
2503.11737 | Jiseong Park | Jiseong Park, Hanjin Kim, Seojin Kim, Jueun Choi | Multi-View Node Pruning for Accurate Graph Representation | Jiseong Park and Hanjin Kim are co-first author for this work | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Graph pooling, which compresses a whole graph into a smaller coarsened graph,
is an essential component of graph representation learning. To efficiently
compress a given graph, graph pooling methods often drop their nodes with
attention-based scoring with the task loss. However, this often results in
simply removing nodes with lower degrees without consideration of their
feature-level relevance to the given task. To fix this problem, we propose
Multi-View Pruning (MVP), a graph pruning method based on a multi-view framework
and reconstruction loss. Given a graph, MVP first constructs multiple graphs
for different views either by utilizing the predefined modalities or by
randomly partitioning the input features, to consider the importance of each
node in diverse perspectives. Then, it learns the score for each node by
considering both the reconstruction and the task loss. MVP can be incorporated
with any hierarchical pooling framework to score the nodes. We validate MVP on
multiple benchmark datasets by coupling it with two graph pooling methods, and
show that it significantly improves the performance of the base graph pooling
method, outperforming all baselines. Further analysis shows that both the
encoding of multiple views and the consideration of reconstruction loss are the
key to the success of MVP, and that it indeed identifies nodes that are less
important according to domain knowledge.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:44:54 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:34:49 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Park",
"Jiseong",
""
],
[
"Kim",
"Hanjin",
""
],
[
"Kim",
"Seojin",
""
],
[
"Choi",
"Jueun",
""
]
] | TITLE: Multi-View Node Pruning for Accurate Graph Representation
ABSTRACT: Graph pooling, which compresses a whole graph into a smaller coarsened graph,
is an essential component of graph representation learning. To efficiently
compress a given graph, graph pooling methods often drop their nodes with
attention-based scoring with the task loss. However, this often results in
simply removing nodes with lower degrees without consideration of their
feature-level relevance to the given task. To fix this problem, we propose
Multi-View Pruning (MVP), a graph pruning method based on a multi-view framework
and reconstruction loss. Given a graph, MVP first constructs multiple graphs
for different views either by utilizing the predefined modalities or by
randomly partitioning the input features, to consider the importance of each
node in diverse perspectives. Then, it learns the score for each node by
considering both the reconstruction and the task loss. MVP can be incorporated
with any hierarchical pooling framework to score the nodes. We validate MVP on
multiple benchmark datasets by coupling it with two graph pooling methods, and
show that it significantly improves the performance of the base graph pooling
method, outperforming all baselines. Further analysis shows that both the
encoding of multiple views and the consideration of reconstruction loss are the
key to the success of MVP, and that it indeed identifies nodes that are less
important according to domain knowledge.
|
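The MVP abstract above builds views by randomly partitioning the input features. A short sketch of that view-construction step, under the assumption of a dense node-feature matrix, could look like this (function name and shapes are illustrative):

```python
import numpy as np

def random_feature_views(x: np.ndarray, num_views: int, seed: int = 0):
    """Split the columns of a (num_nodes, num_features) matrix into disjoint views."""
    rng = np.random.default_rng(seed)
    cols = rng.permutation(x.shape[1])          # shuffle feature indices
    groups = np.array_split(cols, num_views)    # one column group per view
    return [x[:, g] for g in groups]            # same nodes, different feature slices

views = random_feature_views(np.random.rand(100, 32), num_views=4)
print([v.shape for v in views])                 # e.g. four (100, 8) feature slices
```

Each view would then share the original graph topology while exposing a different slice of the node features, which is what lets per-view scores capture complementary notions of node importance.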
2503.11911 | Naome Etori | Naome A. Etori, Kevin Lu, Randu Karisa and Arturs Kanepajs | LAG-MMLU: Benchmarking Frontier LLM Understanding in Latvian and Giriama | Accepted at NoDaLiDa/Baltic-HLT 2025.
https://hdl.handle.net/10062/107190 | Joint 25th Nordic Conference on Computational Linguistics and 11th
Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025) :
Proceedings of the Conference: March 3-4, 2025 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) rapidly advance, evaluating their performance
is critical. LLMs are trained on multilingual data, but their reasoning
abilities are mainly evaluated using English datasets. Hence, robust evaluation
frameworks are needed using high-quality non-English datasets, especially
low-resource languages (LRLs). This study evaluates eight state-of-the-art
(SOTA) LLMs on Latvian and Giriama using a Massive Multitask Language
Understanding (MMLU) subset curated with native speakers for linguistic and
cultural relevance. Giriama is benchmarked for the first time. Our evaluation
shows that OpenAI's o1 model outperforms others across all languages, scoring
92.8% in English, 88.8% in Latvian, and 70.8% in Giriama on 0-shot tasks.
Mistral-large (35.6%) and Llama-70B IT (41%) have weak performance on both
Latvian and Giriama. Our results underscore the need for localized benchmarks
and human evaluations in advancing cultural AI contextualization.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 22:50:50 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 04:01:37 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Etori",
"Naome A.",
""
],
[
"Lu",
"Kevin",
""
],
[
"Karisa",
"Randu",
""
],
[
"Kanepajs",
"Arturs",
""
]
] | TITLE: LAG-MMLU: Benchmarking Frontier LLM Understanding in Latvian and Giriama
ABSTRACT: As large language models (LLMs) rapidly advance, evaluating their performance
is critical. LLMs are trained on multilingual data, but their reasoning
abilities are mainly evaluated using English datasets. Hence, robust evaluation
frameworks are needed using high-quality non-English datasets, especially
low-resource languages (LRLs). This study evaluates eight state-of-the-art
(SOTA) LLMs on Latvian and Giriama using a Massive Multitask Language
Understanding (MMLU) subset curated with native speakers for linguistic and
cultural relevance. Giriama is benchmarked for the first time. Our evaluation
shows that OpenAI's o1 model outperforms others across all languages, scoring
92.8% in English, 88.8% in Latvian, and 70.8% in Giriama on 0-shot tasks.
Mistral-large (35.6%) and Llama-70B IT (41%) have weak performance on both
Latvian and Giriama. Our results underscore the need for localized benchmarks
and human evaluations in advancing cultural AI contextualization.
|
2503.12009 | Haisheng Su | Xin Jin, Haisheng Su, Kai Liu, Cong Ma, Wei Wu, Fei Hui, Junchi Yan | UniMamba: Unified Spatial-Channel Representation Learning with
Group-Efficient Mamba for LiDAR-based 3D Object Detection | Accepted to CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in LiDAR 3D detection have demonstrated the effectiveness of
Transformer-based frameworks in capturing the global dependencies from point
cloud spaces, which serialize the 3D voxels into the flattened 1D sequence for
iterative self-attention. However, the spatial structure of 3D voxels will be
inevitably destroyed during the serialization process. Besides, due to the
considerable number of 3D voxels and quadratic complexity of Transformers,
multiple sequences are grouped before feeding to Transformers, leading to a
limited receptive field. Inspired by the impressive performance of State Space
Models (SSM) achieved in the field of 2D vision tasks, in this paper, we
propose a novel Unified Mamba (UniMamba), which seamlessly integrates the
merits of 3D convolution and SSM in a concise multi-head manner, aiming to
perform "local and global" spatial context aggregation efficiently and
simultaneously. Specifically, a UniMamba block is designed which mainly
consists of spatial locality modeling, complementary Z-order serialization and
local-global sequential aggregator. The spatial locality modeling module
integrates 3D submanifold convolution to capture the dynamic spatial position
embedding before serialization. Then the efficient Z-order curve is adopted for
serialization both horizontally and vertically. Furthermore, the local-global
sequential aggregator adopts the channel grouping strategy to efficiently
encode both "local and global" spatial inter-dependencies using multi-head SSM.
Additionally, an encoder-decoder architecture with stacked UniMamba blocks is
formed to facilitate multi-scale spatial learning hierarchically. Extensive
experiments are conducted on three popular datasets: nuScenes, Waymo and
Argoverse 2. Particularly, our UniMamba achieves 70.2 mAP on the nuScenes
dataset.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 06:22:31 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 09:27:50 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jin",
"Xin",
""
],
[
"Su",
"Haisheng",
""
],
[
"Liu",
"Kai",
""
],
[
"Ma",
"Cong",
""
],
[
"Wu",
"Wei",
""
],
[
"Hui",
"Fei",
""
],
[
"Yan",
"Junchi",
""
]
] | TITLE: UniMamba: Unified Spatial-Channel Representation Learning with
Group-Efficient Mamba for LiDAR-based 3D Object Detection
ABSTRACT: Recent advances in LiDAR 3D detection have demonstrated the effectiveness of
Transformer-based frameworks in capturing the global dependencies from point
cloud spaces, which serialize the 3D voxels into the flattened 1D sequence for
iterative self-attention. However, the spatial structure of 3D voxels will be
inevitably destroyed during the serialization process. Besides, due to the
considerable number of 3D voxels and quadratic complexity of Transformers,
multiple sequences are grouped before feeding to Transformers, leading to a
limited receptive field. Inspired by the impressive performance of State Space
Models (SSM) achieved in the field of 2D vision tasks, in this paper, we
propose a novel Unified Mamba (UniMamba), which seamlessly integrates the
merits of 3D convolution and SSM in a concise multi-head manner, aiming to
perform "local and global" spatial context aggregation efficiently and
simultaneously. Specifically, a UniMamba block is designed which mainly
consists of spatial locality modeling, complementary Z-order serialization and
local-global sequential aggregator. The spatial locality modeling module
integrates 3D submanifold convolution to capture the dynamic spatial position
embedding before serialization. Then the efficient Z-order curve is adopted for
serialization both horizontally and vertically. Furthermore, the local-global
sequential aggregator adopts the channel grouping strategy to efficiently
encode both "local and global" spatial inter-dependencies using multi-head SSM.
Additionally, an encoder-decoder architecture with stacked UniMamba blocks is
formed to facilitate multi-scale spatial learning hierarchically. Extensive
experiments are conducted on three popular datasets: nuScenes, Waymo and
Argoverse 2. Particularly, our UniMamba achieves 70.2 mAP on the nuScenes
dataset.
|
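The UniMamba abstract above relies on Z-order (Morton) serialization of voxels. As background, the sketch below shows one standard way to compute a 3D Morton code by bit interleaving and to sort voxel coordinates by it; this is generic bit-twiddling for 10-bit coordinates, not the authors' serialization code.

```python
def part1by2(v: int) -> int:
    """Spread the low 10 bits of v so two zero bits separate consecutive bits."""
    v &= 0x000003FF
    v = (v ^ (v << 16)) & 0xFF0000FF
    v = (v ^ (v << 8))  & 0x0300F00F
    v = (v ^ (v << 4))  & 0x030C30C3
    v = (v ^ (v << 2))  & 0x09249249
    return v

def morton3d(x: int, y: int, z: int) -> int:
    """Interleave the bits of (x, y, z) into a single Z-order index."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

voxels = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (3, 2, 1)]
print(sorted(voxels, key=lambda v: morton3d(*v)))   # 1D order preserving 3D locality
```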
2503.12035 | Zhengyuan Peng | Zhengyuan Peng, Jinpeng Ma, Zhimin Sun, Ran Yi, Haichuan Song, Xin
Tan, Lizhuang Ma | MOS: Modeling Object-Scene Associations in Generalized Category
Discovery | Accepted to CVPR 2025.The code is available at
https://github.com/JethroPeng/MOS | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generalized Category Discovery (GCD) is a classification task that aims to
classify both base and novel classes in unlabeled images, using knowledge from
a labeled dataset. In GCD, previous research overlooks scene information or
treats it as noise, reducing its impact during model training. However, in this
paper, we argue that scene information should be viewed as a strong prior for
inferring novel classes. We attribute the misinterpretation of scene
information to a key factor: the Ambiguity Challenge inherent in GCD.
Specifically, novel objects in base scenes might be wrongly classified into
base categories, while base objects in novel scenes might be mistakenly
recognized as novel categories. Once the ambiguity challenge is addressed,
scene information can reach its full potential, significantly enhancing the
performance of GCD models. To more effectively leverage scene information, we
propose the Modeling Object-Scene Associations (MOS) framework, which utilizes
a simple MLP-based scene-awareness module to enhance GCD performance. It
achieves an exceptional average accuracy improvement of 4% on the challenging
fine-grained datasets compared to state-of-the-art methods, emphasizing its
superior performance in fine-grained GCD. The code is publicly available at
https://github.com/JethroPeng/MOS
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 07:59:30 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 02:35:28 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Peng",
"Zhengyuan",
""
],
[
"Ma",
"Jinpeng",
""
],
[
"Sun",
"Zhimin",
""
],
[
"Yi",
"Ran",
""
],
[
"Song",
"Haichuan",
""
],
[
"Tan",
"Xin",
""
],
[
"Ma",
"Lizhuang",
""
]
] | TITLE: MOS: Modeling Object-Scene Associations in Generalized Category
Discovery
ABSTRACT: Generalized Category Discovery (GCD) is a classification task that aims to
classify both base and novel classes in unlabeled images, using knowledge from
a labeled dataset. In GCD, previous research overlooks scene information or
treats it as noise, reducing its impact during model training. However, in this
paper, we argue that scene information should be viewed as a strong prior for
inferring novel classes. We attribute the misinterpretation of scene
information to a key factor: the Ambiguity Challenge inherent in GCD.
Specifically, novel objects in base scenes might be wrongly classified into
base categories, while base objects in novel scenes might be mistakenly
recognized as novel categories. Once the ambiguity challenge is addressed,
scene information can reach its full potential, significantly enhancing the
performance of GCD models. To more effectively leverage scene information, we
propose the Modeling Object-Scene Associations (MOS) framework, which utilizes
a simple MLP-based scene-awareness module to enhance GCD performance. It
achieves an exceptional average accuracy improvement of 4% on the challenging
fine-grained datasets compared to state-of-the-art methods, emphasizing its
superior performance in fine-grained GCD. The code is publicly available at
https://github.com/JethroPeng/MOS
|
2503.12042 | Zhedong Zhang | Zhedong Zhang, Liang Li, Chenggang Yan, Chunshan Liu, Anton van den
Hengel, Yuankai Qi | Prosody-Enhanced Acoustic Pre-training and Acoustic-Disentangled Prosody
Adapting for Movie Dubbing | Accepted by CVPR2025 | null | null | null | cs.SD cs.CV eess.AS | http://creativecommons.org/licenses/by/4.0/ | Movie dubbing describes the process of transforming a script into speech that
aligns temporally and emotionally with a given movie clip while exemplifying
the speaker's voice demonstrated in a short reference audio clip. This task
demands the model bridge character performances and complicated prosody
structures to build a high-quality video-synchronized dubbing track. The
limited scale of movie dubbing datasets, along with the background noise
inherent in audio data, hinder the acoustic modeling performance of trained
models. To address these issues, we propose an acoustic-prosody disentangled
two-stage method to achieve high-quality dubbing generation with precise
prosody alignment. First, we propose a prosody-enhanced acoustic pre-training
to develop robust acoustic modeling capabilities. Then, we freeze the
pre-trained acoustic system and design a disentangled framework to model
prosodic text features and dubbing style while maintaining acoustic quality.
Additionally, we incorporate an in-domain emotion analysis module to reduce the
impact of visual domain shifts across different movies, thereby enhancing
emotion-prosody alignment. Extensive experiments show that our method performs
favorably against the state-of-the-art models on two primary benchmarks. The
demos are available at https://zzdoog.github.io/ProDubber/.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 08:25:57 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 04:51:00 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Zhedong",
""
],
[
"Li",
"Liang",
""
],
[
"Yan",
"Chenggang",
""
],
[
"Liu",
"Chunshan",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Qi",
"Yuankai",
""
]
] | TITLE: Prosody-Enhanced Acoustic Pre-training and Acoustic-Disentangled Prosody
Adapting for Movie Dubbing
ABSTRACT: Movie dubbing describes the process of transforming a script into speech that
aligns temporally and emotionally with a given movie clip while exemplifying
the speaker's voice demonstrated in a short reference audio clip. This task
demands the model bridge character performances and complicated prosody
structures to build a high-quality video-synchronized dubbing track. The
limited scale of movie dubbing datasets, along with the background noise
inherent in audio data, hinders the acoustic modeling performance of trained
models. To address these issues, we propose an acoustic-prosody disentangled
two-stage method to achieve high-quality dubbing generation with precise
prosody alignment. First, we propose a prosody-enhanced acoustic pre-training
to develop robust acoustic modeling capabilities. Then, we freeze the
pre-trained acoustic system and design a disentangled framework to model
prosodic text features and dubbing style while maintaining acoustic quality.
Additionally, we incorporate an in-domain emotion analysis module to reduce the
impact of visual domain shifts across different movies, thereby enhancing
emotion-prosody alignment. Extensive experiments show that our method performs
favorably against the state-of-the-art models on two primary benchmarks. The
demos are available at https://zzdoog.github.io/ProDubber/.
|
2503.12511 | Tianyang Zhou | Tianyang Zhou, Haowen Lin, Somesh Jha, Mihai Christodorescu, Kirill
Levchenko, Varun Chandrasekaran | LLM-Driven Multi-step Translation from C to Rust using Static Analysis | 22 pages, 13 figures | null | null | null | cs.SE cs.AI cs.PL | http://creativecommons.org/licenses/by/4.0/ | Translating software written in legacy languages to modern languages, such as
C to Rust, has significant benefits in improving memory safety while
maintaining high performance. However, manual translation is cumbersome,
error-prone, and produces unidiomatic code. Large language models (LLMs) have
demonstrated promise in producing idiomatic translations, but offer no
correctness guarantees as they lack the ability to capture all the semantic
differences between the source and target languages. To resolve this issue, we
propose SACTOR, an LLM-driven C-to-Rust zero-shot translation tool using a
two-step translation methodology: an "unidiomatic" step to translate C into
Rust while preserving semantics, and an "idiomatic" step to refine the code to
follow Rust's semantic standards. SACTOR utilizes information provided by
static analysis of the source C program to address challenges such as pointer
semantics and dependency resolution. To validate the correctness of the
translated result from each step, we use end-to-end testing via the foreign
function interface to embed our translated code segment into the original code.
We evaluate the translation of 200 programs from two datasets and two case
studies, comparing the performance of GPT-4o, Claude 3.5 Sonnet, Gemini 2.0
Flash, Llama 3.3 70B and DeepSeek-R1 in SACTOR. Our results demonstrate that
SACTOR achieves high correctness and improved idiomaticity, with the
best-performing model (DeepSeek-R1) reaching 93% and (GPT-4o, Claude 3.5,
DeepSeek-R1) reaching 84% correctness (on each dataset, respectively), while
producing more natural and Rust-compliant translations compared to existing
methods.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:05:26 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 04:17:27 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhou",
"Tianyang",
""
],
[
"Lin",
"Haowen",
""
],
[
"Jha",
"Somesh",
""
],
[
"Christodorescu",
"Mihai",
""
],
[
"Levchenko",
"Kirill",
""
],
[
"Chandrasekaran",
"Varun",
""
]
] | TITLE: LLM-Driven Multi-step Translation from C to Rust using Static Analysis
ABSTRACT: Translating software written in legacy languages to modern languages, such as
C to Rust, has significant benefits in improving memory safety while
maintaining high performance. However, manual translation is cumbersome,
error-prone, and produces unidiomatic code. Large language models (LLMs) have
demonstrated promise in producing idiomatic translations, but offer no
correctness guarantees as they lack the ability to capture all the semantic
differences between the source and target languages. To resolve this issue, we
propose SACTOR, an LLM-driven C-to-Rust zero-shot translation tool using a
two-step translation methodology: an "unidiomatic" step to translate C into
Rust while preserving semantics, and an "idiomatic" step to refine the code to
follow Rust's semantic standards. SACTOR utilizes information provided by
static analysis of the source C program to address challenges such as pointer
semantics and dependency resolution. To validate the correctness of the
translated result from each step, we use end-to-end testing via the foreign
function interface to embed our translated code segment into the original code.
We evaluate the translation of 200 programs from two datasets and two case
studies, comparing the performance of GPT-4o, Claude 3.5 Sonnet, Gemini 2.0
Flash, Llama 3.3 70B and DeepSeek-R1 in SACTOR. Our results demonstrate that
SACTOR achieves high correctness and improved idiomaticity, with the
best-performing model (DeepSeek-R1) reaching 93% and (GPT-4o, Claude 3.5,
DeepSeek-R1) reaching 84% correctness (on each dataset, respectively), while
producing more natural and Rust-compliant translations compared to existing
methods.
|
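The SACTOR abstract above describes a two-step translate-then-refine pipeline validated by end-to-end tests. The driver below is a purely schematic reading of that flow; `llm`, `run_tests`, and `static_facts` are hypothetical callables and inputs introduced for illustration, not SACTOR's actual interfaces.

```python
def translate_two_step(c_source: str, llm, static_facts: dict, run_tests) -> str:
    """Schematic two-step C-to-Rust flow: unidiomatic translation, then refinement."""
    # step 1: semantics-preserving ("unidiomatic") translation, guided by
    # static-analysis facts such as pointer usage and dependency order
    unidiomatic = llm(
        "Translate this C to Rust, preserving semantics.\n"
        f"Static-analysis facts: {static_facts}\n{c_source}"
    )
    if not run_tests(unidiomatic):
        raise RuntimeError("unidiomatic translation failed end-to-end tests")

    # step 2: refine the verified Rust toward idiomatic style, re-checking behavior
    idiomatic = llm(f"Rewrite this Rust idiomatically without changing behavior:\n{unidiomatic}")
    return idiomatic if run_tests(idiomatic) else unidiomatic
```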
2503.12733 | Duy Nhat Phan | Patrick Hytla, Tran T. A. Nghia, Duy Nhat Phan, Andrew Rice | A Linearized Alternating Direction Multiplier Method for Federated
Matrix Completion Problems | 29 pages, 4 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Matrix completion is fundamental for predicting missing data with a wide
range of applications in personalized healthcare, e-commerce, recommendation
systems, and social network analysis. Traditional matrix completion approaches
typically assume centralized data storage, which raises challenges in terms of
computational efficiency, scalability, and user privacy. In this paper, we
address the problem of federated matrix completion, focusing on scenarios where
user-specific data is distributed across multiple clients, and privacy
constraints are uncompromising. Federated learning provides a promising
framework to address these challenges by enabling collaborative learning across
distributed datasets without sharing raw data. We propose \texttt{FedMC-ADMM}
for solving federated matrix completion problems, a novel algorithmic framework
that combines the Alternating Direction Method of Multipliers with a randomized
block-coordinate strategy and alternating proximal gradient steps. Unlike
existing federated approaches, \texttt{FedMC-ADMM} effectively handles
multi-block nonconvex and nonsmooth optimization problems, allowing efficient
computation while preserving user privacy. We analyze the theoretical
properties of our algorithm, demonstrating subsequential convergence and
establishing a convergence rate of $\mathcal{O}(K^{-1/2})$, leading to a
communication complexity of $\mathcal{O}(\epsilon^{-2})$ for reaching an
$\epsilon$-stationary point. This work is the first to establish these
theoretical guarantees for federated matrix completion in the presence of
multi-block variables. To validate our approach, we conduct extensive
experiments on real-world datasets, including MovieLens 1M, 10M, and Netflix.
The results demonstrate that \texttt{FedMC-ADMM} outperforms existing methods
in terms of convergence speed and testing accuracy.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 01:57:06 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 01:46:32 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Hytla",
"Patrick",
""
],
[
"Nghia",
"Tran T. A.",
""
],
[
"Phan",
"Duy Nhat",
""
],
[
"Rice",
"Andrew",
""
]
] | TITLE: A Linearized Alternating Direction Multiplier Method for Federated
Matrix Completion Problems
ABSTRACT: Matrix completion is fundamental for predicting missing data with a wide
range of applications in personalized healthcare, e-commerce, recommendation
systems, and social network analysis. Traditional matrix completion approaches
typically assume centralized data storage, which raises challenges in terms of
computational efficiency, scalability, and user privacy. In this paper, we
address the problem of federated matrix completion, focusing on scenarios where
user-specific data is distributed across multiple clients, and privacy
constraints are uncompromising. Federated learning provides a promising
framework to address these challenges by enabling collaborative learning across
distributed datasets without sharing raw data. We propose \texttt{FedMC-ADMM}
for solving federated matrix completion problems, a novel algorithmic framework
that combines the Alternating Direction Method of Multipliers with a randomized
block-coordinate strategy and alternating proximal gradient steps. Unlike
existing federated approaches, \texttt{FedMC-ADMM} effectively handles
multi-block nonconvex and nonsmooth optimization problems, allowing efficient
computation while preserving user privacy. We analyze the theoretical
properties of our algorithm, demonstrating subsequential convergence and
establishing a convergence rate of $\mathcal{O}(K^{-1/2})$, leading to a
communication complexity of $\mathcal{O}(\epsilon^{-2})$ for reaching an
$\epsilon$-stationary point. This work is the first to establish these
theoretical guarantees for federated matrix completion in the presence of
multi-block variables. To validate our approach, we conduct extensive
experiments on real-world datasets, including MovieLens 1M, 10M, and Netflix.
The results demonstrate that \texttt{FedMC-ADMM} outperforms existing methods
in terms of convergence speed and testing accuracy.
|
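The FedMC-ADMM abstract above states an O(K^{-1/2}) convergence rate and an O(epsilon^{-2}) communication complexity. The short LaTeX note below spells out the standard step connecting the two; it is a generic argument under the usual definition of an epsilon-stationary point, not a quotation of the paper's proof.

```latex
% Generic reasoning (assumed definitions): if the best stationarity measure
% after K communication rounds satisfies
\[
  \min_{1 \le k \le K} \mathbb{E}\!\left[\lVert \mathcal{G}_k \rVert^2\right]
  \;\le\; \frac{C}{\sqrt{K}},
\]
% then requiring this bound to be at most \epsilon gives
\[
  \frac{C}{\sqrt{K}} \le \epsilon
  \quad\Longleftrightarrow\quad
  K \ge \frac{C^2}{\epsilon^2},
\]
% i.e. K = O(\epsilon^{-2}) rounds suffice, matching the stated communication complexity.
```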
2503.12797 | Xinyu Ma | Xinyu Ma, Ziyang Ding, Zhicong Luo, Chi Chen, Zonghao Guo, Derek F.
Wong, Xiaoyi Feng, Maosong Sun | DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs
for Knowledge-Intensive Visual Grounding | null | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Human experts excel at fine-grained visual discrimination by leveraging
domain knowledge to refine perceptual features, a capability that remains
underdeveloped in current Multimodal Large Language Models (MLLMs). Despite
possessing vast expert-level knowledge, MLLMs struggle to integrate reasoning
into visual perception, often generating direct responses without deeper
analysis. To bridge this gap, we introduce knowledge-intensive visual grounding
(KVG), a novel visual grounding task that requires both fine-grained perception
and domain-specific knowledge integration. To address the challenges of KVG, we
propose DeepPerception, an MLLM enhanced with cognitive visual perception
capabilities. Our approach consists of (1) an automated data synthesis pipeline
that generates high-quality, knowledge-aligned training samples, and (2) a
two-stage training framework combining supervised fine-tuning for cognitive
reasoning scaffolding and reinforcement learning to optimize
perception-cognition synergy. To benchmark performance, we introduce KVG-Bench
a comprehensive dataset spanning 10 domains with 1.3K manually curated test
cases. Experimental results demonstrate that DeepPerception significantly
outperforms direct fine-tuning, achieving +8.08\% accuracy improvements on
KVG-Bench and exhibiting +4.60\% superior cross-domain generalization over
baseline approaches. Our findings highlight the importance of integrating
cognitive processes into MLLMs for human-like visual perception and open new
directions for multimodal reasoning research. The data, codes, and models are
released at https://github.com/thunlp/DeepPerception.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 04:06:34 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 05:06:22 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ma",
"Xinyu",
""
],
[
"Ding",
"Ziyang",
""
],
[
"Luo",
"Zhicong",
""
],
[
"Chen",
"Chi",
""
],
[
"Guo",
"Zonghao",
""
],
[
"Wong",
"Derek F.",
""
],
[
"Feng",
"Xiaoyi",
""
],
[
"Sun",
"Maosong",
""
]
] | TITLE: DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs
for Knowledge-Intensive Visual Grounding
ABSTRACT: Human experts excel at fine-grained visual discrimination by leveraging
domain knowledge to refine perceptual features, a capability that remains
underdeveloped in current Multimodal Large Language Models (MLLMs). Despite
possessing vast expert-level knowledge, MLLMs struggle to integrate reasoning
into visual perception, often generating direct responses without deeper
analysis. To bridge this gap, we introduce knowledge-intensive visual grounding
(KVG), a novel visual grounding task that requires both fine-grained perception
and domain-specific knowledge integration. To address the challenges of KVG, we
propose DeepPerception, an MLLM enhanced with cognitive visual perception
capabilities. Our approach consists of (1) an automated data synthesis pipeline
that generates high-quality, knowledge-aligned training samples, and (2) a
two-stage training framework combining supervised fine-tuning for cognitive
reasoning scaffolding and reinforcement learning to optimize
perception-cognition synergy. To benchmark performance, we introduce KVG-Bench
a comprehensive dataset spanning 10 domains with 1.3K manually curated test
cases. Experimental results demonstrate that DeepPerception significantly
outperforms direct fine-tuning, achieving +8.08\% accuracy improvements on
KVG-Bench and exhibiting +4.60\% superior cross-domain generalization over
baseline approaches. Our findings highlight the importance of integrating
cognitive processes into MLLMs for human-like visual perception and open new
directions for multimodal reasoning research. The data, codes, and models are
released at https://github.com/thunlp/DeepPerception.
|
2503.12827 | Md Farhamdur Reza | Md Farhamdur Reza, Richeng Jin, Tianfu Wu, and Huaiyu Dai | GSBA$^K$: $top$-$K$ Geometric Score-based Black-box Attack | This article has been accepted for publication at ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing score-based adversarial attacks mainly focus on crafting $top$-1
adversarial examples against classifiers with single-label classification.
Their attack success rate and query efficiency are often less than
satisfactory, particularly under small perturbation requirements; moreover, the
vulnerability of classifiers with multi-label learning is yet to be studied. In
this paper, we propose a comprehensive surrogate-free score-based attack, named
geometric score-based black-box attack (GSBA$^K$), to craft
adversarial examples in an aggressive $top$-$K$ setting for both untargeted and
targeted attacks, where the goal is to change the $top$-$K$ predictions of the
target classifier. We introduce novel gradient-based methods to find a good
initial boundary point to attack. Our iterative method employs novel gradient
estimation techniques, particularly effective in the $top$-$K$ setting, on the
decision boundary to effectively exploit its geometry.
Additionally, GSBA$^K$ can be used to attack classifiers with $top$-$K$
multi-label learning. Extensive experimental results on ImageNet and PASCAL VOC
datasets validate the effectiveness of GSBA$^K$ in crafting $top$-$K$
adversarial examples.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 05:15:09 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 02:55:39 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Reza",
"Md Farhamdur",
""
],
[
"Jin",
"Richeng",
""
],
[
"Wu",
"Tianfu",
""
],
[
"Dai",
"Huaiyu",
""
]
] | TITLE: GSBA$^K$: $top$-$K$ Geometric Score-based Black-box Attack
ABSTRACT: Existing score-based adversarial attacks mainly focus on crafting $top$-1
adversarial examples against classifiers with single-label classification.
Their attack success rate and query efficiency are often less than
satisfactory, particularly under small perturbation requirements; moreover, the
vulnerability of classifiers with multi-label learning is yet to be studied. In
this paper, we propose a comprehensive surrogate-free score-based attack, named
geometric score-based black-box attack (GSBA$^K$), to craft
adversarial examples in an aggressive $top$-$K$ setting for both untargeted and
targeted attacks, where the goal is to change the $top$-$K$ predictions of the
target classifier. We introduce novel gradient-based methods to find a good
initial boundary point to attack. Our iterative method employs novel gradient
estimation techniques, particularly effective in the $top$-$K$ setting, on the
decision boundary to effectively exploit its geometry.
Additionally, GSBA$^K$ can be used to attack classifiers with $top$-$K$
multi-label learning. Extensive experimental results on ImageNet and PASCAL VOC
datasets validate the effectiveness of GSBA$^K$ in crafting $top$-$K$
adversarial examples.
|
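The GSBA$^K$ abstract above targets the $top$-$K$ predictions rather than only the top-1. One possible formalization of the success criterion, written here as an assumption for illustration (the paper may define it differently), is sketched below.

```python
import numpy as np

def topk_labels(logits: np.ndarray, k: int) -> set:
    """Indices of the k largest logits."""
    return set(np.argsort(logits)[-k:].tolist())

def untargeted_topk_success(clean_logits, adv_logits, k: int) -> bool:
    # assumed criterion: none of the original top-K classes survives
    # in the adversarial top-K set
    return topk_labels(clean_logits, k).isdisjoint(topk_labels(adv_logits, k))

def targeted_topk_success(adv_logits, target_labels, k: int) -> bool:
    # assumed criterion: all K target classes appear in the adversarial top-K
    return set(target_labels) <= topk_labels(adv_logits, k)
```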
2503.12828 | Quang Trung Truong | Quang Trung Truong, Wong Yuk Kwan, Duc Thanh Nguyen, Binh-Son Hua,
Sai-Kit Yeung | AUTV: Creating Underwater Video Datasets with Pixel-wise Annotations | under review | null | null | null | cs.CE cs.CV | http://creativecommons.org/licenses/by/4.0/ | Underwater video analysis, hampered by the dynamic marine environment and
camera motion, remains a challenging task in computer vision. Existing
training-free video generation techniques, learning motion dynamics on a
frame-by-frame basis, often produce poor results with noticeable motion
interruptions and misalignments. To address these issues, we propose AUTV, a
framework for synthesizing marine video data with pixel-wise annotations. We
demonstrate the effectiveness of this framework by constructing two video
datasets, namely UTV, a real-world dataset comprising 2,000 video-text pairs,
and SUTV, a synthetic video dataset including 10,000 videos with segmentation
masks for marine objects. UTV provides diverse underwater videos with
comprehensive annotations including appearance, texture, camera intrinsics,
lighting, and animal behavior. SUTV can be used to improve underwater
downstream tasks, which are demonstrated in video inpainting and video object
segmentation.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 05:18:20 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Truong",
"Quang Trung",
""
],
[
"Kwan",
"Wong Yuk",
""
],
[
"Nguyen",
"Duc Thanh",
""
],
[
"Hua",
"Binh-Son",
""
],
[
"Yeung",
"Sai-Kit",
""
]
] | TITLE: AUTV: Creating Underwater Video Datasets with Pixel-wise Annotations
ABSTRACT: Underwater video analysis, hampered by the dynamic marine environment and
camera motion, remains a challenging task in computer vision. Existing
training-free video generation techniques, learning motion dynamics on a
frame-by-frame basis, often produce poor results with noticeable motion
interruptions and misalignments. To address these issues, we propose AUTV, a
framework for synthesizing marine video data with pixel-wise annotations. We
demonstrate the effectiveness of this framework by constructing two video
datasets, namely UTV, a real-world dataset comprising 2,000 video-text pairs,
and SUTV, a synthetic video dataset including 10,000 videos with segmentation
masks for marine objects. UTV provides diverse underwater videos with
comprehensive annotations including appearance, texture, camera intrinsics,
lighting, and animal behavior. SUTV can be used to improve underwater
downstream tasks, which are demonstrated in video inpainting and video object
segmentation.
|
2503.12874 | Xiaojun Jia | Xiaojun Jia, Sensen Gao, Simeng Qin, Ke Ma, Xinfeng Li, Yihao Huang,
Wei Dong, Yang Liu, Xiaochun Cao | Evolution-based Region Adversarial Prompt Learning for Robustness
Enhancement in Vision-Language Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large pre-trained vision-language models (VLMs), such as CLIP, demonstrate
impressive generalization but remain highly vulnerable to adversarial examples
(AEs). Previous work has explored robust text prompts through adversarial
training, achieving some improvement in both robustness and generalization.
However, they primarily rely on single-gradient direction perturbations (e.g.,
PGD) to generate AEs, which lack diversity, resulting in limited improvement in
adversarial robustness. To address these limitations, we propose an
evolution-based region adversarial prompt tuning method called ER-APT, which
combines gradient methods with genetic evolution to generate more diverse and
challenging AEs. In each training iteration, we first generate AEs using
traditional gradient-based methods. Subsequently, a genetic evolution mechanism
incorporating selection, mutation, and crossover is applied to optimize the
AEs, ensuring a broader and more aggressive perturbation distribution. The final
evolved AEs are used for prompt tuning, achieving region-based adversarial
optimization instead of conventional single-point adversarial prompt tuning. We
also propose a dynamic loss weighting method to adjust prompt learning
efficiency for accuracy and robustness. Experimental evaluations on various
benchmark datasets demonstrate the superiority of our proposed method,
outperforming state-of-the-art APT methods. The code is released at
https://github.com/jiaxiaojunQAQ/ER-APT.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 07:08:47 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 02:58:59 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jia",
"Xiaojun",
""
],
[
"Gao",
"Sensen",
""
],
[
"Qin",
"Simeng",
""
],
[
"Ma",
"Ke",
""
],
[
"Li",
"Xinfeng",
""
],
[
"Huang",
"Yihao",
""
],
[
"Dong",
"Wei",
""
],
[
"Liu",
"Yang",
""
],
[
"Cao",
"Xiaochun",
""
]
] | TITLE: Evolution-based Region Adversarial Prompt Learning for Robustness
Enhancement in Vision-Language Models
ABSTRACT: Large pre-trained vision-language models (VLMs), such as CLIP, demonstrate
impressive generalization but remain highly vulnerable to adversarial examples
(AEs). Previous work has explored robust text prompts through adversarial
training, achieving some improvement in both robustness and generalization.
However, they primarily rely on single-gradient direction perturbations (e.g.,
PGD) to generate AEs, which lack diversity, resulting in limited improvement in
adversarial robustness. To address these limitations, we propose an
evolution-based region adversarial prompt tuning method called ER-APT, which
combines gradient methods with genetic evolution to generate more diverse and
challenging AEs. In each training iteration, we first generate AEs using
traditional gradient-based methods. Subsequently, a genetic evolution mechanism
incorporating selection, mutation, and crossover is applied to optimize the
AEs, ensuring a broader and more aggressive perturbation distribution. The final
evolved AEs are used for prompt tuning, achieving region-based adversarial
optimization instead of conventional single-point adversarial prompt tuning. We
also propose a dynamic loss weighting method to adjust prompt learning
efficiency for accuracy and robustness. Experimental evaluations on various
benchmark datasets demonstrate the superiority of our proposed method,
outperforming state-of-the-art APT methods. The code is released at
https://github.com/jiaxiaojunQAQ/ER-APT.
|
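The ER-APT abstract above evolves adversarial examples with selection, crossover, and mutation on top of gradient-generated perturbations. The loop below is a compact, assumption-laden sketch of that genetic step; `attack_loss` is a hypothetical fitness callable, and the L-infinity projection via clamping is an illustrative choice, not the authors' exact procedure.

```python
import torch

def evolve_perturbations(deltas, attack_loss, eps=8 / 255, generations=5,
                         mutate_std=1 / 255):
    """deltas: (P, C, H, W) population of perturbations; higher loss = stronger AE."""
    pop = deltas.clone()
    for _ in range(generations):
        fitness = torch.stack([attack_loss(d) for d in pop])          # evaluate
        top = fitness.argsort(descending=True)[: pop.size(0) // 2]    # selection
        parents = pop[top]
        # crossover: mix random pairs of surviving parents element-wise
        idx = torch.randint(parents.size(0), (pop.size(0), 2))
        mask = (torch.rand_like(pop) < 0.5).float()
        children = mask * parents[idx[:, 0]] + (1 - mask) * parents[idx[:, 1]]
        # mutation: small Gaussian noise, then project back into the eps-ball
        children = children + mutate_std * torch.randn_like(children)
        pop = children.clamp(-eps, eps)
    return pop
```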
2503.13074 | Shaolin Su | Shaolin Su, Josep M. Rocafort, Danna Xue, David Serrano-Lozano, Lei
Sun, Javier Vazquez-Corral | Rethinking Image Evaluation in Super-Resolution | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | While recent advancing image super-resolution (SR) techniques are continually
improving the perceptual quality of their outputs, they can usually fail in
quantitative evaluations. This inconsistency leads to a growing distrust in
existing image metrics for SR evaluations. Though image evaluation depends on
both the metric and the reference ground truth (GT), researchers typically do
not inspect the role of GTs, as they are generally accepted as `perfect'
references. However, due to the data being collected in the early years and the
lack of control over other types of distortions, we point out that GTs in
existing SR datasets can exhibit relatively poor quality, which leads to biased
evaluations. Following this observation, in this paper, we are interested in
the following questions: Are GT images in existing SR datasets 100% trustworthy
for model evaluations? How does GT quality affect this evaluation? And how to
make fair evaluations if there exist imperfect GTs? To answer these questions,
this paper presents two main contributions. First, by systematically analyzing
seven state-of-the-art SR models across three real-world SR datasets, we show
that SR performances can be consistently affected across models by low-quality
GTs, and models can perform quite differently when GT quality is controlled.
Second, we propose a novel perceptual quality metric, Relative Quality Index
(RQI), that measures the relative quality discrepancy of image pairs, thus
addressing the biased evaluations caused by unreliable GTs. Our proposed model
achieves significantly better consistency with human opinions. We expect our
work to provide insights for the SR community on how future datasets, models,
and metrics should be developed.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 11:25:48 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 13:39:06 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Su",
"Shaolin",
""
],
[
"Rocafort",
"Josep M.",
""
],
[
"Xue",
"Danna",
""
],
[
"Serrano-Lozano",
"David",
""
],
[
"Sun",
"Lei",
""
],
[
"Vazquez-Corral",
"Javier",
""
]
] | TITLE: Rethinking Image Evaluation in Super-Resolution
ABSTRACT: While recent advancing image super-resolution (SR) techniques are continually
improving the perceptual quality of their outputs, they can usually fail in
quantitative evaluations. This inconsistency leads to a growing distrust in
existing image metrics for SR evaluations. Though image evaluation depends on
both the metric and the reference ground truth (GT), researchers typically do
not inspect the role of GTs, as they are generally accepted as `perfect'
references. However, due to the data being collected in the early years and the
lack of control over other types of distortions, we point out that GTs in
existing SR datasets can exhibit relatively poor quality, which leads to biased
evaluations. Following this observation, in this paper, we are interested in
the following questions: Are GT images in existing SR datasets 100% trustworthy
for model evaluations? How does GT quality affect this evaluation? And how to
make fair evaluations if there exist imperfect GTs? To answer these questions,
this paper presents two main contributions. First, by systematically analyzing
seven state-of-the-art SR models across three real-world SR datasets, we show
that SR performances can be consistently affected across models by low-quality
GTs, and models can perform quite differently when GT quality is controlled.
Second, we propose a novel perceptual quality metric, Relative Quality Index
(RQI), which measures the relative quality discrepancy of image pairs, thereby
mitigating the biased evaluations caused by unreliable GTs. Our proposed model
achieves significantly better consistency with human opinions. We expect our
work to provide insights for the SR community on how future datasets, models,
and metrics should be developed.
|
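A minimal sketch, assuming only NumPy, of the idea behind a relative quality
comparison. The sharpness proxy and function names below are placeholders, not
the RQI metric proposed in the paper; the snippet only illustrates scoring the
SR output and the GT with the same no-reference measure and reporting their
discrepancy instead of trusting the GT as a perfect reference.

```python
import numpy as np

def sharpness_proxy(img: np.ndarray) -> float:
    """Crude no-reference quality proxy: mean gradient magnitude (placeholder)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def relative_quality(sr_img: np.ndarray, gt_img: np.ndarray) -> float:
    """Signed discrepancy; positive means the SR output scores above the GT."""
    q_sr, q_gt = sharpness_proxy(sr_img), sharpness_proxy(gt_img)
    return (q_sr - q_gt) / (abs(q_gt) + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.random((64, 64))                       # stand-in "ground truth"
    sr = gt + 0.05 * rng.standard_normal((64, 64))  # stand-in "SR output"
    print(f"relative quality discrepancy: {relative_quality(sr, gt):+.4f}")
```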
2503.13461 | Euxane TRAN-GIRARD | Euxane Tran-Girard (LIGM, CNRS), Laurent Bulteau (LIGM, CNRS),
Pierre-Yves David | CARDS: A collection of package, revision, and miscellaneous dependency
graphs | null | null | null | null | cs.DB cs.DL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CARDS (Corpus of Acyclic Repositories and Dependency Systems) is a collection
of directed graphs which express dependency relations, extracted from diverse
real-world sources such as package managers, version control systems, and event
graphs. Each graph contains anywhere from thousands to hundreds of millions of
nodes and edges, which are normalized into a simple, unified format. Both
cyclic and acyclic variants are included (as some graphs, such as citation
networks, are not entirely acyclic). The dataset is suitable for studying the
structure of different kinds of dependencies, enabling the characterization and
distinction of various dependency graph types. It has been utilized for
developing and testing efficient algorithms which leverage the specificities of
source version control graphs. The collection is publicly available at
doi.org/10.5281/zenodo.14245890.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 08:43:01 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Tran-Girard",
"Euxane",
"",
"LIGM, CNRS"
],
[
"Bulteau",
"Laurent",
"",
"LIGM, CNRS"
],
[
"David",
"Pierre-Yves",
""
]
] | TITLE: CARDS: A collection of package, revision, and miscellaneous dependency
graphs
ABSTRACT: CARDS (Corpus of Acyclic Repositories and Dependency Systems) is a collection
of directed graphs which express dependency relations, extracted from diverse
real-world sources such as package managers, version control systems, and event
graphs. Each graph contains anywhere from thousands to hundreds of millions of
nodes and edges, which are normalized into a simple, unified format. Both
cyclic and acyclic variants are included (as some graphs, such as citation
networks, are not entirely acyclic). The dataset is suitable for studying the
structure of different kinds of dependencies, enabling the characterization and
distinction of various dependency graph types. It has been utilized for
developing and testing efficient algorithms which leverage the specificities of
source version control graphs. The collection is publicly available at
doi.org/10.5281/zenodo.14245890.
|
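The abstract does not spell out the on-disk layout of the CARDS graphs, so the
loader below is a hedged sketch that assumes a plain whitespace-separated edge
list (one "dependent dependency" pair per line) extracted from the Zenodo
archive; the file name is hypothetical and the parsing should be adapted to the
real format.

```python
import networkx as nx

def load_dependency_graph(path: str) -> nx.DiGraph:
    """Build a directed dependency graph from an assumed edge-list text file."""
    g = nx.DiGraph()
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2 and not parts[0].startswith("#"):
                g.add_edge(parts[0], parts[1])  # edge: dependent -> dependency
    return g

if __name__ == "__main__":
    g = load_dependency_graph("cards_edges.txt")  # hypothetical local file
    print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
    print("acyclic:", nx.is_directed_acyclic_graph(g))
```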
2503.13463 | Marco Rondina | Marco Rondina, Antonio Vetr\`o, Juan Carlos De Martin | Completeness of Datasets Documentation on ML/AI repositories: an
Empirical Investigation | null | Progress in Artificial Intelligence. EPIA 2023. Lecture Notes in
Computer Science(), vol 14115. Springer, Cham | 10.1007/978-3-031-49008-8_7 | null | cs.DL cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | ML/AI is the field of computer science and computer engineering that arguably
received the most attention and funding over the last decade. Data is the key
element of ML/AI, so it is becoming increasingly important to ensure that users
are fully aware of the quality of the datasets that they use, and of the
process generating them, so that possible negative downstream impacts can be
tracked, analysed, and, where possible, mitigated. One of the tools that can be
useful in this respect is dataset documentation. The aim
of this work is to investigate the state of dataset documentation practices,
measuring the completeness of the documentation of several popular datasets in
ML/AI repositories. We created a dataset documentation schema -- the
Documentation Test Sheet (DTS) -- that identifies the information that should
always be attached to a dataset (to ensure proper dataset choice and informed
use), according to relevant studies in the literature. We verified 100 popular
datasets from four different repositories with the DTS to investigate which
information was present. Overall, we observed a lack of relevant documentation,
especially about the context of data collection and data processing,
highlighting a paucity of transparency.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 12:31:42 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Rondina",
"Marco",
""
],
[
"Vetrò",
"Antonio",
""
],
[
"De Martin",
"Juan Carlos",
""
]
] | TITLE: Completeness of Datasets Documentation on ML/AI repositories: an
Empirical Investigation
ABSTRACT: ML/AI is the field of computer science and computer engineering that arguably
received the most attention and funding over the last decade. Data is the key
element of ML/AI, so it is becoming increasingly important to ensure that users
are fully aware of the quality of the datasets that they use, and of the
process generating them, so that possible negative downstream impacts can be
tracked, analysed, and, where possible, mitigated. One of the tools that can be
useful in this respect is dataset documentation. The aim
of this work is to investigate the state of dataset documentation practices,
measuring the completeness of the documentation of several popular datasets in
ML/AI repositories. We created a dataset documentation schema -- the
Documentation Test Sheet (DTS) -- that identifies the information that should
always be attached to a dataset (to ensure proper dataset choice and informed
use), according to relevant studies in the literature. We verified 100 popular
datasets from four different repositories with the DTS to investigate which
information was present. Overall, we observed a lack of relevant documentation,
especially about the context of data collection and data processing,
highlighting a paucity of transparency.
|
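As a toy illustration of how a checklist-based audit in the spirit of the
Documentation Test Sheet can be scored, the snippet below computes a
completeness fraction per dataset. The item names are placeholders, not the
actual DTS entries defined in the paper.

```python
from typing import Dict, List

# Placeholder checklist items, not the DTS items from the paper.
CHECKLIST: List[str] = [
    "motivation", "composition", "collection_process",
    "preprocessing", "intended_uses", "distribution", "maintenance",
]

def completeness(observed: Dict[str, bool]) -> float:
    """Fraction of checklist items that are documented for one dataset."""
    return sum(bool(observed.get(item)) for item in CHECKLIST) / len(CHECKLIST)

if __name__ == "__main__":
    example = {"motivation": True, "composition": True, "intended_uses": True}
    print(f"documentation completeness: {completeness(example):.0%}")
```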
2503.13465 | Jinfeng Wang | Jinfeng Wang, Yanhao Huang, Sifan Song, Boqian Wang, Jionglong Su,
Jiaman Ding | A novel Fourier Adjacency Transformer for advanced EEG emotion
recognition | null | null | null | null | eess.SP cs.AI cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | EEG emotion recognition faces significant hurdles due to noise interference,
signal nonstationarity, and the inherent complexity of brain activity, which
make accurate emotion classification difficult. In this study, we present the
Fourier
Adjacency Transformer, a novel framework that seamlessly integrates
Fourier-based periodic analysis with graph-driven structural modeling. Our
method first leverages novel Fourier-inspired modules to extract periodic
features from embedded EEG signals, effectively decoupling them from aperiodic
components. Subsequently, we employ an adjacency attention scheme to reinforce
universal inter-channel correlation patterns, coupling these patterns with
their sample-based counterparts. Empirical evaluations on SEED and DEAP
datasets demonstrate that our method surpasses existing state-of-the-art
techniques, achieving an improvement of approximately 6.5% in recognition
accuracy. By unifying periodicity and structural insights, this framework
offers a promising direction for future research in EEG emotion analysis.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:15:12 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Jinfeng",
""
],
[
"Huang",
"Yanhao",
""
],
[
"Song",
"Sifan",
""
],
[
"Wang",
"Boqian",
""
],
[
"Su",
"Jionglong",
""
],
[
"Ding",
"Jiaman",
""
]
] | TITLE: A novel Fourier Adjacency Transformer for advanced EEG emotion
recognition
ABSTRACT: EEG emotion recognition faces significant hurdles due to noise interference,
signal nonstationarity, and the inherent complexity of brain activity, which
make accurate emotion classification difficult. In this study, we present the
Fourier
Adjacency Transformer, a novel framework that seamlessly integrates
Fourier-based periodic analysis with graph-driven structural modeling. Our
method first leverages novel Fourier-inspired modules to extract periodic
features from embedded EEG signals, effectively decoupling them from aperiodic
components. Subsequently, we employ an adjacency attention scheme to reinforce
universal inter-channel correlation patterns, coupling these patterns with
their sample-based counterparts. Empirical evaluations on SEED and DEAP
datasets demonstrate that our method surpasses existing state-of-the-art
techniques, achieving an improvement of approximately 6.5% in recognition
accuracy. By unifying periodicity and structural insights, this framework
offers a promising direction for future research in EEG emotion analysis.
|
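A simplified sketch of the two ingredients the abstract describes: per-channel
Fourier band-power features and an inter-channel correlation pattern that could
serve as a soft adjacency. This is not the Fourier Adjacency Transformer
itself, only an illustration (with made-up shapes and sampling rate) of inputs
such a model could consume.

```python
import numpy as np

def band_power(channel: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of one channel within the [lo, hi) Hz band."""
    spectrum = np.abs(np.fft.rfft(channel)) ** 2
    freqs = np.fft.rfftfreq(channel.shape[-1], d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return float(spectrum[mask].mean())

def fourier_features(eeg: np.ndarray, fs: float = 128.0) -> np.ndarray:
    """Per-channel band powers (channels x bands) for theta/alpha/beta/gamma."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
    return np.array([[band_power(ch, fs, lo, hi) for lo, hi in bands] for ch in eeg])

def soft_adjacency(eeg: np.ndarray) -> np.ndarray:
    """Channel-by-channel Pearson correlation as a simple adjacency pattern."""
    return np.corrcoef(eeg)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((32, 1024))  # 32 channels, 8 s at 128 Hz (made up)
    print(fourier_features(eeg).shape, soft_adjacency(eeg).shape)
```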