id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2501.05037 | Rujie Wu | Rujie Wu, Xiaojian Ma, Hai Ci, Yue Fan, Yuxuan Wang, Haozhe Zhao, Qing
Li, Yizhou Wang | LongViTU: Instruction Tuning for Long-Form Video Understanding | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces LongViTU, a large-scale (~121k QA pairs, ~900h videos),
automatically generated dataset for long-form video understanding. We propose a
systematic approach that organizes videos into a hierarchical tree structure
for QA generation and incorporates self-revision mechanisms to ensure
high-quality QA pairs. Each QA pair in LongViTU features: 1) long-term context
(average certificate length of 4.6 minutes); 2) rich knowledge and condensed
reasoning (commonsense, causality, planning, etc.). We also offer explicit
timestamp annotations of relevant events for each QA pair. We have conducted
extensive human studies on LongViTU, and the results prove the quality of our
dataset. To better evaluate the challenges posed by LongViTU's emphasis on
long-term context and condensed reasoning, we manually curate a subset of
LongViTU into a benchmark. Evaluations using a state-of-the-art open-source
model (LongVU), a proprietary model (Gemini-1.5-Pro), and human annotators
yield GPT-4 scores of 49.9, 52.3, and 81.0, respectively, underscoring the
substantial difficulty presented by LongViTU questions. Performing supervised
fine-tuning (SFT) of LongVU and LLaVA-Video on LongViTU data results in average
performance gains of 2.5% and 3.7%, respectively, across a suite of long video
understanding benchmarks (EgoSchema, VideoMME-Long, MLVU, LVBench).
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 07:51:14 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 09:39:11 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wu",
"Rujie",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Ci",
"Hai",
""
],
[
"Fan",
"Yue",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Zhao",
"Haozhe",
""
],
[
"Li",
"Qing",
""
],
[
"Wang",
"Yizhou",
""
]
] | TITLE: LongViTU: Instruction Tuning for Long-Form Video Understanding
ABSTRACT: This paper introduces LongViTU, a large-scale (~121k QA pairs, ~900h videos),
automatically generated dataset for long-form video understanding. We propose a
systematic approach that organizes videos into a hierarchical tree structure
for QA generation and incorporates self-revision mechanisms to ensure
high-quality QA pairs. Each QA pair in LongViTU features: 1) long-term context
(average certificate length of 4.6 minutes); 2) rich knowledge and condensed
reasoning (commonsense, causality, planning, etc.). We also offer explicit
timestamp annotations of relevant events for each QA pair. We have conducted
extensive human studies on LongViTU, and the results prove the quality of our
dataset. To better evaluate the challenges posed by LongViTU's emphasis on
long-term context and condensed reasoning, we manually curate a subset of
LongViTU into a benchmark. Evaluations using a state-of-the-art open-source
model (LongVU), a proprietary model (Gemini-1.5-Pro), and human annotators
yield GPT-4 scores of 49.9, 52.3, and 81.0, respectively, underscoring the
substantial difficulty presented by LongViTU questions. Performing supervised
fine-tuning (SFT) of LongVU and LLaVA-Video on LongViTU data results in average
performance gains of 2.5% and 3.7%, respectively, across a suite of long video
understanding benchmarks (EgoSchema, VideoMME-Long, MLVU, LVBench).
|
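The LongViTU record above centers on organizing videos into a hierarchical tree for QA generation, followed by a self-revision pass. As a rough illustration only (not the authors' pipeline), the sketch below builds a tiny tree of time spans and generates one QA pair per node; `generate_qa` and `revise_qa` are hypothetical stand-ins for the LLM calls.

```python
# Minimal sketch (not the authors' pipeline) of the idea described above:
# organize a long video into a hierarchical tree of time spans and generate
# QA pairs at each node, followed by a self-revision pass. The LLM calls are
# stubbed out; generate_qa/revise_qa are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class VideoNode:
    span: Tuple[float, float]             # (start_sec, end_sec) covered by this node
    caption: str                          # summary text for this span
    children: List["VideoNode"] = field(default_factory=list)


def generate_qa(node: VideoNode) -> dict:
    # Placeholder for an LLM call that turns a span summary into a QA pair.
    return {"question": f"What happens during {node.span}?",
            "answer": node.caption,
            "timestamps": node.span}


def revise_qa(qa: dict) -> dict:
    # Placeholder self-revision step; a real pipeline would re-prompt an LLM
    # to check grounding and rewrite low-quality pairs.
    qa["answer"] = qa["answer"].strip()
    return qa


def collect_qa(root: VideoNode) -> List[dict]:
    # Depth-first traversal: higher nodes yield long-context questions,
    # leaves yield short, local ones.
    pairs = [revise_qa(generate_qa(root))]
    for child in root.children:
        pairs.extend(collect_qa(child))
    return pairs


if __name__ == "__main__":
    leaf1 = VideoNode((0, 120), "the person chops vegetables")
    leaf2 = VideoNode((120, 300), "the person boils pasta")
    root = VideoNode((0, 300), "the person prepares dinner", [leaf1, leaf2])
    for qa in collect_qa(root):
        print(qa)
```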
2501.11441 | Maria Taboada | Maria Taboada, Diego Martinez, Mohammed Arideh, Rosa Mosquera | Ontology Matching with Large Language Models and Prioritized Depth-First
Search | null | null | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/4.0/ | Ontology matching (OM) plays a key role in enabling data interoperability and
knowledge sharing, but it remains challenging due to the need for large
training datasets and limited vocabulary processing in machine learning
approaches. Recently, methods based on Large Language Models (LLMs) have shown
great promise in OM, particularly through the use of a retrieve-then-prompt
pipeline. In this approach, relevant target entities are first retrieved and
then used to prompt the LLM to predict the final matches. Despite their
potential, these systems still present limited performance and high
computational overhead. To address these issues, we introduce MILA, a novel
approach that embeds a retrieve-identify-prompt pipeline within a prioritized
depth-first search (PDFS) strategy. This approach efficiently identifies a
large number of semantic correspondences with high accuracy, limiting LLM
requests to only the most borderline cases. We evaluated MILA using the
biomedical challenge proposed in the 2023 and 2024 editions of the Ontology
Alignment Evaluation Initiative. Our method achieved the highest F-Measure in
four of the five unsupervised tasks, outperforming state-of-the-art OM systems
by up to 17%. It also performed better than, or comparably to, the leading
supervised OM systems. MILA further exhibited task-agnostic performance,
remaining stable across all tasks and settings, while significantly reducing
LLM requests. These findings highlight that high-performance LLM-based OM can
be achieved through a combination of programmed (PDFS), learned (embedding
vectors), and prompting-based heuristics, without the need for domain-specific
heuristics or fine-tuning.
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2025 12:29:09 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 11:29:21 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Taboada",
"Maria",
""
],
[
"Martinez",
"Diego",
""
],
[
"Arideh",
"Mohammed",
""
],
[
"Mosquera",
"Rosa",
""
]
] | TITLE: Ontology Matching with Large Language Models and Prioritized Depth-First
Search
ABSTRACT: Ontology matching (OM) plays a key role in enabling data interoperability and
knowledge sharing, but it remains challenging due to the need for large
training datasets and limited vocabulary processing in machine learning
approaches. Recently, methods based on Large Language Models (LLMs) have shown
great promise in OM, particularly through the use of a retrieve-then-prompt
pipeline. In this approach, relevant target entities are first retrieved and
then used to prompt the LLM to predict the final matches. Despite their
potential, these systems still present limited performance and high
computational overhead. To address these issues, we introduce MILA, a novel
approach that embeds a retrieve-identify-prompt pipeline within a prioritized
depth-first search (PDFS) strategy. This approach efficiently identifies a
large number of semantic correspondences with high accuracy, limiting LLM
requests to only the most borderline cases. We evaluated MILA using the
biomedical challenge proposed in the 2023 and 2024 editions of the Ontology
Alignment Evaluation Initiative. Our method achieved the highest F-Measure in
four of the five unsupervised tasks, outperforming state-of-the-art OM systems
by up to 17%. It also performed better than, or comparably to, the leading
supervised OM systems. MILA further exhibited task-agnostic performance,
remaining stable across all tasks and settings, while significantly reducing
LLM requests. These findings highlight that high-performance LLM-based OM can
be achieved through a combination of programmed (PDFS), learned (embedding
vectors), and prompting-based heuristics, without the need for domain-specific
heuristics or fine-tuning.
|
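The MILA abstract above describes a retrieve-identify-prompt pipeline embedded in a prioritized depth-first search, with LLM calls reserved for borderline candidates. The following is a minimal sketch of that control flow under assumed components: the retriever, the `llm_confirm` verifier, and the accept/reject thresholds are illustrative placeholders, not MILA's actual settings.

```python
# Illustrative sketch of a retrieve-identify-prompt loop inside a prioritized
# depth-first search, loosely following the description above. Thresholds,
# the scorer, and llm_confirm are assumptions, not MILA's actual components.
import heapq
from typing import Callable, Dict, List, Tuple


def match_ontologies(
    source_entities: List[str],
    retrieve: Callable[[str], List[Tuple[str, float]]],  # returns (target, score), best first
    llm_confirm: Callable[[str, str], bool],              # expensive LLM check for borderline pairs
    accept_thr: float = 0.95,
    reject_thr: float = 0.60,
) -> Dict[str, str]:
    mappings: Dict[str, str] = {}
    # Max-heap of (negated best candidate score, source entity): process the
    # most promising source entities first (the "prioritized" part).
    heap = [(-retrieve(s)[0][1], s) for s in source_entities if retrieve(s)]
    heapq.heapify(heap)
    while heap:
        _, src = heapq.heappop(heap)
        # Depth-first over this entity's ranked candidates.
        for tgt, score in retrieve(src):
            if score >= accept_thr:           # high confidence: accept without the LLM
                mappings[src] = tgt
                break
            if score < reject_thr:            # low confidence: prune this branch
                break
            if llm_confirm(src, tgt):         # borderline: ask the LLM
                mappings[src] = tgt
                break
    return mappings


if __name__ == "__main__":
    candidates = {"myocardial infarction": [("heart attack", 0.97), ("angina", 0.70)],
                  "renal calculus": [("kidney stone", 0.72), ("gallstone", 0.55)]}
    result = match_ontologies(
        list(candidates),
        retrieve=lambda s: candidates[s],
        llm_confirm=lambda s, t: True,        # stand-in for a prompted LLM
    )
    print(result)
```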
2501.12911 | Abdulkadir Korkmaz | Abdulkadir Korkmaz and Praveen Rao | A Selective Homomorphic Encryption Approach for Faster
Privacy-Preserving Federated Learning | 23 pages, 32 figures | null | null | null | cs.CR cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning is a machine learning method that supports training models
on decentralized devices or servers, where each holds its local data, removing
the need for data exchange. This approach is especially useful in healthcare,
as it enables training on sensitive data without needing to share them. The
nature of federated learning necessitates robust security precautions due to
data leakage concerns during communication. To address this issue, we propose a
new approach that employs selective encryption, homomorphic encryption,
differential privacy, and bit-wise scrambling to minimize data leakage while
achieving good execution performance. Our technique, FAS (fast and secure
federated learning), is used to train deep learning models on medical imaging
data. We implemented our technique using the Flower framework and compared it with
a state-of-the-art federated learning approach that also uses selective
homomorphic encryption. Our experiments were run in a cluster of eleven
physical machines to create a real-world federated learning scenario on
different datasets. We observed that our approach is up to 90% faster than
applying fully homomorphic encryption on the model weights. In addition, we can
avoid the pretraining step that is required by our competitor and can save up
to 46% in terms of total execution time. While our approach was faster, it
obtained similar security results as the competitor.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2025 14:37:44 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 21:23:54 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Mar 2025 17:44:27 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Korkmaz",
"Abdulkadir",
""
],
[
"Rao",
"Praveen",
""
]
] | TITLE: A Selective Homomorphic Encryption Approach for Faster
Privacy-Preserving Federated Learning
ABSTRACT: Federated learning is a machine learning method that supports training models
on decentralized devices or servers, where each holds its local data, removing
the need for data exchange. This approach is especially useful in healthcare,
as it enables training on sensitive data without needing to share them. The
nature of federated learning necessitates robust security precautions due to
data leakage concerns during communication. To address this issue, we propose a
new approach that employs selective encryption, homomorphic encryption,
differential privacy, and bit-wise scrambling to minimize data leakage while
achieving good execution performance. Our technique, FAS (fast and secure
federated learning), is used to train deep learning models on medical imaging
data. We implemented our technique using the Flower framework and compared it with
a state-of-the-art federated learning approach that also uses selective
homomorphic encryption. Our experiments were run in a cluster of eleven
physical machines to create a real-world federated learning scenario on
different datasets. We observed that our approach is up to 90% faster than
applying fully homomorphic encryption on the model weights. In addition, we can
avoid the pretraining step that is required by our competitor and can save up
to 46% in terms of total execution time. While our approach was faster, it
obtained similar security results as the competitor.
|
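The FAS record above lists selective encryption, homomorphic encryption, differential privacy, and bit-wise scrambling as its ingredients. The sketch below shows how a client update could combine them, under loud assumptions: `he_encrypt` is a placeholder rather than a real HE library call, and the 10% selection ratio and noise scale are arbitrary choices, not values from the paper.

```python
# Rough sketch of the ingredients listed above for a client update: encrypt
# only the most significant fraction of weights, add Gaussian DP noise, and
# bit-scramble the rest. `he_encrypt` is a placeholder, not a real HE library
# call, and the 10% selection ratio / noise scale are arbitrary assumptions.
import numpy as np


def he_encrypt(values: np.ndarray) -> list:
    # Stand-in for homomorphic encryption (e.g., a CKKS ciphertext); here we
    # just wrap the values so the control flow stays visible.
    return [("ct", float(v)) for v in values]


def scramble_bits(values: np.ndarray, key: int) -> np.ndarray:
    # Cheap obfuscation of the unencrypted weights: XOR the raw float bits
    # with a keyed mask. Reversible by XORing again with the same key.
    rng = np.random.default_rng(key)
    mask = rng.integers(0, 2**32 - 1, size=values.shape, dtype=np.uint32)
    bits = values.astype(np.float32).view(np.uint32)
    return (bits ^ mask).view(np.float32)


def protect_update(weights: np.ndarray, ratio: float = 0.1,
                   noise_std: float = 1e-3, key: int = 42) -> dict:
    flat = weights.ravel().astype(np.float32)
    noisy = flat + np.random.normal(0.0, noise_std, size=flat.shape).astype(np.float32)
    k = max(1, int(ratio * flat.size))
    top = np.argsort(np.abs(noisy))[-k:]                  # largest-magnitude weights
    rest = np.setdiff1d(np.arange(flat.size), top)
    return {
        "encrypted": he_encrypt(noisy[top]),              # sensitive fraction, "encrypted"
        "encrypted_idx": top,
        "scrambled": scramble_bits(noisy[rest], key),     # remaining weights, bit-scrambled
        "scrambled_idx": rest,
    }


if __name__ == "__main__":
    update = protect_update(np.random.randn(1000))
    print(len(update["encrypted"]), "weights encrypted,",
          update["scrambled"].size, "weights scrambled")
```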
2502.01894 | Goodarz Mehr | Goodarz Mehr and Azim Eskandarian | SimBEV: A Synthetic Multi-Task Multi-Sensor Driving Data Generation Tool
and Dataset | null | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bird's-eye view (BEV) perception has garnered significant attention in
autonomous driving in recent years, in part because BEV representation
facilitates multi-modal sensor fusion. BEV representation enables a variety of
perception tasks including BEV segmentation, a concise view of the environment
useful for planning a vehicle's trajectory. However, this representation is not
fully supported by existing datasets, and creation of new datasets for this
purpose can be a time-consuming endeavor. To address this challenge, we
introduce SimBEV. SimBEV is a randomized synthetic data generation tool that is
extensively configurable and scalable, supports a wide array of sensors,
incorporates information from multiple sources to capture accurate BEV ground
truth, and enables a variety of perception tasks including BEV segmentation and
3D object detection. SimBEV is used to create the SimBEV dataset, a large
collection of annotated perception data from diverse driving scenarios. SimBEV
and the SimBEV dataset are open and available to the public.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 00:00:06 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 20:42:44 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Mehr",
"Goodarz",
""
],
[
"Eskandarian",
"Azim",
""
]
] | TITLE: SimBEV: A Synthetic Multi-Task Multi-Sensor Driving Data Generation Tool
and Dataset
ABSTRACT: Bird's-eye view (BEV) perception has garnered significant attention in
autonomous driving in recent years, in part because BEV representation
facilitates multi-modal sensor fusion. BEV representation enables a variety of
perception tasks including BEV segmentation, a concise view of the environment
useful for planning a vehicle's trajectory. However, this representation is not
fully supported by existing datasets, and creation of new datasets for this
purpose can be a time-consuming endeavor. To address this challenge, we
introduce SimBEV. SimBEV is a randomized synthetic data generation tool that is
extensively configurable and scalable, supports a wide array of sensors,
incorporates information from multiple sources to capture accurate BEV ground
truth, and enables a variety of perception tasks including BEV segmentation and
3D object detection. SimBEV is used to create the SimBEV dataset, a large
collection of annotated perception data from diverse driving scenarios. SimBEV
and the SimBEV dataset are open and available to the public.
|
2502.05628 | Houcheng Jiang | Houcheng Jiang, Junfeng Fang, Ningyu Zhang, Guojun Ma, Mingyang Wan,
Xiang Wang, Xiangnan He, Tat-seng Chua | AnyEdit: Edit Any Knowledge Encoded in Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) often produce incorrect or outdated information,
necessitating efficient and precise knowledge updates. Current model editing
methods, however, struggle with long-form knowledge in diverse formats, such as
poetry, code snippets, and mathematical derivations. These limitations arise
from their reliance on editing a single token's hidden state, a limitation we
term "efficacy barrier". To solve this, we propose AnyEdit, a new
autoregressive editing paradigm. It decomposes long-form knowledge into
sequential chunks and iteratively edits the key token in each chunk, ensuring
consistent and accurate outputs. Theoretically, we ground AnyEdit in the Chain
Rule of Mutual Information, showing its ability to update any knowledge within
LLMs. Empirically, it outperforms strong baselines by 21.5% on benchmarks
including UnKEBench, AKEW, and our new EditEverything dataset for long-form
diverse-formatted knowledge. Additionally, AnyEdit serves as a plug-and-play
framework, enabling current editing methods to update knowledge with arbitrary
length and format, significantly advancing the scope and practicality of LLM
knowledge editing.
| [
{
"version": "v1",
"created": "Sat, 8 Feb 2025 16:18:37 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 03:21:36 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Jiang",
"Houcheng",
""
],
[
"Fang",
"Junfeng",
""
],
[
"Zhang",
"Ningyu",
""
],
[
"Ma",
"Guojun",
""
],
[
"Wan",
"Mingyang",
""
],
[
"Wang",
"Xiang",
""
],
[
"He",
"Xiangnan",
""
],
[
"Chua",
"Tat-seng",
""
]
] | TITLE: AnyEdit: Edit Any Knowledge Encoded in Language Models
ABSTRACT: Large language models (LLMs) often produce incorrect or outdated information,
necessitating efficient and precise knowledge updates. Current model editing
methods, however, struggle with long-form knowledge in diverse formats, such as
poetry, code snippets, and mathematical derivations. These limitations arise
from their reliance on editing a single token's hidden state, a limitation we
term "efficacy barrier". To solve this, we propose AnyEdit, a new
autoregressive editing paradigm. It decomposes long-form knowledge into
sequential chunks and iteratively edits the key token in each chunk, ensuring
consistent and accurate outputs. Theoretically, we ground AnyEdit in the Chain
Rule of Mutual Information, showing its ability to update any knowledge within
LLMs. Empirically, it outperforms strong baselines by 21.5% on benchmarks
including UnKEBench, AKEW, and our new EditEverything dataset for long-form
diverse-formatted knowledge. Additionally, AnyEdit serves as a plug-and-play
framework, enabling current editing methods to update knowledge with arbitrary
length and format, significantly advancing the scope and practicality of LLM
knowledge editing.
|
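The AnyEdit abstract above decomposes long-form knowledge into chunks and edits the key token of each chunk autoregressively. The loop below sketches only that decomposition: `edit_single_token` and `pick_key_token` are hypothetical placeholders for a ROME/MEMIT-style single-token editor and a key-token heuristic, and the fixed-size word chunking is an assumption.

```python
# Schematic loop for the chunked editing idea described above: split long-form
# target knowledge into chunks and apply an existing single-token editor to the
# key token of each chunk, conditioning on the chunks already written.
# `edit_single_token` and `pick_key_token` are hypothetical placeholders.
from typing import Callable, List


def chunk_text(text: str, max_words: int = 12) -> List[str]:
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


def pick_key_token(chunk: str) -> str:
    # Toy heuristic: anchor the edit on the last word of the chunk; a real
    # system would use model-internal saliency instead.
    return chunk.split()[-1]


def anyedit_style_update(
    model_state: dict,
    prompt: str,
    target_text: str,
    edit_single_token: Callable[[dict, str, str], dict],
) -> dict:
    context = prompt
    for chunk in chunk_text(target_text):
        key = pick_key_token(chunk)
        # Each pass edits the model so that, given the running context, the
        # key token of the next chunk becomes highly probable.
        model_state = edit_single_token(model_state, context, key)
        context += " " + chunk                # autoregressively extend the context
    return model_state


if __name__ == "__main__":
    dummy_editor = lambda state, ctx, tok: {**state, "edits": state.get("edits", 0) + 1}
    final = anyedit_style_update({}, "Q: recite the poem. A:",
                                 "Shall I compare thee to a summer's day? "
                                 "Thou art more lovely and more temperate.",
                                 dummy_editor)
    print(final)
```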
2502.06874 | Yanming Guo | Yanming Guo, Xiao Qian, Kevin Credit, Jin Ma | Group Reasoning Emission Estimation Networks | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate greenhouse gas (GHG) emission reporting is critical for governments,
businesses, and investors. However, adoption remains limited, particularly among
small and medium enterprises, due to high implementation costs, fragmented
emission factor databases, and a lack of robust sector classification methods.
To address these challenges, we introduce Group Reasoning Emission Estimation
Networks (GREEN), an AI-driven carbon accounting framework that standardizes
enterprise-level emission estimation, constructs a large-scale benchmark
dataset, and leverages a novel reasoning approach with large language models
(LLMs). Specifically, we compile textual descriptions for 20,850 companies with
validated North American Industry Classification System (NAICS) labels and
align these with an economic model of carbon intensity factors. By reframing
sector classification as an information retrieval task, we fine-tune
Sentence-BERT models using a contrastive learning loss. To overcome the
limitations of single-stage models in handling thousands of hierarchical
categories, we propose a Group Reasoning method that ensembles LLM classifiers
based on the natural NAICS ontology, decomposing the task into multiple
sub-classification steps. We theoretically prove that this approach reduces
classification uncertainty and computational complexity. Experiments on 1,114
NAICS categories yield state-of-the-art performance (83.68% Top-1, 91.47%
Top-10 accuracy), and case studies on 20 companies report a mean absolute
percentage error (MAPE) of 45.88%. The project is available at:
https://huggingface.co/datasets/Yvnminc/ExioNAICS.
| [
{
"version": "v1",
"created": "Sat, 8 Feb 2025 09:02:43 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 06:37:40 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Guo",
"Yanming",
""
],
[
"Qian",
"Xiao",
""
],
[
"Credit",
"Kevin",
""
],
[
"Ma",
"Jin",
""
]
] | TITLE: Group Reasoning Emission Estimation Networks
ABSTRACT: Accurate greenhouse gas (GHG) emission reporting is critical for governments,
businesses, and investors. However, adoption remains limited, particularly among
small and medium enterprises, due to high implementation costs, fragmented
emission factor databases, and a lack of robust sector classification methods.
To address these challenges, we introduce Group Reasoning Emission Estimation
Networks (GREEN), an AI-driven carbon accounting framework that standardizes
enterprise-level emission estimation, constructs a large-scale benchmark
dataset, and leverages a novel reasoning approach with large language models
(LLMs). Specifically, we compile textual descriptions for 20,850 companies with
validated North American Industry Classification System (NAICS) labels and
align these with an economic model of carbon intensity factors. By reframing
sector classification as an information retrieval task, we fine-tune
Sentence-BERT models using a contrastive learning loss. To overcome the
limitations of single-stage models in handling thousands of hierarchical
categories, we propose a Group Reasoning method that ensembles LLM classifiers
based on the natural NAICS ontology, decomposing the task into multiple
sub-classification steps. We theoretically prove that this approach reduces
classification uncertainty and computational complexity. Experiments on 1,114
NAICS categories yield state-of-the-art performance (83.68% Top-1, 91.47%
Top-10 accuracy), and case studies on 20 companies report a mean absolute
percentage error (MAPE) of 45.88%. The project is available at:
https://huggingface.co/datasets/Yvnminc/ExioNAICS.
|
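The GREEN record above ensembles classifiers along the NAICS hierarchy, splitting one classification over thousands of codes into smaller sub-steps. The toy sketch below shows a two-step decomposition with stubbed per-level classifiers; the tiny NAICS table, keyword scorer, and two-level split are illustrative assumptions, not the paper's components.

```python
# Sketch of the group-reasoning idea above: instead of one classifier over
# ~1,100 NAICS codes, classify the 2-digit sector first, then choose among the
# codes inside that sector. The per-level classifiers are stubbed; a real
# system would back them with retrieval plus an LLM prompt per level.
from typing import Dict, List

NAICS: Dict[str, List[str]] = {
    "11": ["111110 Soybean Farming", "112111 Beef Cattle Ranching"],
    "51": ["511210 Software Publishers", "518210 Data Processing and Hosting"],
}


def classify_sector(description: str) -> str:
    # Placeholder for the first sub-classification step (2-digit sector).
    return "51" if "software" in description.lower() else "11"


def classify_within_sector(description: str, candidates: List[str]) -> str:
    # Placeholder for the second step: pick the best code inside the sector.
    # A keyword-overlap score stands in for an LLM / embedding ranker.
    words = set(description.lower().replace(".", " ").split())
    def overlap(code: str) -> int:
        return len(words & set(code.lower().split()))
    return max(candidates, key=overlap)


def group_reasoning_classify(description: str) -> str:
    sector = classify_sector(description)                        # step 1: coarse group
    return classify_within_sector(description, NAICS[sector])    # step 2: fine code


if __name__ == "__main__":
    print(group_reasoning_classify("A company that publishes enterprise software."))
```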
2502.09042 | Pittawat Taveekitworachai | Pittawat Taveekitworachai, Potsawee Manakul, Kasima Tharnpipitchai,
Kunat Pipatanakul | Typhoon T1: An Open Thai Reasoning Model | 25 pages, 6 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces Typhoon T1, an open effort to develop an open Thai
reasoning model. A reasoning model is a relatively new type of generative model
built on top of large language models (LLMs). A reasoning model generates a
long chain of thought before arriving at a final answer, an approach found to
improve performance on complex tasks. However, details on developing such a
model are limited, especially for reasoning models that can generate traces in
a low-resource language. Typhoon T1 presents an open effort that dives into the
details of developing a reasoning model in a more cost-effective way by
leveraging supervised fine-tuning using open datasets, instead of reinforcement
learning. This paper shares the details about synthetic data generation and
training, as well as our dataset and model weights. Additionally, we provide
insights gained from developing a reasoning model that generalizes across
domains and is capable of generating reasoning traces in a low-resource
language, using Thai as an example. We hope this open effort provides a
foundation for further research in this field.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 07:55:54 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 06:45:15 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Taveekitworachai",
"Pittawat",
""
],
[
"Manakul",
"Potsawee",
""
],
[
"Tharnpipitchai",
"Kasima",
""
],
[
"Pipatanakul",
"Kunat",
""
]
] | TITLE: Typhoon T1: An Open Thai Reasoning Model
ABSTRACT: This paper introduces Typhoon T1, an open effort to develop an open Thai
reasoning model. A reasoning model is a relatively new type of generative model
built on top of large language models (LLMs). A reasoning model generates a
long chain of thought before arriving at a final answer, an approach found to
improve performance on complex tasks. However, details on developing such a
model are limited, especially for reasoning models that can generate traces in
a low-resource language. Typhoon T1 presents an open effort that dives into the
details of developing a reasoning model in a more cost-effective way by
leveraging supervised fine-tuning using open datasets, instead of reinforcement
learning. This paper shares the details about synthetic data generation and
training, as well as our dataset and model weights. Additionally, we provide
insights gained from developing a reasoning model that generalizes across
domains and is capable of generating reasoning traces in a low-resource
language, using Thai as an example. We hope this open effort provides a
foundation for further research in this field.
|
2502.09056 | Kunat Pipatanakul | Kunat Pipatanakul, Pittawat Taveekitworachai, Potsawee Manakul, Kasima
Tharnpipitchai | Adapting Language-Specific LLMs to a Reasoning Model in One Day via
Model Merging -- An Open Recipe | 9 pages | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper investigates data selection and model merging methodologies aimed
at incorporating advanced reasoning capabilities such as those of DeepSeek R1
into language-specific large language models (LLMs), with a particular focus on
the Thai LLM. Our goal is to enhance the reasoning capabilities of
language-specific LLMs while maintaining their target language abilities.
DeepSeek R1 excels in reasoning but primarily benefits high-resource languages
such as English and Chinese. However, low-resource languages remain underserved
due to the dominance of English-centric training data and model optimizations,
which limit performance in these languages. This limitation results in
unreliable code-switching and diminished effectiveness on tasks in low-resource
languages. Meanwhile, local and regional LLM initiatives have attempted to
bridge this gap by developing language-specific LLMs that focus on improving
local linguistic fidelity. We demonstrate that, with only publicly available
datasets and a computational budget of $120, it is possible to enhance the
reasoning capabilities of language-specific LLMs to match the level of DeepSeek
R1, without compromising their performance on target language tasks.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 08:10:45 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2025 13:16:00 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Mar 2025 06:45:16 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Pipatanakul",
"Kunat",
""
],
[
"Taveekitworachai",
"Pittawat",
""
],
[
"Manakul",
"Potsawee",
""
],
[
"Tharnpipitchai",
"Kasima",
""
]
] | TITLE: Adapting Language-Specific LLMs to a Reasoning Model in One Day via
Model Merging -- An Open Recipe
ABSTRACT: This paper investigates data selection and model merging methodologies aimed
at incorporating advanced reasoning capabilities such as those of DeepSeek R1
into language-specific large language models (LLMs), with a particular focus on
the Thai LLM. Our goal is to enhance the reasoning capabilities of
language-specific LLMs while maintaining their target language abilities.
DeepSeek R1 excels in reasoning but primarily benefits high-resource languages
such as English and Chinese. However, low-resource languages remain underserved
due to the dominance of English-centric training data and model optimizations,
which limit performance in these languages. This limitation results in
unreliable code-switching and diminished effectiveness on tasks in low-resource
languages. Meanwhile, local and regional LLM initiatives have attempted to
bridge this gap by developing language-specific LLMs that focus on improving
local linguistic fidelity. We demonstrate that, with only publicly available
datasets and a computational budget of $120, it is possible to enhance the
reasoning capabilities of language-specific LLMs to match the level of DeepSeek
R1, without compromising their performance on target language tasks.
|
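The record above merges a language-specific LLM with a reasoning model in parameter space. As a minimal illustration of model merging (not the paper's exact recipe, which also involves data selection and SFT), the snippet below interpolates matching tensors of two state dicts with a single assumed coefficient.

```python
# Minimal illustration of model merging in parameter space: interpolate each
# matching tensor of a language-specific model and a reasoning model. The
# single global coefficient alpha is an assumption; per-layer weighting or
# more elaborate merging rules would follow the same pattern.
import numpy as np
from typing import Dict


def merge_state_dicts(lang_model: Dict[str, np.ndarray],
                      reasoning_model: Dict[str, np.ndarray],
                      alpha: float = 0.5) -> Dict[str, np.ndarray]:
    merged = {}
    for name, w_lang in lang_model.items():
        w_reason = reasoning_model[name]
        assert w_lang.shape == w_reason.shape, f"shape mismatch for {name}"
        # alpha = 1.0 keeps the language model, alpha = 0.0 keeps the reasoner.
        merged[name] = alpha * w_lang + (1.0 - alpha) * w_reason
    return merged


if __name__ == "__main__":
    lang = {"layer0.weight": np.ones((2, 2)), "layer0.bias": np.zeros(2)}
    reason = {"layer0.weight": np.full((2, 2), 3.0), "layer0.bias": np.ones(2)}
    print(merge_state_dicts(lang, reason, alpha=0.7))
```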
2502.11748 | Giorgos Kordopatis-Zilos | Giorgos Kordopatis-Zilos, Vladan Stojni\'c, Anna Manko, Pavel
\v{S}uma, Nikolaos-Antonios Ypsilantis, Nikos Efthymiadis, Zakaria Laskar,
Ji\v{r}\'i Matas, Ond\v{r}ej Chum, Giorgos Tolias | ILIAS: Instance-Level Image retrieval At Scale | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work introduces ILIAS, a new test dataset for Instance-Level Image
retrieval At Scale. It is designed to evaluate the ability of current and
future foundation models and retrieval techniques to recognize particular
objects. The key benefits over existing datasets include large scale, domain
diversity, accurate ground truth, and a performance that is far from saturated.
ILIAS includes query and positive images for 1,000 object instances, manually
collected to capture challenging conditions and diverse domains. Large-scale
retrieval is conducted against 100 million distractor images from YFCC100M. To
avoid false negatives without extra annotation effort, we include only query
objects confirmed to have emerged after 2014, i.e. the compilation date of
YFCC100M. An extensive benchmarking is performed with the following
observations: i) models fine-tuned on specific domains, such as landmarks or
products, excel in that domain but fail on ILIAS; ii) learning a linear
adaptation layer using multi-domain class supervision results in performance
improvements, especially for vision-language models; iii) local descriptors in
retrieval re-ranking are still a key ingredient, especially in the presence of
severe background clutter; iv) the text-to-image performance of the
vision-language foundation models is surprisingly close to the corresponding
image-to-image case. website: https://vrg.fel.cvut.cz/ilias/
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 12:42:38 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 17:27:09 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Kordopatis-Zilos",
"Giorgos",
""
],
[
"Stojnić",
"Vladan",
""
],
[
"Manko",
"Anna",
""
],
[
"Šuma",
"Pavel",
""
],
[
"Ypsilantis",
"Nikolaos-Antonios",
""
],
[
"Efthymiadis",
"Nikos",
""
],
[
"Laskar",
"Zakaria",
""
],
[
"Matas",
"Jiří",
""
],
[
"Chum",
"Ondřej",
""
],
[
"Tolias",
"Giorgos",
""
]
] | TITLE: ILIAS: Instance-Level Image retrieval At Scale
ABSTRACT: This work introduces ILIAS, a new test dataset for Instance-Level Image
retrieval At Scale. It is designed to evaluate the ability of current and
future foundation models and retrieval techniques to recognize particular
objects. The key benefits over existing datasets include large scale, domain
diversity, accurate ground truth, and a performance that is far from saturated.
ILIAS includes query and positive images for 1,000 object instances, manually
collected to capture challenging conditions and diverse domains. Large-scale
retrieval is conducted against 100 million distractor images from YFCC100M. To
avoid false negatives without extra annotation effort, we include only query
objects confirmed to have emerged after 2014, i.e. the compilation date of
YFCC100M. An extensive benchmarking is performed with the following
observations: i) models fine-tuned on specific domains, such as landmarks or
products, excel in that domain but fail on ILIAS; ii) learning a linear
adaptation layer using multi-domain class supervision results in performance
improvements, especially for vision-language models; iii) local descriptors in
retrieval re-ranking are still a key ingredient, especially in the presence of
severe background clutter; iv) the text-to-image performance of the
vision-language foundation models is surprisingly close to the corresponding
image-to-image case. website: https://vrg.fel.cvut.cz/ilias/
|
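Observation (ii) in the ILIAS abstract above reports gains from a linear adaptation layer trained with multi-domain class supervision on top of frozen embeddings. The sketch below trains such a linear map with plain softmax regression on synthetic features; the loss, learning rate, and data are illustrative assumptions rather than the benchmark's setup.

```python
# Minimal sketch of a linear adaptation layer: keep foundation-model embeddings
# frozen and train only a linear map with class supervision, here via plain
# softmax regression in NumPy. The training setup is an illustrative assumption.
import numpy as np


def train_linear_adapter(feats: np.ndarray, labels: np.ndarray, n_classes: int,
                         lr: float = 0.1, epochs: int = 200) -> np.ndarray:
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = feats.T @ (probs - onehot) / n            # cross-entropy gradient
        W -= lr * grad
    return W                                             # adapted embeddings: feats @ W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake frozen embeddings for two "domains" (e.g., landmarks vs. products).
    feats = rng.normal(size=(400, 32)) + np.repeat(np.eye(2), 200, axis=0) @ rng.normal(size=(2, 32))
    labels = np.repeat([0, 1], 200)
    W = train_linear_adapter(feats, labels, n_classes=2)
    acc = ((feats @ W).argmax(axis=1) == labels).mean()
    print(f"adapter training accuracy: {acc:.2f}")
```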
2502.12920 | Thomas Lee | Thomas L. Lee, William Toner, Rajkarn Singh, Artjom Joosen and Martin
Asenov | Lightweight Online Adaption for Time Series Foundation Model Forecasts | 8 pages, Preprint | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Foundation models (FMs) have emerged as a promising approach for time series
forecasting. While effective, FMs typically remain fixed during deployment due
to the high computational costs of learning them online. Consequently, deployed
FMs fail to adapt their forecasts to current data characteristics, despite the
availability of online feedback from newly arriving data. This raises the
question of whether FM performance can be enhanced by the efficient usage of
this feedback. We propose AdapTS to answer this question.
AdapTS is a lightweight mechanism for the online adaption of FM forecasts in
response to online feedback. AdapTS consists of two parts: a) the
AdapTS-Forecaster which is used to learn the current data distribution; and b)
the AdapTS-Weighter which is used to combine the forecasts of the FM and the
AdapTS-Forecaster. We evaluate the performance of AdapTS in conjunction with
several recent FMs across a suite of standard time series datasets. In all of
our experiments we find that using AdapTS improves performance. This work
demonstrates how efficient usage of online feedback can be used to improve FM
forecasts.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 15:01:02 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 21:36:47 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Lee",
"Thomas L.",
""
],
[
"Toner",
"William",
""
],
[
"Singh",
"Rajkarn",
""
],
[
"Joosen",
"Artjom",
""
],
[
"Asenov",
"Martin",
""
]
] | TITLE: Lightweight Online Adaption for Time Series Foundation Model Forecasts
ABSTRACT: Foundation models (FMs) have emerged as a promising approach for time series
forecasting. While effective, FMs typically remain fixed during deployment due
to the high computational costs of learning them online. Consequently, deployed
FMs fail to adapt their forecasts to current data characteristics, despite the
availability of online feedback from newly arriving data. This raises the
question of whether FM performance can be enhanced by the efficient usage of
this feedback. We propose AdapTS to answer this question.
AdapTS is a lightweight mechanism for the online adaption of FM forecasts in
response to online feedback. AdapTS consists of two parts: a) the
AdapTS-Forecaster which is used to learn the current data distribution; and b)
the AdapTS-Weighter which is used to combine the forecasts of the FM and the
AdapTS-Forecaster. We evaluate the performance of AdapTS in conjunction with
several recent FMs across a suite of standard time series datasets. In all of
our experiments we find that using AdapTS improves performance. This work
demonstrates how efficient usage of online feedback can be used to improve FM
forecasts.
|
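The AdapTS record above combines a frozen foundation-model forecast with a lightweight online forecaster. The toy weighter below captures only that two-part structure; the inverse-error weighting with exponential decay is an assumed stand-in for the AdapTS-Weighter, not its published form.

```python
# Toy version of the two-part scheme described above: a frozen "foundation
# model" forecast is combined with a cheap online forecaster, with combination
# weights driven by each component's recent error.
import numpy as np


class OnlineWeighter:
    def __init__(self, decay: float = 0.9, eps: float = 1e-8):
        self.err_fm = 1.0      # running error estimate of the foundation model
        self.err_online = 1.0  # running error estimate of the online forecaster
        self.decay = decay
        self.eps = eps

    def combine(self, fm_pred: np.ndarray, online_pred: np.ndarray) -> np.ndarray:
        # Weight each forecaster by the inverse of its recent error.
        w_fm = 1.0 / (self.err_fm + self.eps)
        w_on = 1.0 / (self.err_online + self.eps)
        return (w_fm * fm_pred + w_on * online_pred) / (w_fm + w_on)

    def update(self, fm_pred, online_pred, actual) -> None:
        # Exponentially decayed absolute errors, updated as ground truth arrives.
        self.err_fm = self.decay * self.err_fm + \
            (1 - self.decay) * float(np.mean(np.abs(fm_pred - actual)))
        self.err_online = self.decay * self.err_online + \
            (1 - self.decay) * float(np.mean(np.abs(online_pred - actual)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weighter = OnlineWeighter()
    for t in range(5):
        actual = np.sin(0.5 * t) + 0.05 * rng.standard_normal()
        fm_pred = np.array(np.sin(0.5 * t) + 0.3)       # biased frozen model
        online_pred = np.array(np.sin(0.5 * t))         # well-adapted online model
        print(t, weighter.combine(fm_pred, online_pred))
        weighter.update(fm_pred, online_pred, actual)
```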
2502.15682 | Guanqi Zhan | Guanqi Zhan, Yuanpei Liu, Kai Han, Weidi Xie, Andrew Zisserman | ELIP: Enhanced Visual-Language Foundation Models for Image Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective in this paper is to improve the performance of text-to-image
retrieval. To this end, we introduce a new framework that can boost the
performance of large-scale pre-trained vision-language models, so that they can
be used for text-to-image re-ranking. The approach, Enhanced Language-Image
Pre-training (ELIP), uses the text query, via a simple MLP mapping network, to
predict a set of visual prompts to condition the ViT image encoding. ELIP can
easily be applied to the commonly used CLIP, SigLIP and BLIP-2 networks. To
train the architecture with limited computing resources, we develop a 'student
friendly' best practice, involving global hard sample mining, and curation of a
large-scale dataset. On the evaluation side, we set up two new
out-of-distribution (OOD) benchmarks, Occluded COCO and ImageNet-R, to assess
the zero-shot generalisation of the models to different domains. The results
demonstrate that ELIP significantly boosts CLIP/SigLIP/SigLIP-2 text-to-image
retrieval performance and outperforms BLIP-2 on several benchmarks, as well as
providing an easy means to adapt to OOD datasets.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 18:59:57 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 17:57:43 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhan",
"Guanqi",
""
],
[
"Liu",
"Yuanpei",
""
],
[
"Han",
"Kai",
""
],
[
"Xie",
"Weidi",
""
],
[
"Zisserman",
"Andrew",
""
]
] | TITLE: ELIP: Enhanced Visual-Language Foundation Models for Image Retrieval
ABSTRACT: The objective in this paper is to improve the performance of text-to-image
retrieval. To this end, we introduce a new framework that can boost the
performance of large-scale pre-trained vision-language models, so that they can
be used for text-to-image re-ranking. The approach, Enhanced Language-Image
Pre-training (ELIP), uses the text query, via a simple MLP mapping network, to
predict a set of visual prompts to condition the ViT image encoding. ELIP can
easily be applied to the commonly used CLIP, SigLIP and BLIP-2 networks. To
train the architecture with limited computing resources, we develop a 'student
friendly' best practice, involving global hard sample mining, and curation of a
large-scale dataset. On the evaluation side, we set up two new
out-of-distribution (OOD) benchmarks, Occluded COCO and ImageNet-R, to assess
the zero-shot generalisation of the models to different domains. The results
demonstrate that ELIP significantly boosts CLIP/SigLIP/SigLIP-2 text-to-image
retrieval performance and outperforms BLIP-2 on several benchmarks, as well as
providing an easy means to adapt to OOD datasets.
|
2502.18410 | Young-Chae Hong | Young-Chae Hong, Bei Xiao, Yangho Chen | TSKANMixer: Kolmogorov-Arnold Networks with MLP-Mixer Model for Time
Series Forecasting | 8 pages, 4 figures, 7 tables and accepted at the AI4TS: AI for Time
Series Analysis workshop, AAAI 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series forecasting has long been a focus of research across diverse
fields, including economics, energy, healthcare, and traffic management. Recent
works have introduced innovative architectures for time series models, such as
the Time-Series Mixer (TSMixer), which leverages multi-layer perceptrons (MLPs)
to enhance prediction accuracy by effectively capturing both spatial and
temporal dependencies within the data. In this paper, we investigate the
capabilities of the Kolmogorov-Arnold Networks (KANs) for time-series
forecasting by modifying TSMixer with a KAN layer (TSKANMixer). Experimental
results demonstrate that TSKANMixer tends to improve prediction accuracy over
the original TSMixer across multiple datasets, ranking among the top-performing
models compared to other time series approaches. Our results show that the KANs
are promising alternatives to improve the performance of time series
forecasting by replacing or extending traditional MLPs.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 18:04:45 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 16:34:13 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Hong",
"Young-Chae",
""
],
[
"Xiao",
"Bei",
""
],
[
"Chen",
"Yangho",
""
]
] | TITLE: TSKANMixer: Kolmogorov-Arnold Networks with MLP-Mixer Model for Time
Series Forecasting
ABSTRACT: Time series forecasting has long been a focus of research across diverse
fields, including economics, energy, healthcare, and traffic management. Recent
works have introduced innovative architectures for time series models, such as
the Time-Series Mixer (TSMixer), which leverages multi-layer perceptrons (MLPs)
to enhance prediction accuracy by effectively capturing both spatial and
temporal dependencies within the data. In this paper, we investigate the
capabilities of the Kolmogorov-Arnold Networks (KANs) for time-series
forecasting by modifying TSMixer with a KAN layer (TSKANMixer). Experimental
results demonstrate that TSKANMixer tends to improve prediction accuracy over
the original TSMixer across multiple datasets, ranking among the top-performing
models compared to other time series approaches. Our results show that the KANs
are promising alternatives to improve the performance of time series
forecasting by replacing or extending traditional MLPs.
|
2503.01877 | Henrik Abgaryan | Henrik Abgaryan, Tristan Cazenave, Ararat Harutyunyan | Starjob: Dataset for LLM-Driven Job Shop Scheduling | arXiv admin note: substantial text overlap with arXiv:2408.06993 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have shown remarkable capabilities across
various domains, but their potential for solving combinatorial optimization
problems remains largely unexplored. In this paper, we investigate the
applicability of LLMs to the Job Shop Scheduling Problem (JSSP), a classic
challenge in combinatorial optimization that requires efficient job allocation
to machines to minimize makespan. To this end, we introduce Starjob, the first
supervised dataset for JSSP, comprising 130k instances specifically designed
for training LLMs. Leveraging this dataset, we fine-tune the LLaMA 8B 4-bit
quantized model with the LoRA method to develop an end-to-end scheduling
approach. Our evaluation on standard benchmarks demonstrates that the proposed
LLM-based method not only surpasses traditional Priority Dispatching Rules
(PDRs) but also achieves notable improvements over state-of-the-art neural
approaches like L2D, with an average improvement of 15.36% on DMU and 7.85% on
Taillard benchmarks. These results highlight the untapped potential of LLMs in
tackling combinatorial optimization problems, paving the way for future
advancements in this area.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 15:20:01 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 10:38:45 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Abgaryan",
"Henrik",
""
],
[
"Cazenave",
"Tristan",
""
],
[
"Harutyunyan",
"Ararat",
""
]
] | TITLE: Starjob: Dataset for LLM-Driven Job Shop Scheduling
ABSTRACT: Large Language Models (LLMs) have shown remarkable capabilities across
various domains, but their potential for solving combinatorial optimization
problems remains largely unexplored. In this paper, we investigate the
applicability of LLMs to the Job Shop Scheduling Problem (JSSP), a classic
challenge in combinatorial optimization that requires efficient job allocation
to machines to minimize makespan. To this end, we introduce Starjob, the first
supervised dataset for JSSP, comprising 130k instances specifically designed
for training LLMs. Leveraging this dataset, we fine-tune the LLaMA 8B 4-bit
quantized model with the LoRA method to develop an end-to-end scheduling
approach. Our evaluation on standard benchmarks demonstrates that the proposed
LLM-based method not only surpasses traditional Priority Dispatching Rules
(PDRs) but also achieves notable improvements over state-of-the-art neural
approaches like L2D, with an average improvement of 15.36% on DMU and 7.85% on
Taillard benchmarks. These results highlight the untapped potential of LLMs in
tackling combinatorial optimization problems, paving the way for future
advancements in this area.
|
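The Starjob record above frames job shop scheduling as a supervised text task for LLMs. The sketch below shows one plausible serialization of a JSSP instance into a prompt and a makespan check for a dispatch-order answer; the text template and answer format are assumptions, not Starjob's actual schema.

```python
# Sketch of what an LLM-facing JSSP sample could look like: serialize an
# instance (jobs as machine/duration sequences) into a prompt and compute the
# makespan of a candidate schedule given as a job dispatch order.
from typing import List, Tuple

Job = List[Tuple[int, int]]   # ordered (machine, duration) operations


def to_prompt(jobs: List[Job]) -> str:
    lines = ["Solve this job shop scheduling problem (minimize makespan):"]
    for j, ops in enumerate(jobs):
        ops_txt = ", ".join(f"machine {m} for {d}" for m, d in ops)
        lines.append(f"Job {j}: {ops_txt}")
    return "\n".join(lines)


def makespan(jobs: List[Job], dispatch_order: List[int]) -> int:
    # Simulate a schedule: dispatch_order lists job indices, one entry per
    # operation; each operation starts when both its job and machine are free.
    n_machines = 1 + max(m for ops in jobs for m, _ in ops)
    job_ready = [0] * len(jobs)
    machine_ready = [0] * n_machines
    next_op = [0] * len(jobs)
    for j in dispatch_order:
        m, d = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready[m])
        job_ready[j] = machine_ready[m] = start + d
        next_op[j] += 1
    return max(job_ready)


if __name__ == "__main__":
    jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
    print(to_prompt(jobs))
    print("makespan:", makespan(jobs, dispatch_order=[0, 1, 1, 0]))
```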
2503.02841 | Theodore Zhao | Theodore Zhao, Sid Kiblawi, Naoto Usuyama, Ho Hin Lee, Sam Preston,
Hoifung Poon, Mu Wei | Boltzmann Attention Sampling for Image Analysis with Small Objects | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting and segmenting small objects, such as lung nodules and tumor
lesions, remains a critical challenge in image analysis. These objects often
occupy less than 0.1% of an image, making traditional transformer architectures
inefficient and prone to performance degradation due to redundant attention
computations on irrelevant regions. Existing sparse attention mechanisms rely
on rigid hierarchical structures, which are poorly suited for detecting small,
variable, and uncertain object locations. In this paper, we propose
BoltzFormer, a novel transformer-based architecture designed to address these
challenges through dynamic sparse attention. BoltzFormer identifies and focuses
attention on relevant areas by modeling uncertainty using a Boltzmann
distribution with an annealing schedule. Initially, a higher temperature allows
broader area sampling in early layers, when object location uncertainty is
greatest. As the temperature decreases in later layers, attention becomes more
focused, enhancing efficiency and accuracy. BoltzFormer seamlessly integrates
into existing transformer architectures via a modular Boltzmann attention
sampling mechanism. Comprehensive evaluations on benchmark datasets demonstrate
that BoltzFormer significantly improves segmentation performance for small
objects while reducing attention computation by an order of magnitude compared
to previous state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:12:58 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 18:33:30 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhao",
"Theodore",
""
],
[
"Kiblawi",
"Sid",
""
],
[
"Usuyama",
"Naoto",
""
],
[
"Lee",
"Ho Hin",
""
],
[
"Preston",
"Sam",
""
],
[
"Poon",
"Hoifung",
""
],
[
"Wei",
"Mu",
""
]
] | TITLE: Boltzmann Attention Sampling for Image Analysis with Small Objects
ABSTRACT: Detecting and segmenting small objects, such as lung nodules and tumor
lesions, remains a critical challenge in image analysis. These objects often
occupy less than 0.1% of an image, making traditional transformer architectures
inefficient and prone to performance degradation due to redundant attention
computations on irrelevant regions. Existing sparse attention mechanisms rely
on rigid hierarchical structures, which are poorly suited for detecting small,
variable, and uncertain object locations. In this paper, we propose
BoltzFormer, a novel transformer-based architecture designed to address these
challenges through dynamic sparse attention. BoltzFormer identifies and focuses
attention on relevant areas by modeling uncertainty using a Boltzmann
distribution with an annealing schedule. Initially, a higher temperature allows
broader area sampling in early layers, when object location uncertainty is
greatest. As the temperature decreases in later layers, attention becomes more
focused, enhancing efficiency and accuracy. BoltzFormer seamlessly integrates
into existing transformer architectures via a modular Boltzmann attention
sampling mechanism. Comprehensive evaluations on benchmark datasets demonstrate
that BoltzFormer significantly improves segmentation performance for small
objects while reducing attention computation by an order of magnitude compared
to previous state-of-the-art methods.
|
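The BoltzFormer abstract above samples attention regions from a Boltzmann distribution whose temperature is annealed across layers. The numerical sketch below reproduces just that sampling behaviour; the linear schedule, sample budget, and patch scores are assumptions made for illustration.

```python
# Small numerical sketch of Boltzmann sampling with an annealing schedule, as
# described above: patches are sampled with probability softmax(score / T),
# and T shrinks with layer depth so later layers attend to fewer, more certain
# regions.
import numpy as np


def boltzmann_sample(scores: np.ndarray, temperature: float,
                     n_samples: int, rng: np.random.Generator) -> np.ndarray:
    # Softmax with temperature; higher T flattens the distribution.
    logits = scores / max(temperature, 1e-6)
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(scores.size, size=n_samples, replace=False, p=probs)


def annealed_attention_regions(scores: np.ndarray, n_layers: int = 4,
                               t_start: float = 2.0, t_end: float = 0.1,
                               n_samples: int = 8, seed: int = 0):
    rng = np.random.default_rng(seed)
    for layer in range(n_layers):
        t = t_start + (t_end - t_start) * layer / (n_layers - 1)  # linear anneal
        yield layer, t, boltzmann_sample(scores, t, n_samples, rng)


if __name__ == "__main__":
    patch_scores = np.random.default_rng(1).normal(size=64)   # relevance per image patch
    patch_scores[10] += 4.0                                   # one clearly relevant patch
    for layer, t, idx in annealed_attention_regions(patch_scores):
        print(f"layer {layer}  T={t:.2f}  sampled patches: {sorted(idx.tolist())}")
```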
2503.02892 | Malitha Gunawardhana | Malitha Gunawardhana, Fangqiang Xu, and Jichao Zhao | Segmenting Bi-Atrial Structures Using ResNext Based Framework | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Atrial fibrillation (AF) is the most common cardiac arrhythmia, significantly
contributing to mortality, particularly in older populations. While pulmonary
vein isolation is a standard treatment, its effectiveness is limited in
patients with persistent AF. Recent research highlights the importance of
targeting additional atrial regions, particularly fibrotic areas identified via
late gadolinium-enhanced MRI (LGE-MRI). However, existing manual segmentation
methods are time-consuming and prone to variability. Deep learning techniques,
particularly convolutional neural networks (CNNs), have shown promise in
automating segmentation. However, most studies focus solely on the left atrium
(LA) and rely on small datasets, limiting generalizability. In this paper, we
propose a novel two-stage framework incorporating ResNeXt encoders and a cyclic
learning rate to segment both the right atrium (RA) and LA walls and cavities
in LGE-MRIs. Our method aims to improve the segmentation of challenging small
structures, such as atrial walls, while maintaining high performance in larger
regions like the atrial cavities. The results demonstrate that our approach
offers superior segmentation accuracy and robustness compared to traditional
architectures, particularly for imbalanced class structures.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 10:23:12 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 22:43:13 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Gunawardhana",
"Malitha",
""
],
[
"Xu",
"Fangqiang",
""
],
[
"Zhao",
"Jichao",
""
]
] | TITLE: Segmenting Bi-Atrial Structures Using ResNext Based Framework
ABSTRACT: Atrial fibrillation (AF) is the most common cardiac arrhythmia, significantly
contributing to mortality, particularly in older populations. While pulmonary
vein isolation is a standard treatment, its effectiveness is limited in
patients with persistent AF. Recent research highlights the importance of
targeting additional atrial regions, particularly fibrotic areas identified via
late gadolinium-enhanced MRI (LGE-MRI). However, existing manual segmentation
methods are time-consuming and prone to variability. Deep learning techniques,
particularly convolutional neural networks (CNNs), have shown promise in
automating segmentation. However, most studies focus solely on the left atrium
(LA) and rely on small datasets, limiting generalizability. In this paper, we
propose a novel two-stage framework incorporating ResNeXt encoders and a cyclic
learning rate to segment both the right atrium (RA) and LA walls and cavities
in LGE-MRIs. Our method aims to improve the segmentation of challenging small
structures, such as atrial walls, while maintaining high performance in larger
regions like the atrial cavities. The results demonstrate that our approach
offers superior segmentation accuracy and robustness compared to traditional
architectures, particularly for imbalanced class structures.
|
2503.03384 | Vipul Garg | Vipul Garg, Ishita Thakre, Sayan Ranu | GNNMerge: Merging of GNN Models Without Accessing Training Data | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Model merging has gained prominence in machine learning as a method to
integrate multiple trained models into a single model without accessing the
original training data. While existing approaches have demonstrated success in
domains such as computer vision and NLP, their application to Graph Neural
Networks (GNNs) remains unexplored. These methods often rely on the assumption
of shared initialization, which is seldom applicable to GNNs. In this work, we
undertake the first benchmarking study of model merging algorithms for GNNs,
revealing their limited effectiveness in this context. To address these
challenges, we propose GNNMerge, which utilizes a task-agnostic node embedding
alignment strategy to merge GNNs. Furthermore, we establish that under a mild
relaxation, the proposed optimization objective admits direct analytical
solutions for widely used GNN architectures, significantly enhancing its
computational efficiency. Empirical evaluations across diverse datasets, tasks,
and architectures establish GNNMerge to be up to 24% more accurate than
existing methods while delivering over 2 orders of magnitude speed-up compared
to training from scratch.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 11:02:29 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 15:32:05 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Garg",
"Vipul",
""
],
[
"Thakre",
"Ishita",
""
],
[
"Ranu",
"Sayan",
""
]
] | TITLE: GNNMerge: Merging of GNN Models Without Accessing Training Data
ABSTRACT: Model merging has gained prominence in machine learning as a method to
integrate multiple trained models into a single model without accessing the
original training data. While existing approaches have demonstrated success in
domains such as computer vision and NLP, their application to Graph Neural
Networks (GNNs) remains unexplored. These methods often rely on the assumption
of shared initialization, which is seldom applicable to GNNs. In this work, we
undertake the first benchmarking study of model merging algorithms for GNNs,
revealing their limited effectiveness in this context. To address these
challenges, we propose GNNMerge, which utilizes a task-agnostic node embedding
alignment strategy to merge GNNs. Furthermore, we establish that under a mild
relaxation, the proposed optimization objective admits direct analytical
solutions for widely used GNN architectures, significantly enhancing its
computational efficiency. Empirical evaluations across diverse datasets, tasks,
and architectures establish GNNMerge to be up to 24% more accurate than
existing methods while delivering over 2 orders of magnitude speed-up compared
to training from scratch.
|
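The GNNMerge record above merges GNNs by aligning their node embeddings and notes that a mild relaxation yields analytical solutions. The sketch below treats alignment as ordinary least squares over a shared node set, which is an illustrative simplification rather than the paper's objective.

```python
# Tiny sketch of task-agnostic embedding alignment: find a linear map W that
# carries model B's node embeddings onto model A's, via ordinary least squares
# (a closed-form solution, echoing the analytical solutions mentioned above).
import numpy as np


def align_embeddings(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    # Solve min_W ||emb_b @ W - emb_a||_F^2 in closed form.
    W, *_ = np.linalg.lstsq(emb_b, emb_a, rcond=None)
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_nodes, dim = 200, 16
    emb_a = rng.normal(size=(n_nodes, dim))          # embeddings from GNN A
    true_map = rng.normal(size=(dim, dim))
    emb_b = emb_a @ np.linalg.inv(true_map)          # GNN B = A in a rotated basis
    W = align_embeddings(emb_a, emb_b)
    err = np.linalg.norm(emb_b @ W - emb_a) / np.linalg.norm(emb_a)
    print(f"relative alignment error: {err:.2e}")    # ~0 when a linear map exists
```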
2503.05500 | Nicolas Boizard | Nicolas Boizard, Hippolyte Gisserot-Boukhlef, Duarte M. Alves, Andr\'e
Martins, Ayoub Hammal, Caio Corro, C\'eline Hudelot, Emmanuel Malherbe,
Etienne Malaboeuf, Fanny Jourdan, Gabriel Hautreux, Jo\~ao Alves, Kevin
El-Haddad, Manuel Faysse, Maxime Peyrard, Nuno M. Guerreiro, Patrick
Fernandes, Ricardo Rei, Pierre Colombo | EuroBERT: Scaling Multilingual Encoders for European Languages | 28 pages, 8 figures, 13 tables | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | General-purpose multilingual vector representations, used in retrieval,
regression and classification, are traditionally obtained from bidirectional
encoder models. Despite their wide applicability, encoders have been recently
overshadowed by advances in generative decoder-only models. However, many
innovations driving this progress are not inherently tied to decoders. In this
paper, we revisit the development of multilingual encoders through the lens of
these advances, and introduce EuroBERT, a family of multilingual encoders
covering European and widely spoken global languages. Our models outperform
existing alternatives across a diverse range of tasks, spanning multilingual
capabilities, mathematics, and coding, and natively support sequences of up
to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering
insights into our dataset composition and training pipeline. We publicly
release the EuroBERT models, including intermediate training checkpoints,
together with our training framework.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 15:13:58 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 18:43:59 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Boizard",
"Nicolas",
""
],
[
"Gisserot-Boukhlef",
"Hippolyte",
""
],
[
"Alves",
"Duarte M.",
""
],
[
"Martins",
"André",
""
],
[
"Hammal",
"Ayoub",
""
],
[
"Corro",
"Caio",
""
],
[
"Hudelot",
"Céline",
""
],
[
"Malherbe",
"Emmanuel",
""
],
[
"Malaboeuf",
"Etienne",
""
],
[
"Jourdan",
"Fanny",
""
],
[
"Hautreux",
"Gabriel",
""
],
[
"Alves",
"João",
""
],
[
"El-Haddad",
"Kevin",
""
],
[
"Faysse",
"Manuel",
""
],
[
"Peyrard",
"Maxime",
""
],
[
"Guerreiro",
"Nuno M.",
""
],
[
"Fernandes",
"Patrick",
""
],
[
"Rei",
"Ricardo",
""
],
[
"Colombo",
"Pierre",
""
]
] | TITLE: EuroBERT: Scaling Multilingual Encoders for European Languages
ABSTRACT: General-purpose multilingual vector representations, used in retrieval,
regression and classification, are traditionally obtained from bidirectional
encoder models. Despite their wide applicability, encoders have been recently
overshadowed by advances in generative decoder-only models. However, many
innovations driving this progress are not inherently tied to decoders. In this
paper, we revisit the development of multilingual encoders through the lens of
these advances, and introduce EuroBERT, a family of multilingual encoders
covering European and widely spoken global languages. Our models outperform
existing alternatives across a diverse range of tasks, spanning multilingual
capabilities, mathematics, and coding, and natively support sequences of up
to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering
insights into our dataset composition and training pipeline. We publicly
release the EuroBERT models, including intermediate training checkpoints,
together with our training framework.
|
2503.07091 | Shuhe Wang | Shuhe Wang, Xiaoya Li, Jiwei Li, Guoyin Wang, Xiaofei Sun, Bob Zhu,
Han Qiu, Mo Yu, Shengjie Shen, Tianwei Zhang, and Eduard Hovy | FaceID-6M: A Large-Scale, Open-Source FaceID Customization Dataset | arXiv admin note: text overlap with arXiv:2501.15407 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Due to the data-driven nature of current face identity (FaceID) customization
methods, all state-of-the-art models rely on large-scale datasets containing
millions of high-quality text-image pairs for training. However, none of these
datasets are publicly available, which restricts transparency and hinders
further advancements in the field.
To address this issue, in this paper, we collect and release FaceID-6M, the
first large-scale, open-source FaceID dataset containing 6 million high-quality
text-image pairs. Filtered from LAION-5B \cite{schuhmann2022laion}, FaceID-6M
undergoes rigorous image and text filtering steps to ensure dataset quality,
including resolution filtering to maintain high-quality images and faces, face
filtering to remove images that lack human faces, and a keyword-based strategy to
retain descriptions containing human-related terms (e.g., nationality,
professions and names). Through these cleaning processes, FaceID-6M provides a
high-quality dataset optimized for training powerful FaceID customization
models, facilitating advancements in the field by offering an open resource for
research and development.
We conduct extensive experiments to show the effectiveness of our FaceID-6M,
demonstrating that models trained on our FaceID-6M dataset achieve performance
that is comparable to, and slightly better than, currently available industrial
models. Additionally, to support and advance research in the FaceID
customization community, we make our code, datasets, and models fully publicly
available. Our codes, models, and datasets are available at:
https://github.com/ShuheSH/FaceID-6M.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:14:47 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 08:36:47 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Mar 2025 11:23:24 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wang",
"Shuhe",
""
],
[
"Li",
"Xiaoya",
""
],
[
"Li",
"Jiwei",
""
],
[
"Wang",
"Guoyin",
""
],
[
"Sun",
"Xiaofei",
""
],
[
"Zhu",
"Bob",
""
],
[
"Qiu",
"Han",
""
],
[
"Yu",
"Mo",
""
],
[
"Shen",
"Shengjie",
""
],
[
"Zhang",
"Tianwei",
""
],
[
"Hovy",
"Eduard",
""
]
] | TITLE: FaceID-6M: A Large-Scale, Open-Source FaceID Customization Dataset
ABSTRACT: Due to the data-driven nature of current face identity (FaceID) customization
methods, all state-of-the-art models rely on large-scale datasets containing
millions of high-quality text-image pairs for training. However, none of these
datasets are publicly available, which restricts transparency and hinders
further advancements in the field.
To address this issue, in this paper, we collect and release FaceID-6M, the
first large-scale, open-source FaceID dataset containing 6 million high-quality
text-image pairs. Filtered from LAION-5B \cite{schuhmann2022laion}, FaceID-6M
undergoes rigorous image and text filtering steps to ensure dataset quality,
including resolution filtering to maintain high-quality images and faces, face
filtering to remove images that lack human faces, and a keyword-based strategy to
retain descriptions containing human-related terms (e.g., nationality,
professions and names). Through these cleaning processes, FaceID-6M provides a
high-quality dataset optimized for training powerful FaceID customization
models, facilitating advancements in the field by offering an open resource for
research and development.
We conduct extensive experiments to show the effectiveness of our FaceID-6M,
demonstrating that models trained on our FaceID-6M dataset achieve performance
that is comparable to, and slightly better than, currently available industrial
models. Additionally, to support and advance research in the FaceID
customization community, we make our code, datasets, and models fully publicly
available. Our codes, models, and datasets are available at:
https://github.com/ShuheSH/FaceID-6M.
|
2503.07101 | Haiyang Xie | Haiyang Xie, Xi Shen, Shihua Huang, Qirui Wang, Zheng Wang | SimROD: A Simple Baseline for Raw Object Detection with Global and Local
Enhancements | Code is available at https://ocean146.github.io/SimROD2025/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most visual models are designed for sRGB images, yet RAW data offers
significant advantages for object detection by preserving sensor information
before ISP processing. This enables improved detection accuracy and more
efficient hardware designs by bypassing the ISP. However, RAW object detection
is challenging due to limited training data, unbalanced pixel distributions,
and sensor noise. To address this, we propose SimROD, a lightweight and
effective approach for RAW object detection. We introduce a Global Gamma
Enhancement (GGE) module, which applies a learnable global gamma transformation
with only four parameters, improving feature representation while keeping the
model efficient. Additionally, we leverage the green channel's richer signal to
enhance local details, aligning with the human eye's sensitivity and Bayer
filter design. Extensive experiments on multiple RAW object detection datasets
and detectors demonstrate that SimROD outperforms state-of-the-art methods like
RAW-Adapter and DIAP while maintaining efficiency. Our work highlights the
potential of RAW data for real-world object detection. Code is available at
https://ocean146.github.io/SimROD2025/.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:23:14 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 08:58:54 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Xie",
"Haiyang",
""
],
[
"Shen",
"Xi",
""
],
[
"Huang",
"Shihua",
""
],
[
"Wang",
"Qirui",
""
],
[
"Wang",
"Zheng",
""
]
] | TITLE: SimROD: A Simple Baseline for Raw Object Detection with Global and Local
Enhancements
ABSTRACT: Most visual models are designed for sRGB images, yet RAW data offers
significant advantages for object detection by preserving sensor information
before ISP processing. This enables improved detection accuracy and more
efficient hardware designs by bypassing the ISP. However, RAW object detection
is challenging due to limited training data, unbalanced pixel distributions,
and sensor noise. To address this, we propose SimROD, a lightweight and
effective approach for RAW object detection. We introduce a Global Gamma
Enhancement (GGE) module, which applies a learnable global gamma transformation
with only four parameters, improving feature representation while keeping the
model efficient. Additionally, we leverage the green channel's richer signal to
enhance local details, aligning with the human eye's sensitivity and Bayer
filter design. Extensive experiments on multiple RAW object detection datasets
and detectors demonstrate that SimROD outperforms state-of-the-art methods like
RAW-Adapter and DIAP while maintaining efficiency. Our work highlights the
potential of RAW data for real-world object detection. Code is available at
https://ocean146.github.io/SimROD2025/.
|
2503.10095 | Avinash Patil | Avinash Patil, Amardeep Kour Gedhu | Cognitive-Mental-LLM: Evaluating Reasoning in Large Language Models for
Mental Health Prediction via Online Text | 8 pages, 4 Figures, 3 tables | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated potential in predicting mental
health outcomes from online text, yet traditional classification methods often
lack interpretability and robustness. This study evaluates structured reasoning
techniques-Chain-of-Thought (CoT), Self-Consistency (SC-CoT), and
Tree-of-Thought (ToT)-to improve classification accuracy across multiple mental
health datasets sourced from Reddit. We analyze reasoning-driven prompting
strategies, including Zero-shot CoT and Few-shot CoT, using key performance
metrics such as Balanced Accuracy, F1 score, and Sensitivity/Specificity. Our
findings indicate that reasoning-enhanced techniques improve classification
performance over direct prediction, particularly in complex cases. Compared to
baselines such as Zero Shot non-CoT Prompting, and fine-tuned pre-trained
transformers such as BERT and Mental-RoBerta, and fine-tuned Open Source LLMs
such as Mental Alpaca and Mental-Flan-T5, reasoning-driven LLMs yield notable
gains on datasets like Dreaddit (+0.52\% over M-LLM, +0.82\% over BERT) and
SDCNL (+4.67\% over M-LLM, +2.17\% over BERT). However, performance declines in
Depression Severity, and CSSRS predictions suggest dataset-specific
limitations, likely due to our using a more extensive test set. Among prompting
strategies, Few-shot CoT consistently outperforms others, reinforcing the
effectiveness of reasoning-driven LLMs. Nonetheless, dataset variability
highlights challenges in model reliability and interpretability. This study
provides a comprehensive benchmark of reasoning-based LLM techniques for mental
health text classification. It offers insights into their potential for
scalable clinical applications while identifying key challenges for future
improvements.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 06:42:37 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 07:14:15 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Patil",
"Avinash",
""
],
[
"Gedhu",
"Amardeep Kour",
""
]
] | TITLE: Cognitive-Mental-LLM: Evaluating Reasoning in Large Language Models for
Mental Health Prediction via Online Text
ABSTRACT: Large Language Models (LLMs) have demonstrated potential in predicting mental
health outcomes from online text, yet traditional classification methods often
lack interpretability and robustness. This study evaluates structured reasoning
techniques-Chain-of-Thought (CoT), Self-Consistency (SC-CoT), and
Tree-of-Thought (ToT)-to improve classification accuracy across multiple mental
health datasets sourced from Reddit. We analyze reasoning-driven prompting
strategies, including Zero-shot CoT and Few-shot CoT, using key performance
metrics such as Balanced Accuracy, F1 score, and Sensitivity/Specificity. Our
findings indicate that reasoning-enhanced techniques improve classification
performance over direct prediction, particularly in complex cases. Compared to
baselines such as Zero Shot non-CoT Prompting, and fine-tuned pre-trained
transformers such as BERT and Mental-RoBerta, and fine-tuned Open Source LLMs
such as Mental Alpaca and Mental-Flan-T5, reasoning-driven LLMs yield notable
gains on datasets like Dreaddit (+0.52\% over M-LLM, +0.82\% over BERT) and
SDCNL (+4.67\% over M-LLM, +2.17\% over BERT). However, performance declines in
Depression Severity, and CSSRS predictions suggest dataset-specific
limitations, likely due to our using a more extensive test set. Among prompting
strategies, Few-shot CoT consistently outperforms others, reinforcing the
effectiveness of reasoning-driven LLMs. Nonetheless, dataset variability
highlights challenges in model reliability and interpretability. This study
provides a comprehensive benchmark of reasoning-based LLM techniques for mental
health text classification. It offers insights into their potential for
scalable clinical applications while identifying key challenges for future
improvements.
|
2503.10212 | Teng Xu | Teng Xu, Taotao Zhou, Youjia Wang, Peng Yang, Simin Tang, Kuixiang
Shao, Zifeng Tang, Yifei Liu, Xinyuan Chen, Hongshuang Wang, Xiaohui Wang,
Huoqing Luo, Jingya Wang, Ji Hu and Jingyi Yu | MouseGPT: A Large-scale Vision-Language Model for Mouse Behavior
Analysis | 53 pages, 5 figures, 7 extended figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Analyzing animal behavior is crucial in advancing neuroscience, yet
quantifying and deciphering its intricate dynamics remains a significant
challenge. Traditional machine vision approaches, despite their ability to
detect spontaneous behaviors, fall short due to limited interpretability and
reliance on manual labeling, which restricts the exploration of the full
behavioral spectrum. Here, we introduce MouseGPT, a Vision-Language Model (VLM)
that integrates visual cues with natural language to revolutionize mouse
behavior analysis. Built upon our first-of-its-kind dataset - incorporating
pose dynamics and open-vocabulary behavioral annotations across over 42 million
frames of diverse psychiatric conditions - MouseGPT provides a novel,
context-rich method for comprehensive behavior interpretation. Our holistic
analysis framework enables detailed behavior profiling, clustering, and novel
behavior discovery, offering deep insights without the need for labor-intensive
manual annotation. Evaluations reveal that MouseGPT surpasses
existing models in precision, adaptability, and descriptive richness,
positioning it as a transformative tool for ethology and for unraveling complex
behavioral dynamics in animal models.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 09:55:13 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 05:38:37 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Xu",
"Teng",
""
],
[
"Zhou",
"Taotao",
""
],
[
"Wang",
"Youjia",
""
],
[
"Yang",
"Peng",
""
],
[
"Tang",
"Simin",
""
],
[
"Shao",
"Kuixiang",
""
],
[
"Tang",
"Zifeng",
""
],
[
"Liu",
"Yifei",
""
],
[
"Chen",
"Xinyuan",
""
],
[
"Wang",
"Hongshuang",
""
],
[
"Wang",
"Xiaohui",
""
],
[
"Luo",
"Huoqing",
""
],
[
"Wang",
"Jingya",
""
],
[
"Hu",
"Ji",
""
],
[
"Yu",
"Jingyi",
""
]
] | TITLE: MouseGPT: A Large-scale Vision-Language Model for Mouse Behavior
Analysis
ABSTRACT: Analyzing animal behavior is crucial in advancing neuroscience, yet
quantifying and deciphering its intricate dynamics remains a significant
challenge. Traditional machine vision approaches, despite their ability to
detect spontaneous behaviors, fall short due to limited interpretability and
reliance on manual labeling, which restricts the exploration of the full
behavioral spectrum. Here, we introduce MouseGPT, a Vision-Language Model (VLM)
that integrates visual cues with natural language to revolutionize mouse
behavior analysis. Built upon our first-of-its-kind dataset - incorporating
pose dynamics and open-vocabulary behavioral annotations across over 42 million
frames of diverse psychiatric conditions - MouseGPT provides a novel,
context-rich method for comprehensive behavior interpretation. Our holistic
analysis framework enables detailed behavior profiling, clustering, and novel
behavior discovery, offering deep insights without the need for labor-intensive
manual annotation. Evaluations reveal that MouseGPT surpasses
existing models in precision, adaptability, and descriptive richness,
positioning it as a transformative tool for ethology and for unraveling complex
behavioral dynamics in animal models.
|
2503.13985 | Jaewoo Song | Jaewoo Song, Daemin Park, Kanghyun Baek, Sangyub Lee, Jooyoung Choi,
Eunji Kim, Sungroh Yoon | DefectFill: Realistic Defect Generation with Inpainting Diffusion Model
for Visual Inspection | Accepted to CVPR 2025 | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing effective visual inspection models remains challenging due to the
scarcity of defect data. While image generation models have been used to
synthesize defect images, producing highly realistic defects remains difficult.
We propose DefectFill, a novel method for realistic defect generation that
requires only a few reference defect images. It leverages a fine-tuned
inpainting diffusion model, optimized with our custom loss functions
incorporating defect, object, and attention terms. It enables precise capture
of detailed, localized defect features and their seamless integration into
defect-free objects. Additionally, our Low-Fidelity Selection method further
enhances the defect sample quality. Experiments show that DefectFill generates
high-quality defect images, enabling visual inspection models to achieve
state-of-the-art performance on the MVTec AD dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:42:11 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 05:23:02 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Song",
"Jaewoo",
""
],
[
"Park",
"Daemin",
""
],
[
"Baek",
"Kanghyun",
""
],
[
"Lee",
"Sangyub",
""
],
[
"Choi",
"Jooyoung",
""
],
[
"Kim",
"Eunji",
""
],
[
"Yoon",
"Sungroh",
""
]
] | TITLE: DefectFill: Realistic Defect Generation with Inpainting Diffusion Model
for Visual Inspection
ABSTRACT: Developing effective visual inspection models remains challenging due to the
scarcity of defect data. While image generation models have been used to
synthesize defect images, producing highly realistic defects remains difficult.
We propose DefectFill, a novel method for realistic defect generation that
requires only a few reference defect images. It leverages a fine-tuned
inpainting diffusion model, optimized with our custom loss functions
incorporating defect, object, and attention terms. It enables precise capture
of detailed, localized defect features and their seamless integration into
defect-free objects. Additionally, our Low-Fidelity Selection method further
enhances the defect sample quality. Experiments show that DefectFill generates
high-quality defect images, enabling visual inspection models to achieve
state-of-the-art performance on the MVTec AD dataset.
|
2503.14734 | Yuke Zhu | NVIDIA: Johan Bjorck, Fernando Casta\~neda, Nikita Cherniadev, Xingye
Da, Runyu Ding, Linxi "Jim" Fan, Yu Fang, Dieter Fox, Fengyuan Hu, Spencer
Huang, Joel Jang, Zhenyu Jiang, Jan Kautz, Kaushil Kundalia, Lawrence Lao,
Zhiqi Li, Zongyu Lin, Kevin Lin, Guilin Liu, Edith Llontop, Loic Magne, Ajay
Mandlekar, Avnish Narayan, Soroush Nasiriany, Scott Reed, You Liang Tan,
Guanzhi Wang, Zu Wang, Jing Wang, Qi Wang, Jiannan Xiang, Yuqi Xie, Yinzhen
Xu, Zhenjia Xu, Seonghyeon Ye, Zhiding Yu, Ao Zhang, Hao Zhang, Yizhou Zhao,
Ruijie Zheng, Yuke Zhu | GR00T N1: An Open Foundation Model for Generalist Humanoid Robots | Authors are listed alphabetically. Project leads are Linxi "Jim" Fan
and Yuke Zhu. For more information, see
https://developer.nvidia.com/isaac/gr00t | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | General-purpose robots need a versatile body and an intelligent mind. Recent
advancements in humanoid robots have shown great promise as a hardware platform
for building generalist autonomy in the human world. A robot foundation model,
trained on massive and diverse data sources, is essential for enabling the
robots to reason about novel situations, robustly handle real-world
variability, and rapidly learn new tasks. To this end, we introduce GR00T N1,
an open foundation model for humanoid robots. GR00T N1 is a
Vision-Language-Action (VLA) model with a dual-system architecture. The
vision-language module (System 2) interprets the environment through vision and
language instructions. The subsequent diffusion transformer module (System 1)
generates fluid motor actions in real time. Both modules are tightly coupled
and jointly trained end-to-end. We train GR00T N1 with a heterogeneous mixture
of real-robot trajectories, human videos, and synthetically generated datasets.
We show that our generalist robot model GR00T N1 outperforms the
state-of-the-art imitation learning baselines on standard simulation benchmarks
across multiple robot embodiments. Furthermore, we deploy our model on the
Fourier GR-1 humanoid robot for language-conditioned bimanual manipulation
tasks, achieving strong performance with high data efficiency.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 21:06:21 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 02:52:43 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"NVIDIA",
"",
""
],
[
":",
"",
""
],
[
"Bjorck",
"Johan",
""
],
[
"Castañeda",
"Fernando",
""
],
[
"Cherniadev",
"Nikita",
""
],
[
"Da",
"Xingye",
""
],
[
"Ding",
"Runyu",
""
],
[
"Fan",
"Linxi \"Jim\"",
""
],
[
"Fang",
"Yu",
""
],
[
"Fox",
"Dieter",
""
],
[
"Hu",
"Fengyuan",
""
],
[
"Huang",
"Spencer",
""
],
[
"Jang",
"Joel",
""
],
[
"Jiang",
"Zhenyu",
""
],
[
"Kautz",
"Jan",
""
],
[
"Kundalia",
"Kaushil",
""
],
[
"Lao",
"Lawrence",
""
],
[
"Li",
"Zhiqi",
""
],
[
"Lin",
"Zongyu",
""
],
[
"Lin",
"Kevin",
""
],
[
"Liu",
"Guilin",
""
],
[
"Llontop",
"Edith",
""
],
[
"Magne",
"Loic",
""
],
[
"Mandlekar",
"Ajay",
""
],
[
"Narayan",
"Avnish",
""
],
[
"Nasiriany",
"Soroush",
""
],
[
"Reed",
"Scott",
""
],
[
"Tan",
"You Liang",
""
],
[
"Wang",
"Guanzhi",
""
],
[
"Wang",
"Zu",
""
],
[
"Wang",
"Jing",
""
],
[
"Wang",
"Qi",
""
],
[
"Xiang",
"Jiannan",
""
],
[
"Xie",
"Yuqi",
""
],
[
"Xu",
"Yinzhen",
""
],
[
"Xu",
"Zhenjia",
""
],
[
"Ye",
"Seonghyeon",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Zhang",
"Ao",
""
],
[
"Zhang",
"Hao",
""
],
[
"Zhao",
"Yizhou",
""
],
[
"Zheng",
"Ruijie",
""
],
[
"Zhu",
"Yuke",
""
]
] | TITLE: GR00T N1: An Open Foundation Model for Generalist Humanoid Robots
ABSTRACT: General-purpose robots need a versatile body and an intelligent mind. Recent
advancements in humanoid robots have shown great promise as a hardware platform
for building generalist autonomy in the human world. A robot foundation model,
trained on massive and diverse data sources, is essential for enabling the
robots to reason about novel situations, robustly handle real-world
variability, and rapidly learn new tasks. To this end, we introduce GR00T N1,
an open foundation model for humanoid robots. GR00T N1 is a
Vision-Language-Action (VLA) model with a dual-system architecture. The
vision-language module (System 2) interprets the environment through vision and
language instructions. The subsequent diffusion transformer module (System 1)
generates fluid motor actions in real time. Both modules are tightly coupled
and jointly trained end-to-end. We train GR00T N1 with a heterogeneous mixture
of real-robot trajectories, human videos, and synthetically generated datasets.
We show that our generalist robot model GR00T N1 outperforms the
state-of-the-art imitation learning baselines on standard simulation benchmarks
across multiple robot embodiments. Furthermore, we deploy our model on the
Fourier GR-1 humanoid robot for language-conditioned bimanual manipulation
tasks, achieving strong performance with high data efficiency.
|
2503.16400 | Feilong Tang | Haolin Yang, Feilong Tang, Ming Hu, Yulong Li, Yexin Liu, Zelin Peng,
Junjun He, Zongyuan Ge, Imran Razzak | ScalingNoise: Scaling Inference-Time Search for Generating Infinite
Videos | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Video diffusion models (VDMs) facilitate the generation of high-quality
videos, with current research predominantly concentrated on scaling efforts
during training through improvements in data quality, computational resources,
and model complexity. However, inference-time scaling has received less
attention, with most approaches restricting models to a single generation
attempt. Recent studies have uncovered the existence of "golden noises" that
can enhance video quality during generation. Building on this, we find that
guiding the scaling inference-time search of VDMs to identify better noise
candidates not only evaluates the quality of the frames generated in the
current step but also preserves the high-level object features by referencing
the anchor frame from previous multi-chunks, thereby delivering long-term
value. Our analysis reveals that diffusion models inherently possess flexible
adjustments of computation by varying denoising steps, and even a one-step
denoising approach, when guided by a reward signal, yields significant
long-term benefits. Based on this observation, we propose ScalingNoise, a
plug-and-play inference-time search strategy that identifies golden initial
noises for the diffusion sampling process to improve global content consistency
and visual diversity. Specifically, we perform one-step denoising to convert
initial noises into a clip and subsequently evaluate its long-term value,
leveraging a reward model anchored by previously generated content. Moreover,
to preserve diversity, we sample candidates from a tilted noise distribution
that up-weights promising noises. In this way, ScalingNoise significantly
reduces noise-induced errors, ensuring more coherent and spatiotemporally
consistent video generation. Extensive experiments on benchmark datasets
demonstrate that the proposed ScalingNoise effectively improves long video
generation.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:54:37 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 15:12:43 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yang",
"Haolin",
""
],
[
"Tang",
"Feilong",
""
],
[
"Hu",
"Ming",
""
],
[
"Li",
"Yulong",
""
],
[
"Liu",
"Yexin",
""
],
[
"Peng",
"Zelin",
""
],
[
"He",
"Junjun",
""
],
[
"Ge",
"Zongyuan",
""
],
[
"Razzak",
"Imran",
""
]
] | TITLE: ScalingNoise: Scaling Inference-Time Search for Generating Infinite
Videos
ABSTRACT: Video diffusion models (VDMs) facilitate the generation of high-quality
videos, with current research predominantly concentrated on scaling efforts
during training through improvements in data quality, computational resources,
and model complexity. However, inference-time scaling has received less
attention, with most approaches restricting models to a single generation
attempt. Recent studies have uncovered the existence of "golden noises" that
can enhance video quality during generation. Building on this, we find that
guiding the scaling inference-time search of VDMs to identify better noise
candidates not only evaluates the quality of the frames generated in the
current step but also preserves the high-level object features by referencing
the anchor frame from previous multi-chunks, thereby delivering long-term
value. Our analysis reveals that diffusion models inherently possess flexible
adjustments of computation by varying denoising steps, and even a one-step
denoising approach, when guided by a reward signal, yields significant
long-term benefits. Based on this observation, we propose ScalingNoise, a
plug-and-play inference-time search strategy that identifies golden initial
noises for the diffusion sampling process to improve global content consistency
and visual diversity. Specifically, we perform one-step denoising to convert
initial noises into a clip and subsequently evaluate its long-term value,
leveraging a reward model anchored by previously generated content. Moreover,
to preserve diversity, we sample candidates from a tilted noise distribution
that up-weights promising noises. In this way, ScalingNoise significantly
reduces noise-induced errors, ensuring more coherent and spatiotemporally
consistent video generation. Extensive experiments on benchmark datasets
demonstrate that the proposed ScalingNoise effectively improves long video
generation.
|
2503.16541 | Hanzhi Zhang | Hanzhi Zhang, Sumera Anjum, Heng Fan, Weijian Zheng, Yan Huang, Yunhe
Feng | Poly-FEVER: A Multilingual Fact Verification Benchmark for Hallucination
Detection in Large Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Hallucinations in generative AI, particularly in Large Language Models
(LLMs), pose a significant challenge to the reliability of multilingual
applications. Existing benchmarks for hallucination detection focus primarily
on English and a few widely spoken languages, lacking the breadth to assess
inconsistencies in model performance across diverse linguistic contexts. To
address this gap, we introduce Poly-FEVER, a large-scale multilingual fact
verification benchmark specifically designed for evaluating hallucination
detection in LLMs. Poly-FEVER comprises 77,973 labeled factual claims spanning
11 languages, sourced from FEVER, Climate-FEVER, and SciFact. It provides the
first large-scale dataset tailored for analyzing hallucination patterns across
languages, enabling systematic evaluation of LLMs such as ChatGPT and the LLaMA
series. Our analysis reveals how topic distribution and web resource
availability influence hallucination frequency, uncovering language-specific
biases that impact model accuracy. By offering a multilingual benchmark for
fact verification, Poly-FEVER facilitates cross-linguistic comparisons of
hallucination detection and contributes to the development of more reliable,
language-inclusive AI systems. The dataset is publicly available to advance
research in responsible AI, fact-checking methodologies, and multilingual NLP,
promoting greater transparency and robustness in LLM performance. The proposed
Poly-FEVER is available at:
https://huggingface.co/datasets/HanzhiZhang/Poly-FEVER.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 01:46:09 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 23:53:56 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhang",
"Hanzhi",
""
],
[
"Anjum",
"Sumera",
""
],
[
"Fan",
"Heng",
""
],
[
"Zheng",
"Weijian",
""
],
[
"Huang",
"Yan",
""
],
[
"Feng",
"Yunhe",
""
]
] | TITLE: Poly-FEVER: A Multilingual Fact Verification Benchmark for Hallucination
Detection in Large Language Models
ABSTRACT: Hallucinations in generative AI, particularly in Large Language Models
(LLMs), pose a significant challenge to the reliability of multilingual
applications. Existing benchmarks for hallucination detection focus primarily
on English and a few widely spoken languages, lacking the breadth to assess
inconsistencies in model performance across diverse linguistic contexts. To
address this gap, we introduce Poly-FEVER, a large-scale multilingual fact
verification benchmark specifically designed for evaluating hallucination
detection in LLMs. Poly-FEVER comprises 77,973 labeled factual claims spanning
11 languages, sourced from FEVER, Climate-FEVER, and SciFact. It provides the
first large-scale dataset tailored for analyzing hallucination patterns across
languages, enabling systematic evaluation of LLMs such as ChatGPT and the LLaMA
series. Our analysis reveals how topic distribution and web resource
availability influence hallucination frequency, uncovering language-specific
biases that impact model accuracy. By offering a multilingual benchmark for
fact verification, Poly-FEVER facilitates cross-linguistic comparisons of
hallucination detection and contributes to the development of more reliable,
language-inclusive AI systems. The dataset is publicly available to advance
research in responsible AI, fact-checking methodologies, and multilingual NLP,
promoting greater transparency and robustness in LLM performance. The proposed
Poly-FEVER is available at:
https://huggingface.co/datasets/HanzhiZhang/Poly-FEVER.
|
2503.17132 | Shilin Lu | Siyuan Yang, Shilin Lu, Shizheng Wang, Meng Hwa Er, Zengwei Zheng,
Alex C. Kot | Temporal-Guided Spiking Neural Networks for Event-Based Human Action
Recognition | null | null | null | null | cs.CV cs.AI cs.CR cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the promising interplay between spiking neural networks
(SNNs) and event-based cameras for privacy-preserving human action recognition
(HAR). The unique feature of event cameras in capturing only the outlines of
motion, combined with SNNs' proficiency in processing spatiotemporal data
through spikes, establishes a highly synergistic compatibility for event-based
HAR. Previous studies, however, have been constrained by SNNs' limited ability to process
long-term temporal information, essential for precise HAR. In this paper, we
introduce two novel frameworks to address this: temporal segment-based SNN
(\textit{TS-SNN}) and 3D convolutional SNN (\textit{3D-SNN}). The
\textit{TS-SNN} extracts long-term temporal information by dividing actions
into shorter segments, while the \textit{3D-SNN} replaces 2D spatial elements
with 3D components to facilitate the transmission of temporal information. To
promote further research in event-based HAR, we create a dataset,
\textit{FallingDetection-CeleX}, collected using the high-resolution CeleX-V
event camera $(1280 \times 800)$, comprising 7 distinct actions. Extensive
experimental results show that our proposed frameworks surpass state-of-the-art
SNN methods on our newly collected dataset and three other neuromorphic
datasets, showcasing their effectiveness in handling long-range temporal
information for event-based HAR.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:31:16 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 11:35:37 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yang",
"Siyuan",
""
],
[
"Lu",
"Shilin",
""
],
[
"Wang",
"Shizheng",
""
],
[
"Er",
"Meng Hwa",
""
],
[
"Zheng",
"Zengwei",
""
],
[
"Kot",
"Alex C.",
""
]
] | TITLE: Temporal-Guided Spiking Neural Networks for Event-Based Human Action
Recognition
ABSTRACT: This paper explores the promising interplay between spiking neural networks
(SNNs) and event-based cameras for privacy-preserving human action recognition
(HAR). The unique feature of event cameras in capturing only the outlines of
motion, combined with SNNs' proficiency in processing spatiotemporal data
through spikes, establishes a highly synergistic compatibility for event-based
HAR. Previous studies, however, have been constrained by SNNs' limited ability to process
long-term temporal information, essential for precise HAR. In this paper, we
introduce two novel frameworks to address this: temporal segment-based SNN
(\textit{TS-SNN}) and 3D convolutional SNN (\textit{3D-SNN}). The
\textit{TS-SNN} extracts long-term temporal information by dividing actions
into shorter segments, while the \textit{3D-SNN} replaces 2D spatial elements
with 3D components to facilitate the transmission of temporal information. To
promote further research in event-based HAR, we create a dataset,
\textit{FallingDetection-CeleX}, collected using the high-resolution CeleX-V
event camera $(1280 \times 800)$, comprising 7 distinct actions. Extensive
experimental results show that our proposed frameworks surpass state-of-the-art
SNN methods on our newly collected dataset and three other neuromorphic
datasets, showcasing their effectiveness in handling long-range temporal
information for event-based HAR.
|
2503.18297 | Yishen Liu | Yishen Liu and Shengda Liu and Hudan Pan | Image-to-Text for Medical Reports Using Adaptive Co-Attention and
Triple-LSTM Module | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Medical report generation requires specialized expertise that general large
models often fail to accurately capture. Moreover, the inherent repetition and
similarity in medical data make it difficult for models to extract meaningful
features, resulting in a tendency to overfit. Therefore, in this paper, we propose a
multimodal model, Co-Attention Triple-LSTM Network (CA-TriNet), a deep learning
model that combines transformer architectures with a Multi-LSTM network. Its
Co-Attention module synergistically links a vision transformer with a text
transformer to better differentiate medical images with similarities, augmented
by an adaptive weight operator to catch and amplify image labels with minor
similarities. Furthermore, its Triple-LSTM module refines generated sentences
using targeted image objects. Extensive evaluations over three public datasets
have demonstrated that CA-TriNet outperforms state-of-the-art models in terms
of comprehensive ability, even surpassing pre-trained large language models on some
metrics.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 03:02:11 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 06:47:06 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Liu",
"Yishen",
""
],
[
"Liu",
"Shengda",
""
],
[
"Pan",
"Hudan",
""
]
] | TITLE: Image-to-Text for Medical Reports Using Adaptive Co-Attention and
Triple-LSTM Module
ABSTRACT: Medical report generation requires specialized expertise that general large
models often fail to accurately capture. Moreover, the inherent repetition and
similarity in medical data make it difficult for models to extract meaningful
features, resulting in a tendency to overfit. Therefore, in this paper, we propose a
multimodal model, Co-Attention Triple-LSTM Network (CA-TriNet), a deep learning
model that combines transformer architectures with a Multi-LSTM network. Its
Co-Attention module synergistically links a vision transformer with a text
transformer to better differentiate medical images with similarities, augmented
by an adaptive weight operator to catch and amplify image labels with minor
similarities. Furthermore, its Triple-LSTM module refines generated sentences
using targeted image objects. Extensive evaluations over three public datasets
have demonstrated that CA-TriNet outperforms state-of-the-art models in terms
of comprehensive ability, even surpassing pre-trained large language models on some
metrics.
|
2503.18943 | Mingze Xu | Mingze Xu, Mingfei Gao, Shiyu Li, Jiasen Lu, Zhe Gan, Zhengfeng Lai,
Meng Cao, Kai Kang, Yinfei Yang, Afshin Dehghan | SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language
Models for Long-Form Video Understanding | Technical report | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce SlowFast-LLaVA-1.5 (abbreviated as SF-LLaVA-1.5), a family of
video large language models (LLMs) offering a token-efficient solution for
long-form video understanding. We incorporate the two-stream SlowFast mechanism
into a streamlined training pipeline, and perform joint video-image training on
a carefully curated data mixture of only publicly available datasets. Our
primary focus is on highly efficient model scales (1B and 3B), demonstrating
that even relatively small Video LLMs can achieve state-of-the-art performance
on video understanding, meeting the demand for mobile-friendly models.
Experimental results demonstrate that SF-LLaVA-1.5 achieves superior
performance on a wide range of video and image tasks, with robust results at
all model sizes (ranging from 1B to 7B). Notably, SF-LLaVA-1.5 achieves
state-of-the-art results in long-form video understanding (e.g., LongVideoBench
and MLVU) and excels at small scales across various video benchmarks.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:59:07 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 17:34:06 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Xu",
"Mingze",
""
],
[
"Gao",
"Mingfei",
""
],
[
"Li",
"Shiyu",
""
],
[
"Lu",
"Jiasen",
""
],
[
"Gan",
"Zhe",
""
],
[
"Lai",
"Zhengfeng",
""
],
[
"Cao",
"Meng",
""
],
[
"Kang",
"Kai",
""
],
[
"Yang",
"Yinfei",
""
],
[
"Dehghan",
"Afshin",
""
]
] | TITLE: SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language
Models for Long-Form Video Understanding
ABSTRACT: We introduce SlowFast-LLaVA-1.5 (abbreviated as SF-LLaVA-1.5), a family of
video large language models (LLMs) offering a token-efficient solution for
long-form video understanding. We incorporate the two-stream SlowFast mechanism
into a streamlined training pipeline, and perform joint video-image training on
a carefully curated data mixture of only publicly available datasets. Our
primary focus is on highly efficient model scales (1B and 3B), demonstrating
that even relatively small Video LLMs can achieve state-of-the-art performance
on video understanding, meeting the demand for mobile-friendly models.
Experimental results demonstrate that SF-LLaVA-1.5 achieves superior
performance on a wide range of video and image tasks, with robust results at
all model sizes (ranging from 1B to 7B). Notably, SF-LLaVA-1.5 achieves
state-of-the-art results in long-form video understanding (e.g., LongVideoBench
and MLVU) and excels at small scales across various video benchmarks.
|
2503.19176 | Yizhu Wen | Yizhu Wen, Ashwin Innuganti, Aaron Bien Ramos, Hanqing Guo, Qiben Yan | SoK: How Robust is Audio Watermarking in Generative AI models? | null | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Audio watermarking is increasingly used to verify the provenance of
AI-generated content, enabling applications such as detecting AI-generated
speech, protecting music IP, and defending against voice cloning. To be
effective, audio watermarks must resist removal attacks that distort signals to
evade detection. While many schemes claim robustness, these claims are
typically tested in isolation and against a limited set of attacks. A
systematic evaluation against diverse removal attacks is lacking, hindering
practical deployment. In this paper, we investigate whether recent watermarking
schemes that claim robustness can withstand a broad range of removal attacks.
First, we introduce a taxonomy covering 22 audio watermarking schemes. Next, we
summarize their underlying technologies and potential vulnerabilities. We then
present a large-scale empirical study to assess their robustness. To support
this, we build an evaluation framework encompassing 22 types of removal attacks
(109 configurations) including signal-level, physical-level, and AI-induced
distortions. We reproduce 9 watermarking schemes using open-source code,
identify 8 new highly effective attacks, and highlight 11 key findings that
expose the fundamental limitations of these methods across 3 public datasets.
Our results reveal that none of the surveyed schemes can withstand all tested
distortions. This evaluation offers a comprehensive view of how current
watermarking methods perform under real-world threats. Our demo and code are
available at https://sokaudiowm.github.io/.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 21:57:59 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 00:51:02 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wen",
"Yizhu",
""
],
[
"Innuganti",
"Ashwin",
""
],
[
"Ramos",
"Aaron Bien",
""
],
[
"Guo",
"Hanqing",
""
],
[
"Yan",
"Qiben",
""
]
] | TITLE: SoK: How Robust is Audio Watermarking in Generative AI models?
ABSTRACT: Audio watermarking is increasingly used to verify the provenance of
AI-generated content, enabling applications such as detecting AI-generated
speech, protecting music IP, and defending against voice cloning. To be
effective, audio watermarks must resist removal attacks that distort signals to
evade detection. While many schemes claim robustness, these claims are
typically tested in isolation and against a limited set of attacks. A
systematic evaluation against diverse removal attacks is lacking, hindering
practical deployment. In this paper, we investigate whether recent watermarking
schemes that claim robustness can withstand a broad range of removal attacks.
First, we introduce a taxonomy covering 22 audio watermarking schemes. Next, we
summarize their underlying technologies and potential vulnerabilities. We then
present a large-scale empirical study to assess their robustness. To support
this, we build an evaluation framework encompassing 22 types of removal attacks
(109 configurations) including signal-level, physical-level, and AI-induced
distortions. We reproduce 9 watermarking schemes using open-source code,
identify 8 new highly effective attacks, and highlight 11 key findings that
expose the fundamental limitations of these methods across 3 public datasets.
Our results reveal that none of the surveyed schemes can withstand all tested
distortions. This evaluation offers a comprehensive view of how current
watermarking methods perform under real-world threats. Our demo and code are
available at https://sokaudiowm.github.io/.
|
2503.19470 | Mingyang Chen | Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu,
Haofen Wang, Jeff Z. Pan, Wen Zhang, Huajun Chen, Fan Yang, Zenan Zhou,
Weipeng Chen | ReSearch: Learning to Reason with Search for LLMs via Reinforcement
Learning | Work in progress | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have shown remarkable capabilities in reasoning,
exemplified by the success of OpenAI-o1 and DeepSeek-R1. However, integrating
reasoning with external search processes remains challenging, especially for
complex multi-hop questions requiring multiple retrieval steps. We propose
ReSearch, a novel framework that trains LLMs to Reason with Search via
reinforcement learning without using any supervised data on reasoning steps.
Our approach treats search operations as integral components of the reasoning
chain, where when and how to perform searches is guided by text-based thinking,
and search results subsequently influence further reasoning. We train ReSearch
on Qwen2.5-7B(-Instruct) and Qwen2.5-32B(-Instruct) models and conduct
extensive experiments. Despite being trained on only one dataset, our models
demonstrate strong generalizability across various benchmarks. Analysis reveals
that ReSearch naturally elicits advanced reasoning capabilities such as
reflection and self-correction during the reinforcement learning process.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 09:00:58 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 05:56:31 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Chen",
"Mingyang",
""
],
[
"Li",
"Tianpeng",
""
],
[
"Sun",
"Haoze",
""
],
[
"Zhou",
"Yijie",
""
],
[
"Zhu",
"Chenzheng",
""
],
[
"Wang",
"Haofen",
""
],
[
"Pan",
"Jeff Z.",
""
],
[
"Zhang",
"Wen",
""
],
[
"Chen",
"Huajun",
""
],
[
"Yang",
"Fan",
""
],
[
"Zhou",
"Zenan",
""
],
[
"Chen",
"Weipeng",
""
]
] | TITLE: ReSearch: Learning to Reason with Search for LLMs via Reinforcement
Learning
ABSTRACT: Large Language Models (LLMs) have shown remarkable capabilities in reasoning,
exemplified by the success of OpenAI-o1 and DeepSeek-R1. However, integrating
reasoning with external search processes remains challenging, especially for
complex multi-hop questions requiring multiple retrieval steps. We propose
ReSearch, a novel framework that trains LLMs to Reason with Search via
reinforcement learning without using any supervised data on reasoning steps.
Our approach treats search operations as integral components of the reasoning
chain, where when and how to perform searches is guided by text-based thinking,
and search results subsequently influence further reasoning. We train ReSearch
on Qwen2.5-7B(-Instruct) and Qwen2.5-32B(-Instruct) models and conduct
extensive experiments. Despite being trained on only one dataset, our models
demonstrate strong generalizability across various benchmarks. Analysis reveals
that ReSearch naturally elicits advanced reasoning capabilities such as
reflection and self-correction during the reinforcement learning process.
|
2503.20235 | Ahyun Seo | Ahyun Seo, Minsu Cho | Leveraging 3D Geometric Priors in 2D Rotation Symmetry Detection | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Symmetry plays a vital role in understanding structural patterns, aiding
object recognition and scene interpretation. This paper focuses on rotation
symmetry, where objects remain unchanged when rotated around a central axis,
requiring detection of rotation centers and supporting vertices. Traditional
methods relied on hand-crafted feature matching, while recent segmentation
models based on convolutional neural networks detect rotation centers but
struggle with 3D geometric consistency due to viewpoint distortions. To
overcome this, we propose a model that directly predicts rotation centers and
vertices in 3D space and projects the results back to 2D while preserving
structural integrity. By incorporating a vertex reconstruction stage enforcing
3D geometric priors -- such as equal side lengths and interior angles -- our
model enhances robustness and accuracy. Experiments on the DENDI dataset show
superior performance in rotation axis detection and validate the impact of 3D
priors through ablation studies.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 05:02:16 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 02:40:25 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Seo",
"Ahyun",
""
],
[
"Cho",
"Minsu",
""
]
] | TITLE: Leveraging 3D Geometric Priors in 2D Rotation Symmetry Detection
ABSTRACT: Symmetry plays a vital role in understanding structural patterns, aiding
object recognition and scene interpretation. This paper focuses on rotation
symmetry, where objects remain unchanged when rotated around a central axis,
requiring detection of rotation centers and supporting vertices. Traditional
methods relied on hand-crafted feature matching, while recent segmentation
models based on convolutional neural networks detect rotation centers but
struggle with 3D geometric consistency due to viewpoint distortions. To
overcome this, we propose a model that directly predicts rotation centers and
vertices in 3D space and projects the results back to 2D while preserving
structural integrity. By incorporating a vertex reconstruction stage enforcing
3D geometric priors -- such as equal side lengths and interior angles -- our
model enhances robustness and accuracy. Experiments on the DENDI dataset show
superior performance in rotation axis detection and validate the impact of 3D
priors through ablation studies.
|
2503.20349 | Weiyi You | Weiyi You, Mingyang Zhang, Leheng Zhang, Xingyu Zhou, Kexuan Shi,
Shuhang Gu | Consistency Trajectory Matching for One-Step Generative Super-Resolution | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current diffusion-based super-resolution (SR) approaches achieve commendable
performance at the cost of high inference overhead. Therefore, distillation
techniques are utilized to accelerate the multi-step teacher model into a
one-step student model. Nevertheless, these methods significantly raise
training costs and constrain the performance of the student model by the
teacher model. To overcome these tough challenges, we propose Consistency
Trajectory Matching for Super-Resolution (CTMSR), a distillation-free strategy
that is able to generate photo-realistic SR results in one step. Concretely, we
first formulate a Probability Flow Ordinary Differential Equation (PF-ODE)
trajectory to establish a deterministic mapping from low-resolution (LR) images
with noise to high-resolution (HR) images. Then we apply the Consistency
Training (CT) strategy to directly learn the mapping in one step, eliminating
the necessity of pre-trained diffusion model. To further enhance the
performance and better leverage the ground-truth during the training process,
we aim to align the distribution of SR results more closely with that of the
natural images. To this end, we propose to minimize the discrepancy between
their respective PF-ODE trajectories from the LR image distribution by our
meticulously designed Distribution Trajectory Matching (DTM) loss, resulting in
improved realism of our recovered HR images. Comprehensive experimental results
demonstrate that the proposed methods can attain comparable or even superior
capabilities on both synthetic and real datasets while maintaining minimal
inference latency.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 09:20:42 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 13:59:15 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"You",
"Weiyi",
""
],
[
"Zhang",
"Mingyang",
""
],
[
"Zhang",
"Leheng",
""
],
[
"Zhou",
"Xingyu",
""
],
[
"Shi",
"Kexuan",
""
],
[
"Gu",
"Shuhang",
""
]
] | TITLE: Consistency Trajectory Matching for One-Step Generative Super-Resolution
ABSTRACT: Current diffusion-based super-resolution (SR) approaches achieve commendable
performance at the cost of high inference overhead. Therefore, distillation
techniques are utilized to accelerate the multi-step teacher model into a
one-step student model. Nevertheless, these methods significantly raise
training costs and constrain the performance of the student model by the
teacher model. To overcome these tough challenges, we propose Consistency
Trajectory Matching for Super-Resolution (CTMSR), a distillation-free strategy
that is able to generate photo-realistic SR results in one step. Concretely, we
first formulate a Probability Flow Ordinary Differential Equation (PF-ODE)
trajectory to establish a deterministic mapping from low-resolution (LR) images
with noise to high-resolution (HR) images. Then we apply the Consistency
Training (CT) strategy to directly learn the mapping in one step, eliminating
the necessity of pre-trained diffusion model. To further enhance the
performance and better leverage the ground-truth during the training process,
we aim to align the distribution of SR results more closely with that of the
natural images. To this end, we propose to minimize the discrepancy between
their respective PF-ODE trajectories from the LR image distribution by our
meticulously designed Distribution Trajectory Matching (DTM) loss, resulting in
improved realism of our recovered HR images. Comprehensive experimental results
demonstrate that the proposed methods can attain comparable or even superior
capabilities on both synthetic and real datasets while maintaining minimal
inference latency.
|
2503.20652 | Theo Di Piazza | Theo Di Piazza, Carole Lazarus, Olivier Nempont, Loic Boussel | Imitating Radiological Scrolling: A Global-Local Attention Model for 3D
Chest CT Volumes Multi-Label Anomaly Classification | 13 pages, 4 figures. Accepted for MIDL 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The rapid increase in the number of Computed Tomography (CT) scan
examinations has created an urgent need for automated tools, such as organ
segmentation, anomaly classification, and report generation, to assist
radiologists with their growing workload. Multi-label classification of
Three-Dimensional (3D) CT scans is a challenging task due to the volumetric
nature of the data and the variety of anomalies to be detected. Existing deep
learning methods based on Convolutional Neural Networks (CNNs) struggle to
capture long-range dependencies effectively, while Vision Transformers require
extensive pre-training, posing challenges for practical use. Additionally,
these existing methods do not explicitly model the radiologist's navigational
behavior while scrolling through CT scan slices, which requires both global
context understanding and local detail awareness. In this study, we present
CT-Scroll, a novel global-local attention model specifically designed to
emulate the scrolling behavior of radiologists during the analysis of 3D CT
scans. Our approach is evaluated on two public datasets, demonstrating its
efficacy through comprehensive experiments and an ablation study that
highlights the contribution of each model component.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 15:47:50 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 14:46:42 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Di Piazza",
"Theo",
""
],
[
"Lazarus",
"Carole",
""
],
[
"Nempont",
"Olivier",
""
],
[
"Boussel",
"Loic",
""
]
] | TITLE: Imitating Radiological Scrolling: A Global-Local Attention Model for 3D
Chest CT Volumes Multi-Label Anomaly Classification
ABSTRACT: The rapid increase in the number of Computed Tomography (CT) scan
examinations has created an urgent need for automated tools, such as organ
segmentation, anomaly classification, and report generation, to assist
radiologists with their growing workload. Multi-label classification of
Three-Dimensional (3D) CT scans is a challenging task due to the volumetric
nature of the data and the variety of anomalies to be detected. Existing deep
learning methods based on Convolutional Neural Networks (CNNs) struggle to
capture long-range dependencies effectively, while Vision Transformers require
extensive pre-training, posing challenges for practical use. Additionally,
these existing methods do not explicitly model the radiologist's navigational
behavior while scrolling through CT scan slices, which requires both global
context understanding and local detail awareness. In this study, we present
CT-Scroll, a novel global-local attention model specifically designed to
emulate the scrolling behavior of radiologists during the analysis of 3D CT
scans. Our approach is evaluated on two public datasets, demonstrating its
efficacy through comprehensive experiments and an ablation study that
highlights the contribution of each model component.
|
2503.20685 | Yuhao Huang | Yuhao Huang, Ao Chang, Haoran Dou, Xing Tao, Xinrui Zhou, Yan Cao,
Ruobing Huang, Alejandro F Frangi, Lingyun Bao, Xin Yang, Dong Ni | Flip Learning: Weakly Supervised Erase to Segment Nodules in Breast
Ultrasound | Accepted by Medical Image Analysis. 24 pages, 13 figures, 20 tables | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate segmentation of nodules in both 2D breast ultrasound (BUS) and 3D
automated breast ultrasound (ABUS) is crucial for clinical diagnosis and
treatment planning. Therefore, developing an automated system for nodule
segmentation can enhance user independence and expedite clinical analysis.
Unlike fully-supervised learning, weakly-supervised segmentation (WSS) can
streamline the laborious and intricate annotation process. However, current WSS
methods face challenges in achieving precise nodule segmentation, as many of
them depend on inaccurate activation maps or inefficient pseudo-mask generation
algorithms. In this study, we introduce a novel multi-agent reinforcement
learning-based WSS framework called Flip Learning, which relies solely on 2D/3D
boxes for accurate segmentation. Specifically, multiple agents are employed to
erase the target from the box to facilitate classification tag flipping, with
the erased region serving as the predicted segmentation mask. The key
contributions of this research are as follows: (1) Adoption of a
superpixel/supervoxel-based approach to encode the standardized environment,
capturing boundary priors and expediting the learning process. (2) Introduction
of three meticulously designed rewards, comprising a classification score
reward and two intensity distribution rewards, to steer the agents' erasing
process precisely, thereby avoiding both under- and over-segmentation. (3)
Implementation of a progressive curriculum learning strategy to enable agents
to interact with the environment in a progressively challenging manner, thereby
enhancing learning efficiency. Extensively validated on the large in-house BUS
and ABUS datasets, our Flip Learning method outperforms state-of-the-art WSS
methods and foundation models, and achieves comparable performance to
fully-supervised learning algorithms.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:20:02 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 06:16:16 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Huang",
"Yuhao",
""
],
[
"Chang",
"Ao",
""
],
[
"Dou",
"Haoran",
""
],
[
"Tao",
"Xing",
""
],
[
"Zhou",
"Xinrui",
""
],
[
"Cao",
"Yan",
""
],
[
"Huang",
"Ruobing",
""
],
[
"Frangi",
"Alejandro F",
""
],
[
"Bao",
"Lingyun",
""
],
[
"Yang",
"Xin",
""
],
[
"Ni",
"Dong",
""
]
] | TITLE: Flip Learning: Weakly Supervised Erase to Segment Nodules in Breast
Ultrasound
ABSTRACT: Accurate segmentation of nodules in both 2D breast ultrasound (BUS) and 3D
automated breast ultrasound (ABUS) is crucial for clinical diagnosis and
treatment planning. Therefore, developing an automated system for nodule
segmentation can enhance user independence and expedite clinical analysis.
Unlike fully-supervised learning, weakly-supervised segmentation (WSS) can
streamline the laborious and intricate annotation process. However, current WSS
methods face challenges in achieving precise nodule segmentation, as many of
them depend on inaccurate activation maps or inefficient pseudo-mask generation
algorithms. In this study, we introduce a novel multi-agent reinforcement
learning-based WSS framework called Flip Learning, which relies solely on 2D/3D
boxes for accurate segmentation. Specifically, multiple agents are employed to
erase the target from the box to facilitate classification tag flipping, with
the erased region serving as the predicted segmentation mask. The key
contributions of this research are as follows: (1) Adoption of a
superpixel/supervoxel-based approach to encode the standardized environment,
capturing boundary priors and expediting the learning process. (2) Introduction
of three meticulously designed rewards, comprising a classification score
reward and two intensity distribution rewards, to steer the agents' erasing
process precisely, thereby avoiding both under- and over-segmentation. (3)
Implementation of a progressive curriculum learning strategy to enable agents
to interact with the environment in a progressively challenging manner, thereby
enhancing learning efficiency. Extensively validated on the large in-house BUS
and ABUS datasets, our Flip Learning method outperforms state-of-the-art WSS
methods and foundation models, and achieves performance comparable to
fully-supervised learning algorithms.
|
2503.20752 | Huajie Tan | Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang,
Zhongyuan Wang, Shanghang Zhang | Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning | 35 pages, 22 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual reasoning abilities play a crucial role in understanding complex
multimodal data, advancing both domain-specific applications and artificial
general intelligence (AGI). Existing methods improve VLM reasoning via
Chain-of-Thought (CoT) supervised fine-tuning, using meticulously annotated
training data to enhance visual reasoning capabilities. However, this training
paradigm may lead to overfitting and cognitive rigidity, restricting the
model's ability to transfer visual reasoning skills across domains and limiting
its real-world applicability. To address these limitations, we propose
Reason-RFT, a novel reinforcement fine-tuning framework that significantly
enhances generalization capabilities in visual reasoning tasks. Reason-RFT
introduces a two-phase training framework for visual reasoning: (1) Supervised
Fine-Tuning (SFT) with curated Chain-of-Thought (CoT) data activates the
reasoning potential of Vision-Language Models (VLMs), followed by (2) Group
Relative Policy Optimization (GRPO)-based reinforcement learning that generates
multiple reasoning-response pairs, significantly enhancing generalization in
visual reasoning tasks. To evaluate Reason-RFT's visual reasoning capabilities,
we reconstructed a comprehensive dataset spanning visual counting, structure
perception, and spatial transformation. Experimental results demonstrate
Reason-RFT's three key advantages: (1) Performance Enhancement: achieving
state-of-the-art results across multiple tasks, outperforming most mainstream
open-source and proprietary models; (2) Generalization Superiority:
consistently maintaining robust performance across diverse tasks and domains,
outperforming alternative training paradigms; (3) Data Efficiency: excelling in
few-shot learning scenarios while surpassing full-dataset SFT baselines.
Project website: https://tanhuajie.github.io/ReasonRFT
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:38:06 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 03:13:00 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Tan",
"Huajie",
""
],
[
"Ji",
"Yuheng",
""
],
[
"Hao",
"Xiaoshuai",
""
],
[
"Lin",
"Minglan",
""
],
[
"Wang",
"Pengwei",
""
],
[
"Wang",
"Zhongyuan",
""
],
[
"Zhang",
"Shanghang",
""
]
] | TITLE: Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning
ABSTRACT: Visual reasoning abilities play a crucial role in understanding complex
multimodal data, advancing both domain-specific applications and artificial
general intelligence (AGI). Existing methods improve VLM reasoning via
Chain-of-Thought (CoT) supervised fine-tuning, using meticulously annotated
training data to enhance visual reasoning capabilities. However, this training
paradigm may lead to overfitting and cognitive rigidity, restricting the
model's ability to transfer visual reasoning skills across domains and limiting
its real-world applicability. To address these limitations, we propose
Reason-RFT, a novel reinforcement fine-tuning framework that significantly
enhances generalization capabilities in visual reasoning tasks. Reason-RFT
introduces a two-phase training framework for visual reasoning: (1) Supervised
Fine-Tuning (SFT) with curated Chain-of-Thought (CoT) data activates the
reasoning potential of Vision-Language Models (VLMs), followed by (2) Group
Relative Policy Optimization (GRPO)-based reinforcement learning that generates
multiple reasoning-response pairs, significantly enhancing generalization in
visual reasoning tasks. To evaluate Reason-RFT's visual reasoning capabilities,
we reconstructed a comprehensive dataset spanning visual counting, structure
perception, and spatial transformation. Experimental results demonstrate
Reason-RFT's three key advantages: (1) Performance Enhancement: achieving
state-of-the-art results across multiple tasks, outperforming most mainstream
open-source and proprietary models; (2) Generalization Superiority:
consistently maintaining robust performance across diverse tasks and domains,
outperforming alternative training paradigms; (3) Data Efficiency: excelling in
few-shot learning scenarios while surpassing full-dataset SFT baselines.
Project website: https://tanhuajie.github.io/ReasonRFT
|
2503.20789 | Sowad Rahman | Sowad Rahman | Neuro-Informed Adaptive Learning (NIAL) Algorithm: A Hybrid Deep
Learning Approach for ECG Signal Classification | 1 figure, 2 pages | null | null | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection of cardiac abnormalities using electrocardiogram (ECG) signals
is crucial for early diagnosis and intervention in cardiovascular diseases.
Traditional deep learning models often lack adaptability to varying signal
patterns. This study introduces the Neuro-Informed Adaptive Learning (NIAL)
algorithm, a hybrid approach integrating convolutional neural networks (CNNs)
and transformer-based attention mechanisms to enhance ECG signal
classification. The algorithm dynamically adjusts learning rates based on
real-time validation performance, ensuring efficient convergence. Using the
MIT-BIH Arrhythmia and PTB Diagnostic ECG datasets, our model achieves high
classification accuracy, outperforming conventional approaches. These findings
highlight the potential of NIAL in real-time cardiovascular monitoring
applications.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 14:37:53 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Rahman",
"Sowad",
""
]
] | TITLE: Neuro-Informed Adaptive Learning (NIAL) Algorithm: A Hybrid Deep
Learning Approach for ECG Signal Classification
ABSTRACT: The detection of cardiac abnormalities using electrocardiogram (ECG) signals
is crucial for early diagnosis and intervention in cardiovascular diseases.
Traditional deep learning models often lack adaptability to varying signal
patterns. This study introduces the Neuro-Informed Adaptive Learning (NIAL)
algorithm, a hybrid approach integrating convolutional neural networks (CNNs)
and transformer-based attention mechanisms to enhance ECG signal
classification. The algorithm dynamically adjusts learning rates based on
real-time validation performance, ensuring efficient convergence. Using the
MIT-BIH Arrhythmia and PTB Diagnostic ECG datasets, our model achieves high
classification accuracy, outperforming conventional approaches. These findings
highlight the potential of NIAL in real-time cardiovascular monitoring
applications.
|
2503.20797 | Muhammad Haroon | Muhammad Haroon, Magdalena Wojcieszak, Anshuman Chhabra | "Whose Side Are You On?" Estimating Ideology of Political and News
Content Using Large Language Models and Few-shot Demonstration Selection | null | null | null | null | cs.CL cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | The rapid growth of social media platforms has led to concerns about
radicalization, filter bubbles, and content bias. Existing approaches to
classifying ideology are limited in that they require extensive human effort
and the labeling of large datasets, and they are not able to adapt to evolving
ideological contexts. This paper explores the potential of Large Language
Models (LLMs) for classifying the political ideology of online content in the
context of the two-party US political spectrum through in-context learning
(ICL). Our extensive experiments involving demonstration selection in
a label-balanced fashion, conducted on three datasets comprising news articles
and YouTube videos, reveal that our approach significantly outperforms
zero-shot and traditional supervised methods. Additionally, we evaluate the
influence of metadata (e.g., content source and descriptions) on ideological
classification and discuss its implications. Finally, we show how providing the
source for political and non-political content influences the LLM's
classification.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 02:32:25 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Haroon",
"Muhammad",
""
],
[
"Wojcieszak",
"Magdalena",
""
],
[
"Chhabra",
"Anshuman",
""
]
] | TITLE: "Whose Side Are You On?" Estimating Ideology of Political and News
Content Using Large Language Models and Few-shot Demonstration Selection
ABSTRACT: The rapid growth of social media platforms has led to concerns about
radicalization, filter bubbles, and content bias. Existing approaches to
classifying ideology are limited in that they require extensive human effort
and the labeling of large datasets, and they are not able to adapt to evolving
ideological contexts. This paper explores the potential of Large Language
Models (LLMs) for classifying the political ideology of online content in the
context of the two-party US political spectrum through in-context learning
(ICL). Our extensive experiments involving demonstration selection in
a label-balanced fashion, conducted on three datasets comprising news articles
and YouTube videos, reveal that our approach significantly outperforms
zero-shot and traditional supervised methods. Additionally, we evaluate the
influence of metadata (e.g., content source and descriptions) on ideological
classification and discuss its implications. Finally, we show how providing the
source for political and non-political content influences the LLM's
classification.
|
2503.20800 | Tao Qi | Qi Tao, Yin Jinhua, Cai Dongqi, Xie Yueqi, Wang Huili, Hu Zhiyang,
Yang Peiru, Nan Guoshun, Zhou Zhili, Wang Shangguang, Lyu Lingjuan, Huang
Yongfeng, Lane Nicholas | Evidencing Unauthorized Training Data from AI Generated Content using
Information Isotopes | null | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In light of scaling laws, many AI institutions are intensifying efforts to
construct advanced AIs on extensive collections of high-quality human data.
However, in a rush to stay competitive, some institutions may inadvertently or
even deliberately include unauthorized data (like privacy- or intellectual
property-sensitive content) for AI training, which infringes on the rights of
data owners. Compounding this issue, these advanced AI services are typically
built on opaque cloud platforms, which restricts access to internal information
during AI training and inference, leaving only the generated outputs available
for forensics. Thus, despite the introduction of legal frameworks by various
countries to safeguard data rights, uncovering evidence of data misuse in
modern opaque AI applications remains a significant challenge. In this paper,
inspired by the ability of isotopes to trace elements within chemical
reactions, we introduce the concept of information isotopes and elucidate their
properties in tracing training data within opaque AI systems. Furthermore, we
propose an information isotope tracing method designed to identify and provide
evidence of unauthorized data usage by detecting the presence of target
information isotopes in AI generations. We conduct experiments on ten AI models
(including GPT-4o, Claude-3.5, and DeepSeek) and four benchmark datasets in
critical domains (medical data, copyrighted books, and news). Results show that
our method can distinguish training datasets from non-training datasets with
99\% accuracy and significant evidence (p-value$<0.001$) by examining a data
entry equivalent in length to a research paper. The findings show the potential
of our work as an inclusive tool for empowering individuals, including those
without expertise in AI, to safeguard their data rights in the rapidly evolving
era of AI advancements and applications.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 07:35:59 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Tao",
"Qi",
""
],
[
"Jinhua",
"Yin",
""
],
[
"Dongqi",
"Cai",
""
],
[
"Yueqi",
"Xie",
""
],
[
"Huili",
"Wang",
""
],
[
"Zhiyang",
"Hu",
""
],
[
"Peiru",
"Yang",
""
],
[
"Guoshun",
"Nan",
""
],
[
"Zhili",
"Zhou",
""
],
[
"Shangguang",
"Wang",
""
],
[
"Lingjuan",
"Lyu",
""
],
[
"Yongfeng",
"Huang",
""
],
[
"Nicholas",
"Lane",
""
]
] | TITLE: Evidencing Unauthorized Training Data from AI Generated Content using
Information Isotopes
ABSTRACT: In light of scaling laws, many AI institutions are intensifying efforts to
construct advanced AIs on extensive collections of high-quality human data.
However, in a rush to stay competitive, some institutions may inadvertently or
even deliberately include unauthorized data (like privacy- or intellectual
property-sensitive content) for AI training, which infringes on the rights of
data owners. Compounding this issue, these advanced AI services are typically
built on opaque cloud platforms, which restricts access to internal information
during AI training and inference, leaving only the generated outputs available
for forensics. Thus, despite the introduction of legal frameworks by various
countries to safeguard data rights, uncovering evidence of data misuse in
modern opaque AI applications remains a significant challenge. In this paper,
inspired by the ability of isotopes to trace elements within chemical
reactions, we introduce the concept of information isotopes and elucidate their
properties in tracing training data within opaque AI systems. Furthermore, we
propose an information isotope tracing method designed to identify and provide
evidence of unauthorized data usage by detecting the presence of target
information isotopes in AI generations. We conduct experiments on ten AI models
(including GPT-4o, Claude-3.5, and DeepSeek) and four benchmark datasets in
critical domains (medical data, copyrighted books, and news). Results show that
our method can distinguish training datasets from non-training datasets with
99\% accuracy and significant evidence (p-value$<0.001$) by examining a data
entry equivalent in length to a research paper. The findings show the potential
of our work as an inclusive tool for empowering individuals, including those
without expertise in AI, to safeguard their data rights in the rapidly evolving
era of AI advancements and applications.
|
2503.20803 | Bamidele Ajayi | Bamidele Ajayi, Basel Barakat and Ken McGarry | Leveraging VAE-Derived Latent Spaces for Enhanced Malware Detection with
Machine Learning Classifiers | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper assesses the performance of five machine learning classifiers:
Decision Tree, Naive Bayes, LightGBM, Logistic Regression, and Random Forest
using latent representations learned by a Variational Autoencoder from malware
datasets. Results from the experiments conducted on different training-test
splits with different random seeds reveal that all the models perform well in
detecting malware with ensemble methods (LightGBM and Random Forest) performing
slightly better than the rest. In addition, the use of latent features reduces
the computational cost of the model and the need for extensive hyperparameter
tuning for improved efficiency of the model for deployment. Statistical tests
show that these improvements are significant, and thus, the practical relevance
of integrating latent space representation with traditional classifiers for
effective malware detection in cybersecurity is established.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 14:44:55 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ajayi",
"Bamidele",
""
],
[
"Barakat",
"Basel",
""
],
[
"McGarry",
"Ken",
""
]
] | TITLE: Leveraging VAE-Derived Latent Spaces for Enhanced Malware Detection with
Machine Learning Classifiers
ABSTRACT: This paper assesses the performance of five machine learning classifiers:
Decision Tree, Naive Bayes, LightGBM, Logistic Regression, and Random Forest
using latent representations learned by a Variational Autoencoder from malware
datasets. Results from the experiments conducted on different training-test
splits with different random seeds reveal that all the models perform well in
detecting malware with ensemble methods (LightGBM and Random Forest) performing
slightly better than the rest. In addition, the use of latent features reduces
the computational cost of the model and the need for extensive hyperparameter
tuning for improved efficiency of the model for deployment. Statistical tests
show that these improvements are significant, and thus, the practical relevance
of integrating latent space representation with traditional classifiers for
effective malware detection in cybersecurity is established.
|
2503.20807 | Pin-Yu Chen | Pin-Yu Chen and Han Shen and Payel Das and Tianyi Chen | Fundamental Safety-Capability Trade-offs in Fine-tuning Large Language
Models | The first two authors contribute equally to this work and are listed
in alphabetical order | null | null | null | stat.ML cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-tuning Large Language Models (LLMs) on some task-specific datasets has
been a primary use of LLMs. However, it has been empirically observed that this
approach to enhancing capability inevitably compromises safety, a phenomenon
also known as the safety-capability trade-off in LLM fine-tuning. This paper
presents a theoretical framework for understanding the interplay between safety
and capability in two primary safety-aware LLM fine-tuning strategies,
providing new insights into the effects of data similarity, context overlap,
and alignment loss landscape. Our theoretical results characterize the
fundamental limits of the safety-capability trade-off in LLM fine-tuning, which
are also validated by numerical experiments.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 20:41:57 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Chen",
"Pin-Yu",
""
],
[
"Shen",
"Han",
""
],
[
"Das",
"Payel",
""
],
[
"Chen",
"Tianyi",
""
]
] | TITLE: Fundamental Safety-Capability Trade-offs in Fine-tuning Large Language
Models
ABSTRACT: Fine-tuning Large Language Models (LLMs) on some task-specific datasets has
been a primary use of LLMs. However, it has been empirically observed that this
approach to enhancing capability inevitably compromises safety, a phenomenon
also known as the safety-capability trade-off in LLM fine-tuning. This paper
presents a theoretical framework for understanding the interplay between safety
and capability in two primary safety-aware LLM fine-tuning strategies,
providing new insights into the effects of data similarity, context overlap,
and alignment loss landscape. Our theoretical results characterize the
fundamental limits of the safety-capability trade-off in LLM fine-tuning, which
are also validated by numerical experiments.
|
2503.20808 | Xiaoming Qi | Xiaoming Qi and Jingyang Zhang and Huazhu Fu and Guanyu Yang and Shuo
Li and Yueming Jin | Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for
Federated Continual Learning | null | Information Processing in Medical Imaging(IPMI)2025 | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Federated continual learning (FCL) offers an emerging pattern to facilitate
the applicability of federated learning (FL) in real-world scenarios, where
tasks evolve dynamically and asynchronously across clients, especially in
medical scenarios. Existing server-side FCL methods in the natural domain construct a
continually learnable server model by client aggregation on all-involved tasks.
However, they are challenged by: (1) Catastrophic forgetting for previously
learned tasks, leading to error accumulation in the server model, making it
difficult to sustain comprehensive knowledge across all tasks. (2) Biased
optimization due to asynchronous tasks handled across different clients,
leading to the collision of optimization targets of different clients at the
same time steps. In this work, we take the first step to propose a novel
server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with
adaptive model recalibration (FedDAH). It facilitates collaborative
learning under the distinct and dynamic task streams across clients. To
alleviate the catastrophic forgetting, we propose a dynamic allocation
hypernetwork (DAHyper) where a continually updated hypernetwork is designed to
manage the mapping between task identities and their associated model
parameters, enabling the dynamic allocation of the model across clients. For
the biased optimization, we introduce a novel adaptive model recalibration
(AMR) to incorporate the candidate changes of historical models into current
server updates, and assign weights to identical tasks across different time
steps based on the similarity for continual optimization. Extensive experiments
on the AMOS dataset demonstrate the superiority of our FedDAH to other FCL
methods on sites with different task streams. The code is
available:https://github.com/jinlab-imvr/FedDAH.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 00:17:47 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Qi",
"Xiaoming",
""
],
[
"Zhang",
"Jingyang",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Yang",
"Guanyu",
""
],
[
"Li",
"Shuo",
""
],
[
"Jin",
"Yueming",
""
]
] | TITLE: Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for
Federated Continual Learning
ABSTRACT: Federated continual learning (FCL) offers an emerging pattern to facilitate
the applicability of federated learning (FL) in real-world scenarios, where
tasks evolve dynamically and asynchronously across clients, especially in
medical scenarios. Existing server-side FCL methods in the natural domain construct a
continually learnable server model by client aggregation on all-involved tasks.
However, they are challenged by: (1) Catastrophic forgetting for previously
learned tasks, leading to error accumulation in the server model, making it
difficult to sustain comprehensive knowledge across all tasks. (2) Biased
optimization due to asynchronous tasks handled across different clients,
leading to the collision of optimization targets of different clients at the
same time steps. In this work, we take the first step to propose a novel
server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with
adaptive model recalibration (FedDAH). It facilitates collaborative
learning under the distinct and dynamic task streams across clients. To
alleviate the catastrophic forgetting, we propose a dynamic allocation
hypernetwork (DAHyper) where a continually updated hypernetwork is designed to
manage the mapping between task identities and their associated model
parameters, enabling the dynamic allocation of the model across clients. For
the biased optimization, we introduce a novel adaptive model recalibration
(AMR) to incorporate the candidate changes of historical models into current
server updates, and assign weights to identical tasks across different time
steps based on the similarity for continual optimization. Extensive experiments
on the AMOS dataset demonstrate the superiority of our FedDAH to other FCL
methods on sites with different task streams. The code is
available: https://github.com/jinlab-imvr/FedDAH.
|
2503.20824 | Syed Hesham | Syed Ariff Syed Hesham, Yun Liu, Guolei Sun, Henghui Ding, Jing Yang,
Ender Konukoglu, Xue Geng, Xudong Jiang | Exploiting Temporal State Space Sharing for Video Semantic Segmentation | IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025 | null | null | null | eess.IV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video semantic segmentation (VSS) plays a vital role in understanding the
temporal evolution of scenes. Traditional methods often segment videos
frame-by-frame or in a short temporal window, leading to limited temporal
context, redundant computations, and heavy memory requirements. To this end, we
introduce a Temporal Video State Space Sharing (TV3S) architecture to leverage
Mamba state space models for temporal feature sharing. Our model features a
selective gating mechanism that efficiently propagates relevant information
across video frames, eliminating the need for a memory-heavy feature pool. By
processing spatial patches independently and incorporating a shifted operation,
TV3S supports highly parallel computation in both training and inference
stages, which reduces the delay in sequential state space processing and
improves the scalability for long video sequences. Moreover, TV3S incorporates
information from prior frames during inference, achieving long-range temporal
coherence and superior adaptability to extended sequences. Evaluations on the
VSPW and Cityscapes datasets reveal that our approach outperforms current
state-of-the-art methods, establishing a new standard for VSS with consistent
results across long video sequences. By achieving a good balance between
accuracy and efficiency, TV3S shows a significant advancement in spatiotemporal
modeling, paving the way for efficient video analysis. The code is publicly
available at https://github.com/Ashesham/TV3S.git.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 01:47:42 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Hesham",
"Syed Ariff Syed",
""
],
[
"Liu",
"Yun",
""
],
[
"Sun",
"Guolei",
""
],
[
"Ding",
"Henghui",
""
],
[
"Yang",
"Jing",
""
],
[
"Konukoglu",
"Ender",
""
],
[
"Geng",
"Xue",
""
],
[
"Jiang",
"Xudong",
""
]
] | TITLE: Exploiting Temporal State Space Sharing for Video Semantic Segmentation
ABSTRACT: Video semantic segmentation (VSS) plays a vital role in understanding the
temporal evolution of scenes. Traditional methods often segment videos
frame-by-frame or in a short temporal window, leading to limited temporal
context, redundant computations, and heavy memory requirements. To this end, we
introduce a Temporal Video State Space Sharing (TV3S) architecture to leverage
Mamba state space models for temporal feature sharing. Our model features a
selective gating mechanism that efficiently propagates relevant information
across video frames, eliminating the need for a memory-heavy feature pool. By
processing spatial patches independently and incorporating a shifted operation,
TV3S supports highly parallel computation in both training and inference
stages, which reduces the delay in sequential state space processing and
improves the scalability for long video sequences. Moreover, TV3S incorporates
information from prior frames during inference, achieving long-range temporal
coherence and superior adaptability to extended sequences. Evaluations on the
VSPW and Cityscapes datasets reveal that our approach outperforms current
state-of-the-art methods, establishing a new standard for VSS with consistent
results across long video sequences. By achieving a good balance between
accuracy and efficiency, TV3S shows a significant advancement in spatiotemporal
modeling, paving the way for efficient video analysis. The code is publicly
available at https://github.com/Ashesham/TV3S.git.
|
2503.20835 | Qichen Sun | Qichen Sun, Yuxing Lu, Kun Xia, Li Chen, He Sun, Jinzhuo Wang | Comprehensive Manuscript Assessment with Text Summarization Using 69707
articles | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rapid and efficient assessment of the future impact of research articles is a
significant concern for both authors and reviewers. The most common standard
for measuring the impact of academic papers is the number of citations. In
recent years, numerous efforts have been undertaken to predict citation counts
within various citation windows. However, most of these studies focus solely on
a specific academic field or require early citation counts for prediction,
rendering them impractical for the early-stage evaluation of papers. In this
work, we harness Scopus to curate a significantly comprehensive and large-scale
dataset of information from 69707 scientific articles sourced from 99 journals
spanning multiple disciplines. We propose a deep learning methodology for the
impact-based classification tasks, which leverages semantic features extracted
from the manuscripts and paper metadata. To summarize the semantic features,
such as titles and abstracts, we employ a Transformer-based language model to
encode semantic features and design a text fusion layer to capture shared
information between titles and abstracts. We specifically focus on the
following impact-based prediction tasks using information from scientific
manuscripts in the pre-publication stage: (1) The impact of journals in which the
manuscripts will be published. (2) The future impact of manuscripts themselves.
Extensive experiments on our datasets demonstrate the superiority of our
proposed model for impact-based prediction tasks. We also demonstrate its
potential for generating manuscript feedback and improvement suggestions.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:56:15 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Sun",
"Qichen",
""
],
[
"Lu",
"Yuxing",
""
],
[
"Xia",
"Kun",
""
],
[
"Chen",
"Li",
""
],
[
"Sun",
"He",
""
],
[
"Wang",
"Jinzhuo",
""
]
] | TITLE: Comprehensive Manuscript Assessment with Text Summarization Using 69707
articles
ABSTRACT: Rapid and efficient assessment of the future impact of research articles is a
significant concern for both authors and reviewers. The most common standard
for measuring the impact of academic papers is the number of citations. In
recent years, numerous efforts have been undertaken to predict citation counts
within various citation windows. However, most of these studies focus solely on
a specific academic field or require early citation counts for prediction,
rendering them impractical for the early-stage evaluation of papers. In this
work, we harness Scopus to curate a significantly comprehensive and large-scale
dataset of information from 69707 scientific articles sourced from 99 journals
spanning multiple disciplines. We propose a deep learning methodology for the
impact-based classification tasks, which leverages semantic features extracted
from the manuscripts and paper metadata. To summarize the semantic features,
such as titles and abstracts, we employ a Transformer-based language model to
encode semantic features and design a text fusion layer to capture shared
information between titles and abstracts. We specifically focus on the
following impact-based prediction tasks using information from scientific
manuscripts in the pre-publication stage: (1) The impact of journals in which the
manuscripts will be published. (2) The future impact of manuscripts themselves.
Extensive experiments on our datasets demonstrate the superiority of our
proposed model for impact-based prediction tasks. We also demonstrate its
potential for generating manuscript feedback and improvement suggestions.
|
2503.20846 | Viktor Schlegel | Viktor Schlegel, Anil A Bharath, Zilong Zhao, Kevin Yee | Generating Synthetic Data with Formal Privacy Guarantees: State of the
Art and the Road Ahead | 23 pages + references + Appendix. Preprint | null | null | null | cs.CR cs.CL cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Privacy-preserving synthetic data offers a promising solution to harness
segregated data in high-stakes domains where information is compartmentalized
for regulatory, privacy, or institutional reasons. This survey provides a
comprehensive framework for understanding the landscape of privacy-preserving
synthetic data, presenting the theoretical foundations of generative models and
differential privacy followed by a review of state-of-the-art methods across
tabular data, images, and text. Our synthesis of evaluation approaches
highlights the fundamental trade-off between utility for down-stream tasks and
privacy guarantees, while identifying critical research gaps: the lack of
realistic benchmarks representing specialized domains and insufficient
empirical evaluations required to contextualise formal guarantees.
Through empirical analysis of four leading methods on five real-world
datasets from specialized domains, we demonstrate significant performance
degradation under realistic privacy constraints ($\epsilon \leq 4$), revealing
a substantial gap between results reported on general domain benchmarks and
performance on domain-specific data. Our findings highlight key challenges
including unaccounted privacy leakage, insufficient empirical verification of
formal guarantees, and a critical deficit of realistic benchmarks. These
challenges underscore the need for robust evaluation frameworks, standardized
benchmarks for specialized domains, and improved techniques to address the
unique requirements of privacy-sensitive fields such that this technology can
deliver on its considerable potential.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:06:33 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Schlegel",
"Viktor",
""
],
[
"Bharath",
"Anil A",
""
],
[
"Zhao",
"Zilong",
""
],
[
"Yee",
"Kevin",
""
]
] | TITLE: Generating Synthetic Data with Formal Privacy Guarantees: State of the
Art and the Road Ahead
ABSTRACT: Privacy-preserving synthetic data offers a promising solution to harness
segregated data in high-stakes domains where information is compartmentalized
for regulatory, privacy, or institutional reasons. This survey provides a
comprehensive framework for understanding the landscape of privacy-preserving
synthetic data, presenting the theoretical foundations of generative models and
differential privacy followed by a review of state-of-the-art methods across
tabular data, images, and text. Our synthesis of evaluation approaches
highlights the fundamental trade-off between utility for down-stream tasks and
privacy guarantees, while identifying critical research gaps: the lack of
realistic benchmarks representing specialized domains and insufficient
empirical evaluations required to contextualise formal guarantees.
Through empirical analysis of four leading methods on five real-world
datasets from specialized domains, we demonstrate significant performance
degradation under realistic privacy constraints ($\epsilon \leq 4$), revealing
a substantial gap between results reported on general domain benchmarks and
performance on domain-specific data. Our findings highlight key challenges
including unaccounted privacy leakage, insufficient empirical verification of
formal guarantees, and a critical deficit of realistic benchmarks. These
challenges underscore the need for robust evaluation frameworks, standardized
benchmarks for specialized domains, and improved techniques to address the
unique requirements of privacy-sensitive fields such that this technology can
deliver on its considerable potential.
|
2503.20847 | Jim Achterberg | Jim Achterberg, Bram van Dijk, Saif ul Islam, Hafiz Muhammad Waseem,
Parisis Gallos, Gregory Epiphaniou, Carsten Maple, Marcel Haas, and Marco
Spruit | The Data Sharing Paradox of Synthetic Data in Healthcare | Accepted for publication at Medical Informatics Europe 2025
conference, Glasgow | null | null | null | cs.DB cs.CR cs.CY | http://creativecommons.org/licenses/by/4.0/ | Synthetic data offers a promising solution to privacy concerns in healthcare
by generating useful datasets in a privacy-aware manner. However, although
synthetic data is typically developed with the intention of sharing said data,
ambiguous reidentification risk assessments often prevent synthetic data from
seeing the light of day. One of the main causes is that privacy metrics for
synthetic data, which inform on reidentification risks, are not well-aligned
with practical requirements and regulations regarding data sharing in
healthcare. This article discusses the paradoxical situation where synthetic
data is designed for data sharing but is often still restricted. We also
discuss how the field should move forward to mitigate this issue.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:06:40 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Achterberg",
"Jim",
""
],
[
"van Dijk",
"Bram",
""
],
[
"Islam",
"Saif ul",
""
],
[
"Waseem",
"Hafiz Muhammad",
""
],
[
"Gallos",
"Parisis",
""
],
[
"Epiphaniou",
"Gregory",
""
],
[
"Maple",
"Carsten",
""
],
[
"Haas",
"Marcel",
""
],
[
"Spruit",
"Marco",
""
]
] | TITLE: The Data Sharing Paradox of Synthetic Data in Healthcare
ABSTRACT: Synthetic data offers a promising solution to privacy concerns in healthcare
by generating useful datasets in a privacy-aware manner. However, although
synthetic data is typically developed with the intention of sharing said data,
ambiguous reidentification risk assessments often prevent synthetic data from
seeing the light of day. One of the main causes is that privacy metrics for
synthetic data, which inform on reidentification risks, are not well-aligned
with practical requirements and regulations regarding data sharing in
healthcare. This article discusses the paradoxical situation where synthetic
data is designed for data sharing but is often still restricted. We also
discuss how the field should move forward to mitigate this issue.
|
2503.20850 | Qing Yao | Qing Yao, Kanishka Misra, Leonie Weissweiler, Kyle Mahowald | Both Direct and Indirect Evidence Contribute to Dative Alternation
Preferences in Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Language models (LMs) tend to show human-like preferences on a number of
syntactic phenomena, but the extent to which these are attributable to direct
exposure to the phenomena or more general properties of language is unclear. We
explore this with the English dative alternation (DO: "gave Y the X" vs. PO:
"gave the X to Y"), using a controlled rearing paradigm wherein we iteratively
train small LMs on systematically manipulated input. We focus on properties
that affect the choice of alternant: length and animacy. Both properties are
directly present in datives but also reflect more global tendencies for shorter
elements to precede longer ones and animates to precede inanimates. First, by
manipulating and ablating datives for these biases in the input, we show that
direct evidence of length and animacy matters, but easy-first preferences
persist even without such evidence. Then, using LMs trained on systematically
perturbed datasets to manipulate global length effects (re-linearizing
sentences globally while preserving dependency structure), we find that dative
preferences can emerge from indirect evidence. We conclude that LMs' emergent
syntactic preferences come from a mix of direct and indirect sources.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:32:41 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yao",
"Qing",
""
],
[
"Misra",
"Kanishka",
""
],
[
"Weissweiler",
"Leonie",
""
],
[
"Mahowald",
"Kyle",
""
]
] | TITLE: Both Direct and Indirect Evidence Contribute to Dative Alternation
Preferences in Language Models
ABSTRACT: Language models (LMs) tend to show human-like preferences on a number of
syntactic phenomena, but the extent to which these are attributable to direct
exposure to the phenomena or more general properties of language is unclear. We
explore this with the English dative alternation (DO: "gave Y the X" vs. PO:
"gave the X to Y"), using a controlled rearing paradigm wherein we iteratively
train small LMs on systematically manipulated input. We focus on properties
that affect the choice of alternant: length and animacy. Both properties are
directly present in datives but also reflect more global tendencies for shorter
elements to precede longer ones and animates to precede inanimates. First, by
manipulating and ablating datives for these biases in the input, we show that
direct evidence of length and animacy matters, but easy-first preferences
persist even without such evidence. Then, using LMs trained on systematically
perturbed datasets to manipulate global length effects (re-linearizing
sentences globally while preserving dependency structure), we find that dative
preferences can emerge from indirect evidence. We conclude that LMs' emergent
syntactic preferences come from a mix of direct and indirect sources.
|
2503.20884 | Usama Zafar | Usama Zafar, Andr\'e Teixeira, Salman Toor | Robust Federated Learning Against Poisoning Attacks: A GAN-Based Defense
Framework | null | null | null | null | cs.CR cs.AI cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Federated Learning (FL) enables collaborative model training across
decentralized devices without sharing raw data, but it remains vulnerable to
poisoning attacks that compromise model integrity. Existing defenses often rely
on external datasets or predefined heuristics (e.g. number of malicious
clients), limiting their effectiveness and scalability. To address these
limitations, we propose a privacy-preserving defense framework that leverages a
Conditional Generative Adversarial Network (cGAN) to generate synthetic data at
the server for authenticating client updates, eliminating the need for external
datasets. Our framework is scalable, adaptive, and seamlessly integrates into
FL workflows. Extensive experiments on benchmark datasets demonstrate its
robust performance against a variety of poisoning attacks, achieving high True
Positive Rate (TPR) and True Negative Rate (TNR) of malicious and benign
clients, respectively, while maintaining model accuracy. The proposed framework
offers a practical and effective solution for securing federated learning
systems.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 18:00:56 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zafar",
"Usama",
""
],
[
"Teixeira",
"André",
""
],
[
"Toor",
"Salman",
""
]
] | TITLE: Robust Federated Learning Against Poisoning Attacks: A GAN-Based Defense
Framework
ABSTRACT: Federated Learning (FL) enables collaborative model training across
decentralized devices without sharing raw data, but it remains vulnerable to
poisoning attacks that compromise model integrity. Existing defenses often rely
on external datasets or predefined heuristics (e.g. number of malicious
clients), limiting their effectiveness and scalability. To address these
limitations, we propose a privacy-preserving defense framework that leverages a
Conditional Generative Adversarial Network (cGAN) to generate synthetic data at
the server for authenticating client updates, eliminating the need for external
datasets. Our framework is scalable, adaptive, and seamlessly integrates into
FL workflows. Extensive experiments on benchmark datasets demonstrate its
robust performance against a variety of poisoning attacks, achieving high True
Positive Rate (TPR) and True Negative Rate (TNR) of malicious and benign
clients, respectively, while maintaining model accuracy. The proposed framework
offers a practical and effective solution for securing federated learning
systems.
|
2503.20929 | Dawon Ahn | Dawon Ahn, Evangelos E. Papalexakis | Global and Local Structure Learning for Sparse Tensor Completion | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | How can we accurately complete tensors by learning relationships of
dimensions along each mode? Tensor completion, a widely studied problem, is to
predict missing entries in incomplete tensors. Tensor decomposition methods,
fundamental tensor analysis tools, have been actively developed to solve tensor
completion tasks. However, standard tensor decomposition models have not been
designed to learn relationships of dimensions along each mode, which limits
accurate tensor completion. Also, previously developed tensor decomposition
models have required prior knowledge of relations within dimensions to model
them, which is expensive to obtain.
This paper proposes TGL (Tensor Decomposition Learning Global and Local
Structures) to accurately predict missing entries in tensors. TGL reconstructs
a tensor with factor matrices that learn local structures with a GNN without
prior knowledge. Extensive experiments are conducted to evaluate TGL against
baselines on multiple datasets.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 19:02:04 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ahn",
"Dawon",
""
],
[
"Papalexakis",
"Evangelos E.",
""
]
] | TITLE: Global and Local Structure Learning for Sparse Tensor Completion
ABSTRACT: How can we accurately complete tensors by learning relationships of
dimensions along each mode? Tensor completion, a widely studied problem, is to
predict missing entries in incomplete tensors. Tensor decomposition methods,
fundamental tensor analysis tools, have been actively developed to solve tensor
completion tasks. However, standard tensor decomposition models have not been
designed to learn relationships of dimensions along each mode, which limits
accurate tensor completion. Also, previously developed tensor decomposition
models have required prior knowledge of relations within dimensions to model
them, which is expensive to obtain.
This paper proposes TGL (Tensor Decomposition Learning Global and Local
Structures) to accurately predict missing entries in tensors. TGL reconstructs
a tensor with factor matrices that learn local structures with a GNN without
prior knowledge. Extensive experiments are conducted to evaluate TGL against
baselines on multiple datasets.
|
2503.20936 | Dvij Kalaria | Daniel Etaat, Dvij Kalaria, Nima Rahmanian, Shankar Sastry | LATTE-MV: Learning to Anticipate Table Tennis Hits from Monocular Videos | CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Physical agility is a necessary skill in competitive table tennis, but by no
means sufficient. Champions excel in this fast-paced and highly dynamic
environment by anticipating their opponent's intent - buying themselves the
necessary time to react. In this work, we take one step towards designing such
an anticipatory agent. Previous works have developed systems capable of
real-time table tennis gameplay, though they often do not leverage
anticipation. Among the works that forecast opponent actions, their approaches
are limited by dataset size and variety. Our paper contributes (1) a scalable
system for reconstructing monocular video of table tennis matches in 3D and (2)
an uncertainty-aware controller that anticipates opponent actions. We
demonstrate in simulation that our policy improves the ball return rate against
high-speed hits from 49.9% to 59.0% as compared to a baseline non-anticipatory
policy.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 19:11:22 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Etaat",
"Daniel",
""
],
[
"Kalaria",
"Dvij",
""
],
[
"Rahmanian",
"Nima",
""
],
[
"Sastry",
"Shankar",
""
]
] | TITLE: LATTE-MV: Learning to Anticipate Table Tennis Hits from Monocular Videos
ABSTRACT: Physical agility is a necessary skill in competitive table tennis, but by no
means sufficient. Champions excel in this fast-paced and highly dynamic
environment by anticipating their opponent's intent - buying themselves the
necessary time to react. In this work, we take one step towards designing such
an anticipatory agent. Previous works have developed systems capable of
real-time table tennis gameplay, though they often do not leverage
anticipation. Among the works that forecast opponent actions, their approaches
are limited by dataset size and variety. Our paper contributes (1) a scalable
system for reconstructing monocular video of table tennis matches in 3D and (2)
an uncertainty-aware controller that anticipates opponent actions. We
demonstrate in simulation that our policy improves the ball return rate against
high-speed hits from 49.9% to 59.0% as compared to a baseline non-anticipatory
policy.
|
2503.20952 | Caspar Meijer | Caspar Meijer, Jiyue Huang, Shreshtha Sharma, Elena Lazovik, Lydia Y.
Chen | TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time
Series Forecasting Models | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Federated learning (FL) for time series forecasting (TSF) enables clients
with privacy-sensitive time series (TS) data to collaboratively learn accurate
forecasting models, for example, in energy load prediction. Unfortunately,
privacy risks in FL persist, as servers can potentially reconstruct clients'
training data through gradient inversion attacks (GIA). Although GIA is
demonstrated for image classification tasks, little is known about time series
regression tasks. In this paper, we first conduct an extensive empirical study
on inverting TS data across 4 TSF models and 4 datasets, identifying the unique
challenges of reconstructing both observations and targets of TS data. We then
propose TS-Inverse, a novel GIA that improves the inversion of TS data by (i)
learning a gradient inversion model that outputs quantile predictions, (ii) a
unique loss function that incorporates periodicity and trend regularization,
and (iii) regularization according to the quantile predictions. Our evaluations
demonstrate a remarkable performance of TS-Inverse, achieving at least a 2x-10x
improvement in terms of the sMAPE metric over existing GIA methods on TS data.
Code repository: https://github.com/Capsar/ts-inverse
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 19:35:49 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Meijer",
"Caspar",
""
],
[
"Huang",
"Jiyue",
""
],
[
"Sharma",
"Shreshtha",
""
],
[
"Lazovik",
"Elena",
""
],
[
"Chen",
"Lydia Y.",
""
]
] | TITLE: TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time
Series Forecasting Models
ABSTRACT: Federated learning (FL) for time series forecasting (TSF) enables clients
with privacy-sensitive time series (TS) data to collaboratively learn accurate
forecasting models, for example, in energy load prediction. Unfortunately,
privacy risks in FL persist, as servers can potentially reconstruct clients'
training data through gradient inversion attacks (GIA). Although GIA is
demonstrated for image classification tasks, little is known about time series
regression tasks. In this paper, we first conduct an extensive empirical study
on inverting TS data across 4 TSF models and 4 datasets, identifying the unique
challenges of reconstructing both observations and targets of TS data. We then
propose TS-Inverse, a novel GIA that improves the inversion of TS data by (i)
learning a gradient inversion model that outputs quantile predictions, (ii) a
unique loss function that incorporates periodicity and trend regularization,
and (iii) regularization according to the quantile predictions. Our evaluations
demonstrate a remarkable performance of TS-Inverse, achieving at least a 2x-10x
improvement in terms of the sMAPE metric over existing GIA methods on TS data.
Code repository: https://github.com/Capsar/ts-inverse
|
2503.20978 | Yiqiao Jin | Yiqiao Jin, Stefano Petrangeli, Yu Shen, Gang Wu | ScreenLLM: Stateful Screen Schema for Efficient Action Understanding and
Prediction | Accepted to MM4SG Workshop at The Web Conference 2025 | null | 10.1145/3701716.3718379 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Graphical User Interface (GUI) agents are autonomous systems that interpret
and generate actions, enabling intelligent user assistance and automation.
Effective training of these agent presents unique challenges, such as sparsity
in supervision signals, scalability for large datasets, and the need for
nuanced user understanding. We propose stateful screen schema, an efficient
representation of GUI interactions that captures key user actions and
intentions over time. Building on this foundation, we introduce ScreenLLM, a
set of multimodal large language models (MLLMs) tailored for advanced UI
understanding and action prediction. Extensive experiments on both open-source
and proprietary models show that ScreenLLM accurately models user behavior and
predicts actions. Our work lays the foundation for scalable, robust, and
intelligent GUI agents that enhance user interaction in diverse software
environments.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 20:41:24 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Jin",
"Yiqiao",
""
],
[
"Petrangeli",
"Stefano",
""
],
[
"Shen",
"Yu",
""
],
[
"Wu",
"Gang",
""
]
] | TITLE: ScreenLLM: Stateful Screen Schema for Efficient Action Understanding and
Prediction
ABSTRACT: Graphical User Interface (GUI) agents are autonomous systems that interpret
and generate actions, enabling intelligent user assistance and automation.
Effective training of these agent presents unique challenges, such as sparsity
in supervision signals, scalability for large datasets, and the need for
nuanced user understanding. We propose stateful screen schema, an efficient
representation of GUI interactions that captures key user actions and
intentions over time. Building on this foundation, we introduce ScreenLLM, a
set of multimodal large language models (MLLMs) tailored for advanced UI
understanding and action prediction. Extensive experiments on both open-source
and proprietary models show that ScreenLLM accurately models user behavior and
predicts actions. Our work lays the foundation for scalable, robust, and
intelligent GUI agents that enhance user interaction in diverse software
environments.
|
2503.20989 | Gabriel Agostini | Gabriel Agostini, Rachel Young, Maria Fitzpatrick, Nikhil Garg, Emma
Pierson | Inferring fine-grained migration patterns across the United States | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Fine-grained migration data illuminate important demographic, environmental,
and health phenomena. However, migration datasets within the United States
remain lacking: publicly available Census data are neither spatially nor
temporally granular, and proprietary data have higher resolution but
demographic and other biases. To address these limitations, we develop a
scalable iterative-proportional-fitting based method which reconciles
high-resolution but biased proprietary data with low-resolution but more
reliable Census data. We apply this method to produce MIGRATE, a dataset of
annual migration matrices from 2010 - 2019 which captures flows between 47.4
billion pairs of Census Block Groups -- about four thousand times more granular
than publicly available data. These estimates are highly correlated with
external ground-truth datasets, and improve accuracy and reduce bias relative
to raw proprietary data. We publicly release MIGRATE estimates and provide a
case study illustrating how they reveal granular patterns of migration in
response to California wildfires.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:07:44 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Agostini",
"Gabriel",
""
],
[
"Young",
"Rachel",
""
],
[
"Fitzpatrick",
"Maria",
""
],
[
"Garg",
"Nikhil",
""
],
[
"Pierson",
"Emma",
""
]
] | TITLE: Inferring fine-grained migration patterns across the United States
ABSTRACT: Fine-grained migration data illuminate important demographic, environmental,
and health phenomena. However, migration datasets within the United States
remain lacking: publicly available Census data are neither spatially nor
temporally granular, and proprietary data have higher resolution but
demographic and other biases. To address these limitations, we develop a
scalable iterative-proportional-fitting based method which reconciles
high-resolution but biased proprietary data with low-resolution but more
reliable Census data. We apply this method to produce MIGRATE, a dataset of
annual migration matrices from 2010 - 2019 which captures flows between 47.4
billion pairs of Census Block Groups -- about four thousand times more granular
than publicly available data. These estimates are highly correlated with
external ground-truth datasets, and improve accuracy and reduce bias relative
to raw proprietary data. We publicly release MIGRATE estimates and provide a
case study illustrating how they reveal granular patterns of migration in
response to California wildfires.
|
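The MIGRATE record above centers on iterative proportional fitting (IPF). Below is a minimal numpy sketch of plain IPF on a toy flow matrix, assuming the proprietary data supplies the seed matrix and the Census data supplies the marginal totals; the numbers are made up and the paper's actual procedure is more elaborate.

```python
import numpy as np


def ipf(seed, row_targets, col_targets, n_iter=100, tol=1e-9):
    """Rescale `seed` so its row/column sums match the targets
    (classic iterative proportional fitting)."""
    x = seed.astype(float).copy()
    for _ in range(n_iter):
        # match row sums
        r = row_targets / np.maximum(x.sum(axis=1), 1e-12)
        x *= r[:, None]
        # match column sums
        c = col_targets / np.maximum(x.sum(axis=0), 1e-12)
        x *= c[None, :]
        if (np.abs(x.sum(axis=1) - row_targets).max() < tol and
                np.abs(x.sum(axis=0) - col_targets).max() < tol):
            break
    return x


if __name__ == "__main__":
    seed = np.array([[40., 10.], [20., 30.]])   # biased high-resolution flows
    rows = np.array([60., 40.])                 # reliable origin totals
    cols = np.array([55., 45.])                 # reliable destination totals
    fitted = ipf(seed, rows, cols)
    print(fitted.round(2), fitted.sum(axis=0), fitted.sum(axis=1))
```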
2503.20990 | Yupeng Cao | Yupeng Cao, Haohang Li, Yangyang Yu, Shashidhar Reddy Javaji, Yueru
He, Jimin Huang, Zining Zhu, Qianqian Xie, Xiao-yang Liu, Koduvayur
Subbalakshmi, Meikang Qiu, Sophia Ananiadou, Jian-Yun Nie | FinAudio: A Benchmark for Audio Large Language Models in Financial
Applications | null | null | null | null | cs.CE cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio Large Language Models (AudioLLMs) have received widespread attention
and have significantly improved performance on audio tasks such as
conversation, audio understanding, and automatic speech recognition (ASR).
Despite these advancements, there is an absence of a benchmark for assessing
AudioLLMs in financial scenarios, where audio data, such as earnings conference
calls and CEO speeches, are crucial resources for financial analysis and
investment decisions. In this paper, we introduce \textsc{FinAudio}, the first
benchmark designed to evaluate the capacity of AudioLLMs in the financial
domain. We first define three tasks based on the unique characteristics of the
financial domain: 1) ASR for short financial audio, 2) ASR for long financial
audio, and 3) summarization of long financial audio. Then, we curate two short
and two long audio datasets, respectively, and develop a novel dataset for
financial audio summarization, comprising the \textsc{FinAudio} benchmark.
Finally, we evaluate seven prevalent AudioLLMs on \textsc{FinAudio}. Our
evaluation reveals the limitations of existing AudioLLMs in the financial
domain and offers insights for improving AudioLLMs. All datasets and codes will
be released.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:07:51 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Cao",
"Yupeng",
""
],
[
"Li",
"Haohang",
""
],
[
"Yu",
"Yangyang",
""
],
[
"Javaji",
"Shashidhar Reddy",
""
],
[
"He",
"Yueru",
""
],
[
"Huang",
"Jimin",
""
],
[
"Zhu",
"Zining",
""
],
[
"Xie",
"Qianqian",
""
],
[
"Liu",
"Xiao-yang",
""
],
[
"Subbalakshmi",
"Koduvayur",
""
],
[
"Qiu",
"Meikang",
""
],
[
"Ananiadou",
"Sophia",
""
],
[
"Nie",
"Jian-Yun",
""
]
] | TITLE: FinAudio: A Benchmark for Audio Large Language Models in Financial
Applications
ABSTRACT: Audio Large Language Models (AudioLLMs) have received widespread attention
and have significantly improved performance on audio tasks such as
conversation, audio understanding, and automatic speech recognition (ASR).
Despite these advancements, there is an absence of a benchmark for assessing
AudioLLMs in financial scenarios, where audio data, such as earnings conference
calls and CEO speeches, are crucial resources for financial analysis and
investment decisions. In this paper, we introduce \textsc{FinAudio}, the first
benchmark designed to evaluate the capacity of AudioLLMs in the financial
domain. We first define three tasks based on the unique characteristics of the
financial domain: 1) ASR for short financial audio, 2) ASR for long financial
audio, and 3) summarization of long financial audio. Then, we curate two short
and two long audio datasets, respectively, and develop a novel dataset for
financial audio summarization, comprising the \textsc{FinAudio} benchmark.
Finally, we evaluate seven prevalent AudioLLMs on \textsc{FinAudio}. Our
evaluation reveals the limitations of existing AudioLLMs in the financial
domain and offers insights for improving AudioLLMs. All datasets and codes will
be released.
|
2503.20994 | Cole Patten | Cole Patten, Christopher Saunders, Michael Puthawala | Deep Learning for Forensic Identification of Source | null | null | null | null | cs.LG stat.AP stat.ML | http://creativecommons.org/licenses/by/4.0/ | We used contrastive neural networks to learn useful similarity scores between
the 144 cartridge casings in the NBIDE dataset, under the common-but-unknown
source paradigm. The common-but-unknown source problem is a problem archetype
in forensics where the question is whether two objects share a common source
(e.g. were two cartridge casings fired from the same firearm). Similarity
scores are often used to interpret evidence under this paradigm. We directly
compared our results to a state-of-the-art algorithm, Congruent Matching Cells
(CMC). When trained on the E3 dataset of 2967 cartridge casings, contrastive
learning achieved an ROC AUC of 0.892. The CMC algorithm achieved 0.867. We
also conducted an ablation study where we varied the neural network
architecture; specifically, the network's width or depth. The ablation study
showed that contrastive network performance results are somewhat robust to the
network architecture. This work was in part motivated by the use of similarity
scores attained via contrastive learning for standard evidence interpretation
methods such as score-based likelihood ratios.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:13:08 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Patten",
"Cole",
""
],
[
"Saunders",
"Christopher",
""
],
[
"Puthawala",
"Michael",
""
]
] | TITLE: Deep Learning for Forensic Identification of Source
ABSTRACT: We used contrastive neural networks to learn useful similarity scores between
the 144 cartridge casings in the NBIDE dataset, under the common-but-unknown
source paradigm. The common-but-unknown source problem is a problem archetype
in forensics where the question is whether two objects share a common source
(e.g. were two cartridge casings fired from the same firearm). Similarity
scores are often used to interpret evidence under this paradigm. We directly
compared our results to a state-of-the-art algorithm, Congruent Matching Cells
(CMC). When trained on the E3 dataset of 2967 cartridge casings, contrastive
learning achieved an ROC AUC of 0.892. The CMC algorithm achieved 0.867. We
also conducted an ablation study where we varied the neural network
architecture; specifically, the network's width or depth. The ablation study
showed that contrastive network performance results are somewhat robust to the
network architecture. This work was in part motivated by the use of similarity
scores attained via contrastive learning for standard evidence interpretation
methods such as score-based likelihood ratios.
|
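For the forensic-identification record above, the evaluation reduces to scoring casing pairs by learned similarity and summarizing with ROC AUC. The sketch below uses random stand-in embeddings rather than features learned from NBIDE, a cosine similarity score, and a rank-based AUC; it illustrates the scoring and evaluation, not the contrastive training itself.

```python
import numpy as np


def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def roc_auc(pos_scores, neg_scores):
    """Rank-based AUC: P(score_pos > score_neg), ties counted as 0.5."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return float(((pos > neg).sum() + 0.5 * (pos == neg).sum())
                 / (pos.size * neg.size))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in embeddings: 10 "firearms", 4 casings each, 8-dim features.
    sources = [rng.normal(size=8) for _ in range(10)]
    casings = [(i, mu + 0.3 * rng.normal(size=8))
               for i, mu in enumerate(sources) for _ in range(4)]
    same, diff = [], []
    for a in range(len(casings)):
        for b in range(a + 1, len(casings)):
            s = cosine(casings[a][1], casings[b][1])
            (same if casings[a][0] == casings[b][0] else diff).append(s)
    print("toy ROC AUC:", round(roc_auc(same, diff), 3))
```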
2503.20995 | Xiaomin Li | Xiaomin Li, Xupeng Chen, Jingxuan Fan, Eric Hanchen Jiang, Mingye Gao | Multi-head Reward Aggregation Guided by Entropy | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aligning large language models (LLMs) with safety guidelines typically
involves reinforcement learning from human feedback (RLHF), relying on
human-generated preference annotations. However, assigning consistent overall
quality ratings is challenging, prompting recent research to shift towards
detailed evaluations based on multiple specific safety criteria. This paper
uncovers a consistent observation: safety rules characterized by high rating
entropy are generally less reliable in identifying responses preferred by
humans. Leveraging this finding, we introduce ENCORE, a straightforward
entropy-guided approach that composes multi-head rewards by downweighting rules
exhibiting high rating entropy. Theoretically, we demonstrate that rules with
elevated entropy naturally receive minimal weighting in the Bradley-Terry
optimization framework, justifying our entropy-based penalization. Through
extensive experiments on RewardBench safety tasks, our method significantly
surpasses several competitive baselines, including random weighting, uniform
weighting, single-head Bradley-Terry models, and LLM-based judging methods. Our
proposed approach is training-free, broadly applicable to various datasets, and
maintains interpretability, offering a practical and effective solution for
multi-attribute reward modeling.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:16:48 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Li",
"Xiaomin",
""
],
[
"Chen",
"Xupeng",
""
],
[
"Fan",
"Jingxuan",
""
],
[
"Jiang",
"Eric Hanchen",
""
],
[
"Gao",
"Mingye",
""
]
] | TITLE: Multi-head Reward Aggregation Guided by Entropy
ABSTRACT: Aligning large language models (LLMs) with safety guidelines typically
involves reinforcement learning from human feedback (RLHF), relying on
human-generated preference annotations. However, assigning consistent overall
quality ratings is challenging, prompting recent research to shift towards
detailed evaluations based on multiple specific safety criteria. This paper
uncovers a consistent observation: safety rules characterized by high rating
entropy are generally less reliable in identifying responses preferred by
humans. Leveraging this finding, we introduce ENCORE, a straightforward
entropy-guided approach that composes multi-head rewards by downweighting rules
exhibiting high rating entropy. Theoretically, we demonstrate that rules with
elevated entropy naturally receive minimal weighting in the Bradley-Terry
optimization framework, justifying our entropy-based penalization. Through
extensive experiments on RewardBench safety tasks, our method significantly
surpasses several competitive baselines, including random weighting, uniform
weighting, single-head Bradley-Terry models, and LLM-based judging methods. Our
proposed approach is training-free, broadly applicable to various datasets, and
maintains interpretability, offering a practical and effective solution for
multi-attribute reward modeling.
|
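For the ENCORE record above, the abstract only says that high-entropy safety rules are downweighted; the exact weighting scheme is not given there. The sketch below uses one plausible choice (weights proportional to inverse rating entropy, normalized over rule heads) on synthetic ratings.

```python
import numpy as np


def rating_entropy(ratings, n_levels):
    """Shannon entropy of a rule's empirical rating distribution."""
    counts = np.bincount(ratings, minlength=n_levels).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())


def entropy_guided_weights(ratings_per_rule, n_levels, eps=1e-6):
    """Downweight rules with high rating entropy (one possible scheme)."""
    ents = np.array([rating_entropy(r, n_levels) for r in ratings_per_rule])
    w = 1.0 / (ents + eps)
    return w / w.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Three safety rules, each rated 0-4 on 200 responses (synthetic).
    consistent = rng.choice(5, size=200, p=[0.02, 0.03, 0.05, 0.2, 0.7])
    noisy = rng.choice(5, size=200)                 # near-uniform ratings
    mixed = rng.choice(5, size=200, p=[0.1, 0.1, 0.2, 0.3, 0.3])
    w = entropy_guided_weights([consistent, noisy, mixed], n_levels=5)
    print("rule weights:", w.round(3))              # noisy rule gets least weight
    head_scores = np.array([0.8, 0.4, 0.6])         # per-rule reward for one response
    print("aggregated reward:", float(w @ head_scores))
```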
2503.20998 | Youngkyoon Jang | Youngkyoon Jang, Eduardo P\'erez-Pellitero | CoMapGS: Covisibility Map-based Gaussian Splatting for Sparse Novel View
Synthesis | Accepted to CVPR 2025, Mistakenly submitted as a replacement for
arXiv:2402.11057 | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Covisibility Map-based Gaussian Splatting (CoMapGS), designed to
recover underrepresented sparse regions in sparse novel view synthesis. CoMapGS
addresses both high- and low-uncertainty regions by constructing covisibility
maps, enhancing initial point clouds, and applying uncertainty-aware weighted
supervision using a proximity classifier. Our contributions are threefold: (1)
CoMapGS reframes novel view synthesis by leveraging covisibility maps as a core
component to address region-specific uncertainty; (2) Enhanced initial point
clouds for both low- and high-uncertainty regions compensate for sparse
COLMAP-derived point clouds, improving reconstruction quality and benefiting
few-shot 3DGS methods; (3) Adaptive supervision with covisibility-score-based
weighting and proximity classification achieves consistent performance gains
across scenes with varying sparsity scores derived from covisibility maps.
Experimental results demonstrate that CoMapGS outperforms state-of-the-art
methods on datasets including Mip-NeRF 360 and LLFF.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 12:05:25 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Jang",
"Youngkyoon",
""
],
[
"Pérez-Pellitero",
"Eduardo",
""
]
] | TITLE: CoMapGS: Covisibility Map-based Gaussian Splatting for Sparse Novel View
Synthesis
ABSTRACT: We propose Covisibility Map-based Gaussian Splatting (CoMapGS), designed to
recover underrepresented sparse regions in sparse novel view synthesis. CoMapGS
addresses both high- and low-uncertainty regions by constructing covisibility
maps, enhancing initial point clouds, and applying uncertainty-aware weighted
supervision using a proximity classifier. Our contributions are threefold: (1)
CoMapGS reframes novel view synthesis by leveraging covisibility maps as a core
component to address region-specific uncertainty; (2) Enhanced initial point
clouds for both low- and high-uncertainty regions compensate for sparse
COLMAP-derived point clouds, improving reconstruction quality and benefiting
few-shot 3DGS methods; (3) Adaptive supervision with covisibility-score-based
weighting and proximity classification achieves consistent performance gains
across scenes with varying sparsity scores derived from covisibility maps.
Experimental results demonstrate that CoMapGS outperforms state-of-the-art
methods on datasets including Mip-NeRF 360 and LLFF.
|
2503.21000 | Lynnette Hui Xian Ng | Lynnette Hui Xian Ng, Kokil Jaidka, Kaiyuan Tay, Hansin Ahuja, Niyati
Chhaya | Improving User Behavior Prediction: Leveraging Annotator Metadata in
Supervised Machine Learning Models | Accepted at CSCW 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Supervised machine-learning models often underperform in predicting user
behaviors from conversational text, hindered by poor crowdsourced label quality
and low NLP task accuracy. We introduce the Metadata-Sensitive
Weighted-Encoding Ensemble Model (MSWEEM), which integrates annotator
meta-features like fatigue and speeding. First, our results show MSWEEM
outperforms standard ensembles by 14\% on held-out data and 12\% on an
alternative dataset. Second, we find that incorporating signals of annotator
behavior, such as speed and fatigue, significantly boosts model performance.
Third, we find that annotators with higher qualifications, such as Master's,
deliver more consistent and faster annotations. Given the increasing
uncertainty over annotation quality, our experiments show that understanding
annotator patterns is crucial for enhancing model accuracy in user behavior
prediction.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:30:48 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ng",
"Lynnette Hui Xian",
""
],
[
"Jaidka",
"Kokil",
""
],
[
"Tay",
"Kaiyuan",
""
],
[
"Ahuja",
"Hansin",
""
],
[
"Chhaya",
"Niyati",
""
]
] | TITLE: Improving User Behavior Prediction: Leveraging Annotator Metadata in
Supervised Machine Learning Models
ABSTRACT: Supervised machine-learning models often underperform in predicting user
behaviors from conversational text, hindered by poor crowdsourced label quality
and low NLP task accuracy. We introduce the Metadata-Sensitive
Weighted-Encoding Ensemble Model (MSWEEM), which integrates annotator
meta-features like fatigue and speeding. First, our results show MSWEEM
outperforms standard ensembles by 14\% on held-out data and 12\% on an
alternative dataset. Second, we find that incorporating signals of annotator
behavior, such as speed and fatigue, significantly boosts model performance.
Third, we find that annotators with higher qualifications, such as Master's,
deliver more consistent and faster annotations. Given the increasing
uncertainty over annotation quality, our experiments show that understanding
annotator patterns is crucial for enhancing model accuracy in user behavior
prediction.
|
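For the MSWEEM record above, the abstract does not specify how annotator meta-features enter the model. The sketch below shows one simplistic reading, downweighting labels from speeding or fatigued annotators when aggregating crowdsourced labels; all annotators, weights, and noise rates are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic crowdsourcing setup: 50 items, 5 annotators, binary labels.
true_labels = rng.integers(0, 2, size=50)
speed = np.array([1.0, 1.1, 0.9, 3.5, 1.2])     # >2 treated as "speeding"
fatigue = np.array([0.1, 0.2, 0.1, 0.1, 0.9])   # fraction of session past a fatigue threshold
noise_rate = np.clip(0.05 + 0.15 * (speed > 2) + 0.3 * fatigue, 0, 0.6)
labels = np.array([[l if rng.random() > nr else 1 - l for nr in noise_rate]
                   for l in true_labels])        # item x annotator

# Metadata-derived annotator weights: penalize speeding and fatigue.
weights = 1.0 / (1.0 + 2.0 * (speed > 2) + 2.0 * fatigue)
weights /= weights.sum()

majority = (labels.mean(axis=1) > 0.5).astype(int)
weighted = ((labels * weights).sum(axis=1) > 0.5).astype(int)
print("plain majority accuracy:   ", (majority == true_labels).mean())
print("metadata-weighted accuracy:", (weighted == true_labels).mean())
```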
2503.21011 | Derek Powell | Ana Ma and Derek Powell | Can Large Language Models Predict Associations Among Human Attitudes? | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Prior work has shown that large language models (LLMs) can predict human
attitudes based on other attitudes, but this work has largely focused on
predictions from highly similar and interrelated attitudes. In contrast, human
attitudes are often strongly associated even across disparate and dissimilar
topics. Using a novel dataset of human responses toward diverse attitude
statements, we found that a frontier language model (GPT-4o) was able to
recreate the pairwise correlations among individual attitudes and to predict
individuals' attitudes from one another. Crucially, in an advance over prior
work, we tested GPT-4o's ability to predict in the absence of
surface-similarity between attitudes, finding that while surface similarity
improves prediction accuracy, the model was still highly-capable of generating
meaningful social inferences between dissimilar attitudes. Altogether, our
findings indicate that LLMs capture crucial aspects of the deeper, latent
structure of human belief systems.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 21:58:43 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ma",
"Ana",
""
],
[
"Powell",
"Derek",
""
]
] | TITLE: Can Large Language Models Predict Associations Among Human Attitudes?
ABSTRACT: Prior work has shown that large language models (LLMs) can predict human
attitudes based on other attitudes, but this work has largely focused on
predictions from highly similar and interrelated attitudes. In contrast, human
attitudes are often strongly associated even across disparate and dissimilar
topics. Using a novel dataset of human responses toward diverse attitude
statements, we found that a frontier language model (GPT-4o) was able to
recreate the pairwise correlations among individual attitudes and to predict
individuals' attitudes from one another. Crucially, in an advance over prior
work, we tested GPT-4o's ability to predict in the absence of
surface similarity between attitudes, finding that while surface similarity
improves prediction accuracy, the model was still highly capable of generating
meaningful social inferences between dissimilar attitudes. Altogether, our
findings indicate that LLMs capture crucial aspects of the deeper, latent
structure of human belief systems.
|
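The attitude-prediction record above hinges on recreating pairwise correlations among attitude items. Below is a small synthetic sketch of that check: build the item-item correlation matrix from actual responses and from predicted responses, then correlate their off-diagonal entries. The data here are simulated, not the paper's survey responses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 300 "respondents" x 6 attitude items on a 1-7 scale,
# with latent structure so some items correlate even across dissimilar topics.
latent = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 6))
actual = np.clip(np.rint(4 + latent @ loadings + 0.8 * rng.normal(size=(300, 6))), 1, 7)

# Pretend these are model-predicted responses (here: noisy copies of actual).
predicted = np.clip(actual + rng.choice([-1, 0, 0, 1], size=actual.shape), 1, 7)

corr_actual = np.corrcoef(actual, rowvar=False)
corr_pred = np.corrcoef(predicted, rowvar=False)

iu = np.triu_indices(6, k=1)  # off-diagonal item pairs
recovery = np.corrcoef(corr_actual[iu], corr_pred[iu])[0, 1]
print("correlation between actual and predicted item-pair correlations:",
      round(float(recovery), 3))
```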
2503.21023 | Thomson Yen | Thomson Yen, Andrew Wei Tung Siah, Haozhe Chen, Tianyi Peng, Daniel
Guetta, Hongseok Namkoong | Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian
Framework | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Careful curation of data sources can significantly improve the performance of
LLM pre-training, but predominant approaches rely heavily on intuition or
costly trial-and-error, making them difficult to generalize across different
data domains and downstream tasks. Although scaling laws can provide a
principled and general approach for data curation, standard deterministic
extrapolation from small-scale experiments to larger scales requires strong
assumptions on the reliability of such extrapolation, whose brittleness has
been highlighted in prior works. In this paper, we introduce a
$\textit{probabilistic extrapolation framework}$ for data mixture optimization
that avoids rigid assumptions and explicitly models the uncertainty in
performance across decision variables. We formulate data curation as a
sequential decision-making problem$\unicode{x2013}$multi-fidelity, multi-scale
Bayesian optimization$\unicode{x2013}$where $\{$data mixtures, model scale,
training steps$\}$ are adaptively selected to balance training cost and
potential information gain. Our framework naturally gives rise to algorithm
prototypes that leverage noisy information from inexpensive experiments to
systematically inform costly training decisions. To accelerate methodological
progress, we build a simulator based on 472 language model pre-training runs
with varying data compositions from the SlimPajama dataset. We observe that
even simple kernels and acquisition functions can enable principled decisions
across training models from 20M to 1B parameters and achieve $\textbf{2.6x}$
and $\textbf{3.3x}$ speedups compared to multi-fidelity BO and random search
baselines. Taken together, our framework underscores potential efficiency gains
achievable by developing principled and transferable data mixture optimization
methods.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 22:19:47 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yen",
"Thomson",
""
],
[
"Siah",
"Andrew Wei Tung",
""
],
[
"Chen",
"Haozhe",
""
],
[
"Peng",
"Tianyi",
""
],
[
"Guetta",
"Daniel",
""
],
[
"Namkoong",
"Hongseok",
""
]
] | TITLE: Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian
Framework
ABSTRACT: Careful curation of data sources can significantly improve the performance of
LLM pre-training, but predominant approaches rely heavily on intuition or
costly trial-and-error, making them difficult to generalize across different
data domains and downstream tasks. Although scaling laws can provide a
principled and general approach for data curation, standard deterministic
extrapolation from small-scale experiments to larger scales requires strong
assumptions on the reliability of such extrapolation, whose brittleness has
been highlighted in prior works. In this paper, we introduce a
$\textit{probabilistic extrapolation framework}$ for data mixture optimization
that avoids rigid assumptions and explicitly models the uncertainty in
performance across decision variables. We formulate data curation as a
sequential decision-making problem$\unicode{x2013}$multi-fidelity, multi-scale
Bayesian optimization$\unicode{x2013}$where $\{$data mixtures, model scale,
training steps$\}$ are adaptively selected to balance training cost and
potential information gain. Our framework naturally gives rise to algorithm
prototypes that leverage noisy information from inexpensive experiments to
systematically inform costly training decisions. To accelerate methodological
progress, we build a simulator based on 472 language model pre-training runs
with varying data compositions from the SlimPajama dataset. We observe that
even simple kernels and acquisition functions can enable principled decisions
across training models from 20M to 1B parameters and achieve $\textbf{2.6x}$
and $\textbf{3.3x}$ speedups compared to multi-fidelity BO and random search
baselines. Taken together, our framework underscores potential efficiency gains
achievable by developing principled and transferable data mixture optimization
methods.
|
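The data-mixture record above is about using cheap pilot runs to inform costly training decisions. The sketch below is deliberately not the paper's probabilistic framework; it is the simple deterministic log-log extrapolation baseline that the abstract argues is brittle, included only to make the multi-fidelity setup (small-scale runs informing a large-scale choice) concrete. Every function, mixture, and constant is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ground truth: loss(mixture, scale) follows a noisy power law whose
# coefficients depend on the (made-up) mixture weights.
def true_loss(mix, scale):
    a = 3.0 - 0.8 * mix[0] + 0.3 * mix[1]   # mixture-dependent offset
    b = -0.25 - 0.05 * mix[0]               # mixture-dependent exponent
    return np.exp(a) * scale ** b

mixtures = [np.array([w, 1 - w]) for w in np.linspace(0.1, 0.9, 9)]
small_scales = np.array([2e7, 5e7, 1e8])    # cheap pilot runs
target_scale = 1e9                          # costly run we want to choose well

# Cheap, noisy pilot observations for every candidate mixture.
pilot = {i: true_loss(m, small_scales) * np.exp(0.02 * rng.normal(size=3))
         for i, m in enumerate(mixtures)}

# Fit log-loss vs log-scale per mixture and extrapolate to the target scale.
pred = {}
for i, losses in pilot.items():
    b, a = np.polyfit(np.log(small_scales), np.log(losses), deg=1)
    pred[i] = np.exp(a + b * np.log(target_scale))

best = min(pred, key=pred.get)
print("predicted best mixture:", mixtures[best],
      "predicted loss:", round(float(pred[best]), 3))
print("actual loss at 1B scale:", round(float(true_loss(mixtures[best], target_scale)), 3))
```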
2503.21029 | Jungyeul Park | Jungyeul Park and Yige Chen and Kyuwon Kim and KyungTae Lim and
Chulwoo Park | Enhancing Korean Dependency Parsing with Morphosyntactic Features | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | This paper introduces UniDive for Korean, an integrated framework that
bridges Universal Dependencies (UD) and Universal Morphology (UniMorph) to
enhance the representation and processing of Korean {morphosyntax}. Korean's
rich inflectional morphology and flexible word order pose challenges for
existing frameworks, which often treat morphology and syntax separately,
leading to inconsistencies in linguistic analysis. UniDive unifies syntactic
and morphological annotations by preserving syntactic dependencies while
incorporating UniMorph-derived features, improving consistency in annotation.
We construct an integrated dataset and apply it to dependency parsing,
demonstrating that enriched morphosyntactic features enhance parsing accuracy,
particularly in distinguishing grammatical relations influenced by morphology.
Our experiments, conducted with both encoder-only and decoder-only models,
confirm that explicit morphological information contributes to more accurate
syntactic analysis.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 22:27:26 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Park",
"Jungyeul",
""
],
[
"Chen",
"Yige",
""
],
[
"Kim",
"Kyuwon",
""
],
[
"Lim",
"KyungTae",
""
],
[
"Park",
"Chulwoo",
""
]
] | TITLE: Enhancing Korean Dependency Parsing with Morphosyntactic Features
ABSTRACT: This paper introduces UniDive for Korean, an integrated framework that
bridges Universal Dependencies (UD) and Universal Morphology (UniMorph) to
enhance the representation and processing of Korean {morphosyntax}. Korean's
rich inflectional morphology and flexible word order pose challenges for
existing frameworks, which often treat morphology and syntax separately,
leading to inconsistencies in linguistic analysis. UniDive unifies syntactic
and morphological annotations by preserving syntactic dependencies while
incorporating UniMorph-derived features, improving consistency in annotation.
We construct an integrated dataset and apply it to dependency parsing,
demonstrating that enriched morphosyntactic features enhance parsing accuracy,
particularly in distinguishing grammatical relations influenced by morphology.
Our experiments, conducted with both encoder-only and decoder-only models,
confirm that explicit morphological information contributes to more accurate
syntactic analysis.
|
2503.21033 | Dimitar Mileski | Dimitar Mileski, Nikola Petrovski, Marjan Gusev | Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training large language models requires extensive processing, made possible
by many high-performance computing resources. This study compares multi-node
and multi-GPU environments for training large language models of
electrocardiograms. It provides a detailed mapping of current frameworks for
distributed deep learning in multinode and multi-GPU settings, including
Horovod from Uber, DeepSpeed from Microsoft, and the built-in distributed
capabilities of PyTorch and TensorFlow. We compare various multi-GPU setups for
different dataset configurations, utilizing multiple HPC nodes independently
and focusing on scalability, speedup, efficiency, and overhead. The analysis
leverages HPC infrastructure with SLURM, Apptainer (Singularity) containers,
CUDA, PyTorch, and shell scripts to support training workflows and automation.
We achieved a sub-linear speedup when scaling the number of GPUs, with values
of 1.6x for two and 1.9x for four.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 22:48:17 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Mileski",
"Dimitar",
""
],
[
"Petrovski",
"Nikola",
""
],
[
"Gusev",
"Marjan",
""
]
] | TITLE: Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs
ABSTRACT: Training large language models requires extensive processing, made possible
by many high-performance computing resources. This study compares multi-node
and multi-GPU environments for training large language models of
electrocardiograms. It provides a detailed mapping of current frameworks for
distributed deep learning in multinode and multi-GPU settings, including
Horovod from Uber, DeepSpeed from Microsoft, and the built-in distributed
capabilities of PyTorch and TensorFlow. We compare various multi-GPU setups for
different dataset configurations, utilizing multiple HPC nodes independently
and focusing on scalability, speedup, efficiency, and overhead. The analysis
leverages HPC infrastructure with SLURM, Apptainer (Singularity) containers,
CUDA, PyTorch, and shell scripts to support training workflows and automation.
We achieved a sub-linear speedup when scaling the number of GPUs, with values
of 1.6x for two and 1.9x for four.
|
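The scalability record above reports sub-linear speedups of 1.6x on two GPUs and 1.9x on four. Below is a tiny sketch of how speedup and parallel efficiency follow from wall-clock times (the single-GPU time is illustrative, not from the paper):

```python
def scaling_metrics(t1, tn, n):
    """Speedup and parallel efficiency relative to a single-GPU run."""
    speedup = t1 / tn
    efficiency = speedup / n
    return speedup, efficiency


if __name__ == "__main__":
    t1 = 100.0  # minutes per epoch on 1 GPU (illustrative)
    # Multi-GPU times chosen to match the reported 1.6x and 1.9x speedups.
    for n, tn in [(2, 100.0 / 1.6), (4, 100.0 / 1.9)]:
        s, e = scaling_metrics(t1, tn, n)
        print(f"{n} GPUs: speedup {s:.2f}x, efficiency {e:.0%}")
```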
2503.21036 | Yunnan Wu | Yunnan Wu, Paul Chen, Deshank Baranwal, Jinlong Zhou, Jian Yuan | The Art of Tool Interface Design | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present an agentic framework, Thinker, which achieves state-of-the-art
performance in challenging reasoning tasks for realistic customer service
scenarios that involve complex business logic and human interactions over long
horizons. On the $\tau$-bench retail dataset, Thinker achieves 82.6\% success
rate with GPT-4o (version 2024-06-01) (baseline: 68.3\%), and 81.9\% success
rate with Llama-3.1 405B (baseline: 49.6\%), without any fine-tuning. Thinker
effectively closes the gap in reasoning capabilities between the base models by
introducing proper structure.
The key features of the Thinker framework are: (1) State-Machine Augmented
Generation (SMAG), which represents business logic as state machines that the
LLM uses as tools. (2) Delegation of tasks from the main
reasoning loop to LLM-powered tools. (3) Adaptive context management.
Our prompting-only solution achieves significant gains, while still
maintaining a standard agentic architecture with a ReAct style reasoning loop.
The key is to innovate on the tool interface design, as exemplified by SMAG and
the LLM-powered tools.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 23:02:00 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wu",
"Yunnan",
""
],
[
"Chen",
"Paul",
""
],
[
"Baranwal",
"Deshank",
""
],
[
"Zhou",
"Jinlong",
""
],
[
"Yuan",
"Jian",
""
]
] | TITLE: The Art of Tool Interface Design
ABSTRACT: We present an agentic framework, Thinker, which achieves state-of-the-art
performance in challenging reasoning tasks for realistic customer service
scenarios that involve complex business logic and human interactions over long
horizons. On the $\tau$-bench retail dataset, Thinker achieves 82.6\% success
rate with GPT-4o (version 2024-06-01) (baseline: 68.3\%), and 81.9\% success
rate with Llama-3.1 405B (baseline: 49.6\%), without any fine-tuning. Thinker
effectively closes the gap in reasoning capabilities between the base models by
introducing proper structure.
The key features of the Thinker framework are: (1) State-Machine Augmented
Generation (SMAG), which represents business logic as state machines that the
LLM uses as tools. (2) Delegation of tasks from the main
reasoning loop to LLM-powered tools. (3) Adaptive context management.
Our prompting-only solution achieves significant gains, while still
maintaining a standard agentic architecture with a ReAct style reasoning loop.
The key is to innovate on the tool interface design, as exemplified by SMAG and
the LLM-powered tools.
|
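For the Thinker record above, State-Machine Augmented Generation exposes business logic to the LLM as a state machine tool. The sketch below is a toy return-workflow state machine with an interface an agent loop could call; the states, actions, and method names are hypothetical, not the paper's.

```python
from typing import Dict, Tuple


class StateMachineTool:
    """Business logic exposed to an LLM agent as a state machine
    (a toy return workflow, not the paper's actual schema)."""

    def __init__(self, transitions: Dict[Tuple[str, str], str], start: str):
        self.transitions = transitions  # (state, action) -> next state
        self.state = start

    def allowed_actions(self):
        return sorted(a for (s, a) in self.transitions if s == self.state)

    def step(self, action: str) -> str:
        key = (self.state, action)
        if key not in self.transitions:
            return (f"REJECTED: '{action}' not allowed in state '{self.state}'. "
                    f"Allowed: {self.allowed_actions()}")
        self.state = self.transitions[key]
        return f"OK: now in state '{self.state}'."


if __name__ == "__main__":
    sm = StateMachineTool(
        transitions={
            ("order_placed", "request_return"): "return_pending",
            ("return_pending", "approve_return"): "refund_issued",
            ("return_pending", "reject_return"): "order_placed",
        },
        start="order_placed",
    )
    # An agent's reasoning loop would issue these as tool invocations:
    print(sm.step("approve_return"))  # rejected; guardrail from the state machine
    print(sm.step("request_return"))
    print(sm.step("approve_return"))
```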
2503.21048 | Jun Ohkubo | Ichiro Ohta, Shota Koyanagi, Kayo Kinjo, Jun Ohkubo | Integrated utilization of equations and small dataset in the Koopman
operator: applications to forward and inverse Problems | 10 pages, 8 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been a growing interest in data-driven approaches
in physics, such as extended dynamic mode decomposition (EDMD). The EDMD
algorithm focuses on nonlinear time-evolution systems, and the constructed
Koopman matrix yields the next-time prediction with only linear matrix-product
operations. Note that data-driven approaches generally require a large dataset.
However, assume that one has some prior knowledge, even if it may be ambiguous.
Then, one could achieve sufficient learning from only a small dataset by taking
advantage of the prior knowledge. This paper presents methods for incorporating
ambiguous prior knowledge into the EDMD algorithm. The ambiguous prior
knowledge in this paper corresponds to the underlying time-evolution equations
with unknown parameters. First, we apply the proposed method to forward
problems, i.e., prediction tasks. Second, we propose a scheme to apply the
proposed method to inverse problems, i.e., parameter estimation tasks. We
demonstrate the learning with only a small dataset using guiding examples,
i.e., the Duffing and the van der Pol systems.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 23:45:06 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ohta",
"Ichiro",
""
],
[
"Koyanagi",
"Shota",
""
],
[
"Kinjo",
"Kayo",
""
],
[
"Ohkubo",
"Jun",
""
]
] | TITLE: Integrated utilization of equations and small dataset in the Koopman
operator: applications to forward and inverse Problems
ABSTRACT: In recent years, there has been a growing interest in data-driven approaches
in physics, such as extended dynamic mode decomposition (EDMD). The EDMD
algorithm focuses on nonlinear time-evolution systems, and the constructed
Koopman matrix yields the next-time prediction with only linear matrix-product
operations. Note that data-driven approaches generally require a large dataset.
However, assume that one has some prior knowledge, even if it may be ambiguous.
Then, one could achieve sufficient learning from only a small dataset by taking
advantage of the prior knowledge. This paper presents methods for incorporating
ambiguous prior knowledge into the EDMD algorithm. The ambiguous prior
knowledge in this paper corresponds to the underlying time-evolution equations
with unknown parameters. First, we apply the proposed method to forward
problems, i.e., prediction tasks. Second, we propose a scheme to apply the
proposed method to inverse problems, i.e., parameter estimation tasks. We
demonstrate the learning with only a small dataset using guiding examples,
i.e., the Duffing and the van der Pol systems.
|
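The Koopman record above builds on extended dynamic mode decomposition (EDMD). The sketch below is plain EDMD, without the paper's prior-knowledge terms, on the van der Pol system mentioned in the abstract; the monomial dictionary, step size, and trajectory counts are arbitrary choices.

```python
import numpy as np


def vdp(z, mu=1.0):
    """van der Pol vector field."""
    x, y = z
    return np.array([y, mu * (1 - x ** 2) * y - x])


def rk4_step(z, dt, mu=1.0):
    k1 = vdp(z, mu)
    k2 = vdp(z + 0.5 * dt * k1, mu)
    k3 = vdp(z + 0.5 * dt * k2, mu)
    k4 = vdp(z + dt * k3, mu)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)


def dictionary(Z):
    """Monomials up to degree 3 in (x, y), with a constant term."""
    x, y = Z[:, 0], Z[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2,
                            x ** 3, y ** 3, x ** 2 * y, x * y ** 2])


# Collect snapshot pairs from a few random initial conditions (small dataset).
rng = np.random.default_rng(4)
dt, X, Y = 0.02, [], []
for _ in range(5):
    z = rng.uniform(-2, 2, size=2)
    for _ in range(400):
        z_next = rk4_step(z, dt)
        X.append(z)
        Y.append(z_next)
        z = z_next
X, Y = np.array(X), np.array(Y)

# EDMD: least-squares Koopman matrix K with Psi(X) K ~ Psi(Y).
Psi_X, Psi_Y = dictionary(X), dictionary(Y)
K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)

# One-step prediction: lift, multiply by K, read off the linear observables x, y.
z0 = np.array([1.0, 0.0])
pred = (dictionary(z0[None, :]) @ K)[0, 1:3]
print("EDMD one-step prediction:", pred.round(4))
print("RK4 ground truth:        ", rk4_step(z0, dt).round(4))
```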
2503.21054 | Yiqing Shen | Yiqing Shen, Chenjia Li, Bohan Liu, Cheng-Yi Li, Tito Porras, and
Mathias Unberath | Operating Room Workflow Analysis via Reasoning Segmentation over Digital
Twins | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Analyzing operating room (OR) workflows to derive quantitative insights into
OR efficiency is important for hospitals to maximize patient care and financial
sustainability. Prior work on OR-level workflow analysis has relied on
end-to-end deep neural networks. While these approaches work well in
constrained settings, they are limited to the conditions specified at
development time and do not offer the flexibility necessary to accommodate the
OR workflow analysis needs of various OR scenarios (e.g., large academic center
vs. rural provider) without data collection, annotation, and retraining.
Reasoning segmentation (RS) based on foundation models offers this flexibility
by enabling automated analysis of OR workflows from OR video feeds given only
an implicit text query related to the objects of interest. Due to the reliance
on large language model (LLM) fine-tuning, current RS approaches struggle with
reasoning about semantic/spatial relationships and show limited generalization
to OR video due to variations in visual characteristics and domain-specific
terminology. To address these limitations, we first propose a novel digital
twin (DT) representation that preserves both semantic and spatial relationships
between the various OR components. Then, building on this foundation, we
propose ORDiRS (Operating Room Digital twin representation for Reasoning
Segmentation), an LLM-tuning-free RS framework that reformulates RS into a
"reason-retrieval-synthesize" paradigm. Finally, we present ORDiRS-Agent, an
LLM-based agent that decomposes OR workflow analysis queries into manageable RS
sub-queries and generates responses by combining detailed textual explanations
with supporting visual evidence from RS. Experimental results on both an
in-house and a public OR dataset demonstrate that our ORDiRS achieves a cIoU
improvement of 6.12%-9.74% compared to the existing state-of-the-arts.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 23:59:32 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Shen",
"Yiqing",
""
],
[
"Li",
"Chenjia",
""
],
[
"Liu",
"Bohan",
""
],
[
"Li",
"Cheng-Yi",
""
],
[
"Porras",
"Tito",
""
],
[
"Unberath",
"Mathias",
""
]
] | TITLE: Operating Room Workflow Analysis via Reasoning Segmentation over Digital
Twins
ABSTRACT: Analyzing operating room (OR) workflows to derive quantitative insights into
OR efficiency is important for hospitals to maximize patient care and financial
sustainability. Prior work on OR-level workflow analysis has relied on
end-to-end deep neural networks. While these approaches work well in
constrained settings, they are limited to the conditions specified at
development time and do not offer the flexibility necessary to accommodate the
OR workflow analysis needs of various OR scenarios (e.g., large academic center
vs. rural provider) without data collection, annotation, and retraining.
Reasoning segmentation (RS) based on foundation models offers this flexibility
by enabling automated analysis of OR workflows from OR video feeds given only
an implicit text query related to the objects of interest. Due to the reliance
on large language model (LLM) fine-tuning, current RS approaches struggle with
reasoning about semantic/spatial relationships and show limited generalization
to OR video due to variations in visual characteristics and domain-specific
terminology. To address these limitations, we first propose a novel digital
twin (DT) representation that preserves both semantic and spatial relationships
between the various OR components. Then, building on this foundation, we
propose ORDiRS (Operating Room Digital twin representation for Reasoning
Segmentation), an LLM-tuning-free RS framework that reformulates RS into a
"reason-retrieval-synthesize" paradigm. Finally, we present ORDiRS-Agent, an
LLM-based agent that decomposes OR workflow analysis queries into manageable RS
sub-queries and generates responses by combining detailed textual explanations
with supporting visual evidence from RS. Experimental results on both an
in-house and a public OR dataset demonstrate that our ORDiRS achieves a cIoU
improvement of 6.12%-9.74% compared to the existing state-of-the-arts.
|
2503.21072 | JudyX Yang | Judy X Yang, Jing Wang, Zhuanfeng Li, Chenhong Sui, Zekun Long, and
Jun Zhou | HSLiNets: Evaluating Band Ordering Strategies in Hyperspectral and LiDAR
Fusion | 2 figures, 5 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The integration of hyperspectral imaging (HSI) and Light Detection and
Ranging (LiDAR) data provides complementary spectral and spatial information
for remote sensing applications. While previous studies have explored the role
of band selection and grouping in HSI classification, little attention has been
given to how the spectral sequence or band order affects classification
outcomes when fused with LiDAR. In this work, we systematically investigate the
influence of band order on HSI-LiDAR fusion performance. Through extensive
experiments, we demonstrate that band order significantly impacts
classification accuracy, revealing a previously overlooked factor in
fusion-based models. Motivated by this observation, we propose a novel fusion
architecture that not only integrates HSI and LiDAR data but also learns from
multiple band order configurations. The proposed method enhances feature
representation by adaptively fusing different spectral sequences, leading to
improved classification accuracy. Experimental results on the Houston 2013 and
Trento datasets show that our approach outperforms state-of-the-art fusion
models. Data and code are available at https://github.com/Judyxyang/HSLiNets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 01:11:31 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yang",
"Judy X",
""
],
[
"Wang",
"Jing",
""
],
[
"Li",
"Zhuanfeng",
""
],
[
"Sui",
"Chenhong",
""
],
[
"Long",
"Zekun",
""
],
[
"Zhou",
"Jun",
""
]
] | TITLE: HSLiNets: Evaluating Band Ordering Strategies in Hyperspectral and LiDAR
Fusion
ABSTRACT: The integration of hyperspectral imaging (HSI) and Light Detection and
Ranging (LiDAR) data provides complementary spectral and spatial information
for remote sensing applications. While previous studies have explored the role
of band selection and grouping in HSI classification, little attention has been
given to how the spectral sequence or band order affects classification
outcomes when fused with LiDAR. In this work, we systematically investigate the
influence of band order on HSI-LiDAR fusion performance. Through extensive
experiments, we demonstrate that band order significantly impacts
classification accuracy, revealing a previously overlooked factor in
fusion-based models. Motivated by this observation, we propose a novel fusion
architecture that not only integrates HSI and LiDAR data but also learns from
multiple band order configurations. The proposed method enhances feature
representation by adaptively fusing different spectral sequences, leading to
improved classification accuracy. Experimental results on the Houston 2013 and
Trento datasets show that our approach outperforms state-of-the-art fusion
models. Data and code are available at https://github.com/Judyxyang/HSLiNets.
|
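For the HSLiNets record above, the core observation is that the order of spectral bands fed to a fused HSI-LiDAR model matters. Below is a tiny sketch of producing several band orderings of a synthetic cube; the orderings and array shapes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(10)
hsi = rng.normal(size=(64, 64, 144))   # H x W x spectral bands (synthetic cube)
lidar = rng.normal(size=(64, 64, 1))   # single-channel elevation raster


def reorder_bands(cube, order):
    """Return the cube with its spectral bands permuted."""
    return cube[:, :, order]


n_bands = hsi.shape[-1]
orderings = {
    "forward": np.arange(n_bands),
    "reverse": np.arange(n_bands)[::-1],
    "interleaved": np.concatenate([np.arange(0, n_bands, 2),
                                   np.arange(1, n_bands, 2)]),
}

# Each ordering yields a different spectral sequence for the same pixel; a
# sequence model fused with LiDAR therefore sees genuinely different inputs.
views = {name: np.concatenate([reorder_bands(hsi, o), lidar], axis=-1)
         for name, o in orderings.items()}
for name, v in views.items():
    print(name, v.shape, "first 3 band indices:", orderings[name][:3])
```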
2503.21084 | Yan Tang | Yan Tang | Geographical hotspot prediction based on point cloud-voxel-community
partition clustering | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing solutions to the hotspot prediction problem in the field of
geographic information remain at a relatively preliminary stage. This study
presents a novel approach for detecting and predicting geographical hotspots,
utilizing point cloud-voxel-community partition clustering. By analyzing
high-dimensional data, we represent spatial information through point clouds,
which are then subdivided into multiple voxels to enhance analytical
efficiency. Our method identifies spatial voxels with similar characteristics
through community partitioning, thereby revealing underlying patterns in
hotspot distributions. Experimental results indicate that when applied to a
dataset of archaeological sites in Turkey, our approach achieves a 19.31%
increase in processing speed, with an accuracy loss of merely 6%, outperforming
traditional clustering methods. This method not only provides a fresh
perspective for hotspot prediction but also serves as an effective tool for
high-dimensional data analysis.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 01:59:24 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Tang",
"Yan",
""
]
] | TITLE: Geographical hotspot prediction based on point cloud-voxel-community
partition clustering
ABSTRACT: Existing solutions to the hotspot prediction problem in the field of
geographic information remain at a relatively preliminary stage. This study
presents a novel approach for detecting and predicting geographical hotspots,
utilizing point cloud-voxel-community partition clustering. By analyzing
high-dimensional data, we represent spatial information through point clouds,
which are then subdivided into multiple voxels to enhance analytical
efficiency. Our method identifies spatial voxels with similar characteristics
through community partitioning, thereby revealing underlying patterns in
hotspot distributions. Experimental results indicate that when applied to a
dataset of archaeological sites in Turkey, our approach achieves a 19.31%
increase in processing speed, with an accuracy loss of merely 6%, outperforming
traditional clustering methods. This method not only provides a fresh
perspective for hotspot prediction but also serves as an effective tool for
high-dimensional data analysis.
|
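For the hotspot-prediction record above, the pipeline is point cloud to voxels to community partition. The paper presumably uses a proper graph community algorithm; the sketch below only voxelizes synthetic points and groups occupied voxels by 26-connectivity to show the data flow, not the authors' method.

```python
import numpy as np
from collections import deque


def voxelize(points, voxel_size):
    """Map each point to an integer voxel index."""
    return np.floor(points / voxel_size).astype(int)


def voxel_components(voxel_ids):
    """Group occupied voxels into clusters of 26-connected neighbours
    (a simple stand-in for the community-partition step)."""
    occupied = {tuple(v) for v in voxel_ids}
    offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
               for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)]
    labels, next_label = {}, 0
    for start in occupied:
        if start in labels:
            continue
        queue = deque([start])
        labels[start] = next_label
        while queue:
            v = queue.popleft()
            for d in offsets:
                nb = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
                if nb in occupied and nb not in labels:
                    labels[nb] = next_label
                    queue.append(nb)
        next_label += 1
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Two synthetic "hotspots" of 3-D points plus sparse background noise.
    pts = np.vstack([rng.normal([0, 0, 0], 0.5, size=(200, 3)),
                     rng.normal([10, 10, 0], 0.5, size=(150, 3)),
                     rng.uniform(-5, 15, size=(20, 3))])
    vox = voxelize(pts, voxel_size=1.0)
    labels = voxel_components(vox)
    counts = np.bincount(list(labels.values()))
    print("clusters found:", len(counts), "largest cluster sizes:", sorted(counts)[-2:])
```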
2503.21098 | Yedan Shen | Yedan Shen, Kaixin Wu, Yuechen Ding, Jingyuan Wen, Hong Liu, Mingjie
Zhong, Zhouhan Lin, Jia Xu, Linjian Mo | Alleviating LLM-based Generative Retrieval Hallucination in Alipay
Search | 4 pages | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative retrieval (GR) has revolutionized document retrieval with the
advent of large language models (LLMs), and LLM-based GR is gradually being
adopted by the industry. Despite its remarkable advantages and potential,
LLM-based GR suffers from hallucination and generates documents that are
irrelevant to the query in some instances, severely challenging its credibility
in practical applications. We thereby propose an optimized GR framework
designed to alleviate retrieval hallucination, which integrates knowledge
distillation reasoning into model training and incorporates a decision agent to
further improve retrieval precision. Specifically, we employ LLMs to assess and
reason GR retrieved query-document (q-d) pairs, and then distill the reasoning
data as transferred knowledge to the GR model. Moreover, we utilize a decision
agent as a post-processing step to extend the GR retrieved documents through a retrieval
model and select the most relevant ones from multiple perspectives as the final
generative retrieval result. Extensive offline experiments on real-world
datasets and online A/B tests on Fund Search and Insurance Search in Alipay
demonstrate our framework's superiority and effectiveness in improving search
quality and conversion gains.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 02:36:48 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Shen",
"Yedan",
""
],
[
"Wu",
"Kaixin",
""
],
[
"Ding",
"Yuechen",
""
],
[
"Wen",
"Jingyuan",
""
],
[
"Liu",
"Hong",
""
],
[
"Zhong",
"Mingjie",
""
],
[
"Lin",
"Zhouhan",
""
],
[
"Xu",
"Jia",
""
],
[
"Mo",
"Linjian",
""
]
] | TITLE: Alleviating LLM-based Generative Retrieval Hallucination in Alipay
Search
ABSTRACT: Generative retrieval (GR) has revolutionized document retrieval with the
advent of large language models (LLMs), and LLM-based GR is gradually being
adopted by the industry. Despite its remarkable advantages and potential,
LLM-based GR suffers from hallucination and generates documents that are
irrelevant to the query in some instances, severely challenging its credibility
in practical applications. We thereby propose an optimized GR framework
designed to alleviate retrieval hallucination, which integrates knowledge
distillation reasoning into model training and incorporates a decision agent to
further improve retrieval precision. Specifically, we employ LLMs to assess and
reason GR retrieved query-document (q-d) pairs, and then distill the reasoning
data as transferred knowledge to the GR model. Moreover, we utilize a decision
agent as a post-processing step to extend the GR retrieved documents through a retrieval
model and select the most relevant ones from multiple perspectives as the final
generative retrieval result. Extensive offline experiments on real-world
datasets and online A/B tests on Fund Search and Insurance Search in Alipay
demonstrate our framework's superiority and effectiveness in improving search
quality and conversion gains.
|
2503.21099 | Yun Zhu | Yun Zhu, Le Hui, Hang Yang, Jianjun Qian, Jin Xie, Jian Yang | Learning Class Prototypes for Unified Sparse Supervised 3D Object
Detection | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Both indoor and outdoor scene perceptions are essential for embodied
intelligence. However, current sparse supervised 3D object detection methods
focus solely on outdoor scenes without considering indoor settings. To this
end, we propose a unified sparse supervised 3D object detection method for both
indoor and outdoor scenes through learning class prototypes to effectively
utilize unlabeled objects. Specifically, we first propose a prototype-based
object mining module that converts the unlabeled object mining into a matching
problem between class prototypes and unlabeled features. By using optimal
transport matching results, we assign prototype labels to high-confidence
features, thereby achieving the mining of unlabeled objects. We then present a
multi-label cooperative refinement module to effectively recover missed
detections through pseudo label quality control and prototype label
cooperation. Experiments show that our method achieves state-of-the-art
performance under the one object per scene sparse supervised setting across
indoor and outdoor datasets. With only one labeled object per scene, our method
achieves about 78%, 90%, and 96% performance compared to the fully supervised
detector on ScanNet V2, SUN RGB-D, and KITTI, respectively, highlighting the
scalability of our method. Code is available at
https://github.com/zyrant/CPDet3D.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 02:37:05 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhu",
"Yun",
""
],
[
"Hui",
"Le",
""
],
[
"Yang",
"Hang",
""
],
[
"Qian",
"Jianjun",
""
],
[
"Xie",
"Jin",
""
],
[
"Yang",
"Jian",
""
]
] | TITLE: Learning Class Prototypes for Unified Sparse Supervised 3D Object
Detection
ABSTRACT: Both indoor and outdoor scene perceptions are essential for embodied
intelligence. However, current sparse supervised 3D object detection methods
focus solely on outdoor scenes without considering indoor settings. To this
end, we propose a unified sparse supervised 3D object detection method for both
indoor and outdoor scenes through learning class prototypes to effectively
utilize unlabeled objects. Specifically, we first propose a prototype-based
object mining module that converts the unlabeled object mining into a matching
problem between class prototypes and unlabeled features. By using optimal
transport matching results, we assign prototype labels to high-confidence
features, thereby achieving the mining of unlabeled objects. We then present a
multi-label cooperative refinement module to effectively recover missed
detections through pseudo label quality control and prototype label
cooperation. Experiments show that our method achieves state-of-the-art
performance under the one object per scene sparse supervised setting across
indoor and outdoor datasets. With only one labeled object per scene, our method
achieves about 78%, 90%, and 96% performance compared to the fully supervised
detector on ScanNet V2, SUN RGB-D, and KITTI, respectively, highlighting the
scalability of our method. Code is available at
https://github.com/zyrant/CPDet3D.
|
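The sparse-supervision record above assigns prototype labels to unlabeled features via optimal-transport matching. The sketch below is a generic entropic Sinkhorn matcher on synthetic prototypes and features with a made-up confidence threshold; it is not the paper's CPDet3D implementation.

```python
import numpy as np


def sinkhorn(cost, eps=0.05, n_iter=200):
    """Entropic optimal transport with uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / eps)
    a, b = np.ones(n) / n, np.ones(m) / m  # uniform marginals
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]     # transport plan


if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Synthetic class prototypes and unlabeled object features (8-D).
    prototypes = rng.normal(size=(3, 8))
    feats = np.vstack([p + 0.2 * rng.normal(size=(5, 8)) for p in prototypes])
    # Cosine-distance cost matrix between prototypes and features.
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    F = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    cost = 1.0 - P @ F.T
    plan = sinkhorn(cost)
    assign = plan.argmax(axis=0)           # prototype label per feature
    conf = plan.max(axis=0) / plan.sum(axis=0)
    keep = conf > 0.8                      # keep only high-confidence pseudo labels
    print("assignments:", assign, "\nkept:", keep.sum(), "of", len(feats))
```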
2503.21122 | Han Ding | Teng Huang, Han Ding, Wenxin Sun, Cui Zhao, Ge Wang, Fei Wang, Kun
Zhao, Zhi Wang, Wei Xi | One Snapshot is All You Need: A Generalized Method for mmWave Signal
Generation | IEEE INFOCOM 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wireless sensing systems, particularly those using mmWave technology, offer
distinct advantages over traditional vision-based approaches, such as enhanced
privacy and effectiveness in poor lighting conditions. These systems,
leveraging FMCW signals, have shown success in human-centric applications like
localization, gesture recognition, and so on. However, comprehensive mmWave
datasets for diverse applications are scarce, often constrained by
pre-processed signatures (e.g., point clouds or RA heatmaps) and inconsistent
annotation formats. To overcome these limitations, we propose mmGen, a novel
and generalized framework tailored for full-scene mmWave signal generation. By
constructing physical signal transmission models, mmGen synthesizes
human-reflected and environment-reflected mmWave signals from the constructed
3D meshes. Additionally, we incorporate methods to account for material
properties, antenna gains, and multipath reflections, enhancing the realism of
the synthesized signals. We conduct extensive experiments using a prototype
system with commercial mmWave devices and Kinect sensors. The results show that
the average similarity of Range-Angle and micro-Doppler signatures between the
synthesized and real-captured signals across three different environments
exceeds 0.91 and 0.89, respectively, demonstrating the effectiveness and
practical applicability of mmGen.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 03:24:10 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Huang",
"Teng",
""
],
[
"Ding",
"Han",
""
],
[
"Sun",
"Wenxin",
""
],
[
"Zhao",
"Cui",
""
],
[
"Wang",
"Ge",
""
],
[
"Wang",
"Fei",
""
],
[
"Zhao",
"Kun",
""
],
[
"Wang",
"Zhi",
""
],
[
"Xi",
"Wei",
""
]
] | TITLE: One Snapshot is All You Need: A Generalized Method for mmWave Signal
Generation
ABSTRACT: Wireless sensing systems, particularly those using mmWave technology, offer
distinct advantages over traditional vision-based approaches, such as enhanced
privacy and effectiveness in poor lighting conditions. These systems,
leveraging FMCW signals, have shown success in human-centric applications like
localization, gesture recognition, and so on. However, comprehensive mmWave
datasets for diverse applications are scarce, often constrained by
pre-processed signatures (e.g., point clouds or RA heatmaps) and inconsistent
annotation formats. To overcome these limitations, we propose mmGen, a novel
and generalized framework tailored for full-scene mmWave signal generation. By
constructing physical signal transmission models, mmGen synthesizes
human-reflected and environment-reflected mmWave signals from the constructed
3D meshes. Additionally, we incorporate methods to account for material
properties, antenna gains, and multipath reflections, enhancing the realism of
the synthesized signals. We conduct extensive experiments using a prototype
system with commercial mmWave devices and Kinect sensors. The results show that
the average similarity of Range-Angle and micro-Doppler signatures between the
synthesized and real-captured signals across three different environments
exceeds 0.91 and 0.89, respectively, demonstrating the effectiveness and
practical applicability of mmGen.
|
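For the mmGen record above, the full pipeline involves 3D meshes, material properties, antenna gains, and multipath. The sketch below only demonstrates the core FMCW relation such synthesis rests on, namely that a reflector at range R produces a beat tone at f_b = 2SR/c, using made-up radar parameters rather than the paper's transmission models.

```python
import numpy as np

# Illustrative FMCW chirp parameters (not taken from the paper).
c = 3e8            # speed of light (m/s)
B = 4e9            # sweep bandwidth (Hz)
T_chirp = 50e-6    # chirp duration (s)
S = B / T_chirp    # chirp slope (Hz/s)
fs = 10e6          # ADC sampling rate (Hz)
n = int(fs * T_chirp)
t = np.arange(n) / fs

# Synthetic reflectors: (range in metres, relative reflectivity).
reflectors = [(1.5, 1.0), (4.0, 0.6)]

# Each reflector at range R contributes a beat tone at f_b = 2*S*R/c
# (the constant carrier-phase term is omitted here).
rng = np.random.default_rng(7)
beat = np.zeros(n, dtype=complex)
for R, amp in reflectors:
    f_b = 2 * S * R / c
    beat += amp * np.exp(1j * 2 * np.pi * f_b * t)
beat += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Range FFT: bin k corresponds to range R = k * (fs / n) * c / (2 * S).
spectrum = np.abs(np.fft.fft(beat * np.hanning(n)))
peak_bins = np.argsort(spectrum[: n // 2])[-2:]
ranges = peak_bins * (fs / n) * c / (2 * S)
print("recovered ranges (m):", np.sort(ranges).round(2))
```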
2503.21124 | Shuaiyu Zhang | Shuaiyu Zhang and Xun Lin and Rongxiang Zhang and Yu Bai and Yong Xu
and Tao Tan and Xunbin Zheng and Zitong Yu | AdaMHF: Adaptive Multimodal Hierarchical Fusion for Survival Prediction | Accepted by ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of pathologic images and genomic data for survival analysis
has gained increasing attention with advances in multimodal learning. However,
current methods often ignore biological characteristics, such as heterogeneity
and sparsity, both within and across modalities, ultimately limiting their
adaptability to clinical practice. To address these challenges, we propose
AdaMHF: Adaptive Multimodal Hierarchical Fusion, a framework designed for
efficient, comprehensive, and tailored feature extraction and fusion. AdaMHF is
specifically adapted to the uniqueness of medical data, enabling accurate
predictions with minimal resource consumption, even under challenging scenarios
with missing modalities. Initially, AdaMHF employs an experts expansion and
residual structure to activate specialized experts for extracting heterogeneous
and sparse features. Extracted tokens undergo refinement via selection and
aggregation, reducing the weight of non-dominant features while preserving
comprehensive information. Subsequently, the encoded features are
hierarchically fused, allowing multi-grained interactions across modalities to
be captured. Furthermore, we introduce a survival prediction benchmark designed
to resolve scenarios with missing modalities, mirroring real-world clinical
conditions. Extensive experiments on TCGA datasets demonstrate that AdaMHF
surpasses current state-of-the-art (SOTA) methods, showcasing exceptional
performance in both complete and incomplete modality settings.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 03:27:55 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhang",
"Shuaiyu",
""
],
[
"Lin",
"Xun",
""
],
[
"Zhang",
"Rongxiang",
""
],
[
"Bai",
"Yu",
""
],
[
"Xu",
"Yong",
""
],
[
"Tan",
"Tao",
""
],
[
"Zheng",
"Xunbin",
""
],
[
"Yu",
"Zitong",
""
]
] | TITLE: AdaMHF: Adaptive Multimodal Hierarchical Fusion for Survival Prediction
ABSTRACT: The integration of pathologic images and genomic data for survival analysis
has gained increasing attention with advances in multimodal learning. However,
current methods often ignore biological characteristics, such as heterogeneity
and sparsity, both within and across modalities, ultimately limiting their
adaptability to clinical practice. To address these challenges, we propose
AdaMHF: Adaptive Multimodal Hierarchical Fusion, a framework designed for
efficient, comprehensive, and tailored feature extraction and fusion. AdaMHF is
specifically adapted to the uniqueness of medical data, enabling accurate
predictions with minimal resource consumption, even under challenging scenarios
with missing modalities. Initially, AdaMHF employs an experts expansion and
residual structure to activate specialized experts for extracting heterogeneous
and sparse features. Extracted tokens undergo refinement via selection and
aggregation, reducing the weight of non-dominant features while preserving
comprehensive information. Subsequently, the encoded features are
hierarchically fused, allowing multi-grained interactions across modalities to
be captured. Furthermore, we introduce a survival prediction benchmark designed
to resolve scenarios with missing modalities, mirroring real-world clinical
conditions. Extensive experiments on TCGA datasets demonstrate that AdaMHF
surpasses current state-of-the-art (SOTA) methods, showcasing exceptional
performance in both complete and incomplete modality settings.
|
2503.21127 | Ziyi Zhou | Ziyi Zhou, Xiaoming Zhang, Shenghan Tan, Litian Zhang, Chaozhuo Li | Collaborative Evolution: Multi-Round Learning Between Large and Small
Language Models for Emergent Fake News Detection | null | null | null | null | cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of fake news on social media platforms has exerted a
substantial influence on society, leading to discernible impacts and
deleterious consequences. Conventional deep learning methodologies employing
small language models (SLMs) suffer from the necessity for extensive supervised
training and the challenge of adapting to rapidly evolving circumstances. Large
language models (LLMs), despite their robust zero-shot capabilities, have
fallen short in effectively identifying fake news due to a lack of pertinent
demonstrations and the dynamic nature of knowledge. In this paper, a novel
framework Multi-Round Collaboration Detection (MRCD) is proposed to address
these aforementioned limitations. The MRCD framework is capable of enjoying the
merits from both LLMs and SLMs by integrating their generalization abilities
and specialized functionalities, respectively. Our approach features a
two-stage retrieval module that selects relevant and up-to-date demonstrations
and knowledge, enhancing in-context learning for better detection of emerging
news events. We further design a multi-round learning framework to ensure more
reliable detection results. Our framework MRCD achieves SOTA results on two
real-world datasets Pheme and Twitter16, with accuracy improvements of 7.4\%
and 12.8\% compared to using only SLMs, which effectively addresses the
limitations of current models and improves the detection of emergent fake news.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 03:39:26 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhou",
"Ziyi",
""
],
[
"Zhang",
"Xiaoming",
""
],
[
"Tan",
"Shenghan",
""
],
[
"Zhang",
"Litian",
""
],
[
"Li",
"Chaozhuo",
""
]
] | TITLE: Collaborative Evolution: Multi-Round Learning Between Large and Small
Language Models for Emergent Fake News Detection
ABSTRACT: The proliferation of fake news on social media platforms has exerted a
substantial influence on society, leading to discernible impacts and
deleterious consequences. Conventional deep learning methodologies employing
small language models (SLMs) suffer from the necessity for extensive supervised
training and the challenge of adapting to rapidly evolving circumstances. Large
language models (LLMs), despite their robust zero-shot capabilities, have
fallen short in effectively identifying fake news due to a lack of pertinent
demonstrations and the dynamic nature of knowledge. In this paper, a novel
framework Multi-Round Collaboration Detection (MRCD) is proposed to address
these aforementioned limitations. The MRCD framework is capable of enjoying the
merits from both LLMs and SLMs by integrating their generalization abilities
and specialized functionalities, respectively. Our approach features a
two-stage retrieval module that selects relevant and up-to-date demonstrations
and knowledge, enhancing in-context learning for better detection of emerging
news events. We further design a multi-round learning framework to ensure more
reliable detection results. Our framework MRCD achieves SOTA results on two
real-world datasets Pheme and Twitter16, with accuracy improvements of 7.4\%
and 12.8\% compared to using only SLMs, which effectively addresses the
limitations of current models and improves the detection of emergent fake news.
|
2503.21140 | Junjie Chen | Junjie Chen, Weilong Chen, Yifan Zuo, Yuming Fang | Recurrent Feature Mining and Keypoint Mixup Padding for
Category-Agnostic Pose Estimation | null | Published in CVPR 2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Category-agnostic pose estimation aims to locate keypoints on query images
according to a few annotated support images for arbitrary novel classes.
Existing methods generally extract support features via heatmap pooling, and
obtain interacted features from support and query via cross-attention. Hence,
these works neglect to mine fine-grained and structure-aware (FGSA) features
from both support and query images, which are crucial for pixel-level keypoint
localization. To this end, we propose a novel yet concise framework, which
recurrently mines FGSA features from both support and query images.
Specifically, we design an FGSA mining module based on the deformable attention
mechanism. On the one hand, we mine fine-grained features by applying a
deformable attention head over multi-scale feature maps. On the other hand, we
mine structure-aware features by offsetting the reference points of keypoints
to their linked keypoints. By means of the above module, we recurrently mine FGSA
features from support and query images, and thus obtain better support features
and query estimations. In addition, we propose to use mixup keypoints to pad
various classes to a unified keypoint number, which could provide richer
supervision than the zero padding used in existing works. We conduct extensive
experiments and in-depth studies on the large-scale MP-100 dataset, and outperform the
SOTA method dramatically (+3.2\% PCK@0.2). Code is available at
https://github.com/chenbys/FMMP.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 04:09:13 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Chen",
"Junjie",
""
],
[
"Chen",
"Weilong",
""
],
[
"Zuo",
"Yifan",
""
],
[
"Fang",
"Yuming",
""
]
] | TITLE: Recurrent Feature Mining and Keypoint Mixup Padding for
Category-Agnostic Pose Estimation
ABSTRACT: Category-agnostic pose estimation aims to locate keypoints on query images
according to a few annotated support images for arbitrary novel classes.
Existing methods generally extract support features via heatmap pooling, and
obtain interacted features from support and query via cross-attention. Hence,
these works neglect to mine fine-grained and structure-aware (FGSA) features
from both support and query images, which are crucial for pixel-level keypoint
localization. To this end, we propose a novel yet concise framework, which
recurrently mines FGSA features from both support and query images.
Specifically, we design an FGSA mining module based on the deformable attention
mechanism. On the one hand, we mine fine-grained features by applying a
deformable attention head over multi-scale feature maps. On the other hand, we
mine structure-aware features by offsetting the reference points of keypoints
to their linked keypoints. By means of the above module, we recurrently mine FGSA
features from support and query images, and thus obtain better support features
and query estimations. In addition, we propose to use mixup keypoints to pad
various classes to a unified keypoint number, which could provide richer
supervision than the zero padding used in existing works. We conduct extensive
experiments and in-depth studies on the large-scale MP-100 dataset, and outperform the
SOTA method dramatically (+3.2\% PCK@0.2). Code is available at
https://github.com/chenbys/FMMP.
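To make the padding idea above concrete, here is a minimal NumPy sketch of padding a class with fewer keypoints up to a unified count using convex combinations of existing keypoints instead of zeros. The pairing and weighting scheme (a uniform random mixup of two keypoints) is an assumption for illustration, not necessarily the scheme used in the paper.

import numpy as np

def mixup_pad_keypoints(kpts, target_num, rng=None):
    """Pad a (K, 2) keypoint array to target_num rows with mixup keypoints."""
    rng = rng or np.random.default_rng()
    kpts = np.asarray(kpts, dtype=float)
    pads = []
    for _ in range(max(0, target_num - len(kpts))):
        i, j = rng.choice(len(kpts), size=2, replace=False)   # pick two real keypoints
        lam = rng.uniform(0.0, 1.0)                           # mixing coefficient
        pads.append(lam * kpts[i] + (1.0 - lam) * kpts[j])    # convex combination
    return np.vstack([kpts] + pads) if pads else kpts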
|
2503.21150 | Yuhan Liu | Yuhan Liu, Yixiong Zou, Yuhua Li, Ruixuan Li | The Devil is in Low-Level Features for Cross-Domain Few-Shot
Segmentation | Accepted by CVPR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-Domain Few-Shot Segmentation (CDFSS) is proposed to transfer the
pixel-level segmentation capabilities learned from large-scale source-domain
datasets to downstream target-domain datasets, with only a few annotated images
per class. In this paper, we focus on a well-observed but unresolved phenomenon
in CDFSS: for target domains, particularly those distant from the source
domain, segmentation performance peaks at the very early epochs, and declines
sharply as the source-domain training proceeds. We delve into this phenomenon
for an interpretation: low-level features are vulnerable to domain shifts,
leading to sharper loss landscapes during the source-domain training, which is
the devil of CDFSS. Based on this phenomenon and interpretation, we further
propose a method that includes two plug-and-play modules: one to flatten the
loss landscapes for low-level features during source-domain training as a novel
sharpness-aware minimization method, and the other to directly supplement
target-domain information to the model during target-domain testing by
low-level-based calibration. Extensive experiments on four target datasets
validate our rationale and demonstrate that our method surpasses the
state-of-the-art method in CDFSS significantly by 3.71% and 5.34% average MIoU
in 1-shot and 5-shot scenarios, respectively.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 04:37:52 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Liu",
"Yuhan",
""
],
[
"Zou",
"Yixiong",
""
],
[
"Li",
"Yuhua",
""
],
[
"Li",
"Ruixuan",
""
]
] | TITLE: The Devil is in Low-Level Features for Cross-Domain Few-Shot
Segmentation
ABSTRACT: Cross-Domain Few-Shot Segmentation (CDFSS) is proposed to transfer the
pixel-level segmentation capabilities learned from large-scale source-domain
datasets to downstream target-domain datasets, with only a few annotated images
per class. In this paper, we focus on a well-observed but unresolved phenomenon
in CDFSS: for target domains, particularly those distant from the source
domain, segmentation performance peaks at the very early epochs, and declines
sharply as the source-domain training proceeds. We delve into this phenomenon
for an interpretation: low-level features are vulnerable to domain shifts,
leading to sharper loss landscapes during the source-domain training, which is
the devil of CDFSS. Based on this phenomenon and interpretation, we further
propose a method that includes two plug-and-play modules: one to flatten the
loss landscapes for low-level features during source-domain training as a novel
sharpness-aware minimization method, and the other to directly supplement
target-domain information to the model during target-domain testing by
low-level-based calibration. Extensive experiments on four target datasets
validate our rationale and demonstrate that our method surpasses the
state-of-the-art method in CDFSS significantly by 3.71% and 5.34% average MIoU
in 1-shot and 5-shot scenarios, respectively.
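For readers unfamiliar with sharpness-aware minimization, the sketch below shows a generic SAM-style update in PyTorch: take a gradient ascent step of radius rho, recompute the gradient at the perturbed weights, then update the original weights with that gradient. This is the standard formulation, not the paper's low-level-feature-specific variant; model, loss_fn, and rho are placeholders.

import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # First forward/backward pass: gradient at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # Perturb weights along the normalized gradient direction (ascent step).
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm(2) for p in model.parameters() if p.grad is not None]), 2)
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # Second pass: gradient at the perturbed ("sharpness-aware") point.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # Undo the perturbation, then update with the perturbed gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()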
|
2503.21154 | Kanishka Ranaweera Mr. | Kanishka Ranaweera, Dinh C. Nguyen, Pubudu N. Pathirana, David Smith,
Ming Ding, Thierry Rakotoarivelo and Aruna Seneviratne | Federated Learning with Differential Privacy: An Utility-Enhanced
Approach | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated learning has emerged as an attractive approach to protect data
privacy by eliminating the need for sharing clients' data while reducing
communication costs compared with centralized machine learning algorithms.
However, recent studies have shown that federated learning alone does not
guarantee privacy, as private data may still be inferred from the uploaded
parameters to the central server. In order to successfully avoid data leakage,
adopting differential privacy (DP) in the local optimization process or in the
local update aggregation process has emerged as two feasible ways for achieving
sample-level or user-level privacy guarantees respectively, in federated
learning models. However, compared to their non-private equivalents, these
approaches suffer from a poor utility. To improve the privacy-utility
trade-off, we present a modification to these vanilla differentially private
algorithms based on a Haar wavelet transformation step and a novel noise
injection scheme that significantly lowers the asymptotic bound of the noise
variance. We also present a holistic convergence analysis of our proposed
algorithm, showing that our method yields better convergence performance than
the vanilla DP algorithms. Numerical experiments on real-world datasets
demonstrate that our method outperforms existing approaches in model utility
while maintaining the same privacy guarantees.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 04:48:29 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ranaweera",
"Kanishka",
""
],
[
"Nguyen",
"Dinh C.",
""
],
[
"Pathirana",
"Pubudu N.",
""
],
[
"Smith",
"David",
""
],
[
"Ding",
"Ming",
""
],
[
"Rakotoarivelo",
"Thierry",
""
],
[
"Seneviratne",
"Aruna",
""
]
] | TITLE: Federated Learning with Differential Privacy: An Utility-Enhanced
Approach
ABSTRACT: Federated learning has emerged as an attractive approach to protect data
privacy by eliminating the need for sharing clients' data while reducing
communication costs compared with centralized machine learning algorithms.
However, recent studies have shown that federated learning alone does not
guarantee privacy, as private data may still be inferred from the uploaded
parameters to the central server. In order to successfully avoid data leakage,
adopting differential privacy (DP) in the local optimization process or in the
local update aggregation process has emerged as two feasible ways for achieving
sample-level or user-level privacy guarantees respectively, in federated
learning models. However, compared to their non-private equivalents, these
approaches suffer from a poor utility. To improve the privacy-utility
trade-off, we present a modification to these vanilla differentially private
algorithms based on a Haar wavelet transformation step and a novel noise
injection scheme that significantly lowers the asymptotic bound of the noise
variance. We also present a holistic convergence analysis of our proposed
algorithm, showing that our method yields better convergence performance than
the vanilla DP algorithms. Numerical experiments on real-world datasets
demonstrate that our method outperforms existing approaches in model utility
while maintaining the same privacy guarantees.
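As context for the "vanilla differentially private algorithms" that the modification above starts from, here is a minimal NumPy sketch of user-level DP aggregation in federated learning: clip each client update, average, and add Gaussian noise. The Haar-wavelet transformation and the improved noise-injection scheme from the abstract are intentionally not reproduced; clip_norm and noise_multiplier are illustrative values.

import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip-and-noise aggregation of flat client update vectors (vanilla baseline)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:                       # each u: flat np.ndarray
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    agg = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound and the number of clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return agg + rng.normal(0.0, sigma, size=agg.shape)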
|
2503.21155 | Jo\~ao E. Batista | Jo\~ao Eduardo Batista | Embedding Domain-Specific Knowledge from LLMs into the Feature
Engineering Pipeline | 9 pages, 4 figures, 5 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Feature engineering is mandatory in the machine learning pipeline to obtain
robust models. While evolutionary computation is well-known for its great
results both in feature selection and feature construction, its methods are
computationally expensive due to the large number of evaluations required to
induce the final model. Part of the reason why these algorithms require a large
number of evaluations is their lack of domain-specific knowledge, resulting in
a lot of random guessing during evolution. In this work, we propose using Large
Language Models (LLMs) as an initial feature construction step to add knowledge
to the dataset. By doing so, our results show that the evolution can converge
faster, saving us computational resources. The proposed approach only provides
the names of the features in the dataset and the target objective to the LLM,
making it usable even when working with datasets containing private data. While
consistent improvements to test performance were only observed for one-third of
the datasets (CSS, PM, and IM10), possibly due to problems being easily
explored by LLMs, this approach only decreased the model performance in 1/77
test cases. Additionally, this work introduces the M6GP feature engineering
algorithm to symbolic regression, showing it can improve the results of the
random forest regressor and produce competitive results with its predecessor,
M3GP.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 04:48:58 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Batista",
"João Eduardo",
""
]
] | TITLE: Embedding Domain-Specific Knowledge from LLMs into the Feature
Engineering Pipeline
ABSTRACT: Feature engineering is mandatory in the machine learning pipeline to obtain
robust models. While evolutionary computation is well-known for its great
results both in feature selection and feature construction, its methods are
computationally expensive due to the large number of evaluations required to
induce the final model. Part of the reason why these algorithms require a large
number of evaluations is their lack of domain-specific knowledge, resulting in
a lot of random guessing during evolution. In this work, we propose using Large
Language Models (LLMs) as an initial feature construction step to add knowledge
to the dataset. By doing so, our results show that the evolution can converge
faster, saving us computational resources. The proposed approach only provides
the names of the features in the dataset and the target objective to the LLM,
making it usable even when working with datasets containing private data. While
consistent improvements to test performance were only observed for one-third of
the datasets (CSS, PM, and IM10), possibly due to problems being easily
explored by LLMs, this approach only decreased the model performance in 1/77
test cases. Additionally, this work introduces the M6GP feature engineering
algorithm to symbolic regression, showing it can improve the results of the
random forest regressor and produce competitive results with its predecessor,
M3GP.
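A minimal sketch of the privacy-friendly prompting step described above: only feature names and the target objective are placed in the prompt, never any row-level values. The exact prompt wording and the query_llm helper are hypothetical.

def build_feature_prompt(feature_names, target):
    """Build an LLM prompt from column names and the prediction target only."""
    return (
        "You are helping with feature engineering for a tabular dataset.\n"
        f"Existing features: {', '.join(feature_names)}\n"
        f"Prediction target: {target}\n"
        "Suggest up to 5 new features as arithmetic combinations of the "
        "existing ones, one per line, in the form name = expression."
    )

# Example usage (no private data leaves the machine):
prompt = build_feature_prompt(["age", "income", "num_loans"], "credit_default")
# suggestions = query_llm(prompt)   # hypothetical LLM call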
|
2503.21159 | Kanishka Ranaweera Mr. | Kanishka Ranaweera, David Smith, Pubudu N. Pathirana, Ming Ding,
Thierry Rakotoarivelo and Aruna Seneviratne | Multi-Objective Optimization for Privacy-Utility Balance in
Differentially Private Federated Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL) enables collaborative model training across
distributed clients without sharing raw data, making it a promising approach
for privacy-preserving machine learning. However, ensuring differential privacy
(DP) in FL presents challenges due to the trade-off between model utility and
privacy protection. Clipping gradients before aggregation is a common strategy
to limit privacy loss, but selecting an optimal clipping norm is non-trivial,
as excessively high values compromise privacy, while overly restrictive
clipping degrades model performance. In this work, we propose an adaptive
clipping mechanism that dynamically adjusts the clipping norm using a
multi-objective optimization framework. By integrating privacy and utility
considerations into the optimization objective, our approach balances privacy
preservation with model accuracy. We theoretically analyze the convergence
properties of our method and demonstrate its effectiveness through extensive
experiments on MNIST, Fashion-MNIST, and CIFAR-10 datasets. Our results show
that adaptive clipping consistently outperforms fixed-clipping baselines,
achieving improved accuracy under the same privacy constraints. This work
highlights the potential of dynamic clipping strategies to enhance
privacy-utility trade-offs in differentially private federated learning.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 04:57:05 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ranaweera",
"Kanishka",
""
],
[
"Smith",
"David",
""
],
[
"Pathirana",
"Pubudu N.",
""
],
[
"Ding",
"Ming",
""
],
[
"Rakotoarivelo",
"Thierry",
""
],
[
"Seneviratne",
"Aruna",
""
]
] | TITLE: Multi-Objective Optimization for Privacy-Utility Balance in
Differentially Private Federated Learning
ABSTRACT: Federated learning (FL) enables collaborative model training across
distributed clients without sharing raw data, making it a promising approach
for privacy-preserving machine learning. However, ensuring differential privacy
(DP) in FL presents challenges due to the trade-off between model utility and
privacy protection. Clipping gradients before aggregation is a common strategy
to limit privacy loss, but selecting an optimal clipping norm is non-trivial,
as excessively high values compromise privacy, while overly restrictive
clipping degrades model performance. In this work, we propose an adaptive
clipping mechanism that dynamically adjusts the clipping norm using a
multi-objective optimization framework. By integrating privacy and utility
considerations into the optimization objective, our approach balances privacy
preservation with model accuracy. We theoretically analyze the convergence
properties of our method and demonstrate its effectiveness through extensive
experiments on MNIST, Fashion-MNIST, and CIFAR-10 datasets. Our results show
that adaptive clipping consistently outperforms fixed-clipping baselines,
achieving improved accuracy under the same privacy constraints. This work
highlights the potential of dynamic clipping strategies to enhance
privacy-utility trade-offs in differentially private federated learning.
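As one concrete instance of dynamically adjusting a clipping norm (though not the multi-objective formulation proposed above), the sketch below nudges the norm toward a target quantile of the observed client update norms; target_quantile and lr are illustrative assumptions.

import numpy as np

def update_clip_norm(clip_norm, update_norms, target_quantile=0.5, lr=0.2):
    """Adapt the clipping norm so roughly target_quantile of updates fit under it."""
    frac_within = np.mean(np.asarray(update_norms) <= clip_norm)
    # Geometric update: shrink if too many updates already fit, grow if too few.
    return clip_norm * np.exp(-lr * (frac_within - target_quantile))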
|
2503.21160 | Yuhan Wang | Yuhan Wang | A Data Balancing and Ensemble Learning Approach for Credit Card Fraud
Detection | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research introduces an innovative method for identifying credit card
fraud by combining the SMOTE-KMEANS technique with an ensemble machine learning
model. The proposed model was benchmarked against traditional models such as
logistic regression, decision trees, random forests, and support vector
machines. Performance was evaluated using metrics, including accuracy, recall,
and area under the curve (AUC). The results demonstrated that the proposed
model achieved superior performance, with an AUC of 0.96 when combined with the
SMOTE-KMEANS algorithm. This indicates a significant improvement in detecting
fraudulent transactions while maintaining high precision and recall. The study
also explores the application of different oversampling techniques to enhance
the performance of various classifiers. The findings suggest that the proposed
method is robust and effective for classification tasks on balanced datasets.
Future research directions include further optimization of the SMOTE-KMEANS
approach and its integration into existing fraud detection systems to enhance
financial security and consumer protection.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 04:59:45 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wang",
"Yuhan",
""
]
] | TITLE: A Data Balancing and Ensemble Learning Approach for Credit Card Fraud
Detection
ABSTRACT: This research introduces an innovative method for identifying credit card
fraud by combining the SMOTE-KMEANS technique with an ensemble machine learning
model. The proposed model was benchmarked against traditional models such as
logistic regression, decision trees, random forests, and support vector
machines. Performance was evaluated using metrics, including accuracy, recall,
and area under the curve (AUC). The results demonstrated that the proposed
model achieved superior performance, with an AUC of 0.96 when combined with the
SMOTE-KMEANS algorithm. This indicates a significant improvement in detecting
fraudulent transactions while maintaining high precision and recall. The study
also explores the application of different oversampling techniques to enhance
the performance of various classifiers. The findings suggest that the proposed
method is robust and effective for classification tasks on balanced datasets.
Future research directions include further optimization of the SMOTE-KMEANS
approach and its integration into existing fraud detection systems to enhance
financial security and consumer protection.
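A minimal sketch of the SMOTE-KMEANS plus ensemble recipe, using imbalanced-learn's KMeansSMOTE and a soft-voting ensemble from scikit-learn. The base learners and hyperparameters are assumptions and do not claim to reproduce the reported AUC of 0.96.

from imblearn.over_sampling import KMeansSMOTE
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def train_fraud_model(X, y, random_state=0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=random_state)
    # Oversample the minority (fraud) class with cluster-aware SMOTE.
    X_bal, y_bal = KMeansSMOTE(random_state=random_state).fit_resample(X_tr, y_tr)
    # Simple soft-voting ensemble over two heterogeneous base learners.
    model = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=random_state)),
                    ("lr", LogisticRegression(max_iter=1000))],
        voting="soft")
    model.fit(X_bal, y_bal)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return model, auc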
|
2503.21164 | Samra Irshad | Samra Irshad, Seungkyu Lee, Nassir Navab, Hong Joo Lee, and Seong Tae
Kim | Adversarial Wear and Tear: Exploiting Natural Damage for Generating
Physical-World Adversarial Examples | 11 pages, 9 figures | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The presence of adversarial examples in the physical world poses significant
challenges to the deployment of Deep Neural Networks in safety-critical
applications such as autonomous driving. Most existing methods for crafting
physical-world adversarial examples are ad-hoc, relying on temporary
modifications like shadows, laser beams, or stickers that are tailored to
specific scenarios. In this paper, we introduce a new class of physical-world
adversarial examples, AdvWT, which draws inspiration from the naturally
occurring phenomenon of `wear and tear', an inherent property of physical
objects. Unlike manually crafted perturbations, `wear and tear' emerges
organically over time due to environmental degradation, as seen in the gradual
deterioration of outdoor signboards. To achieve this, AdvWT follows a two-step
approach. First, a GAN-based, unsupervised image-to-image translation network
is employed to model these naturally occurring damages, particularly in the
context of outdoor signboards. The translation network encodes the
characteristics of damaged signs into a latent `damage style code'. In the
second step, we introduce adversarial perturbations into the style code,
strategically optimizing its transformation process. This manipulation subtly
alters the damage style representation, guiding the network to generate
adversarial images where the appearance of damages remains perceptually
realistic, while simultaneously ensuring their effectiveness in misleading
neural networks. Through comprehensive experiments on two traffic sign
datasets, we show that AdvWT effectively misleads DNNs in both digital and
physical domains. AdvWT achieves an effective attack success rate, greater
robustness, and a more natural appearance compared to existing physical-world
adversarial examples. Additionally, integrating AdvWT into training enhances a
model's generalizability to real-world damaged signs.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 05:19:41 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Irshad",
"Samra",
""
],
[
"Lee",
"Seungkyu",
""
],
[
"Navab",
"Nassir",
""
],
[
"Lee",
"Hong Joo",
""
],
[
"Kim",
"Seong Tae",
""
]
] | TITLE: Adversarial Wear and Tear: Exploiting Natural Damage for Generating
Physical-World Adversarial Examples
ABSTRACT: The presence of adversarial examples in the physical world poses significant
challenges to the deployment of Deep Neural Networks in safety-critical
applications such as autonomous driving. Most existing methods for crafting
physical-world adversarial examples are ad-hoc, relying on temporary
modifications like shadows, laser beams, or stickers that are tailored to
specific scenarios. In this paper, we introduce a new class of physical-world
adversarial examples, AdvWT, which draws inspiration from the naturally
occurring phenomenon of `wear and tear', an inherent property of physical
objects. Unlike manually crafted perturbations, `wear and tear' emerges
organically over time due to environmental degradation, as seen in the gradual
deterioration of outdoor signboards. To achieve this, AdvWT follows a two-step
approach. First, a GAN-based, unsupervised image-to-image translation network
is employed to model these naturally occurring damages, particularly in the
context of outdoor signboards. The translation network encodes the
characteristics of damaged signs into a latent `damage style code'. In the
second step, we introduce adversarial perturbations into the style code,
strategically optimizing its transformation process. This manipulation subtly
alters the damage style representation, guiding the network to generate
adversarial images where the appearance of damages remains perceptually
realistic, while simultaneously ensuring their effectiveness in misleading
neural networks. Through comprehensive experiments on two traffic sign
datasets, we show that AdvWT effectively misleads DNNs in both digital and
physical domains. AdvWT achieves an effective attack success rate, greater
robustness, and a more natural appearance compared to existing physical-world
adversarial examples. Additionally, integrating AdvWT into training enhances a
model's generalizability to real-world damaged signs.
|
2503.21169 | Jiahao Lyu | Jiahao Lyu, Minghua Zhao, Jing Hu, Xuewen Huang, Yifei Chen, Shuangli
Du | VADMamba: Exploring State Space Models for Fast Video Anomaly Detection | Accepted by ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video anomaly detection (VAD) methods are mostly CNN-based or
Transformer-based, achieving impressive results, but the focus on detection
accuracy often comes at the expense of inference speed. The emergence of state
space models in computer vision, exemplified by the Mamba model, demonstrates
improved computational efficiency through selective scans and showcases the
great potential for long-range modeling. Our study pioneers the application of
Mamba to VAD, dubbed VADMamba, which is based on multi-task learning for frame
prediction and optical flow reconstruction. Specifically, we propose the
VQ-Mamba Unet (VQ-MaU) framework, which incorporates a Vector Quantization (VQ)
layer and Mamba-based Non-negative Visual State Space (NVSS) block.
Furthermore, two individual VQ-MaU networks separately predict frames and
reconstruct corresponding optical flows, further boosting accuracy through a
clip-level fusion evaluation strategy. Experimental results validate the
efficacy of the proposed VADMamba across three benchmark datasets,
demonstrating superior performance in inference speed compared to previous
work. Code is available at https://github.com/jLooo/VADMamba.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 05:38:12 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Lyu",
"Jiahao",
""
],
[
"Zhao",
"Minghua",
""
],
[
"Hu",
"Jing",
""
],
[
"Huang",
"Xuewen",
""
],
[
"Chen",
"Yifei",
""
],
[
"Du",
"Shuangli",
""
]
] | TITLE: VADMamba: Exploring State Space Models for Fast Video Anomaly Detection
ABSTRACT: Video anomaly detection (VAD) methods are mostly CNN-based or
Transformer-based, achieving impressive results, but the focus on detection
accuracy often comes at the expense of inference speed. The emergence of state
space models in computer vision, exemplified by the Mamba model, demonstrates
improved computational efficiency through selective scans and showcases the
great potential for long-range modeling. Our study pioneers the application of
Mamba to VAD, dubbed VADMamba, which is based on multi-task learning for frame
prediction and optical flow reconstruction. Specifically, we propose the
VQ-Mamba Unet (VQ-MaU) framework, which incorporates a Vector Quantization (VQ)
layer and Mamba-based Non-negative Visual State Space (NVSS) block.
Furthermore, two individual VQ-MaU networks separately predict frames and
reconstruct corresponding optical flows, further boosting accuracy through a
clip-level fusion evaluation strategy. Experimental results validate the
efficacy of the proposed VADMamba across three benchmark datasets,
demonstrating superior performance in inference speed compared to previous
work. Code is available at https://github.com/jLooo/VADMamba.
|
2503.21188 | Aixin Sun | Aixin Sun | Are We Solving a Well-Defined Problem? A Task-Centric Perspective on
Recommendation Tasks | Work in progress | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Recommender systems (RecSys) leverage user interaction history to predict and
suggest relevant items, shaping user experiences across various domains. While
many studies adopt a general problem definition, i.e., to recommend preferred
items to users based on past interactions, such abstraction often lacks the
domain-specific nuances necessary for practical deployment. However, models are
frequently evaluated using datasets from online recommender platforms, which
inherently reflect these specificities. In this paper, we analyze RecSys task
formulations, emphasizing key components such as input-output structures,
temporal dynamics, and candidate item selection. All these factors directly
impact offline evaluation. We further examine the complexities of user-item
interactions, including decision-making costs, multi-step engagements, and
unobservable interactions, which may influence model design and loss functions.
Additionally, we explore the balance between task specificity and model
generalizability, highlighting how well-defined task formulations serve as the
foundation for robust evaluation and effective solution development. By
clarifying task definitions and their implications, this work provides a
structured perspective on RecSys research. The goal is to help researchers
better navigate the field, particularly in understanding specificities of the
RecSys tasks and ensuring fair and meaningful evaluations.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 06:10:22 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Sun",
"Aixin",
""
]
] | TITLE: Are We Solving a Well-Defined Problem? A Task-Centric Perspective on
Recommendation Tasks
ABSTRACT: Recommender systems (RecSys) leverage user interaction history to predict and
suggest relevant items, shaping user experiences across various domains. While
many studies adopt a general problem definition, i.e., to recommend preferred
items to users based on past interactions, such abstraction often lacks the
domain-specific nuances necessary for practical deployment. However, models are
frequently evaluated using datasets from online recommender platforms, which
inherently reflect these specificities. In this paper, we analyze RecSys task
formulations, emphasizing key components such as input-output structures,
temporal dynamics, and candidate item selection. All these factors directly
impact offline evaluation. We further examine the complexities of user-item
interactions, including decision-making costs, multi-step engagements, and
unobservable interactions, which may influence model design and loss functions.
Additionally, we explore the balance between task specificity and model
generalizability, highlighting how well-defined task formulations serve as the
foundation for robust evaluation and effective solution development. By
clarifying task definitions and their implications, this work provides a
structured perspective on RecSys research. The goal is to help researchers
better navigate the field, particularly in understanding specificities of the
RecSys tasks and ensuring fair and meaningful evaluations.
|
2503.21190 | Erika Mori | Erika Mori, Yue Qiu, Hirokatsu Kataoka and Yoshimitsu Aoki | Leveraging LLMs with Iterative Loop Structure for Enhanced Social
Intelligence in Video Question Answering | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Social intelligence, the ability to interpret emotions, intentions, and
behaviors, is essential for effective communication and adaptive responses. As
robots and AI systems become more prevalent in caregiving, healthcare, and
education, the demand for AI that can interact naturally with humans grows.
However, creating AI that seamlessly integrates multiple modalities, such as
vision and speech, remains a challenge. Current video-based methods for social
intelligence rely on general video recognition or emotion recognition
techniques, often overlooking the unique elements inherent in human interactions.
To address this, we propose the Looped Video Debating (LVD) framework, which
integrates Large Language Models (LLMs) with visual information, such as facial
expressions and body movements, to enhance the transparency and reliability of
question-answering tasks involving human interaction videos. Our results on the
Social-IQ 2.0 benchmark show that LVD achieves state-of-the-art performance
without fine-tuning. Furthermore, supplementary human annotations on existing
datasets provide insights into the model's accuracy, guiding future
improvements in AI-driven social intelligence.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 06:14:21 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Mori",
"Erika",
""
],
[
"Qiu",
"Yue",
""
],
[
"Kataoka",
"Hirokatsu",
""
],
[
"Aoki",
"Yoshimitsu",
""
]
] | TITLE: Leveraging LLMs with Iterative Loop Structure for Enhanced Social
Intelligence in Video Question Answering
ABSTRACT: Social intelligence, the ability to interpret emotions, intentions, and
behaviors, is essential for effective communication and adaptive responses. As
robots and AI systems become more prevalent in caregiving, healthcare, and
education, the demand for AI that can interact naturally with humans grows.
However, creating AI that seamlessly integrates multiple modalities, such as
vision and speech, remains a challenge. Current video-based methods for social
intelligence rely on general video recognition or emotion recognition
techniques, often overlooking the unique elements inherent in human interactions.
To address this, we propose the Looped Video Debating (LVD) framework, which
integrates Large Language Models (LLMs) with visual information, such as facial
expressions and body movements, to enhance the transparency and reliability of
question-answering tasks involving human interaction videos. Our results on the
Social-IQ 2.0 benchmark show that LVD achieves state-of-the-art performance
without fine-tuning. Furthermore, supplementary human annotations on existing
datasets provide insights into the model's accuracy, guiding future
improvements in AI-driven social intelligence.
|
2503.21206 | Yuntao Gui | Yuntao Gui, Peiqi Yin, Xiao Yan, Chaorui Zhang, Weixi Zhang, James
Cheng | PilotANN: Memory-Bounded GPU Acceleration for Vector Search | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approximate Nearest Neighbor Search (ANNS) has become fundamental to modern
deep learning applications, having gained particular prominence through its
integration into recent generative models that work with increasingly complex
datasets and higher vector dimensions. Existing CPU-only solutions, even the
most efficient graph-based ones, struggle to meet these growing computational
demands, while GPU-only solutions face memory constraints. As a solution, we
propose PilotANN, a hybrid CPU-GPU system for graph-based ANNS that utilizes
both CPU's abundant RAM and GPU's parallel processing capabilities. Our
approach decomposes the graph traversal process of top-$k$ search into three
stages: GPU-accelerated subgraph traversal using SVD-reduced vectors, CPU
refinement and precise search using complete vectors. Furthermore, we introduce
fast entry selection to improve search starting points while maximizing GPU
utilization. Experimental results demonstrate that PilotANN achieves $3.9 - 5.4
\times$ speedup in throughput on 100-million scale datasets, and is able to
handle datasets up to $12 \times$ larger than the GPU memory. We offer a
complete open-source implementation at https://github.com/ytgui/PilotANN.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 06:48:18 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Gui",
"Yuntao",
""
],
[
"Yin",
"Peiqi",
""
],
[
"Yan",
"Xiao",
""
],
[
"Zhang",
"Chaorui",
""
],
[
"Zhang",
"Weixi",
""
],
[
"Cheng",
"James",
""
]
] | TITLE: PilotANN: Memory-Bounded GPU Acceleration for Vector Search
ABSTRACT: Approximate Nearest Neighbor Search (ANNS) has become fundamental to modern
deep learning applications, having gained particular prominence through its
integration into recent generative models that work with increasingly complex
datasets and higher vector dimensions. Existing CPU-only solutions, even the
most efficient graph-based ones, struggle to meet these growing computational
demands, while GPU-only solutions face memory constraints. As a solution, we
propose PilotANN, a hybrid CPU-GPU system for graph-based ANNS that utilizes
both CPU's abundant RAM and GPU's parallel processing capabilities. Our
approach decomposes the graph traversal process of top-$k$ search into three
stages: GPU-accelerated subgraph traversal using SVD-reduced vectors, CPU
refinement and precise search using complete vectors. Furthermore, we introduce
fast entry selection to improve search starting points while maximizing GPU
utilization. Experimental results demonstrate that PilotANN achieves $3.9 - 5.4
\times$ speedup in throughput on 100-million scale datasets, and is able to
handle datasets up to $12 \times$ larger than the GPU memory. We offer a
complete open-source implementation at https://github.com/ytgui/PilotANN.
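The following NumPy sketch illustrates the two-stage idea from the abstract: search first on SVD-reduced vectors, then refine the shortlisted candidates with the full vectors. Both stages here are brute force purely for clarity; the actual system performs the first stage as GPU graph traversal. d_reduced and n_candidates are illustrative values.

import numpy as np

def two_stage_search(db, query, d_reduced=32, n_candidates=100, k=10):
    """Coarse search on SVD-reduced vectors, then exact rerank on full vectors."""
    # Stage 1: project database and query into a low-rank subspace via SVD.
    _, _, vt = np.linalg.svd(db, full_matrices=False)
    proj = vt[:d_reduced].T                       # (d, d_reduced) projection matrix
    coarse = db @ proj                            # reduced database, (N, d_reduced)
    q_red = query @ proj
    cand = np.argsort(np.linalg.norm(coarse - q_red, axis=1))[:n_candidates]
    # Stage 2: exact distances on the full vectors, only for the candidates.
    exact = np.linalg.norm(db[cand] - query, axis=1)
    return cand[np.argsort(exact)[:k]]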
|
2503.21208 | Wenxun Qiu | Wenxuan Qiu, Chengxin Xie and Jingui Huang | An improved EfficientNetV2 for garbage classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an enhanced waste classification framework based on
EfficientNetV2 to address challenges in data acquisition cost, generalization,
and real-time performance. We propose a Channel-Efficient Attention
(CE-Attention) module that mitigates feature loss during global pooling without
introducing dimensional scaling, effectively enhancing critical feature
extraction. Additionally, a lightweight multi-scale spatial feature extraction
module (SAFM) is developed by integrating depthwise separable convolutions,
significantly reducing model complexity. Comprehensive data augmentation
strategies are further employed to improve generalization. Experiments on the
Huawei Cloud waste classification dataset demonstrate that our method achieves
a classification accuracy of 95.4\%, surpassing the baseline by 3.2\% and
outperforming mainstream models. The results validate the effectiveness of our
approach in balancing accuracy and efficiency for practical waste
classification scenarios.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 06:50:44 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Qiu",
"Wenxuan",
""
],
[
"Xie",
"Chengxin",
""
],
[
"Huang",
"Jingui",
""
]
] | TITLE: An improved EfficientNetV2 for garbage classification
ABSTRACT: This paper presents an enhanced waste classification framework based on
EfficientNetV2 to address challenges in data acquisition cost, generalization,
and real-time performance. We propose a Channel-Efficient Attention
(CE-Attention) module that mitigates feature loss during global pooling without
introducing dimensional scaling, effectively enhancing critical feature
extraction. Additionally, a lightweight multi-scale spatial feature extraction
module (SAFM) is developed by integrating depthwise separable convolutions,
significantly reducing model complexity. Comprehensive data augmentation
strategies are further employed to improve generalization. Experiments on the
Huawei Cloud waste classification dataset demonstrate that our method achieves
a classification accuracy of 95.4\%, surpassing the baseline by 3.2\% and
outperforming mainstream models. The results validate the effectiveness of our
approach in balancing accuracy and efficiency for practical waste
classification scenarios.
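For reference, the depthwise separable convolution that the lightweight SAFM module above builds on factorizes a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution. The PyTorch block below is the textbook form, not the paper's exact module; the normalization and activation choices are assumptions.

import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))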
|
2503.21210 | Yueying Gao | Yueying Gao, Dongliang Chang, Bingyao Yu, Haotian Qin, Lei Chen,
Kongming Liang, Zhanyu Ma | FakeReasoning: Towards Generalizable Forgery Detection and Reasoning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and interpretable detection of AI-generated images is essential for
mitigating risks associated with AI misuse. However, the substantial domain gap
among generative models makes it challenging to develop a generalizable forgery
detection model. Moreover, since every pixel in an AI-generated image is
synthesized, traditional saliency-based forgery explanation methods are not
well suited for this task. To address these challenges, we propose modeling
AI-generated image detection and explanation as a Forgery Detection and
Reasoning task (FDR-Task), leveraging vision-language models (VLMs) to provide
accurate detection through structured and reliable reasoning over forgery
attributes. To facilitate this task, we introduce the Multi-Modal Forgery
Reasoning dataset (MMFR-Dataset), a large-scale dataset containing 100K images
across 10 generative models, with 10 types of forgery reasoning annotations,
enabling comprehensive evaluation of FDR-Task. Additionally, we propose
FakeReasoning, a forgery detection and reasoning framework with two key
components. First, Forgery-Aligned Contrastive Learning enhances VLMs'
understanding of forgery-related semantics through both cross-modal and
intra-modal contrastive learning between images and forgery attribute
reasoning. Second, a Classification Probability Mapper bridges the optimization
gap between forgery detection and language modeling by mapping the output
logits of VLMs to calibrated binary classification probabilities. Experiments
across multiple generative models demonstrate that FakeReasoning not only
achieves robust generalization but also outperforms state-of-the-art methods on
both detection and reasoning tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 06:54:06 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Gao",
"Yueying",
""
],
[
"Chang",
"Dongliang",
""
],
[
"Yu",
"Bingyao",
""
],
[
"Qin",
"Haotian",
""
],
[
"Chen",
"Lei",
""
],
[
"Liang",
"Kongming",
""
],
[
"Ma",
"Zhanyu",
""
]
] | TITLE: FakeReasoning: Towards Generalizable Forgery Detection and Reasoning
ABSTRACT: Accurate and interpretable detection of AI-generated images is essential for
mitigating risks associated with AI misuse. However, the substantial domain gap
among generative models makes it challenging to develop a generalizable forgery
detection model. Moreover, since every pixel in an AI-generated image is
synthesized, traditional saliency-based forgery explanation methods are not
well suited for this task. To address these challenges, we propose modeling
AI-generated image detection and explanation as a Forgery Detection and
Reasoning task (FDR-Task), leveraging vision-language models (VLMs) to provide
accurate detection through structured and reliable reasoning over forgery
attributes. To facilitate this task, we introduce the Multi-Modal Forgery
Reasoning dataset (MMFR-Dataset), a large-scale dataset containing 100K images
across 10 generative models, with 10 types of forgery reasoning annotations,
enabling comprehensive evaluation of FDR-Task. Additionally, we propose
FakeReasoning, a forgery detection and reasoning framework with two key
components. First, Forgery-Aligned Contrastive Learning enhances VLMs'
understanding of forgery-related semantics through both cross-modal and
intra-modal contrastive learning between images and forgery attribute
reasoning. Second, a Classification Probability Mapper bridges the optimization
gap between forgery detection and language modeling by mapping the output
logits of VLMs to calibrated binary classification probabilities. Experiments
across multiple generative models demonstrate that FakeReasoning not only
achieves robust generalization but also outperforms state-of-the-art methods on
both detection and reasoning tasks.
|
2503.21223 | Xunkai Li | Zhihan Zhang, Xunkai Li, Guang Zeng, Hongchao Qin, Ronghua Li, Guoren
Wang | Rethinking Graph Structure Learning in the Era of LLMs | 17 pages, 8 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Recently, the emergence of large language models (LLMs) has prompted
researchers to explore the integration of language descriptions into graphs,
aiming to enhance model encoding capabilities from a data-centric perspective.
This graph representation is called text-attributed graphs (TAGs). A review of
prior advancements highlights that graph structure learning (GSL) is a pivotal
technique for improving data utility, making it highly relevant to efficient
TAG learning. However, most GSL methods are tailored for traditional graphs
without textual information, underscoring the necessity of developing a new GSL
paradigm. Despite clear motivations, it remains challenging: (1) How can we
define a reasonable optimization objective for GSL in the era of LLMs,
considering the massive parameters in LLM? (2) How can we design an efficient
model architecture that enables seamless integration of LLM for this
optimization objective? For Question 1, we reformulate existing GSL
optimization objectives as a tree optimization framework, shifting the focus
from obtaining a well-trained edge predictor to a language-aware tree sampler.
For Question 2, we propose decoupled and training-free model design principles
for LLM integration, shifting the focus from computation-intensive fine-tuning
to more efficient inference. Based on this, we propose Large Language and Tree
Assistant (LLaTA), which leverages tree-based LLM in-context learning to
enhance the understanding of topology and text, enabling reliable inference and
generating improved graph structure. Extensive experiments on 10 TAG datasets
demonstrate that LLaTA enjoys flexibility - incorporated with any backbone;
scalability - outperforms other LLM-based GSL methods in terms of running
efficiency; effectiveness - achieves SOTA performance.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 07:28:30 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhang",
"Zhihan",
""
],
[
"Li",
"Xunkai",
""
],
[
"Zeng",
"Guang",
""
],
[
"Qin",
"Hongchao",
""
],
[
"Li",
"Ronghua",
""
],
[
"Wang",
"Guoren",
""
]
] | TITLE: Rethinking Graph Structure Learning in the Era of LLMs
ABSTRACT: Recently, the emergence of large language models (LLMs) has prompted
researchers to explore the integration of language descriptions into graphs,
aiming to enhance model encoding capabilities from a data-centric perspective.
This graph representation is called text-attributed graphs (TAGs). A review of
prior advancements highlights that graph structure learning (GSL) is a pivotal
technique for improving data utility, making it highly relevant to efficient
TAG learning. However, most GSL methods are tailored for traditional graphs
without textual information, underscoring the necessity of developing a new GSL
paradigm. Despite clear motivations, it remains challenging: (1) How can we
define a reasonable optimization objective for GSL in the era of LLMs,
considering the massive parameters in LLM? (2) How can we design an efficient
model architecture that enables seamless integration of LLM for this
optimization objective? For Question 1, we reformulate existing GSL
optimization objectives as a tree optimization framework, shifting the focus
from obtaining a well-trained edge predictor to a language-aware tree sampler.
For Question 2, we propose decoupled and training-free model design principles
for LLM integration, shifting the focus from computation-intensive fine-tuning
to more efficient inference. Based on this, we propose Large Language and Tree
Assistant (LLaTA), which leverages tree-based LLM in-context learning to
enhance the understanding of topology and text, enabling reliable inference and
generating improved graph structure. Extensive experiments on 10 TAG datasets
demonstrate that LLaTA enjoys flexibility - incorporated with any backbone;
scalability - outperforms other LLM-based GSL methods in terms of running
efficiency; effectiveness - achieves SOTA performance.
|
2503.21235 | Stavros Sintos | Aryan Esmailpour, Sainyam Galhotra, Rahul Raychaudhury, Stavros Sintos | A Theoretical Framework for Distribution-Aware Dataset Search | null | PODS 2025 | null | null | cs.DB cs.DS | http://creativecommons.org/licenses/by/4.0/ | Effective data discovery is a cornerstone of modern data-driven
decision-making. Yet, identifying datasets with specific distributional
characteristics, such as percentiles or preferences, remains challenging. While
recent proposals have enabled users to search based on percentile predicates,
much of the research in data discovery relies on heuristics. This paper
presents the first theoretically backed framework that unifies data discovery
under centralized and decentralized settings.
Let $\mathcal{P}=\{P_1,...,P_N\}$ be a repository of $N$ datasets, where
$P_i\subset \mathbb{R}^d$, for $d=O(1)$. We study the percentile indexing
(Ptile) problem and the preference indexing (Pref) problem under the
centralized and the federated setting. In the centralized setting we assume
direct access to the datasets. In the federated setting we assume access to a
synopsis of each dataset. The goal of Ptile is to construct a data structure
such that given a predicate (rectangle $R$ and interval $\theta$) report all
indexes $J$ such that $j\in J$ iff $|P_j\cap R|/|P_j|\in\theta$. The goal of
Pref is to construct a data structure such that given a predicate (vector $v$
and interval $\theta$) report all indexes $J$ such that $j\in J$ iff
$\omega(P_j,v)\in \theta$, where $\omega(P_j,v)$ is the inner-product of the
$k$-th largest projection of $P_j$ on $v$. We first show that we cannot hope
for near-linear data structures with polylogarithmic query time in the
centralized setting. Next we show $\tilde{O}(N)$ space data structures that
answer Ptile and Pref queries in $\tilde{O}(1+OUT)$ time, where $OUT$ is the
output size. Each data structure returns a set of indexes $J$ such that i) for
every $P_i$ that satisfies the predicate, $i\in J$ and ii) if $j\in J$ then
$P_j$ satisfies the predicate up to an additive error $\varepsilon+2\delta$,
where $\varepsilon\in(0,1)$ and $\delta$ is the error of synopses.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 07:53:20 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Esmailpour",
"Aryan",
""
],
[
"Galhotra",
"Sainyam",
""
],
[
"Raychaudhury",
"Rahul",
""
],
[
"Sintos",
"Stavros",
""
]
] | TITLE: A Theoretical Framework for Distribution-Aware Dataset Search
ABSTRACT: Effective data discovery is a cornerstone of modern data-driven
decision-making. Yet, identifying datasets with specific distributional
characteristics, such as percentiles or preferences, remains challenging. While
recent proposals have enabled users to search based on percentile predicates,
much of the research in data discovery relies on heuristics. This paper
presents the first theoretically backed framework that unifies data discovery
under centralized and decentralized settings.
Let $\mathcal{P}=\{P_1,...,P_N\}$ be a repository of $N$ datasets, where
$P_i\subset \mathbb{R}^d$, for $d=O(1)$. We study the percentile indexing
(Ptile) problem and the preference indexing (Pref) problem under the
centralized and the federated setting. In the centralized setting we assume
direct access to the datasets. In the federated setting we assume access to a
synopsis of each dataset. The goal of Ptile is to construct a data structure
such that given a predicate (rectangle $R$ and interval $\theta$) report all
indexes $J$ such that $j\in J$ iff $|P_j\cap R|/|P_j|\in\theta$. The goal of
Pref is to construct a data structure such that given a predicate (vector $v$
and interval $\theta$) report all indexes $J$ such that $j\in J$ iff
$\omega(P_j,v)\in \theta$, where $\omega(P_j,v)$ is the inner-product of the
$k$-th largest projection of $P_j$ on $v$. We first show that we cannot hope
for near-linear data structures with polylogarithmic query time in the
centralized setting. Next we show $\tilde{O}(N)$ space data structures that
answer Ptile and Pref queries in $\tilde{O}(1+OUT)$ time, where $OUT$ is the
output size. Each data structure returns a set of indexes $J$ such that i) for
every $P_i$ that satisfies the predicate, $i\in J$ and ii) if $j\in J$ then
$P_j$ satisfies the predicate up to an additive error $\varepsilon+2\delta$,
where $\varepsilon\in(0,1)$ and $\delta$ is the error of synopses.
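To make the Ptile semantics concrete, the brute-force NumPy sketch below reports every dataset whose fraction of points inside the query rectangle R lies in the interval theta. The paper's contribution is data structures that avoid this linear scan; the sketch only illustrates what a correct answer set looks like.

import numpy as np

def ptile_query(datasets, rect_lo, rect_hi, theta_lo, theta_hi):
    """Return indexes j with |P_j intersect R| / |P_j| in [theta_lo, theta_hi]."""
    hits = []
    for j, P in enumerate(datasets):              # each P: (n_j, d) array of points
        inside = np.all((P >= rect_lo) & (P <= rect_hi), axis=1)
        frac = inside.mean()                      # fraction of P_j inside rectangle R
        if theta_lo <= frac <= theta_hi:
            hits.append(j)
    return hits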
|
2503.21236 | Li Shuai | Shuai Li, Jie Zhang, Yuang Qi, Kejiang Chen, Tianwei Zhang, Weiming
Zhang, and Nenghai Yu | Clean Image May be Dangerous: Data Poisoning Attacks Against Deep
Hashing | Accepted by TMM | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale image retrieval using deep hashing has become increasingly
popular due to the exponential growth of image data and the remarkable feature
extraction capabilities of deep neural networks (DNNs). However, deep hashing
methods are vulnerable to malicious attacks, including adversarial and backdoor
attacks. It is worth noting that these attacks typically involve altering the
query images, which is not a practical concern in real-world scenarios. In this
paper, we point out that even clean query images can be dangerous, inducing
malicious target retrieval results, like undesired or illegal images. To the
best of our knowledge, we are the first to study data \textbf{p}oisoning
\textbf{a}ttacks against \textbf{d}eep \textbf{hash}ing
\textbf{(\textit{PADHASH})}. Specifically, we first train a surrogate model to
simulate the behavior of the target deep hashing model. Then, a strict gradient
matching strategy is proposed to generate the poisoned images. Extensive
experiments on different models, datasets, hash methods, and hash code lengths
demonstrate the effectiveness and generality of our attack method.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 07:54:27 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Li",
"Shuai",
""
],
[
"Zhang",
"Jie",
""
],
[
"Qi",
"Yuang",
""
],
[
"Chen",
"Kejiang",
""
],
[
"Zhang",
"Tianwei",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
]
] | TITLE: Clean Image May be Dangerous: Data Poisoning Attacks Against Deep
Hashing
ABSTRACT: Large-scale image retrieval using deep hashing has become increasingly
popular due to the exponential growth of image data and the remarkable feature
extraction capabilities of deep neural networks (DNNs). However, deep hashing
methods are vulnerable to malicious attacks, including adversarial and backdoor
attacks. It is worth noting that these attacks typically involve altering the
query images, which is not a practical concern in real-world scenarios. In this
paper, we point out that even clean query images can be dangerous, inducing
malicious target retrieval results, like undesired or illegal images. To the
best of our knowledge, we are the first to study data \textbf{p}oisoning
\textbf{a}ttacks against \textbf{d}eep \textbf{hash}ing
\textbf{(\textit{PADHASH})}. Specifically, we first train a surrogate model to
simulate the behavior of the target deep hashing model. Then, a strict gradient
matching strategy is proposed to generate the poisoned images. Extensive
experiments on different models, datasets, hash methods, and hash code lengths
demonstrate the effectiveness and generality of our attack method.
|
2503.21240 | Ningyu He | Ningyu He, Shangtong Cao, Haoyu Wang, Yao Guo, Xiapu Luo | The Promise and Pitfalls of WebAssembly: Perspectives from the Industry | Accepted by FSE'25 Industry Track | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | As JavaScript has been criticized for performance and security issues in web
applications, WebAssembly (Wasm) was proposed in 2017 and is regarded as a
complement to JavaScript. Due to its advantages like compact size,
native-like speed, and portability, Wasm binaries are gradually used as the
compilation target for industrial projects in other high-level programming
languages and are responsible for computation-intensive tasks in browsers,
e.g., 3D graphic rendering and video decoding. Intuitively, characterizing
in-the-wild adopted Wasm binaries from different perspectives, like their
metadata, relation with source programming language, existence of security
threats, and practical purpose, is the prerequisite before delving deeper into
the Wasm ecosystem and beneficial to its roadmap selection. However, currently,
there is no work that conducts a large-scale measurement study on in-the-wild
adopted Wasm binaries. To fill this gap, we collect the largest-ever dataset to
the best of our knowledge, and characterize the status quo of them from
industry perspectives. According to the different roles of people engaging in
the community, i.e., web developers, Wasm maintainers, and researchers, we
reorganized our findings to suggestions and best practices for them
accordingly. We believe this work can shed light on the future direction of the
web and Wasm.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:01:22 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"He",
"Ningyu",
""
],
[
"Cao",
"Shangtong",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Guo",
"Yao",
""
],
[
"Luo",
"Xiapu",
""
]
] | TITLE: The Promise and Pitfalls of WebAssembly: Perspectives from the Industry
ABSTRACT: As JavaScript has been criticized for performance and security issues in web
applications, WebAssembly (Wasm) was proposed in 2017 and is regarded as a
complement to JavaScript. Due to its advantages like compact size,
native-like speed, and portability, Wasm binaries are gradually used as the
compilation target for industrial projects in other high-level programming
languages and are responsible for computation-intensive tasks in browsers,
e.g., 3D graphic rendering and video decoding. Intuitively, characterizing
in-the-wild adopted Wasm binaries from different perspectives, like their
metadata, relation with source programming language, existence of security
threats, and practical purpose, is the prerequisite before delving deeper into
the Wasm ecosystem and beneficial to its roadmap selection. However, currently,
there is no work that conducts a large-scale measurement study on in-the-wild
adopted Wasm binaries. To fill this gap, we collect the largest-ever dataset to
the best of our knowledge, and characterize the status quo of them from
industry perspectives. According to the different roles of people engaging in
the community, i.e., web developers, Wasm maintainers, and researchers, we
reorganized our findings to suggestions and best practices for them
accordingly. We believe this work can shed light on the future direction of the
web and Wasm.
|
2503.21244 | Mario Garc\'ia-M\'arquez | Mario Garc\'ia-M\'arquez and Nuria Rodr\'iguez-Barroso and M.Victoria
Luz\'on and Francisco Herrera | Improving $(\alpha, f)$-Byzantine Resilience in Federated Learning via
layerwise aggregation and cosine distance | Submitted to Knowledge-Based Systems | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid development of artificial intelligence systems has amplified
societal concerns regarding their usage, necessitating regulatory frameworks
that encompass data privacy. Federated Learning (FL) is posed as a potential
solution to data privacy challenges in distributed machine learning by enabling
collaborative model training without data sharing. However, FL systems remain
vulnerable to Byzantine attacks, where malicious nodes contribute corrupted
model updates. While Byzantine Resilient operators have emerged as widely
adopted robust aggregation algorithms to mitigate these attacks, their efficacy
diminishes significantly in high-dimensional parameter spaces, sometimes
leading to poorly performing models. This paper introduces Layerwise Cosine
Aggregation, a novel aggregation scheme designed to enhance robustness of these
rules in such high-dimensional settings while preserving computational
efficiency. A theoretical analysis is presented, demonstrating the superior
robustness of the proposed Layerwise Cosine Aggregation compared to original
robust aggregation operators. Empirical evaluation across diverse image
classification datasets, under varying data distributions and Byzantine attack
scenarios, consistently demonstrates the improved performance of Layerwise
Cosine Aggregation, achieving up to a 16% increase in model accuracy.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:07:39 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"García-Márquez",
"Mario",
""
],
[
"Rodríguez-Barroso",
"Nuria",
""
],
[
"Luzón",
"M. Victoria",
""
],
[
"Herrera",
"Francisco",
""
]
] | TITLE: Improving $(\alpha, f)$-Byzantine Resilience in Federated Learning via
layerwise aggregation and cosine distance
ABSTRACT: The rapid development of artificial intelligence systems has amplified
societal concerns regarding their usage, necessitating regulatory frameworks
that encompass data privacy. Federated Learning (FL) is posed as a potential
solution to data privacy challenges in distributed machine learning by enabling
collaborative model training without data sharing. However, FL systems remain
vulnerable to Byzantine attacks, where malicious nodes contribute corrupted
model updates. While Byzantine Resilient operators have emerged as widely
adopted robust aggregation algorithms to mitigate these attacks, their efficacy
diminishes significantly in high-dimensional parameter spaces, sometimes
leading to poorly performing models. This paper introduces Layerwise Cosine
Aggregation, a novel aggregation scheme designed to enhance robustness of these
rules in such high-dimensional settings while preserving computational
efficiency. A theoretical analysis is presented, demonstrating the superior
robustness of the proposed Layerwise Cosine Aggregation compared to original
robust aggregation operators. Empirical evaluation across diverse image
classification datasets, under varying data distributions and Byzantine attack
scenarios, consistently demonstrates the improved performance of Layerwise
Cosine Aggregation, achieving up to a 16% increase in model accuracy.
|
2503.21246 | Haoyu Zhao | Haoyu Zhao, Zhongang Qi, Cong Wang, Qingping Zheng, Guansong Lu, Fei
Chen, Hang Xu, Zuxuan Wu | DynamiCtrl: Rethinking the Basic Structure and the Role of Text for
High-quality Human Image Animation | 11 pages, 10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human image animation has recently gained significant attention due to
advancements in generative models. However, existing methods still face two
major challenges: (1) architectural limitations, as most models rely on U-Net,
which underperforms compared to MM-DiT; and (2) the neglect of textual
information, which can enhance controllability. In this work, we introduce
DynamiCtrl, a novel framework that not only explores different pose-guided
control structures in MM-DiT, but also reemphasizes the crucial role of text in
this task. Specifically, we employ a Shared VAE encoder for both reference
images and driving pose videos, eliminating the need for an additional pose
encoder and simplifying the overall framework. To incorporate pose features
into the full attention blocks, we propose Pose-adaptive Layer Norm (PadaLN),
which utilizes adaptive layer normalization to encode sparse pose features. The
encoded features are directly added to the visual input, preserving the
spatiotemporal consistency of the backbone while effectively introducing pose
control into MM-DiT. Furthermore, within the full attention mechanism, we align
textual and visual features to enhance controllability. By leveraging text, we
not only enable fine-grained control over the generated content, but also, for
the first time, achieve simultaneous control over both background and motion.
Experimental results verify the superiority of DynamiCtrl on benchmark
datasets, demonstrating its strong identity preservation, heterogeneous
character driving, background controllability, and high-quality synthesis. The
project page is available at https://gulucaptain.github.io/DynamiCtrl/.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:07:45 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhao",
"Haoyu",
""
],
[
"Qi",
"Zhongang",
""
],
[
"Wang",
"Cong",
""
],
[
"Zheng",
"Qingping",
""
],
[
"Lu",
"Guansong",
""
],
[
"Chen",
"Fei",
""
],
[
"Xu",
"Hang",
""
],
[
"Wu",
"Zuxuan",
""
]
] | TITLE: DynamiCtrl: Rethinking the Basic Structure and the Role of Text for
High-quality Human Image Animation
ABSTRACT: Human image animation has recently gained significant attention due to
advancements in generative models. However, existing methods still face two
major challenges: (1) architectural limitations, as most models rely on U-Net,
which underperforms compared to MM-DiT; and (2) the neglect of textual
information, which can enhance controllability. In this work, we introduce
DynamiCtrl, a novel framework that not only explores different pose-guided
control structures in MM-DiT, but also reemphasizes the crucial role of text in
this task. Specifically, we employ a Shared VAE encoder for both reference
images and driving pose videos, eliminating the need for an additional pose
encoder and simplifying the overall framework. To incorporate pose features
into the full attention blocks, we propose Pose-adaptive Layer Norm (PadaLN),
which utilizes adaptive layer normalization to encode sparse pose features. The
encoded features are directly added to the visual input, preserving the
spatiotemporal consistency of the backbone while effectively introducing pose
control into MM-DiT. Furthermore, within the full attention mechanism, we align
textual and visual features to enhance controllability. By leveraging text, we
not only enable fine-grained control over the generated content, but also, for
the first time, achieve simultaneous control over both background and motion.
Experimental results verify the superiority of DynamiCtrl on benchmark
datasets, demonstrating its strong identity preservation, heterogeneous
character driving, background controllability, and high-quality synthesis. The
project page is available at https://gulucaptain.github.io/DynamiCtrl/.
|
2503.21249 | Yufei Bo | Yufei Bo, Meixia Tao | Distributed Nonlinear Transform Source-Channel Coding for Wireless
Correlated Image Transmission | null | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | This paper investigates distributed joint source-channel coding (JSCC) for
correlated image semantic transmission over wireless channels. In this setup,
correlated images at different transmitters are separately encoded and
transmitted through dedicated channels for joint recovery at the receiver. We
propose a novel distributed nonlinear transform source-channel coding (D-NTSCC)
framework. Unlike existing learning-based approaches that implicitly learn
source correlation in a purely data-driven manner, our method explicitly models
the source correlation through joint distribution. Specifically, the correlated
images are separately encoded into latent representations via an encoding
transform function, followed by a JSCC encoder to produce channel input
symbols. A learned joint entropy model is introduced to determine the
transmission rates, which more accurately approximates the joint distribution
of the latent representations and captures source dependencies, thereby
improving rate-distortion performance. At the receiver, a JSCC decoder and a
decoding transform function reconstruct the images from the received signals,
each serving as side information for recovering the other image. Therein, a
transformation module is designed to align the latent representations for
maximal correlation learning. Furthermore, a loss function is derived to
jointly optimize encoding, decoding, and the joint entropy model, ensuring that
the learned joint entropy model approximates the true joint distribution.
Experiments on multi-view datasets show that D-NTSCC outperforms
state-of-the-art distributed schemes, demonstrating its effectiveness in
exploiting source correlation.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:09:55 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Bo",
"Yufei",
""
],
[
"Tao",
"Meixia",
""
]
] | TITLE: Distributed Nonlinear Transform Source-Channel Coding for Wireless
Correlated Image Transmission
ABSTRACT: This paper investigates distributed joint source-channel coding (JSCC) for
correlated image semantic transmission over wireless channels. In this setup,
correlated images at different transmitters are separately encoded and
transmitted through dedicated channels for joint recovery at the receiver. We
propose a novel distributed nonlinear transform source-channel coding (D-NTSCC)
framework. Unlike existing learning-based approaches that implicitly learn
source correlation in a purely data-driven manner, our method explicitly models
the source correlation through joint distribution. Specifically, the correlated
images are separately encoded into latent representations via an encoding
transform function, followed by a JSCC encoder to produce channel input
symbols. A learned joint entropy model is introduced to determine the
transmission rates, which more accurately approximates the joint distribution
of the latent representations and captures source dependencies, thereby
improving rate-distortion performance. At the receiver, a JSCC decoder and a
decoding transform function reconstruct the images from the received signals,
each serving as side information for recovering the other image. Therein, a
transformation module is designed to align the latent representations for
maximal correlation learning. Furthermore, a loss function is derived to
jointly optimize encoding, decoding, and the joint entropy model, ensuring that
the learned joint entropy model approximates the true joint distribution.
Experiments on multi-view datasets show that D-NTSCC outperforms
state-of-the-art distributed schemes, demonstrating its effectiveness in
exploiting source correlation.
|
2503.21251 | Xin Zhou | Qingdi Yu, Zhiwei Cao, Ruihang Wang, Zhen Yang, Lijun Deng, Min Hu,
Yong Luo and Xin Zhou | Dual-Splitting Conformal Prediction for Multi-Step Time Series
Forecasting | 28 pages, 13 figures, 3 tables. Submitted to Applied Soft Computing.
With Editor This is the first public release of the work | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series forecasting is crucial for applications like resource scheduling
and risk management, where multi-step predictions provide a comprehensive view
of future trends. Uncertainty Quantification (UQ) is a mainstream approach for
addressing forecasting uncertainties, with Conformal Prediction (CP) gaining
attention due to its model-agnostic nature and statistical guarantees. However,
most variants of CP are designed for single-step predictions and face
challenges in multi-step scenarios, such as reliance on real-time data and
limited scalability. This highlights the need for CP methods specifically
tailored to multi-step forecasting. We propose the Dual-Splitting Conformal
Prediction (DSCP) method, a novel CP approach designed to capture inherent
dependencies within time-series data for multi-step forecasting. Experimental
results on real-world datasets from four different domains demonstrate that the
proposed DSCP significantly outperforms existing CP variants in terms of the
Winkler Score, achieving a performance improvement of up to 23.59% compared to
state-of-the-art methods. Furthermore, we deployed the DSCP approach for
renewable energy generation and IT load forecasting in power management of a
real-world trajectory-based application, achieving an 11.25% reduction in
carbon emissions through predictive optimization of data center operations and
controls.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:17:18 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yu",
"Qingdi",
""
],
[
"Cao",
"Zhiwei",
""
],
[
"Wang",
"Ruihang",
""
],
[
"Yang",
"Zhen",
""
],
[
"Deng",
"Lijun",
""
],
[
"Hu",
"Min",
""
],
[
"Luo",
"Yong",
""
],
[
"Zhou",
"Xin",
""
]
] | TITLE: Dual-Splitting Conformal Prediction for Multi-Step Time Series
Forecasting
ABSTRACT: Time series forecasting is crucial for applications like resource scheduling
and risk management, where multi-step predictions provide a comprehensive view
of future trends. Uncertainty Quantification (UQ) is a mainstream approach for
addressing forecasting uncertainties, with Conformal Prediction (CP) gaining
attention due to its model-agnostic nature and statistical guarantees. However,
most variants of CP are designed for single-step predictions and face
challenges in multi-step scenarios, such as reliance on real-time data and
limited scalability. This highlights the need for CP methods specifically
tailored to multi-step forecasting. We propose the Dual-Splitting Conformal
Prediction (DSCP) method, a novel CP approach designed to capture inherent
dependencies within time-series data for multi-step forecasting. Experimental
results on real-world datasets from four different domains demonstrate that the
proposed DSCP significantly outperforms existing CP variants in terms of the
Winkler Score, achieving a performance improvement of up to 23.59% compared to
state-of-the-art methods. Furthermore, we deployed the DSCP approach for
renewable energy generation and IT load forecasting in power management of a
real-world trajectory-based application, achieving an 11.25% reduction in
carbon emissions through predictive optimization of data center operations and
controls.
|
2503.21254 | Zhaokai Wang | Zhaokai Wang, Chenxi Bao, Le Zhuo, Jingrui Han, Yang Yue, Yihong Tang,
Victor Shea-Jay Huang, Yue Liao | Vision-to-Music Generation: A Survey | null | null | null | null | cs.CV cs.AI cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-to-music Generation, including video-to-music and image-to-music
tasks, is a significant branch of multimodal artificial intelligence
demonstrating vast application prospects in fields such as film scoring, short
video creation, and dance music synthesis. However, compared to the rapid
development of modalities like text and images, research in vision-to-music is
still in its preliminary stage due to its complex internal structure and the
difficulty of modeling dynamic relationships with video. Existing surveys focus
on general music generation without comprehensive discussion on
vision-to-music. In this paper, we systematically review the research progress
in the field of vision-to-music generation. We first analyze the technical
characteristics and core challenges for three input types: general videos,
human movement videos, and images, as well as two output types of symbolic
music and audio music. We then summarize the existing methodologies on
vision-to-music generation from the architecture perspective. A detailed review
of common datasets and evaluation metrics is provided. Finally, we discuss
current challenges and promising directions for future research. We hope our
survey can inspire further innovation in vision-to-music generation and the
broader field of multimodal generation in academic research and industrial
applications. To follow the latest works and foster further innovation in this
field, we are continuously maintaining a GitHub repository at
https://github.com/wzk1015/Awesome-Vision-to-Music-Generation.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:21:54 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wang",
"Zhaokai",
""
],
[
"Bao",
"Chenxi",
""
],
[
"Zhuo",
"Le",
""
],
[
"Han",
"Jingrui",
""
],
[
"Yue",
"Yang",
""
],
[
"Tang",
"Yihong",
""
],
[
"Huang",
"Victor Shea-Jay",
""
],
[
"Liao",
"Yue",
""
]
] | TITLE: Vision-to-Music Generation: A Survey
ABSTRACT: Vision-to-music Generation, including video-to-music and image-to-music
tasks, is a significant branch of multimodal artificial intelligence
demonstrating vast application prospects in fields such as film scoring, short
video creation, and dance music synthesis. However, compared to the rapid
development of modalities like text and images, research in vision-to-music is
still in its preliminary stage due to its complex internal structure and the
difficulty of modeling dynamic relationships with video. Existing surveys focus
on general music generation without comprehensive discussion on
vision-to-music. In this paper, we systematically review the research progress
in the field of vision-to-music generation. We first analyze the technical
characteristics and core challenges for three input types: general videos,
human movement videos, and images, as well as two output types of symbolic
music and audio music. We then summarize the existing methodologies on
vision-to-music generation from the architecture perspective. A detailed review
of common datasets and evaluation metrics is provided. Finally, we discuss
current challenges and promising directions for future research. We hope our
survey can inspire further innovation in vision-to-music generation and the
broader field of multimodal generation in academic research and industrial
applications. To follow the latest works and foster further innovation in this
field, we are continuously maintaining a GitHub repository at
https://github.com/wzk1015/Awesome-Vision-to-Music-Generation.
|
2503.21258 | Jizhou Han | Jizhou Han, Chenhao Ding, Yuhang He, Songlin Dong, Qiang Wang, Xinyuan
Gao, Yihong Gong | Learn by Reasoning: Analogical Weight Generation for Few-Shot
Class-Incremental Learning | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-shot class-incremental Learning (FSCIL) enables models to learn new
classes from limited data while retaining performance on previously learned
classes. Traditional FSCIL methods often require fine-tuning parameters with
limited new class data and suffer from a separation between learning new
classes and utilizing old knowledge. Inspired by the analogical learning
mechanisms of the human brain, we propose a novel analogical generative method.
Our approach includes the Brain-Inspired Analogical Generator (BiAG), which
derives new class weights from existing classes without parameter fine-tuning
during incremental stages. BiAG consists of three components: Weight
Self-Attention Module (WSA), Weight & Prototype Analogical Attention Module
(WPAA), and Semantic Conversion Module (SCM). SCM uses Neural Collapse theory
for semantic conversion, WSA supplements new class weights, and WPAA computes
analogies to generate new class weights. Experiments on miniImageNet, CUB-200,
and CIFAR-100 datasets demonstrate that our method achieves higher final and
average accuracy compared to SOTA methods.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:31:46 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Han",
"Jizhou",
""
],
[
"Ding",
"Chenhao",
""
],
[
"He",
"Yuhang",
""
],
[
"Dong",
"Songlin",
""
],
[
"Wang",
"Qiang",
""
],
[
"Gao",
"Xinyuan",
""
],
[
"Gong",
"Yihong",
""
]
] | TITLE: Learn by Reasoning: Analogical Weight Generation for Few-Shot
Class-Incremental Learning
ABSTRACT: Few-shot class-incremental Learning (FSCIL) enables models to learn new
classes from limited data while retaining performance on previously learned
classes. Traditional FSCIL methods often require fine-tuning parameters with
limited new class data and suffer from a separation between learning new
classes and utilizing old knowledge. Inspired by the analogical learning
mechanisms of the human brain, we propose a novel analogical generative method.
Our approach includes the Brain-Inspired Analogical Generator (BiAG), which
derives new class weights from existing classes without parameter fine-tuning
during incremental stages. BiAG consists of three components: Weight
Self-Attention Module (WSA), Weight & Prototype Analogical Attention Module
(WPAA), and Semantic Conversion Module (SCM). SCM uses Neural Collapse theory
for semantic conversion, WSA supplements new class weights, and WPAA computes
analogies to generate new class weights. Experiments on miniImageNet, CUB-200,
and CIFAR-100 datasets demonstrate that our method achieves higher final and
average accuracy compared to SOTA methods.
|
2503.21259 | Dongqian Guo | Wencheng Han, Dongqian Guo, Xiao Chen, Pang Lyu, Yi Jin, Jianbing Shen | Reducing CT Metal Artifacts by Learning Latent Space Alignment with
Gemstone Spectral Imaging Data | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Metal artifacts in CT slices have long posed challenges in medical
diagnostics. These artifacts degrade image quality, resulting in suboptimal
visualization and complicating the accurate interpretation of tissues adjacent
to metal implants. To address these issues, we introduce the Latent Gemstone
Spectral Imaging (GSI) Alignment Framework, which effectively reduces metal
artifacts while avoiding the introduction of noise information. Our work is
based on a key finding that even artifact-affected ordinary CT sequences
contain sufficient information to discern detailed structures. The challenge
lies in the inability to clearly represent this information. To address this
issue, we developed an Alignment Framework that adjusts the representation of
ordinary CT images to match GSI CT sequences. GSI is an advanced imaging
technique using multiple energy levels to mitigate artifacts caused by metal
implants. By aligning the representation to GSI data, we can effectively
suppress metal artifacts while clearly revealing detailed structure, without
introducing extraneous information into CT sequences. To facilitate the
application, we propose a new dataset, Artifacts-GSI, captured from real
patients with metal implants, and establish a new benchmark based on this
dataset. Experimental results show that our method significantly reduces metal
artifacts and greatly enhances the readability of CT slices. All our code and
data are available at: https://um-lab.github.io/GSI-MAR/
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:35:10 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Han",
"Wencheng",
""
],
[
"Guo",
"Dongqian",
""
],
[
"Chen",
"Xiao",
""
],
[
"Lyu",
"Pang",
""
],
[
"Jin",
"Yi",
""
],
[
"Shen",
"Jianbing",
""
]
] | TITLE: Reducing CT Metal Artifacts by Learning Latent Space Alignment with
Gemstone Spectral Imaging Data
ABSTRACT: Metal artifacts in CT slices have long posed challenges in medical
diagnostics. These artifacts degrade image quality, resulting in suboptimal
visualization and complicating the accurate interpretation of tissues adjacent
to metal implants. To address these issues, we introduce the Latent Gemstone
Spectral Imaging (GSI) Alignment Framework, which effectively reduces metal
artifacts while avoiding the introduction of noise information. Our work is
based on a key finding that even artifact-affected ordinary CT sequences
contain sufficient information to discern detailed structures. The challenge
lies in the inability to clearly represent this information. To address this
issue, we developed an Alignment Framework that adjusts the representation of
ordinary CT images to match GSI CT sequences. GSI is an advanced imaging
technique using multiple energy levels to mitigate artifacts caused by metal
implants. By aligning the representation to GSI data, we can effectively
suppress metal artifacts while clearly revealing detailed structure, without
introducing extraneous information into CT sequences. To facilitate the
application, we propose a new dataset, Artifacts-GSI, captured from real
patients with metal implants, and establish a new benchmark based on this
dataset. Experimental results show that our method significantly reduces metal
artifacts and greatly enhances the readability of CT slices. All our code and
data are available at: https://um-lab.github.io/GSI-MAR/
|
2503.21268 | Ming Yan | Ming Yan and Xincheng Lin and Yuhua Luo and Shuqi Fan and Yudi Dai and
Qixin Zhong and Lincai Zhong and Yuexin Ma and Lan Xu and Chenglu Wen and
Siqi Shen and Cheng Wang | ClimbingCap: Multi-Modal Dataset and Method for Rock Climbing in World
Coordinate | CVPR2025, project page:
http://www.lidarhumanmotion.net/climbingcap/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human Motion Recovery (HMR) research mainly focuses on ground-based motions
such as running. The study on capturing climbing motion, an off-ground motion,
is sparse. This is partly due to the limited availability of climbing motion
datasets, especially large-scale and challenging 3D labeled datasets. To
address the insufficiency of climbing motion datasets, we collect AscendMotion,
a large-scale, well-annotated, and challenging climbing motion dataset. It
consists of 412k RGB, LiDAR frames, and IMU measurements, including the
challenging climbing motions of 22 skilled climbing coaches across 12 different
rock walls. Capturing the climbing motions is challenging as it requires
precise recovery of not only the complex pose but also the global position of
climbers. Although multiple global HMR methods have been proposed, they cannot
faithfully capture climbing motions. To address the limitations of HMR methods
for climbing, we propose ClimbingCap, a motion recovery method that
reconstructs continuous 3D human climbing motion in a global coordinate system.
One key insight is to use the RGB and LiDAR modalities to separately
reconstruct motions in camera coordinates and global coordinates and to
optimize them jointly. We demonstrate the quality of the AscendMotion dataset
and present promising results from ClimbingCap. The AscendMotion dataset and
source code are released publicly at http://www.lidarhumanmotion.net/climbingcap/
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:49:33 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yan",
"Ming",
""
],
[
"Lin",
"Xincheng",
""
],
[
"Luo",
"Yuhua",
""
],
[
"Fan",
"Shuqi",
""
],
[
"Dai",
"Yudi",
""
],
[
"Zhong",
"Qixin",
""
],
[
"Zhong",
"Lincai",
""
],
[
"Ma",
"Yuexin",
""
],
[
"Xu",
"Lan",
""
],
[
"Wen",
"Chenglu",
""
],
[
"Shen",
"Siqi",
""
],
[
"Wang",
"Cheng",
""
]
] | TITLE: ClimbingCap: Multi-Modal Dataset and Method for Rock Climbing in World
Coordinate
ABSTRACT: Human Motion Recovery (HMR) research mainly focuses on ground-based motions
such as running. The study on capturing climbing motion, an off-ground motion,
is sparse. This is partly due to the limited availability of climbing motion
datasets, especially large-scale and challenging 3D labeled datasets. To
address the insufficiency of climbing motion datasets, we collect AscendMotion,
a large-scale, well-annotated, and challenging climbing motion dataset. It
consists of 412k RGB, LiDAR frames, and IMU measurements, including the
challenging climbing motions of 22 skilled climbing coaches across 12 different
rock walls. Capturing the climbing motions is challenging as it requires
precise recovery of not only the complex pose but also the global position of
climbers. Although multiple global HMR methods have been proposed, they cannot
faithfully capture climbing motions. To address the limitations of HMR methods
for climbing, we propose ClimbingCap, a motion recovery method that
reconstructs continuous 3D human climbing motion in a global coordinate system.
One key insight is to use the RGB and LiDAR modalities to separately
reconstruct motions in camera coordinates and global coordinates and to
optimize them jointly. We demonstrate the quality of the AscendMotion dataset
and present promising results from ClimbingCap. The AscendMotion dataset and
source code are released publicly at http://www.lidarhumanmotion.net/climbingcap/
|
2503.21269 | Zhaoyi Yan | Zhaoyi Yan, Kangjun Liu, Qixiang Ye | Delving Deep into Semantic Relation Distillation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Knowledge distillation has become a cornerstone technique in deep learning,
facilitating the transfer of knowledge from complex models to lightweight
counterparts. Traditional distillation approaches focus on transferring
knowledge at the instance level, but fail to capture nuanced semantic
relationships within the data. In response, this paper introduces a novel
methodology, Semantics-based Relation Knowledge Distillation (SeRKD), which
reimagines knowledge distillation through a semantics-relation lens among each
sample. By leveraging semantic components, i.e., superpixels, SeRKD enables a
more comprehensive and context-aware transfer of knowledge, which skillfully
integrates superpixel-based semantic extraction with relation-based knowledge
distillation for a sophisticated model compression and distillation.
Particularly, the proposed method is naturally relevant in the domain of Vision
Transformers (ViTs), where visual tokens serve as fundamental units of
representation. Experimental evaluations on benchmark datasets demonstrate the
superiority of SeRKD over existing methods, underscoring its efficacy in
enhancing model performance and generalization capabilities.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:50:40 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yan",
"Zhaoyi",
""
],
[
"Liu",
"Kangjun",
""
],
[
"Ye",
"Qixiang",
""
]
] | TITLE: Delving Deep into Semantic Relation Distillation
ABSTRACT: Knowledge distillation has become a cornerstone technique in deep learning,
facilitating the transfer of knowledge from complex models to lightweight
counterparts. Traditional distillation approaches focus on transferring
knowledge at the instance level, but fail to capture nuanced semantic
relationships within the data. In response, this paper introduces a novel
methodology, Semantics-based Relation Knowledge Distillation (SeRKD), which
reimagines knowledge distillation through a semantics-relation lens among each
sample. By leveraging semantic components, i.e., superpixels, SeRKD enables a
more comprehensive and context-aware transfer of knowledge, which skillfully
integrates superpixel-based semantic extraction with relation-based knowledge
distillation for a sophisticated model compression and distillation.
Particularly, the proposed method is naturally relevant in the domain of Vision
Transformers (ViTs), where visual tokens serve as fundamental units of
representation. Experimental evaluations on benchmark datasets demonstrate the
superiority of SeRKD over existing methods, underscoring its efficacy in
enhancing model performance and generalization capabilities.
|
2503.21272 | Jiaqi Han | Jiaqi Han, Jingwen Ye, Shunyu Liu, Haofei Zhang, Jie Song, Zunlei
Feng, Mingli Song | Reinforced Model Merging | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of large language models has garnered widespread attention for
model merging techniques, especially training-free methods which combine model
capabilities within the parameter space. However, two challenges remain: (1)
uniform treatment of all parameters leads to performance degradation; (2)
search-based algorithms are often inefficient. In this paper, we present an
innovative framework termed Reinforced Model Merging (RMM), which encompasses
an environment and agent tailored for merging tasks. These components interact
to execute layer-wise merging actions, aiming to search the optimal merging
architecture. Notably, RMM operates without any gradient computations on the
original models, rendering it feasible for edge devices. Furthermore, by
utilizing data subsets during the evaluation process, we addressed the
bottleneck in the reward feedback phase, thereby accelerating RMM by up to 100
times. Extensive experiments demonstrate that RMM achieves state-of-the-art
performance across various vision and NLP datasets and effectively overcomes
the limitations of the existing baseline methods. Our code is available at
https://github.com/WuDiHJQ/Reinforced-Model-Merging.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 08:52:41 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Han",
"Jiaqi",
""
],
[
"Ye",
"Jingwen",
""
],
[
"Liu",
"Shunyu",
""
],
[
"Zhang",
"Haofei",
""
],
[
"Song",
"Jie",
""
],
[
"Feng",
"Zunlei",
""
],
[
"Song",
"Mingli",
""
]
] | TITLE: Reinforced Model Merging
ABSTRACT: The success of large language models has garnered widespread attention for
model merging techniques, especially training-free methods which combine model
capabilities within the parameter space. However, two challenges remain: (1)
uniform treatment of all parameters leads to performance degradation; (2)
search-based algorithms are often inefficient. In this paper, we present an
innovative framework termed Reinforced Model Merging (RMM), which encompasses
an environment and agent tailored for merging tasks. These components interact
to execute layer-wise merging actions, aiming to search the optimal merging
architecture. Notably, RMM operates without any gradient computations on the
original models, rendering it feasible for edge devices. Furthermore, by
utilizing data subsets during the evaluation process, we addressed the
bottleneck in the reward feedback phase, thereby accelerating RMM by up to 100
times. Extensive experiments demonstrate that RMM achieves state-of-the-art
performance across various vision and NLP datasets and effectively overcomes
the limitations of the existing baseline methods. Our code is available at
https://github.com/WuDiHJQ/Reinforced-Model-Merging.
|
2503.21293 | Aaron Kurda | Aaron Kurda, Simon Steuernagel, and Marcus Baum | Lidar-only Odometry based on Multiple Scan-to-Scan Alignments over a
Moving Window | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lidar-only odometry considers the pose estimation of a mobile robot based on
the accumulation of motion increments extracted from consecutive lidar scans.
Many existing approaches to the problem use a scan-to-map registration, which
neglects the accumulation of errors within the maintained map due to drift.
Other methods use a refinement step that jointly optimizes the local map on a
feature basis. We propose a solution that avoids this by using multiple
independent scan-to-scan Iterative Closest Points (ICP) registrations to
previous scans in order to derive constraints for a pose graph. The
optimization of the pose graph then not only yields an accurate estimate for
the latest pose, but also enables the refinement of previous scans in the
optimization window. By avoiding the need to recompute the scan-to-scan
alignments, the computational load is minimized. Extensive evaluation on the
public KITTI and MulRan datasets as well as on a custom automotive lidar
dataset is carried out. Results show that the proposed approach achieves
state-of-the-art estimation accuracy, while alleviating the mentioned issues.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 09:22:27 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Kurda",
"Aaron",
""
],
[
"Steuernagel",
"Simon",
""
],
[
"Baum",
"Marcus",
""
]
] | TITLE: Lidar-only Odometry based on Multiple Scan-to-Scan Alignments over a
Moving Window
ABSTRACT: Lidar-only odometry considers the pose estimation of a mobile robot based on
the accumulation of motion increments extracted from consecutive lidar scans.
Many existing approaches to the problem use a scan-to-map registration, which
neglects the accumulation of errors within the maintained map due to drift.
Other methods use a refinement step that jointly optimizes the local map on a
feature basis. We propose a solution that avoids this by using multiple
independent scan-to-scan Iterative Closest Points (ICP) registrations to
previous scans in order to derive constraints for a pose graph. The
optimization of the pose graph then not only yields an accurate estimate for
the latest pose, but also enables the refinement of previous scans in the
optimization window. By avoiding the need to recompute the scan-to-scan
alignments, the computational load is minimized. Extensive evaluation on the
public KITTI and MulRan datasets as well as on a custom automotive lidar
dataset is carried out. Results show that the proposed approach achieves
state-of-the-art estimation accuracy, while alleviating the mentioned issues.
|
2503.21295 | Shuaijie She | Shuaijie She, Junxiao Liu, Yifeng Liu, Jiajun Chen, Xin Huang, Shujian
Huang | R-PRM: Reasoning-Driven Process Reward Modeling | The project is available at https://github.com/NJUNLP/R-PRM | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) inevitably make mistakes when performing
step-by-step mathematical reasoning. Process Reward Models (PRMs) have emerged
as a promising solution by evaluating each reasoning step. However, existing
PRMs typically output evaluation scores directly, limiting both learning
efficiency and evaluation accuracy, which is further exacerbated by the
scarcity of annotated data. To address these issues, we propose
Reasoning-Driven Process Reward Modeling (R-PRM). First, we leverage stronger
LLMs to generate seed data from limited annotations, effectively bootstrapping
our model's reasoning capabilities and enabling comprehensive step-by-step
evaluation. Second, we further enhance performance through preference
optimization, without requiring additional annotated data. Third, we introduce
inference-time scaling to fully harness the model's reasoning potential.
Extensive experiments demonstrate R-PRM's effectiveness: on ProcessBench and
PRMBench, it surpasses strong baselines by 11.9 and 8.5 points in F1 scores,
respectively. When applied to guide mathematical reasoning, R-PRM achieves
consistent accuracy improvements of over 8.5 points across six challenging
datasets. Further analysis reveals that R-PRM exhibits more comprehensive
evaluation and stronger generalization capabilities, thereby highlighting its
significant potential.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 09:23:08 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"She",
"Shuaijie",
""
],
[
"Liu",
"Junxiao",
""
],
[
"Liu",
"Yifeng",
""
],
[
"Chen",
"Jiajun",
""
],
[
"Huang",
"Xin",
""
],
[
"Huang",
"Shujian",
""
]
] | TITLE: R-PRM: Reasoning-Driven Process Reward Modeling
ABSTRACT: Large language models (LLMs) inevitably make mistakes when performing
step-by-step mathematical reasoning. Process Reward Models (PRMs) have emerged
as a promising solution by evaluating each reasoning step. However, existing
PRMs typically output evaluation scores directly, limiting both learning
efficiency and evaluation accuracy, which is further exacerbated by the
scarcity of annotated data. To address these issues, we propose
Reasoning-Driven Process Reward Modeling (R-PRM). First, we leverage stronger
LLMs to generate seed data from limited annotations, effectively bootstrapping
our model's reasoning capabilities and enabling comprehensive step-by-step
evaluation. Second, we further enhance performance through preference
optimization, without requiring additional annotated data. Third, we introduce
inference-time scaling to fully harness the model's reasoning potential.
Extensive experiments demonstrate R-PRM's effectiveness: on ProcessBench and
PRMBench, it surpasses strong baselines by 11.9 and 8.5 points in F1 scores,
respectively. When applied to guide mathematical reasoning, R-PRM achieves
consistent accuracy improvements of over 8.5 points across six challenging
datasets. Further analysis reveals that R-PRM exhibits more comprehensive
evaluation and stronger generalization capabilities, thereby highlighting its
significant potential.
|