id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2412.04626 | Pierre-André Noël | Juan Rodriguez, Xiangru Jian, Siba Smarak Panigrahi, Tianyu Zhang,
Aarash Feizi, Abhay Puri, Akshay Kalkunte, François Savard, Ahmed Masry,
Shravan Nayak, Rabiul Awal, Mahsa Massoud, Amirhossein Abaskohi, Zichao Li,
Suyuchen Wang, Pierre-André Noël, Mats Leon Richter, Saverio Vadacchino,
Shubham Agarwal, Sanket Biswas, Sara Shanian, Ying Zhang, Noah Bolger, Kurt
MacDonald, Simon Fauvel, Sathwik Tejaswi, Srinivas Sunkara, Joao Monteiro,
Krishnamurthy DJ Dvijotham, Torsten Scholak, Nicolas Chapados, Sepideh
Kharagani, Sean Hughes, M. Özsu, Siva Reddy, Marco Pedersoli, Yoshua
Bengio, Christopher Pal, Issam Laradji, Spandana Gella, Perouz Taslakian,
David Vazquez, Sai Rajeswar | BigDocs: An Open Dataset for Training Multimodal Models on Document and
Code Tasks | The project is hosted at https://bigdocs.github.io | ICLR 2025 https://openreview.net/forum?id=UTgNFcpk0j | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Multimodal AI has the potential to significantly enhance
document-understanding tasks, such as processing receipts, understanding
workflows, extracting data from documents, and summarizing reports. Code
generation tasks that require long-structured outputs can also be enhanced by
multimodality. Despite this, their use in commercial applications is often
constrained by limited access to training data and by restrictive licensing, which
hinders open access. To address these limitations, we introduce BigDocs-7.5M, a
high-quality, open-access dataset comprising 7.5 million multimodal documents
across 30 tasks. We use an efficient data curation process to ensure our data
is high-quality and license-permissive. Our process emphasizes accountability,
responsibility, and transparency through filtering rules, traceable metadata,
and careful content analysis. Additionally, we introduce BigDocs-Bench, a
benchmark suite with 10 novel tasks where we create datasets that reflect
real-world use cases involving reasoning over Graphical User Interfaces (GUI)
and code generation from images. Our experiments show that training with
BigDocs-Bench improves average performance by up to 25.8% over closed-source
GPT-4o in document reasoning and structured output tasks such as
Screenshot2HTML or Image2Latex generation. Finally, human evaluations showed a
preference for outputs from models trained on BigDocs over GPT-4o. This
suggests that BigDocs can help both academics and the open-source community
utilize and improve AI tools to enhance multimodal capabilities and document
reasoning. The project is hosted at https://bigdocs.github.io .
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 21:41:20 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 16:32:24 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Rodriguez",
"Juan",
""
],
[
"Jian",
"Xiangru",
""
],
[
"Panigrahi",
"Siba Smarak",
""
],
[
"Zhang",
"Tianyu",
""
],
[
"Feizi",
"Aarash",
""
],
[
"Puri",
"Abhay",
""
],
[
"Kalkunte",
"Akshay",
""
],
[
"Savard",
"François",
""
],
[
"Masry",
"Ahmed",
""
],
[
"Nayak",
"Shravan",
""
],
[
"Awal",
"Rabiul",
""
],
[
"Massoud",
"Mahsa",
""
],
[
"Abaskohi",
"Amirhossein",
""
],
[
"Li",
"Zichao",
""
],
[
"Wang",
"Suyuchen",
""
],
[
"Noël",
"Pierre-André",
""
],
[
"Richter",
"Mats Leon",
""
],
[
"Vadacchino",
"Saverio",
""
],
[
"Agarwal",
"Shubham",
""
],
[
"Biswas",
"Sanket",
""
],
[
"Shanian",
"Sara",
""
],
[
"Zhang",
"Ying",
""
],
[
"Bolger",
"Noah",
""
],
[
"MacDonald",
"Kurt",
""
],
[
"Fauvel",
"Simon",
""
],
[
"Tejaswi",
"Sathwik",
""
],
[
"Sunkara",
"Srinivas",
""
],
[
"Monteiro",
"Joao",
""
],
[
"Dvijotham",
"Krishnamurthy DJ",
""
],
[
"Scholak",
"Torsten",
""
],
[
"Chapados",
"Nicolas",
""
],
[
"Kharagani",
"Sepideh",
""
],
[
"Hughes",
"Sean",
""
],
[
"Özsu",
"M.",
""
],
[
"Reddy",
"Siva",
""
],
[
"Pedersoli",
"Marco",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Pal",
"Christopher",
""
],
[
"Laradji",
"Issam",
""
],
[
"Gella",
"Spandana",
""
],
[
"Taslakian",
"Perouz",
""
],
[
"Vazquez",
"David",
""
],
[
"Rajeswar",
"Sai",
""
]
] | TITLE: BigDocs: An Open Dataset for Training Multimodal Models on Document and
Code Tasks
ABSTRACT: Multimodal AI has the potential to significantly enhance
document-understanding tasks, such as processing receipts, understanding
workflows, extracting data from documents, and summarizing reports. Code
generation tasks that require long-structured outputs can also be enhanced by
multimodality. Despite this, their use in commercial applications is often
constrained by limited access to training data and by restrictive licensing, which
hinders open access. To address these limitations, we introduce BigDocs-7.5M, a
high-quality, open-access dataset comprising 7.5 million multimodal documents
across 30 tasks. We use an efficient data curation process to ensure our data
is high-quality and license-permissive. Our process emphasizes accountability,
responsibility, and transparency through filtering rules, traceable metadata,
and careful content analysis. Additionally, we introduce BigDocs-Bench, a
benchmark suite with 10 novel tasks where we create datasets that reflect
real-world use cases involving reasoning over Graphical User Interfaces (GUI)
and code generation from images. Our experiments show that training with
BigDocs-Bench improves average performance by up to 25.8% over closed-source
GPT-4o in document reasoning and structured output tasks such as
Screenshot2HTML or Image2Latex generation. Finally, human evaluations showed a
preference for outputs from models trained on BigDocs over GPT-4o. This
suggests that BigDocs can help both academics and the open-source community
utilize and improve AI tools to enhance multimodal capabilities and document
reasoning. The project is hosted at https://bigdocs.github.io .
|
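
As a concrete illustration of how a BigDocs-style release might be consumed, here is a
minimal loading sketch using the Hugging Face `datasets` library. The dataset id and the
inspected fields are assumptions for illustration only; see https://bigdocs.github.io for
the actual artifacts.

```python
# Hypothetical usage sketch: stream a few examples from a BigDocs-style dataset.
# The dataset id below is an assumption, not a confirmed release name.
from datasets import load_dataset

ds = load_dataset("ServiceNow/BigDocs-Bench", split="train", streaming=True)
for example in ds.take(3):
    # Inspect whatever multimodal fields (image, text target, task tag) the
    # release actually provides.
    print(sorted(example.keys()))
```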
2412.06287 | Yanhao Wang | Qian Zhang, Panfeng Chen, Jiali Li, Linkun Feng, Shuyu Liu, Heng Zhao,
Mei Chen, Hui Li, Yanhao Wang | PediaBench: A Comprehensive Chinese Pediatric Dataset for Benchmarking
Large Language Models | 21 pages, 12 figures | null | 10.1007/s11704-025-41345-w | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | The emergence of Large Language Models (LLMs) in the medical domain has
underscored a compelling need for standard datasets to evaluate their
question-answering (QA) performance. Although there have been several benchmark
datasets for medical QA, they either cover common knowledge across different
departments or are specific to a department other than pediatrics.
Moreover, some of them are limited to objective questions and do not measure
the generation capacity of LLMs. Therefore, they cannot comprehensively assess
the QA ability of LLMs in pediatrics. To fill this gap, we construct
PediaBench, the first Chinese pediatric dataset for LLM evaluation.
Specifically, it contains 4,117 objective questions and 1,632 subjective
questions spanning 12 pediatric disease groups. It adopts an integrated scoring
criterion based on different difficulty levels to thoroughly assess the
proficiency of an LLM in instruction following, knowledge understanding,
clinical case analysis, etc. Finally, we validate the effectiveness of
PediaBench with extensive experiments on 20 open-source and commercial LLMs.
Through an in-depth analysis of experimental results, we offer insights into
the ability of LLMs to answer pediatric questions in the Chinese context,
highlighting their limitations to guide further improvements. Our code and data are
published at https://github.com/ACMISLab/PediaBench.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 08:19:28 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Dec 2024 01:20:14 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 07:54:16 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Qian",
""
],
[
"Chen",
"Panfeng",
""
],
[
"Li",
"Jiali",
""
],
[
"Feng",
"Linkun",
""
],
[
"Liu",
"Shuyu",
""
],
[
"Zhao",
"Heng",
""
],
[
"Chen",
"Mei",
""
],
[
"Li",
"Hui",
""
],
[
"Wang",
"Yanhao",
""
]
] | TITLE: PediaBench: A Comprehensive Chinese Pediatric Dataset for Benchmarking
Large Language Models
ABSTRACT: The emergence of Large Language Models (LLMs) in the medical domain has
underscored a compelling need for standard datasets to evaluate their
question-answering (QA) performance. Although there have been several benchmark
datasets for medical QA, they either cover common knowledge across different
departments or are specific to a department other than pediatrics.
Moreover, some of them are limited to objective questions and do not measure
the generation capacity of LLMs. Therefore, they cannot comprehensively assess
the QA ability of LLMs in pediatrics. To fill this gap, we construct
PediaBench, the first Chinese pediatric dataset for LLM evaluation.
Specifically, it contains 4,117 objective questions and 1,632 subjective
questions spanning 12 pediatric disease groups. It adopts an integrated scoring
criterion based on different difficulty levels to thoroughly assess the
proficiency of an LLM in instruction following, knowledge understanding,
clinical case analysis, etc. Finally, we validate the effectiveness of
PediaBench with extensive experiments on 20 open-source and commercial LLMs.
Through an in-depth analysis of experimental results, we offer insights into
the ability of LLMs to answer pediatric questions in the Chinese context,
highlighting their limitations to guide further improvements. Our code and data are
published at https://github.com/ACMISLab/PediaBench.
|
2412.06470 | Fei Wu | Fei Wu, Pablo Marquez-Neila, Hedyeh Rafi-Tarii, Raphael Sznitman | Active Learning with Context Sampling and One-vs-Rest Entropy for
Semantic Segmentation | WACV 2025 (Oral), 8 pages | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multi-class semantic segmentation remains a cornerstone challenge in computer
vision. Yet, dataset creation remains excessively demanding in time and effort,
especially for specialized domains. Active Learning (AL) mitigates this
challenge by selecting data points for annotation strategically. However,
existing patch-based AL methods often overlook the critical information carried
by boundary pixels, which is essential for accurate segmentation. We present OREAL, a novel
patch-based AL method designed for multi-class semantic segmentation. OREAL
enhances boundary detection by employing maximum aggregation of pixel-wise
uncertainty scores. Additionally, we introduce one-vs-rest entropy, a novel
uncertainty score function that computes class-wise uncertainties while
achieving implicit class balancing during dataset creation. Comprehensive
experiments across diverse datasets and model architectures validate our
hypothesis.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 13:15:52 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 00:35:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wu",
"Fei",
""
],
[
"Marquez-Neila",
"Pablo",
""
],
[
"Rafi-Tarii",
"Hedyeh",
""
],
[
"Sznitman",
"Raphael",
""
]
] | TITLE: Active Learning with Context Sampling and One-vs-Rest Entropy for
Semantic Segmentation
ABSTRACT: Multi-class semantic segmentation remains a cornerstone challenge in computer
vision. Yet, dataset creation remains excessively demanding in time and effort,
especially for specialized domains. Active Learning (AL) mitigates this
challenge by selecting data points for annotation strategically. However,
existing patch-based AL methods often overlook the critical information carried
by boundary pixels, which is essential for accurate segmentation. We present OREAL, a novel
patch-based AL method designed for multi-class semantic segmentation. OREAL
enhances boundary detection by employing maximum aggregation of pixel-wise
uncertainty scores. Additionally, we introduce one-vs-rest entropy, a novel
uncertainty score function that computes class-wise uncertainties while
achieving implicit class balancing during dataset creation. Comprehensive
experiments across diverse datasets and model architectures validate our
hypothesis.
|
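
To make the two ingredients named in the abstract concrete, here is a minimal sketch of
(a) a one-vs-rest (binary) entropy computed per class and pixel, and (b) max-aggregation
of pixel-wise uncertainty over a patch. Function and variable names are ours; the paper's
exact formulation may differ.

```python
# Sketch of one-vs-rest entropy with max-aggregation over a patch (our reading).
import numpy as np

def one_vs_rest_entropy(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """probs: (C, H, W) softmax output. Returns (C, H, W) binary entropies."""
    p = np.clip(probs, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def patch_uncertainty(probs: np.ndarray) -> np.ndarray:
    """Max-aggregate pixel-wise uncertainty over the patch; one score per class."""
    return one_vs_rest_entropy(probs).max(axis=(1, 2))

# Toy patch: 16x16 pixels, 5 classes, random softmax outputs.
probs = np.random.dirichlet(alpha=np.ones(5), size=(16, 16)).transpose(2, 0, 1)
print(patch_uncertainty(probs))  # class-wise uncertainty scores for this patch
```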
2412.06847 | Siyuan Guo | Siyuan Guo, Lexuan Wang, Chang Jin, Jinxian Wang, Han Peng, Huayang
Shi, Wengen Li, Jihong Guan, Shuigeng Zhou | M$^{3}$-20M: A Large-Scale Multi-Modal Molecule Dataset for AI-driven
Drug Design and Discovery | null | null | null | null | q-bio.QM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces M$^{3}$-20M, a large-scale Multi-Modal Molecule dataset
that contains over 20 million molecules, with the data mainly being integrated
from existing databases and partially generated by large language models.
Designed to support AI-driven drug design and discovery, M$^{3}$-20M contains 71
times more molecules than the largest existing dataset,
providing an unprecedented scale that can greatly benefit the training or
fine-tuning of models, including large language models for drug design and
discovery tasks. This dataset integrates one-dimensional SMILES,
two-dimensional molecular graphs, three-dimensional molecular structures,
physicochemical properties, and textual descriptions collected through web
crawling and generated using GPT-3.5, offering a comprehensive view of each
molecule. To demonstrate the power of M$^{3}$-20M in drug design and discovery,
we conduct extensive experiments on two key tasks: molecule generation and
molecular property prediction, using large language models including GLM4,
GPT-3.5, GPT-4, and Llama3-8b. Our experimental results show that M$^{3}$-20M
can significantly boost model performance in both tasks. Specifically, it
enables the models to generate more diverse and valid molecular structures and
achieve higher property prediction accuracy than existing single-modal
datasets, which validates the value and potential of M$^{3}$-20M in supporting
AI-driven drug design and discovery. The dataset is available at
https://github.com/bz99bz/M-3.
| [
{
"version": "v1",
"created": "Sun, 8 Dec 2024 03:43:07 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 12:37:49 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guo",
"Siyuan",
""
],
[
"Wang",
"Lexuan",
""
],
[
"Jin",
"Chang",
""
],
[
"Wang",
"Jinxian",
""
],
[
"Peng",
"Han",
""
],
[
"Shi",
"Huayang",
""
],
[
"Li",
"Wengen",
""
],
[
"Guan",
"Jihong",
""
],
[
"Zhou",
"Shuigeng",
""
]
] | TITLE: M$^{3}$-20M: A Large-Scale Multi-Modal Molecule Dataset for AI-driven
Drug Design and Discovery
ABSTRACT: This paper introduces M$^{3}$-20M, a large-scale Multi-Modal Molecule dataset
that contains over 20 million molecules, with the data mainly being integrated
from existing databases and partially generated by large language models.
Designed to support AI-driven drug design and discovery, M$^{3}$-20M contains 71
times more molecules than the largest existing dataset,
providing an unprecedented scale that can greatly benefit the training or
fine-tuning of models, including large language models for drug design and
discovery tasks. This dataset integrates one-dimensional SMILES,
two-dimensional molecular graphs, three-dimensional molecular structures,
physicochemical properties, and textual descriptions collected through web
crawling and generated using GPT-3.5, offering a comprehensive view of each
molecule. To demonstrate the power of M$^{3}$-20M in drug design and discovery,
we conduct extensive experiments on two key tasks: molecule generation and
molecular property prediction, using large language models including GLM4,
GPT-3.5, GPT-4, and Llama3-8b. Our experimental results show that M$^{3}$-20M
can significantly boost model performance in both tasks. Specifically, it
enables the models to generate more diverse and valid molecular structures and
achieve higher property prediction accuracy than existing single-modal
datasets, which validates the value and potential of M$^{3}$-20M in supporting
AI-driven drug design and discovery. The dataset is available at
https://github.com/bz99bz/M-3.
|
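
The abstract enumerates the modalities stored per molecule; the sketch below renders that
as a plain data structure. Field names and types are our assumptions, not the dataset's
actual schema.

```python
# Hypothetical per-molecule record mirroring the modalities listed in the abstract.
from dataclasses import dataclass, field

@dataclass
class MoleculeRecord:
    smiles: str                                       # 1D string representation
    graph_edges: list[tuple[int, int]]                # 2D molecular graph (bond list)
    conformer_xyz: list[tuple[float, float, float]]   # 3D atom coordinates
    properties: dict[str, float] = field(default_factory=dict)  # physicochemical
    description: str = ""                             # text from crawling / GPT-3.5

water = MoleculeRecord(
    smiles="O",
    graph_edges=[(0, 1), (0, 2)],
    conformer_xyz=[(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)],
    properties={"mol_weight": 18.015},
    description="Water.",
)
print(water.smiles, len(water.conformer_xyz), "atoms")
```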
2412.09468 | Yilei Zhao | Yilei Zhao, Wentao Zhang, Tingran Yang, Yong Jiang, Fei Huang, and Wei
Yang Bryan Lim | STORM: A Spatio-Temporal Factor Model Based on Dual Vector Quantized
Variational Autoencoders for Financial Trading | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In financial trading, factor models are widely used to price assets and
capture excess returns from mispricing. Recently, we have witnessed the rise of
variational autoencoder-based latent factor models, which learn latent factors
self-adaptively. While these models focus on modeling overall market
conditions, they often fail to effectively capture the temporal patterns of
individual stocks. Additionally, representing multiple factors as single values
simplifies the model but limits its ability to capture complex relationships
and dependencies. As a result, the learned factors are of low quality and lack
diversity, reducing their effectiveness and robustness across different trading
periods. To address these issues, we propose a Spatio-Temporal factOR Model
based on dual vector quantized variational autoencoders, named STORM, which
extracts features of stocks from temporal and spatial perspectives, then fuses
and aligns these features at the fine-grained and semantic level, and
represents the factors as multi-dimensional embeddings. The discrete codebooks
cluster similar factor embeddings, ensuring orthogonality and diversity, which
helps distinguish between different factors and enables factor selection in
financial trading. To show the performance of the proposed factor model, we
apply it to two downstream experiments: portfolio management on two stock
datasets and individual trading tasks on six specific stocks. The extensive
experiments demonstrate STORM's flexibility in adapting to downstream tasks and
superior performance over baseline models.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 17:15:49 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jan 2025 05:25:35 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 04:30:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhao",
"Yilei",
""
],
[
"Zhang",
"Wentao",
""
],
[
"Yang",
"Tingran",
""
],
[
"Jiang",
"Yong",
""
],
[
"Huang",
"Fei",
""
],
[
"Lim",
"Wei Yang Bryan",
""
]
] | TITLE: STORM: A Spatio-Temporal Factor Model Based on Dual Vector Quantized
Variational Autoencoders for Financial Trading
ABSTRACT: In financial trading, factor models are widely used to price assets and
capture excess returns from mispricing. Recently, we have witnessed the rise of
variational autoencoder-based latent factor models, which learn latent factors
self-adaptively. While these models focus on modeling overall market
conditions, they often fail to effectively capture the temporal patterns of
individual stocks. Additionally, representing multiple factors as single values
simplifies the model but limits its ability to capture complex relationships
and dependencies. As a result, the learned factors are of low quality and lack
diversity, reducing their effectiveness and robustness across different trading
periods. To address these issues, we propose a Spatio-Temporal factOR Model
based on dual vector quantized variational autoencoders, named STORM, which
extracts features of stocks from temporal and spatial perspectives, then fuses
and aligns these features at the fine-grained and semantic level, and
represents the factors as multi-dimensional embeddings. The discrete codebooks
cluster similar factor embeddings, ensuring orthogonality and diversity, which
helps distinguish between different factors and enables factor selection in
financial trading. To show the performance of the proposed factor model, we
apply it to two downstream experiments: portfolio management on two stock
datasets and individual trading tasks on six specific stocks. The extensive
experiments demonstrate STORM's flexibility in adapting to downstream tasks and
superior performance over baseline models.
|
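
For readers unfamiliar with vector quantization, the sketch below shows the generic
VQ-VAE codebook step the abstract builds on: each continuous factor embedding is snapped
to its nearest codebook entry, yielding a discrete code. This is the basic mechanism
only, not STORM's dual-codebook, spatio-temporal architecture.

```python
# Minimal vector-quantization step: nearest-codebook assignment (numpy).
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 16))   # 64 discrete codes, embedding dim 16
z_e = rng.normal(size=(8, 16))         # 8 encoder outputs (factor embeddings)

# Pairwise squared distances, then nearest-code assignment.
d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (8, 64)
codes = d2.argmin(axis=1)              # discrete factor indices
z_q = codebook[codes]                  # quantized embeddings fed downstream
print(codes)
```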
2412.11390 | Dongrui Wu | Xiaoqing Chen, Tianwang Jia, Dongrui Wu | A3E: Aligned and Augmented Adversarial Ensemble for Accurate, Robust and
Privacy-Preserving EEG Decoding | null | null | null | null | cs.HC cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | An electroencephalogram (EEG) based brain-computer interface (BCI) enables
direct communication between the brain and external devices. However, EEG-based
BCIs face at least three major challenges in real-world applications: data
scarcity and individual differences, adversarial vulnerability, and data
privacy. While previous studies have addressed one or two of these issues,
simultaneous accommodation of all three challenges remains challenging and
unexplored. This paper fills this gap by proposing an Aligned and Augmented
Adversarial Ensemble (A3E) algorithm and integrating it into three privacy
protection scenarios (centralized source-free transfer, federated source-free
transfer, and source data perturbation), achieving simultaneously accurate
decoding, adversarial robustness, and privacy protection of EEG-based BCIs.
Experiments on three public EEG datasets demonstrated that our proposed
approach outperformed over 10 classic and state-of-the-art approaches in both
accuracy and robustness in all three privacy-preserving scenarios, even
outperforming state-of-the-art transfer learning approaches that do not
consider privacy protection at all. This is the first time that three major
challenges in EEG-based BCIs can be addressed simultaneously, significantly
improving the practicality of EEG decoding in real-world BCIs.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 02:37:38 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 04:11:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Xiaoqing",
""
],
[
"Jia",
"Tianwang",
""
],
[
"Wu",
"Dongrui",
""
]
] | TITLE: A3E: Aligned and Augmented Adversarial Ensemble for Accurate, Robust and
Privacy-Preserving EEG Decoding
ABSTRACT: An electroencephalogram (EEG) based brain-computer interface (BCI) enables
direct communication between the brain and external devices. However, EEG-based
BCIs face at least three major challenges in real-world applications: data
scarcity and individual differences, adversarial vulnerability, and data
privacy. While previous studies have addressed one or two of these issues,
simultaneous accommodation of all three challenges remains challenging and
unexplored. This paper fills this gap by proposing an Aligned and Augmented
Adversarial Ensemble (A3E) algorithm and integrating it into three privacy
protection scenarios (centralized source-free transfer, federated source-free
transfer, and source data perturbation), achieving simultaneously accurate
decoding, adversarial robustness, and privacy protection of EEG-based BCIs.
Experiments on three public EEG datasets demonstrated that our proposed
approach outperformed over 10 classic and state-of-the-art approaches in both
accuracy and robustness in all three privacy-preserving scenarios, even
outperforming state-of-the-art transfer learning approaches that do not
consider privacy protection at all. This is the first time that three major
challenges in EEG-based BCIs can be addressed simultaneously, significantly
improving the practicality of EEG decoding in real-world BCIs.
|
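
The abstract does not spell out the alignment step. A common choice for EEG transfer
learning is Euclidean alignment, which whitens each subject's trials by the mean spatial
covariance; the sketch below implements that standard technique under the assumption
that this is what "Aligned" refers to.

```python
# Euclidean alignment (EA) sketch, a standard EEG transfer-learning step.
# Assumption: something like this is the "Aligned" component of A3E.
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_align(trials: np.ndarray) -> np.ndarray:
    """trials: (N, C, T) EEG epochs; whiten by the mean spatial covariance."""
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])          # per-trial (C, C)
    w = np.real(fractional_matrix_power(covs.mean(axis=0), -0.5))    # R^{-1/2}
    return np.einsum("cd,ndt->nct", w, trials)

x = np.random.randn(20, 8, 256)   # 20 trials, 8 channels, 256 time samples
print(euclidean_align(x).shape)   # (20, 8, 256)
```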
2412.14477 | Yeo Jin Jung | Yeo Jin Jung, Claire Donnat | Graph Topic Modeling for Documents with Spatial or Covariate
Dependencies | Revised proof for Section D, Appendix | null | null | null | cs.LG stat.ME | http://creativecommons.org/licenses/by/4.0/ | We address the challenge of incorporating document-level metadata into topic
modeling to improve topic mixture estimation. To overcome the computational
complexity and lack of theoretical guarantees in existing Bayesian methods, we
extend probabilistic latent semantic indexing (pLSI), a frequentist framework
for topic modeling, by incorporating document-level covariates or known
similarities between documents through a graph formalism. Modeling documents as
nodes and edges denoting similarities, we propose a new estimator based on a
fast graph-regularized iterative singular value decomposition (SVD) that
encourages similar documents to share similar topic mixture proportions. We
characterize the estimation error of our proposed method by deriving
high-probability bounds and develop a specialized cross-validation method to
optimize our regularization parameters. We validate our model through
comprehensive experiments on synthetic datasets and three real-world corpora,
demonstrating improved performance and faster inference compared to existing
Bayesian methods.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 03:00:26 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 05:35:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jung",
"Yeo Jin",
""
],
[
"Donnat",
"Claire",
""
]
] | TITLE: Graph Topic Modeling for Documents with Spatial or Covariate
Dependencies
ABSTRACT: We address the challenge of incorporating document-level metadata into topic
modeling to improve topic mixture estimation. To overcome the computational
complexity and lack of theoretical guarantees in existing Bayesian methods, we
extend probabilistic latent semantic indexing (pLSI), a frequentist framework
for topic modeling, by incorporating document-level covariates or known
similarities between documents through a graph formalism. Modeling documents as
nodes and edges denoting similarities, we propose a new estimator based on a
fast graph-regularized iterative singular value decomposition (SVD) that
encourages similar documents to share similar topic mixture proportions. We
characterize the estimation error of our proposed method by deriving
high-probability bounds and develop a specialized cross-validation method to
optimize our regularization parameters. We validate our model through
comprehensive experiments on synthetic datasets and three real-world corpora,
demonstrating improved performance and faster inference compared to existing
Bayesian methods.
|
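
As a rough illustration of the estimator's flavor, the sketch below runs an
orthogonal-iteration SVD in which the document-side factors are smoothed over a document
graph via its Laplacian. This is a simplified reading of "graph-regularized iterative
SVD"; the paper's algorithm, normalization, and cross-validated tuning differ.

```python
# Illustrative graph-smoothed orthogonal iteration (not the paper's exact method).
import numpy as np

def graph_regularized_svd(X, L, k, lam=0.1, iters=20):
    """X: (n_docs, n_words) counts; L: (n_docs, n_docs) graph Laplacian."""
    n = X.shape[0]
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, Vt = U[:, :k], Vt[:k]
    smoother = np.linalg.inv(np.eye(n) + lam * L)   # (I + lam * L)^{-1}
    for _ in range(iters):
        U = smoother @ (X @ Vt.T)   # smooth document factors over the graph
        U, _ = np.linalg.qr(U)      # re-orthonormalize
        Vt = U.T @ X                # refit word-side factors
    return U, Vt

# Tiny demo: 6 documents on a chain graph, 10 words, 2 topics.
rng = np.random.default_rng(1)
X = rng.poisson(1.0, size=(6, 10)).astype(float)
A = np.diag(np.ones(5), 1); A = A + A.T          # chain adjacency
L = np.diag(A.sum(1)) - A                        # unnormalized Laplacian
U, Vt = graph_regularized_svd(X, L, k=2)
print(U.shape, Vt.shape)
```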
2412.15018 | Georg Schramm | Masoud Elhamiasl, Frederic Jolivet, Ahmadreza Rezaei, Michael
Fieseler, Klaus Schäfers, Johan Nuyts, Georg Schramm, Fernando Boada | Joint estimation of activity, attenuation and motion in
respiratory-self-gated time-of-flight PET | 18 pages, 7 figures, 2 tables | null | 10.1088/1361-6560/adbed5 | null | physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Whole-body PET imaging is often hindered by respiratory motion during
acquisition, causing significant degradation in the quality of reconstructed
activity images. An additional challenge in PET/CT imaging arises from the
respiratory phase mismatch between CT-based attenuation correction and PET
acquisition, leading to attenuation artifacts. To address these issues, we
propose two new, purely data-driven methods for the joint estimation of
activity, attenuation, and motion in respiratory self-gated TOF PET. These
methods enable the reconstruction of a single activity image free from motion
and attenuation artifacts.
The proposed methods were evaluated using data from the anthropomorphic
Wilhelm phantom acquired on a Siemens mCT PET/CT system, as well as 3 clinical
FDG PET/CT datasets acquired on a GE DMI PET/CT system. Image quality was
assessed visually to identify motion and attenuation artifacts. Lesion uptake
values were quantitatively compared across reconstructions without motion
modeling, with motion modeling but static attenuation correction, and with our
proposed methods.
For the Wilhelm phantom, the proposed methods delivered image quality closely
matching the reference reconstruction from a static acquisition. The
lesion-to-background contrast for a liver dome lesion improved from 2.0 (no
motion correction) to 5.2 (proposed methods), matching the contrast from the
static acquisition (5.2). In contrast, motion modeling with static attenuation
correction yielded a lower contrast of 3.5. In patient datasets, the proposed
methods successfully reduced motion artifacts in lung and liver lesions and
mitigated attenuation artifacts, demonstrating superior lesion-to-background
separation.
Our proposed methods enable the reconstruction of a single, high-quality
activity image that is motion-corrected and free from attenuation artifacts,
without the need for external hardware.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 16:30:56 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 11:31:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Elhamiasl",
"Masoud",
""
],
[
"Jolivet",
"Frederic",
""
],
[
"Rezaei",
"Ahmadreza",
""
],
[
"Fieseler",
"Michael",
""
],
[
"Schäfers",
"Klaus",
""
],
[
"Nuyts",
"Johan",
""
],
[
"Schramm",
"Georg",
""
],
[
"Boada",
"Fernando",
""
]
] | TITLE: Joint estimation of activity, attenuation and motion in
respiratory-self-gated time-of-flight PET
ABSTRACT: Whole-body PET imaging is often hindered by respiratory motion during
acquisition, causing significant degradation in the quality of reconstructed
activity images. An additional challenge in PET/CT imaging arises from the
respiratory phase mismatch between CT-based attenuation correction and PET
acquisition, leading to attenuation artifacts. To address these issues, we
propose two new, purely data-driven methods for the joint estimation of
activity, attenuation, and motion in respiratory self-gated TOF PET. These
methods enable the reconstruction of a single activity image free from motion
and attenuation artifacts.
The proposed methods were evaluated using data from the anthropomorphic
Wilhelm phantom acquired on a Siemens mCT PET/CT system, as well as 3 clinical
FDG PET/CT datasets acquired on a GE DMI PET/CT system. Image quality was
assessed visually to identify motion and attenuation artifacts. Lesion uptake
values were quantitatively compared across reconstructions without motion
modeling, with motion modeling but static attenuation correction, and with our
proposed methods.
For the Wilhelm phantom, the proposed methods delivered image quality closely
matching the reference reconstruction from a static acquisition. The
lesion-to-background contrast for a liver dome lesion improved from 2.0 (no
motion correction) to 5.2 (proposed methods), matching the contrast from the
static acquisition (5.2). In contrast, motion modeling with static attenuation
correction yielded a lower contrast of 3.5. In patient datasets, the proposed
methods successfully reduced motion artifacts in lung and liver lesions and
mitigated attenuation artifacts, demonstrating superior lesion-to-background
separation.
Our proposed methods enable the reconstruction of a single, high-quality
activity image that is motion-corrected and free from attenuation artifacts,
without the need for external hardware.
|
2412.16217 | Hina Binte Haq | Hina Binte Haq, Syed Taha Ali, Asad Salman, Patrick McCorry and Siamak
F. Shahandashti | Neonpool: Reimagining cryptocurrency transaction pools for lightweight
clients and IoT devices | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The transaction pool plays a critical role in processing and disseminating
transactions in cryptocurrency networks. However, increasing transaction loads
strain the resources of full node deployments. We present Neonpool, an
innovative transaction pool optimization using bloom filter variants, which
reduces the memory footprint of the transaction pool to a fraction of its original size. Implemented
in C++ and benchmarked using a unique Bitcoin and Ethereum dataset, our
solution verifies and forwards transactions with over 99.99% accuracy and does
not necessitate a hard fork. Neonpool is ideally suited for lightweight
cryptocurrency clients and for resource-constrained devices such as browsers,
systems-on-a-chip, mobile or IoT devices.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 03:19:19 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 07:37:50 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Haq",
"Hina Binte",
""
],
[
"Ali",
"Syed Taha",
""
],
[
"Salman",
"Asad",
""
],
[
"McCorry",
"Patrick",
""
],
[
"Shahandashti",
"Siamak F.",
""
]
] | TITLE: Neonpool: Reimagining cryptocurrency transaction pools for lightweight
clients and IoT devices
ABSTRACT: The transaction pool plays a critical role in processing and disseminating
transactions in cryptocurrency networks. However, increasing transaction loads
strain the resources of full node deployments. We present Neonpool, an
innovative transaction pool optimization using bloom filter variants, which
reduces the memory footprint of the transaction pool to a fraction of its original size. Implemented
in C++ and benchmarked using a unique Bitcoin and Ethereum dataset, our
solution verifies and forwards transactions with over 99.99% accuracy and does
not necessitate a hard fork. Neonpool is ideally suited for lightweight
cryptocurrency clients and for resource-constrained devices such as browsers,
systems-on-a-chip, mobile or IoT devices.
|
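
For context, the sketch below is a textbook Bloom filter, the data-structure family on
which Neonpool builds its variants: a fixed bit array and k hashes answer membership
queries compactly, trading a small false-positive rate (never false negatives) for a
large memory saving. The parameters are arbitrary and this is not Neonpool's design.

```python
# Textbook Bloom filter sketch (illustrative parameters).
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 20, k_hashes: int = 7):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.blake2b(item, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

pool = BloomFilter()
pool.add(b"txid:deadbeef")
print(b"txid:deadbeef" in pool, b"txid:unknown" in pool)  # True, (almost surely) False
```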
2412.16848 | Kun Wu | Kun Wu, Yinuo Zhao, Zhiyuan Xu, Zhengping Che, Chengxiang Yin, Chi
Harold Liu, Feiferi Feng, Jian Tang | ACL-QL: Adaptive Conservative Level in Q-Learning for Offline
Reinforcement Learning | 19 pages, 4 figures, IEEE Transactions on Neural Networks and
Learning Systems (2024) | null | 10.1109/TNNLS.2024.3497667 | null | cs.LG cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline Reinforcement Learning (RL), which operates solely on static datasets
without further interactions with the environment, provides an appealing
alternative to learning a safe and promising control policy. The prevailing
methods typically learn a conservative policy to mitigate the problem of
Q-value overestimation, but this is prone to overcorrection, leading to an overly
conservative policy. Moreover, they optimize all samples equally with fixed
constraints, lacking the nuanced ability to control conservative levels in a
fine-grained manner. Consequently, this limitation results in a performance
decline. To address the above two challenges in a unified way, we propose a
framework, Adaptive Conservative Level in Q-Learning (ACL-QL), which limits the
Q-values in a mild range and enables adaptive control on the conservative level
over each state-action pair, i.e., lifting the Q-values more for good
transitions and less for bad transitions. We theoretically analyze the
conditions under which the conservative level of the learned Q-function can be
limited in a mild range and how to optimize each transition adaptively.
Motivated by the theoretical analysis, we propose a novel algorithm, ACL-QL,
which uses two learnable adaptive weight functions to control the conservative
level over each transition. Subsequently, we design a monotonicity loss and
surrogate losses to train the adaptive weight functions, Q-function, and policy
network alternately. We evaluate ACL-QL on the commonly used D4RL benchmark
and conduct extensive ablation studies to illustrate the effectiveness and
state-of-the-art performance compared to existing offline DRL baselines.
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 04:18:02 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 06:25:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wu",
"Kun",
""
],
[
"Zhao",
"Yinuo",
""
],
[
"Xu",
"Zhiyuan",
""
],
[
"Che",
"Zhengping",
""
],
[
"Yin",
"Chengxiang",
""
],
[
"Liu",
"Chi Harold",
""
],
[
"Feng",
"Feiferi",
""
],
[
"Tang",
"Jian",
""
]
] | TITLE: ACL-QL: Adaptive Conservative Level in Q-Learning for Offline
Reinforcement Learning
ABSTRACT: Offline Reinforcement Learning (RL), which operates solely on static datasets
without further interactions with the environment, provides an appealing
alternative to learning a safe and promising control policy. The prevailing
methods typically learn a conservative policy to mitigate the problem of
Q-value overestimation, but this is prone to overcorrection, leading to an overly
conservative policy. Moreover, they optimize all samples equally with fixed
constraints, lacking the nuanced ability to control conservative levels in a
fine-grained manner. Consequently, this limitation results in a performance
decline. To address the above two challenges in a unified way, we propose a
framework, Adaptive Conservative Level in Q-Learning (ACL-QL), which limits the
Q-values in a mild range and enables adaptive control on the conservative level
over each state-action pair, i.e., lifting the Q-values more for good
transitions and less for bad transitions. We theoretically analyze the
conditions under which the conservative level of the learned Q-function can be
limited in a mild range and how to optimize each transition adaptively.
Motivated by the theoretical analysis, we propose a novel algorithm, ACL-QL,
which uses two learnable adaptive weight functions to control the conservative
level over each transition. Subsequently, we design a monotonicity loss and
surrogate losses to train the adaptive weight functions, Q-function, and policy
network alternately. We evaluate ACL-QL on the commonly used D4RL benchmark
and conduct extensive ablation studies to illustrate the effectiveness and
state-of-the-art performance compared to existing offline DRL baselines.
|
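
To make the "adaptive conservative level" idea concrete, here is an illustrative PyTorch
sketch in which two per-transition weights control how strongly Q-values are pushed down
on sampled out-of-distribution actions versus lifted on dataset actions. This paraphrases
the abstract; the paper's actual monotonicity and surrogate losses are more involved.

```python
# Illustrative per-transition adaptive conservatism (not the paper's exact losses).
import torch

def adaptive_conservative_loss(q, q_net, s, a, w_up, w_down, num_samples=8):
    """q: Q(s, a) on dataset pairs; w_up / w_down: per-transition weights in [0, 1]."""
    # Push Q down on random (out-of-distribution) actions, weighted per transition.
    rand_a = torch.rand(num_samples, *a.shape)
    q_rand = torch.stack([q_net(s, ra) for ra in rand_a]).mean(0)
    return (w_down * q_rand - w_up * q).mean()

# Smoke test with a fixed linear Q-network over concatenated (s, a).
q_net = lambda s, a: (torch.cat([s, a], -1) @ torch.ones(6, 1)).squeeze(-1)
s, a = torch.randn(32, 4), torch.rand(32, 2)
w = torch.sigmoid(torch.randn(32))   # stand-in for learned weight functions
print(adaptive_conservative_loss(q_net(s, a), q_net, s, a, w, 1 - w))
```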
2412.21016 | Mingxuan Xiao | Mingxuan Xiao, Yan Xiao, Shunhui Ji, Hanbo Cai, Lei Xue, Pengcheng
Zhang | Assessing the Robustness of LLM-based NLP Software via Automated Testing | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Benefiting from the advancements in LLMs, NLP software has undergone rapid
development. Such software is widely employed in various safety-critical tasks,
such as financial sentiment analysis, toxic content moderation, and log
generation. Unlike traditional software, LLM-based NLP software relies on
prompts and examples as inputs. Given the complexity of LLMs and the
unpredictability of real-world inputs, quantitatively assessing the robustness
of such software is crucial. However, to the best of our knowledge, no
automated robustness testing methods have been specifically designed to
evaluate the overall inputs of LLM-based NLP software. To this end, this paper
introduces the first AutOmated Robustness Testing frAmework, AORTA, which
recasts the testing process as a combinatorial optimization problem.
Existing testing methods designed for DNN-based software can be applied to
LLM-based software by AORTA, but their effectiveness is limited. To address
this, we propose a novel testing method for LLM-based software within AORTA
called Adaptive Beam Search (ABS). ABS is tailored for the expansive feature space of
LLMs and improves testing effectiveness through an adaptive beam width and the
capability for backtracking. We successfully embed 18 test methods in the
designed framework AORTA and compared the test validity of ABS on three
datasets and five threat models. ABS facilitates a more comprehensive and
accurate robustness assessment before software deployment, with an average test
success rate of 86.138%. Compared to the currently best-performing baseline
PWWS, ABS significantly reduces the computational overhead by up to 3441.895
seconds per successful test case and decreases the number of queries by a factor
of 218.762 on average. Furthermore, test cases generated by ABS exhibit greater
naturalness and transferability.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 15:33:34 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:42:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xiao",
"Mingxuan",
""
],
[
"Xiao",
"Yan",
""
],
[
"Ji",
"Shunhui",
""
],
[
"Cai",
"Hanbo",
""
],
[
"Xue",
"Lei",
""
],
[
"Zhang",
"Pengcheng",
""
]
] | TITLE: Assessing the Robustness of LLM-based NLP Software via Automated Testing
ABSTRACT: Benefiting from the advancements in LLMs, NLP software has undergone rapid
development. Such software is widely employed in various safety-critical tasks,
such as financial sentiment analysis, toxic content moderation, and log
generation. Unlike traditional software, LLM-based NLP software relies on
prompts and examples as inputs. Given the complexity of LLMs and the
unpredictability of real-world inputs, quantitatively assessing the robustness
of such software is crucial. However, to the best of our knowledge, no
automated robustness testing methods have been specifically designed to
evaluate the overall inputs of LLM-based NLP software. To this end, this paper
introduces the first AutOmated Robustness Testing frAmework, AORTA, which
recasts the testing process as a combinatorial optimization problem.
Existing testing methods designed for DNN-based software can be applied to
LLM-based software by AORTA, but their effectiveness is limited. To address
this, we propose a novel testing method for LLM-based software within AORTA
called Adaptive Beam Search (ABS). ABS is tailored for the expansive feature space of
LLMs and improves testing effectiveness through an adaptive beam width and the
capability for backtracking. We successfully embed 18 test methods in the
designed framework AORTA and compared the test validity of ABS on three
datasets and five threat models. ABS facilitates a more comprehensive and
accurate robustness assessment before software deployment, with an average test
success rate of 86.138%. Compared to the currently best-performing baseline
PWWS, ABS significantly reduces the computational overhead by up to 3441.895
seconds per successful test case and decreases the number of queries by a factor
of 218.762 on average. Furthermore, test cases generated by ABS exhibit greater
naturalness and transferability.
|
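
The sketch below is plain beam search over word substitutions, the skeleton that ABS
refines: keep the beam_width candidate inputs that most reduce the victim model's
confidence on the true label, and stop once the label would flip. The adaptive beam width
and backtracking that distinguish ABS are omitted, and all names here are ours.

```python
# Plain beam-search word-substitution attack (skeleton only; ABS adds adaptivity).
def beam_search_attack(words, synonyms, confidence, beam_width=3, max_steps=5):
    """words: token list; synonyms: word -> alternatives;
    confidence: callable(list[str]) -> victim's score on the true label."""
    beam = [words]
    for _ in range(max_steps):
        candidates = []
        for cand in beam:
            for i, w in enumerate(cand):
                for alt in synonyms.get(w, []):
                    candidates.append(cand[:i] + [alt] + cand[i + 1:])
        if not candidates:
            break
        beam = sorted(candidates, key=confidence)[:beam_width]  # lowest confidence first
        if confidence(beam[0]) < 0.5:   # label flip: adversarial test case found
            return beam[0]
    return None

# Toy victim whose confidence drops when negative words appear.
conf = lambda ws: 0.9 - 0.5 * sum(w in {"bad", "awful"} for w in ws)
print(beam_search_attack("the movie was good".split(),
                         {"good": ["bad", "awful"], "movie": ["film"]}, conf))
```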
2412.21124 | Tim Tsz-Kit Lau | Tim Tsz-Kit Lau, Weijian Li, Chenwei Xu, Han Liu, Mladen Kolar | Adaptive Batch Size Schedules for Distributed Training of Language
Models with Data and Model Parallelism | The Second Conference on Parsimony and Learning (CPAL; Proceedings
Track), March 2025, Stanford, CA | null | null | null | cs.LG math.OC stat.ML | http://creativecommons.org/licenses/by/4.0/ | An appropriate choice of batch sizes in large-scale model training is
crucial, yet it involves an intrinsic yet inevitable dilemma: large-batch
training improves training efficiency in terms of memory utilization, while
generalization performance often deteriorates due to small amounts of gradient
noise. Despite this dilemma, the common practice of choosing batch sizes in
language model training often prioritizes training efficiency -- employing
either constant large sizes with data parallelism or implementing batch size
warmup schedules. However, such batch size schedule designs remain heuristic
and often fail to adapt to training dynamics, presenting the challenge of
designing adaptive batch size schedules. Given the abundance of available
datasets and the data-hungry nature of language models, data parallelism has
become an indispensable distributed training paradigm, enabling the use of
larger batch sizes for gradient computation. However, vanilla data parallelism
requires replicas of model parameters, gradients, and optimizer states at each
worker, which prohibits training larger models with billions of parameters. To
optimize memory usage, more advanced parallelism strategies must be employed.
In this work, we propose general-purpose and theoretically principled adaptive
batch size schedules compatible with data parallelism and model parallelism. We
develop a practical implementation with PyTorch Fully Sharded Data Parallel,
facilitating the pretraining of language models of different sizes. We
empirically demonstrate that our proposed approaches outperform constant batch
sizes and heuristic batch size warmup schedules in the pretraining of models in
the Llama 2 family, with particular focus on smaller models with up to 3
billion parameters. We also establish theoretical convergence guarantees for
such adaptive batch size schedules with Adam for general smooth nonconvex
objectives.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 17:55:28 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 21:10:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lau",
"Tim Tsz-Kit",
""
],
[
"Li",
"Weijian",
""
],
[
"Xu",
"Chenwei",
""
],
[
"Liu",
"Han",
""
],
[
"Kolar",
"Mladen",
""
]
] | TITLE: Adaptive Batch Size Schedules for Distributed Training of Language
Models with Data and Model Parallelism
ABSTRACT: An appropriate choice of batch sizes in large-scale model training is
crucial, yet it involves an intrinsic yet inevitable dilemma: large-batch
training improves training efficiency in terms of memory utilization, while
generalization performance often deteriorates due to small amounts of gradient
noise. Despite this dilemma, the common practice of choosing batch sizes in
language model training often prioritizes training efficiency -- employing
either constant large sizes with data parallelism or implementing batch size
warmup schedules. However, such batch size schedule designs remain heuristic
and often fail to adapt to training dynamics, presenting the challenge of
designing adaptive batch size schedules. Given the abundance of available
datasets and the data-hungry nature of language models, data parallelism has
become an indispensable distributed training paradigm, enabling the use of
larger batch sizes for gradient computation. However, vanilla data parallelism
requires replicas of model parameters, gradients, and optimizer states at each
worker, which prohibits training larger models with billions of parameters. To
optimize memory usage, more advanced parallelism strategies must be employed.
In this work, we propose general-purpose and theoretically principled adaptive
batch size schedules compatible with data parallelism and model parallelism. We
develop a practical implementation with PyTorch Fully Sharded Data Parallel,
facilitating the pretraining of language models of different sizes. We
empirically demonstrate that our proposed approaches outperform constant batch
sizes and heuristic batch size warmup schedules in the pretraining of models in
the Llama 2 family, with particular focus on smaller models with up to 3
billion parameters. We also establish theoretical convergence guarantees for
such adaptive batch size schedules with Adam for general smooth nonconvex
objectives.
|
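
To contrast the two regimes the abstract discusses, here is a toy sketch of a heuristic
warmup schedule next to an adaptive rule driven by a gradient-noise statistic. The
thresholds are arbitrary; this is the broad idea, not the paper's theoretically
principled schedule.

```python
# Toy batch-size schedules: heuristic warmup vs. a noise-driven adaptive rule.
def warmup_schedule(step, base=256, peak=4096, warmup_steps=1000):
    """Linear batch-size warmup, then constant (the heuristic baseline)."""
    if step >= warmup_steps:
        return peak
    return int(base + (peak - base) * step / warmup_steps)

def adaptive_schedule(current_bs, noise_scale, lo=0.5, hi=2.0, cap=4096, floor=64):
    """Grow the batch when gradient noise dominates; shrink it when it does not."""
    if noise_scale > hi * current_bs:
        return min(2 * current_bs, cap)
    if noise_scale < lo * current_bs:
        return max(current_bs // 2, floor)
    return current_bs

print([warmup_schedule(s) for s in (0, 500, 2000)])   # [256, 2176, 4096]
print(adaptive_schedule(256, noise_scale=1024.0))     # 512 (noise dominates)
```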
2501.01908 | Mahdi Saberi | Mahdi Saberi, Chi Zhang, Mehmet Akcakaya | Training-Free Mitigation of Adversarial Attacks on Deep Learning-Based
MRI Reconstruction | null | null | null | null | cs.CV cs.LG eess.IV physics.med-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning (DL) methods, especially those based on physics-driven DL, have
become the state-of-the-art for reconstructing sub-sampled magnetic resonance
imaging (MRI) data. However, studies have shown that these methods are
susceptible to small adversarial input perturbations, or attacks, resulting in
major distortions in the output images. Various strategies have been proposed
to reduce the effects of these attacks, but they require retraining and may
lower reconstruction quality for non-perturbed/clean inputs. In this work, we
propose a novel approach for mitigating adversarial attacks on MRI
reconstruction models without any retraining. Our framework is based on the
idea of cyclic measurement consistency. The output of the model is mapped to
another set of MRI measurements for a different sub-sampling pattern, and this
synthesized data is reconstructed with the same model. Intuitively, without an
attack, the second reconstruction is expected to be consistent with the first,
while with an attack, disruptions are present. A novel objective function is
devised based on this idea, which is minimized within a small ball around the
attack input for mitigation. Experimental results show that our method
substantially reduces the impact of adversarial perturbations across different
datasets, attack types/strengths and PD-DL networks, and qualitatively and
quantitatively outperforms conventional mitigation methods that involve
retraining. Finally, we extend our mitigation method to two important practical
scenarios: a blind setup, where the attack strength or algorithm is not known
to the end user; and an adaptive attack setup, where the attacker has full
knowledge of the defense strategy. Our approach remains effective in both
cases.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2025 17:23:52 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 19:50:19 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Saberi",
"Mahdi",
""
],
[
"Zhang",
"Chi",
""
],
[
"Akcakaya",
"Mehmet",
""
]
] | TITLE: Training-Free Mitigation of Adversarial Attacks on Deep Learning-Based
MRI Reconstruction
ABSTRACT: Deep learning (DL) methods, especially those based on physics-driven DL, have
become the state-of-the-art for reconstructing sub-sampled magnetic resonance
imaging (MRI) data. However, studies have shown that these methods are
susceptible to small adversarial input perturbations, or attacks, resulting in
major distortions in the output images. Various strategies have been proposed
to reduce the effects of these attacks, but they require retraining and may
lower reconstruction quality for non-perturbed/clean inputs. In this work, we
propose a novel approach for mitigating adversarial attacks on MRI
reconstruction models without any retraining. Our framework is based on the
idea of cyclic measurement consistency. The output of the model is mapped to
another set of MRI measurements for a different sub-sampling pattern, and this
synthesized data is reconstructed with the same model. Intuitively, without an
attack, the second reconstruction is expected to be consistent with the first,
while with an attack, disruptions are present. A novel objective function is
devised based on this idea, which is minimized within a small ball around the
attack input for mitigation. Experimental results show that our method
substantially reduces the impact of adversarial perturbations across different
datasets, attack types/strengths and PD-DL networks, and qualitatively and
quantitatively outperforms conventional mitigation methods that involve
retraining. Finally, we extend our mitigation method to two important practical
scenarios: a blind setup, where the attack strength or algorithm is not known
to the end user; and an adaptive attack setup, where the attacker has full
knowledge of the defense strategy. Our approach remains effective in both
cases.
|
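
A schematic numpy sketch of the cyclic measurement consistency idea: reconstruct,
re-measure with a different sampling mask, reconstruct again, and measure the
disagreement, which should be small for clean inputs and large under attack. The
zero-filled "network" below is a stand-in for a trained PD-DL model, and the mitigation
step (minimizing this objective in a small ball around the input) is omitted.

```python
# Cyclic measurement consistency sketch (stand-in reconstructor, simplified).
import numpy as np

def cycle_inconsistency(y, mask1, mask2, recon):
    """y: undersampled k-space (H, W); mask1 / mask2: boolean sampling patterns."""
    x1 = recon(y, mask1)                 # first reconstruction
    y2 = np.fft.fft2(x1) * mask2         # synthesize measurements for a new pattern
    x2 = recon(y2, mask2)                # second reconstruction with the same model
    return np.linalg.norm(x1 - x2)       # large under attack, small when clean

# Toy "network": zero-filled inverse FFT (a trained PD-DL model goes here).
recon = lambda y, m: np.fft.ifft2(y * m).real
h = w = 32
m1 = np.random.rand(h, w) < 0.5
m2 = np.random.rand(h, w) < 0.5
y = np.fft.fft2(np.random.randn(h, w)) * m1
print(cycle_inconsistency(y, m1, m2, recon))
```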
2501.02464 | Yuliang Guo | Yuliang Guo, Sparsh Garg, S. Mahdi H. Miangoleh, Xinyu Huang, Liu Ren | Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera | null | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | While recent depth foundation models exhibit strong zero-shot generalization,
achieving accurate metric depth across diverse camera types, particularly those
with large fields of view (FoV) such as fisheye and 360-degree cameras, remains
a significant challenge. This paper presents Depth Any Camera (DAC), a powerful
zero-shot metric depth estimation framework that extends a perspective-trained
model to effectively handle cameras with varying FoVs. The framework is
designed to ensure that all existing 3D data can be leveraged, regardless of
the specific camera types used in new applications. Remarkably, DAC is trained
exclusively on perspective images but generalizes seamlessly to fisheye and
360-degree cameras without the need for specialized training data. DAC employs
Equi-Rectangular Projection (ERP) as a unified image representation, enabling
consistent processing of images with diverse FoVs. Its core components include
pitch-aware Image-to-ERP conversion with efficient online augmentation to
simulate distorted ERP patches from undistorted inputs, FoV alignment
operations to enable effective training across a wide range of FoVs, and
multi-resolution data augmentation to further address resolution disparities
between training and testing. DAC achieves state-of-the-art zero-shot metric
depth estimation, improving $\delta_1$ accuracy by up to 50% on multiple
fisheye and 360-degree datasets compared to prior metric depth foundation
models, demonstrating robust generalization across camera types.
| [
{
"version": "v1",
"created": "Sun, 5 Jan 2025 07:22:40 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 18:28:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guo",
"Yuliang",
""
],
[
"Garg",
"Sparsh",
""
],
[
"Miangoleh",
"S. Mahdi H.",
""
],
[
"Huang",
"Xinyu",
""
],
[
"Ren",
"Liu",
""
]
] | TITLE: Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera
ABSTRACT: While recent depth foundation models exhibit strong zero-shot generalization,
achieving accurate metric depth across diverse camera types, particularly those
with large fields of view (FoV) such as fisheye and 360-degree cameras, remains
a significant challenge. This paper presents Depth Any Camera (DAC), a powerful
zero-shot metric depth estimation framework that extends a perspective-trained
model to effectively handle cameras with varying FoVs. The framework is
designed to ensure that all existing 3D data can be leveraged, regardless of
the specific camera types used in new applications. Remarkably, DAC is trained
exclusively on perspective images but generalizes seamlessly to fisheye and
360-degree cameras without the need for specialized training data. DAC employs
Equi-Rectangular Projection (ERP) as a unified image representation, enabling
consistent processing of images with diverse FoVs. Its core components include
pitch-aware Image-to-ERP conversion with efficient online augmentation to
simulate distorted ERP patches from undistorted inputs, FoV alignment
operations to enable effective training across a wide range of FoVs, and
multi-resolution data augmentation to further address resolution disparities
between training and testing. DAC achieves state-of-the-art zero-shot metric
depth estimation, improving $\delta_1$ accuracy by up to 50% on multiple
fisheye and 360-degree datasets compared to prior metric depth foundation
models, demonstrating robust generalization across camera types.
|
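
The geometry that makes ERP a unified representation is compact enough to show directly:
every ERP pixel corresponds to a viewing ray on the unit sphere, so a single grid can
cover any field of view. The sketch below is the standard pixel-to-ray mapping, with our
own axis convention.

```python
# Standard equirectangular (ERP) pixel -> unit viewing ray.
import numpy as np

def erp_pixel_to_ray(u, v, width, height):
    """(u, v) pixel coordinates in an ERP image -> unit ray (x, y, z)."""
    lon = (u / width - 0.5) * 2.0 * np.pi    # longitude in [-pi, pi]
    lat = (0.5 - v / height) * np.pi         # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

print(erp_pixel_to_ray(960, 480, 1920, 960))  # center pixel -> forward ray (0, 0, 1)
```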
2501.05662 | Zheqi Lv | Zheqi Lv, Wenkai Wang, Jiawei Wang, Shengyu Zhang, Fei Wu | Cascaded Self-Evaluation Augmented Training for Lightweight Multimodal
LLMs | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient Multimodal Large Language Models (EMLLMs) can improve performance
through Chain-of-Thought (CoT) reasoning, but they have poor self-evaluation
capabilities during the CoT reasoning process. This is due to their tendency to
simplify the reasoning process and the degradation of self-evaluation ability
during downstream task fine-tuning. To address this, we intuitively propose
\textit{Self-Evaluation Augmented Training (SEAT)}, which uses more powerful
EMLLMs to evaluate CoT reasoning data. The evaluation data is then used to
train EMLLMs. However, due to the difficulties EMLLMs face with processing long
token input-output sequences, and the degradation of self-evaluation ability as
a basis for CoT reasoning, the SEAT method is not fully adequate. Therefore, we
further propose \textit{Cascaded Self-Evaluation Augmented Training
(Cas-SEAT)}, which converts long prompts into cascaded short prompts, each
focusing on a specific task. Additionally, we mix CoT reasoning and
self-evaluation data to preserve its CoT reasoning ability while enhancing the
self-evaluation capability of EMLLMs. We also conduct \textit{Double-level Data
Filtering (DDF)}, which includes source data filtering and labeled data
filtering, using both manual selection and MLLMs for filtering. Cas-SEAT and
DDF work together to improve the performance of EMLLMs. Experiments show that
Cas-SEAT achieves an average improvement of 22.16% across multiple datasets,
and DDF significantly reduces the resource consumption of training.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2025 02:28:04 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 02:28:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lv",
"Zheqi",
""
],
[
"Wang",
"Wenkai",
""
],
[
"Wang",
"Jiawei",
""
],
[
"Zhang",
"Shengyu",
""
],
[
"Wu",
"Fei",
""
]
] | TITLE: Cascaded Self-Evaluation Augmented Training for Lightweight Multimodal
LLMs
ABSTRACT: Efficient Multimodal Large Language Models (EMLLMs) can improve performance
through Chain-of-Thought (CoT) reasoning, but they have poor self-evaluation
capabilities during the CoT reasoning process. This is due to their tendency to
simplify the reasoning process and the degradation of self-evaluation ability
during downstream task fine-tuning. To address this, we intuitively propose
\textit{Self-Evaluation Augmented Training (SEAT)}, which uses more powerful
EMLLMs to evaluate CoT reasoning data. The evaluation data is then used to
train EMLLMs. However, because EMLLMs struggle to process long input-output
token sequences and their self-evaluation ability degrades as a basis for CoT
reasoning, the SEAT method alone is not fully effective. Therefore, we
further propose \textit{Cascaded Self-Evaluation Augmented Training
(Cas-SEAT)}, which converts long prompts into cascaded short prompts, each
focusing on a specific task. Additionally, we mix CoT reasoning and
self-evaluation data to preserve their CoT reasoning ability while enhancing the
self-evaluation capability of EMLLMs. We also conduct \textit{Double-level Data
Filtering (DDF)}, which includes source data filtering and labeled data
filtering, using both manual selection and MLLMs for filtering. Cas-SEAT and
DDF work together to improve the performance of EMLLMs. Experiments show that
Cas-SEAT achieves an average improvement of 22.16% across multiple datasets,
and DDF significantly reduces the resource consumption of training.
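As a loose illustration of the cascading idea in this record (one long self-evaluation prompt split into short, single-task prompts), here is a tiny Python sketch; the prompt wording and the ask() callback are assumptions, not the authors' released prompts.

def cascaded_self_evaluation(question, cot, ask):
    # ask(prompt) -> str is any chat-model call supplied by the caller.
    # Each short prompt targets one evaluation sub-task, mirroring how
    # Cas-SEAT replaces a single long evaluation prompt with a cascade.
    steps = [
        "Check only whether each reasoning step below is logically valid.",
        "Check only whether the final answer follows from the reasoning.",
        "Rate the overall answer quality from 1 to 5.",
    ]
    context = "Question: " + question + "\nReasoning: " + cot
    return [ask(step + "\n\n" + context) for step in steps]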
|
2501.07155 | Bangchen Yin | Bangchen Yin, Jiaao Wang, Weitao Du, Pengbo Wang, Penghua Ying, Haojun
Jia, Zisheng Zhang, Yuanqi Du, Carla P. Gomes, Graeme Henkelman, Chenru Duan,
Hai Xiao | AlphaNet: Scaling Up Local Frame-based Atomistic Interatomic Potential | 15 pages, 4 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular dynamics simulations demand unprecedented accuracy and scalability
to tackle grand challenges in energy materials, catalytic processes, and
biomolecular design. To bridge this gap, we present AlphaNet, a local
frame-based equivariant model that simultaneously advances computational
efficiency and predictive precision for atomistic systems. By constructing
equivariant local frames with learnable geometric transitions, AlphaNet encodes
atomic environments with enhanced representational capacity, achieving
state-of-the-art accuracy in energy and force predictions. Extensive benchmarks spanning
defected graphene, formate decomposition, inorganic bulks, and large-scale
datasets (OC2M and Matbench Discovery) demonstrate its superior performance
over existing neural network interatomic potentials while ensuring scalability
across diverse system sizes. The synergy of accuracy, efficiency, and
transferability positions AlphaNet as a transformative tool for simulating
multiscale phenomena, from catalyst dynamics to energy storage interfaces, with
direct implications for accelerating the discovery of functional materials and
complex molecular systems.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2025 09:28:47 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 09:59:57 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 02:32:22 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yin",
"Bangchen",
""
],
[
"Wang",
"Jiaao",
""
],
[
"Du",
"Weitao",
""
],
[
"Wang",
"Pengbo",
""
],
[
"Ying",
"Penghua",
""
],
[
"Jia",
"Haojun",
""
],
[
"Zhang",
"Zisheng",
""
],
[
"Du",
"Yuanqi",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"Henkelman",
"Graeme",
""
],
[
"Duan",
"Chenru",
""
],
[
"Xiao",
"Hai",
""
]
] | TITLE: AlphaNet: Scaling Up Local Frame-based Atomistic Interatomic Potential
ABSTRACT: Molecular dynamics simulations demand unprecedented accuracy and scalability
to tackle grand challenges in energy materials, catalytic processes, and
biomolecular design. To bridge this gap, we present AlphaNet, a local
frame-based equivariant model that simultaneously advances computational
efficiency and predictive precision for atomistic systems. By constructing
equivariant local frames with learnable geometric transitions, AlphaNet encodes
atomic environments with enhanced representational capacity, achieving
state-of-the-art accuracy in energy and force predictions. Extensive benchmarks spanning
defected graphene, formate decomposition, inorganic bulks, and large-scale
datasets (OC2M and Matbench Discovery) demonstrate its superior performance
over existing neural network interatomic potentials while ensuring scalability
across diverse system sizes. The synergy of accuracy, efficiency, and
transferability positions AlphaNet as a transformative tool for simulating
multiscale phenomena, from catalyst dynamics to energy storage interfaces, with
direct implications for accelerating the discovery of functional materials and
complex molecular systems.
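To make the local-frame idea in this record concrete, here is a minimal NumPy sketch that builds an orthonormal frame from two neighbour displacement vectors via Gram-Schmidt; coordinates expressed in such a frame are rotation-invariant. AlphaNet's learnable geometric transitions are omitted, and this construction is a generic one, not the authors' exact code.

import numpy as np

def local_frame(r_ij, r_ik):
    # Orthonormal frame from two neighbour vectors; rotating the inputs
    # rotates the frame identically, so frame-projected features are invariant.
    e1 = r_ij / np.linalg.norm(r_ij)
    u = r_ik - np.dot(r_ik, e1) * e1          # Gram-Schmidt orthogonalisation
    e2 = u / np.linalg.norm(u)
    e3 = np.cross(e1, e2)                     # right-handed third axis
    return np.stack([e1, e2, e3])

R = local_frame(np.array([1.0, 0.0, 0.0]), np.array([0.3, 0.9, 0.1]))
invariant_coords = R @ np.array([0.2, -0.5, 0.7])   # neighbour in local frame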
|
2501.08659 | Dongzhihan Wang | Dongzhihan Wang, Yang Yang, Liang Xu | BRIGHT-VO: Brightness-Guided Hybrid Transformer for Visual Odometry with
Multi-modality Refinement Module | Method mistakes appearing in this paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual odometry (VO) plays a crucial role in autonomous driving, robotic
navigation, and other related tasks by estimating the position and orientation
of a camera based on visual input. Significant progress has been made in
data-driven VO methods, particularly those leveraging deep learning techniques
to extract image features and estimate camera poses. However, these methods
often struggle in low-light conditions because of the reduced visibility of
features and the increased difficulty of matching keypoints. To address this
limitation, we introduce BrightVO, a novel VO model based on Transformer
architecture, which not only performs front-end visual feature extraction, but
also incorporates a multi-modality refinement module in the back-end that
integrates Inertial Measurement Unit (IMU) data. Using pose graph optimization,
this module iteratively refines pose estimates to reduce errors and improve
both accuracy and robustness. Furthermore, we create a synthetic low-light
dataset, KiC4R, which includes a variety of lighting conditions to facilitate
the training and evaluation of VO frameworks in challenging environments.
Experimental results demonstrate that BrightVO achieves state-of-the-art
performance on both the KiC4R dataset and the KITTI benchmarks. Specifically,
it provides an average improvement of 20% in pose estimation accuracy in normal
outdoor environments and 259% in low-light conditions, outperforming existing
methods. For widespread use and further development, the research work is fully
open-source at https://github.com/Anastasiawd/BrightVO.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 08:50:52 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jan 2025 03:51:49 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 02:49:51 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Dongzhihan",
""
],
[
"Yang",
"Yang",
""
],
[
"Xu",
"Liang",
""
]
] | TITLE: BRIGHT-VO: Brightness-Guided Hybrid Transformer for Visual Odometry with
Multi-modality Refinement Module
ABSTRACT: Visual odometry (VO) plays a crucial role in autonomous driving, robotic
navigation, and other related tasks by estimating the position and orientation
of a camera based on visual input. Significant progress has been made in
data-driven VO methods, particularly those leveraging deep learning techniques
to extract image features and estimate camera poses. However, these methods
often struggle in low-light conditions because of the reduced visibility of
features and the increased difficulty of matching keypoints. To address this
limitation, we introduce BrightVO, a novel VO model based on Transformer
architecture, which not only performs front-end visual feature extraction, but
also incorporates a multi-modality refinement module in the back-end that
integrates Inertial Measurement Unit (IMU) data. Using pose graph optimization,
this module iteratively refines pose estimates to reduce errors and improve
both accuracy and robustness. Furthermore, we create a synthetic low-light
dataset, KiC4R, which includes a variety of lighting conditions to facilitate
the training and evaluation of VO frameworks in challenging environments.
Experimental results demonstrate that BrightVO achieves state-of-the-art
performance on both the KiC4R dataset and the KITTI benchmarks. Specifically,
it provides an average improvement of 20% in pose estimation accuracy in normal
outdoor environments and 259% in low-light conditions, outperforming existing
methods. For widespread use and further development, the research work is fully
open-source at https://github.com/Anastasiawd/BrightVO.
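This record mentions pose graph optimization for back-end refinement; below is a toy 2D (x, y, yaw) pose-graph least-squares sketch using SciPy, standing in for the SE(3) optimization a real VO back-end would run. The measurements and dimensions are invented for illustration.

import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    # Wrap angles into (-pi, pi] so yaw residuals stay small.
    return (a + np.pi) % (2 * np.pi) - np.pi

def residuals(flat, odom):
    # Mismatch between predicted and measured relative poses (x, y, yaw).
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                          # pin the first pose at origin
    for i, (dx, dy, dth) in enumerate(odom):
        p, q = poses[i], poses[i + 1]
        c, s = np.cos(p[2]), np.sin(p[2])
        rx = c * (q[0] - p[0]) + s * (q[1] - p[1])    # motion in frame i
        ry = -s * (q[0] - p[0]) + c * (q[1] - p[1])
        res.append([rx - dx, ry - dy, wrap(q[2] - p[2] - dth)])
    return np.concatenate(res)

odom = [(1.0, 0.0, 0.1), (1.0, 0.0, 0.12)]    # toy VO/IMU relative motions
refined = least_squares(residuals, np.zeros(9), args=(odom,)).x.reshape(-1, 3)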
|
2501.09291 | Hyeongkeun Lee | Kyeongha Rho, Hyeongkeun Lee, Valentio Iverson, Joon Son Chung | LAVCap: LLM-based Audio-Visual Captioning using Optimal Transport | 5 pages, 2 figures; Accepted to ICASSP 2025 | null | null | null | cs.MM cs.AI cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated audio captioning is a task that generates textual descriptions for
audio content, and recent studies have explored using visual information to
enhance captioning quality. However, current methods often fail to effectively
fuse audio and visual data, missing important semantic cues from each modality.
To address this, we introduce LAVCap, a large language model (LLM)-based
audio-visual captioning framework that effectively integrates visual
information with audio to improve audio captioning performance. LAVCap employs
an optimal transport-based alignment loss to bridge the modality gap between
audio and visual features, enabling more effective semantic extraction.
Additionally, we propose an optimal transport attention module that enhances
audio-visual fusion using an optimal transport assignment map. Combined with
an optimal training strategy, our experiments demonstrate that each
component of the framework is effective. LAVCap outperforms existing
state-of-the-art methods on the AudioCaps dataset, without relying on large
datasets or post-processing. Code is available at
https://github.com/NAVER-INTEL-Co-Lab/gaudi-lavcap.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 04:53:29 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 12:38:50 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Rho",
"Kyeongha",
""
],
[
"Lee",
"Hyeongkeun",
""
],
[
"Iverson",
"Valentio",
""
],
[
"Chung",
"Joon Son",
""
]
] | TITLE: LAVCap: LLM-based Audio-Visual Captioning using Optimal Transport
ABSTRACT: Automated audio captioning is a task that generates textual descriptions for
audio content, and recent studies have explored using visual information to
enhance captioning quality. However, current methods often fail to effectively
fuse audio and visual data, missing important semantic cues from each modality.
To address this, we introduce LAVCap, a large language model (LLM)-based
audio-visual captioning framework that effectively integrates visual
information with audio to improve audio captioning performance. LAVCap employs
an optimal transport-based alignment loss to bridge the modality gap between
audio and visual features, enabling more effective semantic extraction.
Additionally, we propose an optimal transport attention module that enhances
audio-visual fusion using an optimal transport assignment map. Combined with
an optimal training strategy, our experiments demonstrate that each
component of the framework is effective. LAVCap outperforms existing
state-of-the-art methods on the AudioCaps dataset, without relying on large
datasets or post-processing. Code is available at
https://github.com/NAVER-INTEL-Co-Lab/gaudi-lavcap.
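Since this record centres on optimal-transport alignment between audio and visual tokens, here is a self-contained NumPy sketch of an entropic (Sinkhorn) transport plan and the resulting alignment cost; the cosine cost, uniform marginals, and hyperparameters are assumptions rather than the paper's exact loss.

import numpy as np

def sinkhorn_alignment(audio, visual, eps=0.1, iters=200):
    # Entropic OT between two token sets; returns the transport plan and
    # an alignment loss = expected cost under the plan.
    an = audio / np.linalg.norm(audio, axis=1, keepdims=True)
    vn = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    cost = 1.0 - an @ vn.T                    # cosine distance matrix
    K = np.exp(-cost / eps)
    a = np.full(audio.shape[0], 1.0 / audio.shape[0])   # uniform marginals
    b = np.full(visual.shape[0], 1.0 / visual.shape[0])
    v = np.ones_like(b)
    for _ in range(iters):                    # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]
    return P, float((P * cost).sum())

plan, loss = sinkhorn_alignment(np.random.randn(8, 32), np.random.randn(10, 32))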
|
2501.10796 | Haocheng Ye | Jing Chen, Haocheng Ye, Zhian Ying, Yuntao Sun, Wenqiang Xu | Dynamic Trend Fusion Module for Traffic Flow Prediction | null | Applied Soft
Computing, 112979, 2025 | 10.1016/j.asoc.2025.112979 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurate traffic flow prediction is essential for applications like transport
logistics but remains challenging due to complex spatio-temporal correlations
and non-linear traffic patterns. Existing methods often model spatial and
temporal dependencies separately, failing to effectively fuse them. To overcome
this limitation, the Dynamic Spatial-Temporal Trend Transformer (DST2former) is
proposed to capture spatio-temporal correlations through adaptive embedding and
to fuse dynamic and static information for learning multi-view dynamic features
of traffic networks. The approach employs the Dynamic Trend Representation
Transformer (DTRformer) to generate dynamic trends using encoders for both
temporal and spatial dimensions, fused via Cross Spatial-Temporal Attention.
Predefined graphs are compressed into a representation graph to extract static
attributes and reduce redundancy. Experiments on four real-world traffic
datasets demonstrate that our framework achieves state-of-the-art performance.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2025 15:16:47 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Jing",
""
],
[
"Ye",
"Haocheng",
""
],
[
"Ying",
"Zhian",
""
],
[
"Sun",
"Yuntao",
""
],
[
"Xu",
"Wenqiang",
""
]
] | TITLE: Dynamic Trend Fusion Module for Traffic Flow Prediction
ABSTRACT: Accurate traffic flow prediction is essential for applications like transport
logistics but remains challenging due to complex spatio-temporal correlations
and non-linear traffic patterns. Existing methods often model spatial and
temporal dependencies separately, failing to effectively fuse them. To overcome
this limitation, the Dynamic Spatial-Temporal Trend Transformer (DST2former) is
proposed to capture spatio-temporal correlations through adaptive embedding and
to fuse dynamic and static information for learning multi-view dynamic features
of traffic networks. The approach employs the Dynamic Trend Representation
Transformer (DTRformer) to generate dynamic trends using encoders for both
temporal and spatial dimensions, fused via Cross Spatial-Temporal Attention.
Predefined graphs are compressed into a representation graph to extract static
attributes and reduce redundancy. Experiments on four real-world traffic
datasets demonstrate that our framework achieves state-of-the-art performance.
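As a generic sketch of the Cross Spatial-Temporal Attention fusion named in this record, the snippet below runs single-head cross-attention with temporal features as queries and spatial features as keys/values; projection matrices and multi-head details are omitted, so this shows only the shape of the idea, not the paper's module.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(temporal, spatial):
    # Temporal-trend tokens attend over spatial (sensor) tokens.
    scores = temporal @ spatial.T / np.sqrt(temporal.shape[-1])
    return softmax(scores) @ spatial

fused = cross_attention(np.random.randn(12, 64),   # 12 time steps
                        np.random.randn(50, 64))   # 50 road sensors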
|
2501.11347 | Guankun Wang | Guankun Wang, Long Bai, Junyi Wang, Kun Yuan, Zhen Li, Tianxu Jiang,
Xiting He, Jinlin Wu, Zhen Chen, Zhen Lei, Hongbin Liu, Jiazheng Wang, Fan
Zhang, Nicolas Padoy, Nassir Navab, and Hongliang Ren | EndoChat: Grounded Multimodal Large Language Model for Endoscopic
Surgery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Multimodal Large Language Models (MLLMs) have demonstrated their
immense potential in computer-aided diagnosis and decision-making. In the
context of robotic-assisted surgery, MLLMs can serve as effective tools for
surgical training and guidance. However, there is still a lack of MLLMs
specialized for surgical scene understanding in clinical applications. In this
work, we introduce EndoChat to address various dialogue paradigms and subtasks
in surgical scene understanding that surgeons encounter. To train our EndoChat,
we construct the Surg-396K dataset through a novel pipeline that systematically
extracts surgical information and generates structured annotations based on
collected large-scale endoscopic surgery datasets. Furthermore, we introduce a
multi-scale visual token interaction mechanism and a visual contrast-based
reasoning mechanism to enhance the model's representation learning and
reasoning capabilities. Our model achieves state-of-the-art performance across
five dialogue paradigms and eight surgical scene understanding tasks.
Additionally, we conduct evaluations with professional surgeons, most of whom
provide positive feedback on collaborating with EndoChat. Overall, these
results demonstrate that our EndoChat has great potential to significantly
advance training and automation in robotic-assisted surgery.
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2025 09:12:06 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 02:35:48 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Guankun",
""
],
[
"Bai",
"Long",
""
],
[
"Wang",
"Junyi",
""
],
[
"Yuan",
"Kun",
""
],
[
"Li",
"Zhen",
""
],
[
"Jiang",
"Tianxu",
""
],
[
"He",
"Xiting",
""
],
[
"Wu",
"Jinlin",
""
],
[
"Chen",
"Zhen",
""
],
[
"Lei",
"Zhen",
""
],
[
"Liu",
"Hongbin",
""
],
[
"Wang",
"Jiazheng",
""
],
[
"Zhang",
"Fan",
""
],
[
"Padoy",
"Nicolas",
""
],
[
"Navab",
"Nassir",
""
],
[
"Ren",
"Hongliang",
""
]
] | TITLE: EndoChat: Grounded Multimodal Large Language Model for Endoscopic
Surgery
ABSTRACT: Recently, Multimodal Large Language Models (MLLMs) have demonstrated their
immense potential in computer-aided diagnosis and decision-making. In the
context of robotic-assisted surgery, MLLMs can serve as effective tools for
surgical training and guidance. However, there is still a lack of MLLMs
specialized for surgical scene understanding in clinical applications. In this
work, we introduce EndoChat to address various dialogue paradigms and subtasks
in surgical scene understanding that surgeons encounter. To train our EndoChat,
we construct the Surg-396K dataset through a novel pipeline that systematically
extracts surgical information and generates structured annotations based on
collected large-scale endoscopic surgery datasets. Furthermore, we introduce a
multi-scale visual token interaction mechanism and a visual contrast-based
reasoning mechanism to enhance the model's representation learning and
reasoning capabilities. Our model achieves state-of-the-art performance across
five dialogue paradigms and eight surgical scene understanding tasks.
Additionally, we conduct evaluations with professional surgeons, most of whom
provide positive feedback on collaborating with EndoChat. Overall, these
results demonstrate that our EndoChat has great potential to significantly
advance training and automation in robotic-assisted surgery.
|
2501.11741 | Robert J\"ochl | Robert J\"ochl, Andreas Uhl | FaceQSORT: a Multi-Face Tracking Method based on Biometric and
Appearance Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, a novel multi-face tracking method named FaceQSORT is proposed.
To mitigate multi-face tracking challenges (e.g., partially occluded or lateral
faces), FaceQSORT combines biometric and visual appearance features (extracted
from the same image (face) patch) for association. The Q in FaceQSORT refers to
the scenario for which FaceQSORT is designed, i.e., tracking people's faces as
they move towards a gate in a Queue. This scenario is also reflected in the new
dataset `Paris Lodron University Salzburg Faces in a Queue', which is made
publicly available as part of this work. The dataset consists of a total of
seven fully annotated and challenging sequences (12730 frames) and is utilized
together with two other publicly available datasets for the experimental
evaluation. It is shown that FaceQSORT outperforms state-of-the-art trackers in
the considered scenario. To provide a deeper insight into FaceQSORT,
comprehensive experiments are conducted evaluating the parameter selection, a
different similarity metric and the utilized face recognition model (used to
extract biometric features).
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2025 21:00:12 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Jan 2025 12:55:41 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 12:08:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jöchl",
"Robert",
""
],
[
"Uhl",
"Andreas",
""
]
] | TITLE: FaceQSORT: a Multi-Face Tracking Method based on Biometric and
Appearance Features
ABSTRACT: In this work, a novel multi-face tracking method named FaceQSORT is proposed.
To mitigate multi-face tracking challenges (e.g., partially occluded or lateral
faces), FaceQSORT combines biometric and visual appearance features (extracted
from the same image (face) patch) for association. The Q in FaceQSORT refers to
the scenario for which FaceQSORT is designed, i.e., tracking people's faces as
they move towards a gate in a Queue. This scenario is also reflected in the new
dataset `Paris Lodron University Salzburg Faces in a Queue', which is made
publicly available as part of this work. The dataset consists of a total of
seven fully annotated and challenging sequences (12730 frames) and is utilized
together with two other publicly available datasets for the experimental
evaluation. It is shown that FaceQSORT outperforms state-of-the-art trackers in
the considered scenario. To provide a deeper insight into FaceQSORT,
comprehensive experiments are conducted evaluating the parameter selection, a
different similarity metric and the utilized face recognition model (used to
extract biometric features).
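To illustrate the fused biometric-plus-appearance association this record describes, here is a small SciPy sketch that averages two cosine-distance matrices and solves the assignment with the Hungarian algorithm; the weight and gating threshold are illustrative, not FaceQSORT's tuned values.

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_bio, track_app, det_bio, det_app, w=0.5, thresh=0.6):
    # Fused cost = weighted cosine distances of biometric and appearance
    # features; matches with cost above `thresh` are rejected as unreliable.
    def cosd(A, B):
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return 1.0 - A @ B.T
    cost = w * cosd(track_bio, det_bio) + (1 - w) * cosd(track_app, det_app)
    rows, cols = linear_sum_assignment(cost)       # Hungarian matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < thresh]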
|
2501.12469 | Xiaoyu Chu | Xiaoyu Chu, Sacheendra Talluri, Qingxian Lu, Alexandru Iosup | An Empirical Characterization of Outages and Incidents in Public
Services for Large Language Models | null | 16th ACM/SPEC International Conference on Performance Engineering
(ICPE 2025) | 10.1145/3676151.3719372 | null | cs.PF cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | People and businesses increasingly rely on public LLM services, such as
ChatGPT, DALLE, and Claude. Understanding their outages, and particularly
measuring their failure-recovery processes, is becoming a pressing problem.
However, only limited studies exist in this emerging area. Addressing this
problem, in this work we conduct an empirical characterization of outages and
failure-recovery in public LLM services. We collect and prepare datasets for 8
commonly used LLM services across 3 major LLM providers, including market leaders
OpenAI and Anthropic. We conduct a detailed analysis of failure recovery
statistical properties, temporal patterns, co-occurrence, and the impact range
of outage-causing incidents. We make over 10 observations, among which: (1)
Failures in OpenAI's ChatGPT take longer to resolve but occur less frequently
than those in Anthropic's Claude; (2) OpenAI and Anthropic service failures
exhibit strong weekly and monthly periodicity; and (3) OpenAI services offer
better failure-isolation than Anthropic services. Our research explains LLM
failure characteristics and thus enables optimization in building and using LLM
systems. FAIR data and code are publicly available on
https://zenodo.org/records/14018219 and
https://github.com/atlarge-research/llm-service-analysis.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 19:37:48 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 16:13:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chu",
"Xiaoyu",
""
],
[
"Talluri",
"Sacheendra",
""
],
[
"Lu",
"Qingxian",
""
],
[
"Iosup",
"Alexandru",
""
]
] | TITLE: An Empirical Characterization of Outages and Incidents in Public
Services for Large Language Models
ABSTRACT: People and businesses increasingly rely on public LLM services, such as
ChatGPT, DALLE, and Claude. Understanding their outages, and particularly
measuring their failure-recovery processes, is becoming a pressing problem.
However, only limited studies exist in this emerging area. Addressing this
problem, in this work we conduct an empirical characterization of outages and
failure-recovery in public LLM services. We collect and prepare datasets for 8
commonly used LLM services across 3 major LLM providers, including market leaders
OpenAI and Anthropic. We conduct a detailed analysis of failure recovery
statistical properties, temporal patterns, co-occurrence, and the impact range
of outage-causing incidents. We make over 10 observations, among which: (1)
Failures in OpenAI's ChatGPT take longer to resolve but occur less frequently
than those in Anthropic's Claude; (2) OpenAI and Anthropic service failures
exhibit strong weekly and monthly periodicity; and (3) OpenAI services offer
better failure-isolation than Anthropic services. Our research explains LLM
failure characteristics and thus enables optimization in building and using LLM
systems. FAIR data and code are publicly available on
https://zenodo.org/records/14018219 and
https://github.com/atlarge-research/llm-service-analysis.
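For the failure-recovery statistics this record analyses, a minimal Python sketch of mean time to recovery (MTTR) and mean time between failures (MTBF) over (start, end) incident intervals is shown below; the toy log is invented, not the study's data.

from datetime import datetime, timedelta

def mttr_mtbf(incidents):
    # incidents: (start, end) pairs sorted by start time.
    ttr = [end - start for start, end in incidents]
    tbf = [incidents[i + 1][0] - incidents[i][1] for i in range(len(incidents) - 1)]
    mean = lambda xs: sum(xs, timedelta()) / len(xs)
    return mean(ttr), (mean(tbf) if tbf else None)

log = [(datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 11)),
       (datetime(2024, 1, 8, 14), datetime(2024, 1, 8, 15))]
print(mttr_mtbf(log))   # MTTR 1:30:00, MTBF 7 days, 3:00:00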
|
2501.13125 | Yooseop Lee | Yooseop Lee, Suin Kim, Yohan Jo | Generating Plausible Distractors for Multiple-Choice Questions via
Student Choice Prediction | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In designing multiple-choice questions (MCQs) in education, creating
plausible distractors is crucial for identifying students' misconceptions and
gaps in knowledge and accurately assessing their understanding. However, prior
studies on distractor generation have not paid sufficient attention to
enhancing the difficulty of distractors, resulting in reduced effectiveness of
MCQs. This study presents a pipeline for training a model to generate
distractors that are more likely to be selected by students. First, we train a
pairwise ranker to reason about students' misconceptions and assess the
relative plausibility of two distractors. Using this model, we create a dataset
of pairwise distractor ranks and then train a distractor generator via Direct
Preference Optimization (DPO) to generate more plausible distractors.
Experiments on computer science subjects (Python, DB, MLDL) demonstrate that
our pairwise ranker effectively identifies students' potential
misunderstandings and achieves ranking accuracy comparable to human experts.
Furthermore, our distractor generator outperforms several baselines in
generating plausible distractors and produces questions with a higher item
discrimination index (DI).
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 10:20:39 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 06:33:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lee",
"Yooseop",
""
],
[
"Kim",
"Suin",
""
],
[
"Jo",
"Yohan",
""
]
] | TITLE: Generating Plausible Distractors for Multiple-Choice Questions via
Student Choice Prediction
ABSTRACT: In designing multiple-choice questions (MCQs) in education, creating
plausible distractors is crucial for identifying students' misconceptions and
gaps in knowledge and accurately assessing their understanding. However, prior
studies on distractor generation have not paid sufficient attention to
enhancing the difficulty of distractors, resulting in reduced effectiveness of
MCQs. This study presents a pipeline for training a model to generate
distractors that are more likely to be selected by students. First, we train a
pairwise ranker to reason about students' misconceptions and assess the
relative plausibility of two distractors. Using this model, we create a dataset
of pairwise distractor ranks and then train a distractor generator via Direct
Preference Optimization (DPO) to generate more plausible distractors.
Experiments on computer science subjects (Python, DB, MLDL) demonstrate that
our pairwise ranker effectively identifies students' potential
misunderstandings and achieves ranking accuracy comparable to human experts.
Furthermore, our distractor generator outperforms several baselines in
generating plausible distractors and produces questions with a higher item
discrimination index (DI).
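The record states the generator is trained with Direct Preference Optimization; the standard DPO loss for one preference pair is compact enough to state in code, as below (the toy log-probabilities are invented, and beta is a typical but assumed value).

import numpy as np

def dpo_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    # logp_*: policy log-probs of the preferred (w) and rejected (l)
    # distractor; ref_*: the same under the frozen reference model.
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))   # -log sigmoid

print(dpo_loss(logp_w=-12.3, logp_l=-15.0, ref_w=-13.1, ref_l=-14.2))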
|
2501.15035 | Jiazhen Chen | Jiazhen Chen, Sichao Fu, Zheng Ma, Mingbin Feng, Tony S. Wirjanto,
Qinmu Peng | Semi-supervised Anomaly Detection with Extremely Limited Labels in
Dynamic Graphs | Accepted by 30th International Conference on Database Systems for
Advanced Applications (DASFAA 2025) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised graph anomaly detection (GAD) has recently received
increasing attention, which aims to distinguish anomalous patterns from graphs
under the guidance of a moderate amount of labeled data and a large volume of
unlabeled data. Although these semi-supervised GAD methods have achieved
great success, their performance degrades severely when the provided labels
are extremely limited due to unpredictable factors. Moreover, existing
methods focus primarily on anomaly detection in static graphs, and little
effort has been devoted to the continuous evolution of graphs over time
(dynamic graphs). To address these challenges, we propose a novel GAD
framework (EL$^{2}$-DGAD) to tackle the anomaly detection problem in dynamic
graphs with extremely limited labels.
Specifically, a transformer-based graph encoder model is designed to more
effectively preserve evolving graph structures beyond the local neighborhood.
Then, we incorporate an ego-context hypersphere classification loss to classify
temporal interactions according to their structure and temporal neighborhoods
while ensuring the normal samples are mapped compactly against anomalous data.
Finally, the above loss is further augmented with an ego-context contrasting
module which utilizes unlabeled data to enhance model generalization. Extensive
experiments on four datasets and three label rates demonstrate the
effectiveness of the proposed method in comparison to the existing GAD methods.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 02:35:48 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 02:43:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Jiazhen",
""
],
[
"Fu",
"Sichao",
""
],
[
"Ma",
"Zheng",
""
],
[
"Feng",
"Mingbin",
""
],
[
"Wirjanto",
"Tony S.",
""
],
[
"Peng",
"Qinmu",
""
]
] | TITLE: Semi-supervised Anomaly Detection with Extremely Limited Labels in
Dynamic Graphs
ABSTRACT: Semi-supervised graph anomaly detection (GAD) has recently received
increasing attention, which aims to distinguish anomalous patterns from graphs
under the guidance of a moderate amount of labeled data and a large volume of
unlabeled data. Although these semi-supervised GAD methods have achieved
great success, their performance degrades severely when the provided labels
are extremely limited due to unpredictable factors. Moreover, existing
methods focus primarily on anomaly detection in static graphs, and little
effort has been devoted to the continuous evolution of graphs over time
(dynamic graphs). To address these challenges, we propose a novel GAD
framework (EL$^{2}$-DGAD) to tackle the anomaly detection problem in dynamic
graphs with extremely limited labels.
Specifically, a transformer-based graph encoder model is designed to more
effectively preserve evolving graph structures beyond the local neighborhood.
Then, we incorporate an ego-context hypersphere classification loss to classify
temporal interactions according to their structure and temporal neighborhoods
while ensuring the normal samples are mapped compactly against anomalous data.
Finally, the above loss is further augmented with an ego-context contrasting
module which utilizes unlabeled data to enhance model generalization. Extensive
experiments on four datasets and three label rates demonstrate the
effectiveness of the proposed method in comparison to the existing GAD methods.
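As a rough stand-in for the ego-context hypersphere classification loss in this record, the NumPy sketch below pulls labelled-normal embeddings inside a hypersphere and pushes labelled anomalies beyond a margin; the radius, margin, and centering rule are assumptions, not the paper's formulation.

import numpy as np

def hypersphere_loss(emb, labels, center, radius=1.0, margin=0.5):
    # labels: 0 = normal (pulled inside radius), 1 = anomalous (pushed
    # beyond radius + margin); squared hinge on the distance to center.
    d = np.linalg.norm(emb - center, axis=1)
    loss_normal = np.maximum(0.0, d - radius) ** 2
    loss_anom = np.maximum(0.0, radius + margin - d) ** 2
    return float(np.where(labels == 0, loss_normal, loss_anom).mean())

emb = np.random.randn(16, 32)
labels = np.array([0] * 12 + [1] * 4)
print(hypersphere_loss(emb, labels, center=emb[labels == 0].mean(axis=0)))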
|
2501.15125 | Ziqi Liu | Ziqi Liu | FreqMoE: Enhancing Time Series Forecasting through Frequency
Decomposition Mixture of Experts | International Conference on Artificial Intelligence and Statistics
2025 (AISTATS) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-term time series forecasting is essential in areas like finance and
weather prediction. Besides traditional methods that operate in the time
domain, many recent models transform time series data into the frequency domain
to better capture complex patterns. However, these methods often use filtering
techniques to remove certain frequency signals as noise, which may
unintentionally discard important information and reduce prediction accuracy.
To address this, we propose the Frequency Decomposition Mixture-of-Experts
(FreqMoE) model, which dynamically decomposes time series data into frequency
bands, each processed by a specialized expert. A gating mechanism adjusts the
importance of each expert's output based on frequency characteristics, and the
aggregated results are fed into a prediction module that iteratively refines
the forecast using residual connections. Our experiments demonstrate that
FreqMoE outperforms state-of-the-art models, achieving the best performance on
51 out of 70 metrics across all tested datasets, while significantly reducing
the number of required parameters to under 50k, providing notable efficiency
advantages. Code is available at: https://github.com/sunbus100/FreqMoE-main
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 08:25:52 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 10:34:59 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Ziqi",
""
]
] | TITLE: FreqMoE: Enhancing Time Series Forecasting through Frequency
Decomposition Mixture of Experts
ABSTRACT: Long-term time series forecasting is essential in areas like finance and
weather prediction. Besides traditional methods that operate in the time
domain, many recent models transform time series data into the frequency domain
to better capture complex patterns. However, these methods often use filtering
techniques to remove certain frequency signals as noise, which may
unintentionally discard important information and reduce prediction accuracy.
To address this, we propose the Frequency Decomposition Mixture-of-Experts
(FreqMoE) model, which dynamically decomposes time series data into frequency
bands, each processed by a specialized expert. A gating mechanism adjusts the
importance of each expert's output based on frequency characteristics, and the
aggregated results are fed into a prediction module that iteratively refines
the forecast using residual connections. Our experiments demonstrate that
FreqMoE outperforms state-of-the-art models, achieving the best performance on
51 out of 70 metrics across all tested datasets, while significantly reducing
the number of required parameters to under 50k, providing notable efficiency
advantages. Code is available at: https://github.com/sunbus100/FreqMoE-main
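Because the record describes decomposing a series into frequency bands routed to experts, a toy NumPy sketch follows: the FFT spectrum is split into contiguous bands, each band is an expert input (the experts here are identities), and a softmax gate weighted by band energy recombines them. The band boundaries and gating rule are assumptions.

import numpy as np

def freq_band_mixture(x, n_bands=4):
    # Split the rFFT spectrum into contiguous bands, reconstruct each band
    # in the time domain, and recombine with energy-based softmax gating.
    F = np.fft.rfft(x)
    edges = np.linspace(0, len(F), n_bands + 1, dtype=int)
    bands, energy = [], []
    for i in range(n_bands):
        Fi = np.zeros_like(F)
        Fi[edges[i]:edges[i + 1]] = F[edges[i]:edges[i + 1]]
        bands.append(np.fft.irfft(Fi, n=len(x)))   # expert i's band signal
        energy.append(np.abs(F[edges[i]:edges[i + 1]]).sum())
    e = np.array(energy)
    gate = np.exp(e - e.max()) / np.exp(e - e.max()).sum()   # stable softmax
    return sum(g * b for g, b in zip(gate, bands))

x = np.sin(np.linspace(0, 20 * np.pi, 256)) + 0.1 * np.random.randn(256)
y = freq_band_mixture(x)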
|
2501.16944 | Maximilian Muschalik | Maximilian Muschalik, Fabian Fumagalli, Paolo Frazzetto, Janine
Strotherm, Luca Hermes, Alessandro Sperduti, Eyke H\"ullermeier, Barbara
Hammer | Exact Computation of Any-Order Shapley Interactions for Graph Neural
Networks | Preprint Version. Accepted at ICLR 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite the ubiquitous use of Graph Neural Networks (GNNs) in machine learning
(ML) prediction tasks involving graph-structured data, their interpretability
remains challenging. In explainable artificial intelligence (XAI), the Shapley
Value (SV) is the predominant method to quantify contributions of individual
features to a ML model's output. Addressing the limitations of SVs in complex
prediction models, Shapley Interactions (SIs) extend the SV to groups of
features. In this work, we explain single graph predictions of GNNs with SIs
that quantify node contributions and interactions among multiple nodes. By
exploiting the GNN architecture, we show that the structure of interactions in
node embeddings is preserved for graph prediction. As a result, the
exponential complexity of SIs depends only on the receptive fields, i.e., the
message-passing ranges determined by the connectivity of the graph and the
number of convolutional layers. Based on our theoretical results, we introduce
GraphSHAP-IQ, an efficient approach to compute any-order SIs exactly.
GraphSHAP-IQ is applicable to popular message passing techniques in conjunction
with a linear global pooling and output layer. We showcase that GraphSHAP-IQ
substantially reduces the exponential complexity of computing exact SIs on
multiple benchmark datasets. Beyond exact computation, we evaluate
GraphSHAP-IQ's approximation of SIs on popular GNN architectures and compare
with existing baselines. Lastly, we visualize SIs of real-world water
distribution networks and molecule structures using a SI-Graph.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 13:37:44 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 09:46:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Muschalik",
"Maximilian",
""
],
[
"Fumagalli",
"Fabian",
""
],
[
"Frazzetto",
"Paolo",
""
],
[
"Strotherm",
"Janine",
""
],
[
"Hermes",
"Luca",
""
],
[
"Sperduti",
"Alessandro",
""
],
[
"Hüllermeier",
"Eyke",
""
],
[
"Hammer",
"Barbara",
""
]
] | TITLE: Exact Computation of Any-Order Shapley Interactions for Graph Neural
Networks
ABSTRACT: Despite the ubiquitous use of Graph Neural Networks (GNNs) in machine learning
(ML) prediction tasks involving graph-structured data, their interpretability
remains challenging. In explainable artificial intelligence (XAI), the Shapley
Value (SV) is the predominant method to quantify contributions of individual
features to a ML model's output. Addressing the limitations of SVs in complex
prediction models, Shapley Interactions (SIs) extend the SV to groups of
features. In this work, we explain single graph predictions of GNNs with SIs
that quantify node contributions and interactions among multiple nodes. By
exploiting the GNN architecture, we show that the structure of interactions in
node embeddings is preserved for graph prediction. As a result, the
exponential complexity of SIs depends only on the receptive fields, i.e., the
message-passing ranges determined by the connectivity of the graph and the
number of convolutional layers. Based on our theoretical results, we introduce
GraphSHAP-IQ, an efficient approach to compute any-order SIs exactly.
GraphSHAP-IQ is applicable to popular message passing techniques in conjunction
with a linear global pooling and output layer. We showcase that GraphSHAP-IQ
substantially reduces the exponential complexity of computing exact SIs on
multiple benchmark datasets. Beyond exact computation, we evaluate
GraphSHAP-IQ's approximation of SIs on popular GNN architectures and compare
with existing baselines. Lastly, we visualize SIs of real-world water
distribution networks and molecule structures using a SI-Graph.
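The key claim in this record is that exact Shapley computation only needs to enumerate each node's small receptive field; the brute-force enumeration itself is compact, as in the sketch below for plain Shapley values on a three-player toy game (GraphSHAP-IQ's structural speed-ups and higher-order interactions are not reproduced here).

from itertools import combinations
from math import factorial

def exact_shapley(players, value):
    # value(frozenset) -> float; exact Shapley by enumerating coalitions,
    # feasible when the receptive field is small.
    n = len(players)
    phi = {}
    for i in players:
        rest = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(frozenset(S) | {i}) - value(frozenset(S)))
        phi[i] = total
    return phi

score = {1: 0.5, 2: 0.2, 3: 0.1}
v = lambda S: sum(score[i] for i in S) + (0.4 if {1, 2} <= S else 0.0)
print(exact_shapley([1, 2, 3], v))   # the 0.4 synergy splits between 1 and 2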
|
2501.17150 | Casey Bennett | Aksheytha Chelikavada, Casey C. Bennett | Cultural Differences and Perverse Incentives in Science Create a Bad
Mix: Exploring Country-Level Publication Bias in Select ACM Conferences | Main Paper (Page 1-20), Appendix (Page 21-75) | null | null | null | cs.DL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of big science, many national governments are helping to build
well-funded teams of scientists to serve nationalistic ambitions, providing
financial incentives for certain outcomes for purposes other than advancing
science. That in turn can impact the behavior of scientists and create
distortions in publication rates, frequency, and publication venues targeted.
To that end, we provide evidence that indicates significant inequality using
standard Gini Index metrics in the publication rates of individual scientists
across various groupings (e.g. country, institution type, ranking-level) based
on an intensive analysis of thousands of papers published in several well-known
ACM conferences (HRI, IUI, KDD, CHI, SIGGRAPH, UIST, and UBICOMP) over 15 years
from 2010 to 2024. Furthermore, scientists who were affiliated with the
top-5 countries (in terms of research expenditure) were found to be
contributing significantly more to the inequality in publication rates than
others, which raises a number of questions for the scientific community. We
discuss some of those questions later in the paper. We also detected several
examples in the dataset of potential serious ethical problems in publications
likely caused by such incentive systems. Finally, a topic modeling analysis
revealed that some countries are pursuing a much narrower range of scientific
topics relative to others, indicating those incentives may also be limiting
genuine scientific curiosity. In summary, our findings raise awareness of
systems put in place by certain national governments that may be eroding the
pursuit of truth through science and gradually undermining the integrity of the
global scientific community.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 18:52:59 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jan 2025 21:22:36 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2025 20:46:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chelikavada",
"Aksheytha",
""
],
[
"Bennett",
"Casey C.",
""
]
] | TITLE: Cultural Differences and Perverse Incentives in Science Create a Bad
Mix: Exploring Country-Level Publication Bias in Select ACM Conferences
ABSTRACT: In the era of big science, many national governments are helping to build
well-funded teams of scientists to serve nationalistic ambitions, providing
financial incentives for certain outcomes for purposes other than advancing
science. That in turn can impact the behavior of scientists and create
distortions in publication rates, frequency, and publication venues targeted.
To that end, we provide evidence that indicates significant inequality using
standard Gini Index metrics in the publication rates of individual scientists
across various groupings (e.g. country, institution type, ranking-level) based
on an intensive analysis of thousands of papers published in several well-known
ACM conferences (HRI, IUI, KDD, CHI, SIGGRAPH, UIST, and UBICOMP) over 15 years
from 2010 to 2024. Furthermore, scientists who were affiliated with the
top-5 countries (in terms of research expenditure) were found to be
contributing significantly more to the inequality in publication rates than
others, which raises a number of questions for the scientific community. We
discuss some of those questions later in the paper. We also detected several
examples in the dataset of potential serious ethical problems in publications
likely caused by such incentive systems. Finally, a topic modeling analysis
revealed that some countries are pursuing a much narrower range of scientific
topics relative to others, indicating those incentives may also be limiting
genuine scientific curiosity. In summary, our findings raise awareness of
systems put in place by certain national governments that may be eroding the
pursuit of truth through science and gradually undermining the integrity of the
global scientific community.
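For readers unfamiliar with the Gini Index metric used throughout this record, a short NumPy implementation over per-author publication counts follows (the standard mean-difference formulation; the toy counts are invented).

import numpy as np

def gini(counts):
    # 0 = perfectly equal publication counts, values near 1 = concentrated.
    x = np.sort(np.asarray(counts, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return float((n + 1 - 2 * (cum / cum[-1]).sum()) / n)

print(gini([3, 3, 3, 3]))    # 0.0, perfectly equal
print(gini([0, 0, 0, 12]))   # 0.75, one author holds everything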
|
2501.18054 | Constantine Sideris | Jui-Hung Sun, Mohamed Elsawaf, Yifei Zheng, Ho-Chun Lin, Chia Wei Hsu,
Constantine Sideris | Ultrafast Inverse Design of Electromagnetic Devices | null | null | null | null | physics.comp-ph physics.app-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Inverse design enables automating the discovery and optimization of devices
achieving performance significantly exceeding that of traditional
human-engineered designs. However, existing methodologies to inverse design
electromagnetic devices require computationally expensive and time-consuming
full-wave electromagnetic simulation at each inverse design iteration or
generation of large datasets for training neural-network surrogate models. This
work introduces the Precomputed Numerical Green Function method, an approach
for ultrafast electromagnetic inverse design. The static components of the
design are incorporated into a numerical Green function obtained from a single
fully parallelized precomputation step, reducing the cost of evaluating
candidate designs during optimization to only being proportional to the size of
the region under modification. A low-rank matrix update technique is introduced
that further decreases the cost of the method to milliseconds per iteration
without any approximations or compromises in accuracy. The complete method is
shown to have linear time complexity, reducing the total runtime for an inverse
design by several orders of magnitude compared to using conventional
electromagnetics solvers. The design examples considered demonstrate speedups
of up to 16,000x, shortening the design process from multiple days or weeks down
to minutes. The approach stands to transform inverse design in
electromagnetics.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2025 23:35:28 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Feb 2025 21:39:32 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 09:19:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sun",
"Jui-Hung",
""
],
[
"Elsawaf",
"Mohamed",
""
],
[
"Zheng",
"Yifei",
""
],
[
"Lin",
"Ho-Chun",
""
],
[
"Hsu",
"Chia Wei",
""
],
[
"Sideris",
"Constantine",
""
]
] | TITLE: Ultrafast Inverse Design of Electromagnetic Devices
ABSTRACT: Inverse design enables automating the discovery and optimization of devices
achieving performance significantly exceeding that of traditional
human-engineered designs. However, existing methodologies to inverse design
electromagnetic devices require computationally expensive and time-consuming
full-wave electromagnetic simulation at each inverse design iteration or
generation of large datasets for training neural-network surrogate models. This
work introduces the Precomputed Numerical Green Function method, an approach
for ultrafast electromagnetic inverse design. The static components of the
design are incorporated into a numerical Green function obtained from a single
fully parallelized precomputation step, reducing the cost of evaluating
candidate designs during optimization to only being proportional to the size of
the region under modification. A low-rank matrix update technique is introduced
that further decreases the cost of the method to milliseconds per iteration
without any approximations or compromises in accuracy. The complete method is
shown to have linear time complexity, reducing the total runtime for an inverse
design by several orders of magnitude compared to using conventional
electromagnetics solvers. The design examples considered demonstrate speedups
of up to 16,000x, shortening the design process from multiple days or weeks down
to minutes. The approach stands to transform inverse design in
electromagnetics.
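The low-rank update this record mentions is, in spirit, a Woodbury-identity solve against a precomputed inverse; a NumPy sketch under that assumption is below. The matrix sizes and rank-k modification are toy stand-ins for the electromagnetic system matrix and a local design change.

import numpy as np

def woodbury_solve(A_inv, U, V, b):
    # Solve (A + U V) x = b reusing the precomputed A^{-1}; only a small
    # k x k "capacitance" system is factored per design modification.
    S = np.eye(U.shape[1]) + V @ A_inv @ U
    y = A_inv @ b
    return y - A_inv @ (U @ np.linalg.solve(S, V @ y))

n, k = 200, 4
A = np.random.randn(n, n) + n * np.eye(n)     # well-conditioned base system
A_inv = np.linalg.inv(A)                      # the one-off precomputation
U, V, b = np.random.randn(n, k), np.random.randn(k, n), np.random.randn(n)
x = woodbury_solve(A_inv, U, V, b)
print(np.allclose((A + U @ V) @ x, b))        # True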
|
2502.02525 | Jian Liu | Jian Liu, Wei Sun, Hui Yang, Pengchao Deng, Chongpei Liu, Nicu Sebe,
Hossein Rahmani, Ajmal Mian | Diff9D: Diffusion-Based Domain-Generalized Category-Level 9-DoF Object
Pose Estimation | 17 pages, 13 figures | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2025 | 10.1109/TPAMI.2025.3552132 | arXiv:2502.02525 | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nine-degrees-of-freedom (9-DoF) object pose and size estimation is crucial
for enabling augmented reality and robotic manipulation. Category-level methods
have received extensive research attention due to their potential for
generalization to intra-class unknown objects. However, these methods require
manual collection and labeling of large-scale real-world training data. To
address this problem, we introduce a diffusion-based paradigm for
domain-generalized category-level 9-DoF object pose estimation. Our motivation
is to leverage the latent generalization ability of the diffusion model to
address the domain generalization challenge in object pose estimation. This
entails training the model exclusively on rendered synthetic data to achieve
generalization to real-world scenes. We propose an effective diffusion model to
redefine 9-DoF object pose estimation from a generative perspective. Our model
does not require any 3D shape priors during training or inference. By employing
the Denoising Diffusion Implicit Model, we demonstrate that the reverse
diffusion process can be executed in as few as 3 steps, achieving near
real-time performance. Finally, we design a robotic grasping system comprising
both hardware and software components. Through comprehensive experiments on two
benchmark datasets and the real-world robotic system, we show that our method
achieves state-of-the-art domain generalization performance. Our code will be
made public at https://github.com/CNJianLiu/Diff9D.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 17:46:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Jian",
""
],
[
"Sun",
"Wei",
""
],
[
"Yang",
"Hui",
""
],
[
"Deng",
"Pengchao",
""
],
[
"Liu",
"Chongpei",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Rahmani",
"Hossein",
""
],
[
"Mian",
"Ajmal",
""
]
] | TITLE: Diff9D: Diffusion-Based Domain-Generalized Category-Level 9-DoF Object
Pose Estimation
ABSTRACT: Nine-degrees-of-freedom (9-DoF) object pose and size estimation is crucial
for enabling augmented reality and robotic manipulation. Category-level methods
have received extensive research attention due to their potential for
generalization to intra-class unknown objects. However, these methods require
manual collection and labeling of large-scale real-world training data. To
address this problem, we introduce a diffusion-based paradigm for
domain-generalized category-level 9-DoF object pose estimation. Our motivation
is to leverage the latent generalization ability of the diffusion model to
address the domain generalization challenge in object pose estimation. This
entails training the model exclusively on rendered synthetic data to achieve
generalization to real-world scenes. We propose an effective diffusion model to
redefine 9-DoF object pose estimation from a generative perspective. Our model
does not require any 3D shape priors during training or inference. By employing
the Denoising Diffusion Implicit Model, we demonstrate that the reverse
diffusion process can be executed in as few as 3 steps, achieving near
real-time performance. Finally, we design a robotic grasping system comprising
both hardware and software components. Through comprehensive experiments on two
benchmark datasets and the real-world robotic system, we show that our method
achieves state-of-the-art domain generalization performance. Our code will be
made public at https://github.com/CNJianLiu/Diff9D.
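Since the record highlights that reverse diffusion runs in as few as 3 DDIM steps, here is a minimal deterministic DDIM sampling loop; the noise predictor, schedule, and step choice are placeholders, not the paper's trained pose-denoising network.

import numpy as np

def ddim_sample(eps_model, shape, steps=(999, 666, 333)):
    # Deterministic DDIM (eta = 0): predict x0, then re-noise to the next
    # (earlier) timestep; three steps mirror the paper's near-real-time use.
    abar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, 1000))  # toy schedule
    x = np.random.randn(*shape)
    seq = list(steps) + [0]
    for t, t_prev in zip(seq[:-1], seq[1:]):
        eps = eps_model(x, t)
        x0 = (x - np.sqrt(1 - abar[t]) * eps) / np.sqrt(abar[t])
        x = np.sqrt(abar[t_prev]) * x0 + np.sqrt(1 - abar[t_prev]) * eps
    return x

pose = ddim_sample(lambda x, t: np.zeros_like(x), (9,))   # toy 9-DoF output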
|
2502.02975 | Lu Yi | Lu Yi, Jie Peng, Yanping Zheng, Fengran Mo, Zhewei Wei, Yuhang Ye, Yue
Zixuan, Zengfeng Huang | TGB-Seq Benchmark: Challenging Temporal GNNs with Complex Sequential
Dynamics | published at ICLR 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Future link prediction is a fundamental challenge in various real-world
dynamic systems. To address this, numerous temporal graph neural networks
(temporal GNNs) and benchmark datasets have been developed. However, these
datasets often feature excessive repeated edges and lack complex sequential
dynamics, a key characteristic inherent in many real-world applications such as
recommender systems and ``Who-To-Follow'' on social networks. This oversight
has led existing methods to inadvertently downplay the importance of learning
sequential dynamics, focusing primarily on predicting repeated edges.
In this study, we demonstrate that existing methods, such as GraphMixer and
DyGFormer, are inherently incapable of learning simple sequential dynamics,
such as ``a user who has followed OpenAI and Anthropic is more likely to follow
AI at Meta next.'' Motivated by this issue, we introduce the Temporal Graph
Benchmark with Sequential Dynamics (TGB-Seq), a new benchmark carefully curated
to minimize repeated edges, challenging models to learn sequential dynamics and
generalize to unseen edges. TGB-Seq comprises large real-world datasets
spanning diverse domains, including e-commerce interactions, movie ratings,
business reviews, social networks, citation networks and web link networks.
Benchmarking experiments reveal that current methods usually suffer significant
performance degradation and incur substantial training costs on TGB-Seq, posing
new challenges and opportunities for future research. TGB-Seq datasets,
leaderboards, and example codes are available at https://tgb-seq.github.io/.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 08:20:19 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 06:24:09 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 11:05:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yi",
"Lu",
""
],
[
"Peng",
"Jie",
""
],
[
"Zheng",
"Yanping",
""
],
[
"Mo",
"Fengran",
""
],
[
"Wei",
"Zhewei",
""
],
[
"Ye",
"Yuhang",
""
],
[
"Zixuan",
"Yue",
""
],
[
"Huang",
"Zengfeng",
""
]
] | TITLE: TGB-Seq Benchmark: Challenging Temporal GNNs with Complex Sequential
Dynamics
ABSTRACT: Future link prediction is a fundamental challenge in various real-world
dynamic systems. To address this, numerous temporal graph neural networks
(temporal GNNs) and benchmark datasets have been developed. However, these
datasets often feature excessive repeated edges and lack complex sequential
dynamics, a key characteristic inherent in many real-world applications such as
recommender systems and ``Who-To-Follow'' on social networks. This oversight
has led existing methods to inadvertently downplay the importance of learning
sequential dynamics, focusing primarily on predicting repeated edges.
In this study, we demonstrate that existing methods, such as GraphMixer and
DyGFormer, are inherently incapable of learning simple sequential dynamics,
such as ``a user who has followed OpenAI and Anthropic is more likely to follow
AI at Meta next.'' Motivated by this issue, we introduce the Temporal Graph
Benchmark with Sequential Dynamics (TGB-Seq), a new benchmark carefully curated
to minimize repeated edges, challenging models to learn sequential dynamics and
generalize to unseen edges. TGB-Seq comprises large real-world datasets
spanning diverse domains, including e-commerce interactions, movie ratings,
business reviews, social networks, citation networks and web link networks.
Benchmarking experiments reveal that current methods usually suffer significant
performance degradation and incur substantial training costs on TGB-Seq, posing
new challenges and opportunities for future research. TGB-Seq datasets,
leaderboards, and example codes are available at https://tgb-seq.github.io/.
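The curation criterion in this record, few repeated edges, reduces to a one-pass statistic over the temporal edge stream; a small Python sketch of that check is below with an invented toy stream.

def repeat_edge_ratio(edges):
    # Fraction of (src, dst, t) events whose (src, dst) pair occurred
    # earlier in time; TGB-Seq is curated so this stays low.
    seen, repeats = set(), 0
    for src, dst, t in sorted(edges, key=lambda e: e[2]):
        if (src, dst) in seen:
            repeats += 1
        seen.add((src, dst))
    return repeats / len(edges)

stream = [(1, 2, 0), (1, 3, 1), (1, 2, 2), (2, 3, 3)]
print(repeat_edge_ratio(stream))   # 0.25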
|
2502.06476 | Vlad Hosu | Vlad Hosu, Lorenzo Agnolucci, Daisuke Iso, Dietmar Saupe | Image Intrinsic Scale Assessment: Bridging the Gap Between Quality and
Resolution | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image Quality Assessment (IQA) measures and predicts perceived image quality
by human observers. Although recent studies have highlighted the critical
influence that variations in the scale of an image have on its perceived
quality, this relationship has not been systematically quantified. To bridge
this gap, we introduce the Image Intrinsic Scale (IIS), defined as the largest
scale where an image exhibits its highest perceived quality. We also present
the Image Intrinsic Scale Assessment (IISA) task, which involves subjectively
measuring and predicting the IIS based on human judgments. We develop a
subjective annotation methodology and create the IISA-DB dataset, comprising
785 image-IIS pairs annotated by experts in a rigorously controlled
crowdsourcing study. Furthermore, we propose WIISA (Weak-labeling for Image
Intrinsic Scale Assessment), a strategy that leverages how the IIS of an image
varies with downscaling to generate weak labels. Experiments show that applying
WIISA during the training of several IQA methods adapted for IISA consistently
improves the performance compared to using only ground-truth labels. We will
release the code, dataset, and pre-trained models upon acceptance.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 13:54:55 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 10:32:40 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Hosu",
"Vlad",
""
],
[
"Agnolucci",
"Lorenzo",
""
],
[
"Iso",
"Daisuke",
""
],
[
"Saupe",
"Dietmar",
""
]
] | TITLE: Image Intrinsic Scale Assessment: Bridging the Gap Between Quality and
Resolution
ABSTRACT: Image Quality Assessment (IQA) measures and predicts perceived image quality
by human observers. Although recent studies have highlighted the critical
influence that variations in the scale of an image have on its perceived
quality, this relationship has not been systematically quantified. To bridge
this gap, we introduce the Image Intrinsic Scale (IIS), defined as the largest
scale where an image exhibits its highest perceived quality. We also present
the Image Intrinsic Scale Assessment (IISA) task, which involves subjectively
measuring and predicting the IIS based on human judgments. We develop a
subjective annotation methodology and create the IISA-DB dataset, comprising
785 image-IIS pairs annotated by experts in a rigorously controlled
crowdsourcing study. Furthermore, we propose WIISA (Weak-labeling for Image
Intrinsic Scale Assessment), a strategy that leverages how the IIS of an image
varies with downscaling to generate weak labels. Experiments show that applying
WIISA during the training of several IQA methods adapted for IISA consistently
improves the performance compared to using only ground-truth labels. We will
release the code, dataset, and pre-trained models upon acceptance.
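The WIISA strategy in this record derives weak labels from how the IIS moves under downscaling; the sketch below assumes the simple rule IIS' = min(1, IIS / s) for a downscale factor s, which is an illustrative assumption, not the paper's exact labeling rule.

def wiisa_weak_labels(iis, factors=(0.9, 0.8, 0.7)):
    # One annotated image yields several (downscaled image, weak IIS) pairs.
    return {s: min(1.0, iis / s) for s in factors}

print(wiisa_weak_labels(0.6))   # e.g. {0.9: 0.667, 0.8: 0.75, 0.7: 0.857}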
|
2502.06756 | Yuqi Lin | Yuqi Lin, Hengjia Li, Wenqi Shao, Zheng Yang, Jun Zhao, Xiaofei He,
Ping Luo, Kaipeng Zhang | SAMRefiner: Taming Segment Anything Model for Universal Mask Refinement | Accepted to ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore a principled way to enhance the quality of widely
pre-existing coarse masks, enabling them to serve as reliable training data for
segmentation models to reduce the annotation cost. In contrast to prior
refinement techniques that are tailored to specific models or tasks in a
close-world manner, we propose SAMRefiner, a universal and efficient approach
by adapting SAM to the mask refinement task. The core technique of our model is
the noise-tolerant prompting scheme. Specifically, we introduce a multi-prompt
excavation strategy to mine diverse input prompts for SAM (i.e.,
distance-guided points, context-aware elastic bounding boxes, and
Gaussian-style masks) from initial coarse masks. These prompts can collaborate
with each other to mitigate the effect of defects in coarse masks. In
particular, considering the difficulty SAM has in handling the multi-object case
in semantic segmentation, we introduce a split-then-merge (STM) pipeline.
Additionally, we extend our method to SAMRefiner++ by introducing an additional
IoU adaptation step to further boost the performance of the generic SAMRefiner on
the target dataset. This step is self-boosted and requires no additional
annotation. The proposed framework is versatile and can flexibly cooperate with
existing segmentation methods. We evaluate our framework on a wide range
of benchmarks under different settings, demonstrating better accuracy and
efficiency. SAMRefiner holds significant potential to expedite the evolution of
refinement tools. Our code is available at
https://github.com/linyq2117/SAMRefiner.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 18:33:15 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 10:12:23 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lin",
"Yuqi",
""
],
[
"Li",
"Hengjia",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Yang",
"Zheng",
""
],
[
"Zhao",
"Jun",
""
],
[
"He",
"Xiaofei",
""
],
[
"Luo",
"Ping",
""
],
[
"Zhang",
"Kaipeng",
""
]
] | TITLE: SAMRefiner: Taming Segment Anything Model for Universal Mask Refinement
ABSTRACT: In this paper, we explore a principled way to enhance the quality of
widely available pre-existing coarse masks, enabling them to serve as reliable
training data for segmentation models and thus reduce annotation cost. In
contrast to prior refinement techniques that are tailored to specific models
or tasks in a closed-world manner, we propose SAMRefiner, a universal and efficient approach
by adapting SAM to the mask refinement task. The core technique of our model is
the noise-tolerant prompting scheme. Specifically, we introduce a multi-prompt
excavation strategy to mine diverse input prompts for SAM (i.e.,
distance-guided points, context-aware elastic bounding boxes, and
Gaussian-style masks) from initial coarse masks. These prompts can collaborate
with each other to mitigate the effect of defects in coarse masks. In
particular, considering the difficulty SAM has in handling the multi-object case
in semantic segmentation, we introduce a split-then-merge (STM) pipeline.
Additionally, we extend our method to SAMRefiner++ by introducing an additional
IoU adaptation step to further boost the performance of the generic SAMRefiner on
the target dataset. This step is self-boosted and requires no additional
annotation. The proposed framework is versatile and can flexibly cooperate with
existing segmentation methods. We evaluate our framework on a wide range
of benchmarks under different settings, demonstrating better accuracy and
efficiency. SAMRefiner holds significant potential to expedite the evolution of
refinement tools. Our code is available at
https://github.com/linyq2117/SAMRefiner.
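A hedged sketch of the multi-prompt excavation step on a single coarse binary mask: a distance-guided point, an elastically expanded box, and a Gaussian-style soft mask. The jitter ratio and smoothing scale are assumptions; the paper's exact recipes differ.

```python
import numpy as np
from scipy import ndimage

def mine_prompts(coarse_mask, box_expand=0.1, sigma_frac=0.5):
    """Mine SAM prompts from a non-empty coarse binary mask (H, W)."""
    dist = ndimage.distance_transform_edt(coarse_mask)
    point = np.unravel_index(np.argmax(dist), dist.shape)  # most interior pixel

    ys, xs = np.nonzero(coarse_mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    dy, dx = int((y1 - y0) * box_expand), int((x1 - x0) * box_expand)
    box = (max(0, x0 - dx), max(0, y0 - dy),          # elastic expansion
           min(coarse_mask.shape[1] - 1, x1 + dx),
           min(coarse_mask.shape[0] - 1, y1 + dy))

    soft = ndimage.gaussian_filter(coarse_mask.astype(float),
                                   sigma=max(1.0, sigma_frac * dist.max()))
    return point, box, soft  # point, box, and Gaussian-style mask prompts
```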
|
2502.07221 | Qifeng Zhou | Qifeng Zhou, Thao M. Dang, Wenliang Zhong, Yuzhi Guo, Hehuan Ma,
Saiyang Na, Haiqing Li, Junzhou Huang | MLLM4PUE: Toward Universal Embeddings in Digital Pathology through
Multimodal LLMs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pathology plays a critical role in diagnosing a wide range of diseases, yet
existing approaches often rely heavily on task-specific models trained on
extensive, well-labeled datasets. These methods face sustainability challenges
due to the diversity of pathologies and the labor-intensive nature of data
collection. To address these limitations, we highlight the need for universal
multimodal embeddings that can support multiple downstream tasks. Previous
approaches involve fine-tuning CLIP-based models, which handle images and texts
separately, limiting their ability to capture complex multimodal relationships.
Additionally, these models are evaluated across diverse datasets without a
unified benchmark. In this paper, we explore the possibility of applying
Multimodal Large Language Models (MLLMs) to generate universal pathology
embeddings to address these challenges. Our contributions can be summarized in
the following aspects: 1) We propose MLLM4PUE, a novel framework that leverages
MLLMs to generate embeddings for various pathology downstream tasks. 2) We
further introduce the Pathology Multimodal Embedding Benchmark (PMEB), a
comprehensive benchmark designed to assess the quality of pathology multimodal
embeddings, which comprises 16 original tasks drawn from 15 datasets. 3)
Extensive experimental results demonstrate the superiority of MLLM4PUE,
illustrating that MLLM-based models can effectively support a wide range of
downstream tasks and unify the research direction for foundation models in
pathology.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 03:28:55 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 20:05:51 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhou",
"Qifeng",
""
],
[
"Dang",
"Thao M.",
""
],
[
"Zhong",
"Wenliang",
""
],
[
"Guo",
"Yuzhi",
""
],
[
"Ma",
"Hehuan",
""
],
[
"Na",
"Saiyang",
""
],
[
"Li",
"Haiqing",
""
],
[
"Huang",
"Junzhou",
""
]
] | TITLE: MLLM4PUE: Toward Universal Embeddings in Digital Pathology through
Multimodal LLMs
ABSTRACT: Pathology plays a critical role in diagnosing a wide range of diseases, yet
existing approaches often rely heavily on task-specific models trained on
extensive, well-labeled datasets. These methods face sustainability challenges
due to the diversity of pathologies and the labor-intensive nature of data
collection. To address these limitations, we highlight the need for universal
multimodal embeddings that can support multiple downstream tasks. Previous
approaches involve fine-tuning CLIP-based models, which handle images and texts
separately, limiting their ability to capture complex multimodal relationships.
Additionally, these models are evaluated across diverse datasets without a
unified benchmark. In this paper, we explore the possibility of applying
Multimodal Large Language Models (MLLMs) to generate universal pathology
embeddings to address these challenges. Our contributions can be summarized in
the following aspects: 1) We propose MLLM4PUE, a novel framework that leverages
MLLMs to generate embeddings for various pathology downstream tasks. 2) We
further introduce the Pathology Multimodal Embedding Benchmark (PMEB), a
comprehensive benchmark designed to assess the quality of pathology multimodal
embeddings, which comprises 16 original tasks drawn from 15 datasets. 3)
Extensive experimental results demonstrate the superiority of MLLM4PUE,
illustrating that MLLM-based models can effectively support a wide range of
downstream tasks and unify the research direction for foundation models in
pathology.
|
2502.07238 | Dingtao Huang | Ding-Tao Huang, Xinyi He, Debei Hua, Dongfang Yu, En-Te Lin, Long Zeng | Diffusion Suction Grasping with Large-Scale Parcel Dataset | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While recent advances in object suction grasping have shown remarkable
progress, significant challenges persist, particularly in cluttered and complex
parcel handling scenarios. Two fundamental limitations hinder current
approaches: (1) the lack of a comprehensive suction grasp dataset tailored for
parcel manipulation tasks, and (2) insufficient adaptability to diverse object
characteristics including size variations, geometric complexity, and textural
diversity. To address these challenges, we present Parcel-Suction-Dataset, a
large-scale synthetic dataset containing 25 thousand cluttered scenes with 410
million precision-annotated suction grasp poses. This dataset is generated
through our novel geometric sampling algorithm that enables efficient
generation of optimal suction grasps incorporating both physical constraints
and material properties. We further propose Diffusion-Suction, an innovative
framework that reformulates suction grasp prediction as a conditional
generation task through denoising diffusion probabilistic models. Our method
iteratively refines random noise into suction grasp score maps through
visual-conditioned guidance from point cloud observations, effectively learning
spatial point-wise affordances from our synthetic dataset. Extensive
experiments demonstrate that the simple yet efficient Diffusion-Suction
achieves new state-of-the-art performance compared to previous models on both
Parcel-Suction-Dataset and the public SuctionNet-1Billion benchmark.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 04:09:11 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 03:26:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Ding-Tao",
""
],
[
"He",
"Xinyi",
""
],
[
"Hua",
"Debei",
""
],
[
"Yu",
"Dongfang",
""
],
[
"Lin",
"En-Te",
""
],
[
"Zeng",
"Long",
""
]
] | TITLE: Diffusion Suction Grasping with Large-Scale Parcel Dataset
ABSTRACT: While recent advances in object suction grasping have shown remarkable
progress, significant challenges persist, particularly in cluttered and complex
parcel handling scenarios. Two fundamental limitations hinder current
approaches: (1) the lack of a comprehensive suction grasp dataset tailored for
parcel manipulation tasks, and (2) insufficient adaptability to diverse object
characteristics including size variations, geometric complexity, and textural
diversity. To address these challenges, we present Parcel-Suction-Dataset, a
large-scale synthetic dataset containing 25 thousand cluttered scenes with 410
million precision-annotated suction grasp poses. This dataset is generated
through our novel geometric sampling algorithm that enables efficient
generation of optimal suction grasps incorporating both physical constraints
and material properties. We further propose Diffusion-Suction, an innovative
framework that reformulates suction grasp prediction as a conditional
generation task through denoising diffusion probabilistic models. Our method
iteratively refines random noise into suction grasp score maps through
visual-conditioned guidance from point cloud observations, effectively learning
spatial point-wise affordances from our synthetic dataset. Extensive
experiments demonstrate that the simple yet efficient Diffusion-Suction
achieves new state-of-the-art performance compared to previous models on both
Parcel-Suction-Dataset and the public SuctionNet-1Billion benchmark.
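A generic conditional-DDPM training step matching the description above: noise a ground-truth suction score map and train a network, conditioned on point-cloud features, to predict the injected noise. `model`, its signature, and the noise schedule are placeholders, not the authors' architecture.

```python
import torch
import torch.nn.functional as F

def diffusion_suction_step(model, scores, pc_feats, T=1000):
    """One denoising-diffusion training step for suction score maps."""
    b = scores.size(0)
    betas = torch.linspace(1e-4, 0.02, T, device=scores.device)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (b,), device=scores.device)
    a = alphas_bar[t].view(b, *([1] * (scores.dim() - 1)))
    noise = torch.randn_like(scores)
    noisy = a.sqrt() * scores + (1.0 - a).sqrt() * noise
    # model(noisy, t, pc_feats): predicts noise, conditioned on the point cloud
    return F.mse_loss(model(noisy, t, pc_feats), noise)
```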
|
2502.07601 | Jiacong Xu | Jiacong Xu, Shao-Yuan Lo, Bardia Safaei, Vishal M. Patel, Isht Dwivedi | Towards Zero-Shot Anomaly Detection and Reasoning with Multimodal Large
Language Models | 19 pages, 10 figures, accepted by CVPR 2025 | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-Shot Anomaly Detection (ZSAD) is an emerging AD paradigm. Unlike the
traditional unsupervised AD setting that requires a large number of normal
samples to train a model, ZSAD is more practical for handling data-restricted
real-world scenarios. Recently, Multimodal Large Language Models (MLLMs) have
shown revolutionary reasoning capabilities in various vision tasks. However,
the reasoning of image abnormalities remains underexplored due to the lack of
corresponding datasets and benchmarks. To facilitate research in AD &
reasoning, we establish the first visual instruction tuning dataset,
Anomaly-Instruct-125k, and the evaluation benchmark, VisA-D&R. Through
investigation with our benchmark, we reveal that current MLLMs like GPT-4o
cannot accurately detect and describe fine-grained anomalous details in images.
To address this, we propose Anomaly-OneVision (Anomaly-OV), the first
specialist visual assistant for ZSAD and reasoning. Inspired by human behavior
in visual inspection, Anomaly-OV leverages a Look-Twice Feature Matching (LTFM)
mechanism to adaptively select and emphasize abnormal visual tokens. Extensive
experiments demonstrate that Anomaly-OV achieves significant improvements over
advanced generalist models in both detection and reasoning. Extensions to
medical and 3D AD are provided for future study. The link to our project page:
https://xujiacong.github.io/Anomaly-OV/
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 14:50:43 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 07:11:04 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xu",
"Jiacong",
""
],
[
"Lo",
"Shao-Yuan",
""
],
[
"Safaei",
"Bardia",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Dwivedi",
"Isht",
""
]
] | TITLE: Towards Zero-Shot Anomaly Detection and Reasoning with Multimodal Large
Language Models
ABSTRACT: Zero-Shot Anomaly Detection (ZSAD) is an emerging AD paradigm. Unlike the
traditional unsupervised AD setting that requires a large number of normal
samples to train a model, ZSAD is more practical for handling data-restricted
real-world scenarios. Recently, Multimodal Large Language Models (MLLMs) have
shown revolutionary reasoning capabilities in various vision tasks. However,
the reasoning of image abnormalities remains underexplored due to the lack of
corresponding datasets and benchmarks. To facilitate research in AD &
reasoning, we establish the first visual instruction tuning dataset,
Anomaly-Instruct-125k, and the evaluation benchmark, VisA-D&R. Through
investigation with our benchmark, we reveal that current MLLMs like GPT-4o
cannot accurately detect and describe fine-grained anomalous details in images.
To address this, we propose Anomaly-OneVision (Anomaly-OV), the first
specialist visual assistant for ZSAD and reasoning. Inspired by human behavior
in visual inspection, Anomaly-OV leverages a Look-Twice Feature Matching (LTFM)
mechanism to adaptively select and emphasize abnormal visual tokens. Extensive
experiments demonstrate that Anomaly-OV achieves significant improvements over
advanced generalist models in both detection and reasoning. Extensions to
medical and 3D AD are provided for future study. The link to our project page:
https://xujiacong.github.io/Anomaly-OV/
|
2502.08576 | Antonio Montieri | Giampaolo Bovenzi, Francesco Cerasuolo, Domenico Ciuonzo, Davide Di
Monda, Idio Guarino, Antonio Montieri, Valerio Persico, Antonio Pescap\`e | Mapping the Landscape of Generative AI in Network Monitoring and
Management | 32 pages, 9 figures, 10 tables | null | 10.1109/TNSM.2025.3543022 | null | cs.NI cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Generative Artificial Intelligence (GenAI) models such as LLMs, GPTs, and
Diffusion Models have recently gained widespread attention from both the
research and the industrial communities. This survey explores their application
in network monitoring and management, focusing on prominent use cases, as well
as challenges and opportunities. We discuss how network traffic generation and
classification, network intrusion detection, networked system log analysis, and
network digital assistance can benefit from the use of GenAI models.
Additionally, we provide an overview of the available GenAI models, datasets
for large-scale training phases, and platforms for the development of such
models. Finally, we discuss research directions that potentially mitigate the
roadblocks to the adoption of GenAI for network monitoring and management. Our
investigation aims to map the current landscape and pave the way for future
research in leveraging GenAI for network monitoring and management.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 17:10:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bovenzi",
"Giampaolo",
""
],
[
"Cerasuolo",
"Francesco",
""
],
[
"Ciuonzo",
"Domenico",
""
],
[
"Di Monda",
"Davide",
""
],
[
"Guarino",
"Idio",
""
],
[
"Montieri",
"Antonio",
""
],
[
"Persico",
"Valerio",
""
],
[
"Pescapè",
"Antonio",
""
]
] | TITLE: Mapping the Landscape of Generative AI in Network Monitoring and
Management
ABSTRACT: Generative Artificial Intelligence (GenAI) models such as LLMs, GPTs, and
Diffusion Models have recently gained widespread attention from both the
research and the industrial communities. This survey explores their application
in network monitoring and management, focusing on prominent use cases, as well
as challenges and opportunities. We discuss how network traffic generation and
classification, network intrusion detection, networked system log analysis, and
network digital assistance can benefit from the use of GenAI models.
Additionally, we provide an overview of the available GenAI models, datasets
for large-scale training phases, and platforms for the development of such
models. Finally, we discuss research directions that potentially mitigate the
roadblocks to the adoption of GenAI for network monitoring and management. Our
investigation aims to map the current landscape and pave the way for future
research in leveraging GenAI for network monitoring and management.
|
2502.10660 | Md Kowsher | Nusrat Jahan Prottasha, Md Kowsher, Hafijur Raman, Israt Jahan Anny,
Prakash Bhat, Ivan Garibay, Ozlem Garibay | User Profile with Large Language Models: Construction, Updating, and
Benchmarking | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | User profile modeling plays a key role in personalized systems, as it
requires building accurate profiles and updating them with new information. In
this paper, we present two high-quality open-source user profile datasets: one
for profile construction and another for profile updating. These datasets offer
a strong basis for evaluating user profile modeling techniques in dynamic
settings. We also present a methodology that uses large language models (LLMs) to
tackle both profile construction and updating. Our method uses a probabilistic
framework to predict user profiles from input text, allowing for precise and
context-aware profile generation. Our experiments demonstrate that models like
Mistral-7b and Llama2-7b perform strongly in both tasks. LLMs improve the
precision and recall of the generated profiles, and high evaluation scores
confirm the effectiveness of our approach.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2025 03:57:52 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 18:20:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Prottasha",
"Nusrat Jahan",
""
],
[
"Kowsher",
"Md",
""
],
[
"Raman",
"Hafijur",
""
],
[
"Anny",
"Israt Jahan",
""
],
[
"Bhat",
"Prakash",
""
],
[
"Garibay",
"Ivan",
""
],
[
"Garibay",
"Ozlem",
""
]
] | TITLE: User Profile with Large Language Models: Construction, Updating, and
Benchmarking
ABSTRACT: User profile modeling plays a key role in personalized systems, as it
requires building accurate profiles and updating them with new information. In
this paper, we present two high-quality open-source user profile datasets: one
for profile construction and another for profile updating. These datasets offer
a strong basis for evaluating user profile modeling techniques in dynamic
settings. We also present a methodology that uses large language models (LLMs) to
tackle both profile construction and updating. Our method uses a probabilistic
framework to predict user profiles from input text, allowing for precise and
context-aware profile generation. Our experiments demonstrate that models like
Mistral-7b and Llama2-7b perform strongly in both tasks. LLMs improve the
precision and recall of the generated profiles, and high evaluation scores
confirm the effectiveness of our approach.
|
2502.11262 | Mengying Wang | Mengying Wang, Hanchao Ma, Yiyang Bian, Yangxin Fan, Yinghui Wu | Generating Skyline Datasets for Data Science Models | EDBT25 | Proceedings of the 28th International Conference on Extending
Database Technology (EDBT 2025) | 10.48786/edbt.2025.65 | null | cs.DB cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Preparing high-quality datasets required by various data-driven AI and
machine learning models has become a cornerstone task in data-driven analysis.
Conventional data discovery methods typically integrate datasets towards a
single pre-defined quality measure that may lead to bias for downstream tasks.
This paper introduces MODis, a framework that discovers datasets by optimizing
multiple user-defined, model-performance measures. Given a set of data sources
and a model, MODis selects and integrates data sources into a skyline dataset,
over which the model is expected to have the desired performance in all the
performance measures. We formulate MODis as a multi-goal finite state
transducer, and derive three feasible algorithms to generate skyline datasets.
Our first algorithm adopts a "reduce-from-universal" strategy that starts with
a universal schema and iteratively prunes unpromising data. Our second
algorithm further reduces the cost with a bi-directional strategy that
interleaves data augmentation and reduction. We also introduce a
diversification algorithm to mitigate the bias in skyline datasets. We
experimentally verify the efficiency and effectiveness of our skyline data
discovery algorithms, and showcase their applications in optimizing data
science pipelines.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 20:33:59 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Mengying",
""
],
[
"Ma",
"Hanchao",
""
],
[
"Bian",
"Yiyang",
""
],
[
"Fan",
"Yangxin",
""
],
[
"Wu",
"Yinghui",
""
]
] | TITLE: Generating Skyline Datasets for Data Science Models
ABSTRACT: Preparing high-quality datasets required by various data-driven AI and
machine learning models has become a cornerstone task in data-driven analysis.
Conventional data discovery methods typically integrate datasets towards a
single pre-defined quality measure that may lead to bias for downstream tasks.
This paper introduces MODis, a framework that discovers datasets by optimizing
multiple user-defined, model-performance measures. Given a set of data sources
and a model, MODis selects and integrates data sources into a skyline dataset,
over which the model is expected to have the desired performance in all the
performance measures. We formulate MODis as a multi-goal finite state
transducer, and derive three feasible algorithms to generate skyline datasets.
Our first algorithm adopts a "reduce-from-universal" strategy that starts with
a universal schema and iteratively prunes unpromising data. Our second
algorithm further reduces the cost with a bi-directional strategy that
interleaves data augmentation and reduction. We also introduce a
diversification algorithm to mitigate the bias in skyline datasets. We
experimentally verify the efficiency and effectiveness of our skyline data
discovery algorithms, and showcase their applications in optimizing data
science pipelines.
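A simplified sketch of the "reduce-from-universal" strategy: start from all sources and greedily drop one whenever no performance measure degrades. `train_eval` is a placeholder that trains on the integrated subset and returns measure scores (higher assumed better); the paper's transducer formulation is richer than this greedy loop.

```python
def reduce_from_universal(sources, train_eval, measures):
    """Greedily prune data sources without degrading any measure."""
    current = set(sources)
    scores = train_eval(current)
    improved = True
    while improved:
        improved = False
        for s in sorted(current):
            trial = current - {s}
            if not trial:
                continue
            trial_scores = train_eval(trial)
            if all(trial_scores[m] >= scores[m] for m in measures):
                current, scores, improved = trial, trial_scores, True
                break  # restart the scan from the smaller subset
    return current, scores
```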
|
2502.12089 | Alessandro Favero | Alessandro Favero, Antonio Sclocchi, Francesco Cagnetta, Pascal
Frossard and Matthieu Wyart | How compositional generalization and creativity improve as diffusion
models are trained | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | Natural data is often organized as a hierarchical composition of features.
How many samples do generative models need in order to learn the composition
rules, so as to produce a combinatorially large number of novel data? What
signal in the data is exploited to learn those rules? We investigate these
questions in the context of diffusion models both theoretically and
empirically. Theoretically, we consider simple probabilistic context-free
grammars -- tree-like graphical models used to represent the hierarchical and
compositional structure of data such as language and images. We demonstrate
that diffusion models learn the grammar's composition rules with the sample
complexity required for clustering features with statistically similar context,
a process similar to the word2vec algorithm. However, this clustering emerges
hierarchically: higher-level features associated with longer contexts require
more data to be identified. This mechanism leads to a sample complexity that
scales polynomially with this context size. As a result, diffusion models
trained on an intermediate dataset size generate data coherent up to a certain
scale, but that lacks global coherence. We test these predictions in different
domains, and find remarkable agreement: both generated texts and images achieve
progressively larger coherence lengths as the training time or dataset size
grows. We discuss connections between the hierarchical clustering mechanism we
introduce here and the renormalization group in physics.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 18:06:33 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 20:57:35 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Favero",
"Alessandro",
""
],
[
"Sclocchi",
"Antonio",
""
],
[
"Cagnetta",
"Francesco",
""
],
[
"Frossard",
"Pascal",
""
],
[
"Wyart",
"Matthieu",
""
]
] | TITLE: How compositional generalization and creativity improve as diffusion
models are trained
ABSTRACT: Natural data is often organized as a hierarchical composition of features.
How many samples do generative models need in order to learn the composition
rules, so as to produce a combinatorially large number of novel data? What
signal in the data is exploited to learn those rules? We investigate these
questions in the context of diffusion models both theoretically and
empirically. Theoretically, we consider simple probabilistic context-free
grammars -- tree-like graphical models used to represent the hierarchical and
compositional structure of data such as language and images. We demonstrate
that diffusion models learn the grammar's composition rules with the sample
complexity required for clustering features with statistically similar context,
a process similar to the word2vec algorithm. However, this clustering emerges
hierarchically: higher-level features associated with longer contexts require
more data to be identified. This mechanism leads to a sample complexity that
scales polynomially with this context size. As a result, diffusion models
trained on an intermediate dataset size generate data coherent up to a certain
scale, but that lacks global coherence. We test these predictions in different
domains, and find remarkable agreement: both generated texts and images achieve
progressively larger coherence lengths as the training time or dataset size
grows. We discuss connections between the hierarchical clustering mechanism we
introduce here and the renormalization group in physics.
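A toy probabilistic context-free grammar of the kind the theory refers to; the grammar itself is made up for illustration, not taken from the paper.

```python
import random

# Each nonterminal expands to one of its right-hand sides with the given
# probability; strings absent from the table are terminals.
GRAMMAR = {
    "S": [(("A", "B"), 1.0)],
    "A": [(("a1",), 0.5), (("a2",), 0.5)],
    "B": [(("b1", "A"), 0.5), (("b2",), 0.5)],
}

def sample(symbol="S"):
    if symbol not in GRAMMAR:
        return [symbol]                      # terminal token
    rules, probs = zip(*GRAMMAR[symbol])
    rhs = random.choices(rules, weights=probs)[0]
    return [tok for child in rhs for tok in sample(child)]

print(" ".join(sample()))  # e.g. "a1 b1 a2"
```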
|
2502.13191 | Junyi Guan | Junyi Guan, Abhijith Sharma, Chong Tian, Salem Lahlou | On the Privacy Risks of Spiking Neural Networks: A Membership Inference
Analysis | 13 pages, 6 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Spiking Neural Networks (SNNs) are increasingly explored for their energy
efficiency and robustness in real-world applications, yet their privacy risks
remain largely unexamined. In this work, we investigate the susceptibility of
SNNs to Membership Inference Attacks (MIAs) -- a major privacy threat where an
adversary attempts to determine whether a given sample was part of the training
dataset. While prior work suggests that SNNs may offer inherent robustness due
to their discrete, event-driven nature, we find that this resilience diminishes
as latency (T) increases. Furthermore, we introduce an input dropout strategy
under the black-box setting that significantly enhances membership inference in
SNNs. Our findings challenge the assumption that SNNs are inherently more
secure: contrary to that expectation, our results reveal that SNNs exhibit
privacy vulnerabilities comparable to those of Artificial
Neural Networks (ANNs). Our code is available at
https://anonymous.4open.science/r/MIA_SNN-3610.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 15:19:20 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 15:25:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guan",
"Junyi",
""
],
[
"Sharma",
"Abhijith",
""
],
[
"Tian",
"Chong",
""
],
[
"Lahlou",
"Salem",
""
]
] | TITLE: On the Privacy Risks of Spiking Neural Networks: A Membership Inference
Analysis
ABSTRACT: Spiking Neural Networks (SNNs) are increasingly explored for their energy
efficiency and robustness in real-world applications, yet their privacy risks
remain largely unexamined. In this work, we investigate the susceptibility of
SNNs to Membership Inference Attacks (MIAs) -- a major privacy threat where an
adversary attempts to determine whether a given sample was part of the training
dataset. While prior work suggests that SNNs may offer inherent robustness due
to their discrete, event-driven nature, we find that this resilience diminishes
as latency (T) increases. Furthermore, we introduce an input dropout strategy
under the black-box setting that significantly enhances membership inference in
SNNs. Our findings challenge the assumption that SNNs are inherently more
secure: contrary to that expectation, our results reveal that SNNs exhibit
privacy vulnerabilities comparable to those of Artificial
Neural Networks (ANNs). Our code is available at
https://anonymous.4open.science/r/MIA_SNN-3610.
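A black-box sketch consistent with the input-dropout idea: perturb the input several times with random dropout and score membership by how stable the model's confidence stays. The score definition and threshold are assumptions, not the paper's exact attack.

```python
import torch

@torch.no_grad()
def dropout_membership_score(model, x, p=0.2, n_trials=16):
    """Score membership from confidence stability under input dropout."""
    base = model(x).softmax(-1)
    label = base.argmax(-1)
    confs = []
    for _ in range(n_trials):
        mask = (torch.rand_like(x) > p).float()     # random input dropout
        out = model(x * mask).softmax(-1)
        confs.append(out.gather(-1, label.unsqueeze(-1)).squeeze(-1))
    confs = torch.stack(confs)                      # (n_trials, batch)
    return confs.mean(0) - confs.std(0)             # high & stable => member
```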
|
2502.13257 | Adrien Aumon | Adrien Aumon, Shuang Ni, Myriam Lizotte, Guy Wolf, Kevin R. Moon, Jake
S. Rhodes | Random Forest Autoencoders for Guided Representation Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Decades of research have produced robust methods for unsupervised data
visualization, yet supervised visualization -- where expert labels
guide representations -- remains underexplored, as most supervised
approaches prioritize classification over visualization. Recently, RF-PHATE, a
diffusion-based manifold learning method leveraging random forests and
information geometry, marked significant progress in supervised visualization.
However, its lack of an explicit mapping function limits scalability and
prevents application to unseen data, posing challenges for large datasets and
label-scarce scenarios. To overcome these limitations, we introduce Random
Forest Autoencoders (RF-AE), a neural network-based framework for out-of-sample
kernel extension that combines the flexibility of autoencoders with the
supervised learning strengths of random forests and the geometry captured by
RF-PHATE. RF-AE enables efficient out-of-sample supervised visualization and
outperforms existing methods, including RF-PHATE's standard kernel extension,
in both accuracy and interpretability. Additionally, RF-AE is robust to the
choice of hyper-parameters and generalizes to any kernel-based dimensionality
reduction method.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 20:02:29 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 00:18:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Aumon",
"Adrien",
""
],
[
"Ni",
"Shuang",
""
],
[
"Lizotte",
"Myriam",
""
],
[
"Wolf",
"Guy",
""
],
[
"Moon",
"Kevin R.",
""
],
[
"Rhodes",
"Jake S.",
""
]
] | TITLE: Random Forest Autoencoders for Guided Representation Learning
ABSTRACT: Decades of research have produced robust methods for unsupervised data
visualization, yet supervised visualization -- where expert labels
guide representations -- remains underexplored, as most supervised
approaches prioritize classification over visualization. Recently, RF-PHATE, a
diffusion-based manifold learning method leveraging random forests and
information geometry, marked significant progress in supervised visualization.
However, its lack of an explicit mapping function limits scalability and
prevents application to unseen data, posing challenges for large datasets and
label-scarce scenarios. To overcome these limitations, we introduce Random
Forest Autoencoders (RF-AE), a neural network-based framework for out-of-sample
kernel extension that combines the flexibility of autoencoders with the
supervised learning strengths of random forests and the geometry captured by
RF-PHATE. RF-AE enables efficient out-of-sample supervised visualization and
outperforms existing methods, including RF-PHATE's standard kernel extension,
in both accuracy and interpretability. Additionally, RF-AE is robust to the
choice of hyper-parameters and generalizes to any kernel-based dimensionality
reduction method.
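A minimal sketch of the RF-AE idea as described: an autoencoder whose bottleneck is supervised to match precomputed RF-PHATE coordinates, giving an explicit out-of-sample mapping. Layer sizes and the loss weighting are illustrative.

```python
import torch
import torch.nn as nn

class RFAE(nn.Module):
    """Autoencoder whose bottleneck is tied to RF-PHATE coordinates."""
    def __init__(self, d_in, d_emb=2, d_hid=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_emb))
        self.dec = nn.Sequential(nn.Linear(d_emb, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_in))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def rfae_loss(model, x, rf_phate_coords, lam=1.0):
    # Reconstruction keeps the autoencoder faithful to the inputs;
    # the embedding term pins the bottleneck to the RF-PHATE geometry.
    z, x_hat = model(x)
    return (nn.functional.mse_loss(x_hat, x)
            + lam * nn.functional.mse_loss(z, rf_phate_coords))
```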
|
2502.14599 | Levana Gesson | Levana Gesson, Greg Henning, Jonathan Collin, Marie Vanstalle | Enhancing nuclear cross-section predictions with deep learning: the DINo
algorithm | null | null | null | null | physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Accurate modeling of nuclear reaction cross-sections is crucial for
applications such as hadron therapy, radiation protection, and nuclear reactor
design. Despite continuous advancements in nuclear physics, significant
discrepancies persist between experimental data and theoretical models such as
TENDL and ENDF/B. These deviations introduce uncertainties in Monte Carlo
simulations widely used in nuclear physics and medical applications. In this
work, DINo (Deep learning Intelligence for Nuclear reactiOns) is introduced as
a deep learning-based algorithm designed to improve cross-section predictions
by learning correlations between charge-changing and total cross-sections.
Trained on the TENDL-2021 dataset and validated against experimental data from
the EXFOR database, DINo demonstrates a significant improvement in predictive
accuracy over conventional nuclear models. The results show that DINo
systematically achieves lower $\chi^2$ values compared to TENDL-2021 across
multiple isotopes, particularly for proton-induced reactions on a 12C target.
Specifically, for 11C production, DINo reduces the discrepancy with
experimental data by $\sim 28\%$ compared to TENDL-2021. Additionally, DINo
provides improved predictions for other relevant isotopes produced, such as
4He, 6Li, 9Be, and 10B, which play a crucial role in modeling nuclear
fragmentation processes. By leveraging neural networks, DINo offers fast
cross-section predictions, making it a promising complementary tool for nuclear
reaction modeling. However, the algorithm's performance evaluation is sensitive
to the availability of experimental data, with increased uncertainty in
sparsely measured energy ranges. Future work will focus on refining the model
through data augmentation, expanding its applicability to other reaction
channels, and integrating it into Monte Carlo transport codes for real-time
nuclear data processing.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 14:33:33 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:04:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Gesson",
"Levana",
""
],
[
"Henning",
"Greg",
""
],
[
"Collin",
"Jonathan",
""
],
[
"Vanstalle",
"Marie",
""
]
] | TITLE: Enhancing nuclear cross-section predictions with deep learning: the DINo
algorithm
ABSTRACT: Accurate modeling of nuclear reaction cross-sections is crucial for
applications such as hadron therapy, radiation protection, and nuclear reactor
design. Despite continuous advancements in nuclear physics, significant
discrepancies persist between experimental data and theoretical models such as
TENDL and ENDF/B. These deviations introduce uncertainties in Monte Carlo
simulations widely used in nuclear physics and medical applications. In this
work, DINo (Deep learning Intelligence for Nuclear reactiOns) is introduced as
a deep learning-based algorithm designed to improve cross-section predictions
by learning correlations between charge-changing and total cross-sections.
Trained on the TENDL-2021 dataset and validated against experimental data from
the EXFOR database, DINo demonstrates a significant improvement in predictive
accuracy over conventional nuclear models. The results show that DINo
systematically achieves lower $\chi^2$ values compared to TENDL-2021 across
multiple isotopes, particularly for proton-induced reactions on a 12C target.
Specifically, for 11C production, DINo reduces the discrepancy with
experimental data by $\sim 28\%$ compared to TENDL-2021. Additionally, DINo
provides improved predictions for other relevant isotopes produced, such as
4He, 6Li, 9Be, and 10B, which play a crucial role in modeling nuclear
fragmentation processes. By leveraging neural networks, DINo offers fast
cross-section predictions, making it a promising complementary tool for nuclear
reaction modeling. However, the algorithm's performance evaluation is sensitive
to the availability of experimental data, with increased uncertainty in
sparsely measured energy ranges. Future work will focus on refining the model
through data augmentation, expanding its applicability to other reaction
channels, and integrating it into Monte Carlo transport codes for real-time
nuclear data processing.
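A small regression sketch in the spirit of the described setup: a network mapping reaction features, including the charge-changing cross-section, to a positive predicted cross-section. The feature set and architecture are assumptions, not the authors' network.

```python
import torch.nn as nn

class DINoLike(nn.Module):
    """Tiny MLP regressor: reaction features -> positive cross-section."""
    def __init__(self, n_feats=4, d_hid=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_feats, d_hid), nn.ReLU(),
            nn.Linear(d_hid, d_hid), nn.ReLU(),
            nn.Linear(d_hid, 1), nn.Softplus())  # cross-sections are positive

    def forward(self, x):
        # x: (batch, [energy, Z, A, sigma_charge_changing]) -- assumed features
        return self.net(x)
```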
|
2502.15177 | Raquib Bin Yousuf | Raquib Bin Yousuf, Hoang Anh Just, Shengzhe Xu, Brian Mayer, Victor
Deklerck, Jakub Truszkowski, John C. Simeone, Jade Saunders, Chang-Tien Lu,
Ruoxi Jia, Naren Ramakrishnan | Optimizing Product Provenance Verification using Data Valuation Methods | null | null | null | null | cs.LG cs.CY | http://creativecommons.org/licenses/by/4.0/ | Determining and verifying product provenance remains a critical challenge in
global supply chains, particularly as geopolitical conflicts and shifting
borders create new incentives for misrepresentation of commodities, such as
hiding the origin of illegally harvested timber or agriculture grown on
illegally cleared land. Stable Isotope Ratio Analysis (SIRA), combined with
Gaussian process regression-based isoscapes, has emerged as a powerful tool for
geographic origin verification. However, the effectiveness of these models is
often constrained by data scarcity and suboptimal dataset selection. In this
work, we introduce a novel data valuation framework designed to enhance the
selection and utilization of training data for machine learning models applied
in SIRA. By prioritizing highly informative samples, our approach improves model
robustness and predictive accuracy across diverse datasets and geographies. We
validate our methodology with extensive experiments, demonstrating its
potential to significantly enhance provenance verification, mitigate fraudulent
trade practices, and strengthen regulatory enforcement of global supply chains.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 03:16:19 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 06:20:56 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yousuf",
"Raquib Bin",
""
],
[
"Just",
"Hoang Anh",
""
],
[
"Xu",
"Shengzhe",
""
],
[
"Mayer",
"Brian",
""
],
[
"Deklerck",
"Victor",
""
],
[
"Truszkowski",
"Jakub",
""
],
[
"Simeone",
"John C.",
""
],
[
"Saunders",
"Jade",
""
],
[
"Lu",
"Chang-Tien",
""
],
[
"Jia",
"Ruoxi",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Optimizing Product Provenance Verification using Data Valuation Methods
ABSTRACT: Determining and verifying product provenance remains a critical challenge in
global supply chains, particularly as geopolitical conflicts and shifting
borders create new incentives for misrepresentation of commodities, such as
hiding the origin of illegally harvested timber or agriculture grown on
illegally cleared land. Stable Isotope Ratio Analysis (SIRA), combined with
Gaussian process regression-based isoscapes, has emerged as a powerful tool for
geographic origin verification. However, the effectiveness of these models is
often constrained by data scarcity and suboptimal dataset selection. In this
work, we introduce a novel data valuation framework designed to enhance the
selection and utilization of training data for machine learning models applied
in SIRA. By prioritizing highly informative samples, our approach improves model
robustness and predictive accuracy across diverse datasets and geographies. We
validate our methodology with extensive experiments, demonstrating its
potential to significantly enhance provenance verification, mitigate fraudulent
trade practices, and strengthen regulatory enforcement of global supply chains.
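One simple instance of data valuation for a GP-based isoscape: the leave-one-out value, i.e. the change in validation error when a sample is removed. The paper's framework may use a different estimator; this is just the concept.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def loo_values(X, y, X_val, y_val):
    """Leave-one-out values: error increase when a sample is dropped.

    Refits one GP per sample, so this is O(n) fits -- fine for the
    small datasets typical of SIRA, costly otherwise.
    """
    def val_error(idx):
        gp = GaussianProcessRegressor().fit(X[idx], y[idx])
        return float(np.mean((gp.predict(X_val) - y_val) ** 2))

    n = len(X)
    full = val_error(np.arange(n))
    values = np.empty(n)
    for i in range(n):
        values[i] = val_error(np.delete(np.arange(n), i)) - full
    return values  # positive value => removing the sample hurts
```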
|
2502.15483 | Botian Wang | Botian Wang, Yawen Ouyang, Yaohui Li, Yiqun Wang, Haorui Cui, Jianbing
Zhang, Xiaonan Wang, Wei-Ying Ma, Hao Zhou | MoMa: A Modular Deep Learning Framework for Material Property Prediction | null | null | null | null | cs.LG cond-mat.mtrl-sci | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep learning methods for material property prediction have been widely
explored to advance materials discovery. However, the prevailing
pre-train-then-fine-tune paradigm often fails to address the inherent
diversity and disparity
of material tasks. To overcome these challenges, we introduce MoMa, a Modular
framework for Materials that first trains specialized modules across a wide
range of tasks and then adaptively composes synergistic modules tailored to
each downstream scenario. Evaluation across 17 datasets demonstrates the
superiority of MoMa, with a substantial 14% average improvement over the
strongest baseline. Few-shot and continual learning experiments further
highlight MoMa's potential for real-world applications. Pioneering a new
paradigm of modular material learning, MoMa will be open-sourced to foster
broader community collaboration.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 14:12:44 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 12:33:30 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Botian",
""
],
[
"Ouyang",
"Yawen",
""
],
[
"Li",
"Yaohui",
""
],
[
"Wang",
"Yiqun",
""
],
[
"Cui",
"Haorui",
""
],
[
"Zhang",
"Jianbing",
""
],
[
"Wang",
"Xiaonan",
""
],
[
"Ma",
"Wei-Ying",
""
],
[
"Zhou",
"Hao",
""
]
] | TITLE: MoMa: A Modular Deep Learning Framework for Material Property Prediction
ABSTRACT: Deep learning methods for material property prediction have been widely
explored to advance materials discovery. However, the prevailing pre-train then
fine-tune paradigm often fails to address the inherent diversity and disparity
of material tasks. To overcome these challenges, we introduce MoMa, a Modular
framework for Materials that first trains specialized modules across a wide
range of tasks and then adaptively composes synergistic modules tailored to
each downstream scenario. Evaluation across 17 datasets demonstrates the
superiority of MoMa, with a substantial 14% average improvement over the
strongest baseline. Few-shot and continual learning experiments further
highlight MoMa's potential for real-world applications. Pioneering a new
paradigm of modular material learning, MoMa will be open-sourced to foster
broader community collaboration.
|
2502.16240 | Haoyang Li | Haoyang Li, Jia Qi Yip, Tianyu Fan, Eng Siong Chng | Speech Enhancement Using Continuous Embeddings of Neural Audio Codec | Accepted to ICASSP 2025 | null | 10.1109/ICASSP49660.2025.10890379 | null | eess.AS cs.AI cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Neural Audio Codec (NAC) models have inspired their
use in various speech processing tasks, including speech enhancement (SE). In
this work, we propose a novel, efficient SE approach by leveraging the
pre-quantization output of a pretrained NAC encoder. Unlike prior NAC-based SE
methods, which process discrete speech tokens using Language Models (LMs), we
perform SE within the continuous embedding space of the pretrained NAC, which
is highly compressed along the time dimension for efficient representation. Our
lightweight SE model, optimized through an embedding-level loss, delivers
results comparable to SE baselines trained on larger datasets, with a
significantly lower real-time factor of 0.005. Additionally, our method
achieves a low GMAC of 3.94, reducing complexity 18-fold compared to Sepformer
in a simulated cloud-based audio transmission environment. This work highlights
a new, efficient NAC-based SE solution, particularly suitable for cloud
applications where NAC is used to compress audio before transmission.
Copyright 20XX IEEE. Personal use of this material is permitted. Permission
from IEEE must be obtained for all other uses, in any current or future media,
including reprinting/republishing this material for advertising or promotional
purposes, creating new collective works, for resale or redistribution to
servers or lists, or reuse of any copyrighted component of this work in other
works.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2025 14:25:55 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Haoyang",
""
],
[
"Yip",
"Jia Qi",
""
],
[
"Fan",
"Tianyu",
""
],
[
"Chng",
"Eng Siong",
""
]
] | TITLE: Speech Enhancement Using Continuous Embeddings of Neural Audio Codec
ABSTRACT: Recent advancements in Neural Audio Codec (NAC) models have inspired their
use in various speech processing tasks, including speech enhancement (SE). In
this work, we propose a novel, efficient SE approach by leveraging the
pre-quantization output of a pretrained NAC encoder. Unlike prior NAC-based SE
methods, which process discrete speech tokens using Language Models (LMs), we
perform SE within the continuous embedding space of the pretrained NAC, which
is highly compressed along the time dimension for efficient representation. Our
lightweight SE model, optimized through an embedding-level loss, delivers
results comparable to SE baselines trained on larger datasets, with a
significantly lower real-time factor of 0.005. Additionally, our method
achieves a low GMAC of 3.94, reducing complexity 18-fold compared to Sepformer
in a simulated cloud-based audio transmission environment. This work highlights
a new, efficient NAC-based SE solution, particularly suitable for cloud
applications where NAC is used to compress audio before transmission.
Copyright 20XX IEEE. Personal use of this material is permitted. Permission
from IEEE must be obtained for all other uses, in any current or future media,
including reprinting/republishing this material for advertising or promotional
purposes, creating new collective works, for resale or redistribution to
servers or lists, or reuse of any copyrighted component of this work in other
works.
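A sketch of the embedding-level objective described above: a frozen, pretrained NAC encoder produces pre-quantization embeddings for noisy and clean speech, and a lightweight enhancement network is trained to map the former to the latter. Both callables are placeholders.

```python
import torch
import torch.nn as nn

def nac_se_loss(nac_encoder, se_net, noisy_wav, clean_wav):
    """Embedding-level SE loss in the codec's continuous latent space."""
    with torch.no_grad():                    # the pretrained encoder is frozen
        e_noisy = nac_encoder(noisy_wav)     # pre-quantization embeddings,
        e_clean = nac_encoder(clean_wav)     # e.g. (batch, dim, frames)
    return nn.functional.mse_loss(se_net(e_noisy), e_clean)
```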
|
2502.16627 | Ehsan Zeraatkar | Arshia Kermani, Ehsan Zeraatkar, Habib Irani | Energy-Efficient Transformer Inference: Optimization Strategies for Time
Series Classification | null | null | null | null | cs.LG cs.AI cs.PF | http://creativecommons.org/licenses/by/4.0/ | The increasing computational demands of transformer models in time series
classification necessitate effective optimization strategies for
energy-efficient deployment. Our study presents a systematic investigation of
optimization techniques, focusing on structured pruning and quantization
methods for transformer architectures. Through extensive experimentation on
three distinct datasets (RefrigerationDevices, ElectricDevices, and PLAID), we
quantitatively evaluate model performance and energy efficiency across
different transformer configurations. Our experimental results demonstrate that
static quantization reduces energy consumption by 29.14% while maintaining
classification performance, and L1 pruning achieves a 63% improvement in
inference speed with minimal accuracy degradation. Our findings provide
valuable insights into the effectiveness of optimization strategies for
transformer-based time series classification, establishing a foundation for
efficient model deployment in resource-constrained environments.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2025 16:04:56 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 17:39:46 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 03:46:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kermani",
"Arshia",
""
],
[
"Zeraatkar",
"Ehsan",
""
],
[
"Irani",
"Habib",
""
]
] | TITLE: Energy-Efficient Transformer Inference: Optimization Strategies for Time
Series Classification
ABSTRACT: The increasing computational demands of transformer models in time series
classification necessitate effective optimization strategies for
energy-efficient deployment. Our study presents a systematic investigation of
optimization techniques, focusing on structured pruning and quantization
methods for transformer architectures. Through extensive experimentation on
three distinct datasets (RefrigerationDevices, ElectricDevices, and PLAID), we
quantitatively evaluate model performance and energy efficiency across
different transformer configurations. Our experimental results demonstrate that
static quantization reduces energy consumption by 29.14% while maintaining
classification performance, and L1 pruning achieves a 63% improvement in
inference speed with minimal accuracy degradation. Our findings provide
valuable insights into the effectiveness of optimization strategies for
transformer-based time series classification, establishing a foundation for
efficient model deployment in resource-constrained environments.
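The two optimizations named above map directly onto standard PyTorch utilities; the sketch below applies L1 unstructured pruning to every Linear layer and int8 quantization to a toy transformer. Dynamic quantization is shown for brevity, whereas the paper evaluates static quantization, which additionally needs a calibration pass; the 50% pruning ratio is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2)

# L1 unstructured pruning: zero the 50% smallest-magnitude weights in
# every Linear layer (ratio is illustrative, not the paper's setting).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the tensor

# Dynamic int8 quantization of the Linear layers; the paper's static
# variant would additionally require a calibration pass over sample data.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear},
                                             dtype=torch.qint8)
```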
|
2502.17391 | Andrei Chernov | Andrei Chernov and Oleg Novitskij | The Empirical Impact of Reducing Symmetries on the Performance of Deep
Ensembles and MoE | Accepted at the ICLR Workshop on Neural Network Weights as a New Data
Modality 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent studies have shown that reducing symmetries in neural networks
enhances linear mode connectivity between networks without requiring parameter
space alignment, leading to improved performance in linearly interpolated
neural networks. However, in practical applications, neural network
interpolation is rarely used; instead, ensembles of networks are more common.
In this paper, we empirically investigate the impact of reducing symmetries on
the performance of deep ensembles and Mixture of Experts (MoE) across five
datasets. Additionally, to explore deeper linear mode connectivity, we
introduce the Mixture of Interpolated Experts (MoIE). Our results show that
deep ensembles built on asymmetric neural networks achieve significantly better
performance as ensemble size increases compared to their symmetric
counterparts. In contrast, our experiments do not provide conclusive evidence
on whether reducing symmetries affects both MoE and MoIE architectures.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 18:16:23 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:20:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chernov",
"Andrei",
""
],
[
"Novitskij",
"Oleg",
""
]
] | TITLE: The Empirical Impact of Reducing Symmetries on the Performance of Deep
Ensembles and MoE
ABSTRACT: Recent studies have shown that reducing symmetries in neural networks
enhances linear mode connectivity between networks without requiring parameter
space alignment, leading to improved performance in linearly interpolated
neural networks. However, in practical applications, neural network
interpolation is rarely used; instead, ensembles of networks are more common.
In this paper, we empirically investigate the impact of reducing symmetries on
the performance of deep ensembles and Mixture of Experts (MoE) across five
datasets. Additionally, to explore deeper linear mode connectivity, we
introduce the Mixture of Interpolated Experts (MoIE). Our results show that
deep ensembles built on asymmetric neural networks achieve significantly better
performance as ensemble size increases compared to their symmetric
counterparts. In contrast, our experiments do not provide conclusive evidence
on whether reducing symmetries affects both MoE and MoIE architectures.
|
2502.19698 | Guangfeng Jiang | Guangfeng Jiang, Jun Liu, Yongxuan Lv, Yuzhi Wu, Xianfei Li, Wenlong
Liao, Tao He, Pai Peng | You Only Click Once: Single Point Weakly Supervised 3D Instance
Segmentation for Autonomous Driving | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Outdoor LiDAR point cloud 3D instance segmentation is a crucial task in
autonomous driving. However, it requires laborious human efforts to annotate
the point cloud for training a segmentation model. To address this challenge,
we propose the YoCo framework, which generates 3D pseudo labels using minimal
coarse click annotations in the bird's eye view plane. It is a significant
challenge to produce high-quality pseudo labels from sparse annotations. Our
YoCo framework first leverages vision foundation models combined with geometric
constraints from point clouds to enhance pseudo label generation. Second, a
temporal and spatial-based label updating module is designed to generate
reliable updated labels. It leverages predictions from adjacent frames and
utilizes the inherent density variation of point clouds (dense near, sparse
far). Finally, to further improve label quality, an IoU-guided enhancement
module is proposed, replacing pseudo labels with high-confidence and high-IoU
predictions. Experiments on the Waymo dataset demonstrate YoCo's effectiveness
and generality, achieving state-of-the-art performance among weakly supervised
methods and surpassing fully supervised Cylinder3D. Additionally, YoCo is
suitable for various networks, achieving performance comparable to fully
supervised methods with minimal fine-tuning using only 0.8% of the fully
labeled data, significantly reducing annotation costs.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 02:33:51 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 02:47:45 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 06:46:30 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jiang",
"Guangfeng",
""
],
[
"Liu",
"Jun",
""
],
[
"Lv",
"Yongxuan",
""
],
[
"Wu",
"Yuzhi",
""
],
[
"Li",
"Xianfei",
""
],
[
"Liao",
"Wenlong",
""
],
[
"He",
"Tao",
""
],
[
"Peng",
"Pai",
""
]
] | TITLE: You Only Click Once: Single Point Weakly Supervised 3D Instance
Segmentation for Autonomous Driving
ABSTRACT: Outdoor LiDAR point cloud 3D instance segmentation is a crucial task in
autonomous driving. However, it requires laborious human efforts to annotate
the point cloud for training a segmentation model. To address this challenge,
we propose the YoCo framework, which generates 3D pseudo labels using minimal
coarse click annotations in the bird's eye view plane. It is a significant
challenge to produce high-quality pseudo labels from sparse annotations. Our
YoCo framework first leverages vision foundation models combined with geometric
constraints from point clouds to enhance pseudo label generation. Second, a
temporal and spatial-based label updating module is designed to generate
reliable updated labels. It leverages predictions from adjacent frames and
utilizes the inherent density variation of point clouds (dense near, sparse
far). Finally, to further improve label quality, an IoU-guided enhancement
module is proposed, replacing pseudo labels with high-confidence and high-IoU
predictions. Experiments on the Waymo dataset demonstrate YoCo's effectiveness
and generality, achieving state-of-the-art performance among weakly supervised
methods and surpassing fully supervised Cylinder3D. Additionally, YoCo is
suitable for various networks, achieving performance comparable to fully
supervised methods with minimal fine-tuning using only 0.8% of the fully
labeled data, significantly reducing annotation costs.
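A sketch of the IoU-guided enhancement logic as described: a pseudo label is replaced by the model's prediction only when the prediction is both confident and in high agreement with the current label. Thresholds are illustrative.

```python
import numpy as np

def iou_guided_update(pseudo, pred, conf, iou_thr=0.8, conf_thr=0.9):
    """Replace a binary pseudo mask with a high-confidence, high-IoU prediction."""
    inter = np.logical_and(pseudo, pred).sum()
    union = np.logical_or(pseudo, pred).sum()
    iou = inter / union if union else 0.0
    return pred if (conf >= conf_thr and iou >= iou_thr) else pseudo
```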
|
2503.00226 | Yufei Guo | Yufei Guo, Xiaode Liu, Yuanpei Chen, Weihang Peng, Yuhan Zhang, Zhe Ma | Spiking Transformer: Introducing Accurate Addition-Only Spiking
Self-Attention for Transformer | Accepted by CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Transformers have demonstrated outstanding performance across a wide range of
tasks, owing to their self-attention mechanism, but they are highly
energy-consuming. Spiking Neural Networks (SNNs) have emerged as a promising
energy-efficient alternative to traditional Artificial Neural Networks,
leveraging event-driven computation and binary spikes for information transfer.
The combination of Transformers' capabilities with the energy efficiency of
SNNs offers a compelling opportunity. This paper addresses the challenge of
adapting the self-attention mechanism of Transformers to the spiking paradigm
by introducing a novel approach: Accurate Addition-Only Spiking Self-Attention
(A$^2$OS$^2$A). Unlike existing methods that rely solely on binary spiking
neurons for all components of the self-attention mechanism, our approach
integrates binary, ReLU, and ternary spiking neurons. This hybrid strategy
significantly improves accuracy while preserving non-multiplicative
computations. Moreover, our method eliminates the need for softmax and scaling
operations. Extensive experiments show that the A$^2$OS$^2$A-based Spiking
Transformer outperforms existing SNN-based Transformers on several datasets,
even achieving an accuracy of 78.66\% on ImageNet-1K. Our work represents a
significant advancement in SNN-based Transformer models, offering a more
accurate and efficient solution for real-world applications.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 22:23:29 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 03:17:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guo",
"Yufei",
""
],
[
"Liu",
"Xiaode",
""
],
[
"Chen",
"Yuanpei",
""
],
[
"Peng",
"Weihang",
""
],
[
"Zhang",
"Yuhan",
""
],
[
"Ma",
"Zhe",
""
]
] | TITLE: Spiking Transformer: Introducing Accurate Addition-Only Spiking
Self-Attention for Transformer
ABSTRACT: Transformers have demonstrated outstanding performance across a wide range of
tasks, owing to their self-attention mechanism, but they are highly
energy-consuming. Spiking Neural Networks have emerged as a promising
energy-efficient alternative to traditional Artificial Neural Networks,
leveraging event-driven computation and binary spikes for information transfer.
The combination of Transformers' capabilities with the energy efficiency of
SNNs offers a compelling opportunity. This paper addresses the challenge of
adapting the self-attention mechanism of Transformers to the spiking paradigm
by introducing a novel approach: Accurate Addition-Only Spiking Self-Attention
(A$^2$OS$^2$A). Unlike existing methods that rely solely on binary spiking
neurons for all components of the self-attention mechanism, our approach
integrates binary, ReLU, and ternary spiking neurons. This hybrid strategy
significantly improves accuracy while preserving non-multiplicative
computations. Moreover, our method eliminates the need for softmax and scaling
operations. Extensive experiments show that the A$^2$OS$^2$A-based Spiking
Transformer outperforms existing SNN-based Transformers on several datasets,
even achieving an accuracy of 78.66\% on ImageNet-1K. Our work represents a
significant advancement in SNN-based Transformer models, offering a more
accurate and efficient solution for real-world applications.
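A toy rendering of the addition-only idea: once queries fire binary spikes and keys fire ternary spikes, both matrix products reduce to signed additions, and no softmax or scaling is applied. This sketches the concept only; the neuron placement and shapes here are assumptions, not the paper's A$^2$OS$^2$A design.

```python
# Minimal addition-only self-attention with hybrid spiking neurons.
import torch

def binary_spike(x):                 # Heaviside-style spikes in {0, 1}
    return (x > 0).float()

def ternary_spike(x, thr=0.5):       # spikes in {-1, 0, 1}
    return torch.sign(x) * (x.abs() > thr).float()

def addition_only_attention(q, k, v):
    """Self-attention without softmax or scaling. Because q and k take
    values in {0,1} and {-1,0,1}, both matrix products amount to signed
    additions of value rows."""
    qs = binary_spike(q)             # (T, d)
    ks = ternary_spike(k)            # (T, d)
    attn = qs @ ks.t()               # integer-valued scores, no softmax
    return attn @ torch.relu(v)      # accumulate value rows by addition

T, d = 8, 16
q, k, v = (torch.randn(T, d) for _ in range(3))
out = addition_only_attention(q, k, v)
```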
|
2503.00605 | Yuezhi Yang | Yuezhi Yang, Qimin Chen, Vladimir G. Kim, Siddhartha Chaudhuri, Qixing
Huang, Zhiqin Chen | GenVDM: Generating Vector Displacement Maps From a Single Image | accepted to CVPR2025 | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the first method for generating Vector Displacement Maps (VDMs):
parameterized, detailed geometric stamps commonly used in 3D modeling. Given a
single input image, our method first generates multi-view normal maps and then
reconstructs a VDM from the normals via a novel reconstruction pipeline. We
also propose an efficient algorithm for extracting VDMs from 3D objects, and
present the first academic VDM dataset. Compared to existing 3D generative
models focusing on complete shapes, we focus on generating parts that can be
seamlessly attached to shape surfaces. The method gives artists rich control
over adding geometric details to a 3D shape. Experiments demonstrate that our
approach outperforms existing baselines. Generating VDMs offers additional
benefits, such as using 2D image editing to customize and refine 3D details.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 20:11:18 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 04:39:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"Yuezhi",
""
],
[
"Chen",
"Qimin",
""
],
[
"Kim",
"Vladimir G.",
""
],
[
"Chaudhuri",
"Siddhartha",
""
],
[
"Huang",
"Qixing",
""
],
[
"Chen",
"Zhiqin",
""
]
] | TITLE: GenVDM: Generating Vector Displacement Maps From a Single Image
ABSTRACT: We introduce the first method for generating Vector Displacement Maps (VDMs):
parameterized, detailed geometric stamps commonly used in 3D modeling. Given a
single input image, our method first generates multi-view normal maps and then
reconstructs a VDM from the normals via a novel reconstruction pipeline. We
also propose an efficient algorithm for extracting VDMs from 3D objects, and
present the first academic VDM dataset. Compared to existing 3D generative
models focusing on complete shapes, we focus on generating parts that can be
seamlessly attached to shape surfaces. The method gives artists rich control
over adding geometric details to a 3D shape. Experiments demonstrate that our
approach outperforms existing baselines. Generating VDMs offers additional
benefits, such as using 2D image editing to customize and refine 3D details.
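To make the notion of a vector displacement map concrete, here is a hedged toy example: each texel stores a 3D offset applied to a surface sample in its local tangent frame. The flat-patch setup and `apply_vdm` helper are illustrative assumptions, not GenVDM's pipeline.

```python
# Applying a (H, W, 3) VDM "stamp" to a parameterized surface patch.
import numpy as np

def apply_vdm(points, normals, tangents, vdm):
    """Displace surface samples by a per-texel 3D offset expressed in the
    local frame (tangent, bitangent, normal)."""
    bitangents = np.cross(normals, tangents)
    return (points
            + vdm[..., 0:1] * tangents
            + vdm[..., 1:2] * bitangents
            + vdm[..., 2:3] * normals)

H = W = 32
grid = np.stack(np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H)), -1)
points = np.concatenate([grid, np.zeros((H, W, 1))], axis=-1)  # flat patch
normals = np.broadcast_to([0.0, 0.0, 1.0], (H, W, 3))
tangents = np.broadcast_to([1.0, 0.0, 0.0], (H, W, 3))
vdm = np.random.uniform(-0.05, 0.05, (H, W, 3))  # a random toy "stamp"
stamped = apply_vdm(points, normals, tangents, vdm)
```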
|
2503.00856 | Samet Demir | Samet Demir, Zafer Dogan | Asymptotic Analysis of Two-Layer Neural Networks after One Gradient Step
under Gaussian Mixtures Data with Structure | ICLR 2025, 27 pages, 9 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we study the training and generalization performance of
two-layer neural networks (NNs) after one gradient descent step under
structured data modeled by Gaussian mixtures. While previous research has
extensively analyzed this model under the isotropic data assumption, such
simplifications overlook the complexities inherent in real-world datasets. Our
work addresses this limitation by analyzing two-layer NNs under Gaussian
mixture data assumption in the asymptotically proportional limit, where the
input dimension, number of hidden neurons, and sample size grow with finite
ratios. We characterize the training and generalization errors by leveraging
recent advancements in Gaussian universality. Specifically, we prove that a
high-order polynomial model performs equivalently to the nonlinear neural
networks under certain conditions. The degree of the equivalent model is
intricately linked to both the "data spread" and the learning rate employed
during one gradient step. Through extensive simulations, we demonstrate the
equivalence between the original model and its polynomial counterpart across
various regression and classification tasks. Additionally, we explore how
different properties of Gaussian mixtures affect learning outcomes. Finally, we
illustrate experimental results on Fashion-MNIST classification, indicating
that our findings can translate to realistic data.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 11:28:54 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 10:54:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Demir",
"Samet",
""
],
[
"Dogan",
"Zafer",
""
]
] | TITLE: Asymptotic Analysis of Two-Layer Neural Networks after One Gradient Step
under Gaussian Mixtures Data with Structure
ABSTRACT: In this work, we study the training and generalization performance of
two-layer neural networks (NNs) after one gradient descent step under
structured data modeled by Gaussian mixtures. While previous research has
extensively analyzed this model under the isotropic data assumption, such
simplifications overlook the complexities inherent in real-world datasets. Our
work addresses this limitation by analyzing two-layer NNs under a Gaussian
mixture data assumption in the asymptotically proportional limit, where the
input dimension, number of hidden neurons, and sample size grow with finite
ratios. We characterize the training and generalization errors by leveraging
recent advancements in Gaussian universality. Specifically, we prove that a
high-order polynomial model performs equivalently to the nonlinear neural
networks under certain conditions. The degree of the equivalent model is
intricately linked to both the "data spread" and the learning rate employed
during one gradient step. Through extensive simulations, we demonstrate the
equivalence between the original model and its polynomial counterpart across
various regression and classification tasks. Additionally, we explore how
different properties of Gaussian mixtures affect learning outcomes. Finally, we
illustrate experimental results on Fashion-MNIST classification, indicating
that our findings can translate to realistic data.
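The studied setting admits a small illustration: a two-layer network whose first-layer weights take exactly one gradient step on data drawn from a two-component Gaussian mixture. All dimensions, the learning rate, and the tanh activation below are illustrative choices, not the paper's exact setup.

```python
# One gradient step on the first layer of a two-layer network,
# with inputs drawn from a structured (Gaussian mixture) distribution.
import torch

d, p, n, lr = 64, 128, 256, 1.0
means = torch.stack([torch.ones(d) / d**0.5, -torch.ones(d) / d**0.5])
labels = torch.randint(0, 2, (n,))
X = means[labels] + 0.5 * torch.randn(n, d)   # mixture inputs
y = labels.float() * 2 - 1

W = (torch.randn(p, d) / d**0.5).requires_grad_()  # trained first layer
a = torch.randn(p) / p**0.5                        # fixed second layer

loss = ((torch.tanh(X @ W.t()) @ a - y) ** 2).mean()
loss.backward()
with torch.no_grad():
    W -= lr * W.grad                               # the "one gradient step"
```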
|
2503.01001 | Allen Lin | Allen Lin, Renqin Cai, Yun He, Hanchao Yu, Jing Qian, Rui Li, Qifan
Wang, James Caverlee | Towards An Efficient LLM Training Paradigm for CTR Prediction | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have demonstrated tremendous potential as the
next-generation ranking-based recommendation system. Many recent works have
shown that LLMs can significantly outperform conventional click-through-rate
(CTR) prediction approaches. Despite such promising results, the computational
inefficiency inherent in the current training paradigm makes it particularly
challenging to train LLMs for ranking-based recommendation tasks on large
datasets. To train LLMs for CTR prediction, most existing studies adopt the
prevalent ''sliding-window'' paradigm. Given a sequence of $m$ user
interactions, a unique training prompt is constructed for each interaction by
designating it as the prediction target along with its preceding $n$
interactions serving as context. In turn, the sliding-window paradigm results
in an overall complexity of $O(mn^2)$ that scales linearly with the length of
user interactions. Consequently, directly training LLMs with such a strategy
can result in prohibitively high training costs as the length of
interactions grows. To alleviate the computational inefficiency, we propose a
novel training paradigm, namely Dynamic Target Isolation (DTI), that
structurally parallelizes the training of $k$ (where $k \gg 1$) target
interactions. Furthermore, we identify two major bottlenecks - hidden-state
leakage and positional bias overfitting - that limit DTI to only scale up to a
small value of $k$ (e.g., 5), and then propose a computationally light solution to
effectively tackle each. Through extensive experiments on three widely adopted
public CTR datasets, we empirically show that DTI reduces training time by an
average of $\textbf{92\%}$ (e.g., from $70.5$ hrs to $5.31$ hrs), without
compromising CTR prediction performance.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 19:43:35 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 21:50:37 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 14:45:21 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lin",
"Allen",
""
],
[
"Cai",
"Renqin",
""
],
[
"He",
"Yun",
""
],
[
"Yu",
"Hanchao",
""
],
[
"Qian",
"Jing",
""
],
[
"Li",
"Rui",
""
],
[
"Wang",
"Qifan",
""
],
[
"Caverlee",
"James",
""
]
] | TITLE: Towards An Efficient LLM Training Paradigm for CTR Prediction
ABSTRACT: Large Language Models (LLMs) have demonstrated tremendous potential as the
next-generation ranking-based recommendation system. Many recent works have
shown that LLMs can significantly outperform conventional click-through-rate
(CTR) prediction approaches. Despite such promising results, the computational
inefficiency inherent in the current training paradigm makes it particularly
challenging to train LLMs for ranking-based recommendation tasks on large
datasets. To train LLMs for CTR prediction, most existing studies adopt the
prevalent ''sliding-window'' paradigm. Given a sequence of $m$ user
interactions, a unique training prompt is constructed for each interaction by
designating it as the prediction target along with its preceding $n$
interactions serving as context. In turn, the sliding-window paradigm results
in an overall complexity of $O(mn^2)$ that scales linearly with the length of
user interactions. Consequently, directly training LLMs with such a strategy
can result in prohibitively high training costs as the length of
interactions grows. To alleviate the computational inefficiency, we propose a
novel training paradigm, namely Dynamic Target Isolation (DTI), that
structurally parallelizes the training of $k$ (where $k \gg 1$) target
interactions. Furthermore, we identify two major bottlenecks - hidden-state
leakage and positional bias overfitting - that limit DTI to only scale up to a
small value of $k$ (e.g., 5), and then propose a computationally light solution to
effectively tackle each. Through extensive experiments on three widely adopted
public CTR datasets, we empirically show that DTI reduces training time by an
average of $\textbf{92\%}$ (e.g., from $70.5$ hrs to $5.31$ hrs), without
compromising CTR prediction performance.
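The sliding-window paradigm the paper sets out to replace is easy to picture in code: each of the $m$ interactions becomes a prediction target with its preceding $n$ interactions as context, so every prompt is re-encoded from scratch. The prompt template below is a made-up stand-in.

```python
# Toy rendering of the "sliding-window" prompt construction: m interactions
# yield m prompts, each re-encoding up to n context items from scratch,
# hence the overall O(m * n^2) attention cost the paper highlights.
def sliding_window_prompts(interactions, n):
    prompts = []
    for i, target in enumerate(interactions):
        context = interactions[max(0, i - n):i]
        prompts.append(
            f"User history: {', '.join(context)}. Will the user click {target}?"
        )
    return prompts

items = [f"item_{i}" for i in range(6)]
for p in sliding_window_prompts(items, n=3):
    print(p)
```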
|
2503.01611 | David Ponce | David Ponce, Thierry Etchegoyhen | In-context Learning vs. Instruction Tuning: The Case of Small and
Multilingual Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction following is a critical ability for Large Language Models to
perform downstream tasks. The standard approach to instruction alignment has
relied on a specific phase of model tuning over curated instruction datasets,
optionally complemented with an alignment step over human preferences. Recent
work has shown the potential of in-context learning (ICL) alternatives to guide
base models towards instruction following. This type of approach is
particularly relevant to extend instruction following across languages and
models of varying sizes adapted to different types of usage. In this work we
compare ICL and instruction fine-tuning in English, French and Spanish, on
Small Language Models, and provide experimental results on applying Direct
Preference Optimisation (DPO) over base models. Our results show that scenarios
involving multilingual and smaller models result in downgraded ICL instruction
following performance, only partially mitigated by DPO alignment. This study
aims to further our understanding of current strengths and limitations of
alternative methods for instruction following.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:47:23 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 15:32:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ponce",
"David",
""
],
[
"Etchegoyhen",
"Thierry",
""
]
] | TITLE: In-context Learning vs. Instruction Tuning: The Case of Small and
Multilingual Language Models
ABSTRACT: Instruction following is a critical ability for Large Language Models to
perform downstream tasks. The standard approach to instruction alignment has
relied on a specific phase of model tuning over curated instruction datasets,
optionally complemented with an alignment step over human preferences. Recent
work has shown the potential of in-context learning (ICL) alternatives to guide
base models towards instruction following. This type of approach is
particularly relevant for extending instruction following across languages and
models of varying sizes adapted to different types of usage. In this work we
compare ICL and instruction fine-tuning in English, French and Spanish, on
Small Language Models, and provide experimental results on applying Direct
Preference Optimisation (DPO) over base models. Our results show that scenarios
involving multilingual and smaller models result in downgraded ICL instruction
following performance, only partially mitigated by DPO alignment. This study
aims to further our understanding of current strengths and limitations of
alternative methods for instruction following.
|
2503.02304 | Tongkun Guan | Tongkun Guan, Zining Wang, Pei Fu, Zhengtao Guo, Wei Shen, Kai Zhou,
Tiezhu Yue, Chen Duan, Hao Sun, Qianyi Jiang, Junfeng Luo, Xiaokang Yang | A Token-level Text Image Foundation Model for Document Understanding | 23 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years, general visual foundation models (VFMs) have witnessed
increasing adoption, particularly as image encoders for popular multi-modal
large language models (MLLMs). However, without semantically fine-grained
supervision, these models still encounter fundamental prediction errors in the
context of downstream text-image-related tasks, i.e., perception, understanding
and reasoning with images containing small and dense texts. To bridge this gap,
we develop TokenOCR, the first token-level visual foundation model specifically
tailored for text-image-related tasks, designed to support a variety of
traditional downstream applications. To facilitate the pretraining of TokenOCR,
we also devise a high-quality data production pipeline that constructs the
first token-level image text dataset, TokenIT, comprising 20 million images and
1.8 billion token-mask pairs. Furthermore, leveraging this foundation with
exceptional image-as-text capability, we seamlessly replace previous VFMs with
TokenOCR to construct a document-level MLLM, TokenVL, for VQA-based document
understanding tasks. Finally, extensive experiments demonstrate the
effectiveness of TokenOCR and TokenVL. Code, datasets, and weights will be
available at https://github.com/Token-family/TokenFD.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:05:33 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 11:35:21 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guan",
"Tongkun",
""
],
[
"Wang",
"Zining",
""
],
[
"Fu",
"Pei",
""
],
[
"Guo",
"Zhengtao",
""
],
[
"Shen",
"Wei",
""
],
[
"Zhou",
"Kai",
""
],
[
"Yue",
"Tiezhu",
""
],
[
"Duan",
"Chen",
""
],
[
"Sun",
"Hao",
""
],
[
"Jiang",
"Qianyi",
""
],
[
"Luo",
"Junfeng",
""
],
[
"Yang",
"Xiaokang",
""
]
] | TITLE: A Token-level Text Image Foundation Model for Document Understanding
ABSTRACT: In recent years, general visual foundation models (VFMs) have witnessed
increasing adoption, particularly as image encoders for popular multi-modal
large language models (MLLMs). However, without semantically fine-grained
supervision, these models still encounter fundamental prediction errors in the
context of downstream text-image-related tasks, i.e., perception, understanding
and reasoning with images containing small and dense texts. To bridge this gap,
we develop TokenOCR, the first token-level visual foundation model specifically
tailored for text-image-related tasks, designed to support a variety of
traditional downstream applications. To facilitate the pretraining of TokenOCR,
we also devise a high-quality data production pipeline that constructs the
first token-level image text dataset, TokenIT, comprising 20 million images and
1.8 billion token-mask pairs. Furthermore, leveraging this foundation with
exceptional image-as-text capability, we seamlessly replace previous VFMs with
TokenOCR to construct a document-level MLLM, TokenVL, for VQA-based document
understanding tasks. Finally, extensive experiments demonstrate the
effectiveness of TokenOCR and TokenVL. Code, datasets, and weights will be
available at https://github.com/Token-family/TokenFD.
|
2503.03355 | Zhihao Zhan | Zhihao Zhan, Wang Pang, Xiang Zhu, Yechao Bai | Video Super-Resolution: All You Need is a Video Diffusion Model | The paper is under consideration at Pattern Recognition Letters | null | null | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a generic video super-resolution algorithm in this paper, based on
the Diffusion Posterior Sampling framework with an unconditional video
generation model in latent space. The video generation model, a diffusion
transformer, functions as a space-time model. We argue that a powerful model,
which learns the physics of the real world, can easily handle various kinds of
motion patterns as prior knowledge, thus eliminating the need for explicit
estimation of optical flows or motion parameters for pixel alignment.
Furthermore, a single instance of the proposed video diffusion transformer
model can adapt to different sampling conditions without re-training. Empirical
results on synthetic and real-world datasets demonstrate that our method has
strong capabilities to address video super-resolution challenges.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:37:51 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 16:01:32 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 02:09:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhan",
"Zhihao",
""
],
[
"Pang",
"Wang",
""
],
[
"Zhu",
"Xiang",
""
],
[
"Bai",
"Yechao",
""
]
] | TITLE: Video Super-Resolution: All You Need is a Video Diffusion Model
ABSTRACT: We present a generic video super-resolution algorithm in this paper, based on
the Diffusion Posterior Sampling framework with an unconditional video
generation model in latent space. The video generation model, a diffusion
transformer, functions as a space-time model. We argue that a powerful model,
which learns the physics of the real world, can easily handle various kinds of
motion patterns as prior knowledge, thus eliminating the need for explicit
estimation of optical flows or motion parameters for pixel alignment.
Furthermore, a single instance of the proposed video diffusion transformer
model can adapt to different sampling conditions without re-training. Empirical
results on synthetic and real-world datasets demonstrate that our method has
strong capabilities to address video super-resolution challenges.
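A schematic of a Diffusion Posterior Sampling guidance step for super-resolution, under strong simplifications: a generic denoiser stands in for the latent-space video diffusion transformer, and a fixed average-pooling operator plays the degradation model. This is a conceptual sketch, not the paper's implementation.

```python
# One guided reverse step: denoise, then pull the estimate toward the
# low-resolution measurements y through the gradient of the data term.
import torch

def dps_step(x_t, y, denoiser, A, sigma_t, guidance=1.0):
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t, sigma_t)            # posterior-mean estimate
    residual = (A(x0_hat) - y).pow(2).sum()    # data-consistency error
    grad = torch.autograd.grad(residual, x_t)[0]
    return x0_hat - guidance * grad            # move against the residual

# Toy instantiation: 4x average-pool "camera" and a stand-in denoiser.
A = lambda x: torch.nn.functional.avg_pool2d(x, 4)
denoiser = lambda x, s: x / (1 + s)            # not a trained model
y = torch.randn(1, 3, 16, 16)
x = torch.randn(1, 3, 64, 64)
x = dps_step(x, y, denoiser, A, sigma_t=0.5)
```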
|
2503.03524 | Yixin Su | Yixin Su, Wei Jiang, Fangquan Lin, Cheng Yang, Sarah M. Erfani, Junhao
Gan, Yunxiang Zhao, Ruixuan Li, Rui Zhang | Intrinsic and Extrinsic Factor Disentanglement for Recommendation in
Various Context Scenarios | 32 pages, 13 figures, 11 tables. Published on Transactions of
Information Systems | null | 10.1145/3722553 | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recommender systems, the patterns of user behaviors (e.g., purchase,
click) may vary greatly in different contexts (e.g., time and location). This
is because user behavior is jointly determined by two types of factors:
intrinsic factors, which reflect consistent user preference, and extrinsic
factors, which reflect external incentives that may vary in different contexts.
Differentiating between intrinsic and extrinsic factors helps learn user
behaviors better. However, existing studies have only considered
differentiating them from a single, pre-defined context (e.g., time or
location), ignoring the fact that a user's extrinsic factors may be influenced
by the interplay of various contexts at the same time. In this paper, we
propose the Intrinsic-Extrinsic Disentangled Recommendation (IEDR) model, a
generic framework that differentiates intrinsic from extrinsic factors
considering various contexts simultaneously, enabling more accurate
differentiation of factors and hence the improvement of recommendation
accuracy. IEDR contains a context-invariant contrastive learning component to
capture intrinsic factors, and a disentanglement component to extract extrinsic
factors under the interplay of various contexts. The two components work
together to achieve effective factor learning. Extensive experiments on
real-world datasets demonstrate IEDR's effectiveness in learning disentangled
factors and significantly improving recommendation accuracy by up to 4% in
NDCG.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:08:53 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 14:18:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Su",
"Yixin",
""
],
[
"Jiang",
"Wei",
""
],
[
"Lin",
"Fangquan",
""
],
[
"Yang",
"Cheng",
""
],
[
"Erfani",
"Sarah M.",
""
],
[
"Gan",
"Junhao",
""
],
[
"Zhao",
"Yunxiang",
""
],
[
"Li",
"Ruixuan",
""
],
[
"Zhang",
"Rui",
""
]
] | TITLE: Intrinsic and Extrinsic Factor Disentanglement for Recommendation in
Various Context Scenarios
ABSTRACT: In recommender systems, the patterns of user behaviors (e.g., purchase,
click) may vary greatly in different contexts (e.g., time and location). This
is because user behavior is jointly determined by two types of factors:
intrinsic factors, which reflect consistent user preference, and extrinsic
factors, which reflect external incentives that may vary in different contexts.
Differentiating between intrinsic and extrinsic factors helps learn user
behaviors better. However, existing studies have only considered
differentiating them from a single, pre-defined context (e.g., time or
location), ignoring the fact that a user's extrinsic factors may be influenced
by the interplay of various contexts at the same time. In this paper, we
propose the Intrinsic-Extrinsic Disentangled Recommendation (IEDR) model, a
generic framework that differentiates intrinsic from extrinsic factors
considering various contexts simultaneously, enabling more accurate
differentiation of factors and hence the improvement of recommendation
accuracy. IEDR contains a context-invariant contrastive learning component to
capture intrinsic factors, and a disentanglement component to extract extrinsic
factors under the interplay of various contexts. The two components work
together to achieve effective factor learning. Extensive experiments on
real-world datasets demonstrate IEDR's effectiveness in learning disentangled
factors and significantly improving recommendation accuracy by up to 4% in
NDCG.
|
2503.03799 | Chung Yue Hui David | Jianqi Yan (1), Alex P. Leung (1), Zhiyuan Pei (2), David C. Y. Hui
(3), Sangin Kim (3) ((1) The University of Hong Kong, (2) Macau University of
Science and Technology, (3) Chungnam National University) | DeepGrav: Anomalous Gravitational-Wave Detection Through Deep Latent
Features | 6 pages, 3 figures, A concise introduction to the winning solution
for NSF HDR A3D3 GW challenge. Our training code is publicly available at
https://github.com/yan123yan/HDR-anomaly-challenge-submission | null | null | null | cs.LG astro-ph.HE gr-qc | http://creativecommons.org/licenses/by/4.0/ | This work introduces a novel deep learning-based approach for gravitational
wave anomaly detection, aiming to overcome the limitations of traditional
matched filtering techniques in identifying unknown waveform gravitational wave
signals. We introduce a modified convolutional neural network architecture
inspired by ResNet that leverages residual blocks to extract high-dimensional
features, effectively capturing subtle differences between background noise and
gravitational wave signals. This network architecture learns a high-dimensional
projection while preserving discrepancies with the original input, facilitating
precise identification of gravitational wave signals. In our experiments, we
implement an innovative data augmentation strategy that generates new data by
computing the arithmetic mean of multiple signal samples while retaining the
key features of the original signals.
In the NSF HDR A3D3: Detecting Anomalous Gravitational Wave Signals
competition, we (group name: easonyan123) were honored to win first place, with
our model achieving a true negative rate (TNR) of 0.9708 during the
development/validation phase and 0.9832 on an unseen challenge dataset during
the final/testing phase, the highest among all competitors. These
results demonstrate that our method not only achieves excellent generalization
performance but also maintains robust adaptability in addressing the complex
uncertainties inherent in gravitational wave anomaly detection.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:14:22 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 01:37:42 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yan",
"Jianqi",
""
],
[
"Leung",
"Alex P.",
""
],
[
"Pei",
"Zhiyuan",
""
],
[
"Hui",
"David C. Y.",
""
],
[
"Kim",
"Sangin",
""
]
] | TITLE: DeepGrav: Anomalous Gravitational-Wave Detection Through Deep Latent
Features
ABSTRACT: This work introduces a novel deep learning-based approach for gravitational
wave anomaly detection, aiming to overcome the limitations of traditional
matched filtering techniques in identifying unknown waveform gravitational wave
signals. We introduce a modified convolutional neural network architecture
inspired by ResNet that leverages residual blocks to extract high-dimensional
features, effectively capturing subtle differences between background noise and
gravitational wave signals. This network architecture learns a high-dimensional
projection while preserving discrepancies with the original input, facilitating
precise identification of gravitational wave signals. In our experiments, we
implement an innovative data augmentation strategy that generates new data by
computing the arithmetic mean of multiple signal samples while retaining the
key features of the original signals.
In the NSF HDR A3D3: Detecting Anomalous Gravitational Wave Signals
competition, we (group name: easonyan123) were honored to win first place, with
our model achieving a true negative rate (TNR) of 0.9708 during the
development/validation phase and 0.9832 on an unseen challenge dataset during
the final/testing phase, the highest among all competitors. These
results demonstrate that our method not only achieves excellent generalization
performance but also maintains robust adaptability in addressing the complex
uncertainties inherent in gravitational wave anomaly detection.
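The augmentation strategy described above reduces to a few lines: new examples are arithmetic means of several randomly chosen signal samples. Shapes and the same-class assumption below are illustrative.

```python
# Mean-based augmentation: synthesize new training examples as arithmetic
# means of k randomly chosen signal samples, preserving shared signal features.
import numpy as np

def mean_augment(signals, k=3, n_new=100, rng=None):
    """signals: (N, T) array of same-class series; returns (n_new, T)."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.integers(0, len(signals), size=(n_new, k))
    return signals[idx].mean(axis=1)  # average k random samples per example

signals = np.random.randn(500, 2048)   # stand-in for signal windows
augmented = mean_augment(signals, k=3, n_new=200)
```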
|
2503.06166 | Li Li | Shawn Li, Peilin Cai, Yuxiao Zhou, Zhiyu Ni, Renjie Liang, You Qin, Yi
Nian, Zhengzhong Tu, Xiyang Hu, Yue Zhao | Secure On-Device Video OOD Detection Without Backpropagation | null | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Out-of-Distribution (OOD) detection is critical for ensuring the reliability
of machine learning models in safety-critical applications such as autonomous
driving and medical diagnosis. While deploying personalized OOD detection
directly on edge devices is desirable, it remains challenging due to large
model sizes and the computational infeasibility of on-device training.
Federated learning partially addresses this but still requires gradient
computation and backpropagation, exceeding the capabilities of many edge
devices. To overcome these challenges, we propose SecDOOD, a secure
cloud-device collaboration framework for efficient on-device OOD detection
without requiring device-side backpropagation. SecDOOD utilizes cloud resources
for model training while ensuring user data privacy by retaining sensitive
information on-device. Central to SecDOOD is a HyperNetwork-based personalized
parameter generation module, which adapts cloud-trained models to
device-specific distributions by dynamically generating local weight
adjustments, effectively combining central and local information without local
fine-tuning. Additionally, our dynamic feature sampling and encryption strategy
selectively encrypts only the most informative feature channels, largely
reducing encryption overhead without compromising detection performance.
Extensive experiments across multiple datasets and OOD scenarios demonstrate
that SecDOOD achieves performance comparable to fully fine-tuned models,
enabling secure, efficient, and personalized OOD detection on resource-limited
edge devices. To enhance accessibility and reproducibility, our code is
publicly available at https://github.com/Dystopians/SecDOOD.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 11:03:21 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 07:44:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Shawn",
""
],
[
"Cai",
"Peilin",
""
],
[
"Zhou",
"Yuxiao",
""
],
[
"Ni",
"Zhiyu",
""
],
[
"Liang",
"Renjie",
""
],
[
"Qin",
"You",
""
],
[
"Nian",
"Yi",
""
],
[
"Tu",
"Zhengzhong",
""
],
[
"Hu",
"Xiyang",
""
],
[
"Zhao",
"Yue",
""
]
] | TITLE: Secure On-Device Video OOD Detection Without Backpropagation
ABSTRACT: Out-of-Distribution (OOD) detection is critical for ensuring the reliability
of machine learning models in safety-critical applications such as autonomous
driving and medical diagnosis. While deploying personalized OOD detection
directly on edge devices is desirable, it remains challenging due to large
model sizes and the computational infeasibility of on-device training.
Federated learning partially addresses this but still requires gradient
computation and backpropagation, exceeding the capabilities of many edge
devices. To overcome these challenges, we propose SecDOOD, a secure
cloud-device collaboration framework for efficient on-device OOD detection
without requiring device-side backpropagation. SecDOOD utilizes cloud resources
for model training while ensuring user data privacy by retaining sensitive
information on-device. Central to SecDOOD is a HyperNetwork-based personalized
parameter generation module, which adapts cloud-trained models to
device-specific distributions by dynamically generating local weight
adjustments, effectively combining central and local information without local
fine-tuning. Additionally, our dynamic feature sampling and encryption strategy
selectively encrypts only the most informative feature channels, largely
reducing encryption overhead without compromising detection performance.
Extensive experiments across multiple datasets and OOD scenarios demonstrate
that SecDOOD achieves performance comparable to fully fine-tuned models,
enabling secure, efficient, and personalized OOD detection on resource-limited
edge devices. To enhance accessibility and reproducibility, our code is
publicly available at https://github.com/Dystopians/SecDOOD.
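One plausible reading of the dynamic feature sampling idea, with activation variance as a stand-in informativeness score; the actual selection criterion and cipher in SecDOOD are not specified here, so treat every name below as an assumption.

```python
# Rank feature channels by an informativeness proxy and encrypt only the
# top fraction, leaving the rest in plaintext to cut encryption overhead.
import numpy as np

def select_channels_to_encrypt(features, frac=0.25):
    """features: (N, C) activations; return indices of top-frac channels."""
    scores = features.var(axis=0)      # proxy for per-channel information
    k = max(1, int(frac * features.shape[1]))
    return np.argsort(scores)[-k:]

feats = np.random.randn(128, 512)
to_encrypt = select_channels_to_encrypt(feats, frac=0.25)
# feats[:, to_encrypt] would be passed through the (device-side) cipher.
```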
|
2503.06232 | Yirong Sun | Yanjun Chen, Yirong Sun, Xinghao Chen, Jian Wang, Xiaoyu Shen, Wenjie
Li, Wei Zhang | Integrating Chain-of-Thought for Multimodal Alignment: A Study on 3D
Vision-Language Learning | null | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chain-of-Thought (CoT) reasoning has proven effective in natural language
tasks but remains underexplored in multimodal alignment. This study
investigates its integration into 3D vision-language learning by embedding
structured reasoning into alignment training. We introduce the 3D-CoT
Benchmark, a dataset with hierarchical CoT annotations covering shape
recognition, functional inference, and causal reasoning. Through controlled
experiments, we compare CoT-structured and standard textual annotations across
large reasoning models (LRMs) and large language models (LLMs). Our evaluation
employs a dual-layer framework assessing both intermediate reasoning and final
inference quality. Extensive experiments demonstrate that CoT significantly
improves 3D semantic grounding, with LRMs leveraging CoT more effectively than
LLMs. Furthermore, we highlight that annotation structure influences
performance-explicit reasoning markers aid LLMs, while unmarked CoT better
aligns with LRM inference patterns. Our analyses suggest that CoT is crucial
for enhancing multimodal reasoning, with implications beyond 3D tasks. The
dataset will be publicly available at
https://huggingface.co/datasets/Battam/3D-CoT
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 14:24:54 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 09:59:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Yanjun",
""
],
[
"Sun",
"Yirong",
""
],
[
"Chen",
"Xinghao",
""
],
[
"Wang",
"Jian",
""
],
[
"Shen",
"Xiaoyu",
""
],
[
"Li",
"Wenjie",
""
],
[
"Zhang",
"Wei",
""
]
] | TITLE: Integrating Chain-of-Thought for Multimodal Alignment: A Study on 3D
Vision-Language Learning
ABSTRACT: Chain-of-Thought (CoT) reasoning has proven effective in natural language
tasks but remains underexplored in multimodal alignment. This study
investigates its integration into 3D vision-language learning by embedding
structured reasoning into alignment training. We introduce the 3D-CoT
Benchmark, a dataset with hierarchical CoT annotations covering shape
recognition, functional inference, and causal reasoning. Through controlled
experiments, we compare CoT-structured and standard textual annotations across
large reasoning models (LRMs) and large language models (LLMs). Our evaluation
employs a dual-layer framework assessing both intermediate reasoning and final
inference quality. Extensive experiments demonstrate that CoT significantly
improves 3D semantic grounding, with LRMs leveraging CoT more effectively than
LLMs. Furthermore, we highlight that annotation structure influences
performance: explicit reasoning markers aid LLMs, while unmarked CoT better
aligns with LRM inference patterns. Our analyses suggest that CoT is crucial
for enhancing multimodal reasoning, with implications beyond 3D tasks. The
dataset will be publicly available at
https://huggingface.co/datasets/Battam/3D-CoT
|
2503.06277 | Siyi Du | Siyi Du, Xinzhe Luo, Declan P. O'Regan, Chen Qin | STiL: Semi-supervised Tabular-Image Learning for Comprehensive
Task-Relevant Information Exploration in Multimodal Classification | 16 pages (including 5 pages of supplementary materials), accepted by
CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal image-tabular learning is gaining attention, yet it faces
challenges due to limited labeled data. While earlier work has applied
self-supervised learning (SSL) to unlabeled data, its task-agnostic nature
often results in learning suboptimal features for downstream tasks.
Semi-supervised learning (SemiSL), which combines labeled and unlabeled data,
offers a promising solution. However, existing multimodal SemiSL methods
typically focus on unimodal or modality-shared features, ignoring valuable
task-relevant modality-specific information, leading to a Modality Information
Gap. In this paper, we propose STiL, a novel SemiSL tabular-image framework
that addresses this gap by comprehensively exploring task-relevant information.
STiL features a new disentangled contrastive consistency module to learn
cross-modal invariant representations of shared information while retaining
modality-specific information via disentanglement. We also propose a novel
consensus-guided pseudo-labeling strategy to generate reliable pseudo-labels
based on classifier consensus, along with a new prototype-guided label
smoothing technique to refine pseudo-label quality with prototype embeddings,
thereby enhancing task-relevant information learning in unlabeled data.
Experiments on natural and medical image datasets show that STiL outperforms
the state-of-the-art supervised/SSL/SemiSL image/multimodal approaches. Our
code is available at https://github.com/siyi-wind/STiL.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 16:51:45 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 18:40:36 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 15:31:28 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Du",
"Siyi",
""
],
[
"Luo",
"Xinzhe",
""
],
[
"O'Regan",
"Declan P.",
""
],
[
"Qin",
"Chen",
""
]
] | TITLE: STiL: Semi-supervised Tabular-Image Learning for Comprehensive
Task-Relevant Information Exploration in Multimodal Classification
ABSTRACT: Multimodal image-tabular learning is gaining attention, yet it faces
challenges due to limited labeled data. While earlier work has applied
self-supervised learning (SSL) to unlabeled data, its task-agnostic nature
often results in learning suboptimal features for downstream tasks.
Semi-supervised learning (SemiSL), which combines labeled and unlabeled data,
offers a promising solution. However, existing multimodal SemiSL methods
typically focus on unimodal or modality-shared features, ignoring valuable
task-relevant modality-specific information, leading to a Modality Information
Gap. In this paper, we propose STiL, a novel SemiSL tabular-image framework
that addresses this gap by comprehensively exploring task-relevant information.
STiL features a new disentangled contrastive consistency module to learn
cross-modal invariant representations of shared information while retaining
modality-specific information via disentanglement. We also propose a novel
consensus-guided pseudo-labeling strategy to generate reliable pseudo-labels
based on classifier consensus, along with a new prototype-guided label
smoothing technique to refine pseudo-label quality with prototype embeddings,
thereby enhancing task-relevant information learning in unlabeled data.
Experiments on natural and medical image datasets show that STiL outperforms
the state-of-the-art supervised/SSL/SemiSL image/multimodal approaches. Our
code is available at https://github.com/siyi-wind/STiL.
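A simplified sketch of consensus-guided pseudo-labeling: an unlabeled sample receives a pseudo-label only when both modality classifiers agree and are confident. The confidence threshold is an assumption, and the prototype-guided smoothing step is omitted.

```python
# Keep an unlabeled sample only when the image and tabular heads agree
# on the class and both are confident.
import torch

def consensus_pseudo_labels(logits_img, logits_tab, conf_thr=0.9):
    """Return (indices, labels) for samples both heads agree on."""
    p_img = logits_img.softmax(dim=1)
    p_tab = logits_tab.softmax(dim=1)
    conf_i, y_i = p_img.max(dim=1)
    conf_t, y_t = p_tab.max(dim=1)
    keep = (y_i == y_t) & (conf_i > conf_thr) & (conf_t > conf_thr)
    return keep.nonzero(as_tuple=True)[0], y_i[keep]

li, lt = torch.randn(64, 10), torch.randn(64, 10)
idx, labels = consensus_pseudo_labels(li, lt)
```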
|
2503.06327 | Altaf Allah Abbassi | Altaf Allah Abbassi, Leuson Da Silva, Amin Nikanjam, Foutse Khomh | Unveiling Inefficiencies in LLM-Generated Code: Toward a Comprehensive
Taxonomy | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are widely adopted for automated code generation
with promising results. Although prior research has assessed LLM-generated code
and identified various quality issues -- such as redundancy, poor
maintainability, and sub-optimal performance -- a systematic understanding and
categorization of these inefficiencies remain unexplored. Without such
knowledge, practitioners struggle to optimize LLM-generated code for real-world
applications, limiting its adoption. This study can also guide improving code
LLMs, enhancing the quality and efficiency of code generation. Therefore, in
this study, we empirically investigate inefficiencies in LLM-generated code by
state-of-the-art models, i.e., CodeLlama, DeepSeek-Coder, and CodeGemma. To do
so, we analyze 492 generated code snippets in the HumanEval++ dataset. We then
construct a taxonomy of inefficiencies in LLM-generated code that includes 5
categories (General Logic, Performance, Readability, Maintainability, and
Errors) and 19 subcategories of inefficiencies. We then validate the proposed
taxonomy through an online survey with 58 LLM practitioners and researchers.
Our study indicates that logic and performance-related inefficiencies are the
most prevalent and relevant, frequently co-occur, and degrade overall code
quality. Our taxonomy provides a structured basis for evaluating the quality of
LLM-generated code and guiding future research to improve code generation
efficiency.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 19:51:52 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 03:59:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Abbassi",
"Altaf Allah",
""
],
[
"Da Silva",
"Leuson",
""
],
[
"Nikanjam",
"Amin",
""
],
[
"Khomh",
"Foutse",
""
]
] | TITLE: Unveiling Inefficiencies in LLM-Generated Code: Toward a Comprehensive
Taxonomy
ABSTRACT: Large Language Models (LLMs) are widely adopted for automated code generation
with promising results. Although prior research has assessed LLM-generated code
and identified various quality issues -- such as redundancy, poor
maintainability, and sub-optimal performance -- a systematic understanding and
categorization of these inefficiencies remain unexplored. Without such
knowledge, practitioners struggle to optimize LLM-generated code for real-world
applications, limiting its adoption. This study can also guide improving code
LLMs, enhancing the quality and efficiency of code generation. Therefore, in
this study, we empirically investigate inefficiencies in LLM-generated code by
state-of-the-art models, i.e., CodeLlama, DeepSeek-Coder, and CodeGemma. To do
so, we analyze 492 generated code snippets in the HumanEval++ dataset. We then
construct a taxonomy of inefficiencies in LLM-generated code that includes 5
categories (General Logic, Performance, Readability, Maintainability, and
Errors) and 19 subcategories of inefficiencies. We then validate the proposed
taxonomy through an online survey with 58 LLM practitioners and researchers.
Our study indicates that logic and performance-related inefficiencies are the
most prevalent and relevant, frequently co-occur, and degrade overall code
quality. Our taxonomy provides a structured basis for evaluating the quality of
LLM-generated code and guiding future research to improve code generation
efficiency.
|
2503.06499 | Xukun Zhou | Xukun Zhou, Fengxin Li, Ming Chen, Yan Zhou, Pengfei Wan, Di Zhang,
Yeying Jin, Zhaoxin Fan, Hongyan Liu, Jun He | ExGes: Expressive Human Motion Retrieval and Modulation for Audio-Driven
Gesture Synthesis | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Audio-driven human gesture synthesis is a crucial task with broad
applications in virtual avatars, human-computer interaction, and creative
content generation. Despite notable progress, existing methods often produce
gestures that are coarse, lack expressiveness, and fail to fully align with
audio semantics. To address these challenges, we propose ExGes, a novel
retrieval-enhanced diffusion framework with three key designs: (1) a Motion
Base Construction, which builds a gesture library from the training dataset;
(2) a Motion Retrieval Module, employing contrastive learning and momentum
distillation for fine-grained reference pose retrieval; and (3) a Precision
Control Module, integrating partial masking and stochastic masking to enable
flexible and fine-grained control. Experimental evaluations on BEAT2
demonstrate that ExGes reduces Fr\'echet Gesture Distance by 6.2\% and improves
motion diversity by 5.3\% over EMAGE, with user studies revealing a 71.3\%
preference for its naturalness and semantic relevance. Code will be released
upon acceptance.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 07:59:39 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 04:31:47 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhou",
"Xukun",
""
],
[
"Li",
"Fengxin",
""
],
[
"Chen",
"Ming",
""
],
[
"Zhou",
"Yan",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zhang",
"Di",
""
],
[
"Jin",
"Yeying",
""
],
[
"Fan",
"Zhaoxin",
""
],
[
"Liu",
"Hongyan",
""
],
[
"He",
"Jun",
""
]
] | TITLE: ExGes: Expressive Human Motion Retrieval and Modulation for Audio-Driven
Gesture Synthesis
ABSTRACT: Audio-driven human gesture synthesis is a crucial task with broad
applications in virtual avatars, human-computer interaction, and creative
content generation. Despite notable progress, existing methods often produce
gestures that are coarse, lack expressiveness, and fail to fully align with
audio semantics. To address these challenges, we propose ExGes, a novel
retrieval-enhanced diffusion framework with three key designs: (1) a Motion
Base Construction, which builds a gesture library from the training dataset;
(2) a Motion Retrieval Module, employing contrastive learning and momentum
distillation for fine-grained reference pose retrieval; and (3) a Precision
Control Module, integrating partial masking and stochastic masking to enable
flexible and fine-grained control. Experimental evaluations on BEAT2
demonstrate that ExGes reduces Fr\'echet Gesture Distance by 6.2\% and improves
motion diversity by 5.3\% over EMAGE, with user studies revealing a 71.3\%
preference for its naturalness and semantic relevance. Code will be released
upon acceptance.
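The retrieval module can be pictured as embedding-space nearest-neighbor lookup: embed the audio query and rank gesture-library clips by cosine similarity. The encoders are abstracted away below; this sketches the retrieval step only, not ExGes's trained modules.

```python
# Retrieve the top-k reference motion clips for an audio query by cosine
# similarity in a shared (contrastively trained) embedding space.
import torch

def retrieve_references(audio_emb, motion_embs, k=4):
    sims = torch.nn.functional.cosine_similarity(
        audio_emb.unsqueeze(0), motion_embs, dim=1)
    return sims.topk(k).indices       # indices into the gesture library

library = torch.randn(1000, 256)      # embeddings of library motion clips
query = torch.randn(256)              # embedding of the audio segment
refs = retrieve_references(query, library)
```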
|
2503.07435 | Riccardo Mazzieri | Riccardo Mazzieri, Jacopo Pegoraro, Michele Rossi | Open-Set Gait Recognition from Sparse mmWave Radar Point Clouds | null | null | null | null | cs.CV eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The adoption of Millimeter-Wave (mmWave) radar devices for human sensing,
particularly gait recognition, has recently garnered significant attention due
to their efficiency, resilience to environmental conditions, and
privacy-preserving nature. In this work, we tackle the challenging problem of
Open-set Gait Recognition (OSGR) from sparse mmWave radar point clouds. Unlike
most existing research, which assumes a closed-set scenario, our work considers
the more realistic open-set case, where unknown subjects might be present at
inference time, and should be correctly recognized by the system. Point clouds
are well-suited for edge computing applications with resource constraints, but
are more significantly affected by noise and random fluctuations than other
representations, like the more common micro-Doppler signature. This is the
first work addressing open-set gait recognition with sparse point cloud data.
To do so, we propose a novel neural network architecture that combines
supervised classification with unsupervised reconstruction of the point clouds,
creating a robust, rich, and highly regularized latent space of gait features.
To detect unknown subjects at inference time, we introduce a probabilistic
novelty detection algorithm that leverages the structured latent space and
offers a tunable trade-off between inference speed and prediction accuracy.
Along with this paper, we release mmGait10, an original human gait dataset
featuring over five hours of measurements from ten subjects, under varied
walking modalities. Extensive experimental results show that our solution
attains F1-Score improvements by 24% over state-of-the-art methods, on average,
and across multiple openness levels.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:18:10 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 11:06:08 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mazzieri",
"Riccardo",
""
],
[
"Pegoraro",
"Jacopo",
""
],
[
"Rossi",
"Michele",
""
]
] | TITLE: Open-Set Gait Recognition from Sparse mmWave Radar Point Clouds
ABSTRACT: The adoption of Millimeter-Wave (mmWave) radar devices for human sensing,
particularly gait recognition, has recently garnered significant attention due
to their efficiency, resilience to environmental conditions, and
privacy-preserving nature. In this work, we tackle the challenging problem of
Open-set Gait Recognition (OSGR) from sparse mmWave radar point clouds. Unlike
most existing research, which assumes a closed-set scenario, our work considers
the more realistic open-set case, where unknown subjects might be present at
inference time, and should be correctly recognized by the system. Point clouds
are well-suited for edge computing applications with resource constraints, but
are more significantly affected by noise and random fluctuations than other
representations, like the more common micro-Doppler signature. This is the
first work addressing open-set gait recognition with sparse point cloud data.
To do so, we propose a novel neural network architecture that combines
supervised classification with unsupervised reconstruction of the point clouds,
creating a robust, rich, and highly regularized latent space of gait features.
To detect unknown subjects at inference time, we introduce a probabilistic
novelty detection algorithm that leverages the structured latent space and
offers a tunable trade-off between inference speed and prediction accuracy.
Along with this paper, we release mmGait10, an original human gait dataset
featuring over five hours of measurements from ten subjects, under varied
walking modalities. Extensive experimental results show that our solution
attains F1-Score improvements by 24% over state-of-the-art methods, on average,
and across multiple openness levels.
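One plausible shape for the probabilistic novelty test: fit a Gaussian per known subject in the latent space and reject embeddings whose best log-likelihood falls below a threshold. The Gaussian model and threshold below are assumptions, not the paper's exact algorithm.

```python
# Open-set decision in a structured latent space: pick the best-matching
# known subject, or return -1 ("unknown") if even the best fit is unlikely.
import numpy as np
from scipy.stats import multivariate_normal

def fit_subjects(latents_by_subject):
    return [multivariate_normal(z.mean(0),
                                np.cov(z.T) + 1e-3 * np.eye(z.shape[1]))
            for z in latents_by_subject]

def classify_open_set(z, models, reject_logpdf=-50.0):
    scores = [m.logpdf(z) for m in models]
    best = int(np.argmax(scores))
    return best if scores[best] >= reject_logpdf else -1

train = [np.random.randn(200, 8) + i for i in range(3)]  # toy subject sets
models = fit_subjects(train)
print(classify_open_set(np.random.randn(8) + 1, models))
```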
|
2503.07638 | Martin Kuhn | Martin Kuhn, Joscha Gr\"uger, Tobias Geyer, Ralph Bergmann | Leveraging Taxonomy Similarity for Next Activity Prediction in Patient
Treatment | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid progress in modern medicine presents physicians with complex
challenges when planning patient treatment. Techniques from the field of
Predictive Business Process Monitoring, like Next-activity-prediction (NAP),
can serve as promising tools to support physicians in treatment planning by
proposing a possible next treatment step. Existing patient data, often in
the form of electronic health records, can be analyzed to recommend the next
suitable step in the treatment process. However, the use of patient data poses
many challenges due to its knowledge-intensive character, high variability and
scarcity of medical data. To overcome these challenges, this article examines
the use of the knowledge encoded in taxonomies to improve and explain the
prediction of the next activity in the treatment process. This study proposes
the TS4NAP approach, which uses medical taxonomies (ICD-10-CM and ICD-10-PCS)
in combination with graph matching to assess the similarities of medical codes
to predict the next treatment step. The effectiveness of the proposed approach
will be evaluated using event logs that are derived from the MIMIC-IV dataset.
The results highlight the potential of using domain-specific knowledge held in
taxonomies to improve the prediction of the next activity, which in turn can improve
treatment planning and decision-making by making the predictions more
explainable.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 08:19:17 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:52:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kuhn",
"Martin",
""
],
[
"Grüger",
"Joscha",
""
],
[
"Geyer",
"Tobias",
""
],
[
"Bergmann",
"Ralph",
""
]
] | TITLE: Leveraging Taxonomy Similarity for Next Activity Prediction in Patient
Treatment
ABSTRACT: The rapid progress in modern medicine presents physicians with complex
challenges when planning patient treatment. Techniques from the field of
Predictive Business Process Monitoring, like Next-activity-prediction (NAP),
can serve as promising tools to support physicians in treatment planning by
proposing a possible next treatment step. Existing patient data, often in
the form of electronic health records, can be analyzed to recommend the next
suitable step in the treatment process. However, the use of patient data poses
many challenges due to its knowledge-intensive character, high variability and
scarcity of medical data. To overcome these challenges, this article examines
the use of the knowledge encoded in taxonomies to improve and explain the
prediction of the next activity in the treatment process. This study proposes
the TS4NAP approach, which uses medical taxonomies (ICD-10-CM and ICD-10-PCS)
in combination with graph matching to assess the similarities of medical codes
to predict the next treatment step. The effectiveness of the proposed approach
will be evaluated using event logs that are derived from the MIMIC-IV dataset.
The results highlight the potential of using domain-specific knowledge held in
taxonomies to improve the prediction of the next activity, which in turn can improve
treatment planning and decision-making by making the predictions more
explainable.
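A toy version of taxonomy-based code similarity, assuming the hierarchical prefix structure of ICD-style codes approximates the taxonomy tree; TS4NAP's graph matching is considerably richer than this prefix heuristic.

```python
# Similarity of two taxonomy codes from the depth of their lowest common
# ancestor, approximated here by the shared character prefix.
def lca_depth(a: str, b: str) -> int:
    """Depth of the longest common prefix, one character per level."""
    d = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        d += 1
    return d

def taxonomy_similarity(a: str, b: str) -> float:
    """Wu-Palmer-style score in [0, 1] from shared prefix depth."""
    shared = lca_depth(a, b)
    return 2.0 * shared / (len(a) + len(b))

print(taxonomy_similarity("I21.4", "I21.9"))  # close ICD-10 codes -> high
print(taxonomy_similarity("I21.4", "J45.0"))  # distant chapters   -> low
```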
|
2503.07650 | Sara Alkhalifa | Sara Alkhalifa | Insights into Schizophrenia: Leveraging Machine Learning for Early
Identification via EEG, ERP, and Demographic Attributes | 12 pages, 6 figures and 2 tables | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The research presents a machine learning (ML) classifier designed to
differentiate between schizophrenia patients and healthy controls by utilising
features extracted from electroencephalogram (EEG) data, specifically focusing
on event-related potentials (ERPs) and certain demographic variables. The
dataset comprises data from 81 participants, encompassing 32 healthy controls
and 49 schizophrenia patients, all sourced from an online dataset. After
preprocessing the dataset, our ML model achieved an accuracy of 99.930%. This
performance surpasses that of earlier research, including studies that used deep
learning methods. Additionally, an analysis was conducted to assess individual
features' contribution to improving classification accuracy. This involved
systematically excluding specific features from the original dataset one at a
time; a second technique iteratively removed features based on their entropy
scores. The impact of these removals on
model performance was evaluated to identify the most informative features.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:42:25 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 09:36:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Alkhalifa",
"Sara",
""
]
] | TITLE: Insights into Schizophrenia: Leveraging Machine Learning for Early
Identification via EEG, ERP, and Demographic Attributes
ABSTRACT: The research presents a machine learning (ML) classifier designed to
differentiate between schizophrenia patients and healthy controls by utilising
features extracted from electroencephalogram (EEG) data, specifically focusing
on event-related potentials (ERPs) and certain demographic variables. The
dataset comprises data from 81 participants, encompassing 32 healthy controls
and 49 schizophrenia patients, all sourced from an online dataset. After
preprocessing the dataset, our ML model achieved an accuracy of 99.930%. This
performance surpasses that of earlier research, including studies that used deep
learning methods. Additionally, an analysis was conducted to assess individual
features' contribution to improving classification accuracy. This involved
systematically excluding specific features from the original dataset one at a
time; a second technique iteratively removed features based on their entropy
scores. The impact of these removals on
model performance was evaluated to identify the most informative features.
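The entropy-based ablation loop described above, in schematic form: repeatedly drop the lowest-entropy feature, re-fit, and track accuracy. The classifier, binning, and data below are generic stand-ins, not the study's pipeline.

```python
# Iterative feature removal guided by per-feature entropy scores.
import numpy as np
from scipy.stats import entropy
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def feature_entropies(X, bins=16):
    return np.array([entropy(np.histogram(X[:, j], bins=bins)[0] + 1)
                     for j in range(X.shape[1])])

def iterative_removal(X, y, steps=5):
    cols = list(range(X.shape[1]))
    for _ in range(steps):
        ent = feature_entropies(X[:, cols])
        cols.pop(int(np.argmin(ent)))   # drop the least-entropic feature
        acc = cross_val_score(RandomForestClassifier(n_estimators=50),
                              X[:, cols], y, cv=3).mean()
        print(len(cols), "features -> accuracy %.3f" % acc)

X, y = np.random.randn(81, 20), np.random.randint(0, 2, 81)
iterative_removal(X, y)
```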
|
2503.08043 | Deyi Ji | Deyi Ji, Feng Zhao, Hongtao Lu, Feng Wu, Jieping Ye | Structural and Statistical Texture Knowledge Distillation and Learning
for Segmentation | Accepted to TPAMI 2025. CVPR 2022 Version: arXiv:2305.03944. arXiv
admin note: text overlap with arXiv:2305.03944 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-level texture feature/knowledge is also of vital importance for
characterizing the local structural pattern and global statistical properties,
such as boundary, smoothness, regularity, and color contrast, which may not be
well addressed by high-level deep features. In this paper, we aim to
re-emphasize the low-level texture information in deep networks for semantic
segmentation and related knowledge distillation tasks. To this end, we take
full advantage of both structural and statistical texture knowledge and propose
a novel Structural and Statistical Texture Knowledge Distillation (SSTKD)
framework for semantic segmentation. Specifically, a Contourlet Decomposition
Module (CDM) is introduced to decompose the low-level features with an iterative
Laplacian pyramid and a directional filter bank to mine the structural texture
knowledge, and a Texture Intensity Equalization Module (TIEM) is designed to
extract and enhance the statistical texture knowledge with the corresponding
Quantization Congruence Loss (QDL). Moreover, we propose the Co-occurrence TIEM
(C-TIEM) and generic segmentation frameworks, namely STLNet++ and U-SSNet, to
enable existing segmentation networks to harvest the structural and statistical
texture information more effectively. Extensive experimental results on three
segmentation tasks demonstrate the effectiveness of the proposed methods and
their state-of-the-art performance on seven popular benchmark datasets.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 04:49:25 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 11:26:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ji",
"Deyi",
""
],
[
"Zhao",
"Feng",
""
],
[
"Lu",
"Hongtao",
""
],
[
"Wu",
"Feng",
""
],
[
"Ye",
"Jieping",
""
]
] | TITLE: Structural and Statistical Texture Knowledge Distillation and Learning
for Segmentation
ABSTRACT: Low-level texture feature/knowledge is also of vital importance for
characterizing the local structural pattern and global statistical properties,
such as boundary, smoothness, regularity, and color contrast, which may not be
well addressed by high-level deep features. In this paper, we aim to
re-emphasize the low-level texture information in deep networks for semantic
segmentation and related knowledge distillation tasks. To this end, we take
full advantage of both structural and statistical texture knowledge and propose
a novel Structural and Statistical Texture Knowledge Distillation (SSTKD)
framework for semantic segmentation. Specifically, a Contourlet Decomposition
Module (CDM) is introduced to decompose the low-level features with an iterative
Laplacian pyramid and a directional filter bank to mine the structural texture
knowledge, and a Texture Intensity Equalization Module (TIEM) is designed to
extract and enhance the statistical texture knowledge with the corresponding
Quantization Congruence Loss (QDL). Moreover, we propose the Co-occurrence TIEM
(C-TIEM) and generic segmentation frameworks, namely STLNet++ and U-SSNet, to
enable existing segmentation networks to harvest the structural and statistical
texture information more effectively. Extensive experimental results on three
segmentation tasks demonstrate the effectiveness of the proposed methods and
their state-of-the-art performance on seven popular benchmark datasets.
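At its simplest, the distillation component can be pictured as a
feature-matching objective between teacher and student maps. The PyTorch sketch
below is a generic stand-in under that assumption: the MSE form and the
bilinear resizing are not SSTKD's actual structural/statistical texture losses,
which the paper defines in full.

import torch
import torch.nn.functional as F

def texture_distillation_loss(student_feat, teacher_feat):
    # Align the student's low-level feature maps with the (frozen) teacher's.
    if student_feat.shape[2:] != teacher_feat.shape[2:]:
        student_feat = F.interpolate(student_feat, size=teacher_feat.shape[2:],
                                     mode="bilinear", align_corners=False)
    return F.mse_loss(student_feat, teacher_feat.detach())

student = torch.randn(2, 64, 32, 32, requires_grad=True)
teacher = torch.randn(2, 64, 64, 64)
loss = texture_distillation_loss(student, teacher)
loss.backward()   # gradients flow to the student only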
|
2503.08121 | Thanh Nhat Huy Nguyen | Huy Nguyen, Kien Nguyen, Akila Pemasiri, Feng Liu, Sridha Sridharan,
Clinton Fookes | AG-VPReID: A Challenging Large-Scale Benchmark for Aerial-Ground
Video-based Person Re-Identification | Accepted at Computer Vision and Pattern Recognition Conference (CVPR)
2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce AG-VPReID, a new large-scale dataset for aerial-ground
video-based person re-identification (ReID) that comprises 6,632 subjects,
32,321 tracklets and over 9.6 million frames captured by drones (altitudes
ranging from 15 to 120m), CCTV, and wearable cameras. This dataset offers a
real-world benchmark for evaluating the robustness to significant viewpoint
changes, scale variations, and resolution differences in cross-platform
aerial-ground settings. In addition, to address these challenges, we propose
AG-VPReID-Net, an end-to-end framework composed of three complementary streams:
(1) an Adapted Temporal-Spatial Stream addressing motion pattern
inconsistencies and facilitating temporal feature learning, (2) a Normalized
Appearance Stream leveraging physics-informed techniques to tackle resolution
and appearance changes, and (3) a Multi-Scale Attention Stream handling scale
variations across drone altitudes. We integrate visual-semantic cues from all
streams to form a robust, viewpoint-invariant whole-body representation.
Extensive experiments demonstrate that AG-VPReID-Net outperforms
state-of-the-art approaches on both our new dataset and existing video-based
ReID benchmarks, showcasing its effectiveness and generalizability.
Nevertheless, the performance gap observed on AG-VPReID across all methods
underscores the dataset's challenging nature. The dataset, code and trained
models are available at https://github.com/agvpreid25/AG-VPReID-Net.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:38:01 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 01:07:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Nguyen",
"Huy",
""
],
[
"Nguyen",
"Kien",
""
],
[
"Pemasiri",
"Akila",
""
],
[
"Liu",
"Feng",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
]
] | TITLE: AG-VPReID: A Challenging Large-Scale Benchmark for Aerial-Ground
Video-based Person Re-Identification
ABSTRACT: We introduce AG-VPReID, a new large-scale dataset for aerial-ground
video-based person re-identification (ReID) that comprises 6,632 subjects,
32,321 tracklets and over 9.6 million frames captured by drones (altitudes
ranging from 15 to 120m), CCTV, and wearable cameras. This dataset offers a
real-world benchmark for evaluating the robustness to significant viewpoint
changes, scale variations, and resolution differences in cross-platform
aerial-ground settings. In addition, to address these challenges, we propose
AG-VPReID-Net, an end-to-end framework composed of three complementary streams:
(1) an Adapted Temporal-Spatial Stream addressing motion pattern
inconsistencies and facilitating temporal feature learning, (2) a Normalized
Appearance Stream leveraging physics-informed techniques to tackle resolution
and appearance changes, and (3) a Multi-Scale Attention Stream handling scale
variations across drone altitudes. We integrate visual-semantic cues from all
streams to form a robust, viewpoint-invariant whole-body representation.
Extensive experiments demonstrate that AG-VPReID-Net outperforms
state-of-the-art approaches on both our new dataset and existing video-based
ReID benchmarks, showcasing its effectiveness and generalizability.
Nevertheless, the performance gap observed on AG-VPReID across all methods
underscores the dataset's challenging nature. The dataset, code and trained
models are available at https://github.com/agvpreid25/AG-VPReID-Net.
|
2503.08154 | Tian Jin | Tian Jin, Enjun Du, Changwei Wang, Wenhao Xu, Ding Luo | Structure-Activation Synergy: A Dual Efficiency Framework for
Parameter-Memory Optimized Transfer Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While parameter-efficient transfer learning (PETL) successfully reduces
trainable parameters for adapting large pre-trained models, conventional
methods exhibit limited effectiveness in decreasing activation memory
consumption - a critical bottleneck for deployment on resource-constrained
devices. We present Structure-Activation Synergy (S2A), an innovative framework
achieving dual optimization of parameters and memory through two synergistic
mechanisms: (1) Structural activation modules (bias/prompt/side adaptations)
that strategically minimize both parametric complexity and intermediate feature
storage requirements, and (2) Derivative-aware 4-bit quantization for
non-parametric operators that maintains model fidelity through
gradient-informed precision allocation. Extensive evaluations across multiple
architectures (ViT, Swin, ResNet) and datasets (ImageNet-1K, CIFAR, DomainNet)
demonstrate S2A's superior efficiency, reducing GPU memory consumption by 75\%
(a 4.2x reduction on average) while maintaining 98.7\% of full fine-tuning
accuracy with only 0.9\% tunable parameters. This hardware-aware paradigm
establishes a new state of the art in efficient model adaptation, offering
practical deployment advantages through simultaneous parameter and memory
optimization without compromising model capability.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:10:03 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 16:50:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jin",
"Tian",
""
],
[
"Du",
"Enjun",
""
],
[
"Wang",
"Changwei",
""
],
[
"Xu",
"Wenhao",
""
],
[
"Luo",
"Ding",
""
]
] | TITLE: Structure-Activation Synergy: A Dual Efficiency Framework for
Parameter-Memory Optimized Transfer Learning
ABSTRACT: While parameter-efficient transfer learning (PETL) successfully reduces
trainable parameters for adapting large pre-trained models, conventional
methods exhibit limited effectiveness in decreasing activation memory
consumption - a critical bottleneck for deployment on resource-constrained
devices. We present Structure-Activation Synergy (S2A), an innovative framework
achieving dual optimization of parameters and memory through two synergistic
mechanisms: (1) Structural activation modules (bias/prompt/side adaptations)
that strategically minimize both parametric complexity and intermediate feature
storage requirements, and (2) Derivative-aware 4-bit quantization for
non-parametric operators that maintains model fidelity through
gradient-informed precision allocation. Extensive evaluations across multiple
architectures (ViT, Swin, ResNet) and datasets (ImageNet-1K, CIFAR, DomainNet)
demonstrate S2A's superior efficiency, reducing GPU memory consumption by 75\%
(a 4.2x reduction on average) while maintaining 98.7\% of full fine-tuning
accuracy with only 0.9\% tunable parameters. This hardware-aware paradigm
establishes a new state of the art in efficient model adaptation, offering
practical deployment advantages through simultaneous parameter and memory
optimization without compromising model capability.
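The memory savings rest on a 4-bit round-trip over activations. Below is a
minimal uniform symmetric quantizer in NumPy; S2A's derivative-aware precision
allocation is more involved, so the scheme and names here are illustrative
assumptions only.

import numpy as np

def quantize_4bit(x):
    # Uniform symmetric 4-bit quantization: 16 integer levels in [-8, 7].
    scale = np.abs(x).max() / 7.0 + 1e-12
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

acts = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_4bit(acts)
print("mean abs error:", np.abs(acts - dequantize(q, s)).mean())

Storing q (4 useful bits per value) instead of float32 activations is where the
intermediate-feature memory reduction comes from.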
|
2503.08257 | Yiming Zhong | Yiming Zhong, Qi Jiang, Jingyi Yu, Yuexin Ma | DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with
Physics Awareness | Accepted by CVPR 2025 | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A dexterous hand capable of grasping any object is essential for the
development of general-purpose embodied intelligent robots. However, due to the
high number of degrees of freedom in dexterous hands and the vast diversity of objects,
generating high-quality, usable grasping poses in a robust manner is a
significant challenge. In this paper, we introduce DexGrasp Anything, a method
that effectively integrates physical constraints into both the training and
sampling phases of a diffusion-based generative model, achieving
state-of-the-art performance across nearly all open datasets. Additionally, we
present a new dexterous grasping dataset containing over 3.4 million diverse
grasping poses for more than 15k different objects, demonstrating its potential
to advance universal dexterous grasping. The code of our method and our dataset
will be publicly released soon.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 10:21:50 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 13:05:46 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhong",
"Yiming",
""
],
[
"Jiang",
"Qi",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Ma",
"Yuexin",
""
]
] | TITLE: DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with
Physics Awareness
ABSTRACT: A dexterous hand capable of grasping any object is essential for the
development of general-purpose embodied intelligent robots. However, due to the
high number of degrees of freedom in dexterous hands and the vast diversity of objects,
generating high-quality, usable grasping poses in a robust manner is a
significant challenge. In this paper, we introduce DexGrasp Anything, a method
that effectively integrates physical constraints into both the training and
sampling phases of a diffusion-based generative model, achieving
state-of-the-art performance across nearly all open datasets. Additionally, we
present a new dexterous grasping dataset containing over 3.4 million diverse
grasping poses for more than 15k different objects, demonstrating its potential
to advance universal dexterous grasping. The code of our method and our dataset
will be publicly released soon.
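Conceptually, integrating physical constraints into the sampling phase means
adding the gradient of a physics cost to each denoising update. The toy sketch
below assumes a low-dimensional "pose" vector, a placeholder denoiser, and an
invented penetration cost; it shows the guidance pattern only, not the paper's
model.

import numpy as np

def penetration_cost(pose):
    # Toy physics cost: penalize "fingertip" coordinates below a surface z=0.
    return np.sum(np.minimum(pose, 0.0) ** 2)

def cost_grad(pose, eps=1e-3):
    # Finite-difference gradient of the physics cost.
    g = np.zeros_like(pose)
    for i in range(pose.size):
        d = np.zeros_like(pose)
        d.flat[i] = eps
        g.flat[i] = (penetration_cost(pose + d) - penetration_cost(pose - d)) / (2 * eps)
    return g

def guided_step(pose, denoiser, step=0.1, guide=0.05):
    # One sampling step: denoiser update plus a physics-guidance correction.
    return pose + step * denoiser(pose) - guide * cost_grad(pose)

pose = np.random.randn(6)          # illustrative low-dimensional "grasp pose"
denoiser = lambda p: -p            # placeholder for the learned denoiser
for _ in range(10):
    pose = guided_step(pose, denoiser)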
|
2503.08600 | Delip Rao | Delip Rao, Weiqiu You, Eric Wong, Chris Callison-Burch | NSF-SciFy: Mining the NSF Awards Database for Scientific Claims | 11 pages, 3 figures, 6 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | We present NSF-SciFy, a large-scale dataset for scientific claim extraction
derived from the National Science Foundation (NSF) awards database, comprising
over 400K grant abstracts spanning five decades. While previous datasets relied
on published literature, we leverage grant abstracts which offer a unique
advantage: they capture claims at an earlier stage in the research lifecycle
before publication. We also introduce a new task to distinguish
between existing scientific claims and aspirational research intentions in
proposals. Using zero-shot prompting with frontier large language models, we
jointly extract 114K scientific claims and 145K investigation proposals from
16K grant abstracts in the materials science domain to create a focused subset
called NSF-SciFy-MatSci. We use this dataset to evaluate three key tasks: (1)
technical to non-technical abstract generation, where models achieve high
BERTScore (0.85+ F1); (2) scientific claim extraction, where fine-tuned models
outperform base models by 100% relative improvement; and (3) investigation
proposal extraction, showing 90%+ improvement with fine-tuning. We introduce
novel LLM-based evaluation metrics for robust assessment of claim/proposal
extraction quality. As the largest scientific claim dataset to date -- with an
estimated 2.8 million claims across all STEM disciplines funded by the NSF --
NSF-SciFy enables new opportunities for claim verification and meta-scientific
research. We publicly release all datasets, trained models, and evaluation code
to facilitate further research.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 16:35:08 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 21:25:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Rao",
"Delip",
""
],
[
"You",
"Weiqiu",
""
],
[
"Wong",
"Eric",
""
],
[
"Callison-Burch",
"Chris",
""
]
] | TITLE: NSF-SciFy: Mining the NSF Awards Database for Scientific Claims
ABSTRACT: We present NSF-SciFy, a large-scale dataset for scientific claim extraction
derived from the National Science Foundation (NSF) awards database, comprising
over 400K grant abstracts spanning five decades. While previous datasets relied
on published literature, we leverage grant abstracts which offer a unique
advantage: they capture claims at an earlier stage in the research lifecycle
before publication. We also introduce a new task to distinguish
between existing scientific claims and aspirational research intentions in
proposals. Using zero-shot prompting with frontier large language models, we
jointly extract 114K scientific claims and 145K investigation proposals from
16K grant abstracts in the materials science domain to create a focused subset
called NSF-SciFy-MatSci. We use this dataset to evaluate three key tasks: (1)
technical to non-technical abstract generation, where models achieve high
BERTScore (0.85+ F1); (2) scientific claim extraction, where fine-tuned models
outperform base models by 100% relative improvement; and (3) investigation
proposal extraction, showing 90%+ improvement with fine-tuning. We introduce
novel LLM-based evaluation metrics for robust assessment of claim/proposal
extraction quality. As the largest scientific claim dataset to date -- with an
estimated 2.8 million claims across all STEM disciplines funded by the NSF --
NSF-SciFy enables new opportunities for claim verification and meta-scientific
research. We publicly release all datasets, trained models, and evaluation code
to facilitate further research.
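A skeletal version of the zero-shot extraction step: one prompt asks a model to
separate established claims from aspirational proposals, and the response is
parsed as JSON. The prompt wording, the llm_call hook, and the stub response
are assumptions for illustration, not the paper's pipeline.

import json

PROMPT = """Read the following grant abstract. Return JSON with two lists:
"claims" (statements presented as established findings) and
"proposals" (aspirational research intentions).
Abstract: {abstract}"""

def extract(abstract, llm_call):
    # llm_call is any function mapping a prompt string to a model response.
    raw = llm_call(PROMPT.format(abstract=abstract))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"claims": [], "proposals": []}

# Stub "model" so the sketch runs end to end:
stub = lambda p: '{"claims": ["X improves Y"], "proposals": ["We will test Z"]}'
print(extract("We showed X improves Y. We will test Z.", stub))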
|
2503.08722 | Yehonathan Refael | Aviad Barzilai, Yotam Gigi, Amr Helmy, Vered Silverman, Yehonathan
Refael, Bolous Jaber, Tomer Shekel, George Leifman, Genady Beryozkin | A Recipe for Improving Remote Sensing VLM Zero Shot Generalization | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models have had a significant impact across various AI
applications, enabling use cases that were previously impossible. Contrastive
Visual Language Models (VLMs), in particular, have outperformed other
techniques in many tasks. However, their prevalence in remote sensing (RS) is
still limited, due to the scarcity of diverse remote-sensing visual-language
datasets. In this work we introduce two novel image-caption datasets for
training of remote sensing foundation models. The first dataset pairs aerial
and satellite imagery with captions generated by Gemini using landmarks
extracted from Google Maps. The second dataset utilizes public web images and
their corresponding alt-text, filtered for the remote sensing domain, resulting
in a diverse dataset with greater breadth in image styles and subject matter.
These datasets are used to pre-train the
MaMMUT~\citep{kuo2023mammutsimplearchitecturejoint} VLM architecture, resulting
in state-of-the-art generalization performance in zero-shot cross-modal
retrieval on well-known public benchmarks. Finally, we present our ongoing
research to distill image-level knowledge gained in the VLM contrastive
training procedure to enhance the model's localization ability. Specifically,
we iteratively generate pseudo-labels for image regions based on the model's
attention maps and use these labels for further training. To mitigate noisy
attention maps and create robust segmentation masks, we introduce a novel
attention-pooling mechanism called the Smooth-Attention-Operation.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:09:02 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:49:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Barzilai",
"Aviad",
""
],
[
"Gigi",
"Yotam",
""
],
[
"Helmy",
"Amr",
""
],
[
"Silverman",
"Vered",
""
],
[
"Refael",
"Yehonathan",
""
],
[
"Jaber",
"Bolous",
""
],
[
"Shekel",
"Tomer",
""
],
[
"Leifman",
"George",
""
],
[
"Beryozkin",
"Genady",
""
]
] | TITLE: A Recipe for Improving Remote Sensing VLM Zero Shot Generalization
ABSTRACT: Foundation models have had a significant impact across various AI
applications, enabling use cases that were previously impossible. Contrastive
Visual Language Models (VLMs), in particular, have outperformed other
techniques in many tasks. However, their prevalence in remote sensing (RS) is
still limited, due to the scarcity of diverse remote-sensing visual-language
datasets. In this work we introduce two novel image-caption datasets for
training of remote sensing foundation models. The first dataset pairs aerial
and satellite imagery with captions generated by Gemini using landmarks
extracted from Google Maps. The second dataset utilizes public web images and
their corresponding alt-text, filtered for the remote sensing domain, resulting
in a diverse dataset with greater breadth in image styles and subject matter.
These datasets are used to pre-train the
MaMMUT~\citep{kuo2023mammutsimplearchitecturejoint} VLM architecture, resulting
in state-of-the-art generalization performance in zero-shot cross-modal
retrieval on well-known public benchmarks. Finally, we present our ongoing
research to distill image-level knowledge gained in the VLM contrastive
training procedure to enhance the model's localization ability. Specifically,
we iteratively generate pseudo-labels for image regions based on the model's
attention maps and use these labels for further training. To mitigate noisy
attention maps and create robust segmentation masks, we introduce a novel
attention-pooling mechanism called the Smooth-Attention-Operation.
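The pseudo-labeling loop can be sketched as thresholding a smoothed attention
map. In the NumPy sketch below the uniform filter merely stands in for the
Smooth-Attention-Operation, whose exact form is not given here, and the
threshold is an assumed value.

import numpy as np
from scipy.ndimage import uniform_filter

def pseudo_label_from_attention(attn, threshold=0.6, smooth=3):
    # attn: (H, W) cross-modal attention map for one image region query.
    attn = uniform_filter(attn, size=smooth)                       # suppress noisy peaks
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)  # normalize to [0, 1]
    return (attn >= threshold).astype(np.uint8)                    # binary mask

attn = np.random.rand(32, 32)
mask = pseudo_label_from_attention(attn)
print("positive pixels:", int(mask.sum()))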
|
2503.09151 | Jong Chul Ye | Hyeonho Jeong, Suhyeon Lee, Jong Chul Ye | Reangle-A-Video: 4D Video Generation as Video-to-Video Translation | Project page: https://hyeonho99.github.io/reangle-a-video/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce Reangle-A-Video, a unified framework for generating synchronized
multi-view videos from a single input video. Unlike mainstream approaches that
train multi-view video diffusion models on large-scale 4D datasets, our method
reframes the multi-view video generation task as video-to-videos translation,
leveraging publicly available image and video diffusion priors. In essence,
Reangle-A-Video operates in two stages. (1) Multi-View Motion Learning: An
image-to-video diffusion transformer is synchronously fine-tuned in a
self-supervised manner to distill view-invariant motion from a set of warped
videos. (2) Multi-View Consistent Image-to-Images Translation: The first frame
of the input video is warped and inpainted into various camera perspectives
under an inference-time cross-view consistency guidance using DUSt3R,
generating multi-view consistent starting images. Extensive experiments on
static view transport and dynamic camera control show that Reangle-A-Video
surpasses existing methods, establishing a new solution for multi-view video
generation. We will publicly release our code and data. Project page:
https://hyeonho99.github.io/reangle-a-video/
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 08:26:15 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 13:01:59 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jeong",
"Hyeonho",
""
],
[
"Lee",
"Suhyeon",
""
],
[
"Ye",
"Jong Chul",
""
]
] | TITLE: Reangle-A-Video: 4D Video Generation as Video-to-Video Translation
ABSTRACT: We introduce Reangle-A-Video, a unified framework for generating synchronized
multi-view videos from a single input video. Unlike mainstream approaches that
train multi-view video diffusion models on large-scale 4D datasets, our method
reframes the multi-view video generation task as video-to-videos translation,
leveraging publicly available image and video diffusion priors. In essence,
Reangle-A-Video operates in two stages. (1) Multi-View Motion Learning: An
image-to-video diffusion transformer is synchronously fine-tuned in a
self-supervised manner to distill view-invariant motion from a set of warped
videos. (2) Multi-View Consistent Image-to-Images Translation: The first frame
of the input video is warped and inpainted into various camera perspectives
under an inference-time cross-view consistency guidance using DUSt3R,
generating multi-view consistent starting images. Extensive experiments on
static view transport and dynamic camera control show that Reangle-A-Video
surpasses existing methods, establishing a new solution for multi-view video
generation. We will publicly release our code and data. Project page:
https://hyeonho99.github.io/reangle-a-video/
|
2503.09215 | Jian Zhu | Jian Zhu, Zhengyu Jia, Tian Gao, Jiaxin Deng, Shidi Li, Fu Liu, Peng
Jia, Xianpeng Lang, Xiaolong Sun | Other Vehicle Trajectories Are Also Needed: A Driving World Model
Unifies Ego-Other Vehicle Trajectories in Video Latent Space | 8 pages, 7 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advanced end-to-end autonomous driving systems predict other vehicles'
motions and plan the ego vehicle's trajectory. The world model that can foresee the
outcome of the trajectory has been used to evaluate the end-to-end autonomous
driving system. However, existing world models predominantly emphasize the
trajectory of the ego vehicle and leave other vehicles uncontrollable. This
limitation hinders their ability to realistically simulate the interaction
between the ego vehicle and the driving scenario. In addition, it remains a
challenge to match multiple trajectories with each vehicle in the video to
control the video generation. To address above issues, a driving World Model
named EOT-WM is proposed in this paper, unifying Ego-Other vehicle Trajectories
in videos. Specifically, we first project ego and other vehicle trajectories in
the BEV space into the image coordinate to match each trajectory with its
corresponding vehicle in the video. Then, trajectory videos are encoded by the
Spatial-Temporal Variational Auto Encoder to align with driving video latents
spatially and temporally in the unified visual space. A trajectory-injected
diffusion Transformer is further designed to denoise the noisy video latents
for video generation with the guidance of ego-other vehicle trajectories. In
addition, we propose a metric based on control latent similarity to evaluate
the controllability of trajectories. Extensive experiments are conducted on the
nuScenes dataset, and the proposed model outperforms the state-of-the-art
method by 30% in FID and 55% in FVD. The model can also predict unseen driving
scenes with self-produced trajectories.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 10:02:18 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 08:07:46 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhu",
"Jian",
""
],
[
"Jia",
"Zhengyu",
""
],
[
"Gao",
"Tian",
""
],
[
"Deng",
"Jiaxin",
""
],
[
"Li",
"Shidi",
""
],
[
"Liu",
"Fu",
""
],
[
"Jia",
"Peng",
""
],
[
"Lang",
"Xianpeng",
""
],
[
"Sun",
"Xiaolong",
""
]
] | TITLE: Other Vehicle Trajectories Are Also Needed: A Driving World Model
Unifies Ego-Other Vehicle Trajectories in Video Latent Space
ABSTRACT: Advanced end-to-end autonomous driving systems predict other vehicles'
motions and plan the ego vehicle's trajectory. The world model that can foresee the
outcome of the trajectory has been used to evaluate the end-to-end autonomous
driving system. However, existing world models predominantly emphasize the
trajectory of the ego vehicle and leave other vehicles uncontrollable. This
limitation hinders their ability to realistically simulate the interaction
between the ego vehicle and the driving scenario. In addition, it remains a
challenge to match multiple trajectories with each vehicle in the video to
control the video generation. To address the above issues, a driving World Model
named EOT-WM is proposed in this paper, unifying Ego-Other vehicle Trajectories
in videos. Specifically, we first project ego and other vehicle trajectories in
the BEV space into image coordinates to match each trajectory with its
corresponding vehicle in the video. Then, trajectory videos are encoded by the
Spatial-Temporal Variational Auto Encoder to align with driving video latents
spatially and temporally in the unified visual space. A trajectory-injected
diffusion Transformer is further designed to denoise the noisy video latents
for video generation with the guidance of ego-other vehicle trajectories. In
addition, we propose a metric based on control latent similarity to evaluate
the controllability of trajectories. Extensive experiments are conducted on the
nuScenes dataset, and the proposed model outperforms the state-of-the-art
method by 30% in FID and 55% in FVD. The model can also predict unseen driving
scenes with self-produced trajectories.
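The first step, projecting BEV trajectories into the image plane, is standard
pinhole geometry. In the sketch below, the intrinsics K, the ego-to-camera
rotation, and the waypoints are illustrative placeholders rather than nuScenes
calibration.

import numpy as np

def bev_to_image(traj_bev, K, T_cam_from_ego):
    # traj_bev: (N, 2) waypoints (x forward, y left) on the ground plane z=0.
    n = traj_bev.shape[0]
    pts = np.concatenate([traj_bev, np.zeros((n, 1)), np.ones((n, 1))], axis=1)
    cam = (T_cam_from_ego @ pts.T).T[:, :3]    # ego frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]                 # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]              # perspective divide

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])
R = np.array([[0., -1., 0.],    # ego y-left -> camera x-right (negated)
              [0., 0., -1.],    # ego z-up   -> camera y-down (negated)
              [1., 0., 0.]])    # ego x-fwd  -> camera z-forward
T = np.eye(4)
T[:3, :3] = R
traj = np.array([[5., 0.], [10., 1.], [20., 2.]])
print(bev_to_image(traj, K, T))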
|
2503.09403 | Jiang Xu | Xu Jiang, Gehui Li, Bin Chen, Jian Zhang | Multi-Agent Image Restoration | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image restoration (IR) is challenging due to the complexity of real-world
degradations. While many specialized and all-in-one IR models have been
developed, they fail to effectively handle complex, mixed degradations. Recent
agentic methods such as RestoreAgent and AgenticIR leverage intelligent,
autonomous workflows to alleviate this issue, yet they suffer from suboptimal
results and inefficiency due to resource-intensive fine-tuning and ineffective
searches and tool-execution trials for satisfactory outputs. In this paper, we
propose MAIR, a novel Multi-Agent approach for complex IR problems. We
introduce a real-world degradation prior, categorizing degradations into three
types: (1) scene, (2) imaging, and (3) compression, which are observed to occur
sequentially in the real world, and reverse them in the opposite order. Built upon
this three-stage restoration framework, MAIR emulates a team of collaborative
human specialists, including a "scheduler" for overall planning and multiple
"experts" dedicated to specific degradations. This design minimizes search
space and trial efforts, improving image quality while reducing inference
costs. In addition, a registry mechanism is introduced to enable easy
integration of new tools. Experiments on both synthetic and real-world datasets
show that the proposed MAIR achieves competitive performance and improved
efficiency over previous agentic IR systems. Code and models will be made
available.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 13:53:57 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 07:34:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jiang",
"Xu",
""
],
[
"Li",
"Gehui",
""
],
[
"Chen",
"Bin",
""
],
[
"Zhang",
"Jian",
""
]
] | TITLE: Multi-Agent Image Restoration
ABSTRACT: Image restoration (IR) is challenging due to the complexity of real-world
degradations. While many specialized and all-in-one IR models have been
developed, they fail to effectively handle complex, mixed degradations. Recent
agentic methods such as RestoreAgent and AgenticIR leverage intelligent,
autonomous workflows to alleviate this issue, yet they suffer from suboptimal
results and inefficiency due to resource-intensive fine-tuning and ineffective
searches and tool-execution trials for satisfactory outputs. In this paper, we
propose MAIR, a novel Multi-Agent approach for complex IR problems. We
introduce a real-world degradation prior, categorizing degradations into three
types: (1) scene, (2) imaging, and (3) compression, which are observed to occur
sequentially in the real world, and reverse them in the opposite order. Built upon
this three-stage restoration framework, MAIR emulates a team of collaborative
human specialists, including a "scheduler" for overall planning and multiple
"experts" dedicated to specific degradations. This design minimizes search
space and trial efforts, improving image quality while reducing inference
costs. In addition, a registry mechanism is introduced to enable easy
integration of new tools. Experiments on both synthetic and real-world datasets
show that the proposed MAIR achieves competitive performance and improved
efficiency over previous agentic IR systems. Code and models will be made
available.
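The registry mechanism can be sketched as a decorator that maps degradation
types to expert tools, with a scheduler that applies them in the reversed
real-world order (compression first, then imaging, then scene). The function
names and string-based "image" below are toy assumptions.

TOOL_REGISTRY = {}

def register(degradation):
    # New restoration "experts" plug in without touching the scheduler.
    def wrap(fn):
        TOOL_REGISTRY[degradation] = fn
        return fn
    return wrap

@register("scene")
def derain(image): return image + " ->derained"

@register("imaging")
def denoise(image): return image + " ->denoised"

@register("compression")
def deblock(image): return image + " ->deblocked"

def scheduler(image, detected):
    # Reverse the assumed real-world order scene -> imaging -> compression.
    for stage in ("compression", "imaging", "scene"):
        if stage in detected:
            image = TOOL_REGISTRY[stage](image)
    return image

print(scheduler("img", {"scene", "compression"}))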
|
2503.09712 | Yuanmin Huang | Yuanmin Huang, Mi Zhang, Zhaoxiang Wang, Wenxuan Li, Min Yang | Revisiting Backdoor Attacks on Time Series Classification in the
Frequency Domain | WWW 2025 (Oral) | null | 10.1145/3696410.3714827 | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Time series classification (TSC) is a cornerstone of modern web applications,
powering tasks such as financial data analysis, network traffic monitoring, and
user behavior analysis. In recent years, deep neural networks (DNNs) have
greatly enhanced the performance of TSC models in these critical domains.
However, DNNs are vulnerable to backdoor attacks, where attackers can covertly
implant triggers into models to induce malicious outcomes. Existing backdoor
attacks targeting DNN-based TSC models remain elementary. In particular, early
methods borrow trigger designs from computer vision, which are ineffective for
time series data. More recent approaches utilize generative models for trigger
generation, but at the cost of significant computational complexity. In this
work, we analyze the limitations of existing attacks and introduce an enhanced
method, FreqBack. Drawing inspiration from the fact that DNN models inherently
capture frequency domain features in time series data, we identify that
improper perturbations in the frequency domain are the root cause of
ineffective attacks. To address this, we propose to generate triggers both
effectively and efficiently, guided by frequency analysis. FreqBack exhibits
strong performance across five models and eight datasets, achieving an
impressive attack success rate of over 90%, while maintaining less than a 3%
drop in model accuracy on clean data.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:05:32 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 03:08:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Yuanmin",
""
],
[
"Zhang",
"Mi",
""
],
[
"Wang",
"Zhaoxiang",
""
],
[
"Li",
"Wenxuan",
""
],
[
"Yang",
"Min",
""
]
] | TITLE: Revisiting Backdoor Attacks on Time Series Classification in the
Frequency Domain
ABSTRACT: Time series classification (TSC) is a cornerstone of modern web applications,
powering tasks such as financial data analysis, network traffic monitoring, and
user behavior analysis. In recent years, deep neural networks (DNNs) have
greatly enhanced the performance of TSC models in these critical domains.
However, DNNs are vulnerable to backdoor attacks, where attackers can covertly
implant triggers into models to induce malicious outcomes. Existing backdoor
attacks targeting DNN-based TSC models remain elementary. In particular, early
methods borrow trigger designs from computer vision, which are ineffective for
time series data. More recent approaches utilize generative models for trigger
generation, but at the cost of significant computational complexity. In this
work, we analyze the limitations of existing attacks and introduce an enhanced
method, FreqBack. Drawing inspiration from the fact that DNN models inherently
capture frequency domain features in time series data, we identify that
improper perturbations in the frequency domain are the root cause of
ineffective attacks. To address this, we propose to generate triggers both
effectively and efficiently, guided by frequency analysis. FreqBack exhibits
strong performance across five models and eight datasets, achieving an
impressive attack success rate of over 90%, while maintaining less than a 3%
drop in model accuracy on clean data.
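The underlying observation, that effective perturbations live naturally in the
frequency domain for time series, can be illustrated with a single-bin FFT
modification; the bin and amplitude below are arbitrary, and FreqBack's actual
trigger generation is guided by its frequency analysis of the target model.

import numpy as np

def add_frequency_perturbation(series, freq_bin=5, amplitude=0.05):
    # Perturb one FFT bin and invert back to the time domain.
    spec = np.fft.rfft(series)
    spec[freq_bin] += amplitude * len(series)
    return np.fft.irfft(spec, n=len(series))

t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)
x_mod = add_frequency_perturbation(x)
print("max time-domain change:", np.max(np.abs(x_mod - x)))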
|
2503.09853 | Kourosh Shahnazari | Kourosh Shahnazari, Seyed Moein Ayyoubzadeh | Who Are You Behind the Screen? Implicit MBTI and Gender Detection Using
Artificial Intelligence | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In personalized technology and psychological research, precisely detecting
demographic features and personality traits from digital interactions becomes
ever more important. This work investigates implicit categorization, inferring
personality and gender variables directly from linguistic patterns in Telegram
conversation data, while conventional personality prediction techniques mostly
depend on explicitly self-reported labels. We refine a Transformer-based
language model (RoBERTa) to capture complex linguistic cues indicative of
personality traits and gender differences using a dataset comprising 138,866
messages from 1,602 users annotated with MBTI types and 195,016 messages from
2,598 users annotated with gender. Applying confidence thresholds raises model
accuracy to 86.16\%, demonstrating RoBERTa's capacity to consistently identify
implicit personality types from conversational text data. Our results highlight
the usefulness of Transformer architectures for implicit personality and gender
classification, underscoring their efficiency and the important trade-off
between accuracy and coverage in realistic conversational environments. For
gender classification, the model obtained an accuracy of 74.4\%, capturing
gender-specific language patterns.
Personality dimension analysis showed that people with introverted and
intuitive preferences are especially more active in text-based interactions.
This study emphasizes practical issues in balancing accuracy and data coverage
as Transformer-based models show their efficiency in implicit personality and
gender prediction tasks from conversational texts.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 21:24:22 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 23:59:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Shahnazari",
"Kourosh",
""
],
[
"Ayyoubzadeh",
"Seyed Moein",
""
]
] | TITLE: Who Are You Behind the Screen? Implicit MBTI and Gender Detection Using
Artificial Intelligence
ABSTRACT: In personalized technology and psychological research, precisely detecting
demographic features and personality traits from digital interactions becomes
ever more important. This work investigates implicit categorization, inferring
personality and gender variables directly from linguistic patterns in Telegram
conversation data, while conventional personality prediction techniques mostly
depend on explicitly self-reported labels. We refine a Transformer-based
language model (RoBERTa) to capture complex linguistic cues indicative of
personality traits and gender differences using a dataset comprising 138,866
messages from 1,602 users annotated with MBTI types and 195,016 messages from
2,598 users annotated with gender. Applying confidence thresholds raises model
accuracy to 86.16\%, demonstrating RoBERTa's capacity to consistently identify
implicit personality types from conversational text data. Our results highlight
the usefulness of Transformer architectures for implicit personality and gender
classification, underscoring their efficiency and the important trade-off
between accuracy and coverage in realistic conversational environments. For
gender classification, the model obtained an accuracy of 74.4\%, capturing
gender-specific language patterns.
Personality dimension analysis showed that people with introverted and
intuitive preferences are especially more active in text-based interactions.
This study emphasizes practical issues in balancing accuracy and data coverage
as Transformer-based models show their efficiency in implicit personality and
gender prediction tasks from conversational texts.
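The accuracy/coverage trade-off driven by confidence levels can be reproduced
with a simple softmax threshold: predictions are kept only when the top-class
probability clears a cutoff. The synthetic logits, 16-class setup, and 0.9
threshold below are assumptions.

import numpy as np

def predict_with_confidence(logits, labels, threshold=0.9):
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf, preds = probs.max(axis=1), probs.argmax(axis=1)
    keep = conf >= threshold
    acc = (preds[keep] == labels[keep]).mean() if keep.any() else float("nan")
    return acc, keep.mean()    # accuracy on kept samples, coverage

rng = np.random.default_rng(1)
labels = rng.integers(0, 16, size=1000)            # e.g., 16 MBTI classes
logits = rng.normal(size=(1000, 16))
logits[np.arange(1000), labels] += 3.0             # inject signal into true class
print(predict_with_confidence(logits, labels))

Raising the threshold trades coverage for accuracy, which mirrors the trade-off
the abstract highlights.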
|
2503.10156 | Thomas Sanchez | Thomas Sanchez, Vladyslav Zalevskyi, Angeline Mihailov, Gerard
Mart\'i-Juan, Elisenda Eixarch, Andras Jakab, Vincent Dunet, M\'eriam Koob,
Guillaume Auzias, Meritxell Bach Cuadra | Automatic quality control in multi-centric fetal brain MRI
super-resolution reconstruction | 11 pages, 3 figures; Submitted to MICCAI 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Quality control (QC) has long been considered essential to guarantee the
reliability of neuroimaging studies. It is particularly important for fetal
brain MRI, where acquisitions and image processing techniques are less
standardized than in adult imaging. In this work, we focus on automated quality
control of super-resolution reconstruction (SRR) volumes of fetal brain MRI, an
important processing step where multiple stacks of thick 2D slices are
registered together and combined to build a single, isotropic and artifact-free
T2 weighted volume. We propose FetMRQC$_{SR}$, a machine-learning method that
extracts more than 100 image quality metrics to predict image quality scores
using a random forest model. This approach is well suited to a problem that is
high dimensional, with highly heterogeneous data and small datasets. We
validate FetMRQC$_{SR}$ in an out-of-domain (OOD) setting and report high
performance (ROC AUC = 0.89), even when faced with data from an unknown site or
SRR method. We also investigate failure cases and show that they occur in
$45\%$ of the images due to ambiguous configurations for which the rating from
the expert is arguable. These results are encouraging and illustrate how a
non-deep-learning method like FetMRQC$_{SR}$ is well suited to this
multifaceted problem. Our tool, along with all the code used to generate, train
and evaluate the model will be released upon acceptance of the paper.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 08:34:40 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 10:05:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sanchez",
"Thomas",
""
],
[
"Zalevskyi",
"Vladyslav",
""
],
[
"Mihailov",
"Angeline",
""
],
[
"Martí-Juan",
"Gerard",
""
],
[
"Eixarch",
"Elisenda",
""
],
[
"Jakab",
"Andras",
""
],
[
"Dunet",
"Vincent",
""
],
[
"Koob",
"Mériam",
""
],
[
"Auzias",
"Guillaume",
""
],
[
"Cuadra",
"Meritxell Bach",
""
]
] | TITLE: Automatic quality control in multi-centric fetal brain MRI
super-resolution reconstruction
ABSTRACT: Quality control (QC) has long been considered essential to guarantee the
reliability of neuroimaging studies. It is particularly important for fetal
brain MRI, where acquisitions and image processing techniques are less
standardized than in adult imaging. In this work, we focus on automated quality
control of super-resolution reconstruction (SRR) volumes of fetal brain MRI, an
important processing step where multiple stacks of thick 2D slices are
registered together and combined to build a single, isotropic and artifact-free
T2 weighted volume. We propose FetMRQC$_{SR}$, a machine-learning method that
extracts more than 100 image quality metrics to predict image quality scores
using a random forest model. This approach is well suited to a problem that is
high dimensional, with highly heterogeneous data and small datasets. We
validate FetMRQC$_{SR}$ in an out-of-domain (OOD) setting and report high
performance (ROC AUC = 0.89), even when faced with data from an unknown site or
SRR method. We also investigate failure cases and show that they occur in
$45\%$ of the images due to ambiguous configurations for which the rating from
the expert is arguable. These results are encouraging and illustrate how a
non-deep-learning method like FetMRQC$_{SR}$ is well suited to this
multifaceted problem. Our tool, along with all the code used to generate, train
and evaluate the model will be released upon acceptance of the paper.
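A minimal sketch of the FetMRQC$_{SR}$-style setup: a random forest over
roughly 100 image quality metrics, validated with site-grouped folds to mimic
the out-of-domain evaluation. The synthetic features, ratings, and regression
framing are assumptions for illustration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))       # ~100 image quality metrics per SRR volume
y = rng.uniform(0, 4, size=200)       # expert quality ratings (synthetic)
sites = rng.integers(0, 4, size=200)  # acquisition site for each volume

# Grouping folds by site approximates testing on an unknown site.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=GroupKFold(n_splits=4), groups=sites)
print("per-held-out-site R^2:", scores)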
|
2503.10508 | Daou Zhang | Yuhan Wang, Cheng Liu, Daou Zhang and Weichao Wu | Hoi2Anomaly: An Explainable Anomaly Detection Approach Guided by
Human-Object Interaction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of Image Anomaly Detection (IAD), existing methods frequently
exhibit a paucity of fine-grained, interpretable semantic information. This
deficiency often leads to the detection of anomalous entities or actions that
are susceptible to machine illusions and lack sufficient explanation. In this
paper, we propose a novel approach to
anomaly detection, termed Hoi2Anomaly, which aims to achieve precise
discrimination and localization of anomalies. The proposed methodology involves
the construction of a multi-modal instruction tuning dataset comprising
human-object interaction (HOI) pairs in anomalous scenarios. Second, we have
trained an HOI extractor in threat scenarios to localize and match anomalous
actions and entities. Finally, explanatory content is generated for the
detected anomalous HOI by fine-tuning the visual language pretraining (VLP)
framework. The experimental results demonstrate that Hoi2Anomaly surpasses
existing generative approaches in terms of precision and explainability. We
will release Hoi2Anomaly for the advancement of the field of anomaly detection.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:09:51 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 05:44:22 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Yuhan",
""
],
[
"Liu",
"Cheng",
""
],
[
"Zhang",
"Daou",
""
],
[
"Wu",
"Weichao",
""
]
] | TITLE: Hoi2Anomaly: An Explainable Anomaly Detection Approach Guided by
Human-Object Interaction
ABSTRACT: In the domain of Image Anomaly Detection (IAD), existing methods frequently
exhibit a paucity of fine-grained, interpretable semantic information. This
deficiency often leads to the detection of anomalous entities or actions that
are susceptible to machine illusions and lack sufficient explanation. In this
paper, we propose a novel approach to
anomaly detection, termed Hoi2Anomaly, which aims to achieve precise
discrimination and localization of anomalies. The proposed methodology involves
the construction of a multi-modal instruction tuning dataset comprising
human-object interaction (HOI) pairs in anomalous scenarios. Second, we have
trained an HOI extractor in threat scenarios to localize and match anomalous
actions and entities. Finally, explanatory content is generated for the
detected anomalous HOI by fine-tuning the visual language pretraining (VLP)
framework. The experimental results demonstrate that Hoi2Anomaly surpasses
existing generative approaches in terms of precision and explainability. We
will release Hoi2Anomaly for the advancement of the field of anomaly detection.
|
2503.10582 | Wenhu Chen | Yiming Jia, Jiachen Li, Xiang Yue, Bo Li, Ping Nie, Kai Zou, Wenhu
Chen | VisualWebInstruct: Scaling up Multimodal Instruction Data through Web
Search | Technical Report | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-Language Models have made significant progress on many
perception-focused tasks. However, their progress on reasoning-focused tasks
remains limited due to the lack of high-quality and diverse training data. In
this work, we aim to address the scarcity of reasoning-focused multimodal
datasets. We propose VisualWebInstruct, a novel approach that leverages search
engines to create a diverse and high-quality dataset spanning multiple
disciplines, including mathematics, physics, finance, and chemistry.
Starting with a meticulously selected set of 30,000 seed images, we employ
Google Image Search to identify websites containing similar images. We collect
and process HTML data from over 700K unique URLs. Through a pipeline of content
extraction, filtering, and synthesis, we construct a dataset of approximately
900K question-answer (QA) pairs, with 40% consisting of visual QA pairs and the
remaining comprising text-based QA pairs. Models fine-tuned on
VisualWebInstruct demonstrate significant performance improvements: (1)
fine-tuning on Llava-OV results in 10-20 absolute points improvement across
benchmarks, and (2) fine-tuning from MAmmoTH-VL yields a 5 absolute points gain
across benchmarks. Our best model, MAmmoTH-VL2, achieves state-of-the-art
performance within the 10B parameter class on MMMU-Pro (40.7), MathVerse
(42.6), and DynaMath (55.7). These results highlight the effectiveness of our
dataset in enhancing the reasoning capabilities of vision-language models for
complex multimodal tasks.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:32:48 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 01:09:17 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jia",
"Yiming",
""
],
[
"Li",
"Jiachen",
""
],
[
"Yue",
"Xiang",
""
],
[
"Li",
"Bo",
""
],
[
"Nie",
"Ping",
""
],
[
"Zou",
"Kai",
""
],
[
"Chen",
"Wenhu",
""
]
] | TITLE: VisualWebInstruct: Scaling up Multimodal Instruction Data through Web
Search
ABSTRACT: Vision-Language Models have made significant progress on many
perception-focused tasks. However, their progress on reasoning-focused tasks
remains limited due to the lack of high-quality and diverse training data. In
this work, we aim to address the scarcity of reasoning-focused multimodal
datasets. We propose VisualWebInstruct, a novel approach that leverages search
engines to create a diverse and high-quality dataset spanning multiple
disciplines, including mathematics, physics, finance, and chemistry.
Starting with a meticulously selected set of 30,000 seed images, we employ
Google Image Search to identify websites containing similar images. We collect
and process HTML data from over 700K unique URLs. Through a pipeline of content
extraction, filtering, and synthesis, we construct a dataset of approximately
900K question-answer (QA) pairs, with 40% consisting of visual QA pairs and the
remaining comprising text-based QA pairs. Models fine-tuned on
VisualWebInstruct demonstrate significant performance improvements: (1)
fine-tuning on Llava-OV results in 10-20 absolute points improvement across
benchmarks, and (2) fine-tuning from MAmmoTH-VL yields a 5 absolute points gain
across benchmarks. Our best model, MAmmoTH-VL2, achieves state-of-the-art
performance within the 10B parameter class on MMMU-Pro (40.7), MathVerse
(42.6), and DynaMath (55.7). These results highlight the effectiveness of our
dataset in enhancing the reasoning capabilities of vision-language models for
complex multimodal tasks.
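The mining pipeline reduces to a loop over seed images, retrieved URLs, and
extracted QA candidates. The skeleton below stubs out all four components
(search, fetch, extraction, filtering), which the paper implements with Google
Image Search, crawling, and LLM-based synthesis.

def mine_qa_pairs(seed_images, search, fetch, extract_qa, filter_ok):
    dataset = []
    for img in seed_images:
        for url in search(img):               # sites containing similar images
            html = fetch(url)
            for qa in extract_qa(html, img):  # mine candidate QA pairs
                if filter_ok(qa):
                    dataset.append(qa)
    return dataset

# Stubbed run so the sketch is self-contained:
demo = mine_qa_pairs(
    ["img0"],
    search=lambda img: ["http://example.com/page"],
    fetch=lambda url: "<html>Q: what is shown? A: a triangle</html>",
    extract_qa=lambda html, img: [{"q": "what is shown?", "a": "a triangle"}],
    filter_ok=lambda qa: len(qa["a"]) > 0,
)
print(demo)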
|
2503.10586 | Chaoqun Wang | Chaoqun Wang, Jie Yang, Xiaobin Hong, and Ruimao Zhang | Unlock the Power of Unlabeled Data in Language Driving Model | Accepted by ICRA2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent Vision-based Large Language Models~(VisionLLMs) for autonomous driving
have seen rapid advancements. However, such progress is heavily dependent on
large-scale high-quality annotated data, which is costly and labor-intensive.
To address this issue, we propose unlocking the value of abundant yet unlabeled
data to improve the language-driving model in a semi-supervised learning
manner. Specifically, we first introduce a series of template-based prompts to
extract scene information, generating questions that create pseudo-answers for
the unlabeled data based on a model trained with limited labeled data. Next, we
propose a Self-Consistency Refinement method to improve the quality of these
pseudo-annotations, which are later used for further training. By utilizing a
pre-trained VisionLLM (e.g., InternVL), we build a strong Language Driving
Model (LDM) for driving scene question-answering, outperforming previous
state-of-the-art methods. Extensive experiments on the DriveLM benchmark show
that our approach performs well with just 5% labeled data, achieving
competitive performance against models trained with full datasets. In
particular, our LDM achieves 44.85% performance with limited labeled data,
increasing to 54.27% when using unlabeled data, while models trained with full
datasets reach 60.68% on the DriveLM benchmark.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:36:36 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 06:25:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Chaoqun",
""
],
[
"Yang",
"Jie",
""
],
[
"Hong",
"Xiaobin",
""
],
[
"Zhang",
"Ruimao",
""
]
] | TITLE: Unlock the Power of Unlabeled Data in Language Driving Model
ABSTRACT: Recent Vision-based Large Language Models~(VisionLLMs) for autonomous driving
have seen rapid advancements. However, such progress is heavily dependent on
large-scale high-quality annotated data, which is costly and labor-intensive.
To address this issue, we propose unlocking the value of abundant yet unlabeled
data to improve the language-driving model in a semi-supervised learning
manner. Specifically, we first introduce a series of template-based prompts to
extract scene information, generating questions that create pseudo-answers for
the unlabeled data based on a model trained with limited labeled data. Next, we
propose a Self-Consistency Refinement method to improve the quality of these
pseudo-annotations, which are later used for further training. By utilizing a
pre-trained VisionLLM (e.g., InternVL), we build a strong Language Driving
Model (LDM) for driving scene question-answering, outperforming previous
state-of-the-art methods. Extensive experiments on the DriveLM benchmark show
that our approach performs well with just 5% labeled data, achieving
competitive performance against models trained with full datasets. In
particular, our LDM achieves 44.85% performance with limited labeled data,
increasing to 54.27% when using unlabeled data, while models trained with full
datasets reach 60.68% on the DriveLM benchmark.
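One simple reading of the Self-Consistency Refinement step is majority voting
over repeated pseudo-answers, discarding examples with weak agreement. The
sketch below assumes a stochastic question-answering callable and a 0.6
agreement cutoff, both illustrative.

import random
from collections import Counter

def self_consistent_answer(question, sample, n=5, min_agreement=0.6):
    # Sample n pseudo-answers; keep the majority answer only if it is stable.
    answers = [sample(question) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / n >= min_agreement else None

stub = lambda q: random.choice(["turn left", "turn left", "stop"])
print(self_consistent_answer("What should the ego car do?", stub))

Pseudo-answers that fail the agreement test would simply be excluded from the
next training round.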
|
2503.10619 | Andy Zhou | Andy Zhou | Siege: Autonomous Multi-Turn Jailbreaking of Large Language Models with
Tree Search | Accepted to ICLR 2025 Trustworthy LLM | null | null | null | cs.AI cs.CL cs.CR | http://creativecommons.org/licenses/by/4.0/ | We introduce Siege, a multi-turn adversarial framework that models the
gradual erosion of Large Language Model (LLM) safety through a tree search
perspective. Unlike single-turn jailbreaks that rely on one meticulously
engineered prompt, Siege expands the conversation at each turn in a
breadth-first fashion, branching out multiple adversarial prompts that exploit
partial compliance from previous responses. By tracking these incremental
policy leaks and re-injecting them into subsequent queries, Siege reveals how
minor concessions can accumulate into fully disallowed outputs. Evaluations on
the JailbreakBench dataset show that Siege achieves a 100% success rate on
GPT-3.5-turbo and 97% on GPT-4 in a single multi-turn run, using fewer queries
than baselines such as Crescendo or GOAT. This tree search methodology offers
an in-depth view of how model safeguards degrade over successive dialogue
turns, underscoring the urgency of robust multi-turn testing procedures for
language models.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:57:32 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 20:14:05 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhou",
"Andy",
""
]
] | TITLE: Siege: Autonomous Multi-Turn Jailbreaking of Large Language Models with
Tree Search
ABSTRACT: We introduce Siege, a multi-turn adversarial framework that models the
gradual erosion of Large Language Model (LLM) safety through a tree search
perspective. Unlike single-turn jailbreaks that rely on one meticulously
engineered prompt, Siege expands the conversation at each turn in a
breadth-first fashion, branching out multiple adversarial prompts that exploit
partial compliance from previous responses. By tracking these incremental
policy leaks and re-injecting them into subsequent queries, Siege reveals how
minor concessions can accumulate into fully disallowed outputs. Evaluations on
the JailbreakBench dataset show that Siege achieves a 100% success rate on
GPT-3.5-turbo and 97% on GPT-4 in a single multi-turn run, using fewer queries
than baselines such as Crescendo or GOAT. This tree search methodology offers
an in-depth view of how model safeguards degrade over successive dialogue
turns, underscoring the urgency of robust multi-turn testing procedures for
language models.
|
2503.10677 | Mingyue Cheng | Mingyue Cheng, Yucong Luo, Jie Ouyang, Qi Liu, Huijie Liu, Li Li, Shuo
Yu, Bohou Zhang, Jiawei Cao, Jie Ma, Daoyu Wang, Enhong Chen | A Survey on Knowledge-Oriented Retrieval-Augmented Generation | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-Augmented Generation (RAG) has gained significant attention in
recent years for its potential to enhance natural language understanding and
generation by combining large-scale retrieval systems with generative models.
RAG leverages external knowledge sources, such as documents, databases, or
structured data, to improve model performance and generate more accurate and
contextually relevant outputs. This survey aims to provide a comprehensive
overview of RAG by examining its fundamental components, including retrieval
mechanisms, generation processes, and the integration between the two. We
discuss the key characteristics of RAG, such as its ability to augment
generative models with dynamic external knowledge, and the challenges
associated with aligning retrieved information with generative objectives. We
also present a taxonomy that categorizes RAG methods, ranging from basic
retrieval-augmented approaches to more advanced models incorporating
multi-modal data and reasoning capabilities. Additionally, we review the
evaluation benchmarks and datasets commonly used to assess RAG systems, along
with a detailed exploration of its applications in fields such as question
answering, summarization, and information retrieval. Finally, we highlight
emerging research directions and opportunities for improving RAG systems, such
as enhanced retrieval efficiency, model interpretability, and domain-specific
adaptations. This paper concludes by outlining the prospects for RAG in
addressing real-world challenges and its potential to drive further
advancements in natural language processing.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:59:35 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 11:24:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Cheng",
"Mingyue",
""
],
[
"Luo",
"Yucong",
""
],
[
"Ouyang",
"Jie",
""
],
[
"Liu",
"Qi",
""
],
[
"Liu",
"Huijie",
""
],
[
"Li",
"Li",
""
],
[
"Yu",
"Shuo",
""
],
[
"Zhang",
"Bohou",
""
],
[
"Cao",
"Jiawei",
""
],
[
"Ma",
"Jie",
""
],
[
"Wang",
"Daoyu",
""
],
[
"Chen",
"Enhong",
""
]
] | TITLE: A Survey on Knowledge-Oriented Retrieval-Augmented Generation
ABSTRACT: Retrieval-Augmented Generation (RAG) has gained significant attention in
recent years for its potential to enhance natural language understanding and
generation by combining large-scale retrieval systems with generative models.
RAG leverages external knowledge sources, such as documents, databases, or
structured data, to improve model performance and generate more accurate and
contextually relevant outputs. This survey aims to provide a comprehensive
overview of RAG by examining its fundamental components, including retrieval
mechanisms, generation processes, and the integration between the two. We
discuss the key characteristics of RAG, such as its ability to augment
generative models with dynamic external knowledge, and the challenges
associated with aligning retrieved information with generative objectives. We
also present a taxonomy that categorizes RAG methods, ranging from basic
retrieval-augmented approaches to more advanced models incorporating
multi-modal data and reasoning capabilities. Additionally, we review the
evaluation benchmarks and datasets commonly used to assess RAG systems, along
with a detailed exploration of its applications in fields such as question
answering, summarization, and information retrieval. Finally, we highlight
emerging research directions and opportunities for improving RAG systems, such
as enhanced retrieval efficiency, model interpretability, and domain-specific
adaptations. This paper concludes by outlining the prospects for RAG in
addressing real-world challenges and its potential to drive further
advancements in natural language processing.
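To make the retrieve-then-generate loop concrete, here is a toy sketch: retrieval is a bag-of-words cosine similarity and `generate` is a hypothetical placeholder for any generative model call; real RAG systems use dense retrievers and an actual LLM.

```python
# Minimal retrieve-then-generate sketch of the RAG loop.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())  # toy bag-of-words "embedding"

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def generate(prompt):                 # placeholder for an LLM call
    return f"[model output conditioned on]\n{prompt}"

def rag_answer(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

corpus = ["RAG combines retrieval systems with generative models.",
          "Knowledge graphs store entities and relations.",
          "Cats sleep most of the day."]
print(rag_answer("How does RAG work?", corpus))
```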
|
2503.10719 | Yehang Zhang | Yehang Zhang, Xinli Xu, Xiaojie Xu, Li Liu, Yingcong Chen | Long-Video Audio Synthesis with Multi-Agent Collaboration | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Video-to-audio synthesis, which generates synchronized audio for visual
content, critically enhances viewer immersion and narrative coherence in film
and interactive media. However, video-to-audio dubbing for long-form content
remains an unsolved challenge due to dynamic semantic shifts, temporal
misalignment, and the absence of dedicated datasets. While existing methods
excel in short videos, they falter in long scenarios (e.g., movies) due to
fragmented synthesis and inadequate cross-scene consistency. We propose
LVAS-Agent, a novel multi-agent framework that emulates professional dubbing
workflows through collaborative role specialization. Our approach decomposes
long-video synthesis into four steps: scene segmentation, script
generation, sound design, and audio synthesis. Central innovations include a
discussion-correction mechanism for scene/script refinement and a
generation-retrieval loop for temporal-semantic alignment. To enable systematic
evaluation, we introduce LVAS-Bench, the first benchmark with 207
professionally curated long videos spanning diverse scenarios. Experiments
demonstrate superior audio-visual alignment over baseline methods. Project
page: https://lvas-agent.github.io
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 07:58:23 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 05:48:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Yehang",
""
],
[
"Xu",
"Xinli",
""
],
[
"Xu",
"Xiaojie",
""
],
[
"Liu",
"Li",
""
],
[
"Chen",
"Yingcong",
""
]
] | TITLE: Long-Video Audio Synthesis with Multi-Agent Collaboration
ABSTRACT: Video-to-audio synthesis, which generates synchronized audio for visual
content, critically enhances viewer immersion and narrative coherence in film
and interactive media. However, video-to-audio dubbing for long-form content
remains an unsolved challenge due to dynamic semantic shifts, temporal
misalignment, and the absence of dedicated datasets. While existing methods
excel in short videos, they falter in long scenarios (e.g., movies) due to
fragmented synthesis and inadequate cross-scene consistency. We propose
LVAS-Agent, a novel multi-agent framework that emulates professional dubbing
workflows through collaborative role specialization. Our approach decomposes
long-video synthesis into four steps: scene segmentation, script
generation, sound design, and audio synthesis. Central innovations include a
discussion-correction mechanism for scene/script refinement and a
generation-retrieval loop for temporal-semantic alignment. To enable systematic
evaluation, we introduce LVAS-Bench, the first benchmark with 207
professionally curated long videos spanning diverse scenarios. Experiments
demonstrate superior audio-visual alignment over baseline methods. Project
page: https://lvas-agent.github.io
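A toy orchestration of the four-step decomposition (scene segmentation, script generation, sound design, audio synthesis) with a discussion-correction loop might look as follows; every agent function is an illustrative placeholder, not the paper's implementation.

```python
# Toy pipeline over the four steps, with a critique loop between
# hypothetical agent roles; all functions are placeholders.
def segment_scenes(video):   return [f"{video}:scene{i}" for i in range(3)]
def write_script(scene):     return f"script for {scene}"
def critique(script):        return None  # a revision note, or None if accepted
def design_sound(script):    return f"sound plan for ({script})"
def synthesize_audio(plan):  return f"audio from ({plan})"

def dub_long_video(video, max_rounds=2):
    tracks = []
    for scene in segment_scenes(video):
        script = write_script(scene)
        for _ in range(max_rounds):            # discussion-correction loop
            note = critique(script)
            if note is None:
                break
            script = write_script(scene + " | revise: " + note)
        tracks.append(synthesize_audio(design_sound(script)))
    return tracks

print(dub_long_video("movie.mp4"))
```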
|
2503.10738 | Mohammad Mosaffa | Mohammad Mosaffa, Omid Rafieian and Hema Yoganarasimhan | Visual Polarization Measurement Using Counterfactual Image Generation | null | null | null | null | cs.CV cs.LG econ.EM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Political polarization is a significant issue in American politics,
influencing public discourse, policy, and consumer behavior. While studies on
polarization in news media have extensively focused on verbal content,
non-verbal elements, particularly visual content, have received less attention
due to the complexity and high dimensionality of image data. Traditional
descriptive approaches often rely on feature extraction from images, leading to
biased polarization estimates due to information loss. In this paper, we
introduce the Polarization Measurement using Counterfactual Image Generation
(PMCIG) method, which combines economic theory with generative models and
multi-modal deep learning to fully utilize the richness of image data and
provide a theoretically grounded measure of polarization in visual content.
Applying this framework to a decade-long dataset featuring 30 prominent
politicians across 20 major news outlets, we identify significant polarization
in visual content, with notable variations across outlets and politicians. At
the news outlet level, we observe significant heterogeneity in visual slant.
Outlets such as Daily Mail, Fox News, and Newsmax tend to favor Republican
politicians in their visual content, while The Washington Post, USA Today, and
The New York Times exhibit a slant in favor of Democratic politicians. At the
politician level, our results reveal substantial variation in polarized
coverage, with Donald Trump and Barack Obama among the most polarizing figures,
while Joe Manchin and Susan Collins are among the least. Finally, we conduct a
series of validation tests demonstrating the consistency of our proposed
measures with external measures of media slant that rely on non-image-based
sources.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:32:07 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mosaffa",
"Mohammad",
""
],
[
"Rafieian",
"Omid",
""
],
[
"Yoganarasimhan",
"Hema",
""
]
] | TITLE: Visual Polarization Measurement Using Counterfactual Image Generation
ABSTRACT: Political polarization is a significant issue in American politics,
influencing public discourse, policy, and consumer behavior. While studies on
polarization in news media have extensively focused on verbal content,
non-verbal elements, particularly visual content, have received less attention
due to the complexity and high dimensionality of image data. Traditional
descriptive approaches often rely on feature extraction from images, leading to
biased polarization estimates due to information loss. In this paper, we
introduce the Polarization Measurement using Counterfactual Image Generation
(PMCIG) method, which combines economic theory with generative models and
multi-modal deep learning to fully utilize the richness of image data and
provide a theoretically grounded measure of polarization in visual content.
Applying this framework to a decade-long dataset featuring 30 prominent
politicians across 20 major news outlets, we identify significant polarization
in visual content, with notable variations across outlets and politicians. At
the news outlet level, we observe significant heterogeneity in visual slant.
Outlets such as Daily Mail, Fox News, and Newsmax tend to favor Republican
politicians in their visual content, while The Washington Post, USA Today, and
The New York Times exhibit a slant in favor of Democratic politicians. At the
politician level, our results reveal substantial variation in polarized
coverage, with Donald Trump and Barack Obama among the most polarizing figures,
while Joe Manchin and Susan Collins are among the least. Finally, we conduct a
series of validation tests demonstrating the consistency of our proposed
measures with external measures of media slant that rely on non-image-based
sources.
|
2503.11071 | Chao Shuai | Zhenguang Liu, Chao Shuai, Shaojing Fan, Ziping Dong, Jinwu Hu,
Zhongjie Ba, Kui Ren | Harnessing Frequency Spectrum Insights for Image Copyright Protection
Against Diffusion Models | Accepted by CVPR 2025 (10 pages, 11 figures) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models have achieved remarkable success in novel view synthesis,
but their reliance on large, diverse, and often untraceable Web datasets has
raised pressing concerns about image copyright protection. Current methods fall
short in reliably identifying unauthorized image use, as they struggle to
generalize across varied generation tasks and fail when the training dataset
includes images from multiple sources with few identifiable (watermarked or
poisoned) samples. In this paper, we present novel evidence that
diffusion-generated images faithfully preserve the statistical properties of
their training data, particularly reflected in their spectral features.
Leveraging this insight, we introduce \emph{CoprGuard}, a robust frequency
domain watermarking framework to safeguard against unauthorized image usage in
diffusion model training and fine-tuning. CoprGuard demonstrates remarkable
effectiveness against a wide range of models, from naive diffusion models to
sophisticated text-to-image models, and is robust even when watermarked images
comprise a mere 1\% of the training dataset. This robust and versatile approach
empowers content owners to protect their intellectual property in the era of
AI-driven image generation.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 04:27:50 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 06:58:14 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Zhenguang",
""
],
[
"Shuai",
"Chao",
""
],
[
"Fan",
"Shaojing",
""
],
[
"Dong",
"Ziping",
""
],
[
"Hu",
"Jinwu",
""
],
[
"Ba",
"Zhongjie",
""
],
[
"Ren",
"Kui",
""
]
] | TITLE: Harnessing Frequency Spectrum Insights for Image Copyright Protection
Against Diffusion Models
ABSTRACT: Diffusion models have achieved remarkable success in novel view synthesis,
but their reliance on large, diverse, and often untraceable Web datasets has
raised pressing concerns about image copyright protection. Current methods fall
short in reliably identifying unauthorized image use, as they struggle to
generalize across varied generation tasks and fail when the training dataset
includes images from multiple sources with few identifiable (watermarked or
poisoned) samples. In this paper, we present novel evidence that
diffusion-generated images faithfully preserve the statistical properties of
their training data, particularly reflected in their spectral features.
Leveraging this insight, we introduce \emph{CoprGuard}, a robust frequency
domain watermarking framework to safeguard against unauthorized image usage in
diffusion model training and fine-tuning. CoprGuard demonstrates remarkable
effectiveness against a wide range of models, from naive diffusion models to
sophisticated text-to-image models, and is robust even when watermarked images
comprise a mere 1\% of the training dataset. This robust and versatile approach
empowers content owners to protect their intellectual property in the era of
AI-driven image generation.
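For intuition, here is a generic frequency-domain watermarking sketch (not the paper's CoprGuard scheme): a fixed pseudo-random pattern is added to mid-frequency FFT magnitudes and detected by correlating the image spectrum against the pattern.

```python
# Generic frequency-domain watermark sketch; the annulus bounds and
# strength are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def make_pattern(shape, lo=0.25, hi=0.45):
    """Random +/-1 pattern restricted to a mid-frequency annulus."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return rng.choice([-1.0, 1.0], size=shape) * ((r > lo) & (r < hi))

def embed(img, pattern, strength=2.0):
    spec = np.fft.fftshift(np.fft.fft2(img))
    mag = np.abs(spec) + strength * pattern          # perturb magnitudes only
    out = np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * np.angle(spec))))
    return np.real(out)

def detect(img, pattern):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    m = pattern != 0
    a, b = mag[m] - mag[m].mean(), pattern[m]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

img = rng.random((64, 64))
pat = make_pattern(img.shape)
print(detect(embed(img, pat), pat), "vs clean:", detect(img, pat))
```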
|
2503.11227 | Jian Zhang | Jian Zhang, Bifan Wei, Shihao Qi, Haiping Zhu, Jun Liu, Qika Lin | GKG-LLM: A Unified Framework for Generalized Knowledge Graph
Construction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The construction of Generalized Knowledge Graph (GKG), including knowledge
graph, event knowledge graph and commonsense knowledge graph, is fundamental
for various natural language processing tasks. Current studies typically
construct these types of graphs separately, overlooking holistic insights and
potential unification that could be beneficial from the perspectives of
computing resources and usage. However, a key challenge in developing a unified
framework for GKG lies in the obstacles arising from task-specific differences.
In this study, we
propose a unified framework for constructing generalized knowledge graphs to
address this challenge. First, we collect data from 15 sub-tasks in 29 datasets
across the three types of graphs, categorizing them into in-sample,
counter-task, and out-of-distribution (OOD) data. Then, we propose a
three-stage curriculum learning fine-tuning framework, by iteratively injecting
knowledge from the three types of graphs into the Large Language Models.
Extensive experiments show that our proposed model improves the construction of
all three graph types across in-domain, OOD and counter-task data.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:23:22 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 06:41:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Jian",
""
],
[
"Wei",
"Bifan",
""
],
[
"Qi",
"Shihao",
""
],
[
"Zhu",
"haiping",
""
],
[
"Liu",
"Jun",
""
],
[
"Lin",
"Qika",
""
]
] | TITLE: GKG-LLM: A Unified Framework for Generalized Knowledge Graph
Construction
ABSTRACT: The construction of Generalized Knowledge Graph (GKG), including knowledge
graph, event knowledge graph and commonsense knowledge graph, is fundamental
for various natural language processing tasks. Current studies typically
construct these types of graphs separately, overlooking holistic insights and
potential unification that could be beneficial from the perspectives of
computing resources and usage. However, a key challenge in developing a unified
framework for GKG lies in the obstacles arising from task-specific differences.
In this study, we
propose a unified framework for constructing generalized knowledge graphs to
address this challenge. First, we collect data from 15 sub-tasks in 29 datasets
across the three types of graphs, categorizing them into in-sample,
counter-task, and out-of-distribution (OOD) data. Then, we propose a
three-stage curriculum learning fine-tuning framework, by iteratively injecting
knowledge from the three types of graphs into the Large Language Models.
Extensive experiments show that our proposed model improves the construction of
all three graph types across in-domain, OOD and counter-task data.
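The three-stage curriculum idea can be illustrated with a trivially small loop; `fine_tune` is a hypothetical wrapper around any LLM training routine, and the stage ordering shown is an assumption.

```python
# Minimal sketch of staged curriculum fine-tuning, one graph type per stage.
def fine_tune(model, dataset, epochs=1):        # placeholder training call
    return model + [f"trained on {dataset} x{epochs}"]

stages = ["knowledge-graph tasks", "event-KG tasks", "commonsense-KG tasks"]
model = []
for stage_data in stages:                       # iteratively inject each graph type
    model = fine_tune(model, stage_data)
print(model)
```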
|
2503.11371 | Fisher Wan | Zengyu Wan, Wei Zhai, Yang Cao, Zhengjun Zha | EMoTive: Event-guided Trajectory Modeling for 3D Motion Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Visual 3D motion estimation aims to infer the motion of 2D pixels in 3D space
based on visual cues. The key challenge arises from spatio-temporal motion
inconsistencies induced by depth variation, which disrupt the assumptions of local
spatial or temporal motion smoothness in previous motion estimation frameworks.
In contrast, event cameras offer new possibilities for 3D motion estimation
through continuous adaptive pixel-level responses to scene changes. This paper
presents EMoTive, a novel event-based framework that models spatio-temporal
trajectories via event-guided non-uniform parametric curves, effectively
characterizing locally heterogeneous spatio-temporal motion. Specifically, we
first introduce Event Kymograph - an event projection method that leverages a
continuous temporal projection kernel and decouples spatial observations to
encode fine-grained temporal evolution explicitly. For motion representation,
we introduce a density-aware adaptation mechanism to fuse spatial and temporal
features under event guidance, coupled with a non-uniform rational curve
parameterization framework to adaptively model heterogeneous trajectories. The
final 3D motion estimation is achieved through multi-temporal sampling of
parametric trajectories, yielding optical flow and depth motion fields. To
facilitate evaluation, we introduce CarlaEvent3D, a multi-dynamic synthetic
dataset for comprehensive validation. Extensive experiments on both this
dataset and a real-world benchmark demonstrate the effectiveness of the
proposed method.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:15:54 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 02:12:39 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wan",
"Zengyu",
""
],
[
"Zhai",
"Wei",
""
],
[
"Cao",
"Yang",
""
],
[
"Zha",
"Zhengjun",
""
]
] | TITLE: EMoTive: Event-guided Trajectory Modeling for 3D Motion Estimation
ABSTRACT: Visual 3D motion estimation aims to infer the motion of 2D pixels in 3D space
based on visual cues. The key challenge arises from spatio-temporal motion
inconsistencies induced by depth variation, which disrupt the assumptions of local
spatial or temporal motion smoothness in previous motion estimation frameworks.
In contrast, event cameras offer new possibilities for 3D motion estimation
through continuous adaptive pixel-level responses to scene changes. This paper
presents EMoTive, a novel event-based framework that models spatio-temporal
trajectories via event-guided non-uniform parametric curves, effectively
characterizing locally heterogeneous spatio-temporal motion. Specifically, we
first introduce Event Kymograph - an event projection method that leverages a
continuous temporal projection kernel and decouples spatial observations to
encode fine-grained temporal evolution explicitly. For motion representation,
we introduce a density-aware adaptation mechanism to fuse spatial and temporal
features under event guidance, coupled with a non-uniform rational curve
parameterization framework to adaptively model heterogeneous trajectories. The
final 3D motion estimation is achieved through multi-temporal sampling of
parametric trajectories, yielding optical flow and depth motion fields. To
facilitate evaluation, we introduce CarlaEvent3D, a multi-dynamic synthetic
dataset for comprehensive validation. Extensive experiments on both this
dataset and a real-world benchmark demonstrate the effectiveness of the
proposed method.
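As a loose illustration of the parametric-trajectory idea, the sketch below fits a vector-valued spline to positions observed at non-uniform (event-like) timestamps and samples it at multiple times; the paper uses event-guided non-uniform rational curves, for which a plain B-spline stands in here.

```python
# Toy trajectory model: spline through non-uniformly timed observations,
# sampled at several times to yield displacements (a flow-like quantity).
import numpy as np
from scipy.interpolate import make_interp_spline

t_obs = np.array([0.0, 0.05, 0.3, 0.32, 0.7, 1.0])  # non-uniform event times
xy = np.stack([np.cos(2 * t_obs), np.sin(2 * t_obs)], axis=1)  # observed positions

curve = make_interp_spline(t_obs, xy, k=3)           # vector-valued cubic spline
positions = curve(np.linspace(0.0, 1.0, 5))          # multi-temporal sampling
flow = positions[1:] - positions[:-1]                # displacement between samples
print(flow.round(3))
```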
|
2503.11439 | Seo Jin Lee | Sanghyun Jo, Seo Jin Lee, Seungwoo Lee, Seohyung Hong, Hyungseok Seo,
Kyungsu Kim | COIN: Confidence Score-Guided Distillation for Annotation-Free Cell
Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cell instance segmentation (CIS) is crucial for identifying individual cell
morphologies in histopathological images, providing valuable insights for
biological and medical research. While unsupervised CIS (UCIS) models aim to
reduce the heavy reliance on labor-intensive image annotations, they fail to
accurately capture cell boundaries, causing missed detections and poor
performance. Recognizing the absence of error-free instances as a key
limitation, we present COIN (COnfidence score-guided INstance distillation), a
novel annotation-free framework with three key steps: (1) Increasing the
sensitivity for the presence of error-free instances via unsupervised semantic
segmentation with optimal transport, leveraging its ability to discriminate
spatially minor instances, (2) Instance-level confidence scoring to measure the
consistency between model prediction and refined mask and identify highly
confident instances, offering an alternative to ground truth annotations, and
(3) Progressive expansion of confidence with recursive self-distillation.
Extensive experiments across six datasets show COIN outperforming existing UCIS
methods, even surpassing semi- and weakly-supervised approaches across all
metrics on the MoNuSeg and TNBC datasets. The code is available at
https://github.com/shjo-april/COIN.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:27:24 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 01:59:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jo",
"Sanghyun",
""
],
[
"Lee",
"Seo Jin",
""
],
[
"Lee",
"Seungwoo",
""
],
[
"Hong",
"Seohyung",
""
],
[
"Seo",
"Hyungseok",
""
],
[
"Kim",
"Kyungsu",
""
]
] | TITLE: COIN: Confidence Score-Guided Distillation for Annotation-Free Cell
Segmentation
ABSTRACT: Cell instance segmentation (CIS) is crucial for identifying individual cell
morphologies in histopathological images, providing valuable insights for
biological and medical research. While unsupervised CIS (UCIS) models aim to
reduce the heavy reliance on labor-intensive image annotations, they fail to
accurately capture cell boundaries, causing missed detections and poor
performance. Recognizing the absence of error-free instances as a key
limitation, we present COIN (COnfidence score-guided INstance distillation), a
novel annotation-free framework with three key steps: (1) Increasing the
sensitivity for the presence of error-free instances via unsupervised semantic
segmentation with optimal transport, leveraging its ability to discriminate
spatially minor instances, (2) Instance-level confidence scoring to measure the
consistency between model prediction and refined mask and identify highly
confident instances, offering an alternative to ground truth annotations, and
(3) Progressive expansion of confidence with recursive self-distillation.
Extensive experiments across six datasets show COIN outperforming existing UCIS
methods, even surpassing semi- and weakly-supervised approaches across all
metrics on the MoNuSeg and TNBC datasets. The code is available at
https://github.com/shjo-april/COIN.
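A simplified reading of step (2), instance-level confidence scoring, is to measure prediction/refinement agreement with IoU and keep only high-agreement instances as pseudo ground truth; the sketch below illustrates this idea, not the authors' exact scoring.

```python
# IoU-based agreement between predicted and refined instance masks.
import numpy as np

def iou(pred, refined):
    inter = np.logical_and(pred, refined).sum()
    union = np.logical_or(pred, refined).sum()
    return inter / union if union else 0.0

def select_confident(pred_masks, refined_masks, tau=0.8):
    """Return indices of instances whose prediction and refinement agree."""
    return [i for i, (p, r) in enumerate(zip(pred_masks, refined_masks))
            if iou(p, r) >= tau]

pred = [np.zeros((8, 8), bool) for _ in range(2)]
ref  = [np.zeros((8, 8), bool) for _ in range(2)]
pred[0][:4, :4] = ref[0][:4, :4] = True          # perfect agreement
pred[1][:2, :2] = True; ref[1][4:, 4:] = True    # disagreement
print(select_confident(pred, ref))               # -> [0]
```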
|
2503.11657 | Kevin Zhu | Vincent Li, Yule Fu, Tim Knappe, Kevin Han, Kevin Zhu | Automating Mathematical Proof Generation Using Large Language Model
Agents and Knowledge Graphs | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models have demonstrated remarkable capabilities in natural
language processing tasks, including mathematical problem-solving that requires
multi-step logical reasoning. However, challenges persist in automating the
identification of key mathematical concepts, understanding their
interrelations, and formalizing proofs within a rigorous framework. We present
a novel framework that leverages knowledge graphs to augment LLMs to construct
and formalize mathematical proofs. Our results demonstrate significant
performance improvements across multiple datasets when using knowledge graphs,
achieving up to a 34% success rate on the MUSTARDSAUCE dataset with o1-mini and
consistently outperforming baseline approaches by 2-11% across different
models. We show how this approach bridges the gap between natural language
understanding and formal logic proof systems and achieves elevated results for
foundation models over baseline.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 07:17:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Vincent",
""
],
[
"Fu",
"Yule",
""
],
[
"Knappe",
"Tim",
""
],
[
"Han",
"Kevin",
""
],
[
"Zhu",
"Kevin",
""
]
] | TITLE: Automating Mathematical Proof Generation Using Large Language Model
Agents and Knowledge Graphs
ABSTRACT: Large Language Models have demonstrated remarkable capabilities in natural
language processing tasks, including mathematical problem-solving that requires
multi-step logical reasoning. However, challenges persist in automating the
identification of key mathematical concepts, understanding their
interrelations, and formalizing proofs within a rigorous framework. We present
a novel framework that leverages knowledge graphs to augment LLMs to construct
and formalize mathematical proofs. Our results demonstrate significant
performance improvements across multiple datasets when using knowledge graphs,
achieving up to a 34% success rate on the MUSTARDSAUCE dataset with o1-mini and
consistently outperforming baseline approaches by 2-11% across different
models. We show how this approach bridges the gap between natural language
understanding and formal logic proof systems and achieves elevated results for
foundation models over baseline.
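A toy sketch of the KG-augmentation step might look like the following, where the graph, the concept matcher, and `call_llm` are all illustrative placeholders rather than the paper's pipeline.

```python
# Augment an LLM prompt with knowledge-graph neighbors of detected concepts.
KG = {
    "prime": [("divides", "integer"), ("related_to", "Euclid's lemma")],
    "gcd":   [("computed_by", "Euclidean algorithm"), ("divides", "integer")],
}

def detect_concepts(problem):
    return [c for c in KG if c in problem.lower()]

def kg_context(concepts):
    return "\n".join(f"{c} --{rel}--> {obj}"
                     for c in concepts for rel, obj in KG[c])

def call_llm(prompt):                 # placeholder for a model API call
    return "(proof sketch generated from the augmented prompt)"

def prove(problem):
    prompt = (f"Known relations:\n{kg_context(detect_concepts(problem))}\n\n"
              f"Problem: {problem}\nGive a formal proof:")
    return call_llm(prompt)

print(prove("Show that if a prime divides a product it divides a factor."))
```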
|
2503.11687 | Chris Bennett | Christopher Bennett, Kerstin Eder | Review of Machine Learning for Micro-Electronic Design Verification | 40 pages, 13 figures | null | null | null | cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microelectronic design verification remains a critical bottleneck in device
development, traditionally mitigated by expanding verification teams and
computational resources. Since the late 1990s, machine learning (ML) has been
proposed to enhance verification efficiency, yet many techniques have not
achieved mainstream adoption. This review, from the perspective of verification
and ML practitioners, examines the application of ML in dynamic-based
techniques for functional verification of microelectronic designs, and provides
a starting point for those new to this interdisciplinary field. Historical
trends, techniques, ML types, and evaluation baselines are analysed to
understand why previous research has not been widely adopted in industry. The
review highlights the application of ML, the techniques used and critically
discusses their limitations and successes. Although there is a wealth of
promising research, real-world adoption is hindered by challenges in comparing
techniques, identifying suitable applications, and the expertise required for
implementation. This review proposes that the field can progress through the
creation and use of open datasets, common benchmarks, and verification targets.
By establishing open evaluation criteria, industry can guide future research.
Parallels with ML in software verification suggest potential for collaboration.
Additionally, greater use of open-source designs and verification environments
can allow more researchers from outside the hardware verification discipline to
contribute to the challenge of verifying microelectronic designs.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 15:41:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bennett",
"Christopher",
""
],
[
"Eder",
"Kerstin",
""
]
] | TITLE: Review of Machine Learning for Micro-Electronic Design Verification
ABSTRACT: Microelectronic design verification remains a critical bottleneck in device
development, traditionally mitigated by expanding verification teams and
computational resources. Since the late 1990s, machine learning (ML) has been
proposed to enhance verification efficiency, yet many techniques have not
achieved mainstream adoption. This review, from the perspective of verification
and ML practitioners, examines the application of ML in dynamic-based
techniques for functional verification of microelectronic designs, and provides
a starting point for those new to this interdisciplinary field. Historical
trends, techniques, ML types, and evaluation baselines are analysed to
understand why previous research has not been widely adopted in industry. The
review highlights the application of ML, the techniques used and critically
discusses their limitations and successes. Although there is a wealth of
promising research, real-world adoption is hindered by challenges in comparing
techniques, identifying suitable applications, and the expertise required for
implementation. This review proposes that the field can progress through the
creation and use of open datasets, common benchmarks, and verification targets.
By establishing open evaluation criteria, industry can guide future research.
Parallels with ML in software verification suggest potential for collaboration.
Additionally, greater use of open-source designs and verification environments
can allow more researchers from outside the hardware verification discipline to
contribute to the challenge of verifying microelectronic designs.
|
2503.11692 | Rashik Shrestha | Rashik Shrestha, Madhav Rijal, Trevor Smith, Yu Gu | FloPE: Flower Pose Estimation for Precision Pollination | IROS2025 under review | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study presents Flower Pose Estimation (FloPE), a real-time flower pose
estimation framework for computationally constrained robotic pollination
systems. Robotic pollination has been proposed to supplement natural
pollination to ensure global food security due to the decreased population of
natural pollinators. However, flower pose estimation for pollination is
challenging due to natural variability and flower clusters, and because the
flowers' fragility during pollination demands high accuracy. This method leverages
3D Gaussian Splatting to generate photorealistic synthetic datasets with
precise pose annotations, enabling effective knowledge distillation from a
high-capacity teacher model to a lightweight student model for efficient
inference. The approach was evaluated on both single and multi-arm robotic
platforms, achieving a mean pose estimation error of 0.6 cm and 19.14 degrees
within a low computational cost. Our experiments validate the effectiveness of
FloPE, achieving up to 78.75% pollination success rate and outperforming prior
robotic pollination techniques.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 20:24:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Shrestha",
"Rashik",
""
],
[
"Rijal",
"Madhav",
""
],
[
"Smith",
"Trevor",
""
],
[
"Gu",
"Yu",
""
]
] | TITLE: FloPE: Flower Pose Estimation for Precision Pollination
ABSTRACT: This study presents Flower Pose Estimation (FloPE), a real-time flower pose
estimation framework for computationally constrained robotic pollination
systems. Robotic pollination has been proposed to supplement natural
pollination to ensure global food security due to the decreased population of
natural pollinators. However, flower pose estimation for pollination is
challenging due to natural variability and flower clusters, and because the
flowers' fragility during pollination demands high accuracy. This method leverages
3D Gaussian Splatting to generate photorealistic synthetic datasets with
precise pose annotations, enabling effective knowledge distillation from a
high-capacity teacher model to a lightweight student model for efficient
inference. The approach was evaluated on both single and multi-arm robotic
platforms, achieving a mean pose estimation error of 0.6 cm and 19.14 degrees
within a low computational cost. Our experiments validate the effectiveness of
FloPE, achieving up to 78.75% pollination success rate and outperforming prior
robotic pollination techniques.
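The teacher-to-student distillation described here can be sketched as below; the network sizes, the loss weighting `alpha`, and the 7-D pose output (translation plus quaternion) are illustrative assumptions, not FloPE's actual design.

```python
# Hedged sketch of distillation for pose regression: the student mimics
# teacher pose outputs on synthetic images while also fitting the labels.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 7))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 32), nn.ReLU(), nn.Linear(32, 7))
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse, alpha = nn.MSELoss(), 0.5

for step in range(100):                  # stand-in for the synthetic dataset
    imgs = torch.randn(16, 3, 32, 32)    # rendered 3DGS views would go here
    labels = torch.randn(16, 7)          # precise synthetic pose annotations
    with torch.no_grad():
        soft = teacher(imgs)             # high-capacity teacher targets
    pred = student(imgs)
    loss = alpha * mse(pred, labels) + (1 - alpha) * mse(pred, soft)
    opt.zero_grad(); loss.backward(); opt.step()
```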
|
2503.11695 | Jiaqing Zhang | Jiaqing Zhang, Miguel Contreras, Jessica Sena, Andrea Davidson,
Yuanfang Ren, Ziyuan Guan, Tezcan Ozrazgat-Baslanti, Tyler J. Loftus, Subhash
Nerella, Azra Bihorac, Parisa Rashidi | MELON: Multimodal Mixture-of-Experts with Spectral-Temporal Fusion for
Long-Term Mobility Estimation in Critical Care | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Patient mobility monitoring in intensive care is critical for ensuring timely
interventions and improving clinical outcomes. While accelerometry-based sensor
data are widely adopted in training artificial intelligence models to estimate
patient mobility, existing approaches face two key limitations highlighted in
clinical practice: (1) modeling the long-term accelerometer data is challenging
due to the high dimensionality, variability, and noise, and (2) the absence of
efficient and robust methods for long-term mobility assessment. To overcome
these challenges, we introduce MELON, a novel multimodal framework designed to
predict 12-hour mobility status in the critical care setting. MELON leverages
the power of a dual-branch network architecture, combining the strengths of
spectrogram-based visual representations and sequential accelerometer
statistical features. MELON effectively captures global and fine-grained
mobility patterns by integrating a pre-trained image encoder for rich
frequency-domain feature extraction and a Mixture-of-Experts encoder for
sequence modeling. We trained and evaluated the MELON model on the multimodal
dataset of 126 patients recruited from nine Intensive Care Units at the
University of Florida Health Shands Hospital main campus in Gainesville,
Florida. Experiments showed that MELON outperforms conventional approaches for
12-hour mobility status estimation with an overall area under the receiver
operating characteristic curve (AUROC) of 0.82 (95\% confidence interval
0.78-0.86). Notably, our experiments also revealed that accelerometer data
collected from the wrist provides robust predictive performance compared with
data from the ankle, suggesting a single-sensor solution that can reduce
patient burden and lower deployment costs...
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 19:47:46 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Jiaqing",
""
],
[
"Contreras",
"Miguel",
""
],
[
"Sena",
"Jessica",
""
],
[
"Davidson",
"Andrea",
""
],
[
"Ren",
"Yuanfang",
""
],
[
"Guan",
"Ziyuan",
""
],
[
"Ozrazgat-Baslanti",
"Tezcan",
""
],
[
"Loftus",
"Tyler J.",
""
],
[
"Nerella",
"Subhash",
""
],
[
"Bihorac",
"Azra",
""
],
[
"Rashidi",
"Parisa",
""
]
] | TITLE: MELON: Multimodal Mixture-of-Experts with Spectral-Temporal Fusion for
Long-Term Mobility Estimation in Critical Care
ABSTRACT: Patient mobility monitoring in intensive care is critical for ensuring timely
interventions and improving clinical outcomes. While accelerometry-based sensor
data are widely adopted in training artificial intelligence models to estimate
patient mobility, existing approaches face two key limitations highlighted in
clinical practice: (1) modeling the long-term accelerometer data is challenging
due to the high dimensionality, variability, and noise, and (2) the absence of
efficient and robust methods for long-term mobility assessment. To overcome
these challenges, we introduce MELON, a novel multimodal framework designed to
predict 12-hour mobility status in the critical care setting. MELON leverages
the power of a dual-branch network architecture, combining the strengths of
spectrogram-based visual representations and sequential accelerometer
statistical features. MELON effectively captures global and fine-grained
mobility patterns by integrating a pre-trained image encoder for rich
frequency-domain feature extraction and a Mixture-of-Experts encoder for
sequence modeling. We trained and evaluated the MELON model on the multimodal
dataset of 126 patients recruited from nine Intensive Care Units at the
University of Florida Health Shands Hospital main campus in Gainesville,
Florida. Experiments showed that MELON outperforms conventional approaches for
12-hour mobility status estimation with an overall area under the receiver
operating characteristic curve (AUROC) of 0.82 (95\% confidence interval
0.78-0.86). Notably, our experiments also revealed that accelerometer data
collected from the wrist provides robust predictive performance compared with
data from the ankle, suggesting a single-sensor solution that can reduce
patient burden and lower deployment costs...
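A hedged sketch of such a dual-branch design follows: a small CNN encodes the spectrogram view, a tiny mixture-of-experts layer encodes the statistical-feature sequence, and the fused embedding predicts mobility status. All layer sizes are invented; this is not the MELON architecture.

```python
# Dual-branch fusion sketch: spectrogram CNN + MoE over statistical features.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):                          # x: (batch, seq, dim)
        w = torch.softmax(self.gate(x), dim=-1)    # per-token expert weights
        out = torch.stack([e(x) for e in self.experts], dim=-1)
        return (out * w.unsqueeze(-2)).sum(-1)

class DualBranch(nn.Module):
    def __init__(self, stat_dim=16, emb=32):
        super().__init__()
        self.img = nn.Sequential(                  # spectrogram branch
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, emb))
        self.stat_in = nn.Linear(stat_dim, emb)    # sequence branch
        self.moe = TinyMoE(emb)
        self.head = nn.Linear(2 * emb, 1)          # binary mobility status

    def forward(self, spec, stats):
        a = self.img(spec)                         # (batch, emb)
        b = self.moe(self.stat_in(stats)).mean(1)  # pool over time steps
        return self.head(torch.cat([a, b], dim=-1)).squeeze(-1)

model = DualBranch()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 12, 16))
print(logits.shape)   # torch.Size([4])
```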
|
2503.11697 | Bhargav Acharya | Bhargav Acharya, William Saakyan, Barbara Hammer, and Hanna Drimalla | Generalization of Video-Based Heart Rate Estimation Methods To Low
Illumination and Elevated Heart Rates | 10 pages, 4 figures | null | null | null | cs.LG eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Heart rate is a physiological signal that provides information about an
individual's health and affective state. Remote photoplethysmography (rPPG)
allows the estimation of this signal from video recordings of a person's face.
Classical rPPG methods make use of signal processing techniques, while recent
rPPG methods utilize deep learning networks. Methods are typically evaluated on
datasets collected in well-lit environments with participants at resting heart
rates. However, little investigation has been done on how well these methods
adapt to variations in illumination and heart rate. In this work, we
systematically evaluate representative state-of-the-art methods for remote
heart rate estimation. Specifically, we evaluate four classical methods and
four deep learning-based rPPG estimation methods in terms of their
generalization ability to changing scenarios, including low lighting conditions
and elevated heart rates. For a thorough evaluation of existing approaches, we
collected a novel dataset called CHILL, which systematically varies heart rate
and lighting conditions. The dataset consists of recordings from 45
participants in four different scenarios. The video data was collected under
two different lighting conditions (high and low) and normal and elevated heart
rates. In addition, we selected two public datasets to conduct within- and
cross-dataset evaluations of the rPPG methods. Our experimental results
indicate that classical methods are not significantly impacted by low-light
conditions. Meanwhile, some deep learning methods were found to be more robust
to changes in lighting conditions but encountered challenges in estimating high
heart rates. The cross-dataset evaluation revealed that the selected deep
learning methods underperformed when influencing factors such as elevated heart
rates and low lighting conditions were not present in the training set.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:29:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Acharya",
"Bhargav",
""
],
[
"Saakyan",
"William",
""
],
[
"Hammer",
"Barbara",
""
],
[
"Drimalla",
"Hanna",
""
]
] | TITLE: Generalization of Video-Based Heart Rate Estimation Methods To Low
Illumination and Elevated Heart Rates
ABSTRACT: Heart rate is a physiological signal that provides information about an
individual's health and affective state. Remote photoplethysmography (rPPG)
allows the estimation of this signal from video recordings of a person's face.
Classical rPPG methods make use of signal processing techniques, while recent
rPPG methods utilize deep learning networks. Methods are typically evaluated on
datasets collected in well-lit environments with participants at resting heart
rates. However, little investigation has been done on how well these methods
adapt to variations in illumination and heart rate. In this work, we
systematically evaluate representative state-of-the-art methods for remote
heart rate estimation. Specifically, we evaluate four classical methods and
four deep learning-based rPPG estimation methods in terms of their
generalization ability to changing scenarios, including low lighting conditions
and elevated heart rates. For a thorough evaluation of existing approaches, we
collected a novel dataset called CHILL, which systematically varies heart rate
and lighting conditions. The dataset consists of recordings from 45
participants in four different scenarios. The video data was collected under
two different lighting conditions (high and low) and normal and elevated heart
rates. In addition, we selected two public datasets to conduct within- and
cross-dataset evaluations of the rPPG methods. Our experimental results
indicate that classical methods are not significantly impacted by low-light
conditions. Meanwhile, some deep learning methods were found to be more robust
to changes in lighting conditions but encountered challenges in estimating high
heart rates. The cross-dataset evaluation revealed that the selected deep
learning methods underperformed when influencing factors such as elevated heart
rates and low lighting conditions were not present in the training set.
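For context, a classical rPPG pipeline of the kind evaluated here can be sketched in a few lines: average the green channel over the face region per frame, band-pass to plausible heart-rate frequencies, and read off the dominant FFT peak. The band (0.7-4 Hz) and frame rate are common conventions, not the paper's settings.

```python
# Classical green-channel rPPG sketch.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr(frames, fps=30.0):
    """frames: array (T, H, W, 3) of face crops; returns beats per minute."""
    trace = frames[..., 1].mean(axis=(1, 2))       # green-channel mean per frame
    trace = trace - trace.mean()
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, trace)               # 0.7-4 Hz ~ 42-240 bpm
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]

# synthetic check: a 1.5 Hz (90 bpm) pulse superimposed on noise
t = np.arange(300) / 30.0
frames = np.random.rand(300, 8, 8, 3) + 0.5 * np.sin(2 * np.pi * 1.5 * t)[:, None, None, None]
print(round(estimate_hr(frames)))   # ~90
```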
|
2503.11706 | Fabian Galis | Fabian Galis and Darian Onchis | Refining Filter Global Feature Weighting for Fully-Unsupervised
Clustering | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the context of unsupervised learning, effective clustering plays a vital
role in revealing patterns and insights from unlabeled data. However, the
success of clustering algorithms often depends on the relevance and
contribution of features, which can differ between various datasets. This paper
explores feature weighting for clustering and presents new weighting
strategies, including methods based on SHAP (SHapley Additive exPlanations), a
technique commonly used for providing explainability in various supervised
machine learning tasks. By taking advantage of SHAP values in a way other than
just to gain explainability, we use them to weight features and ultimately
improve the clustering process itself in unsupervised scenarios.
Our empirical evaluations across five benchmark datasets and clustering
methods demonstrate that feature weighting based on SHAP can enhance
unsupervised clustering quality, achieving up to a 22.69\% improvement over
other weighting methods (from 0.586 to 0.719 in terms of the Adjusted Rand
Index). Additionally, these situations where the weighted data boosts the
results are highlighted and thoroughly explored, offering insight for practical
applications.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 13:14:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Galis",
"Fabian",
""
],
[
"Onchis",
"Darian",
""
]
] | TITLE: Refining Filter Global Feature Weighting for Fully-Unsupervised
Clustering
ABSTRACT: In the context of unsupervised learning, effective clustering plays a vital
role in revealing patterns and insights from unlabeled data. However, the
success of clustering algorithms often depends on the relevance and
contribution of features, which can differ between various datasets. This paper
explores feature weighting for clustering and presents new weighting
strategies, including methods based on SHAP (SHapley Additive exPlanations), a
technique commonly used for providing explainability in various supervised
machine learning tasks. By taking advantage of SHAP values in a way other than
just to gain explainability, we use them to weight features and ultimately
improve the clustering process itself in unsupervised scenarios.
Our empirical evaluations across five benchmark datasets and clustering
methods demonstrate that feature weighting based on SHAP can enhance
unsupervised clustering quality, achieving up to a 22.69\% improvement over
other weighting methods (from 0.586 to 0.719 in terms of the Adjusted Rand
Index). Additionally, these situations where the weighted data boosts the
results are highlighted and thoroughly explored, offering insight for practical
applications.
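The weighting mechanism can be sketched as follows. For self-containedness the sketch proxies SHAP with a simple permutation-based attribution on a classifier fit to initial cluster labels; the paper's actual SHAP-based weighting differs in detail.

```python
# Feature weighting for clustering via attribution on pseudo-labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score

# 3 informative features plus 3 pure-noise features
X, y_true = make_blobs(n_samples=300, centers=3, n_features=3, random_state=0)
rng = np.random.default_rng(0)
X = np.hstack([X, rng.normal(scale=4.0, size=(300, 3))])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
clf = RandomForestClassifier(random_state=0).fit(X, labels)

# attribution proxy: how much does permuting each feature move the class
# probabilities? (a stand-in for mean |SHAP| per feature)
base = clf.predict_proba(X)
weights = np.empty(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    weights[j] = np.abs(clf.predict_proba(Xp) - base).mean()
weights /= weights.sum()

relabels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X * weights)
print(weights.round(3))                        # noise features get small weights
print(adjusted_rand_score(y_true, relabels))   # clustering on weighted data
```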
|
2503.11707 | Yi Chen | Yi Chen, Jie Lou, Malte Wabnitz, Johnson Loh and Tobias Gemmeke | EDEA: Efficient Dual-Engine Accelerator for Depthwise Separable
Convolution with Direct Data Transfer | null | null | 10.1109/SOCC62300.2024.10737823 | null | cs.AR cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Depthwise separable convolution (DSC) has emerged as a crucial technique,
especially for resource-constrained devices. In this paper, we propose a
dual-engine architecture for the DSC hardware accelerator, which enables the full
utilization of depthwise convolution (DWC) and pointwise convolution (PWC)
processing elements (PEs) in all DSC layers. To determine the optimal dataflow,
data reuse, and configuration of the target architecture, we conduct a design
space exploration using MobileNetV1 with the CIFAR10 dataset. In the
architecture, we introduce an additional non-convolutional unit, which merges
the dequantization, batch normalization (BN), ReLU, and quantization between
DWC and PWC into a simple fixed-point multiplication and addition operation.
This also reduces the intermediate data access between the DWC and PWC,
enabling streaming operation and reducing latency. The proposed DSC dual-engine
accelerator is implemented using the 22nm FDSOI technology from
GlobalFoundries, occupying an area of 0.58 $mm^2$. After signoff, it can
operate at 1 GHz at TT corner, achieving a peak energy efficiency of 13.43
TOPS/W with a throughput of 973.55 GOPS with 8-bit precision. The average
energy efficiency of all DSC layers on MobileNetV1 is 11.13 TOPS/W,
demonstrating substantial hardware efficiency improvements for DSC-based
applications.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 14:00:48 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Yi",
""
],
[
"Lou",
"Jie",
""
],
[
"Wabnitz",
"Malte",
""
],
[
"Loh",
"Johnson",
""
],
[
"Gemmeke",
"Tobias",
""
]
] | TITLE: EDEA: Efficient Dual-Engine Accelerator for Depthwise Separable
Convolution with Direct Data Transfer
ABSTRACT: Depthwise separable convolution (DSC) has emerged as a crucial technique,
especially for resource-constrained devices. In this paper, we propose a
dual-engine architecture for the DSC hardware accelerator, which enables the full
utilization of depthwise convolution (DWC) and pointwise convolution (PWC)
processing elements (PEs) in all DSC layers. To determine the optimal dataflow,
data reuse, and configuration of the target architecture, we conduct a design
space exploration using MobileNetV1 with the CIFAR10 dataset. In the
architecture, we introduce an additional non-convolutional unit, which merges
the dequantization, batch normalization (BN), ReLU, and quantization between
DWC and PWC into a simple fixed-point multiplication and addition operation.
This also reduces the intermediate data access between the DWC and PWC,
enabling streaming operation and reducing latency. The proposed DSC dual-engine
accelerator is implemented using the 22nm FDSOI technology from
GlobalFoundries, occupying an area of 0.58 $mm^2$. After signoff, it can
operate at 1 GHz at TT corner, achieving a peak energy efficiency of 13.43
TOPS/W with a throughput of 973.55 GOPS with 8-bit precision. The average
energy efficiency of all DSC layers on MobileNetV1 is 11.13 TOPS/W,
demonstrating substantial hardware efficiency improvements for DSC-based
applications.
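The fusion of dequantization, batch normalization, ReLU, and requantization into a single fixed-point multiply-add can be verified with a little algebra; the sketch below uses made-up scales, zero-points, and BN statistics, and only the folding itself mirrors what the abstract describes.

```python
# Fold dequantize -> batch norm -> ReLU -> requantize into one
# multiply-add on the quantized value. Parameter values are illustrative.
import numpy as np

s_in, z_in   = 0.05, 10        # input scale / zero-point
s_out, z_out = 0.08, 12        # output scale / zero-point
gamma, beta  = 1.2, -0.3       # BN affine parameters
mu, var, eps = 0.4, 0.25, 1e-5 # BN running statistics

def reference(x_q):
    x = s_in * (x_q - z_in)                            # dequantize
    y = gamma * (x - mu) / np.sqrt(var + eps) + beta   # batch norm
    y = np.maximum(y, 0.0)                             # ReLU
    return np.round(y / s_out) + z_out                 # requantize

# fold everything into y_q = max(round(m * x_q + c), z_out)
m = (gamma / np.sqrt(var + eps)) * s_in / s_out
c = (beta - gamma * mu / np.sqrt(var + eps)) / s_out - m * z_in + z_out

def fused(x_q):
    return np.maximum(np.round(m * x_q + c), z_out)

x_q = np.arange(0, 64)
print(np.array_equal(reference(x_q), fused(x_q)))  # True
```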
|
2503.11710 | Yanxia Zhang | Yanxia Zhang, Francine Chen, Shabnam Hakimi, Totte Harinen, Alex
Filipowicz, Yan-Ying Chen, Rumen Iliev, Nikos Arechiga, Kalani Murakami, Kent
Lyons, Charlene Wu, Matt Klenk | ConjointNet: Enhancing Conjoint Analysis for Preference Prediction with
Representation Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding consumer preferences is essential to designing new products and
predicting the market response to them. Choice-based conjoint
analysis is widely used to model user preferences using their choices in
surveys. However, traditional conjoint estimation techniques assume simple
linear models. This assumption may lead to limited predictability and
inaccurate estimation of product attribute contributions, especially on data
that has underlying non-linear relationships. In this work, we employ
representation learning to efficiently alleviate this issue. We propose
ConjointNet, which is composed of two novel neural architectures, to predict
user preferences. We demonstrate that the proposed ConjointNet models
outperform traditional conjoint estimation techniques on two preference datasets
by over 5%, and offer insights into non-linear feature interactions.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 19:01:59 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Yanxia",
""
],
[
"Chen",
"Francine",
""
],
[
"Hakimi",
"Shabnam",
""
],
[
"Harinen",
"Totte",
""
],
[
"Filipowicz",
"Alex",
""
],
[
"Chen",
"Yan-Ying",
""
],
[
"Iliev",
"Rumen",
""
],
[
"Arechiga",
"Nikos",
""
],
[
"Murakami",
"Kalani",
""
],
[
"Lyons",
"Kent",
""
],
[
"Wu",
"Charlene",
""
],
[
"Klenk",
"Matt",
""
]
] | TITLE: ConjointNet: Enhancing Conjoint Analysis for Preference Prediction with
Representation Learning
ABSTRACT: Understanding consumer preferences is essential to designing new products and
predicting the market response to them. Choice-based conjoint
analysis is widely used to model user preferences using their choices in
surveys. However, traditional conjoint estimation techniques assume simple
linear models. This assumption may lead to limited predictability and
inaccurate estimation of product attribute contributions, especially on data
that has underlying non-linear relationships. In this work, we employ
representation learning to efficiently alleviate this issue. We propose
ConjointNet, which is composed of two novel neural architectures, to predict
user preferences. We demonstrate that the proposed ConjointNet models
outperform traditional conjoint estimation techniques on two preference datasets
by over 5%, and offer insights into non-linear feature interactions.
|
2503.11730 | Zekai Zhang | Zekai Zhang, Dan Li, Shunyu Wu, Junya Cai, Bo Zhang, See Kiong Ng and
Zibin Zheng | BACE-RUL: A Bi-directional Adversarial Network with Covariate Encoding
for Machine Remaining Useful Life Prediction | This paper has been accepted as a research paper at CollaborateCom
2024 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Prognostics and Health Management (PHM) is a crucial way to avoid unnecessary
maintenance for Cyber-Physical Systems (CPS) and to improve system reliability.
Predicting the Remaining Useful Life (RUL) is one of the most challenging tasks
for PHM. Existing methods require prior knowledge about the system, contrived
assumptions, or temporal mining to model the life cycles of machine
equipment/devices, resulting in diminished accuracy and limited applicability
in real-world scenarios. This paper proposes a Bi-directional Adversarial
network with Covariate Encoding for machine Remaining Useful Life (BACE-RUL)
prediction, which only adopts sensor measurements from the current life cycle
to predict RUL rather than relying on previous consecutive cycle recordings.
The current sensor measurements of mechanical devices are encoded into a
conditional space to better understand the implicit inner mechanical status.
The predictor is trained as a conditional generative network with the encoded
sensor measurements as its conditions. Various experiments on several
real-world datasets, including the turbofan aircraft engine dataset and the
dataset collected from degradation experiments of Li-Ion battery cells, show
that the proposed model is a general framework and outperforms state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:56:40 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Zekai",
""
],
[
"Li",
"Dan",
""
],
[
"Wu",
"Shunyu",
""
],
[
"Cai",
"Junya",
""
],
[
"Zhang",
"Bo",
""
],
[
"Ng",
"See Kiong",
""
],
[
"Zheng",
"Zibin",
""
]
] | TITLE: BACE-RUL: A Bi-directional Adversarial Network with Covariate Encoding
for Machine Remaining Useful Life Prediction
ABSTRACT: Prognostics and Health Management (PHM) is a crucial way to avoid unnecessary
maintenance for Cyber-Physical Systems (CPS) and to improve system reliability.
Predicting the Remaining Useful Life (RUL) is one of the most challenging tasks
for PHM. Existing methods require prior knowledge about the system, contrived
assumptions, or temporal mining to model the life cycles of machine
equipment/devices, resulting in diminished accuracy and limited applicability
in real-world scenarios. This paper proposes a Bi-directional Adversarial
network with Covariate Encoding for machine Remaining Useful Life (BACE-RUL)
prediction, which only adopts sensor measurements from the current life cycle
to predict RUL rather than relying on previous consecutive cycle recordings.
The current sensor measurements of mechanical devices are encoded into a
conditional space to better understand the implicit inner mechanical status.
The predictor is trained as a conditional generative network with the encoded
sensor measurements as its conditions. Various experiments on several
real-world datasets, including the turbofan aircraft engine dataset and the
dataset collected from degradation experiments of Li-Ion battery cells, show
that the proposed model is a general framework and outperforms state-of-the-art
methods.
|
2503.11731 | Qifeng Chen | Xianming Zeng, Sicong Du, Qifeng Chen, Lizhe Liu, Haoyu Shu, Jiaxuan
Gao, Jiarun Liu, Jiulong Xu, Jianyun Xu, Mingxia Chen, Yiru Zhao, Peng Chen,
Yapeng Xue, Chunming Zhao, Sheng Yang, Qiang Li | Industrial-Grade Sensor Simulation via Gaussian Splatting: A Modular
Framework for Scalable Editing and Full-Stack Validation | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensor simulation is pivotal for scalable validation of autonomous driving
systems, yet existing Neural Radiance Fields (NeRF) based methods face
applicability and efficiency challenges in industrial workflows. This paper
introduces a Gaussian Splatting (GS) based system to address these challenges:
We first break down sensor simulator components and analyze the possible
advantages of GS over NeRF. Then in practice, we refactor three crucial
components through GS, to leverage its explicit scene representation and
real-time rendering: (1) choosing the 2D neural Gaussian representation for
physics-compliant scene and sensor modeling, (2) proposing a scene editing
pipeline that leverages a Gaussian primitives library for data augmentation, and (3)
coupling a controllable diffusion model for scene expansion and harmonization.
We implement this framework on a proprietary autonomous driving dataset
supporting cameras and LiDAR sensors. We demonstrate through ablation studies
that our approach reduces frame-wise simulation latency, achieves better
geometric and photometric consistency, and enables interpretable explicit scene
editing and expansion. Furthermore, we showcase how integrating such a GS-based
sensor simulator with traffic and dynamic simulators enables full-stack testing
of end-to-end autonomy algorithms. Our work provides both algorithmic insights
and practical validation, establishing GS as a cornerstone for industrial-grade
sensor simulation.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:10:22 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zeng",
"Xianming",
""
],
[
"Du",
"Sicong",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Liu",
"Lizhe",
""
],
[
"Shu",
"Haoyu",
""
],
[
"Gao",
"Jiaxuan",
""
],
[
"Liu",
"Jiarun",
""
],
[
"Xu",
"Jiulong",
""
],
[
"Xu",
"Jianyun",
""
],
[
"Chen",
"Mingxia",
""
],
[
"Zhao",
"Yiru",
""
],
[
"Chen",
"Peng",
""
],
[
"Xue",
"Yapeng",
""
],
[
"Zhao",
"Chunming",
""
],
[
"Yang",
"Sheng",
""
],
[
"Li",
"Qiang",
""
]
] | TITLE: Industrial-Grade Sensor Simulation via Gaussian Splatting: A Modular
Framework for Scalable Editing and Full-Stack Validation
ABSTRACT: Sensor simulation is pivotal for scalable validation of autonomous driving
systems, yet existing Neural Radiance Fields (NeRF) based methods face
applicability and efficiency challenges in industrial workflows. This paper
introduces a Gaussian Splatting (GS) based system to address these challenges:
We first break down sensor simulator components and analyze the possible
advantages of GS over NeRF. Then in practice, we refactor three crucial
components through GS, to leverage its explicit scene representation and
real-time rendering: (1) choosing the 2D neural Gaussian representation for
physics-compliant scene and sensor modeling, (2) proposing a scene editing
pipeline that leverages a Gaussian primitives library for data augmentation, and (3)
coupling a controllable diffusion model for scene expansion and harmonization.
We implement this framework on a proprietary autonomous driving dataset
supporting cameras and LiDAR sensors. We demonstrate through ablation studies
that our approach reduces frame-wise simulation latency, achieves better
geometric and photometric consistency, and enables interpretable explicit scene
editing and expansion. Furthermore, we showcase how integrating such a GS-based
sensor simulator with traffic and dynamic simulators enables full-stack testing
of end-to-end autonomy algorithms. Our work provides both algorithmic insights
and practical validation, establishing GS as a cornerstone for industrial-grade
sensor simulation.
|
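The record above hinges on splatting explicit Gaussian primitives for real-time rendering. As a hedged illustration of that core idea only, and not the paper's system, the NumPy sketch below alpha-composites axis-aligned 2D Gaussians onto an image plane; the primitive parameters, resolution, and depth-sorted compositing order are all assumptions made for demonstration.

```python
import numpy as np

H, W = 64, 96  # assumed image resolution for this toy example

# Each explicit primitive: 2D mean, per-axis scale, RGB color, opacity, depth.
rng = np.random.default_rng(0)
n = 50
means = rng.uniform([0, 0], [W, H], size=(n, 2))
scales = rng.uniform(2.0, 6.0, size=(n, 2))
colors = rng.uniform(0.0, 1.0, size=(n, 3))
opacities = rng.uniform(0.3, 0.9, size=n)
depths = rng.uniform(1.0, 10.0, size=n)

ys, xs = np.mgrid[0:H, 0:W]
image = np.zeros((H, W, 3))
transmittance = np.ones((H, W))

# Splat each Gaussian front-to-back, accumulating color by alpha compositing.
for i in np.argsort(depths):
    dx = (xs - means[i, 0]) / scales[i, 0]
    dy = (ys - means[i, 1]) / scales[i, 1]
    alpha = opacities[i] * np.exp(-0.5 * (dx**2 + dy**2))  # Gaussian footprint
    image += (transmittance * alpha)[..., None] * colors[i]
    transmittance *= 1.0 - alpha

print("rendered image:", image.shape, "value range:",
      round(float(image.min()), 3), "-", round(float(image.max()), 3))
```

Because the primitives are explicit (a flat list of means, scales, colors), editing the scene amounts to adding, removing, or transforming rows of these arrays, which is the property the abstract's editing pipeline exploits.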
2503.11732 | Andrew Starkey | Andrew Starkey, Uduak Idio Akpan, Omaimah AL Hosni and Yaseen
Pullissery | Class-Level Feature Selection Method Using Feature Weighted Growing
Self-Organising Maps | 14 pages, 15 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | There have been several attempts to develop Feature Selection (FS) algorithms
capable of identifying the relevant features in a dataset. Although FS
algorithms succeed in certain applications, they share a basic limitation:
global feature selection algorithms seek features that are relevant and common
to all classes of the dataset. This is a major limitation, since a feature may
be specifically useful for one class while irrelevant for others, so the
relationship at class level cannot be fully explained. While including such
features for all classes could improve predictive ability for the relevant
class, the same features could be problematic for other classes. In this paper,
we examine this issue and develop a class-level feature selection method called
the Feature Weighted Growing Self-Organising Map (FWGSOM). The proposed method
carries out feature analysis at class level, which enhances its ability to
identify relevant features for each class. Experimental results indicate that
our method outperforms comparable methods, gives explainable results at class
level, and has a low computational footprint.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:02:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Starkey",
"Andrew",
""
],
[
"Akpan",
"Uduak Idio",
""
],
[
"Hosni",
"Omaimah AL",
""
],
[
"Pullissery",
"Yaseen",
""
]
] | TITLE: Class-Level Feature Selection Method Using Feature Weighted Growing
Self-Organising Maps
ABSTRACT: There have been several attempts to develop Feature Selection (FS) algorithms
capable of identifying the relevant features in a dataset. Although FS
algorithms succeed in certain applications, they share a basic limitation:
global feature selection algorithms seek features that are relevant and common
to all classes of the dataset. This is a major limitation, since a feature may
be specifically useful for one class while irrelevant for others, so the
relationship at class level cannot be fully explained. While including such
features for all classes could improve predictive ability for the relevant
class, the same features could be problematic for other classes. In this paper,
we examine this issue and develop a class-level feature selection method called
the Feature Weighted Growing Self-Organising Map (FWGSOM). The proposed method
carries out feature analysis at class level, which enhances its ability to
identify relevant features for each class. Experimental results indicate that
our method outperforms comparable methods, gives explainable results at class
level, and has a low computational footprint.
|
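To make the class-level idea in the record above concrete, the sketch below scores each feature separately per class using a one-vs-rest standardized mean-difference score. This scoring rule is a simple stand-in chosen for illustration; it is not the FWGSOM itself, which derives class-level weights from a feature-weighted growing self-organising map.

```python
import numpy as np

def class_level_scores(X, y):
    """Return (classes, scores) where scores[k, j] rates feature j for class k."""
    classes = np.unique(y)
    scores = np.zeros((len(classes), X.shape[1]))
    for k, c in enumerate(classes):
        in_c, rest = X[y == c], X[y != c]
        # Standardized mean difference: how well feature j separates
        # class c from the remaining classes (one-vs-rest).
        pooled = np.sqrt(0.5 * (in_c.var(axis=0) + rest.var(axis=0))) + 1e-12
        scores[k] = np.abs(in_c.mean(axis=0) - rest.mean(axis=0)) / pooled
    return classes, scores

# Toy data: feature 0 separates class 0, feature 1 separates class 1, and
# feature 2 is noise -- a global selector would rank features identically
# for every class, while class-level scores differ per class.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = rng.integers(0, 2, size=300)
X[y == 0, 0] += 3.0
X[y == 1, 1] += 3.0

classes, S = class_level_scores(X, y)
for c, row in zip(classes, S):
    print(f"class {c}: most relevant feature = {row.argmax()}, "
          f"scores = {np.round(row, 2)}")
```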
2503.11733 | Zhendong Chu | Zhendong Chu, Shen Wang, Jian Xie, Tinghui Zhu, Yibo Yan, Jinheng Ye,
Aoxiao Zhong, Xuming Hu, Jing Liang, Philip S. Yu, Qingsong Wen | LLM Agents for Education: Advances and Applications | 17 pages | null | null | null | cs.CY cs.AI cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | Large Language Model (LLM) agents have demonstrated remarkable capabilities
in automating tasks and driving innovation across diverse educational
applications. In this survey, we provide a systematic review of
state-of-the-art research on LLM agents in education, categorizing them into
two broad classes: (1) \emph{Pedagogical Agents}, which focus on automating
complex pedagogical tasks to support both teachers and students; and (2)
\emph{Domain-Specific Educational Agents}, which are tailored for specialized
fields such as science education, language learning, and professional
development. We comprehensively examine the technological advancements
underlying these LLM agents, including key datasets, benchmarks, and
algorithmic frameworks that drive their effectiveness. Furthermore, we discuss
critical challenges such as privacy, bias and fairness concerns, hallucination
mitigation, and integration with existing educational ecosystems. This survey
aims to provide a comprehensive technological overview of LLM agents for
education, fostering further research and collaboration to enhance their impact
for the greater good of learners and educators alike.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:53:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chu",
"Zhendong",
""
],
[
"Wang",
"Shen",
""
],
[
"Xie",
"Jian",
""
],
[
"Zhu",
"Tinghui",
""
],
[
"Yan",
"Yibo",
""
],
[
"Ye",
"Jinheng",
""
],
[
"Zhong",
"Aoxiao",
""
],
[
"Hu",
"Xuming",
""
],
[
"Liang",
"Jing",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Wen",
"Qingsong",
""
]
] | TITLE: LLM Agents for Education: Advances and Applications
ABSTRACT: Large Language Model (LLM) agents have demonstrated remarkable capabilities
in automating tasks and driving innovation across diverse educational
applications. In this survey, we provide a systematic review of
state-of-the-art research on LLM agents in education, categorizing them into
two broad classes: (1) \emph{Pedagogical Agents}, which focus on automating
complex pedagogical tasks to support both teachers and students; and (2)
\emph{Domain-Specific Educational Agents}, which are tailored for specialized
fields such as science education, language learning, and professional
development. We comprehensively examine the technological advancements
underlying these LLM agents, including key datasets, benchmarks, and
algorithmic frameworks that drive their effectiveness. Furthermore, we discuss
critical challenges such as privacy, bias and fairness concerns, hallucination
mitigation, and integration with existing educational ecosystems. This survey
aims to provide a comprehensive technological overview of LLM agents for
education, fostering further research and collaboration to enhance their impact
for the greater good of learners and educators alike.
|
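As a hedged sketch of the pedagogical-agent pattern the survey above covers, the snippet below routes a student request to a task-specific prompt template before calling a language model. The `call_llm` stub, the template texts, and the task names are hypothetical placeholders for illustration, not an API taken from the survey.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real agent would call an LLM API here.
    return f"[LLM response to: {prompt[:40]}...]"

# Task-specific prompt templates (illustrative only).
TASK_PROMPTS = {
    "hint": "Give a hint, not the full answer, for: {q}",
    "quiz": "Write one practice question on: {q}",
    "explain": "Explain step by step: {q}",
}

def pedagogical_agent(student_input: str, task: str = "explain") -> str:
    """Route a student request to the matching pedagogical task prompt."""
    template = TASK_PROMPTS.get(task, TASK_PROMPTS["explain"])
    return call_llm(template.format(q=student_input))

print(pedagogical_agent("Why does the moon have phases?", task="hint"))
```

Routing requests through explicit task templates keeps the agent's pedagogical intent (hint vs. answer) inspectable, which matters for the fairness and integration concerns the survey raises.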