Dataset schema (column name, type, and observed value range):

Column | Type | Values
---|---|---
id | string | length 9–16
submitter | string (nullable) | length 3–64
authors | string | length 5–6.63k
title | string | length 7–245
comments | string (nullable) | length 1–482
journal-ref | string (nullable) | length 4–382
doi | string (nullable) | length 9–151
report-no | string | 984 distinct values
categories | string | length 5–108
license | string | 9 distinct values
abstract | string | length 83–3.41k
versions | list | length 1–20
update_date | timestamp[s] | 2007-05-23 to 2025-04-11
authors_parsed | list | length 1–427
prompt | string | length 166–3.49k
label | string | 2 distinct values
prob | float64 | 0.5–0.98
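Each row below pairs an arXiv record with a prompt (title plus abstract), a binary label (the rows shown use the values new_dataset and no_new_dataset), and a model confidence prob. A minimal sketch of how such a table could be loaded and filtered with pandas is given below; the JSON Lines file name and the 0.9 probability threshold are illustrative assumptions, not part of the original record.

```python
import pandas as pd

# Hypothetical export of the table below to JSON Lines; substitute the actual file.
df = pd.read_json("arxiv_label_prob.jsonl", lines=True)

# Columns listed in the schema above.
print(df.dtypes)

# Keep high-confidence rows whose abstracts were labeled as introducing a new dataset.
new_ds = df[(df["label"] == "new_dataset") & (df["prob"] >= 0.9)]
print(new_ds[["id", "title", "prob"]].head())
```

The 0.9 cut-off is only illustrative; the prob values observed in this table range from 0.5 to 0.98.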
id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2411.00803 | Hao Wang | Hao Wang, Jiajun Zhong, Yikun Li, Junrong Zhang, Rong Du | Designing a Dataset for Convolutional Neural Networks to Predict Space
Groups Consistent with Extinction Laws | 17 pages, 10 figures | null | null | null | cs.NE physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a dataset of one-dimensional powder diffraction patterns was
designed with a new strategy to train Convolutional Neural Networks for
predicting space groups. The diffraction pattern was calculated based on
lattice parameters and Extinction Laws, instead of the traditional approach of
generating it from a crystallographic database. This paper demonstrates that
the new strategy is more effective than the conventional method. As a result,
the model trained on the cubic and tetragonal training set from the newly
designed dataset achieves prediction accuracy that matches the theoretical
maximums calculated based on Extinction Laws. These results demonstrate that
machine learning-based prediction can be both physically reasonable and
reliable. Additionally, the model trained on our newly designed dataset shows
excellent generalization capability, much better than the one trained on a
traditionally designed dataset.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 05:32:39 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Nov 2024 06:49:25 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 00:52:44 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Hao",
""
],
[
"Zhong",
"Jiajun",
""
],
[
"Li",
"Yikun",
""
],
[
"Zhang",
"Junrong",
""
],
[
"Du",
"Rong",
""
]
]
| TITLE: Designing a Dataset for Convolutional Neural Networks to Predict Space
Groups Consistent with Extinction Laws
ABSTRACT: In this paper, a dataset of one-dimensional powder diffraction patterns was
designed with a new strategy to train Convolutional Neural Networks for
predicting space groups. The diffraction pattern was calculated based on
lattice parameters and Extinction Laws, instead of the traditional approach of
generating it from a crystallographic database. This paper demonstrates that
the new strategy is more effective than the conventional method. As a result,
the model trained on the cubic and tetragonal training set from the newly
designed dataset achieves prediction accuracy that matches the theoretical
maximums calculated based on Extinction Laws. These results demonstrate that
machine learning-based prediction can be both physically reasonable and
reliable. Additionally, the model trained on our newly designed dataset shows
excellent generalization capability, much better than the one trained on a
traditionally designed dataset.
| new_dataset | 0.956227 |
2411.04376 | Rui Luo | Rui Luo, Jie Bao, Zhixin Zhou, Chuangyin Dang | Game-Theoretic Defenses for Robust Conformal Prediction Against
Adversarial Attacks in Medical Imaging | null | null | null | null | cs.LG cs.CR eess.IV | http://creativecommons.org/licenses/by/4.0/ | Adversarial attacks pose significant threats to the reliability and safety of
deep learning models, especially in critical domains such as medical imaging.
This paper introduces a novel framework that integrates conformal prediction
with game-theoretic defensive strategies to enhance model robustness against
both known and unknown adversarial perturbations. We address three primary
research questions: constructing valid and efficient conformal prediction sets
under known attacks (RQ1), ensuring coverage under unknown attacks through
conservative thresholding (RQ2), and determining optimal defensive strategies
within a zero-sum game framework (RQ3). Our methodology involves training
specialized defensive models against specific attack types and employing
maximum and minimum classifiers to aggregate defenses effectively. Extensive
experiments conducted on the MedMNIST datasets, including PathMNIST,
OrganAMNIST, and TissueMNIST, demonstrate that our approach maintains high
coverage guarantees while minimizing prediction set sizes. The game-theoretic
analysis reveals that the optimal defensive strategy often converges to a
singular robust model, outperforming uniform and simple strategies across all
evaluated datasets. This work advances the state-of-the-art in uncertainty
quantification and adversarial robustness, providing a reliable mechanism for
deploying deep learning models in adversarial environments.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 02:20:04 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 04:56:21 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Luo",
"Rui",
""
],
[
"Bao",
"Jie",
""
],
[
"Zhou",
"Zhixin",
""
],
[
"Dang",
"Chuangyin",
""
]
]
| TITLE: Game-Theoretic Defenses for Robust Conformal Prediction Against
Adversarial Attacks in Medical Imaging
ABSTRACT: Adversarial attacks pose significant threats to the reliability and safety of
deep learning models, especially in critical domains such as medical imaging.
This paper introduces a novel framework that integrates conformal prediction
with game-theoretic defensive strategies to enhance model robustness against
both known and unknown adversarial perturbations. We address three primary
research questions: constructing valid and efficient conformal prediction sets
under known attacks (RQ1), ensuring coverage under unknown attacks through
conservative thresholding (RQ2), and determining optimal defensive strategies
within a zero-sum game framework (RQ3). Our methodology involves training
specialized defensive models against specific attack types and employing
maximum and minimum classifiers to aggregate defenses effectively. Extensive
experiments conducted on the MedMNIST datasets, including PathMNIST,
OrganAMNIST, and TissueMNIST, demonstrate that our approach maintains high
coverage guarantees while minimizing prediction set sizes. The game-theoretic
analysis reveals that the optimal defensive strategy often converges to a
singular robust model, outperforming uniform and simple strategies across all
evaluated datasets. This work advances the state-of-the-art in uncertainty
quantification and adversarial robustness, providing a reliable mechanism for
deploying deep learning models in adversarial environments.
| no_new_dataset | 0.941761 |
2411.12415 | Mustafa M. Abd Zaid | Mustafa M. Abd Zaid, Ahmed Abed Mohammed, Putra Sumari | Classification of Geographical Land Structure Using Convolution Neural
Network and Transfer Learning | null | J. Comput. Sci., 20(12), 1580-1592, 2024 | 10.3844/jcssp.2024.1580.1592 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Satellite imagery has dramatically revolutionized the field of geography by
giving academics, scientists, and policymakers unprecedented global access to
spatial data. Manual methods typically require significant time and effort to
detect the generic land structure in satellite images. This study can support a
range of applications such as urban planning and development, environmental
monitoring, disaster management, etc. Therefore, the research presents a
methodology to minimize human labor, reducing the expenses and duration needed
to identify the land structure. This article developed a deep learning-based
approach to automate the process of classifying geographical land structures.
We used a satellite image dataset acquired from MLRSNet. The study compared the
performance of three architectures, namely CNN, ResNet-50, and Inception-v3. We
used three optimizers with each model: Adam, SGD, and RMSProp. We conducted the
training process for a fixed number of epochs, specifically 100 epochs, with a
batch size of 64. The ResNet-50 achieved an accuracy of 76.5% with the ADAM
optimizer, the Inception-v3 with RMSProp achieved an accuracy of 93.8%, and the
proposed approach, CNN with RMSProp optimizer, achieved the highest level of
performance and an accuracy of 94.8%. Moreover, a thorough examination of the
CNN model demonstrated its exceptional accuracy, recall, and F1 scores for all
categories, confirming its resilience and dependability in precisely detecting
various terrain formations. The results highlight the potential of deep
learning models in scene understanding, as well as their significance in
efficiently identifying and categorizing land structures from satellite
imagery.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2024 11:01:30 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zaid",
"Mustafa M. Abd",
""
],
[
"Mohammed",
"Ahmed Abed",
""
],
[
"Sumari",
"Putra",
""
]
]
| TITLE: Classification of Geographical Land Structure Using Convolution Neural
Network and Transfer Learning
ABSTRACT: Satellite imagery has dramatically revolutionized the field of geography by
giving academics, scientists, and policymakers unprecedented global access to
spatial data. Manual methods typically require significant time and effort to
detect the generic land structure in satellite images. This study can support a
range of applications such as urban planning and development, environmental
monitoring, disaster management, etc. Therefore, the research presents a
methodology to minimize human labor, reducing the expenses and duration needed
to identify the land structure. This article developed a deep learning-based
approach to automate the process of classifying geographical land structures.
We used a satellite image dataset acquired from MLRSNet. The study compared the
performance of three architectures, namely CNN, ResNet-50, and Inception-v3. We
used three optimizers with each model: Adam, SGD, and RMSProp. We conducted the
training process for a fixed number of epochs, specifically 100 epochs, with a
batch size of 64. The ResNet-50 achieved an accuracy of 76.5% with the ADAM
optimizer, the Inception-v3 with RMSProp achieved an accuracy of 93.8%, and the
proposed approach, CNN with RMSProp optimizer, achieved the highest level of
performance and an accuracy of 94.8%. Moreover, a thorough examination of the
CNN model demonstrated its exceptional accuracy, recall, and F1 scores for all
categories, confirming its resilience and dependability in precisely detecting
various terrain formations. The results highlight the potential of deep
learning models in scene understanding, as well as their significance in
efficiently identifying and categorizing land structures from satellite
imagery.
| no_new_dataset | 0.949059 |
2411.17196 | Yaowei Jin | Yaowei Jin, Qi Huang, Ziyang Song, Mingyue Zheng, Dan Teng, Qian Shi | P2DFlow: A Protein Ensemble Generative Model with SE(3) Flow Matching | null | null | null | null | physics.bio-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological processes, functions, and properties are intricately linked to the
ensemble of protein conformations, rather than being solely determined by a
single stable conformation. In this study, we have developed P2DFlow, a
generative model based on SE(3) flow matching, to predict the structural
ensembles of proteins. We specifically designed a valuable prior for the flow
process and enhanced the model's ability to distinguish each intermediate state
by incorporating an additional dimension to describe the ensemble data, which
can reflect the physical laws governing the distribution of ensembles, so that
the prior knowledge can effectively guide the generation process. When trained
and evaluated on the MD datasets of ATLAS, P2DFlow outperforms other baseline
models in extensive experiments, successfully capturing the observable dynamic
fluctuations as evidenced in crystal structure and MD simulations. As a
potential proxy agent for protein molecular simulation, the high-quality
ensembles generated by P2DFlow could significantly aid in understanding protein
functions across various scenarios. Code is available at
https://github.com/BLEACH366/P2DFlow
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 08:10:12 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 01:38:11 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Jin",
"Yaowei",
""
],
[
"Huang",
"Qi",
""
],
[
"Song",
"Ziyang",
""
],
[
"Zheng",
"Mingyue",
""
],
[
"Teng",
"Dan",
""
],
[
"Shi",
"Qian",
""
]
]
| TITLE: P2DFlow: A Protein Ensemble Generative Model with SE(3) Flow Matching
ABSTRACT: Biological processes, functions, and properties are intricately linked to the
ensemble of protein conformations, rather than being solely determined by a
single stable conformation. In this study, we have developed P2DFlow, a
generative model based on SE(3) flow matching, to predict the structural
ensembles of proteins. We specifically designed a valuable prior for the flow
process and enhanced the model's ability to distinguish each intermediate state
by incorporating an additional dimension to describe the ensemble data, which
can reflect the physical laws governing the distribution of ensembles, so that
the prior knowledge can effectively guide the generation process. When trained
and evaluated on the MD datasets of ATLAS, P2DFlow outperforms other baseline
models in extensive experiments, successfully capturing the observable dynamic
fluctuations as evidenced in crystal structure and MD simulations. As a
potential proxy agent for protein molecular simulation, the high-quality
ensembles generated by P2DFlow could significantly aid in understanding protein
functions across various scenarios. Code is available at
https://github.com/BLEACH366/P2DFlow
| no_new_dataset | 0.950411 |
2411.17598 | William Ingram | William A. Ingram, Bipasha Banerjee, Edward A. Fox | Agentic AI for Improving Precision in Identifying Contributions to
Sustainable Development Goals | null | null | 10.1109/BigData62323.2024.10825072 | null | cs.DL cs.AI cs.IR | http://creativecommons.org/licenses/by-sa/4.0/ | As research institutions increasingly commit to supporting the United
Nations' Sustainable Development Goals (SDGs), there is a pressing need to
accurately assess their research output against these goals. Current
approaches, primarily reliant on keyword-based Boolean search queries, conflate
incidental keyword matches with genuine contributions, reducing retrieval
precision and complicating benchmarking efforts. This study investigates the
application of autoregressive Large Language Models (LLMs) as evaluation agents
to identify relevant scholarly contributions to SDG targets in scholarly
publications. Using a dataset of academic abstracts retrieved via SDG-specific
keyword queries, we demonstrate that small, locally-hosted LLMs can
differentiate semantically relevant contributions to SDG targets from documents
retrieved due to incidental keyword matches, addressing the limitations of
traditional methods. By leveraging the contextual understanding of LLMs, this
approach provides a scalable framework for improving SDG-related research
metrics and informing institutional reporting.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 17:06:30 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ingram",
"William A.",
""
],
[
"Banerjee",
"Bipasha",
""
],
[
"Fox",
"Edward A.",
""
]
]
| TITLE: Agentic AI for Improving Precision in Identifying Contributions to
Sustainable Development Goals
ABSTRACT: As research institutions increasingly commit to supporting the United
Nations' Sustainable Development Goals (SDGs), there is a pressing need to
accurately assess their research output against these goals. Current
approaches, primarily reliant on keyword-based Boolean search queries, conflate
incidental keyword matches with genuine contributions, reducing retrieval
precision and complicating benchmarking efforts. This study investigates the
application of autoregressive Large Language Models (LLMs) as evaluation agents
to identify relevant scholarly contributions to SDG targets in scholarly
publications. Using a dataset of academic abstracts retrieved via SDG-specific
keyword queries, we demonstrate that small, locally-hosted LLMs can
differentiate semantically relevant contributions to SDG targets from documents
retrieved due to incidental keyword matches, addressing the limitations of
traditional methods. By leveraging the contextual understanding of LLMs, this
approach provides a scalable framework for improving SDG-related research
metrics and informing institutional reporting.
| no_new_dataset | 0.949295 |
2412.01007 | Revanth Reddy | Tarun Suresh, Revanth Gangi Reddy, Yifei Xu, Zach Nussbaum, Andriy
Mulyar, Brandon Duderstadt, Heng Ji | CoRNStack: High-Quality Contrastive Data for Better Code Retrieval and
Reranking | Published as a conference paper at ICLR 2025. First and second author
had equal contribution | null | null | null | cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | Effective code retrieval plays a crucial role in advancing code generation,
bug fixing, and software maintenance, particularly as software systems increase
in complexity. While current code embedding models have demonstrated promise in
retrieving code snippets for small-scale, well-defined tasks, they often
underperform in more demanding real-world applications such as bug localization
within GitHub repositories. We hypothesize that a key issue is their reliance
on noisy and inconsistent datasets for training, which impedes their ability to
generalize to more complex retrieval scenarios. To address these limitations,
we introduce CoRNStack, a large-scale, high-quality contrastive training
dataset for code that spans multiple programming languages. This dataset is
curated using consistency filtering to eliminate noisy positives and is further
enriched with mined hard negatives, thereby facilitating more effective
learning. We demonstrate that contrastive training of embedding models using
CoRNStack leads to state-of-the-art performance across a variety of code
retrieval tasks. Furthermore, the dataset can be leveraged for training code
reranking models, a largely underexplored area compared to text reranking. Our
finetuned code reranking model significantly improves the ranking quality over
the retrieved results. Finally, by employing our code retriever and reranker
together, we demonstrate significant improvements in function localization for
GitHub issues, an important component of real-world software development.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 23:54:12 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2024 20:01:42 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 00:36:44 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Suresh",
"Tarun",
""
],
[
"Reddy",
"Revanth Gangi",
""
],
[
"Xu",
"Yifei",
""
],
[
"Nussbaum",
"Zach",
""
],
[
"Mulyar",
"Andriy",
""
],
[
"Duderstadt",
"Brandon",
""
],
[
"Ji",
"Heng",
""
]
]
| TITLE: CoRNStack: High-Quality Contrastive Data for Better Code Retrieval and
Reranking
ABSTRACT: Effective code retrieval plays a crucial role in advancing code generation,
bug fixing, and software maintenance, particularly as software systems increase
in complexity. While current code embedding models have demonstrated promise in
retrieving code snippets for small-scale, well-defined tasks, they often
underperform in more demanding real-world applications such as bug localization
within GitHub repositories. We hypothesize that a key issue is their reliance
on noisy and inconsistent datasets for training, which impedes their ability to
generalize to more complex retrieval scenarios. To address these limitations,
we introduce CoRNStack, a large-scale, high-quality contrastive training
dataset for code that spans multiple programming languages. This dataset is
curated using consistency filtering to eliminate noisy positives and is further
enriched with mined hard negatives, thereby facilitating more effective
learning. We demonstrate that contrastive training of embedding models using
CoRNStack leads to state-of-the-art performance across a variety of code
retrieval tasks. Furthermore, the dataset can be leveraged for training code
reranking models, a largely underexplored area compared to text reranking. Our
finetuned code reranking model significantly improves the ranking quality over
the retrieved results. Finally, by employing our code retriever and reranker
together, we demonstrate significant improvements in function localization for
GitHub issues, an important component of real-world software development.
| new_dataset | 0.965964 |
2412.03881 | Changho Shin | Changho Shin, John Cooper, Frederic Sala | Weak-to-Strong Generalization Through the Data-Centric Lens | ICLR 2025 | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The weak-to-strong generalization phenomenon is the driver for important
machine learning applications including highly data-efficient learning and,
most recently, performing superalignment. While decades of research have
resulted in numerous algorithms that produce strong empirical performance,
understanding what aspects of data enable weak-to-strong generalization has
been understudied. We propose a simple data-centric mechanism that
characterizes weak-to-strong generalization: the overlap density. Intuitively,
generalization tracks the number of points that contain overlaps, i.e., both
easy patterns (learnable by a weak model) and challenging patterns (only
learnable by a stronger model), as with such points, weak predictions can be
used to learn challenging patterns by stronger models. We provide a practical
overlap detection algorithm to find such points in datasets and leverage them
to learn, among multiple sources of data, which to query when seeking to
maximize overlap density and thereby enhance weak-to-strong generalization. We
present a theoretical result showing that the generalization benefit is a
function of the overlap density and a regret bound for our data selection
algorithm. Empirically, we validate the mechanism and the overlap detection
algorithm on a wide array of settings.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 05:29:19 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 04:28:19 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Shin",
"Changho",
""
],
[
"Cooper",
"John",
""
],
[
"Sala",
"Frederic",
""
]
]
| TITLE: Weak-to-Strong Generalization Through the Data-Centric Lens
ABSTRACT: The weak-to-strong generalization phenomenon is the driver for important
machine learning applications including highly data-efficient learning and,
most recently, performing superalignment. While decades of research have
resulted in numerous algorithms that produce strong empirical performance,
understanding what aspects of data enable weak-to-strong generalization has
been understudied. We propose a simple data-centric mechanism that
characterizes weak-to-strong generalization: the overlap density. Intuitively,
generalization tracks the number of points that contain overlaps, i.e., both
easy patterns (learnable by a weak model) and challenging patterns (only
learnable by a stronger model), as with such points, weak predictions can be
used to learn challenging patterns by stronger models. We provide a practical
overlap detection algorithm to find such points in datasets and leverage them
to learn, among multiple sources of data, which to query when seeking to
maximize overlap density and thereby enhance weak-to-strong generalization. We
present a theoretical result showing that the generalization benefit is a
function of the overlap density and a regret bound for our data selection
algorithm. Empirically, we validate the mechanism and the overlap detection
algorithm on a wide array of settings.
| no_new_dataset | 0.945651 |
2412.03905 | Peng Liang | Qiong Feng, Xiaotian Ma, Jiayi Sheng, Ziyuan Feng, Wei Song, Peng
Liang | Integrating Various Software Artifacts for Better LLM-based Bug
Localization and Program Repair | 22 pages, 11 images, 9 tables, Manuscript submitted to a journal
(2024) | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | LLMs have garnered considerable attention for their potential to streamline
Automated Program Repair (APR). LLM-based approaches can either insert the
correct code or directly generate patches when provided with buggy methods.
However, most LLM-based APR methods rely on a single type of software
information, without fully leveraging different software artifacts. Despite
this, many LLM-based approaches do not explore which specific types of
information best assist in APR. Addressing this gap is crucial for advancing
LLM-based APR techniques. We propose DEVLoRe to use issue content (description
and message) and stack error traces to localize buggy methods, then rely on
debug information in buggy methods and issue content and stack error to
localize buggy lines and generate plausible patches which can pass all unit
tests. The results show that while issue content is particularly effective in
assisting LLMs with fault localization and program repair, different types of
software artifacts complement each other. By incorporating different artifacts,
DEVLoRe successfully locates 49.3% and 47.6% of single and non-single buggy
methods and generates 56.0% and 14.5% plausible patches for the Defects4J v2.0
dataset, respectively. This outperforms current state-of-the-art APR methods.
The source code and experimental results of this work for replication are
available at https://github.com/XYZboom/DEVLoRe.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 06:21:31 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 07:06:35 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Feng",
"Qiong",
""
],
[
"Ma",
"Xiaotian",
""
],
[
"Sheng",
"Jiayi",
""
],
[
"Feng",
"Ziyuan",
""
],
[
"Song",
"Wei",
""
],
[
"Liang",
"Peng",
""
]
]
| TITLE: Integrating Various Software Artifacts for Better LLM-based Bug
Localization and Program Repair
ABSTRACT: LLMs have garnered considerable attention for their potential to streamline
Automated Program Repair (APR). LLM-based approaches can either insert the
correct code or directly generate patches when provided with buggy methods.
However, most LLM-based APR methods rely on a single type of software
information, without fully leveraging different software artifacts. Despite
this, many LLM-based approaches do not explore which specific types of
information best assist in APR. Addressing this gap is crucial for advancing
LLM-based APR techniques. We propose DEVLoRe to use issue content (description
and message) and stack error traces to localize buggy methods, then rely on
debug information in buggy methods and issue content and stack error to
localize buggy lines and generate plausible patches which can pass all unit
tests. The results show that while issue content is particularly effective in
assisting LLMs with fault localization and program repair, different types of
software artifacts complement each other. By incorporating different artifacts,
DEVLoRe successfully locates 49.3% and 47.6% of single and non-single buggy
methods and generates 56.0% and 14.5% plausible patches for the Defects4J v2.0
dataset, respectively. This outperforms current state-of-the-art APR methods.
The source code and experimental results of this work for replication are
available at https://github.com/XYZboom/DEVLoRe.
| no_new_dataset | 0.940188 |
2412.05313 | Ahmed Jaafar | Ahmed Jaafar, Shreyas Sundara Raman, Yichen Wei, Sudarshan Harithas,
Sofia Juliani, Anneke Wernerfelt, Benedict Quartey, Ifrah Idrees, Jason Xinyu
Liu, Stefanie Tellex | {\lambda}: A Benchmark for Data-Efficiency in Long-Horizon Indoor Mobile
Manipulation Robotics | 8 pages | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Learning to execute long-horizon mobile manipulation tasks is crucial for
advancing robotics in household and workplace settings. However, current
approaches are typically data-inefficient, underscoring the need for improved
models that require realistically sized benchmarks to evaluate their
efficiency. To address this, we introduce the LAMBDA ({\lambda})
benchmark-Long-horizon Actions for Mobile-manipulation Benchmarking of Directed
Activities-which evaluates the data efficiency of models on
language-conditioned, long-horizon, multi-room, multi-floor, pick-and-place
tasks using a dataset of manageable size, more feasible for collection. Our
benchmark includes 571 human-collected demonstrations that provide realism and
diversity in simulated and real-world settings. Unlike planner-generated data,
these trajectories offer natural variability and replay-verifiability, ensuring
robust learning and evaluation. We leverage LAMBDA to benchmark current
end-to-end learning methods and a modular neuro-symbolic approach that
combines foundation models with task and motion planning. We find that
end-to-end methods-even when pretrained-yield lower success rates, while
neuro-symbolic methods perform significantly better and require less data.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 19:31:50 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jan 2025 15:16:49 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Jan 2025 18:57:23 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Jan 2025 18:53:40 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Feb 2025 18:54:17 GMT"
},
{
"version": "v6",
"created": "Tue, 4 Mar 2025 17:33:11 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Jaafar",
"Ahmed",
""
],
[
"Raman",
"Shreyas Sundara",
""
],
[
"Wei",
"Yichen",
""
],
[
"Harithas",
"Sudarshan",
""
],
[
"Juliani",
"Sofia",
""
],
[
"Wernerfelt",
"Anneke",
""
],
[
"Quartey",
"Benedict",
""
],
[
"Idrees",
"Ifrah",
""
],
[
"Liu",
"Jason Xinyu",
""
],
[
"Tellex",
"Stefanie",
""
]
]
| TITLE: {\lambda}: A Benchmark for Data-Efficiency in Long-Horizon Indoor Mobile
Manipulation Robotics
ABSTRACT: Learning to execute long-horizon mobile manipulation tasks is crucial for
advancing robotics in household and workplace settings. However, current
approaches are typically data-inefficient, underscoring the need for improved
models that require realistically sized benchmarks to evaluate their
efficiency. To address this, we introduce the LAMBDA ({\lambda})
benchmark-Long-horizon Actions for Mobile-manipulation Benchmarking of Directed
Activities-which evaluates the data efficiency of models on
language-conditioned, long-horizon, multi-room, multi-floor, pick-and-place
tasks using a dataset of manageable size, more feasible for collection. Our
benchmark includes 571 human-collected demonstrations that provide realism and
diversity in simulated and real-world settings. Unlike planner-generated data,
these trajectories offer natural variability and replay-verifiability, ensuring
robust learning and evaluation. We leverage LAMBDA to benchmark current
end-to-end learning methods and a modular neuro-symbolic approach that
combines foundation models with task and motion planning. We find that
end-to-end methods-even when pretrained-yield lower success rates, while
neuro-symbolic methods perform significantly better and require less data.
| no_new_dataset | 0.921464 |
2412.11694 | Shixin Jiang | Shixin Jiang, Jiafeng Liang, Jiyuan Wang, Xuan Dong, Heng Chang,
Weijiang Yu, Jinhua Du, Ming Liu, Bing Qin | From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with
Multi-modalities | 35 pages | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To tackle complex tasks in real-world scenarios, more researchers are
focusing on Omni-MLLMs, which aim to achieve omni-modal understanding and
generation. Beyond the constraints of any specific non-linguistic modality,
Omni-MLLMs map various non-linguistic modalities into the embedding space of
LLMs and enable the interaction and understanding of arbitrary combinations of
modalities within a single model. In this paper, we systematically investigate
relevant research and provide a comprehensive survey of Omni-MLLMs.
Specifically, we first explain the four core components of Omni-MLLMs for
unified multi-modal modeling with a meticulous taxonomy that offers novel
perspectives. Then, we introduce the effective integration achieved through
two-stage training and discuss the corresponding datasets as well as
evaluation. Furthermore, we summarize the main challenges of current Omni-MLLMs
and outline future directions. We hope this paper serves as an introduction for
beginners and promotes the advancement of related research. Resources have been
made publicly available at https://github.com/threegold116/Awesome-Omni-MLLMs.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 12:12:45 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Feb 2025 16:30:38 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 01:47:20 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Jiang",
"Shixin",
""
],
[
"Liang",
"Jiafeng",
""
],
[
"Wang",
"Jiyuan",
""
],
[
"Dong",
"Xuan",
""
],
[
"Chang",
"Heng",
""
],
[
"Yu",
"Weijiang",
""
],
[
"Du",
"Jinhua",
""
],
[
"Liu",
"Ming",
""
],
[
"Qin",
"Bing",
""
]
]
| TITLE: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with
Multi-modalities
ABSTRACT: To tackle complex tasks in real-world scenarios, more researchers are
focusing on Omni-MLLMs, which aim to achieve omni-modal understanding and
generation. Beyond the constraints of any specific non-linguistic modality,
Omni-MLLMs map various non-linguistic modalities into the embedding space of
LLMs and enable the interaction and understanding of arbitrary combinations of
modalities within a single model. In this paper, we systematically investigate
relevant research and provide a comprehensive survey of Omni-MLLMs.
Specifically, we first explain the four core components of Omni-MLLMs for
unified multi-modal modeling with a meticulous taxonomy that offers novel
perspectives. Then, we introduce the effective integration achieved through
two-stage training and discuss the corresponding datasets as well as
evaluation. Furthermore, we summarize the main challenges of current Omni-MLLMs
and outline future directions. We hope this paper serves as an introduction for
beginners and promotes the advancement of related research. Resources have been
made publicly available at https://github.com/threegold116/Awesome-Omni-MLLMs.
| no_new_dataset | 0.942981 |
2412.13838 | Prajwal Pisal | Prajwal Pisal, Ondrej Krejci, Patrick Rinke | Machine-learning Accelerated Descriptor Design for Catalyst Discovery: A
CO$_2$ to Methanol Conversion Case Study | 23 pages, 5 figures + 6 pages, 1 figure (supplementary). Revised
version: 1. Expanded intro on ML in heterogeneous catalysis. 2. Improved
adsorbate explanation in Search Space Selection. 3. More comprehensive
discussion in Unsupervised Learning. 4. Added material facets/selectivity
discussion in Statistical Analysis. 5. Clarified adsorption energies & ML
force field in Methods | null | null | null | physics.chem-ph cond-mat.mtrl-sci physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Transforming CO$_2$ into methanol represents a crucial step towards closing
the carbon cycle, with thermoreduction technology nearing industrial
application. However, obtaining high methanol yields and ensuring the stability
of heterocatalysts remain significant challenges. Herein, we present a
sophisticated computational framework to accelerate the discovery of novel
thermal heterogeneous catalysts, using machine-learned force fields. We propose
a new catalytic descriptor, termed adsorption energy distribution, that
aggregates the binding energies for different catalyst facets, binding sites,
and adsorbates. The descriptor is versatile and can easily be adjusted to a
specific reaction through careful choice of the key-step reactants and reaction
intermediates. By applying unsupervised machine learning and statistical
analysis to a dataset comprising nearly 160 metallic alloys, we offer a
powerful tool for catalyst discovery. Finally, we propose new promising
candidate materials such as ZnRh and ZnPt$_3$, which to our knowledge, have not
yet been tested, and discuss their possible advantage in terms of stability.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 13:30:48 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 13:34:17 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Pisal",
"Prajwal",
""
],
[
"Krejci",
"Ondrej",
""
],
[
"Rinke",
"Patrick",
""
]
]
| TITLE: Machine-learning Accelerated Descriptor Design for Catalyst Discovery: A
CO$_2$ to Methanol Conversion Case Study
ABSTRACT: Transforming CO$_2$ into methanol represents a crucial step towards closing
the carbon cycle, with thermoreduction technology nearing industrial
application. However, obtaining high methanol yields and ensuring the stability
of heterocatalysts remain significant challenges. Herein, we present a
sophisticated computational framework to accelerate the discovery of novel
thermal heterogeneous catalysts, using machine-learned force fields. We propose
a new catalytic descriptor, termed adsorption energy distribution, that
aggregates the binding energies for different catalyst facets, binding sites,
and adsorbates. The descriptor is versatile and can easily be adjusted to a
specific reaction through careful choice of the key-step reactants and reaction
intermediates. By applying unsupervised machine learning and statistical
analysis to a dataset comprising nearly 160 metallic alloys, we offer a
powerful tool for catalyst discovery. Finally, we propose new promising
candidate materials such as ZnRh and ZnPt$_3$, which to our knowledge, have not
yet been tested, and discuss their possible advantage in terms of stability.
| no_new_dataset | 0.945045 |
2412.15267 | Hankun Kang | Hankun Kang, Jianhao Chen, Yongqi Li, Xin Miao, Mayi Xu, Ming Zhong,
Yuanyuan Zhu, Tieyun Qian | Toxicity Detection towards Adaptability to Changing Perturbations | There are still some flaws in the uploaded content, which may cause
confusion for readers. To be rigorous, we need to retract the paper for
optimization and improvement | null | null | null | cs.CR cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Toxicity detection is crucial for maintaining peace in society. While
existing methods perform well on normal toxic contents or those generated by
specific perturbation methods, they are vulnerable to evolving perturbation
patterns. However, in real-world scenarios, malicious users tend to create new
perturbation patterns for fooling the detectors. For example, some users may
circumvent the detector of large language models (LLMs) by adding `I am a
scientist' at the beginning of the prompt. In this paper, we introduce a novel
problem, i.e., continual learning jailbreak perturbation patterns, into the
toxicity detection field. To tackle this problem, we first construct a new
dataset generated by 9 types of perturbation patterns, 7 of which are summarized
from prior work and 2 of which are developed by us. We then systematically
validate the vulnerability of current methods on this new perturbation
pattern-aware dataset via both zero-shot and fine-tuned cross-pattern
detection. Upon this, we present the domain incremental learning paradigm and
the corresponding benchmark to ensure the detector's robustness to dynamically
emerging types of perturbed toxic text. Our code and dataset are provided in
the appendix and will be publicly available at GitHub, by which we wish to
offer new research opportunities for the security-relevant communities.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 05:04:57 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jan 2025 09:18:05 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 04:49:58 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kang",
"Hankun",
""
],
[
"Chen",
"Jianhao",
""
],
[
"Li",
"Yongqi",
""
],
[
"Miao",
"Xin",
""
],
[
"Xu",
"Mayi",
""
],
[
"Zhong",
"Ming",
""
],
[
"Zhu",
"Yuanyuan",
""
],
[
"Qian",
"Tieyun",
""
]
]
| TITLE: Toxicity Detection towards Adaptability to Changing Perturbations
ABSTRACT: Toxicity detection is crucial for maintaining peace in society. While
existing methods perform well on normal toxic contents or those generated by
specific perturbation methods, they are vulnerable to evolving perturbation
patterns. However, in real-world scenarios, malicious users tend to create new
perturbation patterns for fooling the detectors. For example, some users may
circumvent the detector of large language models (LLMs) by adding `I am a
scientist' at the beginning of the prompt. In this paper, we introduce a novel
problem, i.e., continual learning jailbreak perturbation patterns, into the
toxicity detection field. To tackle this problem, we first construct a new
dataset generated by 9 types of perturbation patterns, 7 of which are summarized
from prior work and 2 of which are developed by us. We then systematically
validate the vulnerability of current methods on this new perturbation
pattern-aware dataset via both zero-shot and fine-tuned cross-pattern
detection. Upon this, we present the domain incremental learning paradigm and
the corresponding benchmark to ensure the detector's robustness to dynamically
emerging types of perturbed toxic text. Our code and dataset are provided in
the appendix and will be publicly available at GitHub, by which we wish to
offer new research opportunities for the security-relevant communities.
| new_dataset | 0.969757 |
2412.20903 | Ting Zhang | Zhiqiang Yuan, Ting Zhang, Ying Deng, Jiapei Zhang, Yeshuang Zhu, Zexi
Jia, Jie Zhou, Jinchao Zhang | WalkVLM:Aid Visually Impaired People Walking by Vision Language Model | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approximately 200 million individuals around the world suffer from varying
degrees of visual impairment, making it crucial to leverage AI technology to
offer walking assistance for these people. With the recent progress of
vision-language models (VLMs), applying VLMs to offer walking guidance has
become popular. However, the existing methods of walking guidance are mainly
based on self-curated question-answering datasets that are not publicly
accessible, without a standardized benchmark for training or evaluation.
Moreover, walking assistance often requires real-time streaming video analysis
and the generation of concise yet informative reminders, making VLMs struggle
due to excessive responses and low efficiency in inferences. In this paper, we
introduce the first large-scale dataset dedicated to walking assistance,
comprising 12,000 video-annotation pairs, to provide a unified benchmark for
training and evaluating systems to help visually-impaired individuals walk.
Furthermore, a WalkVLM model is proposed, which employs chain of thought for
hierarchical planning to generate concise but informative reminders and
utilizes temporal-aware adaptive prediction to reduce the temporal redundancy
of reminders. Finally, we have established a solid benchmark for the blind walking
task and verified the advantages of WalkVLM in stream video processing for this
task compared to other VLMs. Our dataset and code are available at
https://walkvlm2024.github.io.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 12:29:02 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Jan 2025 13:21:58 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Jan 2025 06:44:43 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Mar 2025 15:05:02 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yuan",
"Zhiqiang",
""
],
[
"Zhang",
"Ting",
""
],
[
"Deng",
"Ying",
""
],
[
"Zhang",
"Jiapei",
""
],
[
"Zhu",
"Yeshuang",
""
],
[
"Jia",
"Zexi",
""
],
[
"Zhou",
"Jie",
""
],
[
"Zhang",
"Jinchao",
""
]
]
| TITLE: WalkVLM:Aid Visually Impaired People Walking by Vision Language Model
ABSTRACT: Approximately 200 million individuals around the world suffer from varying
degrees of visual impairment, making it crucial to leverage AI technology to
offer walking assistance for these people. With the recent progress of
vision-language models (VLMs), applying VLMs to offer walking guidance has
become popular. However, the existing methods of walking guidance are mainly
based on self-curated question-answering datasets that are not publicly
accessible, without a standardized benchmark for training or evaluation.
Moreover, walking assistance often requires real-time streaming video analysis
and the generation of concise yet informative reminders, making VLMs struggle
due to excessive responses and low efficiency in inferences. In this paper, we
introduce the first large-scale dataset dedicated to walking assistance,
comprising 12,000 video-annotation pairs, to provide a unified benchmark for
training and evaluating systems to help visually-impaired individuals walk.
Furthermore, a WalkVLM model is proposed, which employs chain of thought for
hierarchical planning to generate concise but informative reminders and
utilizes temporal-aware adaptive prediction to reduce the temporal redundancy
of reminders. Finally, we have established a solid benchmark for the blind walking
task and verified the advantages of WalkVLM in stream video processing for this
task compared to other VLMs. Our dataset and code are available at
https://walkvlm2024.github.io.
| new_dataset | 0.958304 |
2501.01022 | Anna Grim | Anna Grim, Jayaram Chandrashekar, Uygar Sumbul | Efficient Connectivity-Preserving Instance Segmentation with
Supervoxel-Based Loss Function | null | AAAI 2025 | null | null | cs.CV q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Reconstructing the intricate local morphology of neurons and their long-range
projecting axons can address many connectivity related questions in
neuroscience. The main bottleneck in connectomics pipelines is correcting
topological errors, as segmenting multiple entangled neuronal arbors is a challenging
instance segmentation problem. More broadly, segmentation of curvilinear,
filamentous structures continues to pose significant challenges. To address
this problem, we extend the notion of simple points from digital topology to
connected sets of voxels (i.e. supervoxels) and propose a topology-aware neural
network segmentation method with minimal computational overhead. We demonstrate
its effectiveness on a new public dataset of 3-d light microscopy images of
mouse brains, along with the benchmark datasets DRIVE, ISBI12, and CrackTree.
| [
{
"version": "v1",
"created": "Thu, 2 Jan 2025 02:49:13 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jan 2025 22:05:46 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 16:59:53 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Grim",
"Anna",
""
],
[
"Chandrashekar",
"Jayaram",
""
],
[
"Sumbul",
"Uygar",
""
]
]
| TITLE: Efficient Connectivity-Preserving Instance Segmentation with
Supervoxel-Based Loss Function
ABSTRACT: Reconstructing the intricate local morphology of neurons and their long-range
projecting axons can address many connectivity related questions in
neuroscience. The main bottleneck in connectomics pipelines is correcting
topological errors, as segmenting multiple entangled neuronal arbors is a challenging
instance segmentation problem. More broadly, segmentation of curvilinear,
filamentous structures continues to pose significant challenges. To address
this problem, we extend the notion of simple points from digital topology to
connected sets of voxels (i.e. supervoxels) and propose a topology-aware neural
network segmentation method with minimal computational overhead. We demonstrate
its effectiveness on a new public dataset of 3-d light microscopy images of
mouse brains, along with the benchmark datasets DRIVE, ISBI12, and CrackTree.
| new_dataset | 0.953057 |
2501.01187 | Zhi Yuan Wu | Qingqing Ren, Wen Wang, Shuyong Zhu, Zhiyuan Wu, Yujun Zhang | NET-SA: An Efficient Secure Aggregation Architecture Based on In-Network
Computing | The reason for this withdrawal is that I did not obtain proper
authorization from my academic supervisor before submission | null | null | null | cs.CR cs.DC cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy-preserving machine learning (PPML) enables clients to collaboratively
train deep learning models without sharing private datasets, but faces privacy
leakage risks due to gradient leakage attacks. Prevailing methods leverage
secure aggregation strategies to enhance PPML, where clients leverage masks and
secret sharing to further protect gradient data while tolerating participant
dropouts. These methods, however, require frequent inter-client communication
to negotiate keys and perform secret sharing, leading to substantial
communication overhead. To tackle this issue, we propose NET-SA, an efficient
secure aggregation architecture for PPML based on in-network computing. NET-SA
employs seed homomorphic pseudorandom generators for local gradient masking and
utilizes programmable switches for seed aggregation. Accurate and secure
gradient aggregation is then performed on the central server based on masked
gradients and aggregated seeds. This design effectively reduces communication
overhead due to eliminating the communication-intensive phases of seed
agreement and secret sharing, with enhanced dropout tolerance due to overcoming
the threshold limit of secret sharing. Extensive experiments on server clusters
and Intel Tofino programmable switch demonstrate that NET-SA achieves up to 77x
and 12x enhancements in runtime and 2x decrease in total client communication
cost compared with state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 2 Jan 2025 10:27:06 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:52:17 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ren",
"Qingqing",
""
],
[
"Wang",
"Wen",
""
],
[
"Zhu",
"Shuyong",
""
],
[
"Wu",
"Zhiyuan",
""
],
[
"Zhang",
"Yujun",
""
]
]
| TITLE: NET-SA: An Efficient Secure Aggregation Architecture Based on In-Network
Computing
ABSTRACT: Privacy-preserving machine learning (PPML) enables clients to collaboratively
train deep learning models without sharing private datasets, but faces privacy
leakage risks due to gradient leakage attacks. Prevailing methods leverage
secure aggregation strategies to enhance PPML, where clients leverage masks and
secret sharing to further protect gradient data while tolerating participant
dropouts. These methods, however, require frequent inter-client communication
to negotiate keys and perform secret sharing, leading to substantial
communication overhead. To tackle this issue, we propose NET-SA, an efficient
secure aggregation architecture for PPML based on in-network computing. NET-SA
employs seed homomorphic pseudorandom generators for local gradient masking and
utilizes programmable switches for seed aggregation. Accurate and secure
gradient aggregation is then performed on the central server based on masked
gradients and aggregated seeds. This design effectively reduces communication
overhead due to eliminating the communication-intensive phases of seed
agreement and secret sharing, with enhanced dropout tolerance due to overcoming
the threshold limit of secret sharing. Extensive experiments on server clusters
and Intel Tofino programmable switch demonstrate that NET-SA achieves up to 77x
and 12x enhancements in runtime and 2x decrease in total client communication
cost compared with state-of-the-art methods.
| no_new_dataset | 0.949342 |
2501.02516 | Shuanglin Li | Shuanglin Li, Siyang Song, Rajesh Nair, and Syed Mohsen Naqvi | A Frequency-aware Augmentation Network for Mental Disorders Assessment
from Audio | We have found some technical problems that need to be addressed, which will take
considerable time; some parts remain to be completed | null | null | null | eess.AS cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Depression and Attention Deficit Hyperactivity Disorder (ADHD) stand out as
the common mental health challenges today. In affective computing, speech
signals serve as effective biomarkers for mental disorder assessment. Current
research, relying on labor-intensive hand-crafted features or simplistic
time-frequency representations, often overlooks critical details by not
accounting for the differential impacts of various frequency bands and temporal
fluctuations. Therefore, we propose a frequency-aware augmentation network with
dynamic convolution for depression and ADHD assessment. In the proposed method,
the spectrogram is used as the input feature and adopts a multi-scale
convolution to help the network focus on discriminative frequency bands related
to mental disorders. A dynamic convolution is also designed to aggregate
multiple convolution kernels dynamically based upon their attentions which are
input-independent to capture dynamic information. Finally, a feature
augmentation block is proposed to enhance the feature representation ability
and make full use of the captured information. Experimental results on AVEC
2014 and a self-recorded ADHD dataset prove the robustness of our method: an RMSE
of 9.23 was attained for estimating depression severity, along with an accuracy
of 89.8\% in detecting ADHD.
| [
{
"version": "v1",
"created": "Sun, 5 Jan 2025 12:06:06 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 17:38:38 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Shuanglin",
""
],
[
"Song",
"Siyang",
""
],
[
"Nair",
"Rajesh",
""
],
[
"Naqvi",
"Syed Mohsen",
""
]
]
| TITLE: A Frequency-aware Augmentation Network for Mental Disorders Assessment
from Audio
ABSTRACT: Depression and Attention Deficit Hyperactivity Disorder (ADHD) stand out as
the common mental health challenges today. In affective computing, speech
signals serve as effective biomarkers for mental disorder assessment. Current
research, relying on labor-intensive hand-crafted features or simplistic
time-frequency representations, often overlooks critical details by not
accounting for the differential impacts of various frequency bands and temporal
fluctuations. Therefore, we propose a frequency-aware augmentation network with
dynamic convolution for depression and ADHD assessment. In the proposed method,
the spectrogram is used as the input feature and adopts a multi-scale
convolution to help the network focus on discriminative frequency bands related
to mental disorders. A dynamic convolution is also designed to aggregate
multiple convolution kernels dynamically based upon their attentions which are
input-independent to capture dynamic information. Finally, a feature
augmentation block is proposed to enhance the feature representation ability
and make full use of the captured information. Experimental results on AVEC
2014 and a self-recorded ADHD dataset prove the robustness of our method: an RMSE
of 9.23 was attained for estimating depression severity, along with an accuracy
of 89.8\% in detecting ADHD.
| new_dataset | 0.957912 |
2501.02825 | Kavi Gupta | Kavi Gupta, Kate Sanders, Armando Solar-Lezama | Randomly Sampled Language Reasoning Problems Reveal Limits of LLMs | 8 pages, 3 figures, 2 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Can LLMs pick up language structure from examples? Evidence in prior work
seems to indicate yes, as pretrained models repeatedly demonstrate the ability
to adapt to new language structures and vocabularies. However, this line of
research typically considers languages that are present within common
pretraining datasets, or otherwise share notable similarities with these seen
languages. In contrast, in this work we attempt to measure models' language
understanding capacity while circumventing the risk of dataset recall. We
parameterize large families of language tasks recognized by deterministic
finite automata (DFAs), and can thus sample novel language reasoning problems
to fairly evaluate LLMs regardless of training data. We find that, even in the
strikingly simple setting of 3-state DFAs, LLMs underperform unparameterized
ngram models on both language recognition and synthesis tasks. These results
suggest that LLMs struggle to match the ability of basic language models in
recognizing and reasoning over languages that are sufficiently distinct from
the ones they see at training time, underscoring the distinction between
learning individual languages and possessing a general theory of language.
| [
{
"version": "v1",
"created": "Mon, 6 Jan 2025 07:57:51 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jan 2025 21:51:30 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Jan 2025 23:02:29 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 20:16:13 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gupta",
"Kavi",
""
],
[
"Sanders",
"Kate",
""
],
[
"Solar-Lezama",
"Armando",
""
]
]
| TITLE: Randomly Sampled Language Reasoning Problems Reveal Limits of LLMs
ABSTRACT: Can LLMs pick up language structure from examples? Evidence in prior work
seems to indicate yes, as pretrained models repeatedly demonstrate the ability
to adapt to new language structures and vocabularies. However, this line of
research typically considers languages that are present within common
pretraining datasets, or otherwise share notable similarities with these seen
languages. In contrast, in this work we attempt to measure models' language
understanding capacity while circumventing the risk of dataset recall. We
parameterize large families of language tasks recognized by deterministic
finite automata (DFAs), and can thus sample novel language reasoning problems
to fairly evaluate LLMs regardless of training data. We find that, even in the
strikingly simple setting of 3-state DFAs, LLMs underperform unparameterized
ngram models on both language recognition and synthesis tasks. These results
suggest that LLMs struggle to match the ability of basic language models in
recognizing and reasoning over languages that are sufficiently distinct from
the ones they see at training time, underscoring the distinction between
learning individual languages and possessing a general theory of language.
| no_new_dataset | 0.945096 |
2501.04477 | Kang Chen | Kang Chen and Yajing Zheng and Tiejun Huang and Zhaofei Yu | Rethinking High-speed Image Reconstruction Framework with Spike Camera | Accepted by AAAI2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spike cameras, as innovative neuromorphic devices, generate continuous spike
streams to capture high-speed scenes with lower bandwidth and higher dynamic
range than traditional RGB cameras. However, reconstructing high-quality images
from the spike input under low-light conditions remains challenging.
Conventional learning-based methods often rely on the synthetic dataset as the
supervision for training. Still, these approaches falter when dealing with
noisy spikes fired under the low-light environment, leading to further
performance degradation in the real-world dataset. This phenomenon is primarily
due to inadequate noise modelling and the domain gap between synthetic and real
datasets, resulting in recovered images with unclear textures, excessive noise,
and diminished brightness. To address these challenges, we introduce a novel
spike-to-image reconstruction framework SpikeCLIP that goes beyond traditional
training paradigms. Leveraging the CLIP model's powerful capability to align
text and images, we incorporate the textual description of the captured scene
and unpaired high-quality datasets as the supervision. Our experiments on
real-world low-light datasets U-CALTECH and U-CIFAR demonstrate that SpikeCLIP
significantly enhances texture details and the luminance balance of recovered
images. Furthermore, the reconstructed images are well-aligned with the broader
visual features needed for downstream tasks, ensuring more robust and versatile
performance in challenging environments.
| [
{
"version": "v1",
"created": "Wed, 8 Jan 2025 13:00:17 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 14:53:28 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chen",
"Kang",
""
],
[
"Zheng",
"Yajing",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Yu",
"Zhaofei",
""
]
]
| TITLE: Rethinking High-speed Image Reconstruction Framework with Spike Camera
ABSTRACT: Spike cameras, as innovative neuromorphic devices, generate continuous spike
streams to capture high-speed scenes with lower bandwidth and higher dynamic
range than traditional RGB cameras. However, reconstructing high-quality images
from the spike input under low-light conditions remains challenging.
Conventional learning-based methods often rely on the synthetic dataset as the
supervision for training. Still, these approaches falter when dealing with
noisy spikes fired under the low-light environment, leading to further
performance degradation in the real-world dataset. This phenomenon is primarily
due to inadequate noise modelling and the domain gap between synthetic and real
datasets, resulting in recovered images with unclear textures, excessive noise,
and diminished brightness. To address these challenges, we introduce a novel
spike-to-image reconstruction framework SpikeCLIP that goes beyond traditional
training paradigms. Leveraging the CLIP model's powerful capability to align
text and images, we incorporate the textual description of the captured scene
and unpaired high-quality datasets as the supervision. Our experiments on
real-world low-light datasets U-CALTECH and U-CIFAR demonstrate that SpikeCLIP
significantly enhances texture details and the luminance balance of recovered
images. Furthermore, the reconstructed images are well-aligned with the broader
visual features needed for downstream tasks, ensuring more robust and versatile
performance in challenging environments.
| no_new_dataset | 0.948965 |
2501.09776 | Yikai Hou | Yikai Hou and Peng Tang | Multi-Head Self-Attending Neural Tucker Factorization | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quality-of-service (QoS) data exhibit dynamic temporal patterns that are
crucial for accurately predicting missing values. These patterns arise from the
evolving interactions between users and services, making it essential to
capture the temporal dynamics inherent in such data for improved prediction
performance. As the size and complexity of QoS datasets increase, existing
models struggle to provide accurate predictions, highlighting the need for more
flexible and dynamic methods to better capture the underlying patterns in
large-scale QoS data. To address this issue, we introduce a neural
network-based tensor factorization approach tailored for learning
spatiotemporal representations of high-dimensional and incomplete (HDI)
tensors, namely the Multi-head Self-attending Neural Tucker Factorization
(MSNTucF). The model is elaborately designed for modeling intricate nonlinear
spatiotemporal feature interaction patterns hidden in real-world data with a
two-fold idea. It first employs a neural network structure to generalize the
traditional framework of Tucker factorization and then proposes to leverage a
multi-head self-attending module to enforce nonlinear latent interaction
learning. In empirical studies on two dynamic QoS datasets from real
applications, the proposed MSNTucF model demonstrates superior performance
compared to state-of-the-art benchmark models in estimating missing
observations. This highlights its ability to learn non-linear spatiotemporal
representations of HDI tensors.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 13:04:15 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 14:08:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Hou",
"Yikai",
""
],
[
"Tang",
"Peng",
""
]
]
| TITLE: Multi-Head Self-Attending Neural Tucker Factorization
ABSTRACT: Quality-of-service (QoS) data exhibit dynamic temporal patterns that are
crucial for accurately predicting missing values. These patterns arise from the
evolving interactions between users and services, making it essential to
capture the temporal dynamics inherent in such data for improved prediction
performance. As the size and complexity of QoS datasets increase, existing
models struggle to provide accurate predictions, highlighting the need for more
flexible and dynamic methods to better capture the underlying patterns in
large-scale QoS data. To address this issue, we introduce a neural
network-based tensor factorization approach tailored for learning
spatiotemporal representations of high-dimensional and incomplete (HDI)
tensors, namely the Multi-head Self-attending Neural Tucker Factorization
(MSNTucF). The model is elaborately designed for modeling intricate nonlinear
spatiotemporal feature interaction patterns hidden in real-world data with a
two-fold idea. It first employs a neural network structure to generalize the
traditional framework of Tucker factorization and then proposes to leverage a
multi-head self-attending module to enforce nonlinear latent interaction
learning. In empirical studies on two dynamic QoS datasets from real
applications, the proposed MSNTucF model demonstrates superior performance
compared to state-of-the-art benchmark models in estimating missing
observations. This highlights its ability to learn non-linear spatiotemporal
representations of HDI tensors.
| no_new_dataset | 0.946843 |
2501.13890 | Ayush Mohanty | Ayush Mohanty, Nazal Mohamed, Paritosh Ramanan and Nagi Gebraeel | Federated Granger Causality Learning for Interdependent Clients with
State Space Representation | Published as a conference paper at International Conference on
Learning Representations (ICLR) 2025 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Advanced sensors and IoT devices have improved the monitoring and control of
complex industrial enterprises. They have also created an interdependent fabric
of geographically distributed process operations (clients) across these
enterprises. Granger causality is an effective approach to detect and quantify
interdependencies by examining how one client's state affects others over time.
Understanding these interdependencies captures how localized events, such as
faults and disruptions, can propagate throughout the system, possibly causing
widespread operational impacts. However, the large volume and complexity of
industrial data pose challenges in modeling these interdependencies. This paper
develops a federated approach to learning Granger causality. We utilize a
linear state space system framework that leverages low-dimensional state
estimates to analyze interdependencies. This addresses bandwidth limitations
and the computational burden commonly associated with centralized data
processing. We propose augmenting the client models with the Granger causality
information learned by the server through a Machine Learning (ML) function. We
examine the co-dependence between the augmented client and server models and
reformulate the framework as a standalone ML algorithm providing conditions for
its sublinear and linear convergence rates. We also study the convergence of
the framework to a centralized oracle model. Moreover, we include a
differential privacy analysis to ensure data security while preserving causal
insights. Using synthetic data, we conduct comprehensive experiments to
demonstrate the robustness of our approach to perturbations in causality, the
scalability to the size of communication, number of clients, and the dimensions
of raw data. We also evaluate the performance on two real-world industrial
control system datasets by reporting the volume of data saved by
decentralization.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 18:04:21 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jan 2025 16:49:19 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 22:33:36 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mohanty",
"Ayush",
""
],
[
"Mohamed",
"Nazal",
""
],
[
"Ramanan",
"Paritosh",
""
],
[
"Gebraeel",
"Nagi",
""
]
]
| TITLE: Federated Granger Causality Learning for Interdependent Clients with
State Space Representation
ABSTRACT: Advanced sensors and IoT devices have improved the monitoring and control of
complex industrial enterprises. They have also created an interdependent fabric
of geographically distributed process operations (clients) across these
enterprises. Granger causality is an effective approach to detect and quantify
interdependencies by examining how one client's state affects others over time.
Understanding these interdependencies captures how localized events, such as
faults and disruptions, can propagate throughout the system, possibly causing
widespread operational impacts. However, the large volume and complexity of
industrial data pose challenges in modeling these interdependencies. This paper
develops a federated approach to learning Granger causality. We utilize a
linear state space system framework that leverages low-dimensional state
estimates to analyze interdependencies. This addresses bandwidth limitations
and the computational burden commonly associated with centralized data
processing. We propose augmenting the client models with the Granger causality
information learned by the server through a Machine Learning (ML) function. We
examine the co-dependence between the augmented client and server models and
reformulate the framework as a standalone ML algorithm providing conditions for
its sublinear and linear convergence rates. We also study the convergence of
the framework to a centralized oracle model. Moreover, we include a
differential privacy analysis to ensure data security while preserving causal
insights. Using synthetic data, we conduct comprehensive experiments to
demonstrate the robustness of our approach to perturbations in causality, the
scalability to the size of communication, number of clients, and the dimensions
of raw data. We also evaluate the performance on two real-world industrial
control system datasets by reporting the volume of data saved by
decentralization.
| no_new_dataset | 0.946597 |
2501.16456 | Claas Beger | Claas Beger, Saikat Dutta | CoCoNUT: Structural Code Understanding does not fall out of a tree | Accepted at 2025 IEEE/ACM International Workshop on Large Language
Models for Code (LLM4Code) | null | null | null | cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have shown impressive performance across a wide
array of tasks involving both structured and unstructured textual data. Recent
results on various benchmarks for code generation, repair, or completion
suggest that certain models have programming abilities comparable to or even
surpass humans. In this work, we demonstrate that high performance on such
benchmarks does not correlate to humans' innate ability to understand
structural control flow in code. To this end, we extract solutions from the
HumanEval benchmark, which the relevant models perform strongly on, and trace
their execution path using function calls sampled from the respective test set.
Using this dataset, we investigate the ability of seven state-of-the-art LLMs
to match the execution trace and find that, despite their ability to generate
semantically identical code, they possess limited ability to trace execution
paths, especially for longer traces and specific control structures. We find
that even the top-performing model, Gemini, can fully and correctly generate
only 47% of HumanEval task traces. Additionally, we introduce a subset for
three key structures not contained in HumanEval: Recursion, Parallel
Processing, and Object-Oriented Programming, including concepts like
Inheritance and Polymorphism. Besides OOP, we show that none of the
investigated models achieve an accuracy over 5% on the relevant traces.
Aggregating these specialized parts with HumanEval tasks, we present CoCoNUT:
Code Control Flow for Navigation Understanding and Testing, which measures a
model's ability to trace execution of code upon relevant calls, including
advanced structural components. We conclude that current LLMs need significant
improvement to enhance code reasoning abilities. We hope our dataset helps
researchers bridge this gap.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 19:29:11 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jan 2025 05:15:45 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 22:44:04 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Beger",
"Claas",
""
],
[
"Dutta",
"Saikat",
""
]
]
| TITLE: CoCoNUT: Structural Code Understanding does not fall out of a tree
ABSTRACT: Large Language Models (LLMs) have shown impressive performance across a wide
array of tasks involving both structured and unstructured textual data. Recent
results on various benchmarks for code generation, repair, or completion
suggest that certain models have programming abilities comparable to or even
surpass humans. In this work, we demonstrate that high performance on such
benchmarks does not correlate to humans' innate ability to understand
structural control flow in code. To this end, we extract solutions from the
HumanEval benchmark, which the relevant models perform strongly on, and trace
their execution path using function calls sampled from the respective test set.
Using this dataset, we investigate the ability of seven state-of-the-art LLMs
to match the execution trace and find that, despite their ability to generate
semantically identical code, they possess limited ability to trace execution
paths, especially for longer traces and specific control structures. We find
that even the top-performing model, Gemini, can fully and correctly generate
only 47% of HumanEval task traces. Additionally, we introduce a subset for
three key structures not contained in HumanEval: Recursion, Parallel
Processing, and Object-Oriented Programming, including concepts like
Inheritance and Polymorphism. Besides OOP, we show that none of the
investigated models achieve an accuracy over 5% on the relevant traces.
Aggregating these specialized parts with HumanEval tasks, we present CoCoNUT:
Code Control Flow for Navigation Understanding and Testing, which measures a
model's ability to trace execution of code upon relevant calls, including
advanced structural components. We conclude that current LLMs need significant
improvement to enhance code reasoning abilities. We hope our dataset helps
researchers bridge this gap.
| new_dataset | 0.972467 |
2502.00196 | Abdurrahim Yilmaz | Abdurrahim Yilmaz and Furkan Yuceyalcin and Ece Gokyayla and Donghee
Choi and Ozan Erdem and Ali Anil Demircali and Rahmetullah Varol and Ufuk
Gorkem Kirabali and Gulsum Gencoglan and Joram M. Posma and Burak Temelkuran | DermaSynth: Rich Synthetic Image-Text Pairs Using Open Access
Dermatology Datasets | 12 pages, 4 figures | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A major barrier to developing vision large language models (LLMs) in
dermatology is the lack of large image--text pair datasets. We introduce
DermaSynth, a dataset comprising 92,020 synthetic image--text pairs curated
from 45,205 images (13,568 clinical and 35,561 dermatoscopic) for
dermatology-related clinical tasks. Leveraging state-of-the-art LLMs, using
Gemini 2.0, we used clinically related prompts and the self-instruct method to
generate diverse and rich synthetic texts. Metadata of the datasets were
incorporated into the input prompts with the aim of reducing potential
hallucinations. The resulting dataset builds upon open access dermatological
image repositories (DERM12345, BCN20000, PAD-UFES-20, SCIN, and HIBA) that have
permissive CC-BY-4.0 licenses. We also fine-tuned a preliminary
Llama-3.2-11B-Vision-Instruct model, DermatoLlama 1.0, on 5,000 samples. We
anticipate this dataset to support and accelerate AI research in dermatology.
Data and code underlying this work are accessible at
https://github.com/abdurrahimyilmaz/DermaSynth.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 22:26:33 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 12:36:10 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yilmaz",
"Abdurrahim",
""
],
[
"Yuceyalcin",
"Furkan",
""
],
[
"Gokyayla",
"Ece",
""
],
[
"Choi",
"Donghee",
""
],
[
"Erdem",
"Ozan",
""
],
[
"Demircali",
"Ali Anil",
""
],
[
"Varol",
"Rahmetullah",
""
],
[
"Kirabali",
"Ufuk Gorkem",
""
],
[
"Gencoglan",
"Gulsum",
""
],
[
"Posma",
"Joram M.",
""
],
[
"Temelkuran",
"Burak",
""
]
]
| TITLE: DermaSynth: Rich Synthetic Image-Text Pairs Using Open Access
Dermatology Datasets
ABSTRACT: A major barrier to developing vision large language models (LLMs) in
dermatology is the lack of large image--text pair datasets. We introduce
DermaSynth, a dataset comprising 92,020 synthetic image--text pairs curated
from 45,205 images (13,568 clinical and 35,561 dermatoscopic) for
dermatology-related clinical tasks. Leveraging state-of-the-art LLMs, using
Gemini 2.0, we used clinically related prompts and the self-instruct method to
generate diverse and rich synthetic texts. Metadata of the datasets were
incorporated into the input prompts with the aim of reducing potential
hallucinations. The resulting dataset builds upon open access dermatological
image repositories (DERM12345, BCN20000, PAD-UFES-20, SCIN, and HIBA) that have
permissive CC-BY-4.0 licenses. We also fine-tuned a preliminary
Llama-3.2-11B-Vision-Instruct model, DermatoLlama 1.0, on 5,000 samples. We
anticipate this dataset to support and accelerate AI research in dermatology.
Data and code underlying this work are accessible at
https://github.com/abdurrahimyilmaz/DermaSynth.
| new_dataset | 0.969032 |
2502.02417 | Matthias Wolff | Matthias Wolff, Florian Eilers, Xiaoyi Jiang | CVKAN: Complex-Valued Kolmogorov-Arnold Networks | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this work we propose CVKAN, a complex-valued KAN, to join the intrinsic
interpretability of KANs and the advantages of Complex-Valued Neural Networks
(CVNNs). We show how to transfer a KAN and the necessary associated mechanisms
into the complex domain. To confirm that CVKAN meets expectations, we conduct
experiments on symbolic complex-valued function fitting and physically
meaningful formulae as well as on a more realistic dataset from knot theory.
Our proposed CVKAN is more stable and performs on par or better than
real-valued KANs while requiring fewer parameters and a shallower network
architecture, making it more explainable.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 15:38:14 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 16:01:06 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wolff",
"Matthias",
""
],
[
"Eilers",
"Florian",
""
],
[
"Jiang",
"Xiaoyi",
""
]
]
| TITLE: CVKAN: Complex-Valued Kolmogorov-Arnold Networks
ABSTRACT: In this work we propose CVKAN, a complex-valued KAN, to join the intrinsic
interpretability of KANs and the advantages of Complex-Valued Neural Networks
(CVNNs). We show how to transfer a KAN and the necessary associated mechanisms
into the complex domain. To confirm that CVKAN meets expectations, we conduct
experiments on symbolic complex-valued function fitting and physically
meaningful formulae as well as on a more realistic dataset from knot theory.
Our proposed CVKAN is more stable and performs on par or better than
real-valued KANs while requiring fewer parameters and a shallower network
architecture, making it more explainable.
| no_new_dataset | 0.951997 |
2502.04342 | Yuchen Cao | Yeyubei Zhang, Zhongyan Wang, Zhanyi Ding, Yexin Tian, Jianglai Dai,
Xiaorui Shen, Yunchong Liu and Yuchen Cao | Tutorial on Using Machine Learning and Deep Learning Models for Mental
Illness Detection | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Social media has become an important source for understanding mental health,
providing researchers with a way to detect conditions like depression from
user-generated posts. This tutorial provides practical guidance to address
common challenges in applying machine learning and deep learning methods for
mental health detection on these platforms. It focuses on strategies for
working with diverse datasets, improving text preprocessing, and addressing
issues such as imbalanced data and model evaluation. Real-world examples and
step-by-step instructions demonstrate how to apply these techniques
effectively, with an emphasis on transparency, reproducibility, and ethical
considerations. By sharing these approaches, this tutorial aims to help
researchers build more reliable and widely applicable models for mental health
research, contributing to better tools for early detection and intervention.
| [
{
"version": "v1",
"created": "Mon, 3 Feb 2025 06:43:12 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 05:13:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhang",
"Yeyubei",
""
],
[
"Wang",
"Zhongyan",
""
],
[
"Ding",
"Zhanyi",
""
],
[
"Tian",
"Yexin",
""
],
[
"Dai",
"Jianglai",
""
],
[
"Shen",
"Xiaorui",
""
],
[
"Liu",
"Yunchong",
""
],
[
"Cao",
"Yuchen",
""
]
]
| TITLE: Tutorial on Using Machine Learning and Deep Learning Models for Mental
Illness Detection
ABSTRACT: Social media has become an important source for understanding mental health,
providing researchers with a way to detect conditions like depression from
user-generated posts. This tutorial provides practical guidance to address
common challenges in applying machine learning and deep learning methods for
mental health detection on these platforms. It focuses on strategies for
working with diverse datasets, improving text preprocessing, and addressing
issues such as imbalanced data and model evaluation. Real-world examples and
step-by-step instructions demonstrate how to apply these techniques
effectively, with an emphasis on transparency, reproducibility, and ethical
considerations. By sharing these approaches, this tutorial aims to help
researchers build more reliable and widely applicable models for mental health
research, contributing to better tools for early detection and intervention.
| no_new_dataset | 0.950411 |
2502.04740 | Yijun Wang | Yijun Wang, Yong Wang, Chendong xu, Shuai Yao, Qisong Wu | SelaFD:Seamless Adaptation of Vision Transformer Fine-tuning for
Radar-based Human Activity Recognition | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Human Activity Recognition (HAR) such as fall detection has become
increasingly critical due to the aging population, necessitating effective
monitoring systems to prevent serious injuries and fatalities associated with
falls. This study focuses on fine-tuning the Vision Transformer (ViT) model
specifically for HAR using radar-based Time-Doppler signatures. Unlike
traditional image datasets, these signals present unique challenges due to
their non-visual nature and the high degree of similarity among various
activities. Directly fine-tuning the ViT with all parameters proves suboptimal
for this application. To address this challenge, we propose a novel approach
that employs Low-Rank Adaptation (LoRA) fine-tuning in the weight space to
facilitate knowledge transfer from pre-trained ViT models. Additionally, to
extract fine-grained features, we enhance feature representation through the
integration of a serial-parallel adapter in the feature space. Our innovative
joint fine-tuning method, tailored for radar-based Time-Doppler signatures,
significantly improves HAR accuracy, surpassing existing state-of-the-art
methodologies in this domain. Our code is released at
https://github.com/wangyijunlyy/SelaFD.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 08:15:31 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 07:09:19 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Yijun",
""
],
[
"Wang",
"Yong",
""
],
[
"xu",
"Chendong",
""
],
[
"Yao",
"Shuai",
""
],
[
"Wu",
"Qisong",
""
]
]
| TITLE: SelaFD:Seamless Adaptation of Vision Transformer Fine-tuning for
Radar-based Human Activity Recognition
ABSTRACT: Human Activity Recognition (HAR) such as fall detection has become
increasingly critical due to the aging population, necessitating effective
monitoring systems to prevent serious injuries and fatalities associated with
falls. This study focuses on fine-tuning the Vision Transformer (ViT) model
specifically for HAR using radar-based Time-Doppler signatures. Unlike
traditional image datasets, these signals present unique challenges due to
their non-visual nature and the high degree of similarity among various
activities. Directly fine-tuning the ViT with all parameters proves suboptimal
for this application. To address this challenge, we propose a novel approach
that employs Low-Rank Adaptation (LoRA) fine-tuning in the weight space to
facilitate knowledge transfer from pre-trained ViT models. Additionally, to
extract fine-grained features, we enhance feature representation through the
integration of a serial-parallel adapter in the feature space. Our innovative
joint fine-tuning method, tailored for radar-based Time-Doppler signatures,
significantly improves HAR accuracy, surpassing existing state-of-the-art
methodologies in this domain. Our code is released at
https://github.com/wangyijunlyy/SelaFD.
| no_new_dataset | 0.946547 |
2502.08168 | Zhiming Ma | Zhiming Ma, Xiayang Xiao, Sihao Dong, Peidong Wang, HaiPeng Wang,
Qingyun Pan | SARChat-Bench-2M: A Multi-Task Vision-Language Benchmark for SAR Image
Interpretation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a powerful all-weather Earth observation tool, synthetic aperture radar
(SAR) remote sensing enables critical military reconnaissance, maritime
surveillance, and infrastructure monitoring. Although Vision language models
(VLMs) have made remarkable progress in natural language processing and image
understanding, their applications remain limited in professional domains due to
insufficient domain expertise. This paper innovatively proposes the first
large-scale multimodal dialogue dataset for SAR images, named SARChat-2M, which
contains approximately 2 million high-quality image-text pairs and encompasses
diverse scenarios with detailed target annotations. This dataset not only
supports several key tasks such as visual understanding and object detection
tasks, but also has unique innovative aspects: this study develops a
visual-language dataset and benchmark for the SAR domain, enabling and
evaluating VLMs' capabilities in SAR image interpretation, which provides a
paradigmatic framework for constructing multimodal datasets across various
remote sensing vertical domains. Through experiments on 16 mainstream VLMs, the
effectiveness of the dataset has been fully verified. The project will be
released at https://github.com/JimmyMa99/SARChat.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 07:19:36 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 17:11:41 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Feb 2025 07:13:46 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Feb 2025 06:28:31 GMT"
},
{
"version": "v5",
"created": "Tue, 4 Mar 2025 01:07:55 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ma",
"Zhiming",
""
],
[
"Xiao",
"Xiayang",
""
],
[
"Dong",
"Sihao",
""
],
[
"Wang",
"Peidong",
""
],
[
"Wang",
"HaiPeng",
""
],
[
"Pan",
"Qingyun",
""
]
]
| TITLE: SARChat-Bench-2M: A Multi-Task Vision-Language Benchmark for SAR Image
Interpretation
ABSTRACT: As a powerful all-weather Earth observation tool, synthetic aperture radar
(SAR) remote sensing enables critical military reconnaissance, maritime
surveillance, and infrastructure monitoring. Although Vision language models
(VLMs) have made remarkable progress in natural language processing and image
understanding, their applications remain limited in professional domains due to
insufficient domain expertise. This paper innovatively proposes the first
large-scale multimodal dialogue dataset for SAR images, named SARChat-2M, which
contains approximately 2 million high-quality image-text pairs and encompasses
diverse scenarios with detailed target annotations. This dataset not only
supports several key tasks such as visual understanding and object detection
tasks, but also has unique innovative aspects: this study develops a
visual-language dataset and benchmark for the SAR domain, enabling and
evaluating VLMs' capabilities in SAR image interpretation, which provides a
paradigmatic framework for constructing multimodal datasets across various
remote sensing vertical domains. Through experiments on 16 mainstream VLMs, the
effectiveness of the dataset has been fully verified. The project will be
released at https://github.com/JimmyMa99/SARChat.
| new_dataset | 0.962072 |
2502.08705 | Jill Naiman | Jill Naiman, Aria Pessianzadeh, Hanyu Zhao, AJ Christensen, Kalina
Borkiewicz, Shriya Srikanth, Anushka Gami, Emma Maxwell, Louisa Zhang, Sri
Nithya Yeragorla and Rezvaneh Rezapour | Beyond the Lens: Quantifying the Impact of Scientific Documentaries
through Amazon Reviews | Camera-ready version for WebSci 2025 | null | 10.1145/3717867.3717908 | null | cs.CY cs.DL physics.ed-ph | http://creativecommons.org/licenses/by/4.0/ | Engaging the public with science is critical for a well-informed population.
A popular method of scientific communication is documentaries. Once released,
it can be difficult to assess the impact of such works on a large scale, due to
the overhead required for in-depth audience feedback studies. In what follows,
we overview our complementary approach to qualitative studies through
quantitative impact and sentiment analysis of Amazon reviews for several
scientific documentaries. In addition to developing a novel impact category
taxonomy for this analysis, we release a dataset containing 1296
human-annotated sentences from 1043 Amazon reviews for six movies created in
whole or part by the Advanced Visualization Lab (AVL). This interdisciplinary
team is housed at the National Center for Supercomputing Applications and
consists of visualization designers who focus on cinematic presentations of
scientific data. Using this data, we train and evaluate several machine
learning and large language models, discussing their effectiveness and possible
generalizability for documentaries beyond those focused on for this work.
Themes are also extracted from our annotated dataset which, along with our
large language model analysis, demonstrate a measure of the ability of
scientific documentaries to engage with the public.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 19:00:01 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 18:46:28 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Naiman",
"Jill",
""
],
[
"Pessianzadeh",
"Aria",
""
],
[
"Zhao",
"Hanyu",
""
],
[
"Christensen",
"AJ",
""
],
[
"Borkiewicz",
"Kalina",
""
],
[
"Srikanth",
"Shriya",
""
],
[
"Gami",
"Anushka",
""
],
[
"Maxwell",
"Emma",
""
],
[
"Zhang",
"Louisa",
""
],
[
"Yeragorla",
"Sri Nithya",
""
],
[
"Rezapour",
"Rezvaneh",
""
]
]
| TITLE: Beyond the Lens: Quantifying the Impact of Scientific Documentaries
through Amazon Reviews
ABSTRACT: Engaging the public with science is critical for a well-informed population.
A popular method of scientific communication is documentaries. Once released,
it can be difficult to assess the impact of such works on a large scale, due to
the overhead required for in-depth audience feedback studies. In what follows,
we overview our complementary approach to qualitative studies through
quantitative impact and sentiment analysis of Amazon reviews for several
scientific documentaries. In addition to developing a novel impact category
taxonomy for this analysis, we release a dataset containing 1296
human-annotated sentences from 1043 Amazon reviews for six movies created in
whole or part by the Advanced Visualization Lab (AVL). This interdisciplinary
team is housed at the National Center for Supercomputing Applications and
consists of visualization designers who focus on cinematic presentations of
scientific data. Using this data, we train and evaluate several machine
learning and large language models, discussing their effectiveness and possible
generalizability for documentaries beyond those focused on for this work.
Themes are also extracted from our annotated dataset which, along with our
large language model analysis, demonstrate a measure of the ability of
scientific documentaries to engage with the public.
| new_dataset | 0.966347 |
2502.09888 | Shijia Wang | Songpei Xu, Shijia Wang, Da Guo, Xianwen Guo, Qiang Xiao, Fangjian Li,
Chuanjiang Luo | An Efficient Large Recommendation Model: Towards a Resource-Optimal
Scaling Law | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The pursuit of scaling up recommendation models confronts intrinsic tensions
between expanding model capacity and preserving computational tractability.
While prior studies have explored scaling laws for recommendation systems,
their resource-intensive paradigms -- often requiring tens of thousands of A100
GPU hours -- remain impractical for most industrial applications. This work
addresses a critical gap: achieving sustainable model scaling under strict
computational budgets. We propose Climber, a resource-efficient recommendation
framework comprising two synergistic components: the ASTRO model architecture
for algorithmic innovation and the TURBO acceleration framework for engineering
optimization. ASTRO (Adaptive Scalable Transformer for RecOmmendation) adopts
two core innovations: (1) multi-scale sequence partitioning that reduces
attention complexity from O(n^2d) to O(n^2d/Nb) via hierarchical blocks,
enabling more efficient scaling with sequence length; (2) dynamic temperature
modulation that adaptively adjusts attention scores for multimodal
distributions arising from inherent multi-scenario and multi-behavior
interactions. Complemented by TURBO (Two-stage Unified Ranking with Batched
Output), a co-designed acceleration framework integrating gradient-aware
feature compression and memory-efficient Key-Value caching, Climber achieves
5.15x throughput gains without performance degradation. Comprehensive offline
experiments on multiple datasets validate that Climber exhibits a more ideal
scaling curve. To our knowledge, this is the first publicly documented
framework where controlled model scaling drives continuous online metric growth
(12.19% overall lift) without prohibitive resource costs. Climber has been
successfully deployed on Netease Cloud Music, one of China's largest music
streaming platforms, serving tens of millions of users daily.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 03:25:09 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xu",
"Songpei",
""
],
[
"Wang",
"Shijia",
""
],
[
"Guo",
"Da",
""
],
[
"Guo",
"Xianwen",
""
],
[
"Xiao",
"Qiang",
""
],
[
"Li",
"Fangjian",
""
],
[
"Luo",
"Chuanjiang",
""
]
]
| TITLE: An Efficient Large Recommendation Model: Towards a Resource-Optimal
Scaling Law
ABSTRACT: The pursuit of scaling up recommendation models confronts intrinsic tensions
between expanding model capacity and preserving computational tractability.
While prior studies have explored scaling laws for recommendation systems,
their resource-intensive paradigms -- often requiring tens of thousands of A100
GPU hours -- remain impractical for most industrial applications. This work
addresses a critical gap: achieving sustainable model scaling under strict
computational budgets. We propose Climber, a resource-efficient recommendation
framework comprising two synergistic components: the ASTRO model architecture
for algorithmic innovation and the TURBO acceleration framework for engineering
optimization. ASTRO (Adaptive Scalable Transformer for RecOmmendation) adopts
two core innovations: (1) multi-scale sequence partitioning that reduces
attention complexity from O(n^2d) to O(n^2d/Nb) via hierarchical blocks,
enabling more efficient scaling with sequence length; (2) dynamic temperature
modulation that adaptively adjusts attention scores for multimodal
distributions arising from inherent multi-scenario and multi-behavior
interactions. Complemented by TURBO (Two-stage Unified Ranking with Batched
Output), a co-designed acceleration framework integrating gradient-aware
feature compression and memory-efficient Key-Value caching, Climber achieves
5.15x throughput gains without performance degradation. Comprehensive offline
experiments on multiple datasets validate that Climber exhibits a more ideal
scaling curve. To our knowledge, this is the first publicly documented
framework where controlled model scaling drives continuous online metric growth
(12.19% overall lift) without prohibitive resource costs. Climber has been
successfully deployed on Netease Cloud Music, one of China's largest music
streaming platforms, serving tens of millions of users daily.
| no_new_dataset | 0.946151 |
2502.09993 | JunGyu Lee | JunGyu Lee, Yeji Choi, Haksub Kim, Ig-Jae Kim, Gi Pyo Nam | Navigating Label Ambiguity for Facial Expression Recognition in the Wild | Accepted by AAAI2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial expression recognition (FER) remains a challenging task due to label
ambiguity caused by the subjective nature of facial expressions and noisy
samples. Additionally, class imbalance, which is common in real-world datasets,
further complicates FER. Although many studies have shown impressive
improvements, they typically address only one of these issues, leading to
suboptimal results. To tackle both challenges simultaneously, we propose a
novel framework called Navigating Label Ambiguity (NLA), which is robust under
real-world conditions. The motivation behind NLA is that dynamically estimating
and emphasizing ambiguous samples at each iteration helps mitigate noise and
class imbalance by reducing the model's bias toward majority classes. To
achieve this, NLA consists of two main components: Noise-aware Adaptive
Weighting (NAW) and consistency regularization. Specifically, NAW adaptively
assigns higher importance to ambiguous samples and lower importance to noisy
ones, based on the correlation between the intermediate prediction scores for
the ground truth and the nearest negative. Moreover, we incorporate a
regularization term to ensure consistent latent distributions. Consequently,
NLA enables the model to progressively focus on more challenging ambiguous
samples, which primarily belong to the minority class, in the later stages of
training. Extensive experiments demonstrate that NLA outperforms existing
methods in both overall and mean accuracy, confirming its robustness against
noise and class imbalance. To the best of our knowledge, this is the first
framework to address both problems simultaneously.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 08:24:38 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lee",
"JunGyu",
""
],
[
"Choi",
"Yeji",
""
],
[
"Kim",
"Haksub",
""
],
[
"Kim",
"Ig-Jae",
""
],
[
"Nam",
"Gi Pyo",
""
]
]
| TITLE: Navigating Label Ambiguity for Facial Expression Recognition in the Wild
ABSTRACT: Facial expression recognition (FER) remains a challenging task due to label
ambiguity caused by the subjective nature of facial expressions and noisy
samples. Additionally, class imbalance, which is common in real-world datasets,
further complicates FER. Although many studies have shown impressive
improvements, they typically address only one of these issues, leading to
suboptimal results. To tackle both challenges simultaneously, we propose a
novel framework called Navigating Label Ambiguity (NLA), which is robust under
real-world conditions. The motivation behind NLA is that dynamically estimating
and emphasizing ambiguous samples at each iteration helps mitigate noise and
class imbalance by reducing the model's bias toward majority classes. To
achieve this, NLA consists of two main components: Noise-aware Adaptive
Weighting (NAW) and consistency regularization. Specifically, NAW adaptively
assigns higher importance to ambiguous samples and lower importance to noisy
ones, based on the correlation between the intermediate prediction scores for
the ground truth and the nearest negative. Moreover, we incorporate a
regularization term to ensure consistent latent distributions. Consequently,
NLA enables the model to progressively focus on more challenging ambiguous
samples, which primarily belong to the minority class, in the later stages of
training. Extensive experiments demonstrate that NLA outperforms existing
methods in both overall and mean accuracy, confirming its robustness against
noise and class imbalance. To the best of our knowledge, this is the first
framework to address both problems simultaneously.
| no_new_dataset | 0.944331 |
2502.10038 | Jiawei Cheng | Jiawei Cheng, Jingyuan Wang, Yichuan Zhang, Jiahao Ji, Yuanshao Zhu,
Zhibo Zhang, Xiangyu Zhao | POI-Enhancer: An LLM-based Semantic Enhancement Framework for POI
Representation Learning | AAAI 25 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | POI representation learning plays a crucial role in handling tasks related to
user mobility data. Recent studies have shown that enriching POI
representations with multimodal information can significantly enhance their
task performance. Previously, the textual information incorporated into POI
representations typically involved only POI categories or check-in content,
leading to relatively weak textual features in existing methods. In contrast,
large language models (LLMs) trained on extensive text data have been found to
possess rich textual knowledge. However, leveraging such knowledge to enhance
POI representation learning presents two key challenges: first, how to extract
POI-related knowledge from LLMs effectively, and second, how to integrate the
extracted information to enhance POI representations. To address these
challenges, we propose POI-Enhancer, a portable framework that leverages LLMs
to improve POI representations produced by classic POI learning models. We
first design three specialized prompts to extract semantic information from
LLMs efficiently. Then, the Dual Feature Alignment module enhances the quality
of the extracted information, while the Semantic Feature Fusion module
preserves its integrity. The Cross Attention Fusion module then fully
adaptively integrates such high-quality information into POI representations
and Multi-View Contrastive Learning further injects human-understandable
semantic information into these representations. Extensive experiments on three
real-world datasets demonstrate the effectiveness of our framework, showing
significant improvements across all baseline representations.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 09:34:24 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 00:19:42 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Cheng",
"Jiawei",
""
],
[
"Wang",
"Jingyuan",
""
],
[
"Zhang",
"Yichuan",
""
],
[
"Ji",
"Jiahao",
""
],
[
"Zhu",
"Yuanshao",
""
],
[
"Zhang",
"Zhibo",
""
],
[
"Zhao",
"Xiangyu",
""
]
]
| TITLE: POI-Enhancer: An LLM-based Semantic Enhancement Framework for POI
Representation Learning
ABSTRACT: POI representation learning plays a crucial role in handling tasks related to
user mobility data. Recent studies have shown that enriching POI
representations with multimodal information can significantly enhance their
task performance. Previously, the textual information incorporated into POI
representations typically involved only POI categories or check-in content,
leading to relatively weak textual features in existing methods. In contrast,
large language models (LLMs) trained on extensive text data have been found to
possess rich textual knowledge. However, leveraging such knowledge to enhance
POI representation learning presents two key challenges: first, how to extract
POI-related knowledge from LLMs effectively, and second, how to integrate the
extracted information to enhance POI representations. To address these
challenges, we propose POI-Enhancer, a portable framework that leverages LLMs
to improve POI representations produced by classic POI learning models. We
first design three specialized prompts to extract semantic information from
LLMs efficiently. Then, the Dual Feature Alignment module enhances the quality
of the extracted information, while the Semantic Feature Fusion module
preserves its integrity. The Cross Attention Fusion module then fully
adaptively integrates such high-quality information into POI representations
and Multi-View Contrastive Learning further injects human-understandable
semantic information into these representations. Extensive experiments on three
real-world datasets demonstrate the effectiveness of our framework, showing
significant improvements across all baseline representations.
| no_new_dataset | 0.942929 |
2502.10050 | Qiyao Peng | Qiyao Peng, Hongtao Liu, Hua Huang, Qing Yang, Minglai Shao | A Survey on LLM-powered Agents for Recommender Systems | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems are essential components of many online platforms, yet
traditional approaches still struggle with understanding complex user
preferences and providing explainable recommendations. The emergence of Large
Language Model (LLM)-powered agents offers a promising approach by enabling
natural language interactions and interpretable reasoning, potentially
transforming research in recommender systems. This survey provides a systematic
review of the emerging applications of LLM-powered agents in recommender
systems. We identify and analyze three key paradigms in current research: (1)
Recommender-oriented approaches, which leverage intelligent agents to enhance
the fundamental recommendation mechanisms; (2) Interaction-oriented approaches,
which facilitate dynamic user engagement through natural dialogue and
interpretable suggestions; and (3) Simulation-oriented approaches, which employ
multi-agent frameworks to model complex user-item interactions and system
dynamics. Beyond paradigm categorization, we analyze the architectural
foundations of LLM-powered recommendation agents, examining their essential
components: profile construction, memory management, strategic planning, and
action execution. Our investigation extends to a comprehensive analysis of
benchmark datasets and evaluation frameworks in this domain. This systematic
examination not only illuminates the current state of LLM-powered agent
recommender systems but also charts critical challenges and promising research
directions in this transformative field.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 09:57:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Peng",
"Qiyao",
""
],
[
"Liu",
"Hongtao",
""
],
[
"Huang",
"Hua",
""
],
[
"Yang",
"Qing",
""
],
[
"Shao",
"Minglai",
""
]
]
| TITLE: A Survey on LLM-powered Agents for Recommender Systems
ABSTRACT: Recommender systems are essential components of many online platforms, yet
traditional approaches still struggle with understanding complex user
preferences and providing explainable recommendations. The emergence of Large
Language Model (LLM)-powered agents offers a promising approach by enabling
natural language interactions and interpretable reasoning, potentially
transforming research in recommender systems. This survey provides a systematic
review of the emerging applications of LLM-powered agents in recommender
systems. We identify and analyze three key paradigms in current research: (1)
Recommender-oriented approaches, which leverage intelligent agents to enhance
the fundamental recommendation mechanisms; (2) Interaction-oriented approaches,
which facilitate dynamic user engagement through natural dialogue and
interpretable suggestions; and (3) Simulation-oriented approaches, which employ
multi-agent frameworks to model complex user-item interactions and system
dynamics. Beyond paradigm categorization, we analyze the architectural
foundations of LLM-powered recommendation agents, examining their essential
components: profile construction, memory management, strategic planning, and
action execution. Our investigation extends to a comprehensive analysis of
benchmark datasets and evaluation frameworks in this domain. This systematic
examination not only illuminates the current state of LLM-powered agent
recommender systems but also charts critical challenges and promising research
directions in this transformative field.
| no_new_dataset | 0.938913 |
2502.10388 | WonJin Yoon | WonJin Yoon, Boyu Ren, Spencer Thomas, Chanwhi Kim, Guergana Savova,
Mei-Hua Hall, Timothy Miller | Aspect-Oriented Summarization for Psychiatric Short-Term Readmission
Prediction | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in large language models (LLMs) has enabled the automated
processing of lengthy documents even without supervised training on a
task-specific dataset. Yet, their zero-shot performance in complex tasks as
opposed to straightforward information extraction tasks remains suboptimal. One
feasible approach for tasks with lengthy, complex input is to first summarize
the document and then apply supervised fine-tuning to the summary. However, the
summarization process inevitably results in some loss of information. In this
study we present a method for processing the summaries of long documents aimed
to capture different important aspects of the original document. We hypothesize
that LLM summaries generated with different aspect-oriented prompts contain
different \textit{information signals}, and we propose methods to measure these
differences. We introduce approaches to effectively integrate signals from
these different summaries for supervised training of transformer models. We
validate our hypotheses on a high-impact task -- 30-day readmission prediction
from a psychiatric discharge -- using real-world data from four hospitals, and
show that our proposed method increases the prediction performance for the
complex task of predicting patient outcome.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 18:59:28 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yoon",
"WonJin",
""
],
[
"Ren",
"Boyu",
""
],
[
"Thomas",
"Spencer",
""
],
[
"Kim",
"Chanwhi",
""
],
[
"Savova",
"Guergana",
""
],
[
"Hall",
"Mei-Hua",
""
],
[
"Miller",
"Timothy",
""
]
]
| TITLE: Aspect-Oriented Summarization for Psychiatric Short-Term Readmission
Prediction
ABSTRACT: Recent progress in large language models (LLMs) has enabled the automated
processing of lengthy documents even without supervised training on a
task-specific dataset. Yet, their zero-shot performance in complex tasks as
opposed to straightforward information extraction tasks remains suboptimal. One
feasible approach for tasks with lengthy, complex input is to first summarize
the document and then apply supervised fine-tuning to the summary. However, the
summarization process inevitably results in some loss of information. In this
study we present a method for processing the summaries of long documents aimed
to capture different important aspects of the original document. We hypothesize
that LLM summaries generated with different aspect-oriented prompts contain
different \textit{information signals}, and we propose methods to measure these
differences. We introduce approaches to effectively integrate signals from
these different summaries for supervised training of transformer models. We
validate our hypotheses on a high-impact task -- 30-day readmission prediction
from a psychiatric discharge -- using real-world data from four hospitals, and
show that our proposed method increases the prediction performance for the
complex task of predicting patient outcome.
| no_new_dataset | 0.945147 |
2502.10967 | Xiao Shen Dr. | Xiao Shen, Zhihao Chen, Shirui Pan, Shuang Zhou, Laurence T. Yang, and
Xi Zhou | Open-Set Cross-Network Node Classification via Unknown-Excluded
Adversarial Graph Domain Alignment | In Proc. AAAI, 2025 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing cross-network node classification methods are mainly proposed for
the closed-set setting, where the source network and the target network share
exactly the same label space. Such a setting is restricted in real-world
applications, since the target network might contain additional classes that
are not present in the source. In this work, we study a more realistic open-set
cross-network node classification (O-CNNC) problem, where the target network
contains all the known classes in the source and further contains several
target-private classes unseen in the source. Borrowing the concept from
open-set domain adaptation, all target-private classes are defined as an
additional unknown class. To address the challenging O-CNNC problem, we propose
an unknown-excluded adversarial graph domain alignment (UAGA) model with a
separate-adapt training strategy. Firstly, UAGA roughly separates known classes
from unknown class, by training a graph neural network encoder and a
neighborhood-aggregation node classifier in an adversarial framework. Then,
unknown-excluded adversarial domain alignment is customized to align only
target nodes from known classes with the source, while pushing target nodes
from unknown class far away from the source, by assigning positive and negative
domain adaptation coefficients to known class nodes and unknown class nodes.
Extensive experiments on real-world datasets demonstrate significant
outperformance of the proposed UAGA over state-of-the-art methods on O-CNNC.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 03:00:42 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Shen",
"Xiao",
""
],
[
"Chen",
"Zhihao",
""
],
[
"Pan",
"Shirui",
""
],
[
"Zhou",
"Shuang",
""
],
[
"Yang",
"Laurence T.",
""
],
[
"Zhou",
"Xi",
""
]
]
| TITLE: Open-Set Cross-Network Node Classification via Unknown-Excluded
Adversarial Graph Domain Alignment
ABSTRACT: Existing cross-network node classification methods are mainly proposed for
the closed-set setting, where the source network and the target network share
exactly the same label space. Such a setting is restricted in real-world
applications, since the target network might contain additional classes that
are not present in the source. In this work, we study a more realistic open-set
cross-network node classification (O-CNNC) problem, where the target network
contains all the known classes in the source and further contains several
target-private classes unseen in the source. Borrowing the concept from
open-set domain adaptation, all target-private classes are defined as an
additional unknown class. To address the challenging O-CNNC problem, we propose
an unknown-excluded adversarial graph domain alignment (UAGA) model with a
separate-adapt training strategy. Firstly, UAGA roughly separates known classes
from unknown class, by training a graph neural network encoder and a
neighborhood-aggregation node classifier in an adversarial framework. Then,
unknown-excluded adversarial domain alignment is customized to align only
target nodes from known classes with the source, while pushing target nodes
from unknown class far away from the source, by assigning positive and negative
domain adaptation coefficients to known class nodes and unknown class nodes.
Extensive experiments on real-world datasets demonstrate significant
outperformance of the proposed UAGA over state-of-the-art methods on O-CNNC.
| no_new_dataset | 0.956431 |
2502.11610 | Renyu Zhao | Renyu Zhao, Yunxin Chen | Accuracy Assessment of OpenAlex and Clarivate Scholar ID with an
LLM-Assisted Benchmark | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | In quantitative SciSci (science of science) studies, accurately identifying
individual scholars is paramount for scientific data analysis. However, the
variability in how names are represented-due to commonality, abbreviations, and
different spelling conventions-complicates this task. While identifier systems
like ORCID are being developed, many scholars remain unregistered, and numerous
publications are not included. Scholarly databases such as Clarivate and
OpenAlex have introduced their own ID systems as preliminary name
disambiguation solutions. This study evaluates the effectiveness of these
systems across different groups to determine their suitability for various
application scenarios. We sampled authors from the top quartile (Q1) of Web of
Science (WOS) journals based on country, discipline, and number of
corresponding author papers. For each group, we selected 100 scholars and
meticulously annotated all their papers using a Search-enhanced Large Language
Model method. Using these annotations, we identified the corresponding IDs in
OpenAlex and Clarivate, extracted all associated papers, filtered for Q1 WOS
journals, and calculated precision and recall by comparing against the
annotated dataset.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 09:54:46 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:28:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhao",
"Renyu",
""
],
[
"Chen",
"Yunxin",
""
]
]
| TITLE: Accuracy Assessment of OpenAlex and Clarivate Scholar ID with an
LLM-Assisted Benchmark
ABSTRACT: In quantitative SciSci (science of science) studies, accurately identifying
individual scholars is paramount for scientific data analysis. However, the
variability in how names are represented-due to commonality, abbreviations, and
different spelling conventions-complicates this task. While identifier systems
like ORCID are being developed, many scholars remain unregistered, and numerous
publications are not included. Scholarly databases such as Clarivate and
OpenAlex have introduced their own ID systems as preliminary name
disambiguation solutions. This study evaluates the effectiveness of these
systems across different groups to determine their suitability for various
application scenarios. We sampled authors from the top quartile (Q1) of Web of
Science (WOS) journals based on country, discipline, and number of
corresponding author papers. For each group, we selected 100 scholars and
meticulously annotated all their papers using a Search-enhanced Large Language
Model method. Using these annotations, we identified the corresponding IDs in
OpenAlex and Clarivate, extracted all associated papers, filtered for Q1 WOS
journals, and calculated precision and recall by comparing against the
annotated dataset.
| no_new_dataset | 0.941708 |
2502.11619 | Anton Storgaard | Lauritz Christian Holme, Anton Mosquera Storgaard, Siavash Arjomand
Bigdeli | Membership Inference Attacks for Face Images Against Fine-Tuned Latent
Diffusion Models | In Proceedings of the 20th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP
2025) - Volume 2: VISAPP, pages 439-446 | In Proceedings of VISAPP 2025, ISBN 978-989-758-728-3, ISSN
2184-4321, pages 439-446 (2025) | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The rise of generative image models leads to privacy concerns when it comes
to the huge datasets used to train such models. This paper investigates the
possibility of inferring if a set of face images was used for fine-tuning a
Latent Diffusion Model (LDM). A Membership Inference Attack (MIA) method is
presented for this task. Using generated auxiliary data for the training of the
attack model leads to significantly better performance, and so does the use of
watermarks. The guidance scale used for inference was found to have a
significant influence. If an LDM is fine-tuned for long enough, the text prompt
used for inference has no significant influence. The proposed MIA is found to
be viable in a realistic black-box setup against LDMs fine-tuned on
face-images.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 10:01:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Holme",
"Lauritz Christian",
""
],
[
"Storgaard",
"Anton Mosquera",
""
],
[
"Bigdeli",
"Siavash Arjomand",
""
]
]
| TITLE: Membership Inference Attacks for Face Images Against Fine-Tuned Latent
Diffusion Models
ABSTRACT: The rise of generative image models leads to privacy concerns when it comes
to the huge datasets used to train such models. This paper investigates the
possibility of inferring if a set of face images was used for fine-tuning a
Latent Diffusion Model (LDM). A Membership Inference Attack (MIA) method is
presented for this task. Using generated auxiliary data for the training of the
attack model leads to significantly better performance, and so does the use of
watermarks. The guidance scale used for inference was found to have a
significant influence. If an LDM is fine-tuned for long enough, the text prompt
used for inference has no significant influence. The proposed MIA is found to
be viable in a realistic black-box setup against LDMs fine-tuned on
face-images.
| no_new_dataset | 0.952353 |
2502.12944 | Thomas Lee | William Toner, Thomas L. Lee, Artjom Joosen, Rajkarn Singh and Martin
Asenov | Performance of Zero-Shot Time Series Foundation Models on Cloud Data | 5 pages, Preprint | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Time series foundation models (FMs) have emerged as a popular paradigm for
zero-shot multi-domain forecasting. FMs are trained on numerous diverse
datasets and claim to be effective forecasters across multiple different time
series domains, including cloud data. In this work we investigate this claim,
exploring the effectiveness of FMs on cloud data. We demonstrate that many
well-known FMs fail to generate meaningful or accurate zero-shot forecasts in
this setting. We support this claim empirically, showing that FMs are
outperformed consistently by simple linear baselines. We also illustrate a
number of interesting pathologies, including instances where FMs suddenly
output seemingly erratic, random-looking forecasts. Our results suggest a
widespread failure of FMs to model cloud data.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 15:28:02 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 16:02:59 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Toner",
"William",
""
],
[
"Lee",
"Thomas L.",
""
],
[
"Joosen",
"Artjom",
""
],
[
"Singh",
"Rajkarn",
""
],
[
"Asenov",
"Martin",
""
]
]
| TITLE: Performance of Zero-Shot Time Series Foundation Models on Cloud Data
ABSTRACT: Time series foundation models (FMs) have emerged as a popular paradigm for
zero-shot multi-domain forecasting. FMs are trained on numerous diverse
datasets and claim to be effective forecasters across multiple different time
series domains, including cloud data. In this work we investigate this claim,
exploring the effectiveness of FMs on cloud data. We demonstrate that many
well-known FMs fail to generate meaningful or accurate zero-shot forecasts in
this setting. We support this claim empirically, showing that FMs are
outperformed consistently by simple linear baselines. We also illustrate a
number of interesting pathologies, including instances where FMs suddenly
output seemingly erratic, random-looking forecasts. Our results suggest a
widespread failure of FMs to model cloud data.
| no_new_dataset | 0.953405 |
2502.13044 | Nils Constantin Hellwig | Nils Constantin Hellwig, Jakob Fehle, Udo Kruschwitz, Christian Wolff | Do we still need Human Annotators? Prompting Large Language Models for
Aspect Sentiment Quad Prediction | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Aspect sentiment quadruple prediction (ASQP) facilitates a detailed
understanding of opinions expressed in a text by identifying the opinion term,
aspect term, aspect category and sentiment polarity for each opinion. However,
annotating a full set of training examples to fine-tune models for ASQP is a
resource-intensive process. In this study, we explore the capabilities of large
language models (LLMs) for zero- and few-shot learning on the ASQP task across
five diverse datasets. We report F1 scores slightly below those obtained with
state-of-the-art fine-tuned models but exceeding previously reported zero- and
few-shot performance. In the 40-shot setting on the Rest16 restaurant domain
dataset, LLMs achieved an F1 score of 52.46, compared to 60.39 by the
best-performing fine-tuned method MVP. Additionally, we report the performance
of LLMs in target aspect sentiment detection (TASD), where the F1 scores were
also close to fine-tuned models, achieving 66.03 on Rest16 in the 40-shot
setting, compared to 72.76 with MVP. While human annotators remain essential
for achieving optimal performance, LLMs can reduce the need for extensive
manual annotation in ASQP tasks.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 16:56:15 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 13:51:34 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Hellwig",
"Nils Constantin",
""
],
[
"Fehle",
"Jakob",
""
],
[
"Kruschwitz",
"Udo",
""
],
[
"Wolff",
"Christian",
""
]
]
| TITLE: Do we still need Human Annotators? Prompting Large Language Models for
Aspect Sentiment Quad Prediction
ABSTRACT: Aspect sentiment quadruple prediction (ASQP) facilitates a detailed
understanding of opinions expressed in a text by identifying the opinion term,
aspect term, aspect category and sentiment polarity for each opinion. However,
annotating a full set of training examples to fine-tune models for ASQP is a
resource-intensive process. In this study, we explore the capabilities of large
language models (LLMs) for zero- and few-shot learning on the ASQP task across
five diverse datasets. We report F1 scores slightly below those obtained with
state-of-the-art fine-tuned models but exceeding previously reported zero- and
few-shot performance. In the 40-shot setting on the Rest16 restaurant domain
dataset, LLMs achieved an F1 score of 52.46, compared to 60.39 by the
best-performing fine-tuned method MVP. Additionally, we report the performance
of LLMs in target aspect sentiment detection (TASD), where the F1 scores were
also close to fine-tuned models, achieving 66.03 on Rest16 in the 40-shot
setting, compared to 72.76 with MVP. While human annotators remain essential
for achieving optimal performance, LLMs can reduce the need for extensive
manual annotation in ASQP tasks.
| no_new_dataset | 0.944842 |
2502.14401 | Paul Friedrich | Paul Friedrich, Florentin Bieder, Philippe C. Cattin | MedFuncta: Modality-Agnostic Representations Based on Efficient Neural
Fields | Project page: https://pfriedri.github.io/medfuncta-io/ Code and
Dataset: https://github.com/pfriedri/medfuncta/ | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research in medical image analysis with deep learning almost
exclusively focuses on grid- or voxel-based data representations. We challenge
this common choice by introducing MedFuncta, a modality-agnostic continuous
data representation based on neural fields. We demonstrate how to scale neural
fields from single instances to large datasets by exploiting redundancy in
medical signals and by applying an efficient meta-learning approach with a
context reduction scheme. We further address the spectral bias in commonly used
SIREN activations, by introducing an $\omega_0$-schedule, improving
reconstruction quality and convergence speed. We validate our proposed approach
on a large variety of medical signals of different dimensions and modalities
(1D: ECG; 2D: Chest X-ray, Retinal OCT, Fundus Camera, Dermatoscope, Colon
Histopathology, Cell Microscopy; 3D: Brain MRI, Lung CT) and successfully
demonstrate that we can solve relevant downstream tasks on these
representations. We additionally release a large-scale dataset of > 550k
annotated neural fields to promote research in this direction.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 09:38:13 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 13:08:22 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Friedrich",
"Paul",
""
],
[
"Bieder",
"Florentin",
""
],
[
"Cattin",
"Philippe C.",
""
]
]
| TITLE: MedFuncta: Modality-Agnostic Representations Based on Efficient Neural
Fields
ABSTRACT: Recent research in medical image analysis with deep learning almost
exclusively focuses on grid- or voxel-based data representations. We challenge
this common choice by introducing MedFuncta, a modality-agnostic continuous
data representation based on neural fields. We demonstrate how to scale neural
fields from single instances to large datasets by exploiting redundancy in
medical signals and by applying an efficient meta-learning approach with a
context reduction scheme. We further address the spectral bias in commonly used
SIREN activations, by introducing an $\omega_0$-schedule, improving
reconstruction quality and convergence speed. We validate our proposed approach
on a large variety of medical signals of different dimensions and modalities
(1D: ECG; 2D: Chest X-ray, Retinal OCT, Fundus Camera, Dermatoscope, Colon
Histopathology, Cell Microscopy; 3D: Brain MRI, Lung CT) and successfully
demonstrate that we can solve relevant downstream tasks on these
representations. We additionally release a large-scale dataset of > 550k
annotated neural fields to promote research in this direction.
| new_dataset | 0.951863 |
2502.14801 | Cheng Li | Cheng Li, Keyuan Zhou, Tong Liu, Yu Wang, Mingqiao Zhuang, Huan-ang
Gao, Bu Jin and Hao Zhao | AVD2: Accident Video Diffusion for Accident Video Description | ICRA 2025, Project Page: https://an-answer-tree.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Traffic accidents present complex challenges for autonomous driving, often
featuring unpredictable scenarios that hinder accurate system interpretation
and responses. Nonetheless, prevailing methodologies fall short in elucidating
the causes of accidents and proposing preventive measures due to the paucity of
training data specific to accident scenarios. In this work, we introduce AVD2
(Accident Video Diffusion for Accident Video Description), a novel framework
that enhances accident scene understanding by generating accident videos that
align with detailed natural language descriptions and reasoning, resulting in
the contributed EMM-AU (Enhanced Multi-Modal Accident Video Understanding)
dataset. Empirical results reveal that the integration of the EMM-AU dataset
establishes state-of-the-art performance across both automated metrics and
human evaluations, markedly advancing the domains of accident analysis and
prevention. Project resources are available at https://an-answer-tree.github.io
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 18:22:44 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Feb 2025 05:33:06 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 10:28:47 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Cheng",
""
],
[
"Zhou",
"Keyuan",
""
],
[
"Liu",
"Tong",
""
],
[
"Wang",
"Yu",
""
],
[
"Zhuang",
"Mingqiao",
""
],
[
"Gao",
"Huan-ang",
""
],
[
"Jin",
"Bu",
""
],
[
"Zhao",
"Hao",
""
]
]
| TITLE: AVD2: Accident Video Diffusion for Accident Video Description
ABSTRACT: Traffic accidents present complex challenges for autonomous driving, often
featuring unpredictable scenarios that hinder accurate system interpretation
and responses. Nonetheless, prevailing methodologies fall short in elucidating
the causes of accidents and proposing preventive measures due to the paucity of
training data specific to accident scenarios. In this work, we introduce AVD2
(Accident Video Diffusion for Accident Video Description), a novel framework
that enhances accident scene understanding by generating accident videos that
align with detailed natural language descriptions and reasoning, resulting in
the contributed EMM-AU (Enhanced Multi-Modal Accident Video Understanding)
dataset. Empirical results reveal that the integration of the EMM-AU dataset
establishes state-of-the-art performance across both automated metrics and
human evaluations, markedly advancing the domains of accident analysis and
prevention. Project resources are available at https://an-answer-tree.github.io
| new_dataset | 0.75037 |
2502.14827 | Aiswarya Baby | Aiswarya Baby and Tintu Thankom Koshy | Exploring Advanced Techniques for Visual Question Answering: A
Comprehensive Comparison | 8 pages, No figures | null | null | null | cs.CV cs.AI cs.ET cs.LG | http://creativecommons.org/licenses/by/4.0/ | Visual Question Answering (VQA) has emerged as a pivotal task in the
intersection of computer vision and natural language processing, requiring
models to understand and reason about visual content in response to natural
language questions. Analyzing VQA datasets is essential for developing robust
models that can handle the complexities of multimodal reasoning. Several
approaches have been developed to examine these datasets, each offering
distinct perspectives on question diversity, answer distribution, and
visual-textual correlations. Despite significant progress, existing VQA models
face challenges related to dataset bias, limited model complexity, commonsense
reasoning gaps, rigid evaluation methods, and generalization to real-world
scenarios. This paper offers a detailed study of the original VQA dataset,
baseline models and methods along with a comparative study of five advanced VQA
models, ABC-CNN, KICNLE, Masked Vision and Language Modeling, BLIP-2, and OFA,
each employing distinct methods to address these ongoing challenges.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 18:45:00 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 16:43:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Baby",
"Aiswarya",
""
],
[
"Koshy",
"Tintu Thankom",
""
]
]
| TITLE: Exploring Advanced Techniques for Visual Question Answering: A
Comprehensive Comparison
ABSTRACT: Visual Question Answering (VQA) has emerged as a pivotal task in the
intersection of computer vision and natural language processing, requiring
models to understand and reason about visual content in response to natural
language questions. Analyzing VQA datasets is essential for developing robust
models that can handle the complexities of multimodal reasoning. Several
approaches have been developed to examine these datasets, each offering
distinct perspectives on question diversity, answer distribution, and
visual-textual correlations. Despite significant progress, existing VQA models
face challenges related to dataset bias, limited model complexity, commonsense
reasoning gaps, rigid evaluation methods, and generalization to real-world
scenarios. This paper offers a detailed study of the original VQA dataset,
baseline models and methods along with a comparative study of five advanced VQA
models, ABC-CNN, KICNLE, Masked Vision and Language Modeling, BLIP-2, and OFA,
each employing distinct methods to address these ongoing challenges.
| no_new_dataset | 0.940953 |
2502.15331 | Jinyu Zhang | Jinyu Zhang, Chao Li, Zhongying Zhao | Lightweight yet Efficient: An External Attentive Graph Convolutional
Network with Positional Prompts for Sequential Recommendation | 26 pages, 8 figures, journal paper, accepted by TOIS at 20th
February, 2025 | null | 10.1145/3719343 | null | cs.IR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph-based Sequential Recommender systems (GSRs) have gained significant
research attention due to their ability to simultaneously handle user-item
interactions and sequential relationships between items. Current GSRs often
utilize composite or in-depth structures for graph encoding (e.g., the Graph
Transformer). Nevertheless, they have high computational complexity, hindering
the deployment on resource-constrained edge devices. Moreover, the relative
position encoding in Graph Transformer has difficulty in considering the
complicated positional dependencies within sequences. To this end, we propose an
External Attentive Graph convolutional network with Positional prompts for
Sequential recommendation, namely EA-GPS. Specifically, we first introduce an
external attentive graph convolutional network that linearly measures the
global associations among nodes via two external memory units. Then, we present
a positional prompt-based decoder that explicitly treats the absolute item
positions as external prompts. By introducing length-adaptive sequential
masking and a soft attention network, such a decoder facilitates the model to
capture the long-term positional dependencies and contextual relationships
within sequences. Extensive experimental results on five real-world datasets
demonstrate that the proposed EA-GPS outperforms the state-of-the-art methods.
Remarkably, it achieves superior performance while maintaining a smaller
parameter size and lower training overhead. The implementation of this work is
publicly available at https://github.com/ZZY-GraphMiningLab/EA-GPS.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 09:34:31 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 02:18:36 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhang",
"Jinyu",
""
],
[
"Li",
"Chao",
""
],
[
"Zhao",
"Zhongying",
""
]
]
| TITLE: Lightweight yet Efficient: An External Attentive Graph Convolutional
Network with Positional Prompts for Sequential Recommendation
ABSTRACT: Graph-based Sequential Recommender systems (GSRs) have gained significant
research attention due to their ability to simultaneously handle user-item
interactions and sequential relationships between items. Current GSRs often
utilize composite or in-depth structures for graph encoding (e.g., the Graph
Transformer). Nevertheless, they have high computational complexity, hindering
the deployment on resource-constrained edge devices. Moreover, the relative
position encoding in Graph Transformer has difficulty in considering the
complicated positional dependencies within sequences. To this end, we propose an
External Attentive Graph convolutional network with Positional prompts for
Sequential recommendation, namely EA-GPS. Specifically, we first introduce an
external attentive graph convolutional network that linearly measures the
global associations among nodes via two external memory units. Then, we present
a positional prompt-based decoder that explicitly treats the absolute item
positions as external prompts. By introducing length-adaptive sequential
masking and a soft attention network, such a decoder facilitates the model to
capture the long-term positional dependencies and contextual relationships
within sequences. Extensive experimental results on five real-world datasets
demonstrate that the proposed EA-GPS outperforms the state-of-the-art methods.
Remarkably, it achieves superior performance while maintaining a smaller
parameter size and lower training overhead. The implementation of this work is
publicly available at https://github.com/ZZY-GraphMiningLab/EA-GPS.
| no_new_dataset | 0.950503 |
2502.15530 | Flaviano Della Pia | Flaviano Della Pia, Benjamin X. Shi, Venkat Kapil, Andrea Zen, Dario
Alf\`e, Angelos Michaelides | Accurate and efficient machine learning interatomic potentials for
finite temperature modeling of molecular crystals | Updated figure 1 with corrected energy errors | null | null | null | physics.comp-ph cond-mat.mtrl-sci | http://creativecommons.org/licenses/by/4.0/ | As with many parts of the natural sciences, machine learning interatomic
potentials (MLIPs) are revolutionizing the modeling of molecular crystals.
However, challenges remain for the accurate and efficient calculation of
sublimation enthalpies - a key thermodynamic quantity measuring the stability
of a molecular crystal. Specifically, two key stumbling blocks are: (i) the
need for thousands of ab initio quality reference structures to generate
training data; and (ii) the sometimes unreliable nature of density functional
theory, the main technique for generating such data. Exploiting recent
developments in foundational models for chemistry and materials science
alongside accurate quantum diffusion Monte Carlo benchmarks offers a promising
path forward. Herein, we demonstrate the generation of MLIPs capable of
describing molecular crystals at finite temperature and pressure with
sub-chemical accuracy, using as few as $\sim 200$ data structures; an order of
magnitude improvement over the current state-of-the-art. We apply this
framework to compute the sublimation enthalpies of the X23 dataset, accounting
for anharmonicity and nuclear quantum effects, achieving sub-chemical accuracy
with respect to experiment. Importantly, we show that our framework can be
generalized to crystals of pharmaceutical relevance, including paracetamol and
aspirin. Nuclear quantum effects are also accurately captured as shown for the
case of squaric acid. By enabling accurate modeling at ambient conditions, this
work paves the way for deeper insights into pharmaceutical and biological
systems.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 15:30:56 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 10:55:06 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Della Pia",
"Flaviano",
""
],
[
"Shi",
"Benjamin X.",
""
],
[
"Kapil",
"Venkat",
""
],
[
"Zen",
"Andrea",
""
],
[
"Alfè",
"Dario",
""
],
[
"Michaelides",
"Angelos",
""
]
]
| TITLE: Accurate and efficient machine learning interatomic potentials for
finite temperature modeling of molecular crystals
ABSTRACT: As with many parts of the natural sciences, machine learning interatomic
potentials (MLIPs) are revolutionizing the modeling of molecular crystals.
However, challenges remain for the accurate and efficient calculation of
sublimation enthalpies - a key thermodynamic quantity measuring the stability
of a molecular crystal. Specifically, two key stumbling blocks are: (i) the
need for thousands of ab initio quality reference structures to generate
training data; and (ii) the sometimes unreliable nature of density functional
theory, the main technique for generating such data. Exploiting recent
developments in foundational models for chemistry and materials science
alongside accurate quantum diffusion Monte Carlo benchmarks offers a promising
path forward. Herein, we demonstrate the generation of MLIPs capable of
describing molecular crystals at finite temperature and pressure with
sub-chemical accuracy, using as few as $\sim 200$ data structures; an order of
magnitude improvement over the current state-of-the-art. We apply this
framework to compute the sublimation enthalpies of the X23 dataset, accounting
for anharmonicity and nuclear quantum effects, achieving sub-chemical accuracy
with respect to experiment. Importantly, we show that our framework can be
generalized to crystals of pharmaceutical relevance, including paracetamol and
aspirin. Nuclear quantum effects are also accurately captured as shown for the
case of squaric acid. By enabling accurate modeling at ambient conditions, this
work paves the way for deeper insights into pharmaceutical and biological
systems.
| no_new_dataset | 0.942454 |
2502.16779 | Yaxuan Huang | Yaxuan Huang, Xili Dai, Jianan Wang, Xianbiao Qi, Yixing Yuan, Xiangyu
Yue | Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain
Model | Accepted by ICLR 2025. Github
page:https://github.com/justacar/Plane-DUSt3R | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Room layout estimation from multiple-perspective images is poorly
investigated due to the complexities that emerge from multi-view geometry,
which requires multi-step solutions such as camera intrinsic and extrinsic
estimation, image matching, and triangulation. However, in 3D reconstruction,
the advancement of recent 3D foundation models such as DUSt3R has shifted the
paradigm from the traditional multi-step structure-from-motion process to an
end-to-end single-step approach. To this end, we introduce Plane-DUSt3R, a
novel method for multi-view room layout estimation leveraging the 3D foundation
model DUSt3R. Plane-DUSt3R incorporates the DUSt3R framework and fine-tunes on
a room layout dataset (Structure3D) with a modified objective to estimate
structural planes. By generating uniform and parsimonious results, Plane-DUSt3R
enables room layout estimation with only a single post-processing step and 2D
detection results. Unlike previous methods that rely on single-perspective or
panorama images, Plane-DUSt3R extends the setting to handle multiple-perspective
images. Moreover, it offers a streamlined, end-to-end solution that simplifies
the process and reduces error accumulation. Experimental results demonstrate
that Plane-DUSt3R not only outperforms state-of-the-art methods on the
synthetic dataset but also proves robust and effective on in-the-wild data with
different image styles such as cartoons. Our code is available at:
https://github.com/justacar/Plane-DUSt3R
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 02:14:19 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 03:33:01 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 09:24:06 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Yaxuan",
""
],
[
"Dai",
"Xili",
""
],
[
"Wang",
"Jianan",
""
],
[
"Qi",
"Xianbiao",
""
],
[
"Yuan",
"Yixing",
""
],
[
"Yue",
"Xiangyu",
""
]
]
| TITLE: Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain
Model
ABSTRACT: Room layout estimation from multiple-perspective images is poorly
investigated due to the complexities that emerge from multi-view geometry,
which requires multi-step solutions such as camera intrinsic and extrinsic
estimation, image matching, and triangulation. However, in 3D reconstruction,
the advancement of recent 3D foundation models such as DUSt3R has shifted the
paradigm from the traditional multi-step structure-from-motion process to an
end-to-end single-step approach. To this end, we introduce Plane-DUSt3R, a
novel method for multi-view room layout estimation leveraging the 3D foundation
model DUSt3R. Plane-DUSt3R incorporates the DUSt3R framework and fine-tunes on
a room layout dataset (Structure3D) with a modified objective to estimate
structural planes. By generating uniform and parsimonious results, Plane-DUSt3R
enables room layout estimation with only a single post-processing step and 2D
detection results. Unlike previous methods that rely on single-perspective or
panorama images, Plane-DUSt3R extends the setting to handle multiple-perspective
images. Moreover, it offers a streamlined, end-to-end solution that simplifies
the process and reduces error accumulation. Experimental results demonstrate
that Plane-DUSt3R not only outperforms state-of-the-art methods on the
synthetic dataset but also proves robust and effective on in-the-wild data with
different image styles such as cartoons. Our code is available at:
https://github.com/justacar/Plane-DUSt3R
| no_new_dataset | 0.947962 |
2502.17263 | Jinghui Cheng | Arghavan Sanei, Jinghui Cheng | Untold Stories: Unveiling the Scarce Contributions of UX Professionals
to Usability Issue Discussions of Open Source Software Projects | 6 pages, 4 figures, CHI 2025 LBW | null | 10.1145/3706599.3720063 | null | cs.HC cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous work established that open source software (OSS) projects can
benefit from the involvement of UX professionals, who offer user-centric
perspectives and contributions to improve software usability. However, their
participation in OSS issue discussions (places where design and implementation
decisions are often made) is relatively scarce since those platforms are
created with a developer-centric mindset. Analyzing a dataset sampled from five
OSS projects, this study identifies UX professionals' distinct approaches to
raising and following up on usability issues. Compared to other contributors,
UX professionals addressed a broader range of usability issues, well-supported
their stances, and were more factual than emotional. They also actively engaged
in discussions to provide additional insights and clarifications in comments
following up on the issues they posted. Results from this study provide useful
insights for increasing UX professionals' involvement in OSS communities to
improve usability and end-user satisfaction.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 15:45:13 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 19:50:15 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Sanei",
"Arghavan",
""
],
[
"Cheng",
"Jinghui",
""
]
]
| TITLE: Untold Stories: Unveiling the Scarce Contributions of UX Professionals
to Usability Issue Discussions of Open Source Software Projects
ABSTRACT: Previous work established that open source software (OSS) projects can
benefit from the involvement of UX professionals, who offer user-centric
perspectives and contributions to improve software usability. However, their
participation in OSS issue discussions (places where design and implementation
decisions are often made) is relatively scarce since those platforms are
created with a developer-centric mindset. Analyzing a dataset sampled from five
OSS projects, this study identifies UX professionals' distinct approaches to
raising and following up on usability issues. Compared to other contributors,
UX professionals addressed a broader range of usability issues, well-supported
their stances, and were more factual than emotional. They also actively engaged
in discussions to provide additional insights and clarifications in comments
following up on the issues they posted. Results from this study provide useful
insights for increasing UX professionals' involvement in OSS communities to
improve usability and end-user satisfaction.
| no_new_dataset | 0.945349 |
2502.17403 | Stefan Hegselmann | Stefan Hegselmann, Georg von Arnim, Tillmann Rheude, Noel Kronenberg,
David Sontag, Gerhard Hindricks, Roland Eils, Benjamin Wild | Large Language Models are Powerful EHR Encoders | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Electronic Health Records (EHRs) offer rich potential for clinical
prediction, yet their inherent complexity and heterogeneity pose significant
challenges for traditional machine learning approaches. Domain-specific EHR
foundation models trained on large collections of unlabeled EHR data have
demonstrated promising improvements in predictive accuracy and generalization;
however, their training is constrained by limited access to diverse,
high-quality datasets and inconsistencies in coding standards and healthcare
practices. In this study, we explore the possibility of using general-purpose
Large Language Models (LLMs) based embedding methods as EHR encoders. By
serializing patient records into structured Markdown text and transforming codes
into human-readable descriptors, we leverage the extensive generalization
capabilities of LLMs pretrained on vast public corpora, thereby bypassing the
need for proprietary medical datasets. We systematically evaluate two
state-of-the-art LLM-embedding models, GTE-Qwen2-7B-Instruct and
LLM2Vec-Llama3.1-8B-Instruct, across 15 diverse clinical prediction tasks from
the EHRSHOT benchmark, comparing their performance to an EHR-specific foundation
model, CLIMBR-T-Base, and traditional machine learning baselines. Our results
demonstrate that LLM-based embeddings frequently match or exceed the
performance of specialized models, even in few-shot settings, and that their
effectiveness scales with the size of the underlying LLM and the available
context window. Overall, our findings demonstrate that repurposing LLMs for EHR
encoding offers a scalable and effective approach for clinical prediction,
capable of overcoming the limitations of traditional EHR modeling and
facilitating more interoperable and generalizable healthcare applications.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 18:30:36 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 16:36:52 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Hegselmann",
"Stefan",
""
],
[
"von Arnim",
"Georg",
""
],
[
"Rheude",
"Tillmann",
""
],
[
"Kronenberg",
"Noel",
""
],
[
"Sontag",
"David",
""
],
[
"Hindricks",
"Gerhard",
""
],
[
"Eils",
"Roland",
""
],
[
"Wild",
"Benjamin",
""
]
]
| TITLE: Large Language Models are Powerful EHR Encoders
ABSTRACT: Electronic Health Records (EHRs) offer rich potential for clinical
prediction, yet their inherent complexity and heterogeneity pose significant
challenges for traditional machine learning approaches. Domain-specific EHR
foundation models trained on large collections of unlabeled EHR data have
demonstrated promising improvements in predictive accuracy and generalization;
however, their training is constrained by limited access to diverse,
high-quality datasets and inconsistencies in coding standards and healthcare
practices. In this study, we explore the possibility of using general-purpose
Large Language Models (LLMs) based embedding methods as EHR encoders. By
serializing patient records into structured Markdown text and transforming codes
into human-readable descriptors, we leverage the extensive generalization
capabilities of LLMs pretrained on vast public corpora, thereby bypassing the
need for proprietary medical datasets. We systematically evaluate two
state-of-the-art LLM-embedding models, GTE-Qwen2-7B-Instruct and
LLM2Vec-Llama3.1-8B-Instruct, across 15 diverse clinical prediction tasks from
the EHRSHOT benchmark, comparing their performance to an EHR-specific foundation
model, CLIMBR-T-Base, and traditional machine learning baselines. Our results
demonstrate that LLM-based embeddings frequently match or exceed the
performance of specialized models, even in few-shot settings, and that their
effectiveness scales with the size of the underlying LLM and the available
context window. Overall, our findings demonstrate that repurposing LLMs for EHR
encoding offers a scalable and effective approach for clinical prediction,
capable of overcoming the limitations of traditional EHR modeling and
facilitating more interoperable and generalizable healthcare applications.
| no_new_dataset | 0.937954 |
2502.17494 | Mingfu Liang | Mingfu Liang, Xi Liu, Rong Jin, Boyang Liu, Qiuling Suo, Qinghai Zhou,
Song Zhou, Laming Chen, Hua Zheng, Zhiyuan Li, Shali Jiang, Jiyan Yang,
Xiaozhen Xia, Fan Yang, Yasmine Badr, Ellie Wen, Shuyu Xu, Hansey Chen,
Zhengyu Zhang, Jade Nie, Chunzhi Yang, Zhichen Zeng, Weilin Zhang, Xingliang
Huang, Qianru Li, Shiquan Wang, Evelyn Lyu, Wenjing Lu, Rui Zhang, Wenjun
Wang, Jason Rudy, Mengyue Hang, Kai Wang, Yinbin Ma, Shuaiwen Wang, Sihan
Zeng, Tongyi Tang, Xiaohan Wei, Longhao Jin, Jamey Zhang, Marcus Chen, Jiayi
Zhang, Angie Huang, Chi Zhang, Zhengli Zhao, Jared Yang, Qiang Jin, Xian
Chen, Amit Anand Amlesahwaram, Lexi Song, Liang Luo, Yuchen Hao, Nan Xiao,
Yavuz Yetim, Luoshang Pan, Gaoxiang Liu, Yuxi Hu, Yuzhen Huang, Jackie Xu,
Rich Zhu, Xin Zhang, Yiqun Liu, Hang Yin, Yuxin Chen, Buyun Zhang, Xiaoyi
Liu, Xingyuan Wang, Wenguang Mao, Zhijing Li, Qin Huang, Chonglin Sun, Nancy
Yu, Shuo Gu, Shupin Mao, Benjamin Au, Jingzheng Qin, Peggy Yao, Jae-Woo Choi,
Bin Gao, Ernest Wang, Lei Zhang, Wen-Yen Chen, Ted Lee, Jay Zha, Yi Meng,
Alex Gong, Edison Gao, Alireza Vahdatpour, Yiping Han, Yantao Yao, Toshinari
Kureha, Shuo Chang, Musharaf Sultan, John Bocharov, Sagar Chordia, Xiaorui
Gan, Peng Sun, Rocky Liu, Bo Long, Wenlin Chen, Santanu Kolay, Huayu Li | External Large Foundation Model: How to Efficiently Serve Trillions of
Parameters for Online Ads Recommendation | Accepted by the ACM Web Conference (WWW) 2025 Industrial Track as
Oral Presentation | null | null | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Ads recommendation is a prominent service of online advertising systems and
has been actively studied. Recent studies indicate that scaling-up and advanced
design of the recommendation model can bring significant performance
improvement. However, with a larger model scale, such prior studies have a
significantly increasing gap from industry as they often neglect two
fundamental challenges in industrial-scale applications. First, training and
inference budgets are restricted for the model to be served, exceeding which
may incur latency and impair user experience. Second, large-volume data arrive
in a streaming mode with data distributions dynamically shifting, as new
users/ads join and existing users/ads leave the system. We propose the External
Large Foundation Model (ExFM) framework to address the overlooked challenges.
Specifically, we develop external distillation and a data augmentation system
(DAS) to control the computational cost of training/inference while maintaining
high performance. We design the teacher in the manner of a foundation model (FM)
that can serve multiple students as vertical models (VMs) to amortize its
building cost. We propose Auxiliary Head and Student Adapter to mitigate the
data distribution gap between FM and VMs caused by the streaming data issue.
Comprehensive experiments on internal industrial-scale applications and public
datasets demonstrate significant performance gain by ExFM.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 22:35:52 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 05:29:28 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Feb 2025 23:32:37 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 22:21:09 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Liang",
"Mingfu",
""
],
[
"Liu",
"Xi",
""
],
[
"Jin",
"Rong",
""
],
[
"Liu",
"Boyang",
""
],
[
"Suo",
"Qiuling",
""
],
[
"Zhou",
"Qinghai",
""
],
[
"Zhou",
"Song",
""
],
[
"Chen",
"Laming",
""
],
[
"Zheng",
"Hua",
""
],
[
"Li",
"Zhiyuan",
""
],
[
"Jiang",
"Shali",
""
],
[
"Yang",
"Jiyan",
""
],
[
"Xia",
"Xiaozhen",
""
],
[
"Yang",
"Fan",
""
],
[
"Badr",
"Yasmine",
""
],
[
"Wen",
"Ellie",
""
],
[
"Xu",
"Shuyu",
""
],
[
"Chen",
"Hansey",
""
],
[
"Zhang",
"Zhengyu",
""
],
[
"Nie",
"Jade",
""
],
[
"Yang",
"Chunzhi",
""
],
[
"Zeng",
"Zhichen",
""
],
[
"Zhang",
"Weilin",
""
],
[
"Huang",
"Xingliang",
""
],
[
"Li",
"Qianru",
""
],
[
"Wang",
"Shiquan",
""
],
[
"Lyu",
"Evelyn",
""
],
[
"Lu",
"Wenjing",
""
],
[
"Zhang",
"Rui",
""
],
[
"Wang",
"Wenjun",
""
],
[
"Rudy",
"Jason",
""
],
[
"Hang",
"Mengyue",
""
],
[
"Wang",
"Kai",
""
],
[
"Ma",
"Yinbin",
""
],
[
"Wang",
"Shuaiwen",
""
],
[
"Zeng",
"Sihan",
""
],
[
"Tang",
"Tongyi",
""
],
[
"Wei",
"Xiaohan",
""
],
[
"Jin",
"Longhao",
""
],
[
"Zhang",
"Jamey",
""
],
[
"Chen",
"Marcus",
""
],
[
"Zhang",
"Jiayi",
""
],
[
"Huang",
"Angie",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhao",
"Zhengli",
""
],
[
"Yang",
"Jared",
""
],
[
"Jin",
"Qiang",
""
],
[
"Chen",
"Xian",
""
],
[
"Amlesahwaram",
"Amit Anand",
""
],
[
"Song",
"Lexi",
""
],
[
"Luo",
"Liang",
""
],
[
"Hao",
"Yuchen",
""
],
[
"Xiao",
"Nan",
""
],
[
"Yetim",
"Yavuz",
""
],
[
"Pan",
"Luoshang",
""
],
[
"Liu",
"Gaoxiang",
""
],
[
"Hu",
"Yuxi",
""
],
[
"Huang",
"Yuzhen",
""
],
[
"Xu",
"Jackie",
""
],
[
"Zhu",
"Rich",
""
],
[
"Zhang",
"Xin",
""
],
[
"Liu",
"Yiqun",
""
],
[
"Yin",
"Hang",
""
],
[
"Chen",
"Yuxin",
""
],
[
"Zhang",
"Buyun",
""
],
[
"Liu",
"Xiaoyi",
""
],
[
"Wang",
"Xingyuan",
""
],
[
"Mao",
"Wenguang",
""
],
[
"Li",
"Zhijing",
""
],
[
"Huang",
"Qin",
""
],
[
"Sun",
"Chonglin",
""
],
[
"Yu",
"Nancy",
""
],
[
"Gu",
"Shuo",
""
],
[
"Mao",
"Shupin",
""
],
[
"Au",
"Benjamin",
""
],
[
"Qin",
"Jingzheng",
""
],
[
"Yao",
"Peggy",
""
],
[
"Choi",
"Jae-Woo",
""
],
[
"Gao",
"Bin",
""
],
[
"Wang",
"Ernest",
""
],
[
"Zhang",
"Lei",
""
],
[
"Chen",
"Wen-Yen",
""
],
[
"Lee",
"Ted",
""
],
[
"Zha",
"Jay",
""
],
[
"Meng",
"Yi",
""
],
[
"Gong",
"Alex",
""
],
[
"Gao",
"Edison",
""
],
[
"Vahdatpour",
"Alireza",
""
],
[
"Han",
"Yiping",
""
],
[
"Yao",
"Yantao",
""
],
[
"Kureha",
"Toshinari",
""
],
[
"Chang",
"Shuo",
""
],
[
"Sultan",
"Musharaf",
""
],
[
"Bocharov",
"John",
""
],
[
"Chordia",
"Sagar",
""
],
[
"Gan",
"Xiaorui",
""
],
[
"Sun",
"Peng",
""
],
[
"Liu",
"Rocky",
""
],
[
"Long",
"Bo",
""
],
[
"Chen",
"Wenlin",
""
],
[
"Kolay",
"Santanu",
""
],
[
"Li",
"Huayu",
""
]
]
| TITLE: External Large Foundation Model: How to Efficiently Serve Trillions of
Parameters for Online Ads Recommendation
ABSTRACT: Ads recommendation is a prominent service of online advertising systems and
has been actively studied. Recent studies indicate that scaling-up and advanced
design of the recommendation model can bring significant performance
improvement. However, with a larger model scale, such prior studies have a
significantly increasing gap from industry as they often neglect two
fundamental challenges in industrial-scale applications. First, training and
inference budgets are restricted for the model to be served, exceeding which
may incur latency and impair user experience. Second, large-volume data arrive
in a streaming mode with data distributions dynamically shifting, as new
users/ads join and existing users/ads leave the system. We propose the External
Large Foundation Model (ExFM) framework to address the overlooked challenges.
Specifically, we develop external distillation and a data augmentation system
(DAS) to control the computational cost of training/inference while maintaining
high performance. We design the teacher in the manner of a foundation model (FM)
that can serve multiple students as vertical models (VMs) to amortize its
building cost. We propose Auxiliary Head and Student Adapter to mitigate the
data distribution gap between FM and VMs caused by the streaming data issue.
Comprehensive experiments on internal industrial-scale applications and public
datasets demonstrate significant performance gain by ExFM.
| no_new_dataset | 0.947866 |
2502.18094 | Shengtian Mian | Shengtian Mian and Ya Wang and Nannan Gu and Yuping Wang and Xiaoqing
Li | FwNet-ECA: A Classification Model Enhancing Window Attention with Global
Receptive Fields via Fourier Filtering Operations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Windowed attention mechanisms were introduced to mitigate the issue of
excessive computation inherent in global attention mechanisms. In this paper,
we present FwNet-ECA, a novel method that utilizes Fourier transforms paired
with learnable weight matrices to enhance the spectral features of images. This
method establishes a global receptive field through Filter Enhancement and
avoids the use of moving window attention. Additionally, we incorporate the
Efficient Channel Attention (ECA) module to improve communication between
different channels. Instead of relying on physically shifted windows, our
approach leverages frequency domain enhancement to implicitly bridge
information across spatial regions. We validate our model on the iCartoonFace
dataset and conduct downstream tasks on ImageNet, demonstrating that our model
achieves lower parameter counts and computational overheads compared to shifted
window approaches, while maintaining competitive accuracy. Furthermore, our
visualization operations clearly demonstrated that the Filter Enhancement
technique achieves greater effectiveness in the model's shallow layers, where
feature maps are relatively larger. This work offers a more efficient and
effective alternative for leveraging attention mechanisms in visual processing
tasks, alleviating the challenges associated with windowed attention models.
Code is available at https://github.com/qingxiaoli/FwNet-ECA
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 11:01:53 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 00:48:00 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mian",
"Shengtian",
""
],
[
"Wang",
"Ya",
""
],
[
"Gu",
"Nannan",
""
],
[
"Wang",
"Yuping",
""
],
[
"Li",
"Xiaoqing",
""
]
]
| TITLE: FwNet-ECA: A Classification Model Enhancing Window Attention with Global
Receptive Fields via Fourier Filtering Operations
ABSTRACT: Windowed attention mechanisms were introduced to mitigate the issue of
excessive computation inherent in global attention mechanisms. In this paper,
we present FwNet-ECA, a novel method that utilizes Fourier transforms paired
with learnable weight matrices to enhance the spectral features of images. This
method establishes a global receptive field through Filter Enhancement and
avoids the use of moving window attention. Additionally, we incorporate the
Efficient Channel Attention (ECA) module to improve communication between
different channels. Instead of relying on physically shifted windows, our
approach leverages frequency domain enhancement to implicitly bridge
information across spatial regions. We validate our model on the iCartoonFace
dataset and conduct downstream tasks on ImageNet, demonstrating that our model
achieves lower parameter counts and computational overheads compared to shifted
window approaches, while maintaining competitive accuracy. Furthermore, our
visualization operations clearly demonstrated that the Filter Enhancement
technique achieves greater effectiveness in the model's shallow layers, where
feature maps are relatively larger. This work offers a more efficient and
effective alternative for leveraging attention mechanisms in visual processing
tasks, alleviating the challenges associated with windowed attention models.
Code is available at https://github.com/qingxiaoli/FwNet-ECA
| no_new_dataset | 0.947817 |
2502.18495 | Haokun Wen | Xuemeng Song, Haoqiang Lin, Haokun Wen, Bohan Hou, Mingzhu Xu, Liqiang
Nie | A Comprehensive Survey on Composed Image Retrieval | null | null | null | null | cs.MM cs.AI cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composed Image Retrieval (CIR) is an emerging yet challenging task that
allows users to search for target images using a multimodal query, comprising a
reference image and a modification text specifying the user's desired changes
to the reference image. Given its significant academic and practical value, CIR
has become a rapidly growing area of interest in the computer vision and
machine learning communities, particularly with the advances in deep learning.
To the best of our knowledge, there is currently no comprehensive review of CIR
to provide a timely overview of this field. Therefore, we synthesize insights
from over 120 publications in top conferences and journals, including ACM TOIS,
SIGIR, and CVPR. In particular, we systematically categorize existing supervised
CIR and zero-shot CIR models using a fine-grained taxonomy. For a comprehensive
review, we also briefly discuss approaches for tasks closely related to CIR,
such as attribute-based CIR and dialog-based CIR. Additionally, we summarize
benchmark datasets for evaluation and analyze existing supervised and zero-shot
CIR methods by comparing experimental results across multiple datasets.
Furthermore, we present promising future directions in this field, offering
practical insights for researchers interested in further exploration. The
curated collection of related works is maintained and continuously updated in
https://github.com/haokunwen/Awesome-Composed-Image-Retrieval.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 01:37:24 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 15:16:52 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Song",
"Xuemeng",
""
],
[
"Lin",
"Haoqiang",
""
],
[
"Wen",
"Haokun",
""
],
[
"Hou",
"Bohan",
""
],
[
"Xu",
"Mingzhu",
""
],
[
"Nie",
"Liqiang",
""
]
]
| TITLE: A Comprehensive Survey on Composed Image Retrieval
ABSTRACT: Composed Image Retrieval (CIR) is an emerging yet challenging task that
allows users to search for target images using a multimodal query, comprising a
reference image and a modification text specifying the user's desired changes
to the reference image. Given its significant academic and practical value, CIR
has become a rapidly growing area of interest in the computer vision and
machine learning communities, particularly with the advances in deep learning.
To the best of our knowledge, there is currently no comprehensive review of CIR
to provide a timely overview of this field. Therefore, we synthesize insights
from over 120 publications in top conferences and journals, including ACM TOIS,
SIGIR, and CVPR. In particular, we systematically categorize existing supervised
CIR and zero-shot CIR models using a fine-grained taxonomy. For a comprehensive
review, we also briefly discuss approaches for tasks closely related to CIR,
such as attribute-based CIR and dialog-based CIR. Additionally, we summarize
benchmark datasets for evaluation and analyze existing supervised and zero-shot
CIR methods by comparing experimental results across multiple datasets.
Furthermore, we present promising future directions in this field, offering
practical insights for researchers interested in further exploration. The
curated collection of related works is maintained and continuously updated in
https://github.com/haokunwen/Awesome-Composed-Image-Retrieval.
| no_new_dataset | 0.9463 |
2502.19677 | Hu Gao | Hu Gao, Depeng Dang | Towards Differential Handling of Various Blur Regions for Accurate Image
Deblurring | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image deblurring aims to restore high-quality images by removing undesired
degradation. Although existing methods have yielded promising results, they
either overlook the varying degrees of degradation across different regions of
the blurred image, or they approximate nonlinear function properties by
stacking numerous nonlinear activation functions. In this paper, we propose a
differential handling network (DHNet) to perform differential processing for
different blur regions. Specifically, we design a Volterra block (VBlock) to
integrate the nonlinear characteristics into the deblurring network, avoiding
the previous operation of stacking numerous nonlinear activation functions
to map complex input-output relationships. To enable the model to adaptively
address varying degradation degrees in blurred regions, we devise the
degradation degree recognition expert module (DDRE). This module initially
incorporates prior knowledge from a well-trained model to estimate spatially
variable blur information. Consequently, the router can map the learned
degradation representation and allocate weights to experts according to both
the degree of degradation and the size of the regions. Comprehensive
experimental results show that DHNet effectively surpasses state-of-the-art
(SOTA) methods on both synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 01:37:30 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 12:00:01 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 02:05:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gao",
"Hu",
""
],
[
"Dang",
"Depeng",
""
]
]
| TITLE: Towards Differential Handling of Various Blur Regions for Accurate Image
Deblurring
ABSTRACT: Image deblurring aims to restore high-quality images by removing undesired
degradation. Although existing methods have yielded promising results, they
either overlook the varying degrees of degradation across different regions of
the blurred image, or they approximate nonlinear function properties by
stacking numerous nonlinear activation functions. In this paper, we propose a
differential handling network (DHNet) to perform differential processing for
different blur regions. Specifically, we design a Volterra block (VBlock) to
integrate the nonlinear characteristics into the deblurring network, avoiding
the previous operation of stacking the number of nonlinear activation functions
to map complex input-output relationships. To enable the model to adaptively
address varying degradation degrees in blurred regions, we devise the
degradation degree recognition expert module (DDRE). This module initially
incorporates prior knowledge from a well-trained model to estimate spatially
variable blur information. Consequently, the router can map the learned
degradation representation and allocate weights to experts according to both
the degree of degradation and the size of the regions. Comprehensive
experimental results show that DHNet effectively surpasses state-of-the-art
(SOTA) methods on both synthetic and real-world datasets.
| no_new_dataset | 0.93784 |
2502.19697 | Yuan Bian | Yuan Bian, Min Liu, Yunqi Yi, Xueping Wang, Yaonan Wang | Prompt-driven Transferable Adversarial Attack on Person
Re-Identification with Attribute-aware Textual Inversion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification (re-id) models are vital in security surveillance
systems, requiring transferable adversarial attacks to explore their
vulnerabilities. Recently, vision-language models (VLM) based attacks
have shown superior transferability by attacking generalized image and textual
features of VLM, but they lack comprehensive feature disruption due to the
overemphasis on discriminative semantics in integral representation. In this
paper, we introduce the Attribute-aware Prompt Attack (AP-Attack), a novel
method that leverages VLM's image-text alignment capability to explicitly
disrupt fine-grained semantic features of pedestrian images by destroying
attribute-specific textual embeddings. To obtain personalized textual
descriptions for individual attributes, textual inversion networks are designed
to map pedestrian images to pseudo tokens that represent semantic embeddings,
trained in the contrastive learning manner with images and a predefined prompt
template that explicitly describes the pedestrian attributes. Inverted benign
and adversarial fine-grained textual semantics facilitate the attacker in
effectively conducting thorough disruptions, enhancing the transferability of
adversarial examples. Extensive experiments show that AP-Attack achieves
state-of-the-art transferability, significantly outperforming previous methods
by 22.9% on mean Drop Rate in cross-model&dataset attack scenarios.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 02:32:58 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 02:24:30 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Bian",
"Yuan",
""
],
[
"Liu",
"Min",
""
],
[
"Yi",
"Yunqi",
""
],
[
"Wang",
"Xueping",
""
],
[
"Wang",
"Yaonan",
""
]
]
| TITLE: Prompt-driven Transferable Adversarial Attack on Person
Re-Identification with Attribute-aware Textual Inversion
ABSTRACT: Person re-identification (re-id) models are vital in security surveillance
systems, requiring transferable adversarial attacks to explore their
vulnerabilities. Recently, vision-language models (VLM) based attacks
have shown superior transferability by attacking generalized image and textual
features of VLM, but they lack comprehensive feature disruption due to the
overemphasis on discriminative semantics in integral representation. In this
paper, we introduce the Attribute-aware Prompt Attack (AP-Attack), a novel
method that leverages VLM's image-text alignment capability to explicitly
disrupt fine-grained semantic features of pedestrian images by destroying
attribute-specific textual embeddings. To obtain personalized textual
descriptions for individual attributes, textual inversion networks are designed
to map pedestrian images to pseudo tokens that represent semantic embeddings,
trained in the contrastive learning manner with images and a predefined prompt
template that explicitly describes the pedestrian attributes. Inverted benign
and adversarial fine-grained textual semantics facilitate the attacker in
effectively conducting thorough disruptions, enhancing the transferability of
adversarial examples. Extensive experiments show that AP-Attack achieves
state-of-the-art transferability, significantly outperforming previous methods
by 22.9% on mean Drop Rate in cross-model&dataset attack scenarios.
| no_new_dataset | 0.946151 |
2502.19754 | Xingyu Qiu | Xingyu Qiu, Mengying Yang, Xinghua Ma, Fanding Li, Dong Liang,
Gongning Luo, Wei Wang, Kuanquan Wang, Shuo Li | Finding Local Diffusion Schr\"odinger Bridge using Kolmogorov-Arnold
Network | 16 pages, 10 figures, accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In image generation, Schr\"odinger Bridge (SB)-based methods theoretically
enhance the efficiency and quality compared to the diffusion models by finding
the least costly path between two distributions. However, they are
computationally expensive and time-consuming when applied to complex image
data. The reason is that they focus on fitting globally optimal paths in
high-dimensional spaces, directly generating images as next step on the path
using complex networks through self-supervised training, which typically
results in a gap with the global optimum. Meanwhile, most diffusion models are
in the same path subspace generated by weights $f_A(t)$ and $f_B(t)$, as they
follow the paradigm ($x_t = f_A(t)x_{Img} + f_B(t)\epsilon$). To address the
limitations of SB-based methods, this paper proposes for the first time to find
local Diffusion Schr\"odinger Bridges (LDSB) in the diffusion path subspace,
which strengthens the connection between the SB problem and diffusion models.
Specifically, our method optimizes the diffusion paths using Kolmogorov-Arnold
Network (KAN), which has the advantage of resistance to forgetting and
continuous output. The experiment shows that our LDSB significantly improves
the quality and efficiency of image generation using the same pre-trained
denoising network, and the KAN used for optimisation is less than 0.1MB. The FID
metric is reduced by more than 15\%, especially with a reduction of 48.50\%
when NFE of DDIM is $5$ for the CelebA dataset. Code is available at
https://github.com/PerceptionComputingLab/LDSB.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 04:34:03 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 03:11:53 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Qiu",
"Xingyu",
""
],
[
"Yang",
"Mengying",
""
],
[
"Ma",
"Xinghua",
""
],
[
"Li",
"Fanding",
""
],
[
"Liang",
"Dong",
""
],
[
"Luo",
"Gongning",
""
],
[
"Wang",
"Wei",
""
],
[
"Wang",
"Kuanquan",
""
],
[
"Li",
"Shuo",
""
]
]
| TITLE: Finding Local Diffusion Schr\"odinger Bridge using Kolmogorov-Arnold
Network
ABSTRACT: In image generation, Schr\"odinger Bridge (SB)-based methods theoretically
enhance the efficiency and quality compared to the diffusion models by finding
the least costly path between two distributions. However, they are
computationally expensive and time-consuming when applied to complex image
data. The reason is that they focus on fitting globally optimal paths in
high-dimensional spaces, directly generating images as next step on the path
using complex networks through self-supervised training, which typically
results in a gap with the global optimum. Meanwhile, most diffusion models are
in the same path subspace generated by weights $f_A(t)$ and $f_B(t)$, as they
follow the paradigm ($x_t = f_A(t)x_{Img} + f_B(t)\epsilon$). To address the
limitations of SB-based methods, this paper proposes for the first time to find
local Diffusion Schr\"odinger Bridges (LDSB) in the diffusion path subspace,
which strengthens the connection between the SB problem and diffusion models.
Specifically, our method optimizes the diffusion paths using Kolmogorov-Arnold
Network (KAN), which has the advantage of resistance to forgetting and
continuous output. The experiment shows that our LDSB significantly improves
the quality and efficiency of image generation using the same pre-trained
denoising network, and the KAN used for optimisation is less than 0.1MB. The FID
metric is reduced by more than 15\%, especially with a reduction of 48.50\%
when NFE of DDIM is $5$ for the CelebA dataset. Code is available at
https://github.com/PerceptionComputingLab/LDSB.
| no_new_dataset | 0.951097 |
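The LDSB record above cites the common diffusion-path paradigm $x_t = f_A(t)x_{Img} + f_B(t)\epsilon$. The sketch below is only an illustration of that paradigm with the standard DDPM weights $f_A(t)=\sqrt{\bar\alpha_t}$ and $f_B(t)=\sqrt{1-\bar\alpha_t}$; the linear beta schedule and array shapes are assumptions, and the paper's KAN-based local optimisation of the path is not reproduced here.

```python
# Minimal sketch (not the authors' code): the diffusion-path paradigm
# x_t = f_A(t) * x_img + f_B(t) * eps, instantiated with standard DDPM weights.
# The linear beta schedule below is an assumption for illustration only.
import numpy as np

def ddpm_weights(num_steps: int = 1000, beta_start: float = 1e-4, beta_end: float = 0.02):
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)
    f_A = np.sqrt(alpha_bar)          # weight on the clean image
    f_B = np.sqrt(1.0 - alpha_bar)    # weight on the Gaussian noise
    return f_A, f_B

def forward_sample(x_img: np.ndarray, t: int, f_A: np.ndarray, f_B: np.ndarray, rng=None):
    """Draw x_t on the path subspace spanned by (f_A, f_B)."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(x_img.shape)
    return f_A[t] * x_img + f_B[t] * eps

if __name__ == "__main__":
    f_A, f_B = ddpm_weights()
    x_img = np.zeros((3, 64, 64))            # stand-in for a normalised image
    x_t = forward_sample(x_img, t=500, f_A=f_A, f_B=f_B)
    print(x_t.shape, float(f_A[500]), float(f_B[500]))
```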
2502.20041 | Hengshuo Chu | Hengshuo Chu and Xiang Deng and Qi Lv and Xiaoyang Chen and Yinchuan
Li and Jianye Hao and Liqiang Nie | 3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary
Affordance Detection in 3D Worlds | ICLR | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Affordance detection is a challenging problem with broad applications on
various robotic tasks. Existing methods typically formulate the detection
paradigm as a label-based semantic segmentation task. This paradigm relies on
predefined labels and lacks the ability to comprehend complex natural language,
resulting in limited generalization in open-world scenes. To address these
limitations, we reformulate the traditional affordance detection paradigm into the
\textit{Instruction Reasoning Affordance Segmentation} (IRAS) task. This task
is designed to output an affordance mask region given a query reasoning text,
which avoids fixed categories of input labels. We accordingly propose the
\textit{3D-AffordanceLLM} (3D-ADLLM), a framework designed for reasoning
affordance detection in 3D open-scene. Specifically, 3D-ADLLM introduces large
language models (LLMs) to 3D affordance perception with a custom-designed
decoder for generating affordance masks, thus achieving open-world reasoning
affordance detection. In addition, given the scarcity of 3D affordance datasets
for training large models, we seek to extract knowledge from general
segmentation data and transfer it to affordance detection. Thus, we propose a
multi-stage training strategy that begins with a novel pre-training task, i.e.,
\textit{Referring Object Part Segmentation}~(ROPS). This stage is designed to
equip the model with general recognition and segmentation capabilities at the
object-part level. Then, after fine-tuning with the IRAS task, 3D-ADLLM
obtains the reasoning ability for affordance detection. In summary, 3D-ADLLM
leverages the rich world knowledge and human-object interaction reasoning
ability of LLMs, achieving approximately an 8\% improvement in mIoU on
open-vocabulary affordance detection tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 12:29:44 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 06:21:57 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 07:37:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chu",
"Hengshuo",
""
],
[
"Deng",
"Xiang",
""
],
[
"Lv",
"Qi",
""
],
[
"Chen",
"Xiaoyang",
""
],
[
"Li",
"Yinchuan",
""
],
[
"Hao",
"Jianye",
""
],
[
"Nie",
"Liqiang",
""
]
]
| TITLE: 3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary
Affordance Detection in 3D Worlds
ABSTRACT: 3D Affordance detection is a challenging problem with broad applications on
various robotic tasks. Existing methods typically formulate the detection
paradigm as a label-based semantic segmentation task. This paradigm relies on
predefined labels and lacks the ability to comprehend complex natural language,
resulting in limited generalization in open-world scenes. To address these
limitations, we reformulate the traditional affordance detection paradigm into the
\textit{Instruction Reasoning Affordance Segmentation} (IRAS) task. This task
is designed to output an affordance mask region given a query reasoning text,
which avoids fixed categories of input labels. We accordingly propose the
\textit{3D-AffordanceLLM} (3D-ADLLM), a framework designed for reasoning
affordance detection in 3D open-scene. Specifically, 3D-ADLLM introduces large
language models (LLMs) to 3D affordance perception with a custom-designed
decoder for generating affordance masks, thus achieving open-world reasoning
affordance detection. In addition, given the scarcity of 3D affordance datasets
for training large models, we seek to extract knowledge from general
segmentation data and transfer it to affordance detection. Thus, we propose a
multi-stage training strategy that begins with a novel pre-training task, i.e.,
\textit{Referring Object Part Segmentation}~(ROPS). This stage is designed to
equip the model with general recognition and segmentation capabilities at the
object-part level. Then, after fine-tuning with the IRAS task, 3D-ADLLM
obtains the reasoning ability for affordance detection. In summary, 3D-ADLLM
leverages the rich world knowledge and human-object interaction reasoning
ability of LLMs, achieving approximately an 8\% improvement in mIoU on
open-vocabulary affordance detection tasks.
| no_new_dataset | 0.9455 |
2502.20092 | Mingjie Wu | Mingjie Wu, Chenggui Yang, Huihua Wang, Chen Xue, Yibo Wang, Haoyu
Wang, Yansong Wang, Can Peng, Yuqi Han, Ruoyu Li, Lijun Yun, Zaiqing Chen,
Yuelong Xia | WalnutData: A UAV Remote Sensing Dataset of Green Walnuts and Model
Evaluation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The UAV technology is gradually maturing and can provide extremely powerful
support for smart agriculture and precise monitoring. Currently, there is no
dataset related to green walnuts in the field of agricultural computer vision.
Thus, in order to promote the algorithm design in the field of agricultural
computer vision, we used UAV to collect remote-sensing data from 8 walnut
sample plots. Considering that green walnuts are subject to various lighting
conditions and occlusion, we constructed a large-scale dataset with a
higher-granularity of target features - WalnutData. This dataset contains a
total of 30,240 images and 706,208 instances, and there are 4 target
categories: being illuminated by frontal light and unoccluded (A1), being
backlit and unoccluded (A2), being illuminated by frontal light and occluded
(B1), and being backlit and occluded (B2). Subsequently, we evaluated many
mainstream algorithms on WalnutData and used these evaluation results as the
baseline standard. The dataset and all evaluation results can be obtained at
https://github.com/1wuming/WalnutData.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 13:51:56 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 08:56:15 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 14:00:03 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wu",
"Mingjie",
""
],
[
"Yang",
"Chenggui",
""
],
[
"Wang",
"Huihua",
""
],
[
"Xue",
"Chen",
""
],
[
"Wang",
"Yibo",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Wang",
"Yansong",
""
],
[
"Peng",
"Can",
""
],
[
"Han",
"Yuqi",
""
],
[
"Li",
"Ruoyu",
""
],
[
"Yun",
"Lijun",
""
],
[
"Chen",
"Zaiqing",
""
],
[
"Xia",
"Yuelong",
""
]
]
| TITLE: WalnutData: A UAV Remote Sensing Dataset of Green Walnuts and Model
Evaluation
ABSTRACT: The UAV technology is gradually maturing and can provide extremely powerful
support for smart agriculture and precise monitoring. Currently, there is no
dataset related to green walnuts in the field of agricultural computer vision.
Thus, in order to promote the algorithm design in the field of agricultural
computer vision, we used UAV to collect remote-sensing data from 8 walnut
sample plots. Considering that green walnuts are subject to various lighting
conditions and occlusion, we constructed a large-scale dataset with a
higher-granularity of target features - WalnutData. This dataset contains a
total of 30,240 images and 706,208 instances, and there are 4 target
categories: being illuminated by frontal light and unoccluded (A1), being
backlit and unoccluded (A2), being illuminated by frontal light and occluded
(B1), and being backlit and occluded (B2). Subsequently, we evaluated many
mainstream algorithms on WalnutData and used these evaluation results as the
baseline standard. The dataset and all evaluation results can be obtained at
https://github.com/1wuming/WalnutData.
| new_dataset | 0.962568 |
2502.21187 | Fakrul Islam Tushar | Fakrul Islam Tushar, Lavsen Dahal, Cindy McCabe, Fong Chi Ho, Paul
Segars, Ehsan Abadi, Kyle J. Lafata, Ehsan Samei, Joseph Y. Lo | SYN-LUNGS: Towards Simulating Lung Nodules with Anatomy-Informed Digital
Twins for AI Training | 6 figures, 12 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | AI models for lung cancer screening are limited by data scarcity, impacting
generalizability and clinical applicability. Generative models address this
issue but are constrained by training data variability. We introduce SYN-LUNGS,
a framework for generating high-quality 3D CT images with detailed annotations.
SYN-LUNGS integrates XCAT3 phantoms for digital twin generation, X-Lesions for
nodule simulation (varying size, location, and appearance), and DukeSim for CT
image formation with vendor and parameter variability. The dataset includes
3,072 nodule images from 1,044 simulated CT scans, with 512 lesions and 174
digital twins. Models trained on clinical + simulated data outperform clinical
only models, achieving 10% improvement in detection, 2-9% in segmentation and
classification, and enhanced synthesis. By incorporating anatomy-informed
simulations, SYN-LUNGS provides a scalable approach for AI model development,
particularly in rare disease representation and improving model reliability.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 16:02:37 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 12:18:40 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Tushar",
"Fakrul Islam",
""
],
[
"Dahal",
"Lavsen",
""
],
[
"McCabe",
"Cindy",
""
],
[
"Ho",
"Fong Chi",
""
],
[
"Segars",
"Paul",
""
],
[
"Abadi",
"Ehsan",
""
],
[
"Lafata",
"Kyle J.",
""
],
[
"Samei",
"Ehsan",
""
],
[
"Lo",
"Joseph Y.",
""
]
]
| TITLE: SYN-LUNGS: Towards Simulating Lung Nodules with Anatomy-Informed Digital
Twins for AI Training
ABSTRACT: AI models for lung cancer screening are limited by data scarcity, impacting
generalizability and clinical applicability. Generative models address this
issue but are constrained by training data variability. We introduce SYN-LUNGS,
a framework for generating high-quality 3D CT images with detailed annotations.
SYN-LUNGS integrates XCAT3 phantoms for digital twin generation, X-Lesions for
nodule simulation (varying size, location, and appearance), and DukeSim for CT
image formation with vendor and parameter variability. The dataset includes
3,072 nodule images from 1,044 simulated CT scans, with 512 lesions and 174
digital twins. Models trained on clinical + simulated data outperform clinical
only models, achieving 10% improvement in detection, 2-9% in segmentation and
classification, and enhanced synthesis. By incorporating anatomy-informed
simulations, SYN-LUNGS provides a scalable approach for AI model development,
particularly in rare disease representation and improving model reliability.
| new_dataset | 0.961929 |
2503.00032 | Shinwoo Park | Shinwoo Park, Shubin Kim, Do-Kyung Kim, Yo-Sub Han | Detecting LLM-Generated Korean Text through Linguistic Feature Analysis | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The rapid advancement of large language models (LLMs) increases the
difficulty of distinguishing between human-written and LLM-generated text.
Detecting LLM-generated text is crucial for upholding academic integrity,
preventing plagiarism, protecting copyrights, and ensuring ethical research
practices. Most prior studies on detecting LLM-generated text focus primarily
on English text. However, languages with distinct morphological and syntactic
characteristics require specialized detection approaches. Their unique
structures and usage patterns can hinder the direct application of methods
primarily designed for English. Among such languages, we focus on Korean, which
has relatively flexible spacing rules, a rich morphological system, and less
frequent comma usage compared to English. We introduce KatFish, the first
benchmark dataset for detecting LLM-generated Korean text. The dataset consists
of text written by humans and generated by four LLMs across three genres.
By examining spacing patterns, part-of-speech diversity, and comma usage, we
illuminate the linguistic differences between human-written and LLM-generated
Korean text. Building on these observations, we propose KatFishNet, a detection
method specifically designed for the Korean language. KatFishNet achieves an
average of 19.78% higher AUROC compared to the best-performing existing
detection method. Our code and data are available at
https://github.com/Shinwoo-Park/detecting_llm_generated_korean_text_through_linguistic_analysis.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 00:59:27 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:26:41 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Park",
"Shinwoo",
""
],
[
"Kim",
"Shubin",
""
],
[
"Kim",
"Do-Kyung",
""
],
[
"Han",
"Yo-Sub",
""
]
]
| TITLE: Detecting LLM-Generated Korean Text through Linguistic Feature Analysis
ABSTRACT: The rapid advancement of large language models (LLMs) increases the
difficulty of distinguishing between human-written and LLM-generated text.
Detecting LLM-generated text is crucial for upholding academic integrity,
preventing plagiarism, protecting copyrights, and ensuring ethical research
practices. Most prior studies on detecting LLM-generated text focus primarily
on English text. However, languages with distinct morphological and syntactic
characteristics require specialized detection approaches. Their unique
structures and usage patterns can hinder the direct application of methods
primarily designed for English. Among such languages, we focus on Korean, which
has relatively flexible spacing rules, a rich morphological system, and less
frequent comma usage compared to English. We introduce KatFish, the first
benchmark dataset for detecting LLM-generated Korean text. The dataset consists
of text written by humans and generated by four LLMs across three genres.
By examining spacing patterns, part-of-speech diversity, and comma usage, we
illuminate the linguistic differences between human-written and LLM-generated
Korean text. Building on these observations, we propose KatFishNet, a detection
method specifically designed for the Korean language. KatFishNet achieves an
average of 19.78% higher AUROC compared to the best-performing existing
detection method. Our code and data are available at
https://github.com/Shinwoo-Park/detecting_llm_generated_korean_text_through_linguistic_analysis.
| new_dataset | 0.960398 |
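The KatFish record above points to spacing patterns, part-of-speech diversity, and comma usage as distinguishing signals. The snippet below is a generic illustration of such surface features in plain Python; the exact feature definitions (and any classifier such as KatFishNet) are not taken from the paper, and the lexical-diversity proxy here is an assumption.

```python
# Illustrative sketch only (not KatFishNet): simple surface features of the kind
# the record above discusses -- comma usage, spacing, and lexical diversity.
def surface_features(text: str) -> dict:
    chars = len(text)
    tokens = text.split()
    comma_rate = text.count(",") / max(chars, 1)                 # comma usage
    space_rate = text.count(" ") / max(chars, 1)                 # spacing-pattern proxy
    avg_token_len = sum(len(t) for t in tokens) / max(len(tokens), 1)
    type_token_ratio = len(set(tokens)) / max(len(tokens), 1)    # crude diversity proxy
    return {
        "comma_rate": comma_rate,
        "space_rate": space_rate,
        "avg_token_len": avg_token_len,
        "type_token_ratio": type_token_ratio,
    }

if __name__ == "__main__":
    sample = "오늘은 날씨가 맑고, 바람이 조금 불었다."
    print(surface_features(sample))
```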
2503.00080 | Chi-Sheng Chen | Chi-Sheng Chen, Samuel Yen-Chi Chen, Huan-Hsin Tseng | Exploring the Potential of QEEGNet for Cross-Task and Cross-Dataset
Electroencephalography Encoding with Quantum Machine Learning | null | null | null | null | quant-ph cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electroencephalography (EEG) is widely used in neuroscience and clinical
research for analyzing brain activity. While deep learning models such as
EEGNet have shown success in decoding EEG signals, they often struggle with
data complexity, inter-subject variability, and noise robustness. Recent
advancements in quantum machine learning (QML) offer new opportunities to
enhance EEG analysis by leveraging quantum computing's unique properties. In
this study, we extend the previously proposed Quantum-EEGNet (QEEGNet), a
hybrid neural network incorporating quantum layers into EEGNet, to investigate
its generalization ability across multiple EEG datasets. Our evaluation spans a
diverse set of cognitive and motor task datasets, assessing QEEGNet's
performance in different learning scenarios. Experimental results reveal that
while QEEGNet demonstrates competitive performance and maintains robustness in
certain datasets, its improvements over traditional deep learning methods
remain inconsistent. These findings suggest that hybrid quantum-classical
architectures require further optimization to fully leverage quantum advantages
in EEG processing. Despite these limitations, our study provides new insights
into the applicability of QML in EEG research and highlights challenges that
must be addressed for future advancements.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:38:45 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 17:54:00 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chen",
"Chi-Sheng",
""
],
[
"Chen",
"Samuel Yen-Chi",
""
],
[
"Tseng",
"Huan-Hsin",
""
]
]
| TITLE: Exploring the Potential of QEEGNet for Cross-Task and Cross-Dataset
Electroencephalography Encoding with Quantum Machine Learning
ABSTRACT: Electroencephalography (EEG) is widely used in neuroscience and clinical
research for analyzing brain activity. While deep learning models such as
EEGNet have shown success in decoding EEG signals, they often struggle with
data complexity, inter-subject variability, and noise robustness. Recent
advancements in quantum machine learning (QML) offer new opportunities to
enhance EEG analysis by leveraging quantum computing's unique properties. In
this study, we extend the previously proposed Quantum-EEGNet (QEEGNet), a
hybrid neural network incorporating quantum layers into EEGNet, to investigate
its generalization ability across multiple EEG datasets. Our evaluation spans a
diverse set of cognitive and motor task datasets, assessing QEEGNet's
performance in different learning scenarios. Experimental results reveal that
while QEEGNet demonstrates competitive performance and maintains robustness in
certain datasets, its improvements over traditional deep learning methods
remain inconsistent. These findings suggest that hybrid quantum-classical
architectures require further optimization to fully leverage quantum advantages
in EEG processing. Despite these limitations, our study provides new insights
into the applicability of QML in EEG research and highlights challenges that
must be addressed for future advancements.
| no_new_dataset | 0.938124 |
2503.00382 | Xuehao Gao | Xuehao Gao, Yang Yang, Shaoyi Du, Yang Wu, Yebin Liu, Guo-Jun Qi | EigenActor: Variant Body-Object Interaction Generation Evolved from
Invariant Action Basis Reasoning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores a cross-modality synthesis task that infers 3D
human-object interactions (HOIs) from a given text-based instruction. Existing
text-to-HOI synthesis methods mainly deploy a direct mapping from texts to
object-specific 3D body motions, which may encounter a performance bottleneck
due to the huge cross-modality gap. In this paper, we observe that those HOI
samples with the same interaction intention toward different targets, e.g.,
"lift a chair" and "lift a cup", always encapsulate similar action-specific
body motion patterns while characterizing different object-specific interaction
styles. Thus, learning effective action-specific motion priors and
object-specific interaction priors is crucial for a text-to-HOI model and
dominates its performances on text-HOI semantic consistency and body-object
interaction realism. In light of this, we propose a novel body pose generation
strategy for the text-to-HOI task: infer object-agnostic canonical body action
first and then enrich object-specific interaction styles. Specifically, the
first canonical body action inference stage focuses on learning intra-class
shareable body motion priors and mapping given text-based semantics to
action-specific canonical 3D body motions. Then, in the object-specific
interaction inference stage, we focus on object affordance learning and enrich
object-specific interaction styles on an inferred action-specific body motion
basis. Extensive experiments verify that our proposed text-to-HOI synthesis
system significantly outperforms other SOTA methods on three large-scale
datasets with better semantic consistency and interaction realism performances.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 07:15:10 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 02:17:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gao",
"Xuehao",
""
],
[
"Yang",
"Yang",
""
],
[
"Du",
"Shaoyi",
""
],
[
"Wu",
"Yang",
""
],
[
"Liu",
"Yebin",
""
],
[
"Qi",
"Guo-Jun",
""
]
]
| TITLE: EigenActor: Variant Body-Object Interaction Generation Evolved from
Invariant Action Basis Reasoning
ABSTRACT: This paper explores a cross-modality synthesis task that infers 3D
human-object interactions (HOIs) from a given text-based instruction. Existing
text-to-HOI synthesis methods mainly deploy a direct mapping from texts to
object-specific 3D body motions, which may encounter a performance bottleneck
due to the huge cross-modality gap. In this paper, we observe that those HOI
samples with the same interaction intention toward different targets, e.g.,
"lift a chair" and "lift a cup", always encapsulate similar action-specific
body motion patterns while characterizing different object-specific interaction
styles. Thus, learning effective action-specific motion priors and
object-specific interaction priors is crucial for a text-to-HOI model and
dominates its performances on text-HOI semantic consistency and body-object
interaction realism. In light of this, we propose a novel body pose generation
strategy for the text-to-HOI task: infer object-agnostic canonical body action
first and then enrich object-specific interaction styles. Specifically, the
first canonical body action inference stage focuses on learning intra-class
shareable body motion priors and mapping given text-based semantics to
action-specific canonical 3D body motions. Then, in the object-specific
interaction inference stage, we focus on object affordance learning and enrich
object-specific interaction styles on an inferred action-specific body motion
basis. Extensive experiments verify that our proposed text-to-HOI synthesis
system significantly outperforms other SOTA methods on three large-scale
datasets with better semantic consistency and interaction realism performances.
| no_new_dataset | 0.949201 |
2503.00507 | Zhuo Ouyang | Zhuo Ouyang, Kaiwen Hu, Qi Zhang, Yifei Wang, Yisen Wang | Projection Head is Secretly an Information Bottleneck | null | null | null | null | cs.LG cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Recently, contrastive learning has risen to be a promising paradigm for
extracting meaningful data representations. Among various special designs,
adding a projection head on top of the encoder during training and removing it
for downstream tasks has proven to significantly enhance the performance of
contrastive learning. However, despite its empirical success, the underlying
mechanism of the projection head remains under-explored. In this paper, we
develop an in-depth theoretical understanding of the projection head from the
information-theoretic perspective. By establishing the theoretical guarantees
on the downstream performance of the features before the projector, we reveal
that an effective projector should act as an information bottleneck, filtering
out the information irrelevant to the contrastive objective. Based on
theoretical insights, we introduce modifications to projectors with training
and structural regularizations. Empirically, our methods exhibit consistent
improvement in the downstream performance across various real-world datasets,
including CIFAR-10, CIFAR-100, and ImageNet-100. We believe our theoretical
understanding on the role of the projection head will inspire more principled
and advanced designs in this field. Code is available at
https://github.com/PKU-ML/Projector_Theory.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 14:23:31 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 04:11:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ouyang",
"Zhuo",
""
],
[
"Hu",
"Kaiwen",
""
],
[
"Zhang",
"Qi",
""
],
[
"Wang",
"Yifei",
""
],
[
"Wang",
"Yisen",
""
]
]
| TITLE: Projection Head is Secretly an Information Bottleneck
ABSTRACT: Recently, contrastive learning has risen to be a promising paradigm for
extracting meaningful data representations. Among various special designs,
adding a projection head on top of the encoder during training and removing it
for downstream tasks has proven to significantly enhance the performance of
contrastive learning. However, despite its empirical success, the underlying
mechanism of the projection head remains under-explored. In this paper, we
develop an in-depth theoretical understanding of the projection head from the
information-theoretic perspective. By establishing the theoretical guarantees
on the downstream performance of the features before the projector, we reveal
that an effective projector should act as an information bottleneck, filtering
out the information irrelevant to the contrastive objective. Based on
theoretical insights, we introduce modifications to projectors with training
and structural regularizations. Empirically, our methods exhibit consistent
improvement in the downstream performance across various real-world datasets,
including CIFAR-10, CIFAR-100, and ImageNet-100. We believe our theoretical
understanding on the role of the projection head will inspire more principled
and advanced designs in this field. Code is available at
https://github.com/PKU-ML/Projector_Theory.
| no_new_dataset | 0.943919 |
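The projection-head record above concerns the projector that is trained with the contrastive objective and discarded for downstream use. The sketch below shows that standard setup in PyTorch under SimCLR-style assumptions (layer sizes, NT-Xent loss); it does not implement the paper's regularized projectors.

```python
# Minimal sketch, assuming a SimCLR-style setup: an encoder plus a projection head
# used only for the contrastive loss and dropped for downstream tasks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveModel(nn.Module):
    def __init__(self, in_dim=512, feat_dim=256, proj_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Projection head: trained with the contrastive objective, removed afterwards.
        self.projector = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        h = self.encoder(x)        # features kept for downstream tasks
        z = self.projector(h)      # features fed to the contrastive loss
        return h, z

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))               # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    model = ContrastiveModel()
    x1, x2 = torch.randn(8, 512), torch.randn(8, 512)   # two augmented views
    (_, z1), (_, z2) = model(x1), model(x2)
    print(nt_xent(z1, z2).item())
```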
2503.00796 | Zhe Wang | Zhe Wang and Xulei Yang | An Efficient 3D Convolutional Neural Network with Channel-wise,
Spatial-grouped, and Temporal Convolutions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been huge progress on video action recognition in recent years.
However, many works focus on tweaking existing 2D backbones due to the reliance
on ImageNet pretraining, which restrains the models from achieving higher
efficiency for video recognition. In this work we introduce a simple and very
efficient 3D convolutional neural network for video action recognition. The
design of the building block consists of a channel-wise convolution, followed
by a spatial group convolution, and finally a temporal convolution. We evaluate
the performance and efficiency of our proposed network on several video action
recognition datasets by directly training on the target dataset without relying
on pretraining. On Something-Something-V1&V2, Kinetics-400 and Multi-Moments in
Time, our network can match or even surpass the performance of other models
which are several times larger. On the fine-grained action recognition dataset
FineGym, we beat the previous state-of-the-art accuracy achieved with 2-stream
methods by more than 5% using only RGB input.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 08:47:06 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:40:35 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Zhe",
""
],
[
"Yang",
"Xulei",
""
]
]
| TITLE: An Efficient 3D Convolutional Neural Network with Channel-wise,
Spatial-grouped, and Temporal Convolutions
ABSTRACT: There has been huge progress on video action recognition in recent years.
However, many works focus on tweaking existing 2D backbones due to the reliance
on ImageNet pretraining, which restrains the models from achieving higher
efficiency for video recognition. In this work we introduce a simple and very
efficient 3D convolutional neural network for video action recognition. The
design of the building block consists of a channel-wise convolution, followed
by a spatial group convolution, and finally a temporal convolution. We evaluate
the performance and efficiency of our proposed network on several video action
recognition datasets by directly training on the target dataset without relying
on pretraining. On Something-Something-V1&V2, Kinetics-400 and Multi-Moments in
Time, our network can match or even surpass the performance of other models
which are several times larger. On the fine-grained action recognition dataset
FineGym, we beat the previous state-of-the-art accuracy achieved with 2-stream
methods by more than 5% using only RGB input.
| no_new_dataset | 0.944995 |
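The record above describes a building block made of a channel-wise convolution, a spatial group convolution, and a temporal convolution. The module below is one possible reading of that block in PyTorch; kernel sizes, the group count, and the absence of normalization layers are assumptions, not the authors' released architecture.

```python
# Sketch of one possible reading of the channel-wise / spatial-grouped / temporal
# building block, written as a PyTorch 3D-convolution module.
import torch
import torch.nn as nn

class CSTBlock(nn.Module):
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        # Channel-wise mixing: 1x1x1 convolution across channels only.
        self.channel = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        # Spatial grouped convolution: 3x3 in H,W with channels split into groups.
        self.spatial = nn.Conv3d(out_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), groups=groups)
        # Temporal convolution: 3-frame kernel along the time axis.
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                       # x: (N, C, T, H, W)
        return self.act(self.temporal(self.spatial(self.channel(x))))

if __name__ == "__main__":
    block = CSTBlock(in_ch=16, out_ch=32)
    clip = torch.randn(2, 16, 8, 56, 56)        # batch of 8-frame clips
    print(block(clip).shape)                    # -> torch.Size([2, 32, 8, 56, 56])
```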
2503.01181 | Ali Caglayan | Ali Caglayan, Nevrez Imamoglu, Toru Kouyama | SAR-W-MixMAE: SAR Foundation Model Training Using Backscatter Power
Weighting | 5 pages, 1 figure | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Foundation model approaches such as masked auto-encoders (MAE) or its
variations are now being successfully applied to satellite imagery. Most of the
ongoing technical validation of foundation models have been applied to optical
images like RGB or multi-spectral images. Due to difficulty in semantic
labeling to create datasets and higher noise content with respect to optical
images, Synthetic Aperture Radar (SAR) data has not been explored a lot in the
field for foundation models. Therefore, in this work as a pre-training
approach, we explored masked auto-encoder, specifically MixMAE on Sentinel-1
SAR images and its impact on SAR image classification tasks. Moreover, we
proposed to use the physical characteristics of SAR data to apply a weighting
parameter to the auto-encoder training loss (MSE) to reduce the effect of
speckle noise and very high values in the SAR images. The proposed SAR
intensity-based weighting of the reconstruction loss demonstrates promising
results both on SAR pre-training and downstream tasks specifically on flood
detection compared with the baseline model.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 05:09:44 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 05:20:53 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Caglayan",
"Ali",
""
],
[
"Imamoglu",
"Nevrez",
""
],
[
"Kouyama",
"Toru",
""
]
]
| TITLE: SAR-W-MixMAE: SAR Foundation Model Training Using Backscatter Power
Weighting
ABSTRACT: Foundation model approaches such as masked auto-encoders (MAE) or its
variations are now being successfully applied to satellite imagery. Most of the
ongoing technical validation of foundation models has been applied to optical
images like RGB or multi-spectral images. Due to difficulty in semantic
labeling to create datasets and higher noise content with respect to optical
images, Synthetic Aperture Radar (SAR) data has not been explored a lot in the
field for foundation models. Therefore, in this work as a pre-training
approach, we explored masked auto-encoder, specifically MixMAE on Sentinel-1
SAR images and its impact on SAR image classification tasks. Moreover, we
proposed to use the physical characteristics of SAR data to apply a weighting
parameter to the auto-encoder training loss (MSE) to reduce the effect of
speckle noise and very high values in the SAR images. The proposed SAR
intensity-based weighting of the reconstruction loss demonstrates promising
results both on SAR pre-training and downstream tasks specifically on flood
detection compared with the baseline model.
| no_new_dataset | 0.953362 |
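The SAR-W-MixMAE record above describes weighting the MSE reconstruction loss by SAR backscatter power. The function below illustrates the general idea of an intensity-weighted MSE; the specific weighting function used here is a placeholder assumption, not the paper's formula.

```python
# Minimal sketch of an intensity-weighted reconstruction loss: per-pixel MSE is
# down-weighted where backscatter power is very high, so speckle and bright
# scatterers dominate the objective less. The 1/(1+intensity) weighting is an
# assumption made for illustration.
import torch

def weighted_mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred/target: (N, C, H, W) SAR backscatter power (non-negative)."""
    weight = 1.0 / (1.0 + target.clamp(min=0))      # high-power pixels get small weight
    return (weight * (pred - target) ** 2).mean()

if __name__ == "__main__":
    target = torch.rand(4, 1, 32, 32) * 10.0         # synthetic backscatter power
    pred = target + 0.1 * torch.randn_like(target)   # stand-in reconstruction
    print(weighted_mse(pred, target).item())
```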
2503.01220 | Jiqing Wu | Jiqing Wu, Ingrid Berg, Yawei Li, Ender Konukoglu, Viktor H. Koelzer | Tera-MIND: Tera-scale mouse brain simulation via spatial mRNA-guided
diffusion | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Holistic 3D modeling of molecularly defined brain structures is crucial for
understanding complex brain functions. Emerging tissue profiling technologies
enable the construction of a comprehensive atlas of the mammalian brain with
sub-cellular resolution and spatially resolved gene expression data. However,
such tera-scale volumetric datasets present significant computational
challenges in understanding complex brain functions within their native 3D
spatial context. Here, we propose the novel generative approach
$\textbf{Tera-MIND}$, which can simulate $\textbf{Tera}$-scale $\textbf{M}$ouse
bra$\textbf{IN}$s in 3D using a patch-based and boundary-aware
$\textbf{D}$iffusion model. Taking spatial transcriptomic data as the
conditional input, we generate virtual mouse brains with comprehensive cellular
morphological detail at teravoxel scale. Through the lens of 3D $gene$-$gene$
self-attention, we identify spatial molecular interactions for key
transcriptomic pathways in the murine brain, exemplified by glutamatergic and
dopaminergic neuronal systems. Importantly, these $in$-$silico$ biological
findings are consistent and reproducible across three tera-scale virtual mouse
brains. Therefore, Tera-MIND showcases a promising path toward efficient and
generative simulations of whole organ systems for biomedical research. Project
website: https://musikisomorphie.github.io/Tera-MIND.html
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 06:37:30 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:50:03 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wu",
"Jiqing",
""
],
[
"Berg",
"Ingrid",
""
],
[
"Li",
"Yawei",
""
],
[
"Konukoglu",
"Ender",
""
],
[
"Koelzer",
"Viktor H.",
""
]
]
| TITLE: Tera-MIND: Tera-scale mouse brain simulation via spatial mRNA-guided
diffusion
ABSTRACT: Holistic 3D modeling of molecularly defined brain structures is crucial for
understanding complex brain functions. Emerging tissue profiling technologies
enable the construction of a comprehensive atlas of the mammalian brain with
sub-cellular resolution and spatially resolved gene expression data. However,
such tera-scale volumetric datasets present significant computational
challenges in understanding complex brain functions within their native 3D
spatial context. Here, we propose the novel generative approach
$\textbf{Tera-MIND}$, which can simulate $\textbf{Tera}$-scale $\textbf{M}$ouse
bra$\textbf{IN}$s in 3D using a patch-based and boundary-aware
$\textbf{D}$iffusion model. Taking spatial transcriptomic data as the
conditional input, we generate virtual mouse brains with comprehensive cellular
morphological detail at teravoxel scale. Through the lens of 3D $gene$-$gene$
self-attention, we identify spatial molecular interactions for key
transcriptomic pathways in the murine brain, exemplified by glutamatergic and
dopaminergic neuronal systems. Importantly, these $in$-$silico$ biological
findings are consistent and reproducible across three tera-scale virtual mouse
brains. Therefore, Tera-MIND showcases a promising path toward efficient and
generative simulations of whole organ systems for biomedical research. Project
website: https://musikisomorphie.github.io/Tera-MIND.html
| no_new_dataset | 0.950915 |
2503.01342 | Hao Tang | Hao Tang, Chenwei Xie, Haiyang Wang, Xiaoyi Bao, Tingyu Weng, Pandeng
Li, Yun Zheng, Liwei Wang | UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended
Language Interface | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generalist models have achieved remarkable success in both language and
vision-language tasks, showcasing the potential of unified modeling. However,
effectively integrating fine-grained perception tasks like detection and
segmentation into these models remains a significant challenge. This is
primarily because these tasks often rely heavily on task-specific designs and
architectures that can complicate the modeling process. To address this
challenge, we present UFO, a framework that \textbf{U}nifies
\textbf{F}ine-grained visual perception tasks through an \textbf{O}pen-ended
language interface. By transforming all perception targets into the language
space, UFO unifies object-level detection, pixel-level segmentation, and
image-level vision-language tasks into a single model. Additionally, we
introduce a novel embedding retrieval approach that relies solely on the
language interface to support segmentation tasks. Our framework bridges the gap
between fine-grained perception and vision-language tasks, significantly
simplifying architectural design and training strategies while achieving
comparable or superior performance to methods with intricate task-specific
designs. After multi-task training on five standard visual perception datasets,
UFO outperforms the previous state-of-the-art generalist models by 12.3 mAP
on COCO instance segmentation and 3.3 mIoU on ADE20K semantic segmentation.
Furthermore, our method seamlessly integrates with existing MLLMs, effectively
combining fine-grained perception capabilities with their advanced language
abilities, thereby enabling more challenging tasks such as reasoning
segmentation. Code and models are available at https://github.com/nnnth/UFO.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:27:24 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 15:36:45 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Tang",
"Hao",
""
],
[
"Xie",
"Chenwei",
""
],
[
"Wang",
"Haiyang",
""
],
[
"Bao",
"Xiaoyi",
""
],
[
"Weng",
"Tingyu",
""
],
[
"Li",
"Pandeng",
""
],
[
"Zheng",
"Yun",
""
],
[
"Wang",
"Liwei",
""
]
]
| TITLE: UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended
Language Interface
ABSTRACT: Generalist models have achieved remarkable success in both language and
vision-language tasks, showcasing the potential of unified modeling. However,
effectively integrating fine-grained perception tasks like detection and
segmentation into these models remains a significant challenge. This is
primarily because these tasks often rely heavily on task-specific designs and
architectures that can complicate the modeling process. To address this
challenge, we present UFO, a framework that \textbf{U}nifies
\textbf{F}ine-grained visual perception tasks through an \textbf{O}pen-ended
language interface. By transforming all perception targets into the language
space, UFO unifies object-level detection, pixel-level segmentation, and
image-level vision-language tasks into a single model. Additionally, we
introduce a novel embedding retrieval approach that relies solely on the
language interface to support segmentation tasks. Our framework bridges the gap
between fine-grained perception and vision-language tasks, significantly
simplifying architectural design and training strategies while achieving
comparable or superior performance to methods with intricate task-specific
designs. After multi-task training on five standard visual perception datasets,
UFO outperforms the previous state-of-the-art generalist models by 12.3 mAP
on COCO instance segmentation and 3.3 mIoU on ADE20K semantic segmentation.
Furthermore, our method seamlessly integrates with existing MLLMs, effectively
combining fine-grained perception capabilities with their advanced language
abilities, thereby enabling more challenging tasks such as reasoning
segmentation. Code and models are available at https://github.com/nnnth/UFO.
| no_new_dataset | 0.946349 |
2503.01622 | Eliya Habba | Eliya Habba, Ofir Arviv, Itay Itzhak, Yotam Perlitz, Elron Bandel,
Leshem Choshen, Michal Shmueli-Scheuer, Gabriel Stanovsky | DOVE: A Large-Scale Multi-Dimensional Predictions Dataset Towards
Meaningful LLM Evaluation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent work found that LLMs are sensitive to a wide range of arbitrary prompt
dimensions, including the type of delimiters, answer enumerators, instruction
wording, and more. This throws into question popular single-prompt evaluation
practices. We present DOVE (Dataset Of Variation Evaluation), a large-scale
dataset containing prompt perturbations of various evaluation benchmarks. In
contrast to previous work, we examine LLM sensitivity from a holistic
perspective, and assess the joint effects of perturbations along various
dimensions, resulting in thousands of perturbations per instance. We evaluate
several model families against DOVE, leading to several findings, including
efficient methods for choosing well-performing prompts, observing that few-shot
examples reduce sensitivity, and identifying instances which are inherently
hard across all perturbations. DOVE consists of more than 250M prompt
perturbations and model outputs, which we make publicly available to spur a
community-wide effort toward meaningful, robust, and efficient evaluation.
Browse the data, contribute, and more: https://slab-nlp.github.io/DOVE/
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:55:41 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 13:00:55 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Habba",
"Eliya",
""
],
[
"Arviv",
"Ofir",
""
],
[
"Itzhak",
"Itay",
""
],
[
"Perlitz",
"Yotam",
""
],
[
"Bandel",
"Elron",
""
],
[
"Choshen",
"Leshem",
""
],
[
"Shmueli-Scheuer",
"Michal",
""
],
[
"Stanovsky",
"Gabriel",
""
]
]
| TITLE: DOVE: A Large-Scale Multi-Dimensional Predictions Dataset Towards
Meaningful LLM Evaluation
ABSTRACT: Recent work found that LLMs are sensitive to a wide range of arbitrary prompt
dimensions, including the type of delimiters, answer enumerators, instruction
wording, and more. This throws into question popular single-prompt evaluation
practices. We present DOVE (Dataset Of Variation Evaluation), a large-scale
dataset containing prompt perturbations of various evaluation benchmarks. In
contrast to previous work, we examine LLM sensitivity from a holistic
perspective, and assess the joint effects of perturbations along various
dimensions, resulting in thousands of perturbations per instance. We evaluate
several model families against DOVE, leading to several findings, including
efficient methods for choosing well-performing prompts, observing that few-shot
examples reduce sensitivity, and identifying instances which are inherently
hard across all perturbations. DOVE consists of more than 250M prompt
perturbations and model outputs, which we make publicly available to spur a
community-wide effort toward meaningful, robust, and efficient evaluation.
Browse the data, contribute, and more: https://slab-nlp.github.io/DOVE/
| new_dataset | 0.955319 |
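The DOVE record above describes joint prompt perturbations along dimensions such as delimiters, answer enumerators, and instruction wording. The snippet below is a small illustration of enumerating such joint perturbations for one multiple-choice instance; the dimension values are invented for illustration and do not come from the dataset itself.

```python
# Illustrative sketch (not the DOVE construction code): joint perturbations over
# instruction phrasing, answer enumerator, and delimiter for one instance.
from itertools import product

INSTRUCTIONS = ["Answer the following question.", "Choose the best option."]
ENUMERATORS = [["A", "B", "C"], ["1", "2", "3"]]
DELIMITERS = [". ", ") "]

def perturbations(question: str, options: list[str]):
    for instr, enum, delim in product(INSTRUCTIONS, ENUMERATORS, DELIMITERS):
        lines = [f"{e}{delim}{opt}" for e, opt in zip(enum, options)]
        yield "\n".join([instr, question, *lines])

if __name__ == "__main__":
    prompts = list(perturbations("What is the capital of France?",
                                 ["Paris", "Rome", "Berlin"]))
    print(len(prompts))        # 2 * 2 * 2 = 8 joint perturbations for one instance
    print(prompts[0])
```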
2503.01725 | Zitang Zhou | Zitang Zhou, Ke Mei, Yu Lu, Tianyi Wang, Fengyun Rao | HarmonySet: A Comprehensive Dataset for Understanding Video-Music
Semantic Alignment and Temporal Synchronization | Accepted at CVPR 2025. Project page: https://harmonyset.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces HarmonySet, a comprehensive dataset designed to advance
video-music understanding. HarmonySet consists of 48,328 diverse video-music
pairs, annotated with detailed information on rhythmic synchronization,
emotional alignment, thematic coherence, and cultural relevance. We propose a
multi-step human-machine collaborative framework for efficient annotation,
combining human insights with machine-generated descriptions to identify key
transitions and assess alignment across multiple dimensions. Additionally, we
introduce a novel evaluation framework with tasks and metrics to assess the
multi-dimensional alignment of video and music, including rhythm, emotion,
theme, and cultural context. Our extensive experiments demonstrate that
HarmonySet, along with the proposed evaluation framework, significantly
improves the ability of multimodal models to capture and analyze the intricate
relationships between video and music.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:42:46 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 15:31:11 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhou",
"Zitang",
""
],
[
"Mei",
"Ke",
""
],
[
"Lu",
"Yu",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Rao",
"Fengyun",
""
]
]
| TITLE: HarmonySet: A Comprehensive Dataset for Understanding Video-Music
Semantic Alignment and Temporal Synchronization
ABSTRACT: This paper introduces HarmonySet, a comprehensive dataset designed to advance
video-music understanding. HarmonySet consists of 48,328 diverse video-music
pairs, annotated with detailed information on rhythmic synchronization,
emotional alignment, thematic coherence, and cultural relevance. We propose a
multi-step human-machine collaborative framework for efficient annotation,
combining human insights with machine-generated descriptions to identify key
transitions and assess alignment across multiple dimensions. Additionally, we
introduce a novel evaluation framework with tasks and metrics to assess the
multi-dimensional alignment of video and music, including rhythm, emotion,
theme, and cultural context. Our extensive experiments demonstrate that
HarmonySet, along with the proposed evaluation framework, significantly
improves the ability of multimodal models to capture and analyze the intricate
relationships between video and music.
| new_dataset | 0.950503 |
2503.01863 | Beria Chingnabe Kalpelbe | Beria Chingnabe Kalpelbe and Angel Gabriel Adaambiik and Wei Peng | Vision Language Models in Medicine | null | null | null | null | cs.CV cs.AI cs.CL cs.CY eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | With the advent of Vision-Language Models (VLMs), medical artificial
intelligence (AI) has experienced significant technological progress and
paradigm shifts. This survey provides an extensive review of recent
advancements in Medical Vision-Language Models (Med-VLMs), which integrate
visual and textual data to enhance healthcare outcomes. We discuss the
foundational technology behind Med-VLMs, illustrating how general models are
adapted for complex medical tasks, and examine their applications in
healthcare. The transformative impact of Med-VLMs on clinical practice,
education, and patient care is highlighted, alongside challenges such as data
scarcity, narrow task generalization, interpretability issues, and ethical
concerns like fairness, accountability, and privacy. These limitations are
exacerbated by uneven dataset distribution, computational demands, and
regulatory hurdles. Rigorous evaluation methods and robust regulatory
frameworks are essential for safe integration into healthcare workflows. Future
directions include leveraging large-scale, diverse datasets, improving
cross-modal generalization, and enhancing interpretability. Innovations like
federated learning, lightweight architectures, and Electronic Health Record
(EHR) integration are explored as pathways to democratize access and improve
clinical relevance. This review aims to provide a comprehensive understanding
of Med-VLMs' strengths and limitations, fostering their ethical and balanced
adoption in healthcare.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 22:53:22 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kalpelbe",
"Beria Chingnabe",
""
],
[
"Adaambiik",
"Angel Gabriel",
""
],
[
"Peng",
"Wei",
""
]
]
| TITLE: Vision Language Models in Medicine
ABSTRACT: With the advent of Vision-Language Models (VLMs), medical artificial
intelligence (AI) has experienced significant technological progress and
paradigm shifts. This survey provides an extensive review of recent
advancements in Medical Vision-Language Models (Med-VLMs), which integrate
visual and textual data to enhance healthcare outcomes. We discuss the
foundational technology behind Med-VLMs, illustrating how general models are
adapted for complex medical tasks, and examine their applications in
healthcare. The transformative impact of Med-VLMs on clinical practice,
education, and patient care is highlighted, alongside challenges such as data
scarcity, narrow task generalization, interpretability issues, and ethical
concerns like fairness, accountability, and privacy. These limitations are
exacerbated by uneven dataset distribution, computational demands, and
regulatory hurdles. Rigorous evaluation methods and robust regulatory
frameworks are essential for safe integration into healthcare workflows. Future
directions include leveraging large-scale, diverse datasets, improving
cross-modal generalization, and enhancing interpretability. Innovations like
federated learning, lightweight architectures, and Electronic Health Record
(EHR) integration are explored as pathways to democratize access and improve
clinical relevance. This review aims to provide a comprehensive understanding
of Med-VLMs' strengths and limitations, fostering their ethical and balanced
adoption in healthcare.
| no_new_dataset | 0.944842 |
2503.01864 | Kexin Huang | Kexin Huang, Junkang Wu, Ziqian Chen, Xue Wang, Jinyang Gao, Bolin
Ding, Jiancan Wu, Xiangnan He, Xiang Wang | Larger or Smaller Reward Margins to Select Preferences for Alignment? | null | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Preference learning is critical for aligning large language models (LLMs)
with human values, with the quality of preference datasets playing a crucial
role in this process. While existing metrics primarily assess data quality
based on either explicit or implicit reward margins, they often provide
contradictory evaluations for the same data. To address this issue, we
introduce the alignment potential metric, which quantifies the gap from the
model's current implicit reward margin to the target explicit reward margin,
thereby estimating the model's potential to align with the preference data.
Empirical results demonstrate that training on data selected by this metric
consistently enhances alignment performance, surpassing existing metrics across
different base models and optimization objectives. Furthermore, our method
extends to self-play data generation frameworks, where the metric is used to
identify high-quality data within the self-generated content by LLMs. Under
this data generation scenario, our method surpasses current state-of-the-art
(SOTA) results across various training settings and demonstrates continuous
improvements in alignment performance as dataset size and training iterations
increase.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 06:43:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Kexin",
""
],
[
"Wu",
"Junkang",
""
],
[
"Chen",
"Ziqian",
""
],
[
"Wang",
"Xue",
""
],
[
"Gao",
"Jinyang",
""
],
[
"Ding",
"Bolin",
""
],
[
"Wu",
"Jiancan",
""
],
[
"He",
"Xiangnan",
""
],
[
"Wang",
"Xiang",
""
]
]
| TITLE: Larger or Smaller Reward Margins to Select Preferences for Alignment?
ABSTRACT: Preference learning is critical for aligning large language models (LLMs)
with human values, with the quality of preference datasets playing a crucial
role in this process. While existing metrics primarily assess data quality
based on either explicit or implicit reward margins, they often provide
contradictory evaluations for the same data. To address this issue, we
introduce the alignment potential metric, which quantifies the gap from the
model's current implicit reward margin to the target explicit reward margin,
thereby estimating the model's potential to align with the preference data.
Empirical results demonstrate that training on data selected by this metric
consistently enhances alignment performance, surpassing existing metrics across
different base models and optimization objectives. Furthermore, our method
extends to self-play data generation frameworks, where the metric is used to
identify high-quality data within the self-generated content by LLMs. Under
this data generation scenario, our method surpasses current state-of-the-art
(SOTA) results across various training settings and demonstrates continuous
improvements in alignment performance as dataset size and training iterations
increase.
| no_new_dataset | 0.940953 |
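The record above defines an "alignment potential" metric as the gap between a target explicit reward margin and the model's current implicit reward margin. The sketch below is only an illustrative reconstruction: it assumes the DPO-style implicit reward (a beta-scaled log-probability ratio against a frozen reference model) and assumes the explicit margin comes from an external reward model scoring the chosen and rejected responses; the function names, the beta value, and the selection rule are hypothetical, not the paper's released implementation.

```python
import torch

def sequence_logprob(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Sum of per-token log-probabilities of `labels` (batch, seq) under `logits` (batch, seq, vocab).
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_logp.sum(dim=-1)

def implicit_reward_margin(pi_chosen, ref_chosen, pi_rejected, ref_rejected,
                           chosen_ids, rejected_ids, beta: float = 0.1) -> torch.Tensor:
    # DPO-style implicit margin:
    #   beta * [(log pi - log ref)(chosen) - (log pi - log ref)(rejected)]
    chosen_gap = sequence_logprob(pi_chosen, chosen_ids) - sequence_logprob(ref_chosen, chosen_ids)
    rejected_gap = sequence_logprob(pi_rejected, rejected_ids) - sequence_logprob(ref_rejected, rejected_ids)
    return beta * (chosen_gap - rejected_gap)

def alignment_potential(explicit_margin: torch.Tensor, implicit_margin: torch.Tensor) -> torch.Tensor:
    # Gap from the model's current implicit margin to the target explicit margin;
    # a larger gap means the pair still has more to teach the current policy.
    return explicit_margin - implicit_margin

# Hypothetical selection step: keep the pairs with the largest alignment potential.
def select_top_fraction(scores: torch.Tensor, fraction: float = 0.5) -> torch.Tensor:
    k = max(1, int(fraction * scores.numel()))
    return torch.topk(scores, k).indices
```

Under this reading, data selection simply ranks preference pairs by alignment potential and keeps the highest-scoring fraction; how the paper normalizes margins or sets the fraction is not stated in the abstract.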
2503.01869 | So Won Jeong | So Won Jeong, Veronika Rockova | From Small to Large Language Models: Revisiting the Federalist Papers | null | null | null | null | cs.CL cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | For a long time, the authorship of the Federalist Papers had been a subject
of inquiry and debate, not only by linguists and historians but also by
statisticians. In what was arguably the first Bayesian case study, Mosteller
and Wallace (1963) provided the first statistical evidence for attributing all
disputed papers to Madison. Our paper revisits this historical dataset but from
a lens of modern language models, both small and large. We review some of the
more popular Large Language Model (LLM) tools and examine them from a
statistical point of view in the context of text classification. We investigate
whether, without any attempt to fine-tune, the general embedding constructs can
be useful for stylometry and attribution. We explain differences between
various word/phrase embeddings and discuss how to aggregate them in a document.
Contrary to our expectations, we exemplify that dimension expansion with word
embeddings may not always be beneficial for attribution relative to dimension
reduction with topic embeddings. Our experiments demonstrate that default LLM
embeddings (even after manual fine-tuning) may not consistently improve
authorship attribution accuracy. Instead, Bayesian analysis with topic
embeddings trained on "function words" yields superior out-of-sample
classification performance. This suggests that traditional (small) statistical
language models, with their interpretability and solid theoretical foundation,
can offer significant advantages in authorship attribution tasks. The code used
in this analysis is available at github.com/sowonjeong/slm-to-llm
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 21:50:46 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Jeong",
"So Won",
""
],
[
"Rockova",
"Veronika",
""
]
]
| TITLE: From Small to Large Language Models: Revisiting the Federalist Papers
ABSTRACT: For a long time, the authorship of the Federalist Papers had been a subject
of inquiry and debate, not only by linguists and historians but also by
statisticians. In what was arguably the first Bayesian case study, Mosteller
and Wallace (1963) provided the first statistical evidence for attributing all
disputed papers to Madison. Our paper revisits this historical dataset but from
a lens of modern language models, both small and large. We review some of the
more popular Large Language Model (LLM) tools and examine them from a
statistical point of view in the context of text classification. We investigate
whether, without any attempt to fine-tune, the general embedding constructs can
be useful for stylometry and attribution. We explain differences between
various word/phrase embeddings and discuss how to aggregate them in a document.
Contrary to our expectations, we exemplify that dimension expansion with word
embeddings may not always be beneficial for attribution relative to dimension
reduction with topic embeddings. Our experiments demonstrate that default LLM
embeddings (even after manual fine-tuning) may not consistently improve
authorship attribution accuracy. Instead, Bayesian analysis with topic
embeddings trained on "function words" yields superior out-of-sample
classification performance. This suggests that traditional (small) statistical
language models, with their interpretability and solid theoretical foundation,
can offer significant advantages in authorship attribution tasks. The code used
in this analysis is available at github.com/sowonjeong/slm-to-llm
| no_new_dataset | 0.950549 |
2503.01871 | Niklas H\"opner | Niklas H\"opner, Ilaria Tiddi, Herke van Hoof | Data Augmentation for Instruction Following Policies via Trajectory
Segmentation | null | null | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | The scalability of instructable agents in robotics or gaming is often
hindered by limited data that pairs instructions with agent trajectories.
However, large datasets of unannotated trajectories containing sequences of
various agent behaviour (play trajectories) are often available. In a
semi-supervised setup, we explore methods to extract labelled segments from
play trajectories. The goal is to augment a small annotated dataset of
instruction-trajectory pairs to improve the performance of an
instruction-following policy trained downstream via imitation learning.
Assuming little variation in segment length, recent video segmentation methods
can effectively extract labelled segments. To address the constraint of segment
length, we propose Play Segmentation (PS), a probabilistic model that finds
maximum-likelihood segmentations of extended subsegments, while only being trained
on individual instruction segments. Our results in a game environment and a
simulated robotic gripper setting underscore the importance of segmentation;
randomly sampled segments diminish performance, while incorporating labelled
segments from PS improves policy performance to the level of a policy trained
on twice the amount of labelled data.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 22:06:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Höpner",
"Niklas",
""
],
[
"Tiddi",
"Ilaria",
""
],
[
"van Hoof",
"Herke",
""
]
]
| TITLE: Data Augmentation for Instruction Following Policies via Trajectory
Segmentation
ABSTRACT: The scalability of instructable agents in robotics or gaming is often
hindered by limited data that pairs instructions with agent trajectories.
However, large datasets of unannotated trajectories containing sequences of
various agent behaviour (play trajectories) are often available. In a
semi-supervised setup, we explore methods to extract labelled segments from
play trajectories. The goal is to augment a small annotated dataset of
instruction-trajectory pairs to improve the performance of an
instruction-following policy trained downstream via imitation learning.
Assuming little variation in segment length, recent video segmentation methods
can effectively extract labelled segments. To address the constraint of segment
length, we propose Play Segmentation (PS), a probabilistic model that finds
maximum-likelihood segmentations of extended subsegments, while only being trained
on individual instruction segments. Our results in a game environment and a
simulated robotic gripper setting underscore the importance of segmentation;
randomly sampled segments diminish performance, while incorporating labelled
segments from PS improves policy performance to the level of a policy trained
on twice the amount of labelled data.
| no_new_dataset | 0.948251 |
2503.01872 | Shamik Roy | Mintong Kang, Vinayshekhar Bannihatti Kumar, Shamik Roy, Abhishek
Kumar, Sopan Khosla, Balakrishnan Murali Narayanaswamy, Rashmi Gangadharaiah | FairGen: Controlling Sensitive Attributes for Fair Generations in
Diffusion Models via Adaptive Latent Guidance | Under submission | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-to-image diffusion models often exhibit biases toward specific
demographic groups, such as generating more males than females when prompted to
generate images of engineers, raising ethical concerns and limiting their
adoption. In this paper, we tackle the challenge of mitigating generation bias
towards any target attribute value (e.g., "male" for "gender") in diffusion
models while preserving generation quality. We propose FairGen, an adaptive
latent guidance mechanism which controls the generation distribution during
inference. In FairGen, a latent guidance module dynamically adjusts the
diffusion process to enforce specific attributes, while a memory module tracks
the generation statistics and steers latent guidance to align with the targeted
fair distribution of the attribute values. Further, given the limitations of
existing datasets in comprehensively assessing bias in diffusion models, we
introduce a holistic bias evaluation benchmark HBE, covering diverse domains
and incorporating complex prompts across various applications. Extensive
evaluations on HBE and Stable Bias datasets demonstrate that FairGen
outperforms existing bias mitigation approaches, achieving substantial bias
reduction (e.g., 68.5% gender bias reduction on Stable Diffusion 2). Ablation
studies highlight FairGen's ability to flexibly and precisely control
generation distribution at any user-specified granularity, ensuring adaptive
and targeted bias mitigation.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 23:47:22 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kang",
"Mintong",
""
],
[
"Kumar",
"Vinayshekhar Bannihatti",
""
],
[
"Roy",
"Shamik",
""
],
[
"Kumar",
"Abhishek",
""
],
[
"Khosla",
"Sopan",
""
],
[
"Narayanaswamy",
"Balakrishnan Murali",
""
],
[
"Gangadharaiah",
"Rashmi",
""
]
]
| TITLE: FairGen: Controlling Sensitive Attributes for Fair Generations in
Diffusion Models via Adaptive Latent Guidance
ABSTRACT: Text-to-image diffusion models often exhibit biases toward specific
demographic groups, such as generating more males than females when prompted to
generate images of engineers, raising ethical concerns and limiting their
adoption. In this paper, we tackle the challenge of mitigating generation bias
towards any target attribute value (e.g., "male" for "gender") in diffusion
models while preserving generation quality. We propose FairGen, an adaptive
latent guidance mechanism which controls the generation distribution during
inference. In FairGen, a latent guidance module dynamically adjusts the
diffusion process to enforce specific attributes, while a memory module tracks
the generation statistics and steers latent guidance to align with the targeted
fair distribution of the attribute values. Further, given the limitations of
existing datasets in comprehensively assessing bias in diffusion models, we
introduce a holistic bias evaluation benchmark HBE, covering diverse domains
and incorporating complex prompts across various applications. Extensive
evaluations on HBE and Stable Bias datasets demonstrate that FairGen
outperforms existing bias mitigation approaches, achieving substantial bias
reduction (e.g., 68.5% gender bias reduction on Stable Diffusion 2). Ablation
studies highlight FairGen's ability to flexibly and precisely control
generation distribution at any user-specified granularity, ensuring adaptive
and targeted bias mitigation.
| new_dataset | 0.635901 |
2503.01875 | Yaxuan Kong | Yaxuan Kong, Yiyuan Yang, Yoontae Hwang, Wenjie Du, Stefan Zohren,
Zhangyang Wang, Ming Jin, Qingsong Wen | Time-MQA: Time Series Multi-Task Question Answering with Context
Enhancement | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Time series data are foundational in finance, healthcare, and energy domains.
However, most existing methods and datasets remain focused on a narrow spectrum
of tasks, such as forecasting or anomaly detection. To bridge this gap, we
introduce Time Series Multi-Task Question Answering (Time-MQA), a unified
framework that enables natural language queries across multiple time series
tasks - numerical analytical tasks and open-ended question answering with
reasoning. Central to Time-MQA is the TSQA dataset, a large-scale dataset
containing $\sim$200k question-answer pairs derived from diverse time series
spanning environment, traffic, etc. This comprehensive resource covers various
time series lengths and promotes robust model development. We further
demonstrate how continually pre-training large language models (Mistral 7B,
Llama-3 8B, and Qwen-2.5 7B) on the TSQA dataset enhanced time series reasoning
capabilities, moving beyond mere numeric tasks and enabling more advanced and
intuitive interactions with temporal data. The complete TSQA dataset, models,
executable codes, user study questionnaires for evaluation, and results have
all been open-sourced.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 13:47:13 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kong",
"Yaxuan",
""
],
[
"Yang",
"Yiyuan",
""
],
[
"Hwang",
"Yoontae",
""
],
[
"Du",
"Wenjie",
""
],
[
"Zohren",
"Stefan",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Jin",
"Ming",
""
],
[
"Wen",
"Qingsong",
""
]
]
| TITLE: Time-MQA: Time Series Multi-Task Question Answering with Context
Enhancement
ABSTRACT: Time series data are foundational in finance, healthcare, and energy domains.
However, most existing methods and datasets remain focused on a narrow spectrum
of tasks, such as forecasting or anomaly detection. To bridge this gap, we
introduce Time Series Multi-Task Question Answering (Time-MQA), a unified
framework that enables natural language queries across multiple time series
tasks - numerical analytical tasks and open-ended question answering with
reasoning. Central to Time-MQA is the TSQA dataset, a large-scale dataset
containing $\sim$200k question-answer pairs derived from diverse time series
spanning environment, traffic, etc. This comprehensive resource covers various
time series lengths and promotes robust model development. We further
demonstrate how continually pre-training large language models (Mistral 7B,
Llama-3 8B, and Qwen-2.5 7B) on the TSQA dataset enhanced time series reasoning
capabilities, moving beyond mere numeric tasks and enabling more advanced and
intuitive interactions with temporal data. The complete TSQA dataset, models,
executable codes, user study questionnaires for evaluation, and results have
all been open-sourced.
| new_dataset | 0.96802 |
2503.01882 | Jungho Kim | Jungho Kim, Taeyong Kim | Constructing balanced datasets for predicting failure modes in
structural systems under seismic hazards | null | null | null | null | cs.LG physics.geo-ph stat.AP stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate prediction of structural failure modes under seismic excitations is
essential for seismic risk and resilience assessment. Traditional
simulation-based approaches often result in imbalanced datasets dominated by
non-failure or frequently observed failure scenarios, limiting the
effectiveness in machine learning-based prediction. To address this challenge,
this study proposes a framework for constructing balanced datasets that include
distinct failure modes. The framework consists of three key steps. First,
critical ground motion features (GMFs) are identified to effectively represent
ground motion time histories. Second, an adaptive algorithm is employed to
estimate the probability densities of various failure domains in the space of
critical GMFs and structural parameters. Third, samples generated from these
probability densities are transformed into ground motion time histories by
using a scaling factor optimization process. A balanced dataset is constructed
by performing nonlinear response history analyses on structural systems with
parameters matching the generated samples, subjected to corresponding
transformed ground motion time histories. Deep neural network models are
trained on balanced and imbalanced datasets to highlight the importance of
dataset balancing. To further evaluate the framework's applicability, numerical
investigations are conducted using two different structural models subjected to
recorded and synthetic ground motions. The results demonstrate the framework's
robustness and effectiveness in addressing dataset imbalance and improving
machine learning performance in seismic failure mode prediction.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 22:11:51 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kim",
"Jungho",
""
],
[
"Kim",
"Taeyong",
""
]
]
| TITLE: Constructing balanced datasets for predicting failure modes in
structural systems under seismic hazards
ABSTRACT: Accurate prediction of structural failure modes under seismic excitations is
essential for seismic risk and resilience assessment. Traditional
simulation-based approaches often result in imbalanced datasets dominated by
non-failure or frequently observed failure scenarios, limiting the
effectiveness in machine learning-based prediction. To address this challenge,
this study proposes a framework for constructing balanced datasets that include
distinct failure modes. The framework consists of three key steps. First,
critical ground motion features (GMFs) are identified to effectively represent
ground motion time histories. Second, an adaptive algorithm is employed to
estimate the probability densities of various failure domains in the space of
critical GMFs and structural parameters. Third, samples generated from these
probability densities are transformed into ground motion time histories by
using a scaling factor optimization process. A balanced dataset is constructed
by performing nonlinear response history analyses on structural systems with
parameters matching the generated samples, subjected to corresponding
transformed ground motion time histories. Deep neural network models are
trained on balanced and imbalanced datasets to highlight the importance of
dataset balancing. To further evaluate the framework's applicability, numerical
investigations are conducted using two different structural models subjected to
recorded and synthetic ground motions. The results demonstrate the framework's
robustness and effectiveness in addressing dataset imbalance and improving
machine learning performance in seismic failure mode prediction.
| no_new_dataset | 0.948058 |
2503.01891 | Xinwu Ye | Xinwu Ye, Chengfan Li, Siming Chen, Xiangru Tang, Wei Wei | MMSciBench: Benchmarking Language Models on Multimodal Scientific
Problems | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent advances in large language models (LLMs) and vision-language models
(LVLMs) have shown promise across many tasks, yet their scientific reasoning
capabilities remain untested, particularly in multimodal settings. We present
MMSciBench, a benchmark for evaluating mathematical and physical reasoning
through text-only and text-image formats, with human-annotated difficulty
levels, solutions with detailed explanations, and taxonomic mappings.
Evaluation of state-of-the-art models reveals significant limitations, with
even the best model achieving only \textbf{63.77\%} accuracy and particularly
struggling with visual reasoning tasks. Our analysis exposes critical gaps in
complex reasoning and visual-textual integration, establishing MMSciBench as a
rigorous standard for measuring progress in multimodal scientific
understanding. The code for MMSciBench is open-sourced at GitHub, and the
dataset is available at Hugging Face.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 15:38:43 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ye",
"Xinwu",
""
],
[
"Li",
"Chengfan",
""
],
[
"Chen",
"Siming",
""
],
[
"Tang",
"Xiangru",
""
],
[
"Wei",
"Wei",
""
]
]
| TITLE: MMSciBench: Benchmarking Language Models on Multimodal Scientific
Problems
ABSTRACT: Recent advances in large language models (LLMs) and vision-language models
(LVLMs) have shown promise across many tasks, yet their scientific reasoning
capabilities remain untested, particularly in multimodal settings. We present
MMSciBench, a benchmark for evaluating mathematical and physical reasoning
through text-only and text-image formats, with human-annotated difficulty
levels, solutions with detailed explanations, and taxonomic mappings.
Evaluation of state-of-the-art models reveals significant limitations, with
even the best model achieving only \textbf{63.77\%} accuracy and particularly
struggling with visual reasoning tasks. Our analysis exposes critical gaps in
complex reasoning and visual-textual integration, establishing MMSciBench as a
rigorous standard for measuring progress in multimodal scientific
understanding. The code for MMSciBench is open-sourced at GitHub, and the
dataset is available at Hugging Face.
| new_dataset | 0.935905 |
2503.01892 | Loukas Ilias | Loukas Ilias, Dimitris Askounis | Recognition of Dysarthria in Amyotrophic Lateral Sclerosis patients
using Hypernetworks | null | null | null | null | cs.LG cs.CV cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Amyotrophic Lateral Sclerosis (ALS) constitutes a progressive
neurodegenerative disease with varying symptoms, including decline in speech
intelligibility. Existing studies, which recognize dysarthria in ALS patients
by predicting the clinical standard ALSFRS-R, rely on feature extraction
strategies and the design of customized convolutional neural networks followed
by dense layers. However, recent studies have shown that neural networks
adopting the logic of input-conditional computations enjoy a series of
benefits, including faster training, better performance, and flexibility. To
resolve these issues, we present the first study incorporating hypernetworks
for recognizing dysarthria. Specifically, we use audio files, convert them into
log-Mel spectrogram, delta, and delta-delta, and pass the resulting image
through a pretrained modified AlexNet model. Finally, we use a hypernetwork,
which generates weights for a target network. Experiments are conducted on a
newly collected publicly available dataset, namely VOC-ALS. Results showed that
the proposed approach reaches an accuracy of up to 82.66%, outperforming strong
baselines, including multimodal fusion methods, while findings from an ablation
study demonstrated the effectiveness of the introduced methodology. Overall,
our approach incorporating hypernetworks obtains valuable advantages over
state-of-the-art results in terms of generalization ability, parameter
efficiency, and robustness.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 15:57:37 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ilias",
"Loukas",
""
],
[
"Askounis",
"Dimitris",
""
]
]
| TITLE: Recognition of Dysarthria in Amyotrophic Lateral Sclerosis patients
using Hypernetworks
ABSTRACT: Amyotrophic Lateral Sclerosis (ALS) constitutes a progressive
neurodegenerative disease with varying symptoms, including decline in speech
intelligibility. Existing studies, which recognize dysarthria in ALS patients
by predicting the clinical standard ALSFRS-R, rely on feature extraction
strategies and the design of customized convolutional neural networks followed
by dense layers. However, recent studies have shown that neural networks
adopting the logic of input-conditional computations enjoy a series of
benefits, including faster training, better performance, and flexibility. To
resolve these issues, we present the first study incorporating hypernetworks
for recognizing dysarthria. Specifically, we use audio files, convert them into
log-Mel spectrogram, delta, and delta-delta, and pass the resulting image
through a pretrained modified AlexNet model. Finally, we use a hypernetwork,
which generates weights for a target network. Experiments are conducted on a
newly collected publicly available dataset, namely VOC-ALS. Results showed that
the proposed approach reaches an accuracy of up to 82.66%, outperforming strong
baselines, including multimodal fusion methods, while findings from an ablation
study demonstrated the effectiveness of the introduced methodology. Overall,
our approach incorporating hypernetworks obtains valuable advantages over
state-of-the-art results in terms of generalization ability, parameter
efficiency, and robustness.
| new_dataset | 0.962673 |
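The record above passes log-Mel spectrogram features through a modified AlexNet and then uses a hypernetwork that generates the weights of a target network. The snippet below is a generic, minimal hypernetwork sketch rather than the paper's architecture: the conditioning input, the layer sizes, the two-class output, and the choice of a single linear target layer are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    # A linear target layer whose weight and bias are produced by a small hypernetwork.
    def __init__(self, cond_dim: int, in_dim: int, out_dim: int, hidden: int = 128):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        n_params = out_dim * in_dim + out_dim          # target-layer weight + bias
        self.hyper = nn.Sequential(                    # hypernetwork: condition -> parameters
            nn.Linear(cond_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_params)
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        params = self.hyper(cond)                                         # (batch, n_params)
        w = params[:, : self.out_dim * self.in_dim].view(-1, self.out_dim, self.in_dim)
        b = params[:, self.out_dim * self.in_dim :]
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1) + b              # per-sample y = Wx + b

# Toy usage: 256-d backbone features serve both as the input and as the condition
# for an illustrative two-class head (dimensions and class count are assumptions).
features = torch.randn(4, 256)
head = HyperLinear(cond_dim=256, in_dim=256, out_dim=2)
probabilities = F.softmax(head(features, features), dim=-1)
```

The appeal of this input-conditional design, as the abstract notes, is that the target layer's weights can adapt per example without training a separate model for each condition.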
2503.01893 | Maya Vilenko | Maya Vilenko | BiHRNN -- Bi-Directional Hierarchical Recurrent Neural Network for
Inflation Forecasting | Master's thesis. Under the supervision of Dr. Noam Koeningstein. 40
pages | null | null | null | cs.LG econ.GN q-fin.CP q-fin.EC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inflation prediction guides decisions on interest rates, investments, and
wages, playing a key role in economic stability. Yet accurate forecasting is
challenging due to dynamic factors and the layered structure of the Consumer
Price Index, which organizes goods and services into multiple categories. We
propose the Bi-directional Hierarchical Recurrent Neural Network (BiHRNN) model
to address these challenges by leveraging the hierarchical structure to enable
bidirectional information flow between levels. Informative constraints on the
RNN parameters enhance predictive accuracy at all levels without the
inefficiencies of a unified model. We validated BiHRNN on inflation datasets
from the United States, Canada, and Norway by training, tuning hyperparameters,
and experimenting with various loss functions. Our results demonstrate that
BiHRNN significantly outperforms traditional RNN models, with its bidirectional
architecture playing a pivotal role in achieving improved forecasting accuracy.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 16:12:03 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Vilenko",
"Maya",
""
]
]
| TITLE: BiHRNN -- Bi-Directional Hierarchical Recurrent Neural Network for
Inflation Forecasting
ABSTRACT: Inflation prediction guides decisions on interest rates, investments, and
wages, playing a key role in economic stability. Yet accurate forecasting is
challenging due to dynamic factors and the layered structure of the Consumer
Price Index, which organizes goods and services into multiple categories. We
propose the Bi-directional Hierarchical Recurrent Neural Network (BiHRNN) model
to address these challenges by leveraging the hierarchical structure to enable
bidirectional information flow between levels. Informative constraints on the
RNN parameters enhance predictive accuracy at all levels without the
inefficiencies of a unified model. We validated BiHRNN on inflation datasets
from the United States, Canada, and Norway by training, tuning hyperparameters,
and experimenting with various loss functions. Our results demonstrate that
BiHRNN significantly outperforms traditional RNN models, with its bidirectional
architecture playing a pivotal role in achieving improved forecasting accuracy.
| no_new_dataset | 0.947769 |
2503.01894 | Rashid Mushkani | Rashid Mushkani, Shravan Nayak, Hugo Berard, Allison Cohen, Shin
Koseki, Hadrien Bertrand | LIVS: A Pluralistic Alignment Dataset for Inclusive Public Spaces | 30 pages, 19 figures | null | null | null | cs.CV cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | We introduce the Local Intersectional Visual Spaces (LIVS) dataset, a
benchmark for multi-criteria alignment of text-to-image (T2I) models in
inclusive urban planning. Developed through a two-year participatory process
with 30 community organizations, LIVS encodes diverse spatial preferences
across 634 initial concepts, consolidated into six core criteria:
Accessibility, Safety, Comfort, Invitingness, Inclusivity, and Diversity,
through 37,710 pairwise comparisons. Using Direct Preference Optimization (DPO)
to fine-tune Stable Diffusion XL, we observed a measurable increase in
alignment with community preferences, though a significant proportion of
neutral ratings highlights the complexity of modeling intersectional needs.
Additionally, as annotation volume increases, accuracy shifts further toward
the DPO-tuned model, suggesting that larger-scale preference data enhances
fine-tuning effectiveness. LIVS underscores the necessity of integrating
context-specific, stakeholder-driven criteria into generative modeling and
provides a resource for evaluating AI alignment methodologies across diverse
socio-spatial contexts.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 19:18:37 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mushkani",
"Rashid",
""
],
[
"Nayak",
"Shravan",
""
],
[
"Berard",
"Hugo",
""
],
[
"Cohen",
"Allison",
""
],
[
"Koseki",
"Shin",
""
],
[
"Bertrand",
"Hadrien",
""
]
]
| TITLE: LIVS: A Pluralistic Alignment Dataset for Inclusive Public Spaces
ABSTRACT: We introduce the Local Intersectional Visual Spaces (LIVS) dataset, a
benchmark for multi-criteria alignment of text-to-image (T2I) models in
inclusive urban planning. Developed through a two-year participatory process
with 30 community organizations, LIVS encodes diverse spatial preferences
across 634 initial concepts, consolidated into six core criteria:
Accessibility, Safety, Comfort, Invitingness, Inclusivity, and Diversity,
through 37,710 pairwise comparisons. Using Direct Preference Optimization (DPO)
to fine-tune Stable Diffusion XL, we observed a measurable increase in
alignment with community preferences, though a significant proportion of
neutral ratings highlights the complexity of modeling intersectional needs.
Additionally, as annotation volume increases, accuracy shifts further toward
the DPO-tuned model, suggesting that larger-scale preference data enhances
fine-tuning effectiveness. LIVS underscores the necessity of integrating
context-specific, stakeholder-driven criteria into generative modeling and
provides a resource for evaluating AI alignment methodologies across diverse
socio-spatial contexts.
| new_dataset | 0.963506 |
2503.01896 | Vishnu Kabir Chhabra | Vishnu Kabir Chhabra, Ding Zhu, Mohammad Mahdi Khalili | Neuroplasticity and Corruption in Model Mechanisms: A Case Study Of
Indirect Object Identification | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Previous research has shown that fine-tuning language models on general tasks
enhances their underlying mechanisms. However, the impact of fine-tuning on
poisoned data and the resulting changes in these mechanisms are poorly
understood. This study investigates the changes in a model's mechanisms during
toxic fine-tuning and identifies the primary corruption mechanisms. We also
analyze the changes after retraining a corrupted model on the original dataset
and observe neuroplasticity behaviors, where the model relearns original
mechanisms after fine-tuning the corrupted model. Our findings indicate that:
(i) Underlying mechanisms are amplified across task-specific fine-tuning which
can be generalized to longer epochs, (ii) Model corruption via toxic
fine-tuning is localized to specific circuit components, (iii) Models exhibit
neuroplasticity when retraining corrupted models on a clean dataset, reforming
the original model mechanisms.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 23:44:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chhabra",
"Vishnu Kabir",
""
],
[
"Zhu",
"Ding",
""
],
[
"Khalili",
"Mohammad Mahdi",
""
]
]
| TITLE: Neuroplasticity and Corruption in Model Mechanisms: A Case Study Of
Indirect Object Identification
ABSTRACT: Previous research has shown that fine-tuning language models on general tasks
enhances their underlying mechanisms. However, the impact of fine-tuning on
poisoned data and the resulting changes in these mechanisms are poorly
understood. This study investigates the changes in a model's mechanisms during
toxic fine-tuning and identifies the primary corruption mechanisms. We also
analyze the changes after retraining a corrupted model on the original dataset
and observe neuroplasticity behaviors, where the model relearns original
mechanisms after fine-tuning the corrupted model. Our findings indicate that:
(i) Underlying mechanisms are amplified across task-specific fine-tuning which
can be generalized to longer epochs, (ii) Model corruption via toxic
fine-tuning is localized to specific circuit components, (iii) Models exhibit
neuroplasticity when retraining corrupted models on a clean dataset, reforming
the original model mechanisms.
| no_new_dataset | 0.947866 |
2503.01899 | Chenxu Dang | Chenxu Dang, Zaipeng Duan, Pei An, Xinmin Zhang, Xuzhong Hu and Jie Ma | FASTer: Focal Token Acquiring-and-Scaling Transformer for Long-term 3D
Object Detection | 10pages,6 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent top-performing temporal 3D detectors based on Lidars have increasingly
adopted region-based paradigms. They first generate coarse proposals, followed
by encoding and fusing regional features. However, indiscriminate sampling and
fusion often overlook the varying contributions of individual points and lead
to exponentially increased complexity as the number of input frames grows.
Moreover, arbitrary result-level concatenation limits the global information
extraction. In this paper, we propose a Focal Token Acquiring-and-Scaling
Transformer (FASTer), which dynamically selects focal tokens and condenses
token sequences in an adaptive and lightweight manner. Emphasizing the
contribution of individual tokens, we propose a simple but effective Adaptive
Scaling mechanism to capture geometric contexts while sifting out focal points.
Adaptively storing and processing only focal points in historical frames
dramatically reduces the overall complexity. Furthermore, a novel Grouped
Hierarchical Fusion strategy is proposed, progressively performing sequence
scaling and Intra-Group Fusion operations to facilitate the exchange of global
spatial and temporal information. Experiments on the Waymo Open Dataset
demonstrate that our FASTer significantly outperforms other state-of-the-art
detectors in both performance and efficiency while also exhibiting improved
flexibility and robustness. The code is available at
https://github.com/MSunDYY/FASTer.git.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:15:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Dang",
"Chenxu",
""
],
[
"Duan",
"Zaipeng",
""
],
[
"An",
"Pei",
""
],
[
"Zhang",
"Xinmin",
""
],
[
"Hu",
"Xuzhong",
""
],
[
"Ma",
"Jie",
""
]
]
| TITLE: FASTer: Focal Token Acquiring-and-Scaling Transformer for Long-term 3D
Object Detection
ABSTRACT: Recent top-performing temporal 3D detectors based on Lidars have increasingly
adopted region-based paradigms. They first generate coarse proposals, followed
by encoding and fusing regional features. However, indiscriminate sampling and
fusion often overlook the varying contributions of individual points and lead
to exponentially increased complexity as the number of input frames grows.
Moreover, arbitrary result-level concatenation limits the global information
extraction. In this paper, we propose a Focal Token Acquiring-and-Scaling
Transformer (FASTer), which dynamically selects focal tokens and condenses
token sequences in an adaptive and lightweight manner. Emphasizing the
contribution of individual tokens, we propose a simple but effective Adaptive
Scaling mechanism to capture geometric contexts while sifting out focal points.
Adaptively storing and processing only focal points in historical frames
dramatically reduces the overall complexity. Furthermore, a novel Grouped
Hierarchical Fusion strategy is proposed, progressively performing sequence
scaling and Intra-Group Fusion operations to facilitate the exchange of global
spatial and temporal information. Experiments on the Waymo Open Dataset
demonstrate that our FASTer significantly outperforms other state-of-the-art
detectors in both performance and efficiency while also exhibiting improved
flexibility and robustness. The code is available at
https://github.com/MSunDYY/FASTer.git.
| no_new_dataset | 0.947769 |
2503.01900 | Tianyi Ma | Tianyi Ma, Yiyue Qian, Zehong Wang, Zheyuan Zhang, Chuxu Zhang,
Yanfang Ye | LLM-Empowered Class Imbalanced Graph Prompt Learning for Online Drug
Trafficking Detection | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the market for illicit drugs remains extremely profitable, major online
platforms have become direct-to-consumer intermediaries for illicit drug
trafficking participants. These online activities raise significant social
concerns that require immediate actions. Existing approaches to combating this
challenge are generally impractical, due to the imbalance of classes and
scarcity of labeled samples in real-world applications. To this end, we propose
a novel Large Language Model-empowered Heterogeneous Graph Prompt Learning
framework for illicit Drug Trafficking detection, called LLM-HetGDT, that
leverages LLM to facilitate heterogeneous graph neural networks (HGNNs) to
effectively identify drug trafficking activities in the class-imbalanced
scenarios. Specifically, we first pre-train HGNN over a contrastive pretext
task to capture the inherent node and structure information over the unlabeled
drug trafficking heterogeneous graph (HG). Afterward, we employ LLM to augment
the HG by generating high-quality synthetic user nodes in minority classes.
Then, we fine-tune the soft prompts on the augmented HG to capture the
important information in the minority classes for the downstream drug
trafficking detection task. To comprehensively study online illicit drug
trafficking activities, we collect a new HG dataset over Twitter, called
Twitter-HetDrug. Extensive experiments on this dataset demonstrate the
effectiveness, efficiency, and applicability of LLM-HetGDT.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 04:38:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ma",
"Tianyi",
""
],
[
"Qian",
"Yiyue",
""
],
[
"Wang",
"Zehong",
""
],
[
"Zhang",
"Zheyuan",
""
],
[
"Zhang",
"Chuxu",
""
],
[
"Ye",
"Yanfang",
""
]
]
| TITLE: LLM-Empowered Class Imbalanced Graph Prompt Learning for Online Drug
Trafficking Detection
ABSTRACT: As the market for illicit drugs remains extremely profitable, major online
platforms have become direct-to-consumer intermediaries for illicit drug
trafficking participants. These online activities raise significant social
concerns that require immediate actions. Existing approaches to combating this
challenge are generally impractical, due to the imbalance of classes and
scarcity of labeled samples in real-world applications. To this end, we propose
a novel Large Language Model-empowered Heterogeneous Graph Prompt Learning
framework for illicit Drug Trafficking detection, called LLM-HetGDT, that
leverages LLM to facilitate heterogeneous graph neural networks (HGNNs) to
effectively identify drug trafficking activities in the class-imbalanced
scenarios. Specifically, we first pre-train HGNN over a contrastive pretext
task to capture the inherent node and structure information over the unlabeled
drug trafficking heterogeneous graph (HG). Afterward, we employ LLM to augment
the HG by generating high-quality synthetic user nodes in minority classes.
Then, we fine-tune the soft prompts on the augmented HG to capture the
important information in the minority classes for the downstream drug
trafficking detection task. To comprehensively study online illicit drug
trafficking activities, we collect a new HG dataset over Twitter, called
Twitter-HetDrug. Extensive experiments on this dataset demonstrate the
effectiveness, efficiency, and applicability of LLM-HetGDT.
| new_dataset | 0.955858 |
2503.01903 | Ruoxi Wang | Ruoxi Wang, Shuyu Liu, Ling Zhang, Xuequan Zhu, Rui Yang, Xinzhu Zhou,
Fei Wu, Zhi Yang, Cheng Jin, Gang Wang | PsychBench: A comprehensive and professional benchmark for evaluating
the performance of LLM-assisted psychiatric clinical practice | null | null | null | null | cs.CL cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of Large Language Models (LLMs) offers potential solutions to
address problems such as shortage of medical resources and low diagnostic
consistency in psychiatric clinical practice. Despite this potential, a robust
and comprehensive benchmarking framework to assess the efficacy of LLMs in
authentic psychiatric clinical environments is absent. This has impeded the
advancement of specialized LLMs tailored to psychiatric applications. In
response to this gap, by incorporating clinical demands in psychiatry and
clinical data, we proposed a benchmarking system, PsychBench, to evaluate the
practical performance of LLMs in psychiatric clinical settings. We conducted a
comprehensive quantitative evaluation of 16 LLMs using PsychBench, and
investigated the impact of prompt design, chain-of-thought reasoning, input
text length, and domain-specific knowledge fine-tuning on model performance.
Through detailed error analysis, we identified strengths and potential
limitations of the existing models and suggested directions for improvement.
Subsequently, a clinical reader study involving 60 psychiatrists of varying
seniority was conducted to further explore the practical benefits of existing
LLMs as supportive tools for psychiatrists. Through the
quantitative and reader evaluation, we show that while existing models
demonstrate significant potential, they are not yet adequate as decision-making
tools in psychiatric clinical practice. The reader study further indicates
that, as an auxiliary tool, LLM could provide particularly notable support for
junior psychiatrists, effectively enhancing their work efficiency and overall
clinical quality. To promote research in this area, we will make the dataset
and evaluation framework publicly available, with the hope of advancing the
application of LLMs in psychiatric clinical settings.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 12:17:41 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wang",
"Ruoxi",
""
],
[
"Liu",
"Shuyu",
""
],
[
"Zhang",
"Ling",
""
],
[
"Zhu",
"Xuequan",
""
],
[
"Yang",
"Rui",
""
],
[
"Zhou",
"Xinzhu",
""
],
[
"Wu",
"Fei",
""
],
[
"Yang",
"Zhi",
""
],
[
"Jin",
"Cheng",
""
],
[
"Wang",
"Gang",
""
]
]
| TITLE: PsychBench: A comprehensive and professional benchmark for evaluating
the performance of LLM-assisted psychiatric clinical practice
ABSTRACT: The advent of Large Language Models (LLMs) offers potential solutions to
address problems such as shortage of medical resources and low diagnostic
consistency in psychiatric clinical practice. Despite this potential, a robust
and comprehensive benchmarking framework to assess the efficacy of LLMs in
authentic psychiatric clinical environments is absent. This has impeded the
advancement of specialized LLMs tailored to psychiatric applications. In
response to this gap, by incorporating clinical demands in psychiatry and
clinical data, we proposed a benchmarking system, PsychBench, to evaluate the
practical performance of LLMs in psychiatric clinical settings. We conducted a
comprehensive quantitative evaluation of 16 LLMs using PsychBench, and
investigated the impact of prompt design, chain-of-thought reasoning, input
text length, and domain-specific knowledge fine-tuning on model performance.
Through detailed error analysis, we identified strengths and potential
limitations of the existing models and suggested directions for improvement.
Subsequently, a clinical reader study involving 60 psychiatrists of varying
seniority was conducted to further explore the practical benefits of existing
LLMs as supportive tools for psychiatrists. Through the
quantitative and reader evaluation, we show that while existing models
demonstrate significant potential, they are not yet adequate as decision-making
tools in psychiatric clinical practice. The reader study further indicates
that, as an auxiliary tool, LLM could provide particularly notable support for
junior psychiatrists, effectively enhancing their work efficiency and overall
clinical quality. To promote research in this area, we will make the dataset
and evaluation framework publicly available, with the hope of advancing the
application of LLMs in psychiatric clinical settings.
| no_new_dataset | 0.903635 |
2503.01904 | Christian Gapp | Christian Gapp, Elias Tappeiner, Martin Welk, Karl Fritscher, Elke
Ruth Gizewski, Rainer Schubert | What are You Looking at? Modality Contribution in Multimodal Medical
Deep Learning Methods | Contribution to Conference for Computer Assisted Radiology and
Surgery (CARS 2025) | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Purpose: High dimensional, multimodal data can nowadays be analyzed by huge
deep neural networks with little effort. Several fusion methods for bringing
together different modalities have been developed. Particularly, in the field
of medicine with its presence of high dimensional multimodal patient data,
multimodal models characterize the next step. However, how these models process
the source information in detail remains largely underexplored.
Methods: To this end, we implemented an occlusion-based, model- and
performance-agnostic modality contribution method that quantitatively measures
the importance of each modality in the dataset for the model to fulfill its
task. We applied our method to three different multimodal medical problems for
experimental purposes. Results: Herein we found that some networks have modality
preferences that tend toward unimodal collapse, while some datasets are imbalanced
from the ground up. Moreover, we could determine a link between our metric and
the performance of single-modality trained nets. Conclusion: The information
gain through our metric holds remarkable potential to improve the development
of multimodal models and the creation of datasets in the future. With our
method we make a crucial contribution to the field of interpretability in deep
learning based multimodal research and thereby notably push the integrability
of multimodal AI into clinical practice. Our code is publicly available at
https://github.com/ChristianGappGit/MC_MMD.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 12:39:39 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Gapp",
"Christian",
""
],
[
"Tappeiner",
"Elias",
""
],
[
"Welk",
"Martin",
""
],
[
"Fritscher",
"Karl",
""
],
[
"Gizewski",
"Elke Ruth",
""
],
[
"Schubert",
"Rainer",
""
]
]
| TITLE: What are You Looking at? Modality Contribution in Multimodal Medical
Deep Learning Methods
ABSTRACT: Purpose: High dimensional, multimodal data can nowadays be analyzed by huge
deep neural networks with little effort. Several fusion methods for bringing
together different modalities have been developed. Particularly, in the field
of medicine with its presence of high dimensional multimodal patient data,
multimodal models characterize the next step. However, how these models process
the source information in detail remains largely underexplored.
Methods: To this end, we implemented an occlusion-based, model- and
performance-agnostic modality contribution method that quantitatively measures
the importance of each modality in the dataset for the model to fulfill its
task. We applied our method to three different multimodal medical problems for
experimental purposes. Results: Herein we found that some networks have modality
preferences that tend toward unimodal collapse, while some datasets are imbalanced
from the ground up. Moreover, we could determine a link between our metric and
the performance of single-modality trained nets. Conclusion: The information
gain through our metric holds remarkable potential to improve the development
of multimodal models and the creation of datasets in the future. With our
method we make a crucial contribution to the field of interpretability in deep
learning based multimodal research and thereby notably push the integrability
of multimodal AI into clinical practice. Our code is publicly available at
https://github.com/ChristianGappGit/MC_MMD.
| no_new_dataset | 0.944022 |
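The occlusion-based, model- and performance-agnostic modality contribution method described in the record above can be illustrated with a short sketch. The idea is generic: blank out one modality at a time, re-evaluate the model, and treat the resulting score drop as that modality's contribution. The occlusion value (zeros), the choice of metric, and the normalization are assumptions for illustration; the authors' exact protocol lives in the repository linked in the abstract.

```python
from typing import Callable, Dict
import numpy as np

def modality_contributions(
    evaluate: Callable[[Dict[str, np.ndarray]], float],            # task-specific performance scorer
    inputs: Dict[str, np.ndarray],                                  # modality name -> input array
    occlude: Callable[[np.ndarray], np.ndarray] = np.zeros_like,   # neutral replacement value
) -> Dict[str, float]:
    # Performance drop caused by occluding each modality, normalized to sum to 1.
    full_score = evaluate(inputs)
    drops = {}
    for name in inputs:
        occluded = dict(inputs)
        occluded[name] = occlude(inputs[name])                      # blank out one modality
        drops[name] = max(full_score - evaluate(occluded), 0.0)
    total = sum(drops.values()) or 1.0
    return {name: drop / total for name, drop in drops.items()}
```

A modality whose normalized contribution sits near 1 while the others sit near 0 is exactly the unimodal-collapse pattern the abstract reports for some networks.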
2503.01907 | Cheng-Yen Yang | Kunjun Li, Cheng-Yen Yang, Hsiang-Wei Huang, Jenq-Neng Hwang | Technical Report for ReID-SAM on SkiTB Visual Tracking Challenge 2025 | Technical report for 2nd solution of SkiTB Visual Tracking Challenge
(WACV 2025) | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | This report introduces ReID-SAM, a novel model developed for the SkiTB
Challenge that addresses the complexities of tracking skier appearance. Our
approach integrates the SAMURAI tracker with a person re-identification (Re-ID)
module and advanced post-processing techniques to enhance accuracy in
challenging skiing scenarios. We employ an OSNet-based Re-ID model to minimize
identity switches and utilize YOLOv11 with Kalman filtering or STARK-based
object detection for precise equipment tracking. When evaluated on the SkiTB
dataset, ReID-SAM achieved a state-of-the-art F1-score of 0.870, surpassing
existing methods across alpine, ski jumping, and freestyle skiing disciplines.
These results demonstrate significant advancements in skier tracking accuracy
and provide valuable insights for computer vision applications in winter
sports.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 16:57:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Kunjun",
""
],
[
"Yang",
"Cheng-Yen",
""
],
[
"Huang",
"Hsiang-Wei",
""
],
[
"Hwang",
"Jenq-Neng",
""
]
]
| TITLE: Technical Report for ReID-SAM on SkiTB Visual Tracking Challenge 2025
ABSTRACT: This report introduces ReID-SAM, a novel model developed for the SkiTB
Challenge that addresses the complexities of tracking skier appearance. Our
approach integrates the SAMURAI tracker with a person re-identification (Re-ID)
module and advanced post-processing techniques to enhance accuracy in
challenging skiing scenarios. We employ an OSNet-based Re-ID model to minimize
identity switches and utilize YOLOv11 with Kalman filtering or STARK-based
object detection for precise equipment tracking. When evaluated on the SkiTB
dataset, ReID-SAM achieved a state-of-the-art F1-score of 0.870, surpassing
existing methods across alpine, ski jumping, and freestyle skiing disciplines.
These results demonstrate significant advancements in skier tracking accuracy
and provide valuable insights for computer vision applications in winter
sports.
| no_new_dataset | 0.939025 |
2503.01908 | Jiawei Zhang | Jiawei Zhang, Shuang Yang, Bo Li | UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically
Hijacking Their Own Reasoning | null | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Model (LLM) agents equipped with external tools have become
increasingly powerful for handling complex tasks such as web shopping,
automated email replies, and financial trading. However, these advancements
also amplify the risks of adversarial attacks, particularly when LLM agents can
access sensitive external functionalities. Moreover, because LLM agents engage
in extensive reasoning or planning before executing final actions, manipulating
them into performing targeted malicious actions or invoking specific tools
remains a significant challenge. Consequently, directly embedding adversarial
strings in malicious instructions or injecting malicious prompts into tool
interactions has become less effective against modern LLM agents. In this work,
we present UDora, a unified red teaming framework designed for LLM Agents that
dynamically leverages the agent's own reasoning processes to compel it toward
malicious behavior. Specifically, UDora first samples the model's reasoning for
the given task, then automatically identifies multiple optimal positions within
these reasoning traces to insert targeted perturbations. Subsequently, it uses
the modified reasoning as the objective to optimize the adversarial strings. By
iteratively applying this process, the LLM agent will then be induced to
undertake designated malicious actions or to invoke specific malicious tools.
Our approach demonstrates superior effectiveness compared to existing methods
across three LLM agent datasets.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 21:30:28 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhang",
"Jiawei",
""
],
[
"Yang",
"Shuang",
""
],
[
"Li",
"Bo",
""
]
]
| TITLE: UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically
Hijacking Their Own Reasoning
ABSTRACT: Large Language Model (LLM) agents equipped with external tools have become
increasingly powerful for handling complex tasks such as web shopping,
automated email replies, and financial trading. However, these advancements
also amplify the risks of adversarial attacks, particularly when LLM agents can
access sensitive external functionalities. Moreover, because LLM agents engage
in extensive reasoning or planning before executing final actions, manipulating
them into performing targeted malicious actions or invoking specific tools
remains a significant challenge. Consequently, directly embedding adversarial
strings in malicious instructions or injecting malicious prompts into tool
interactions has become less effective against modern LLM agents. In this work,
we present UDora, a unified red teaming framework designed for LLM Agents that
dynamically leverages the agent's own reasoning processes to compel it toward
malicious behavior. Specifically, UDora first samples the model's reasoning for
the given task, then automatically identifies multiple optimal positions within
these reasoning traces to insert targeted perturbations. Subsequently, it uses
the modified reasoning as the objective to optimize the adversarial strings. By
iteratively applying this process, the LLM agent will then be induced to
undertake designated malicious actions or to invoke specific malicious tools.
Our approach demonstrates superior effectiveness compared to existing methods
across three LLM agent datasets.
| no_new_dataset | 0.940079 |
2503.01916 | Ahmed Farouk | Ashtakala Meghanath, Subham Das, Bikash K.Behera, Muhammad Attique
Khan, Saif Al-Kuwari and Ahmed Farouk | QDCNN: Quantum Deep Learning for Enhancing Safety and Reliability in
Autonomous Transportation Systems | 11 Pages, 7 Figures, 4 Tables | null | null | null | quant-ph cs.CV cs.RO eess.IV | http://creativecommons.org/licenses/by/4.0/ | In transportation cyber-physical systems (CPS), ensuring safety and
reliability in real-time decision-making is essential for successfully
deploying autonomous vehicles and intelligent transportation networks. However,
these systems face significant challenges, such as computational complexity and
the ability to handle ambiguous inputs like shadows in complex environments.
This paper introduces a Quantum Deep Convolutional Neural Network (QDCNN)
designed to enhance the safety and reliability of CPS in transportation by
leveraging quantum algorithms. At the core of QDCNN is the UU{\dag} method,
which is utilized to improve shadow detection through a propagation algorithm
that trains the centroid value with preprocessing and postprocessing operations
to classify shadow regions in images accurately. The proposed QDCNN is
evaluated on three datasets under normal conditions and one road affected by rain
to test its robustness. It outperforms existing methods in terms of
computational efficiency, achieving a shadow detection time of just 0.0049352
seconds, faster than classical algorithms like intensity-based thresholding
(0.03 seconds), chromaticity-based shadow detection (1.47 seconds), and local
binary pattern techniques (2.05 seconds). This remarkable speed, superior
accuracy, and noise resilience demonstrate the key factors for safe navigation
in autonomous transportation in real-time. This research demonstrates the
potential of quantum-enhanced models in addressing critical limitations of
classical methods, contributing to more dependable and robust autonomous
transportation systems within the CPS framework.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 19:04:44 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Meghanath",
"Ashtakala",
""
],
[
"Das",
"Subham",
""
],
[
"Behera",
"Bikash K.",
""
],
[
"Khan",
"Muhammad Attique",
""
],
[
"Al-Kuwari",
"Saif",
""
],
[
"Farouk",
"Ahmed",
""
]
]
| TITLE: QDCNN: Quantum Deep Learning for Enhancing Safety and Reliability in
Autonomous Transportation Systems
ABSTRACT: In transportation cyber-physical systems (CPS), ensuring safety and
reliability in real-time decision-making is essential for successfully
deploying autonomous vehicles and intelligent transportation networks. However,
these systems face significant challenges, such as computational complexity and
the ability to handle ambiguous inputs like shadows in complex environments.
This paper introduces a Quantum Deep Convolutional Neural Network (QDCNN)
designed to enhance the safety and reliability of CPS in transportation by
leveraging quantum algorithms. At the core of QDCNN is the UU{\dag} method,
which is utilized to improve shadow detection through a propagation algorithm
that trains the centroid value with preprocessing and postprocessing operations
to classify shadow regions in images accurately. The proposed QDCNN is
evaluated on three datasets under normal conditions and one road affected by rain
to test its robustness. It outperforms existing methods in terms of
computational efficiency, achieving a shadow detection time of just 0.0049352
seconds, faster than classical algorithms like intensity-based thresholding
(0.03 seconds), chromaticity-based shadow detection (1.47 seconds), and local
binary pattern techniques (2.05 seconds). This remarkable speed, superior
accuracy, and noise resilience demonstrate the key factors for safe navigation
in autonomous transportation in real-time. This research demonstrates the
potential of quantum-enhanced models in addressing critical limitations of
classical methods, contributing to more dependable and robust autonomous
transportation systems within the CPS framework.
| no_new_dataset | 0.944638 |
2503.01917 | Seongheon Park | Seongheon Park, Xuefeng Du, Min-Hsuan Yeh, Haobo Wang, Yixuan Li | How to Steer LLM Latents for Hallucination Detection? | ICLR Workshop on Quantify Uncertainty and Hallucination in Foundation
Models (QUESTION), 2025 | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Hallucinations in LLMs pose a significant concern to their safe deployment in
real-world applications. Recent approaches have leveraged the latent space of
LLMs for hallucination detection, but their embeddings, optimized for
linguistic coherence rather than factual accuracy, often fail to clearly
separate truthful and hallucinated content. To this end, we propose the
Truthfulness Separator Vector (TSV), a lightweight and flexible steering vector
that reshapes the LLM's representation space during inference to enhance the
separation between truthful and hallucinated outputs, without altering model
parameters. Our two-stage framework first trains TSV on a small set of labeled
exemplars to form compact and well-separated clusters. It then augments the
exemplar set with unlabeled LLM generations, employing an optimal
transport-based algorithm for pseudo-labeling combined with a confidence-based
filtering process. Extensive experiments demonstrate that TSV achieves
state-of-the-art performance with minimal labeled data, exhibiting strong
generalization across datasets and providing a practical solution for
real-world LLM applications.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 19:19:34 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Park",
"Seongheon",
""
],
[
"Du",
"Xuefeng",
""
],
[
"Yeh",
"Min-Hsuan",
""
],
[
"Wang",
"Haobo",
""
],
[
"Li",
"Yixuan",
""
]
]
| TITLE: How to Steer LLM Latents for Hallucination Detection?
ABSTRACT: Hallucinations in LLMs pose a significant concern to their safe deployment in
real-world applications. Recent approaches have leveraged the latent space of
LLMs for hallucination detection, but their embeddings, optimized for
linguistic coherence rather than factual accuracy, often fail to clearly
separate truthful and hallucinated content. To this end, we propose the
Truthfulness Separator Vector (TSV), a lightweight and flexible steering vector
that reshapes the LLM's representation space during inference to enhance the
separation between truthful and hallucinated outputs, without altering model
parameters. Our two-stage framework first trains TSV on a small set of labeled
exemplars to form compact and well-separated clusters. It then augments the
exemplar set with unlabeled LLM generations, employing an optimal
transport-based algorithm for pseudo-labeling combined with a confidence-based
filtering process. Extensive experiments demonstrate that TSV achieves
state-of-the-art performance with minimal labeled data, exhibiting strong
generalization across datasets and providing a practical solution for
real-world LLM applications.
| no_new_dataset | 0.944689 |
2503.01921 | Jianfei Xu | Jiaying Hong, Thanet Markchom, Jianfei Xu, Tong Wu and Huizhi Liang | NCL-UoR at SemEval-2025 Task 3: Detecting Multilingual Hallucination and
Related Observable Overgeneration Text Spans with Modified RefChecker and
Modified SeflCheckGPT | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SemEval-2025 Task 3 (Mu-SHROOM) focuses on detecting hallucinations in
content generated by various large language models (LLMs) across multiple
languages. This task involves not only identifying the presence of
hallucinations but also pinpointing their specific occurrences. To tackle this
challenge, this study introduces two methods: modified RefChecker and modified
SelfCheckGPT. The modified RefChecker integrates prompt-based factual
verification into References, structuring them as claim-based tests rather than
single external knowledge sources. The modified SelfCheckGPT incorporates
external knowledge to overcome its reliance on internal knowledge. In addition,
both methods' original prompt designs are enhanced to identify hallucinated
words within LLM-generated texts. Experimental results demonstrate the
effectiveness of the approach, achieving a high ranking on the test dataset in
detecting hallucinations across various languages, with an average IoU of
0.5310 and an average COR of 0.5669.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 04:21:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Hong",
"Jiaying",
""
],
[
"Markchom",
"Thanet",
""
],
[
"Xu",
"Jianfei",
""
],
[
"Wu",
"Tong",
""
],
[
"Liang",
"Huizhi",
""
]
]
| TITLE: NCL-UoR at SemEval-2025 Task 3: Detecting Multilingual Hallucination and
Related Observable Overgeneration Text Spans with Modified RefChecker and
Modified SeflCheckGPT
ABSTRACT: SemEval-2025 Task 3 (Mu-SHROOM) focuses on detecting hallucinations in
content generated by various large language models (LLMs) across multiple
languages. This task involves not only identifying the presence of
hallucinations but also pinpointing their specific occurrences. To tackle this
challenge, this study introduces two methods: modified RefChecker and modified
SelfCheckGPT. The modified RefChecker integrates prompt-based factual
verification into References, structuring them as claim-based tests rather than
single external knowledge sources. The modified SelfCheckGPT incorporates
external knowledge to overcome its reliance on internal knowledge. In addition,
both methods' original prompt designs are enhanced to identify hallucinated
words within LLM-generated texts. Experimental results demonstrate the
effectiveness of the approach, achieving a high ranking on the test dataset in
detecting hallucinations across various languages, with an average IoU of
0.5310 and an average COR of 0.5669.
| no_new_dataset | 0.934095 |
2503.01925 | Sinan Yang | Yueyang Wu, Sinan Yang, Yanming Wang, Jiajie He, Muhammad Mohsin
Pathan, Bensheng Qiu, and Xiaoxiao Wang | Volume-Wise Task fMRI Decoding with Deep Learning:Enhancing Temporal
Resolution and Cognitive Function Analysis | 8 pages, 11 figures | null | null | null | cs.LG cs.CV cs.HC q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the application of deep learning in task functional Magnetic
Resonance Imaging (tfMRI) decoding has led to significant advancements.
However, most studies remain constrained by the assumption of temporal
stationarity in neural activity, resulting in predominantly block-wise analysis
with limited temporal resolution on the order of tens of seconds. This
limitation restricts the ability to decode cognitive functions in detail. To
address these limitations, this study proposes a deep neural network designed
for volume-wise identification of task states within tfMRI data, thereby
overcoming the constraints of conventional methods. Evaluated on Human
Connectome Project (HCP) motor and gambling tfMRI datasets, the model achieved
impressive mean accuracy rates of 94.0% and 79.6%, respectively. These results
demonstrate a substantial enhancement in temporal resolution, enabling more
detailed exploration of cognitive processes. The study further employs
visualization algorithms to investigate dynamic brain mappings during different
tasks, marking a significant step forward in deep learning-based frame-level
tfMRI decoding. This approach offers new methodologies and tools for examining
dynamic changes in brain activities and understanding the underlying cognitive
mechanisms.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 12:07:26 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wu",
"Yueyang",
""
],
[
"Yang",
"Sinan",
""
],
[
"Wang",
"Yanming",
""
],
[
"He",
"Jiajie",
""
],
[
"Pathan",
"Muhammad Mohsin",
""
],
[
"Qiu",
"Bensheng",
""
],
[
"Wang",
"Xiaoxiao",
""
]
]
| TITLE: Volume-Wise Task fMRI Decoding with Deep Learning:Enhancing Temporal
Resolution and Cognitive Function Analysis
ABSTRACT: In recent years, the application of deep learning in task functional
Magnetic Resonance Imaging (tfMRI) decoding has led to significant
advancements. However, most studies remain constrained by the assumption of
temporal stationarity in neural activity, resulting in predominantly block-wise
analysis with limited temporal resolution on the order of tens of seconds. This
limitation restricts the ability to decode cognitive functions in detail. To
address these limitations, this study proposes a deep neural network designed
for volume-wise identification of task states within tfMRI data, thereby
overcoming the constraints of conventional methods. Evaluated on Human
Connectome Project (HCP) motor and gambling tfMRI datasets, the model achieved
impressive mean accuracy rates of 94.0% and 79.6%, respectively. These results
demonstrate a substantial enhancement in temporal resolution, enabling more
detailed exploration of cognitive processes. The study further employs
visualization algorithms to investigate dynamic brain mappings during different
tasks, marking a significant step forward in deep learning-based frame-level
tfMRI decoding. This approach offers new methodologies and tools for examining
dynamic changes in brain activities and understanding the underlying cognitive
mechanisms.
| no_new_dataset | 0.946892 |
2503.01926 | Yiran Zhao | Keyu Duan, Yiran Zhao, Zhili Feng, Jinjie Ni, Tianyu Pang, Qian Liu,
Tianle Cai, Longxu Dou, Kenji Kawaguchi, Anirudh Goyal, J. Zico Kolter,
Michael Qizhe Shieh | Unnatural Languages Are Not Bugs but Features for LLMs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have been observed to process non-human-readable
text sequences, such as jailbreak prompts, often viewed as a bug for aligned
LLMs. In this work, we present a systematic investigation challenging this
perception, demonstrating that unnatural languages - strings that appear
incomprehensible to humans but maintain semantic meanings for LLMs - contain
latent features usable by models. Notably, unnatural languages possess latent
features that can be generalized across different models and tasks during
inference. Furthermore, models fine-tuned on unnatural versions of instruction
datasets perform on par with those trained on natural language, achieving win
rates of 49.71 in Length-controlled AlpacaEval 2.0 on average across various base
models. In addition, through comprehensive analysis, we demonstrate that LLMs
process unnatural languages by filtering noise and inferring contextual meaning
from filtered words.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 12:10:17 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Duan",
"Keyu",
""
],
[
"Zhao",
"Yiran",
""
],
[
"Feng",
"Zhili",
""
],
[
"Ni",
"Jinjie",
""
],
[
"Pang",
"Tianyu",
""
],
[
"Liu",
"Qian",
""
],
[
"Cai",
"Tianle",
""
],
[
"Dou",
"Longxu",
""
],
[
"Kawaguchi",
"Kenji",
""
],
[
"Goyal",
"Anirudh",
""
],
[
"Kolter",
"J. Zico",
""
],
[
"Shieh",
"Michael Qizhe",
""
]
]
| TITLE: Unnatural Languages Are Not Bugs but Features for LLMs
ABSTRACT: Large Language Models (LLMs) have been observed to process non-human-readable
text sequences, such as jailbreak prompts, often viewed as a bug for aligned
LLMs. In this work, we present a systematic investigation challenging this
perception, demonstrating that unnatural languages - strings that appear
incomprehensible to humans but maintain semantic meanings for LLMs - contain
latent features usable by models. Notably, unnatural languages possess latent
features that can be generalized across different models and tasks during
inference. Furthermore, models fine-tuned on unnatural versions of instruction
datasets perform on par with those trained on natural language, achieving win
rates of 49.71 in Length-controlled AlpacaEval 2.0 on average across various base
models. In addition, through comprehensive analysis, we demonstrate that LLMs
process unnatural languages by filtering noise and inferring contextual meaning
from filtered words.
| no_new_dataset | 0.944536 |
2503.01927 | Kangyu Zheng | Kangyu Zheng, Tianfan Fu, Zhiding Liang | QCS-ADME: Quantum Circuit Search for Drug Property Prediction with
Imbalanced Data and Regression Adaptation | null | null | null | null | quant-ph cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The biomedical field is beginning to explore the use of quantum machine
learning (QML) for tasks traditionally handled by classical machine learning,
especially in predicting ADME (absorption, distribution, metabolism, and
excretion) properties, which are essential in drug evaluation. However, ADME
tasks pose unique challenges for existing quantum computing systems (QCS)
frameworks, as they involve both classification with unbalanced dataset and
regression problems. These dual requirements make it necessary to adapt and
refine current QCS frameworks to effectively address the complexities of ADME
predictions. We propose a novel training-free scoring mechanism to evaluate QML
circuit performance on imbalanced classification and regression tasks. Our
mechanism demonstrates significant correlation between scoring metrics and test
performance on imbalanced classification tasks. Additionally, we develop
methods to quantify continuous similarity relationships between quantum states,
enabling performance prediction for regression tasks. This represents the first
comprehensive approach to searching and evaluating QCS circuits specifically
for regression applications. Validation on representative ADME tasks-one
imbalanced classification and one regression-demonstrates moderate positive
correlation between our scoring metrics and circuit performance, significantly
outperforming baseline scoring methods that show negligible correlation.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 19:29:04 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zheng",
"Kangyu",
""
],
[
"Fu",
"Tianfan",
""
],
[
"Liang",
"Zhiding",
""
]
]
| TITLE: QCS-ADME: Quantum Circuit Search for Drug Property Prediction with
Imbalanced Data and Regression Adaptation
ABSTRACT: The biomedical field is beginning to explore the use of quantum machine
learning (QML) for tasks traditionally handled by classical machine learning,
especially in predicting ADME (absorption, distribution, metabolism, and
excretion) properties, which are essential in drug evaluation. However, ADME
tasks pose unique challenges for existing quantum computing systems (QCS)
frameworks, as they involve both classification with unbalanced dataset and
regression problems. These dual requirements make it necessary to adapt and
refine current QCS frameworks to effectively address the complexities of ADME
predictions. We propose a novel training-free scoring mechanism to evaluate QML
circuit performance on imbalanced classification and regression tasks. Our
mechanism demonstrates significant correlation between scoring metrics and test
performance on imbalanced classification tasks. Additionally, we develop
methods to quantify continuous similarity relationships between quantum states,
enabling performance prediction for regression tasks. This represents the first
comprehensive approach to searching and evaluating QCS circuits specifically
for regression applications. Validation on representative ADME tasks-one
imbalanced classification and one regression-demonstrates moderate positive
correlation between our scoring metrics and circuit performance, significantly
outperforming baseline scoring methods that show negligible correlation.
| no_new_dataset | 0.946101 |
2503.01935 | Kunlun Zhu | Kunlun Zhu, Hongyi Du, Zhaochen Hong, Xiaocheng Yang, Shuyi Guo, Zhe
Wang, Zhenhailong Wang, Cheng Qian, Xiangru Tang, Heng Ji, Jiaxuan You | MultiAgentBench: Evaluating the Collaboration and Competition of LLM
agents | https://github.com/MultiagentBench/MARBLE | null | null | null | cs.MA cs.AI cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have shown remarkable capabilities as autonomous
agents, yet existing benchmarks either focus on single-agent tasks or are
confined to narrow domains, failing to capture the dynamics of multi-agent
coordination and competition. In this paper, we introduce MultiAgentBench, a
comprehensive benchmark designed to evaluate LLM-based multi-agent systems
across diverse, interactive scenarios. Our framework measures not only task
completion but also the quality of collaboration and competition using novel,
milestone-based key performance indicators. Moreover, we evaluate various
coordination protocols (including star, chain, tree, and graph topologies) and
innovative strategies such as group discussion and cognitive planning. Notably,
gpt-4o-mini reaches the highest average task score, graph structure performs
the best among coordination protocols in the research scenario, and cognitive
planning improves milestone achievement rates by 3%. Code and datasets are
publicly available at https://github.com/MultiagentBench/MARBLE.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 05:18:50 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhu",
"Kunlun",
""
],
[
"Du",
"Hongyi",
""
],
[
"Hong",
"Zhaochen",
""
],
[
"Yang",
"Xiaocheng",
""
],
[
"Guo",
"Shuyi",
""
],
[
"Wang",
"Zhe",
""
],
[
"Wang",
"Zhenhailong",
""
],
[
"Qian",
"Cheng",
""
],
[
"Tang",
"Xiangru",
""
],
[
"Ji",
"Heng",
""
],
[
"You",
"Jiaxuan",
""
]
]
| TITLE: MultiAgentBench: Evaluating the Collaboration and Competition of LLM
agents
ABSTRACT: Large Language Models (LLMs) have shown remarkable capabilities as autonomous
agents, yet existing benchmarks either focus on single-agent tasks or are
confined to narrow domains, failing to capture the dynamics of multi-agent
coordination and competition. In this paper, we introduce MultiAgentBench, a
comprehensive benchmark designed to evaluate LLM-based multi-agent systems
across diverse, interactive scenarios. Our framework measures not only task
completion but also the quality of collaboration and competition using novel,
milestone-based key performance indicators. Moreover, we evaluate various
coordination protocols (including star, chain, tree, and graph topologies) and
innovative strategies such as group discussion and cognitive planning. Notably,
gpt-4o-mini reaches the highest average task score, graph structure performs
the best among coordination protocols in the research scenario, and cognitive
planning improves milestone achievement rates by 3%. Code and datasets are
publicly available at https://github.com/MultiagentBench/MARBLE.
| new_dataset | 0.910067 |
2503.01937 | G. Charbel KINDJI | G. Charbel N. Kindji (IRISA, LACODAM), Elisa Fromont (IRISA, LACODAM),
Lina Maria Rojas-Barahona, Tanguy Urvoy | Synthetic Tabular Data Detection In the Wild | International Symposium on Intelligent Data Analysis, May 2025,
Konstanz, Germany | null | null | null | cs.LG cs.AI cs.DB cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting synthetic tabular data is essential to prevent the distribution of
false or manipulated datasets that could compromise data-driven
decision-making. This study explores whether synthetic tabular data can be
reliably identified across different tables. This challenge is unique to
tabular data, where structures (such as number of columns, data types, and
formats) can vary widely from one table to another. We propose four
table-agnostic detectors combined with simple preprocessing schemes that we
evaluate on six evaluation protocols, with different levels of ''wildness''.
Our results show that cross-table learning on a restricted set of tables is
possible even with naive preprocessing schemes. They confirm however that
cross-table transfer (i.e. deployment on a table that has not been seen before)
is challenging. This suggests that sophisticated encoding schemes are required
to handle this problem.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 07:53:16 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kindji",
"G. Charbel N.",
"",
"IRISA, LACODAM"
],
[
"Fromont",
"Elisa",
"",
"IRISA, LACODAM"
],
[
"Rojas-Barahona",
"Lina Maria",
""
],
[
"Urvoy",
"Tanguy",
""
]
]
| TITLE: Synthetic Tabular Data Detection In the Wild
ABSTRACT: Detecting synthetic tabular data is essential to prevent the distribution of
false or manipulated datasets that could compromise data-driven
decision-making. This study explores whether synthetic tabular data can be
reliably identified across different tables. This challenge is unique to
tabular data, where structures (such as number of columns, data types, and
formats) can vary widely from one table to another. We propose four
table-agnostic detectors combined with simple preprocessing schemes that we
evaluate on six evaluation protocols, with different levels of ''wildness''.
Our results show that cross-table learning on a restricted set of tables is
possible even with naive preprocessing schemes. They confirm however that
cross-table transfer (i.e. deployment on a table that has not been seen before)
is challenging. This suggests that sophisticated encoding schemes are required
to handle this problem.
| no_new_dataset | 0.945096 |
2503.01938 | Tianrui Liu | Jun-Jie Huang, Tianrui Liu, Zihan Chen, Xinwang Liu, Meng Wang, and
Pier Luigi Dragotti | A Lightweight Deep Exclusion Unfolding Network for Single Image
Reflection Removal | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single Image Reflection Removal (SIRR) is a canonical blind source separation
problem and refers to the issue of separating a reflection-contaminated image
into a transmission and a reflection image. The core challenge lies in
minimizing the commonalities among different sources. Existing deep learning
approaches either neglect the significance of feature interactions or rely on
heuristically designed architectures. In this paper, we propose a novel Deep
Exclusion unfolding Network (DExNet), a lightweight, interpretable, and
effective network architecture for SIRR. DExNet is principally constructed by
unfolding and parameterizing a simple iterative Sparse and Auxiliary Feature
Update (i-SAFU) algorithm, which is specifically designed to solve a new
model-based SIRR optimization formulation incorporating a general exclusion
prior. This general exclusion prior enables the unfolded SAFU module to
inherently identify and penalize commonalities between the transmission and
reflection features, ensuring more accurate separation. The principled design
of DExNet not only enhances its interpretability but also significantly
improves its performance. Comprehensive experiments on four benchmark datasets
demonstrate that DExNet achieves state-of-the-art visual and quantitative
results while utilizing only approximately 8% of the parameters required by
leading methods.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 07:54:27 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Jun-Jie",
""
],
[
"Liu",
"Tianrui",
""
],
[
"Chen",
"Zihan",
""
],
[
"Liu",
"Xinwang",
""
],
[
"Wang",
"Meng",
""
],
[
"Dragotti",
"Pier Luigi",
""
]
]
| TITLE: A Lightweight Deep Exclusion Unfolding Network for Single Image
Reflection Removal
ABSTRACT: Single Image Reflection Removal (SIRR) is a canonical blind source separation
problem and refers to the issue of separating a reflection-contaminated image
into a transmission and a reflection image. The core challenge lies in
minimizing the commonalities among different sources. Existing deep learning
approaches either neglect the significance of feature interactions or rely on
heuristically designed architectures. In this paper, we propose a novel Deep
Exclusion unfolding Network (DExNet), a lightweight, interpretable, and
effective network architecture for SIRR. DExNet is principally constructed by
unfolding and parameterizing a simple iterative Sparse and Auxiliary Feature
Update (i-SAFU) algorithm, which is specifically designed to solve a new
model-based SIRR optimization formulation incorporating a general exclusion
prior. This general exclusion prior enables the unfolded SAFU module to
inherently identify and penalize commonalities between the transmission and
reflection features, ensuring more accurate separation. The principled design
of DExNet not only enhances its interpretability but also significantly
improves its performance. Comprehensive experiments on four benchmark datasets
demonstrate that DExNet achieves state-of-the-art visual and quantitative
results while utilizing only approximately 8% of the parameters required by
leading methods.
| no_new_dataset | 0.9455 |
2503.01940 | Xuan Zhang | Xuan Zhang, Yongliang Shen, Zhe Zheng, Linjuan Wu, Wenqi Zhang, Yuchen
Yan, Qiuying Peng, Jun Wang, Weiming Lu | AskToAct: Enhancing LLMs Tool Use via Self-Correcting Clarification | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated remarkable capabilities in
tool learning. In real-world scenarios, user queries are often ambiguous and
incomplete, requiring effective clarification. However, existing interactive
clarification approaches face two critical limitations: reliance on manually
constructed datasets and lack of error correction mechanisms during multi-turn
clarification. We present AskToAct, which addresses these challenges by
exploiting the structural mapping between queries and their tool invocation
solutions. Our key insight is that tool parameters naturally represent explicit
user intents. By systematically removing key parameters from queries while
retaining them as ground truth, we enable automated construction of
high-quality training data. We further enhance model robustness by fine-tuning
on error-correction augmented data using a selective masking mechanism, enabling
dynamic error detection during clarification interactions. Comprehensive
experiments demonstrate that AskToAct significantly outperforms existing
approaches, achieving above 79% accuracy in recovering critical unspecified
intents and enhancing clarification efficiency by an average of 48.34% while
maintaining high accuracy in tool invocation. Our framework exhibits robust
performance across varying complexity levels and successfully generalizes to
entirely unseen APIs without additional training, achieving performance
comparable to GPT-4 with substantially fewer computational resources.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 12:55:49 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhang",
"Xuan",
""
],
[
"Shen",
"Yongliang",
""
],
[
"Zheng",
"Zhe",
""
],
[
"Wu",
"Linjuan",
""
],
[
"Zhang",
"Wenqi",
""
],
[
"Yan",
"Yuchen",
""
],
[
"Peng",
"Qiuying",
""
],
[
"Wang",
"Jun",
""
],
[
"Lu",
"Weiming",
""
]
]
| TITLE: AskToAct: Enhancing LLMs Tool Use via Self-Correcting Clarification
ABSTRACT: Large language models (LLMs) have demonstrated remarkable capabilities in
tool learning. In real-world scenarios, user queries are often ambiguous and
incomplete, requiring effective clarification. However, existing interactive
clarification approaches face two critical limitations: reliance on manually
constructed datasets and lack of error correction mechanisms during multi-turn
clarification. We present AskToAct, which addresses these challenges by
exploiting the structural mapping between queries and their tool invocation
solutions. Our key insight is that tool parameters naturally represent explicit
user intents. By systematically removing key parameters from queries while
retaining them as ground truth, we enable automated construction of
high-quality training data. We further enhance model robustness by fine-tuning
on error-correction augmented data using a selective masking mechanism, enabling
dynamic error detection during clarification interactions. Comprehensive
experiments demonstrate that AskToAct significantly outperforms existing
approaches, achieving above 79% accuracy in recovering critical unspecified
intents and enhancing clarification efficiency by an average of 48.34% while
maintaining high accuracy in tool invocation. Our framework exhibits robust
performance across varying complexity levels and successfully generalizes to
entirely unseen APIs without additional training, achieving performance
comparable to GPT-4 with substantially fewer computational resources.
| no_new_dataset | 0.944434 |
2503.01986 | Davis Brown | Davis Brown, Prithvi Balehannina, Helen Jin, Shreya Havaldar, Hamed
Hassani, Eric Wong | Adaptively evaluating models with task elicitation | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Manual curation of evaluation datasets is struggling to keep up with the
rapidly expanding capabilities and deployment scenarios of language models.
Towards scalable model profiling, we introduce and validate a framework for
evaluating LLMs, called Adaptive Evaluations. Adaptive evaluations use
scaffolded language models (evaluator agents) to search through a target
model's behavior on a domain dataset and create difficult questions (tasks)
that can discover and probe the model's failure modes. We find that frontier
models lack consistency when adaptively probed with our framework on a diverse
suite of datasets and tasks, including but not limited to legal reasoning,
forecasting, and online harassment. Generated questions pass human validity
checks and often transfer to other models with different capability profiles,
demonstrating that adaptive evaluations can also be used to create difficult
domain-specific datasets.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 19:04:10 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Brown",
"Davis",
""
],
[
"Balehannina",
"Prithvi",
""
],
[
"Jin",
"Helen",
""
],
[
"Havaldar",
"Shreya",
""
],
[
"Hassani",
"Hamed",
""
],
[
"Wong",
"Eric",
""
]
]
| TITLE: Adaptively evaluating models with task elicitation
ABSTRACT: Manual curation of evaluation datasets is struggling to keep up with the
rapidly expanding capabilities and deployment scenarios of language models.
Towards scalable model profiling, we introduce and validate a framework for
evaluating LLMs, called Adaptive Evaluations. Adaptive evaluations use
scaffolded language models (evaluator agents) to search through a target
model's behavior on a domain dataset and create difficult questions (tasks)
that can discover and probe the model's failure modes. We find that frontier
models lack consistency when adaptively probed with our framework on a diverse
suite of datasets and tasks, including but not limited to legal reasoning,
forecasting, and online harassment. Generated questions pass human validity
checks and often transfer to other models with different capability profiles,
demonstrating that adaptive evaluations can also be used to create difficult
domain-specific datasets.
| no_new_dataset | 0.941601 |
2503.01999 | Ata Tuna | Ata Tuna | A Deep Autoregressive Model for Dynamic Combinatorial Complexes | 66 pages, 12 figures. Submitted in partial fulfillment of the
requirements for the MRes degree in Artificial Intelligence and Machine
Learning of Imperial College London | null | null | null | cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | We introduce DAMCC (Deep Autoregressive Model for Dynamic Combinatorial
Complexes), the first deep learning model designed to generate dynamic
combinatorial complexes (CCs). Unlike traditional graph-based models, CCs
capture higher-order interactions, making them ideal for representing social
networks, biological systems, and evolving infrastructures. While existing
models primarily focus on static graphs, DAMCC addresses the challenge of
modeling temporal dynamics and higher-order structures in dynamic networks.
DAMCC employs an autoregressive framework to predict the evolution of CCs over
time. Through comprehensive experiments on real-world and synthetic datasets,
we demonstrate its ability to capture both temporal and higher-order
dependencies. As the first model of its kind, DAMCC lays the foundation for
future advancements in dynamic combinatorial complex modeling, with
opportunities for improved scalability and efficiency on larger networks.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 19:15:40 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Tuna",
"Ata",
""
]
]
| TITLE: A Deep Autoregressive Model for Dynamic Combinatorial Complexes
ABSTRACT: We introduce DAMCC (Deep Autoregressive Model for Dynamic Combinatorial
Complexes), the first deep learning model designed to generate dynamic
combinatorial complexes (CCs). Unlike traditional graph-based models, CCs
capture higher-order interactions, making them ideal for representing social
networks, biological systems, and evolving infrastructures. While existing
models primarily focus on static graphs, DAMCC addresses the challenge of
modeling temporal dynamics and higher-order structures in dynamic networks.
DAMCC employs an autoregressive framework to predict the evolution of CCs over
time. Through comprehensive experiments on real-world and synthetic datasets,
we demonstrate its ability to capture both temporal and higher-order
dependencies. As the first model of its kind, DAMCC lays the foundation for
future advancements in dynamic combinatorial complex modeling, with
opportunities for improved scalability and efficiency on larger networks.
| no_new_dataset | 0.951504 |
2503.02007 | Faraz Faruqi | Faraz Faruqi, Maxine Perroni-Scharf, Jaskaran Singh Walia, Yunyi Zhu,
Shuyue Feng, Donald Degraen, Stefanie Mueller | TactStyle: Generating Tactile Textures with Generative AI for Digital
Fabrication | null | null | 10.1145/3706598.3713740 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent work in Generative AI enables the stylization of 3D models based on
image prompts. However, these methods do not incorporate tactile information,
leading to designs that lack the expected tactile properties. We present
TactStyle, a system that allows creators to stylize 3D models with images while
incorporating the expected tactile properties. TactStyle accomplishes this
using a modified image-generation model fine-tuned to generate heightfields for
given surface textures. By optimizing 3D model surfaces to embody a generated
texture, TactStyle creates models that match the desired style and replicate
the tactile experience. We utilize a large-scale dataset of textures to train
our texture generation model. In a psychophysical experiment, we evaluate the
tactile qualities of a set of 3D-printed original textures and TactStyle's
generated textures. Our results show that TactStyle successfully generates a
wide range of tactile features from a single image input, enabling a novel
approach to haptic design.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 19:29:27 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Faruqi",
"Faraz",
""
],
[
"Perroni-Scharf",
"Maxine",
""
],
[
"Walia",
"Jaskaran Singh",
""
],
[
"Zhu",
"Yunyi",
""
],
[
"Feng",
"Shuyue",
""
],
[
"Degraen",
"Donald",
""
],
[
"Mueller",
"Stefanie",
""
]
]
| TITLE: TactStyle: Generating Tactile Textures with Generative AI for Digital
Fabrication
ABSTRACT: Recent work in Generative AI enables the stylization of 3D models based on
image prompts. However, these methods do not incorporate tactile information,
leading to designs that lack the expected tactile properties. We present
TactStyle, a system that allows creators to stylize 3D models with images while
incorporating the expected tactile properties. TactStyle accomplishes this
using a modified image-generation model fine-tuned to generate heightfields for
given surface textures. By optimizing 3D model surfaces to embody a generated
texture, TactStyle creates models that match the desired style and replicate
the tactile experience. We utilize a large-scale dataset of textures to train
our texture generation model. In a psychophysical experiment, we evaluate the
tactile qualities of a set of 3D-printed original textures and TactStyle's
generated textures. Our results show that TactStyle successfully generates a
wide range of tactile features from a single image input, enabling a novel
approach to haptic design.
| no_new_dataset | 0.94801 |
2503.02011 | Tung L Nguyen | Tung L Nguyen, Toby Dylan Hocking | Interval Regression: A Comparative Study with Proposed Models | 13 pages, 4 figures | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Regression models are essential for a wide range of real-world applications.
However, in practice, target values are not always precisely known; instead,
they may be represented as intervals of acceptable values. This challenge has
led to the development of Interval Regression models. In this study, we provide
a comprehensive review of existing Interval Regression models and introduce
alternative models for comparative analysis. Experiments are conducted on both
real-world and synthetic datasets to offer a broad perspective on model
performance. The results demonstrate that no single model is universally
optimal, highlighting the importance of selecting the most suitable model for
each specific scenario.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 19:39:02 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Nguyen",
"Tung L",
""
],
[
"Hocking",
"Toby Dylan",
""
]
]
| TITLE: Interval Regression: A Comparative Study with Proposed Models
ABSTRACT: Regression models are essential for a wide range of real-world applications.
However, in practice, target values are not always precisely known; instead,
they may be represented as intervals of acceptable values. This challenge has
led to the development of Interval Regression models. In this study, we provide
a comprehensive review of existing Interval Regression models and introduce
alternative models for comparative analysis. Experiments are conducted on both
real-world and synthetic datasets to offer a broad perspective on model
performance. The results demonstrate that no single model is universally
optimal, highlighting the importance of selecting the most suitable model for
each specific scenario.
| no_new_dataset | 0.950134 |
2503.02032 | Ananya Jana | Aniruddha Maiti, Samuel Adewumi, Temesgen Alemayehu Tikure, Zichun
Wang, Niladri Sengupta, Anastasiia Sukhanova, Ananya Jana | Comparative Analysis of OpenAI GPT-4o and DeepSeek R1 for Scientific
Text Categorization Using Prompt Engineering | Accepted to ASEE North Central Section 2025 | null | null | null | cs.CL cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | This study examines how large language models categorize sentences from
scientific papers using prompt engineering. We use two advanced web-based
models, GPT-4o (by OpenAI) and DeepSeek R1, to classify sentences into
predefined relationship categories. DeepSeek R1 has been tested on benchmark
datasets in its technical report. However, its performance in scientific text
categorization remains unexplored. To address this gap, we introduce a new
evaluation method designed specifically for this task. We also compile a
dataset of cleaned scientific papers from diverse domains. This dataset
provides a platform for comparing the two models. Using this dataset, we
analyze their effectiveness and consistency in categorization.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 20:09:35 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Maiti",
"Aniruddha",
""
],
[
"Adewumi",
"Samuel",
""
],
[
"Tikure",
"Temesgen Alemayehu",
""
],
[
"Wang",
"Zichun",
""
],
[
"Sengupta",
"Niladri",
""
],
[
"Sukhanova",
"Anastasiia",
""
],
[
"Jana",
"Ananya",
""
]
]
| TITLE: Comparative Analysis of OpenAI GPT-4o and DeepSeek R1 for Scientific
Text Categorization Using Prompt Engineering
ABSTRACT: This study examines how large language models categorize sentences from
scientific papers using prompt engineering. We use two advanced web-based
models, GPT-4o (by OpenAI) and DeepSeek R1, to classify sentences into
predefined relationship categories. DeepSeek R1 has been tested on benchmark
datasets in its technical report. However, its performance in scientific text
categorization remains unexplored. To address this gap, we introduce a new
evaluation method designed specifically for this task. We also compile a
dataset of cleaned scientific papers from diverse domains. This dataset
provides a platform for comparing the two models. Using this dataset, we
analyze their effectiveness and consistency in categorization.
| new_dataset | 0.953405 |
2503.02036 | Ruth Crasto | Ruth Crasto | Robustness to Geographic Distribution Shift using Location Encoders | Accepted to ICLR 2025 Machine Learning for Remote Sensing (ML4RS)
Workshop | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geographic distribution shift arises when the distribution of locations on
Earth in a training dataset is different from what is seen at test time. The
most common approaches to tackling geographic distribution shift treat regions
delimited by administrative boundaries such as countries or continents as
separate domains and apply standard domain adaptation methods, ignoring
geographic coordinates that are often available as metadata. This paper
proposes the use of location encoders for training models that are more robust
to geographic distribution shift. We show how both simple sine-cosine encoders
and pre-trained location encoders can be used to improve standard domain
adaptation methods for the special case of geographic distribution shift. Our
proposed methods achieve state-of-the-art results on geo-tagged imagery
datasets from the WILDS benchmark.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 20:24:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Crasto",
"Ruth",
""
]
]
| TITLE: Robustness to Geographic Distribution Shift using Location Encoders
ABSTRACT: Geographic distribution shift arises when the distribution of locations on
Earth in a training dataset is different from what is seen at test time. The
most common approaches to tackling geographic distribution shift treat regions
delimited by administrative boundaries such as countries or continents as
separate domains and apply standard domain adaptation methods, ignoring
geographic coordinates that are often available as metadata. This paper
proposes the use of location encoders for training models that are more robust
to geographic distribution shift. We show how both simple sine-cosine encoders
and pre-trained location encoders can be used to improve standard domain
adaptation methods for the special case of geographic distribution shift. Our
proposed methods achieve state-of-the-art results on geo-tagged imagery
datasets from the WILDS benchmark.
| no_new_dataset | 0.954265 |
2503.02038 | Angana Borah | Angana Borah, Rada Mihalcea, Ver\'onica P\'erez-Rosas | Persuasion at Play: Understanding Misinformation Dynamics in
Demographic-Aware Human-LLM Interactions | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Existing challenges in misinformation exposure and susceptibility vary across
demographic groups, as some populations are more vulnerable to misinformation
than others. Large language models (LLMs) introduce new dimensions to these
challenges through their ability to generate persuasive content at scale and
reinforcing existing biases. This study investigates the bidirectional
persuasion dynamics between LLMs and humans when exposed to misinformative
content. We analyze human-to-LLM influence using human-stance datasets and
assess LLM-to-human influence by generating LLM-based persuasive arguments.
Additionally, we use a multi-agent LLM framework to analyze the spread of
misinformation under persuasion among demographic-oriented LLM agents. Our
findings show that demographic factors influence susceptibility to
misinformation in LLMs, closely reflecting the demographic-based patterns seen
in human susceptibility. We also find that, similar to human demographic
groups, multi-agent LLMs exhibit echo chamber behavior. This research explores
the interplay between humans and LLMs, highlighting demographic differences in
the context of misinformation and offering insights for future interventions.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 20:30:22 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Borah",
"Angana",
""
],
[
"Mihalcea",
"Rada",
""
],
[
"Pérez-Rosas",
"Verónica",
""
]
]
| TITLE: Persuasion at Play: Understanding Misinformation Dynamics in
Demographic-Aware Human-LLM Interactions
ABSTRACT: Existing challenges in misinformation exposure and susceptibility vary across
demographic groups, as some populations are more vulnerable to misinformation
than others. Large language models (LLMs) introduce new dimensions to these
challenges through their ability to generate persuasive content at scale and
reinforcing existing biases. This study investigates the bidirectional
persuasion dynamics between LLMs and humans when exposed to misinformative
content. We analyze human-to-LLM influence using human-stance datasets and
assess LLM-to-human influence by generating LLM-based persuasive arguments.
Additionally, we use a multi-agent LLM framework to analyze the spread of
misinformation under persuasion among demographic-oriented LLM agents. Our
findings show that demographic factors influence susceptibility to
misinformation in LLMs, closely reflecting the demographic-based patterns seen
in human susceptibility. We also find that, similar to human demographic
groups, multi-agent LLMs exhibit echo chamber behavior. This research explores
the interplay between humans and LLMs, highlighting demographic differences in
the context of misinformation and offering insights for future interventions.
| no_new_dataset | 0.942771 |
2503.02053 | Zaifu Zhan | Zaifu Zhan, Shuang Zhou, Huixue Zhou, Zirui Liu and Rui Zhang | EPEE: Towards Efficient and Effective Foundation Models in Biomedicine | Submitted to npj Digital Medicine | null | null | null | cs.AI cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models, including language models, e.g., GPT, and vision models,
e.g., CLIP, have significantly advanced numerous biomedical tasks. Despite
these advancements, the high inference latency and the "overthinking" issues in
model inference impair the efficiency and effectiveness of foundation models,
thus limiting their application in real-time clinical settings. To address
these challenges, we proposed EPEE (Entropy- and Patience-based Early Exiting),
a novel hybrid strategy designed to improve the inference efficiency of
foundation models. The core idea was to leverage the strengths of entropy-based
and patience-based early exiting methods to overcome their respective
weaknesses. To evaluate EPEE, we conducted experiments on three core biomedical
tasks-classification, relation extraction, and event extraction-using four
foundation models (BERT, ALBERT, GPT-2, and ViT) across twelve datasets,
including clinical notes and medical images. The results showed that EPEE
significantly reduced inference time while maintaining or improving accuracy,
demonstrating its adaptability to diverse datasets and tasks. EPEE addressed
critical barriers to deploying foundation models in healthcare by balancing
efficiency and effectiveness. It potentially provided a practical solution for
real-time clinical decision-making with foundation models, supporting reliable
and efficient workflows.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 21:11:13 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhan",
"Zaifu",
""
],
[
"Zhou",
"Shuang",
""
],
[
"Zhou",
"Huixue",
""
],
[
"Liu",
"Zirui",
""
],
[
"Zhang",
"Rui",
""
]
]
| TITLE: EPEE: Towards Efficient and Effective Foundation Models in Biomedicine
ABSTRACT: Foundation models, including language models, e.g., GPT, and vision models,
e.g., CLIP, have significantly advanced numerous biomedical tasks. Despite
these advancements, the high inference latency and the "overthinking" issues in
model inference impair the efficiency and effectiveness of foundation models,
thus limiting their application in real-time clinical settings. To address
these challenges, we proposed EPEE (Entropy- and Patience-based Early Exiting),
a novel hybrid strategy designed to improve the inference efficiency of
foundation models. The core idea was to leverage the strengths of entropy-based
and patience-based early exiting methods to overcome their respective
weaknesses. To evaluate EPEE, we conducted experiments on three core biomedical
tasks-classification, relation extraction, and event extraction-using four
foundation models (BERT, ALBERT, GPT-2, and ViT) across twelve datasets,
including clinical notes and medical images. The results showed that EPEE
significantly reduced inference time while maintaining or improving accuracy,
demonstrating its adaptability to diverse datasets and tasks. EPEE addressed
critical barriers to deploying foundation models in healthcare by balancing
efficiency and effectiveness. It potentially provided a practical solution for
real-time clinical decision-making with foundation models, supporting reliable
and efficient workflows.
| no_new_dataset | 0.946547 |