id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt |
string | string | string | string | string | string | string | string | string | string | string | list | timestamp[s] | sequence | string |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
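Each row below is one record with the 15 fields named in the header; `versions` and `authors_parsed` are nested structures rather than flat strings. The following is a minimal sketch of how such a record could be read and unpacked, assuming the dump is stored as JSON Lines; the file name is hypothetical and only the Python standard library is used:

```python
import json

# Minimal sketch for reading a dump with the schema above. Assumptions: the
# records are stored as JSON Lines, and the file name "arxiv_metadata.jsonl"
# is hypothetical.
def iter_records(path):
    """Yield one metadata record (a dict) per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

for rec in iter_records("arxiv_metadata.jsonl"):
    # "versions" is a list of {"version", "created"} dicts; the last entry is
    # the most recent revision.
    latest = rec["versions"][-1]["created"] if rec["versions"] else None
    # "authors_parsed" is a sequence of [last, first, suffix] triples.
    authors = "; ".join(f"{first} {last}".strip() for last, first, *_ in rec["authors_parsed"])
    print(rec["id"], "|", rec["title"].replace("\n", " "), "|", authors, "|", latest)
    break  # inspect only the first record
```

Iterating lazily like this keeps memory usage flat even for dumps with millions of records.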
2305.06967 | Camilla Quaresmini | Camilla Quaresmini, Giuseppe Primiero | Data quality dimensions for fair AI | null | null | null | null | cs.AI cs.LO | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI) systems are not intrinsically neutral and biases
trickle into any type of technological tool. In particular, when dealing with
people, the impact of AI algorithms' technical errors originating with
mislabeled data is undeniable. As they feed wrong and discriminatory
classifications, these systems are not systematically guarded against bias. In
this article we consider the problem of bias in AI systems from the point of
view of data quality dimensions. We highlight the limited model construction of
bias mitigation tools based on accuracy strategy, illustrating potential
improvements of a specific tool in gender classification errors occurring in
two typically difficult contexts: the classification of non-binary individuals,
for which the label set becomes incomplete with respect to the dataset; and the
classification of transgender individuals, for which the dataset becomes
inconsistent with respect to the label set. Using formal methods for reasoning
about the behavior of the classification system in presence of a changing
world, we propose to reconsider the fairness of the classification task in
terms of completeness, consistency, timeliness and reliability, and offer some
theoretical results.
| [
{
"version": "v1",
"created": "Thu, 11 May 2023 16:48:58 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2024 16:54:03 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Quaresmini",
"Camilla",
""
],
[
"Primiero",
"Giuseppe",
""
]
] | TITLE: Data quality dimensions for fair AI
ABSTRACT: Artificial Intelligence (AI) systems are not intrinsically neutral and biases
trickle into any type of technological tool. In particular, when dealing with
people, the impact of AI algorithms' technical errors originating with
mislabeled data is undeniable. As they feed wrong and discriminatory
classifications, these systems are not systematically guarded against bias. In
this article we consider the problem of bias in AI systems from the point of
view of data quality dimensions. We highlight the limited model construction of
bias mitigation tools based on accuracy strategy, illustrating potential
improvements of a specific tool in gender classification errors occurring in
two typically difficult contexts: the classification of non-binary individuals,
for which the label set becomes incomplete with respect to the dataset; and the
classification of transgender individuals, for which the dataset becomes
inconsistent with respect to the label set. Using formal methods for reasoning
about the behavior of the classification system in presence of a changing
world, we propose to reconsider the fairness of the classification task in
terms of completeness, consistency, timeliness and reliability, and offer some
theoretical results.
|
2305.14341 | Yue Guo | Yue Guo, Tal August, Gondy Leroy, Trevor Cohen, Lucy Lu Wang | APPLS: Evaluating Evaluation Metrics for Plain Language Summarization | This paper has been accepted by 2024 EMNLP main. Please cite the
EMNLP version | In Proceedings of the 2024 Conference on Empirical Methods in
Natural Language Processing | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | While there has been significant development of models for Plain Language
Summarization (PLS), evaluation remains a challenge. PLS lacks a dedicated
assessment metric, and the suitability of text generation evaluation metrics is
unclear due to the unique transformations involved (e.g., adding background
explanations, removing jargon). To address these questions, our study
introduces a granular meta-evaluation testbed, APPLS, designed to evaluate
metrics for PLS. We identify four PLS criteria from previous work --
informativeness, simplification, coherence, and faithfulness -- and define a
set of perturbations corresponding to these criteria that sensitive metrics
should be able to detect. We apply these perturbations to extractive hypotheses
for two PLS datasets to form our testbed. Using APPLS, we assess performance of
14 metrics, including automated scores, lexical features, and LLM prompt-based
evaluations. Our analysis reveals that while some current metrics show
sensitivity to specific criteria, no single method captures all four criteria
simultaneously. We therefore recommend a suite of automated metrics be used to
capture PLS quality along all relevant criteria. This work contributes the
first meta-evaluation testbed for PLS and a comprehensive evaluation of
existing metrics. APPLS and our evaluation code are available at
https://github.com/LinguisticAnomalies/APPLS.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 17:59:19 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Jan 2024 02:32:19 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Jul 2024 18:28:43 GMT"
},
{
"version": "v4",
"created": "Wed, 2 Apr 2025 04:03:37 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Guo",
"Yue",
""
],
[
"August",
"Tal",
""
],
[
"Leroy",
"Gondy",
""
],
[
"Cohen",
"Trevor",
""
],
[
"Wang",
"Lucy Lu",
""
]
] | TITLE: APPLS: Evaluating Evaluation Metrics for Plain Language Summarization
ABSTRACT: While there has been significant development of models for Plain Language
Summarization (PLS), evaluation remains a challenge. PLS lacks a dedicated
assessment metric, and the suitability of text generation evaluation metrics is
unclear due to the unique transformations involved (e.g., adding background
explanations, removing jargon). To address these questions, our study
introduces a granular meta-evaluation testbed, APPLS, designed to evaluate
metrics for PLS. We identify four PLS criteria from previous work --
informativeness, simplification, coherence, and faithfulness -- and define a
set of perturbations corresponding to these criteria that sensitive metrics
should be able to detect. We apply these perturbations to extractive hypotheses
for two PLS datasets to form our testbed. Using APPLS, we assess performance of
14 metrics, including automated scores, lexical features, and LLM prompt-based
evaluations. Our analysis reveals that while some current metrics show
sensitivity to specific criteria, no single method captures all four criteria
simultaneously. We therefore recommend a suite of automated metrics be used to
capture PLS quality along all relevant criteria. This work contributes the
first meta-evaluation testbed for PLS and a comprehensive evaluation of
existing metrics. APPLS and our evaluation code are available at
https://github.com/LinguisticAnomalies/APPLS.
|
2307.08716 | Hieu Le | Hieu Le, Jingyi Xu, Nicolas Talabot, Jiancheng Yang, Pascal Fua | Pairwise-Constrained Implicit Functions for 3D Human Heart Modelling | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate 3D models of the human heart require not only correct outer surfaces
but also realistic inner structures, such as the ventricles, atria, and
myocardial layers. Approaches relying on implicit surfaces, such as signed
distance functions (SDFs), are primarily designed for single watertight
surfaces, making them ill-suited for multi-layered anatomical structures. They
often produce gaps or overlaps in shared boundaries. Unsigned distance
functions (UDFs) can model non-watertight geometries but are harder to
optimize, while voxel-based methods are limited in resolution and struggle to
produce smooth, anatomically realistic surfaces. We introduce a
pairwise-constrained SDF approach that models the heart as a set of
interdependent SDFs, each representing a distinct anatomical component. By
enforcing proper contact between adjacent SDFs, we ensure that they form
anatomically correct shared walls, preserving the internal structure of the
heart and preventing overlaps, or unwanted gaps. Our method significantly
improves inner structure accuracy over single-SDF, UDF-based, voxel-based, and
segmentation-based reconstructions. We further demonstrate its generalizability
by applying it to a vertebrae dataset, preventing unwanted contact between
structures.
| [
{
"version": "v1",
"created": "Sun, 16 Jul 2023 10:07:15 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Oct 2024 13:56:08 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 08:23:55 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Le",
"Hieu",
""
],
[
"Xu",
"Jingyi",
""
],
[
"Talabot",
"Nicolas",
""
],
[
"Yang",
"Jiancheng",
""
],
[
"Fua",
"Pascal",
""
]
] | TITLE: Pairwise-Constrained Implicit Functions for 3D Human Heart Modelling
ABSTRACT: Accurate 3D models of the human heart require not only correct outer surfaces
but also realistic inner structures, such as the ventricles, atria, and
myocardial layers. Approaches relying on implicit surfaces, such as signed
distance functions (SDFs), are primarily designed for single watertight
surfaces, making them ill-suited for multi-layered anatomical structures. They
often produce gaps or overlaps in shared boundaries. Unsigned distance
functions (UDFs) can model non-watertight geometries but are harder to
optimize, while voxel-based methods are limited in resolution and struggle to
produce smooth, anatomically realistic surfaces. We introduce a
pairwise-constrained SDF approach that models the heart as a set of
interdependent SDFs, each representing a distinct anatomical component. By
enforcing proper contact between adjacent SDFs, we ensure that they form
anatomically correct shared walls, preserving the internal structure of the
heart and preventing overlaps, or unwanted gaps. Our method significantly
improves inner structure accuracy over single-SDF, UDF-based, voxel-based, and
segmentation-based reconstructions. We further demonstrate its generalizability
by applying it to a vertebrae dataset, preventing unwanted contact between
structures.
|
2310.10865 | Christina Chance | Christina Chance, Da Yin, Dakuo Wang, Kai-Wei Chang | Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender
Perturbation over Fairytale Texts | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper, we study whether language models are affected by learned
gender stereotypes during the comprehension of stories. Specifically, we
investigate how models respond to gender stereotype perturbations through
counterfactual data augmentation. Focusing on Question Answering (QA) tasks in
fairytales, we modify the FairytaleQA dataset by swapping gendered character
information and introducing counterfactual gender stereotypes during training.
This allows us to assess model robustness and examine whether learned biases
influence story comprehension. Our results show that models exhibit slight
performance drops when faced with gender perturbations in the test set,
indicating sensitivity to learned stereotypes. However, when fine-tuned on
counterfactual training data, models become more robust to anti-stereotypical
narratives. Additionally, we conduct a case study demonstrating how
incorporating counterfactual anti-stereotype examples can improve inclusivity
in downstream applications.
| [
{
"version": "v1",
"created": "Mon, 16 Oct 2023 22:25:09 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Nov 2023 21:32:28 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 18:17:49 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chance",
"Christina",
""
],
[
"Yin",
"Da",
""
],
[
"Wang",
"Dakuo",
""
],
[
"Chang",
"Kai-Wei",
""
]
] | TITLE: Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender
Perturbation over Fairytale Texts
ABSTRACT: In this paper, we study whether language models are affected by learned
gender stereotypes during the comprehension of stories. Specifically, we
investigate how models respond to gender stereotype perturbations through
counterfactual data augmentation. Focusing on Question Answering (QA) tasks in
fairytales, we modify the FairytaleQA dataset by swapping gendered character
information and introducing counterfactual gender stereotypes during training.
This allows us to assess model robustness and examine whether learned biases
influence story comprehension. Our results show that models exhibit slight
performance drops when faced with gender perturbations in the test set,
indicating sensitivity to learned stereotypes. However, when fine-tuned on
counterfactual training data, models become more robust to anti-stereotypical
narratives. Additionally, we conduct a case study demonstrating how
incorporating counterfactual anti-stereotype examples can improve inclusivity
in downstream applications.
|
2403.04443 | Yihua Fan | Yihua Fan, Yongzhen Wang, Mingqiang Wei, Fu Lee Wang, and Haoran Xie | FriendNet: Detection-Friendly Dehazing Network | We identified a fundamental flaw in the theoretical framework of this
submission, rendering the main argument unsound. To maintain academic rigor,
we request withdrawal and will submit a revised version after thorough
validation | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adverse weather conditions often impair the quality of captured images,
inevitably misleading cutting-edge object detection models for advanced driver
assistance systems (ADAS) and autonomous driving. In this paper, we raise an
intriguing question: can the combination of image restoration and object
detection enhance detection performance in adverse weather conditions? To
answer it, we propose an effective architecture that bridges image dehazing and
object detection together via guidance information and task-driven learning to
achieve detection-friendly dehazing, termed FriendNet. FriendNet aims to
deliver both high-quality perception and high detection capacity. Different
from existing efforts that intuitively treat image dehazing as pre-processing,
FriendNet establishes a positive correlation between these two tasks. Clean
features generated by the dehazing network potentially contribute to
improvements in object detection performance. Conversely, object detection
crucially guides the learning process of the image dehazing network under the
task-driven learning scheme. We shed light on how downstream tasks can guide
upstream dehazing processes, considering both network architecture and learning
objectives. We design Guidance Fusion Block (GFB) and Guidance Attention Block
(GAB) to facilitate the integration of detection information into the network.
Furthermore, the incorporation of the detection task loss aids in refining the
optimization process. Additionally, we introduce a new Physics-aware Feature
Enhancement Block (PFEB), which integrates physics-based priors to enhance the
feature extraction and representation capabilities. Extensive experiments on
synthetic and real-world datasets demonstrate the superiority of our method
over state-of-the-art methods on both image quality and detection precision.
Our source code is available at https://github.com/fanyihua0309/FriendNet.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 12:19:04 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 12:34:29 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Fan",
"Yihua",
""
],
[
"Wang",
"Yongzhen",
""
],
[
"Wei",
"Mingqiang",
""
],
[
"Wang",
"Fu Lee",
""
],
[
"Xie",
"Haoran",
""
]
] | TITLE: FriendNet: Detection-Friendly Dehazing Network
ABSTRACT: Adverse weather conditions often impair the quality of captured images,
inevitably misleading cutting-edge object detection models for advanced driver
assistance systems (ADAS) and autonomous driving. In this paper, we raise an
intriguing question: can the combination of image restoration and object
detection enhance detection performance in adverse weather conditions? To
answer it, we propose an effective architecture that bridges image dehazing and
object detection together via guidance information and task-driven learning to
achieve detection-friendly dehazing, termed FriendNet. FriendNet aims to
deliver both high-quality perception and high detection capacity. Different
from existing efforts that intuitively treat image dehazing as pre-processing,
FriendNet establishes a positive correlation between these two tasks. Clean
features generated by the dehazing network potentially contribute to
improvements in object detection performance. Conversely, object detection
crucially guides the learning process of the image dehazing network under the
task-driven learning scheme. We shed light on how downstream tasks can guide
upstream dehazing processes, considering both network architecture and learning
objectives. We design Guidance Fusion Block (GFB) and Guidance Attention Block
(GAB) to facilitate the integration of detection information into the network.
Furthermore, the incorporation of the detection task loss aids in refining the
optimization process. Additionally, we introduce a new Physics-aware Feature
Enhancement Block (PFEB), which integrates physics-based priors to enhance the
feature extraction and representation capabilities. Extensive experiments on
synthetic and real-world datasets demonstrate the superiority of our method
over state-of-the-art methods on both image quality and detection precision.
Our source code is available at https://github.com/fanyihua0309/FriendNet.
|
2403.08002 | Juan Manuel Zambrano Chaves | Juan Manuel Zambrano Chaves, Shih-Cheng Huang, Yanbo Xu, Hanwen Xu,
Naoto Usuyama, Sheng Zhang, Fei Wang, Yujia Xie, Mahmoud Khademi, Ziyi Yang,
Hany Awadalla, Julia Gong, Houdong Hu, Jianwei Yang, Chunyuan Li, Jianfeng
Gao, Yu Gu, Cliff Wong, Mu Wei, Tristan Naumann, Muhao Chen, Matthew P.
Lungren, Akshay Chaudhari, Serena Yeung-Levy, Curtis P. Langlotz, Sheng Wang,
Hoifung Poon | Towards a clinically accessible radiology foundation model: open-access
and lightweight, with automated evaluation | null | Nature Communications volume 16, Article number: 3108 (2025) | 10.1038/s41467-025-58344-x | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The scaling laws and extraordinary performance of large foundation models
motivate the development and utilization of such models in biomedicine.
However, despite early promising results on some biomedical benchmarks, there
are still major challenges that need to be addressed before these models can be
used in real-world clinics. Frontier general-domain models such as GPT-4V still
have significant performance gaps in multimodal biomedical applications. More
importantly, less-acknowledged pragmatic issues, including accessibility, model
cost, and tedious manual evaluation make it hard for clinicians to use
state-of-the-art large models directly on private patient data. Here, we
explore training open-source small multimodal models (SMMs) to bridge
competency gaps for unmet clinical needs in radiology. To maximize data
efficiency, we adopt a modular approach by incorporating state-of-the-art
pre-trained models for image and text modalities, and focusing on training a
lightweight adapter to ground each modality to the text embedding space, as
exemplified by LLaVA-Med. For training, we assemble a large dataset of over 697
thousand radiology image-text pairs. For evaluation, we propose CheXprompt, a
GPT-4-based metric for factuality evaluation, and demonstrate its parity with
expert evaluation. For best practice, we conduct a systematic ablation study on
various choices in data engineering and multimodal training. The resulting
LlaVA-Rad (7B) model attains state-of-the-art results on standard radiology
tasks such as report generation and cross-modal retrieval, even outperforming
much larger models such as GPT-4V and Med-PaLM M (84B). The inference of
LlaVA-Rad is fast and can be performed on a single V100 GPU in private
settings, offering a promising state-of-the-art tool for real-world clinical
applications.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 18:12:02 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Mar 2024 23:31:22 GMT"
},
{
"version": "v3",
"created": "Sat, 4 May 2024 00:35:01 GMT"
},
{
"version": "v4",
"created": "Fri, 10 May 2024 23:46:33 GMT"
},
{
"version": "v5",
"created": "Thu, 27 Jun 2024 02:51:29 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chaves",
"Juan Manuel Zambrano",
""
],
[
"Huang",
"Shih-Cheng",
""
],
[
"Xu",
"Yanbo",
""
],
[
"Xu",
"Hanwen",
""
],
[
"Usuyama",
"Naoto",
""
],
[
"Zhang",
"Sheng",
""
],
[
"Wang",
"Fei",
""
],
[
"Xie",
"Yujia",
""
],
[
"Khademi",
"Mahmoud",
""
],
[
"Yang",
"Ziyi",
""
],
[
"Awadalla",
"Hany",
""
],
[
"Gong",
"Julia",
""
],
[
"Hu",
"Houdong",
""
],
[
"Yang",
"Jianwei",
""
],
[
"Li",
"Chunyuan",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Gu",
"Yu",
""
],
[
"Wong",
"Cliff",
""
],
[
"Wei",
"Mu",
""
],
[
"Naumann",
"Tristan",
""
],
[
"Chen",
"Muhao",
""
],
[
"Lungren",
"Matthew P.",
""
],
[
"Chaudhari",
"Akshay",
""
],
[
"Yeung-Levy",
"Serena",
""
],
[
"Langlotz",
"Curtis P.",
""
],
[
"Wang",
"Sheng",
""
],
[
"Poon",
"Hoifung",
""
]
] | TITLE: Towards a clinically accessible radiology foundation model: open-access
and lightweight, with automated evaluation
ABSTRACT: The scaling laws and extraordinary performance of large foundation models
motivate the development and utilization of such models in biomedicine.
However, despite early promising results on some biomedical benchmarks, there
are still major challenges that need to be addressed before these models can be
used in real-world clinics. Frontier general-domain models such as GPT-4V still
have significant performance gaps in multimodal biomedical applications. More
importantly, less-acknowledged pragmatic issues, including accessibility, model
cost, and tedious manual evaluation make it hard for clinicians to use
state-of-the-art large models directly on private patient data. Here, we
explore training open-source small multimodal models (SMMs) to bridge
competency gaps for unmet clinical needs in radiology. To maximize data
efficiency, we adopt a modular approach by incorporating state-of-the-art
pre-trained models for image and text modalities, and focusing on training a
lightweight adapter to ground each modality to the text embedding space, as
exemplified by LLaVA-Med. For training, we assemble a large dataset of over 697
thousand radiology image-text pairs. For evaluation, we propose CheXprompt, a
GPT-4-based metric for factuality evaluation, and demonstrate its parity with
expert evaluation. For best practice, we conduct a systematic ablation study on
various choices in data engineering and multimodal training. The resulting
LlaVA-Rad (7B) model attains state-of-the-art results on standard radiology
tasks such as report generation and cross-modal retrieval, even outperforming
much larger models such as GPT-4V and Med-PaLM M (84B). The inference of
LlaVA-Rad is fast and can be performed on a single V100 GPU in private
settings, offering a promising state-of-the-art tool for real-world clinical
applications.
|
2404.14653 | Achyut Paudel | Achyut Paudel, Jostan Brown, Priyanka Upadhyaya, Atif Bilal Asad,
Safal Kshetri, Joseph R. Davidson, Cindy Grimm, Ashley Thompson, Bernardita
Sallato, Matthew D. Whiting, Manoj Karkee | Machine Vision-Based Assessment of Fall Color Changes and its
Relationship with Leaf Nitrogen Concentration | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Apple (\textit{Malus domestica} Borkh.) trees are deciduous, shedding leaves
each year. This process is preceded by a gradual change in leaf color from
green to yellow as chlorophyll is degraded prior to abscission. The initiation
and rate of this color change are affected by many factors including leaf
nitrogen (N) concentration. We predict that leaf color during this transition
may be indicative of the nitrogen status of apple trees. This study assesses a
machine vision-based system for quantifying the change in leaf color and its
correlation with leaf nitrogen content. An image dataset was collected in color
and 3D over five weeks in the fall of 2021 and 2023 at a commercial orchard
using a ground vehicle-based stereovision sensor. Trees in the foreground were
segmented from the point cloud using color and depth thresholding methods.
Then, to estimate the proportion of yellow leaves per canopy, the color
information of the segmented canopy area was quantified using a custom-defined
metric, \textit{yellowness index} (a normalized ratio of yellow to green
foliage in the tree) that varied from -1 to +1 (-1 being completely green and
+1 being completely yellow). Both K-means-based methods and gradient boosting
methods were used to estimate the \textit{yellowness index}. The gradient
boosting based method proposed in this study was better than the K-means-based
method (both in terms of computational time and accuracy), achieving an $R^2$
of 0.72 in estimating the \textit{yellowness index}. The metric was able to
capture the gradual color transition from green to yellow over the study
duration. Trees with lower leaf nitrogen showed the color transition to yellow
earlier than the trees with higher nitrogen.
Keywords: Fruit Tree Nitrogen Management, Machine Vision, Point Cloud
Segmentation, Precision Nitrogen Management
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 01:19:19 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Sep 2024 22:30:25 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Nov 2024 06:03:47 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 18:39:32 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Paudel",
"Achyut",
""
],
[
"Brown",
"Jostan",
""
],
[
"Upadhyaya",
"Priyanka",
""
],
[
"Asad",
"Atif Bilal",
""
],
[
"Kshetri",
"Safal",
""
],
[
"Davidson",
"Joseph R.",
""
],
[
"Grimm",
"Cindy",
""
],
[
"Thompson",
"Ashley",
""
],
[
"Sallato",
"Bernardita",
""
],
[
"Whiting",
"Matthew D.",
""
],
[
"Karkee",
"Manoj",
""
]
] | TITLE: Machine Vision-Based Assessment of Fall Color Changes and its
Relationship with Leaf Nitrogen Concentration
ABSTRACT: Apple (\textit{Malus domestica} Borkh.) trees are deciduous, shedding leaves
each year. This process is preceded by a gradual change in leaf color from
green to yellow as chlorophyll is degraded prior to abscission. The initiation
and rate of this color change are affected by many factors including leaf
nitrogen (N) concentration. We predict that leaf color during this transition
may be indicative of the nitrogen status of apple trees. This study assesses a
machine vision-based system for quantifying the change in leaf color and its
correlation with leaf nitrogen content. An image dataset was collected in color
and 3D over five weeks in the fall of 2021 and 2023 at a commercial orchard
using a ground vehicle-based stereovision sensor. Trees in the foreground were
segmented from the point cloud using color and depth thresholding methods.
Then, to estimate the proportion of yellow leaves per canopy, the color
information of the segmented canopy area was quantified using a custom-defined
metric, \textit{yellowness index} (a normalized ratio of yellow to green
foliage in the tree) that varied from -1 to +1 (-1 being completely green and
+1 being completely yellow). Both K-means-based methods and gradient boosting
methods were used to estimate the \textit{yellowness index}. The gradient
boosting based method proposed in this study was better than the K-means-based
method (both in terms of computational time and accuracy), achieving an $R^2$
of 0.72 in estimating the \textit{yellowness index}. The metric was able to
capture the gradual color transition from green to yellow over the study
duration. Trees with lower leaf nitrogen showed the color transition to yellow
earlier than the trees with higher nitrogen.
Keywords: Fruit Tree Nitrogen Management, Machine Vision, Point Cloud
Segmentation, Precision Nitrogen Management
|
2404.17034 | Keziah Naggita Ms | Keziah Naggita and Matthew R. Walter and Avrim Blum | Learning Actionable Counterfactual Explanations in Large State Spaces | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recourse generators provide actionable insights, often through feature-based
counterfactual explanations (CFEs), to help negatively classified individuals
understand how to adjust their input features to achieve a positive
classification. These feature-based CFEs, which we refer to as \emph{low-level}
CFEs, are overly specific (e.g., coding experience: $4 \to 5+$ years) and often
recommended in feature space that doesn't straightforwardly align with
real-world actions. To bridge this gap, we introduce three novel recourse types
grounded in real-world actions: high-level continuous (\emph{hl-continuous}),
high-level discrete (\emph{hl-discrete}), and high-level ID (\emph{hl-id})
CFEs.
We formulate single-agent CFE generation methods, where we model the
hl-discrete CFE as a solution to a weighted set cover problem and the
hl-continuous CFE as a solution to an integer linear program. Since these
methods require costly optimization per agent, we propose data-driven CFE
generation approaches that, given instances of agents and their optimal CFEs,
learn a CFE generator that quickly provides optimal CFEs for new agents. This
approach, also viewed as one of learning an optimal policy in a family of large
but deterministic MDPs, considers several problem formulations, including
formulations in which the actions and their effects are unknown, and therefore
addresses informational and computational challenges.
Through extensive empirical evaluation using publicly available healthcare
datasets (BRFSS, Foods, and NHANES), we compare the proposed forms of recourse
to low-level CFEs and assess the effectiveness of our data-driven approaches.
Empirical results show that the proposed data-driven CFE generators are
accurate and resource-efficient, and the proposed forms of recourse have
various advantages over the low-level CFEs.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2024 20:49:03 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 20:36:37 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Naggita",
"Keziah",
""
],
[
"Walter",
"Matthew R.",
""
],
[
"Blum",
"Avrim",
""
]
] | TITLE: Learning Actionable Counterfactual Explanations in Large State Spaces
ABSTRACT: Recourse generators provide actionable insights, often through feature-based
counterfactual explanations (CFEs), to help negatively classified individuals
understand how to adjust their input features to achieve a positive
classification. These feature-based CFEs, which we refer to as \emph{low-level}
CFEs, are overly specific (e.g., coding experience: $4 \to 5+$ years) and often
recommended in feature space that doesn't straightforwardly align with
real-world actions. To bridge this gap, we introduce three novel recourse types
grounded in real-world actions: high-level continuous (\emph{hl-continuous}),
high-level discrete (\emph{hl-discrete}), and high-level ID (\emph{hl-id})
CFEs.
We formulate single-agent CFE generation methods, where we model the
hl-discrete CFE as a solution to a weighted set cover problem and the
hl-continuous CFE as a solution to an integer linear program. Since these
methods require costly optimization per agent, we propose data-driven CFE
generation approaches that, given instances of agents and their optimal CFEs,
learn a CFE generator that quickly provides optimal CFEs for new agents. This
approach, also viewed as one of learning an optimal policy in a family of large
but deterministic MDPs, considers several problem formulations, including
formulations in which the actions and their effects are unknown, and therefore
addresses informational and computational challenges.
Through extensive empirical evaluation using publicly available healthcare
datasets (BRFSS, Foods, and NHANES), we compare the proposed forms of recourse
to low-level CFEs and assess the effectiveness of our data-driven approaches.
Empirical results show that the proposed data-driven CFE generators are
accurate and resource-efficient, and the proposed forms of recourse have
various advantages over the low-level CFEs.
|
2405.14325 | Jia Guo | Jia Guo, Shuai Lu, Weihang Zhang, Fang Chen, Huiqi Li, Hongen Liao | Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised
Anomaly Detection | IEEE/CVF CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent studies highlighted a practical setting of unsupervised anomaly
detection (UAD) that builds a unified model for multi-class images. Despite
various advancements addressing this challenging task, the detection
performance under the multi-class setting still lags far behind
state-of-the-art class-separated models. Our research aims to bridge this
substantial performance gap. In this paper, we introduce a minimalistic
reconstruction-based anomaly detection framework, namely Dinomaly, which
leverages pure Transformer architectures without relying on complex designs,
additional modules, or specialized tricks. Given this powerful framework
consisted of only Attentions and MLPs, we found four simple components that are
essential to multi-class anomaly detection: (1) Foundation Transformers that
extracts universal and discriminative features, (2) Noisy Bottleneck where
pre-existing Dropouts do all the noise injection tricks, (3) Linear Attention
that naturally cannot focus, and (4) Loose Reconstruction that does not force
layer-to-layer and point-by-point reconstruction. Extensive experiments are
conducted across popular anomaly detection benchmarks including MVTec-AD, VisA,
and Real-IAD. Our proposed Dinomaly achieves impressive image-level AUROC of
99.6%, 98.7%, and 89.3% on the three datasets respectively, which is not only
superior to state-of-the-art multi-class UAD methods, but also achieves the
most advanced class-separated UAD records.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 08:55:20 GMT"
},
{
"version": "v2",
"created": "Wed, 29 May 2024 08:57:31 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Oct 2024 05:47:33 GMT"
},
{
"version": "v4",
"created": "Thu, 14 Nov 2024 15:47:04 GMT"
},
{
"version": "v5",
"created": "Wed, 2 Apr 2025 12:01:42 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Guo",
"Jia",
""
],
[
"Lu",
"Shuai",
""
],
[
"Zhang",
"Weihang",
""
],
[
"Chen",
"Fang",
""
],
[
"Li",
"Huiqi",
""
],
[
"Liao",
"Hongen",
""
]
] | TITLE: Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised
Anomaly Detection
ABSTRACT: Recent studies highlighted a practical setting of unsupervised anomaly
detection (UAD) that builds a unified model for multi-class images. Despite
various advancements addressing this challenging task, the detection
performance under the multi-class setting still lags far behind
state-of-the-art class-separated models. Our research aims to bridge this
substantial performance gap. In this paper, we introduce a minimalistic
reconstruction-based anomaly detection framework, namely Dinomaly, which
leverages pure Transformer architectures without relying on complex designs,
additional modules, or specialized tricks. Given this powerful framework
consisting of only Attentions and MLPs, we found four simple components that are
essential to multi-class anomaly detection: (1) Foundation Transformers that
extract universal and discriminative features, (2) Noisy Bottleneck where
pre-existing Dropouts do all the noise injection tricks, (3) Linear Attention
that naturally cannot focus, and (4) Loose Reconstruction that does not force
layer-to-layer and point-by-point reconstruction. Extensive experiments are
conducted across popular anomaly detection benchmarks including MVTec-AD, VisA,
and Real-IAD. Our proposed Dinomaly achieves impressive image-level AUROC of
99.6%, 98.7%, and 89.3% on the three datasets respectively, which is not only
superior to state-of-the-art multi-class UAD methods, but also achieves the
most advanced class-separated UAD records.
|
2405.16625 | Shuvendu Roy | Shuvendu Roy, Elham Dolatabadi, Arash Afkanpour, Ali Etemad | Consistency-Guided Asynchronous Contrastive Tuning for Few-Shot
Class-Incremental Tuning of Foundation Models | Accepted in Transactions on Machine Learning Research (TMLR) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | We propose Consistency-guided Asynchronous Contrastive Tuning (CoACT), a
novel method for continuously tuning foundation models to learn new classes in
few-shot settings. CoACT consists of three key components: (i) asynchronous
contrastive tuning, which learns new classes by including LoRA modules in the
pre-trained encoder while enforcing consistency between two asynchronous
encoders; (ii) controlled fine-tuning, which facilitates effective tuning of a
subset of the foundation model; and (iii) consistency-guided incremental
tuning, which enforces additional regularization during later sessions to
reduce forgetting of the learned classes. We evaluate our proposed solution on
Few-Shot Class-Incremental Learning (FSCIL) as well as a new and more
challenging setup called Few-Shot Class-Incremental Tuning (FSCIT), which
facilitates the continual tuning of vision foundation models to learn new
classes with only a few samples per class. Unlike traditional FSCIL, FSCIT does
not require a large in-distribution base session for initial fully supervised
training prior to the incremental few-shot sessions. We conduct extensive
evaluations across 16 diverse datasets, demonstrating the effectiveness of
CoACT in both FSCIL and FSCIT setups. CoACT outperforms existing methods by up
to 5.02% in FSCIL and up to 12.51% in FSCIT for individual datasets, with an
average improvement of 2.47%. Furthermore, CoACT exhibits reduced forgetting
and enhanced robustness in low-shot experiments. Detailed ablation and
sensitivity studies highlight the contribution of each component of CoACT. We
make our code publicly available at https://github.com/ShuvenduRoy/CoACT-FSCIL.
| [
{
"version": "v1",
"created": "Sun, 26 May 2024 16:41:03 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 19:28:44 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Roy",
"Shuvendu",
""
],
[
"Dolatabadi",
"Elham",
""
],
[
"Afkanpour",
"Arash",
""
],
[
"Etemad",
"Ali",
""
]
] | TITLE: Consistency-Guided Asynchronous Contrastive Tuning for Few-Shot
Class-Incremental Tuning of Foundation Models
ABSTRACT: We propose Consistency-guided Asynchronous Contrastive Tuning (CoACT), a
novel method for continuously tuning foundation models to learn new classes in
few-shot settings. CoACT consists of three key components: (i) asynchronous
contrastive tuning, which learns new classes by including LoRA modules in the
pre-trained encoder while enforcing consistency between two asynchronous
encoders; (ii) controlled fine-tuning, which facilitates effective tuning of a
subset of the foundation model; and (iii) consistency-guided incremental
tuning, which enforces additional regularization during later sessions to
reduce forgetting of the learned classes. We evaluate our proposed solution on
Few-Shot Class-Incremental Learning (FSCIL) as well as a new and more
challenging setup called Few-Shot Class-Incremental Tuning (FSCIT), which
facilitates the continual tuning of vision foundation models to learn new
classes with only a few samples per class. Unlike traditional FSCIL, FSCIT does
not require a large in-distribution base session for initial fully supervised
training prior to the incremental few-shot sessions. We conduct extensive
evaluations across 16 diverse datasets, demonstrating the effectiveness of
CoACT in both FSCIL and FSCIT setups. CoACT outperforms existing methods by up
to 5.02% in FSCIL and up to 12.51% in FSCIT for individual datasets, with an
average improvement of 2.47%. Furthermore, CoACT exhibits reduced forgetting
and enhanced robustness in low-shot experiments. Detailed ablation and
sensitivity studies highlight the contribution of each component of CoACT. We
make our code publicly available at https://github.com/ShuvenduRoy/CoACT-FSCIL.
|
2406.10462 | Wei Chen | Wei Chen, Lin Li, Yongqi Yang, Bin Wen, Fan Yang, Tingting Gao, Yu Wu,
Long Chen | CoMM: A Coherent Interleaved Image-Text Dataset for Multimodal
Understanding and Generation | 22 pages, Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Interleaved image-text generation has emerged as a crucial multimodal task,
aiming at creating sequences of interleaved visual and textual content given a
query. Despite notable advancements in recent multimodal large language models
(MLLMs), generating integrated image-text sequences that exhibit narrative
coherence and entity and style consistency remains challenging due to poor
training data quality. To address this gap, we introduce CoMM, a high-quality
Coherent interleaved image-text MultiModal dataset designed to enhance the
coherence, consistency, and alignment of generated multimodal content.
Initially, CoMM harnesses raw data from diverse sources, focusing on
instructional content and visual storytelling, establishing a foundation for
coherent and consistent content. To further refine the data quality, we devise
a multi-perspective filter strategy that leverages advanced pre-trained models
to ensure the development of sentences, consistency of inserted images, and
semantic alignment between them. Various quality evaluation metrics are
designed to prove the high quality of the filtered dataset. Meanwhile,
extensive few-shot experiments on various downstream tasks demonstrate CoMM's
effectiveness in significantly enhancing the in-context learning capabilities
of MLLMs. Moreover, we propose four new tasks to evaluate MLLMs' interleaved
generation abilities, supported by a comprehensive evaluation framework. We
believe CoMM opens a new avenue for advanced MLLMs with superior multimodal
in-context learning and understanding ability.
| [
{
"version": "v1",
"created": "Sat, 15 Jun 2024 01:27:58 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Dec 2024 11:39:46 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 13:30:29 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chen",
"Wei",
""
],
[
"Li",
"Lin",
""
],
[
"Yang",
"Yongqi",
""
],
[
"Wen",
"Bin",
""
],
[
"Yang",
"Fan",
""
],
[
"Gao",
"Tingting",
""
],
[
"Wu",
"Yu",
""
],
[
"Chen",
"Long",
""
]
] | TITLE: CoMM: A Coherent Interleaved Image-Text Dataset for Multimodal
Understanding and Generation
ABSTRACT: Interleaved image-text generation has emerged as a crucial multimodal task,
aiming at creating sequences of interleaved visual and textual content given a
query. Despite notable advancements in recent multimodal large language models
(MLLMs), generating integrated image-text sequences that exhibit narrative
coherence and entity and style consistency remains challenging due to poor
training data quality. To address this gap, we introduce CoMM, a high-quality
Coherent interleaved image-text MultiModal dataset designed to enhance the
coherence, consistency, and alignment of generated multimodal content.
Initially, CoMM harnesses raw data from diverse sources, focusing on
instructional content and visual storytelling, establishing a foundation for
coherent and consistent content. To further refine the data quality, we devise
a multi-perspective filter strategy that leverages advanced pre-trained models
to ensure the development of sentences, consistency of inserted images, and
semantic alignment between them. Various quality evaluation metrics are
designed to prove the high quality of the filtered dataset. Meanwhile,
extensive few-shot experiments on various downstream tasks demonstrate CoMM's
effectiveness in significantly enhancing the in-context learning capabilities
of MLLMs. Moreover, we propose four new tasks to evaluate MLLMs' interleaved
generation abilities, supported by a comprehensive evaluation framework. We
believe CoMM opens a new avenue for advanced MLLMs with superior multimodal
in-context learning and understanding ability.
|
2406.12501 | Guipeng Xv | Guipeng Xv, Xinyu Li, Ruobing Xie, Chen Lin, Chong Liu, Feng Xia,
Zhanhui Kang, Leyu Lin | Improving Multi-modal Recommender Systems by Denoising and Aligning
Multi-modal Content and User Feedback | After further review, we believe the content of the paper is not yet
fully ready and requires additional time for improvement. To ensure quality,
we have decided to withdraw this preprint | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal recommender systems (MRSs) are pivotal in diverse online web
platforms and have garnered considerable attention in recent years. However,
previous studies overlook the challenges of (1) noisy multi-modal content, (2)
noisy user feedback, and (3) aligning multi-modal content with user feedback.
In order to tackle these challenges, we propose Denoising and Aligning
Multi-modal Recommender System (DA-MRS). To mitigate multi-modal noise, DA-MRS
first constructs item-item graphs determined by consistent content similarity
across modalities. To denoise user feedback, DA-MRS associates the probability
of observed feedback with multi-modal content and devises a denoised BPR loss.
Furthermore, DA-MRS implements Alignment guided by User preference to enhance
task-specific item representation and Alignment guided by graded Item relations
to provide finer-grained alignment. Extensive experiments verify that DA-MRS is
a plug-and-play framework and achieves significant and consistent improvements
across various datasets, backbone models, and noisy scenarios.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2024 11:05:32 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 06:51:31 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Xv",
"Guipeng",
""
],
[
"Li",
"Xinyu",
""
],
[
"Xie",
"Ruobing",
""
],
[
"Lin",
"Chen",
""
],
[
"Liu",
"Chong",
""
],
[
"Xia",
"Feng",
""
],
[
"Kang",
"Zhanhui",
""
],
[
"Lin",
"Leyu",
""
]
] | TITLE: Improving Multi-modal Recommender Systems by Denoising and Aligning
Multi-modal Content and User Feedback
ABSTRACT: Multi-modal recommender systems (MRSs) are pivotal in diverse online web
platforms and have garnered considerable attention in recent years. However,
previous studies overlook the challenges of (1) noisy multi-modal content, (2)
noisy user feedback, and (3) aligning multi-modal content with user feedback.
In order to tackle these challenges, we propose Denoising and Aligning
Multi-modal Recommender System (DA-MRS). To mitigate multi-modal noise, DA-MRS
first constructs item-item graphs determined by consistent content similarity
across modalities. To denoise user feedback, DA-MRS associates the probability
of observed feedback with multi-modal content and devises a denoised BPR loss.
Furthermore, DA-MRS implements Alignment guided by User preference to enhance
task-specific item representation and Alignment guided by graded Item relations
to provide finer-grained alignment. Extensive experiments verify that DA-MRS is
a plug-and-play framework and achieves significant and consistent improvements
across various datasets, backbone models, and noisy scenarios.
|
2406.12909 | Massimiliano Lupo Pasini Dr. | Massimiliano Lupo Pasini, Jong Youl Choi, Kshitij Mehta, Pei Zhang,
David Rogers, Jonghyun Bae, Khaled Z. Ibrahim, Ashwin M. Aji, Karl W. Schulz,
Jorda Polo, Prasanna Balaprakash | Scalable Training of Trustworthy and Energy-Efficient Predictive Graph
Foundation Models for Atomistic Materials Modeling: A Case Study with
HydraGNN | 51 pages, 32 figures | null | 10.1007/s11227-025-07029-9 | null | cs.LG physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | We present our work on developing and training scalable, trustworthy, and
energy-efficient predictive graph foundation models (GFMs) using HydraGNN, a
multi-headed graph convolutional neural network architecture. HydraGNN expands
the boundaries of graph neural network (GNN) computations in both training
scale and data diversity. It abstracts over message passing algorithms,
allowing both reproduction of and comparison across algorithmic innovations
that define nearest-neighbor convolution in GNNs. This work discusses a series
of optimizations that have allowed scaling up the GFMs training to tens of
thousands of GPUs on datasets consisting of hundreds of millions of graphs. Our
GFMs use multi-task learning (MTL) to simultaneously learn graph-level and
node-level properties of atomistic structures, such as energy and atomic
forces. Using over 154 million atomistic structures for training, we illustrate
the performance of our approach along with the lessons learned on two
state-of-the-art United States Department of Energy (US-DOE) supercomputers,
namely the Perlmutter petascale system at the National Energy Research
Scientific Computing Center and the Frontier exascale system at Oak Ridge
Leadership Computing Facility. The HydraGNN architecture enables the GFM to
achieve near-linear strong scaling performance using more than 2,000 GPUs on
Perlmutter and 16,000 GPUs on Frontier.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2024 21:21:42 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jun 2024 17:58:27 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Oct 2024 02:46:46 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Nov 2024 17:09:52 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Pasini",
"Massimiliano Lupo",
""
],
[
"Choi",
"Jong Youl",
""
],
[
"Mehta",
"Kshitij",
""
],
[
"Zhang",
"Pei",
""
],
[
"Rogers",
"David",
""
],
[
"Bae",
"Jonghyun",
""
],
[
"Ibrahim",
"Khaled Z.",
""
],
[
"Aji",
"Ashwin M.",
""
],
[
"Schulz",
"Karl W.",
""
],
[
"Polo",
"Jorda",
""
],
[
"Balaprakash",
"Prasanna",
""
]
] | TITLE: Scalable Training of Trustworthy and Energy-Efficient Predictive Graph
Foundation Models for Atomistic Materials Modeling: A Case Study with
HydraGNN
ABSTRACT: We present our work on developing and training scalable, trustworthy, and
energy-efficient predictive graph foundation models (GFMs) using HydraGNN, a
multi-headed graph convolutional neural network architecture. HydraGNN expands
the boundaries of graph neural network (GNN) computations in both training
scale and data diversity. It abstracts over message passing algorithms,
allowing both reproduction of and comparison across algorithmic innovations
that define nearest-neighbor convolution in GNNs. This work discusses a series
of optimizations that have allowed scaling up the GFMs training to tens of
thousands of GPUs on datasets consisting of hundreds of millions of graphs. Our
GFMs use multi-task learning (MTL) to simultaneously learn graph-level and
node-level properties of atomistic structures, such as energy and atomic
forces. Using over 154 million atomistic structures for training, we illustrate
the performance of our approach along with the lessons learned on two
state-of-the-art United States Department of Energy (US-DOE) supercomputers,
namely the Perlmutter petascale system at the National Energy Research
Scientific Computing Center and the Frontier exascale system at Oak Ridge
Leadership Computing Facility. The HydraGNN architecture enables the GFM to
achieve near-linear strong scaling performance using more than 2,000 GPUs on
Perlmutter and 16,000 GPUs on Frontier.
|
2406.13337 | Khai Le-Duc | Khai Le-Duc, David Thulke, Hung-Phong Tran, Long Vo-Dang, Khai-Nguyen
Nguyen, Truong-Son Hy, Ralf Schl\"uter | Medical Spoken Named Entity Recognition | NAACL 2025, 60 pages | null | null | null | eess.AS cs.CL cs.LG cs.SD | http://creativecommons.org/licenses/by/4.0/ | Spoken Named Entity Recognition (NER) aims to extract named entities from
speech and categorise them into types like person, location, organization, etc.
In this work, we present VietMed-NER - the first spoken NER dataset in the
medical domain. To our knowledge, our Vietnamese real-world dataset is the
largest spoken NER dataset in the world regarding the number of entity types,
featuring 18 distinct types. Furthermore, we present baseline results using
various state-of-the-art pre-trained models: encoder-only and
sequence-to-sequence; and conduct quantitative and qualitative error analysis.
We found that pre-trained multilingual models generally outperform monolingual
models on reference text and ASR output and encoders outperform
sequence-to-sequence models in NER tasks. By translating the transcripts, the
dataset can also be utilised for text NER in the medical domain in other
languages than Vietnamese. All code, data and models are publicly available:
https://github.com/leduckhai/MultiMed/tree/master/VietMed-NER.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 08:39:09 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Jul 2024 00:54:08 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 09:12:03 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Le-Duc",
"Khai",
""
],
[
"Thulke",
"David",
""
],
[
"Tran",
"Hung-Phong",
""
],
[
"Vo-Dang",
"Long",
""
],
[
"Nguyen",
"Khai-Nguyen",
""
],
[
"Hy",
"Truong-Son",
""
],
[
"Schlüter",
"Ralf",
""
]
] | TITLE: Medical Spoken Named Entity Recognition
ABSTRACT: Spoken Named Entity Recognition (NER) aims to extract named entities from
speech and categorise them into types like person, location, organization, etc.
In this work, we present VietMed-NER - the first spoken NER dataset in the
medical domain. To our knowledge, our Vietnamese real-world dataset is the
largest spoken NER dataset in the world regarding the number of entity types,
featuring 18 distinct types. Furthermore, we present baseline results using
various state-of-the-art pre-trained models: encoder-only and
sequence-to-sequence; and conduct quantitative and qualitative error analysis.
We found that pre-trained multilingual models generally outperform monolingual
models on reference text and ASR output and encoders outperform
sequence-to-sequence models in NER tasks. By translating the transcripts, the
dataset can also be utilised for text NER in the medical domain in other
languages than Vietnamese. All code, data and models are publicly available:
https://github.com/leduckhai/MultiMed/tree/master/VietMed-NER.
|
2406.16959 | Dianhui Wang | Dianhui Wang and Gang Dang | Recurrent Stochastic Configuration Networks for Temporal Data Analytics | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal data modelling techniques with neural networks are useful in many
domain applications, including time-series forecasting and control engineering.
This paper aims at developing a recurrent version of stochastic configuration
networks (RSCNs) for problem solving, where we have no underlying assumption on
the dynamic orders of the input variables. Given a collection of historical
data, we first build an initial RSCN model in the light of a supervisory
mechanism, followed by an online update of the output weights by using a
projection algorithm. Some theoretical results are established, including the
echo state property, the universal approximation property of RSCNs for both the
offline and online learnings, and the convergence of the output weights. The
proposed RSCN model is remarkably distinguished from the well-known echo state
networks (ESNs) in terms of the way of assigning the input random weight matrix
and a special structure of the random feedback matrix. A comprehensive
comparison study among the long short-term memory (LSTM) network, the original
ESN, and several state-of-the-art ESN methods such as the simple cycle
reservoir (SCR), the polynomial ESN (PESN), the leaky-integrator ESN (LIESN)
and RSCN is carried out. Numerical results clearly indicate that the proposed
RSCN performs favourably over all of the datasets.
| [
{
"version": "v1",
"created": "Fri, 21 Jun 2024 03:21:22 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Sep 2024 08:12:59 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 02:12:52 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Dianhui",
""
],
[
"Dang",
"Gang",
""
]
] | TITLE: Recurrent Stochastic Configuration Networks for Temporal Data Analytics
ABSTRACT: Temporal data modelling techniques with neural networks are useful in many
domain applications, including time-series forecasting and control engineering.
This paper aims at developing a recurrent version of stochastic configuration
networks (RSCNs) for problem solving, where we have no underlying assumption on
the dynamic orders of the input variables. Given a collection of historical
data, we first build an initial RSCN model in the light of a supervisory
mechanism, followed by an online update of the output weights by using a
projection algorithm. Some theoretical results are established, including the
echo state property, the universal approximation property of RSCNs for both the
offline and online learnings, and the convergence of the output weights. The
proposed RSCN model is remarkably distinguished from the well-known echo state
networks (ESNs) in terms of the way of assigning the input random weight matrix
and a special structure of the random feedback matrix. A comprehensive
comparison study among the long short-term memory (LSTM) network, the original
ESN, and several state-of-the-art ESN methods such as the simple cycle
reservoir (SCR), the polynomial ESN (PESN), the leaky-integrator ESN (LIESN)
and RSCN is carried out. Numerical results clearly indicate that the proposed
RSCN performs favourably over all of the datasets.
|
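Editor's note on the RSCN record above: the paper positions recurrent stochastic configuration networks against echo state networks (ESNs). As orientation only, the sketch below is a minimal leaky-integrator ESN with a ridge-regression readout in NumPy. All sizes and hyperparameters are illustrative assumptions, and the RSCN-specific supervisory mechanism, random feedback structure, and projection-based online update are not reproduced.

```python
import numpy as np

# Minimal leaky-integrator echo state network (ESN) sketch.
# This is the reservoir-computing baseline the abstract compares against,
# NOT the RSCN method itself.

rng = np.random.default_rng(0)

n_in, n_res = 1, 200          # input and reservoir sizes (illustrative)
leak, rho, ridge = 0.3, 0.9, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius (echo state property)

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x_new = np.tanh(W_in @ u + W @ x)
        x = (1 - leak) * x + leak * x_new        # leaky integration
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(2000) * 0.05
u_seq = np.sin(t)[:, None]
y_seq = np.sin(t + 0.05)[:, None]

X = run_reservoir(u_seq)[200:]                   # drop washout period
Y = y_seq[200:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)  # ridge readout

pred = X @ W_out
print("train MSE:", float(np.mean((pred - Y) ** 2)))
```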
2407.21185 | Ingrid Navarro | Ingrid Navarro, Pablo Ortega-Kral, Jay Patrikar, Haichuan Wang, Alonso
Cano, Zelin Ye, Jong Hoon Park, Jean Oh and Sebastian Scherer | Amelia: A Large Dataset and Model for Airport Surface Movement
Forecasting | 25 pages, 9 figures, 8 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The growing demand for air travel necessitates advancements in air traffic
management technologies to ensure safe and efficient operations. Predictive
models for terminal airspace can help anticipate future movements and traffic
flows, enabling proactive planning for efficient coordination, collision risk
assessment, taxi-out time prediction, departure metering, and emission
estimations. Although data-driven predictive models have shown promise in
tackling some of these challenges, the absence of large-scale curated surface
movement datasets in the public domain has hindered the development of scalable
and generalizable approaches.
In this context, we propose the Amelia framework, which consists of four key
contributions. First, Amelia-48, a large dataset of airport surface movement
collected through the FAA's System Wide Information Management (SWIM) Program.
This dataset includes over two years' worth of trajectory data (~70TB) across
48 US airports and map data. Second, we develop AmeliaTF, a large
transformer-based baseline for multi-agent, multi-airport trajectory
forecasting. Third, we propose Amelia-10, a training and evaluation benchmark
consisting of 292 days of post-processed data from 10 different airports and a
series of experiments to promote the development of foundation models in
aviation. We provide baseline results across our benchmark using AmeliaTF.
Finally, we release our framework and tools to encourage further aviation
research in the forecasting domain and beyond at https://ameliacmu.github.io
| [
{
"version": "v1",
"created": "Tue, 30 Jul 2024 20:50:48 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 22:28:25 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Navarro",
"Ingrid",
""
],
[
"Ortega-Kral",
"Pablo",
""
],
[
"Patrikar",
"Jay",
""
],
[
"Wang",
"Haichuan",
""
],
[
"Cano",
"Alonso",
""
],
[
"Ye",
"Zelin",
""
],
[
"Park",
"Jong Hoon",
""
],
[
"Oh",
"Jean",
""
],
[
"Scherer",
"Sebastian",
""
]
] | TITLE: Amelia: A Large Dataset and Model for Airport Surface Movement
Forecasting
ABSTRACT: The growing demand for air travel necessitates advancements in air traffic
management technologies to ensure safe and efficient operations. Predictive
models for terminal airspace can help anticipate future movements and traffic
flows, enabling proactive planning for efficient coordination, collision risk
assessment, taxi-out time prediction, departure metering, and emission
estimations. Although data-driven predictive models have shown promise in
tackling some of these challenges, the absence of large-scale curated surface
movement datasets in the public domain has hindered the development of scalable
and generalizable approaches.
In this context, we propose the Amelia framework, which consists of four key
contributions. First, Amelia-48, a large dataset of airport surface movement
collected through the FAA's System Wide Information Management (SWIM) Program.
This dataset includes over two years' worth of trajectory data (~70TB) across
48 US airports and map data. Second, we develop AmeliaTF, a large
transformer-based baseline for multi-agent, multi-airport trajectory
forecasting. Third, we propose Amelia-10, a training and evaluation benchmark
consisting of 292 days of post-processed data from 10 different airports and a
series of experiments to promote the development of foundation models in
aviation. We provide baseline results across our benchmark using AmeliaTF.
Finally, we release our framework and tools to encourage further aviation
research in the forecasting domain and beyond at https://ameliacmu.github.io
|
2408.00490 | Chu Zhao | Chu Zhao, Enneng Yang, Yuliang Liang, Pengxiang Lan, Yuting Liu,
Jianzhe Zhao, Guibing Guo, and Xingwei Wang | Graph Representation Learning via Causal Diffusion for
Out-of-Distribution Recommendation | 14 pages, accepted by WWW2025 | null | null | null | cs.LG cs.AI cs.IR cs.SI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs)-based recommendation algorithms typically assume
that training and testing data are drawn from independent and identically
distributed (IID) spaces. However, this assumption often fails in the presence
of out-of-distribution (OOD) data, resulting in significant performance
degradation. In this study, we construct a Structural Causal Model (SCM) to
analyze interaction data, revealing that environmental confounders (e.g., the
COVID-19 pandemic) lead to unstable correlations in GNN-based models, thus
impairing their generalization to OOD data. To address this issue, we propose a
novel approach, graph representation learning via causal diffusion
(CausalDiffRec) for OOD recommendation. This method enhances the model's
generalization on OOD data by eliminating environmental confounding factors and
learning invariant graph representations. Specifically, we use backdoor
adjustment and variational inference to infer the real environmental
distribution, thereby eliminating the impact of environmental confounders. This
inferred distribution is then used as prior knowledge to guide the
representation learning in the reverse phase of the diffusion process to learn
the invariant representation. In addition, we provide a theoretical derivation
that proves optimizing the objective function of CausalDiffRec can encourage
the model to learn environment-invariant graph representations, thereby
achieving excellent generalization performance in recommendations under
distribution shifts. Our extensive experiments validate the effectiveness of
CausalDiffRec in improving the generalization of OOD data, and the average
improvement is up to 10.69% on Food, 18.83% on KuaiRec, 22.41% on Yelp2018, and
11.65% on Douban datasets.
| [
{
"version": "v1",
"created": "Thu, 1 Aug 2024 11:51:52 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jan 2025 10:08:58 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 14:13:14 GMT"
},
{
"version": "v4",
"created": "Wed, 2 Apr 2025 13:16:51 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhao",
"Chu",
""
],
[
"Yang",
"Enneng",
""
],
[
"Liang",
"Yuliang",
""
],
[
"Lan",
"Pengxiang",
""
],
[
"Liu",
"Yuting",
""
],
[
"Zhao",
"Jianzhe",
""
],
[
"Guo",
"Guibing",
""
],
[
"Wang",
"Xingwei",
""
]
] | TITLE: Graph Representation Learning via Causal Diffusion for
Out-of-Distribution Recommendation
ABSTRACT: Graph Neural Networks (GNNs)-based recommendation algorithms typically assume
that training and testing data are drawn from independent and identically
distributed (IID) spaces. However, this assumption often fails in the presence
of out-of-distribution (OOD) data, resulting in significant performance
degradation. In this study, we construct a Structural Causal Model (SCM) to
analyze interaction data, revealing that environmental confounders (e.g., the
COVID-19 pandemic) lead to unstable correlations in GNN-based models, thus
impairing their generalization to OOD data. To address this issue, we propose a
novel approach, graph representation learning via causal diffusion
(CausalDiffRec) for OOD recommendation. This method enhances the model's
generalization on OOD data by eliminating environmental confounding factors and
learning invariant graph representations. Specifically, we use backdoor
adjustment and variational inference to infer the real environmental
distribution, thereby eliminating the impact of environmental confounders. This
inferred distribution is then used as prior knowledge to guide the
representation learning in the reverse phase of the diffusion process to learn
the invariant representation. In addition, we provide a theoretical derivation
that proves optimizing the objective function of CausalDiffRec can encourage
the model to learn environment-invariant graph representations, thereby
achieving excellent generalization performance in recommendations under
distribution shifts. Our extensive experiments validate the effectiveness of
CausalDiffRec in improving the generalization of OOD data, and the average
improvement is up to 10.69% on Food, 18.83% on KuaiRec, 22.41% on Yelp2018, and
11.65% on Douban datasets.
|
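Editor's note on the CausalDiffRec record above: the method relies on backdoor adjustment over an environmental confounder, whose true distribution is then approximated with variational inference. For reference, the standard backdoor-adjustment identity is restated below; the symbols (interaction graph $G$, environment $e$) are illustrative and not necessarily the paper's own notation.

```latex
% Standard backdoor adjustment over an environment confounder $e$
% (notation illustrative; the paper's exact formulation may differ).
\begin{equation}
  P\bigl(Y \mid \mathrm{do}(G)\bigr)
    \;=\; \sum_{e \in \mathcal{E}} P\bigl(Y \mid G, e\bigr)\, P(e)
\end{equation}
% CausalDiffRec approximates the unobserved $P(e)$ via variational
% inference and uses it as a prior for the reverse diffusion phase.
```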
2408.05366 | Hany Farid | Sarah Barrington, Matyas Bohacek, Hany Farid | The DeepSpeak Dataset | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We describe a large-scale dataset - DeepSpeak - of real and deepfake footage
of people talking and gesturing in front of their webcams. The real videos in
this dataset consist of a total of 50 hours of footage from 500 diverse
individuals. Constituting more than 50 hours of footage, the fake videos
consist of a range of different state-of-the-art avatar, face-swap, and
lip-sync deepfakes with natural and AI-generated voices. We are regularly
releasing updated versions of this dataset with the latest deepfake
technologies. This preprint describes the construction of versions 1.0, 1.1,
and 2.0. This dataset is made freely available for research and non-commercial
uses; requests for commercial use will be considered.
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2024 22:29:43 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Aug 2024 22:26:55 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 18:02:39 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Barrington",
"Sarah",
""
],
[
"Bohacek",
"Matyas",
""
],
[
"Farid",
"Hany",
""
]
] | TITLE: The DeepSpeak Dataset
ABSTRACT: We describe a large-scale dataset - DeepSpeak - of real and deepfake footage
of people talking and gesturing in front of their webcams. The real videos in
this dataset consist of a total of 50 hours of footage from 500 diverse
individuals. Constituting more than 50 hours of footage, the fake videos
consist of a range of different state-of-the-art avatar, face-swap, and
lip-sync deepfakes with natural and AI-generated voices. We are regularly
releasing updated versions of this dataset with the latest deepfake
technologies. This preprint describes the construction of versions 1.0, 1.1,
and 2.0. This dataset is made freely available for research and non-commercial
uses; requests for commercial use will be considered.
|
2408.10561 | Qingyu Liu | Qingyu Liu, Longfei Song, Dongxing Xu, Yanhua Long | ICSD: An Open-source Dataset for Infant Cry and Snoring Detection | 11 pages, 6 figures | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection and analysis of infant cry and snoring events are crucial tasks
within the field of audio signal processing. While existing datasets for
general sound event detection are plentiful, they often fall short in providing
sufficient, strongly labeled data specific to infant cries and snoring. To
provide a benchmark dataset and thus foster the research of infant cry and
snoring detection, this paper introduces the Infant Cry and Snoring Detection
(ICSD) dataset, a novel, publicly available dataset specially designed for ICSD
tasks. The ICSD comprises three types of subsets: a real strongly labeled
subset with event-based labels annotated manually, a weakly labeled subset with
only clip-level event annotations, and a synthetic subset generated and labeled
with strong annotations. This paper provides a detailed description of the ICSD
creation process, including the challenges encountered and the solutions
adopted. We offer a comprehensive characterization of the dataset, discussing
its limitations and key factors for ICSD usage. Additionally, we conduct
extensive experiments on the ICSD dataset to establish baseline systems and
offer insights into the main factors when using this dataset for ICSD research.
Our goal is to develop a dataset that will be widely adopted by the community
as a new open benchmark for future ICSD research.
| [
{
"version": "v1",
"created": "Tue, 20 Aug 2024 06:01:50 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Dec 2024 03:14:40 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 16:23:00 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Liu",
"Qingyu",
""
],
[
"Song",
"Longfei",
""
],
[
"Xu",
"Dongxing",
""
],
[
"Long",
"Yanhua",
""
]
] | TITLE: ICSD: An Open-source Dataset for Infant Cry and Snoring Detection
ABSTRACT: The detection and analysis of infant cry and snoring events are crucial tasks
within the field of audio signal processing. While existing datasets for
general sound event detection are plentiful, they often fall short in providing
sufficient, strongly labeled data specific to infant cries and snoring. To
provide a benchmark dataset and thus foster the research of infant cry and
snoring detection, this paper introduces the Infant Cry and Snoring Detection
(ICSD) dataset, a novel, publicly available dataset specially designed for ICSD
tasks. The ICSD comprises three types of subsets: a real strongly labeled
subset with event-based labels annotated manually, a weakly labeled subset with
only clip-level event annotations, and a synthetic subset generated and labeled
with strong annotations. This paper provides a detailed description of the ICSD
creation process, including the challenges encountered and the solutions
adopted. We offer a comprehensive characterization of the dataset, discussing
its limitations and key factors for ICSD usage. Additionally, we conduct
extensive experiments on the ICSD dataset to establish baseline systems and
offer insights into the main factors when using this dataset for ICSD research.
Our goal is to develop a dataset that will be widely adopted by the community
as a new open benchmark for future ICSD research.
|
2408.11878 | Xiao-Yang Liu | Jimin Huang, Mengxi Xiao, Dong Li, Zihao Jiang, Yuzhe Yang, Yifei
Zhang, Lingfei Qian, Yan Wang, Xueqing Peng, Yang Ren, Ruoyu Xiang, Zhengyu
Chen, Xiao Zhang, Yueru He, Weiguang Han, Shunian Chen, Lihang Shen, Daniel
Kim, Yangyang Yu, Yupeng Cao, Zhiyang Deng, Haohang Li, Duanyu Feng, Yongfu
Dai, VijayaSai Somasundaram, Peng Lu, Guojun Xiong, Zhiwei Liu, Zheheng Luo,
Zhiyuan Yao, Ruey-Ling Weng, Meikang Qiu, Kaleb E Smith, Honghai Yu, Yanzhao
Lai, Min Peng, Jian-Yun Nie, Jordan W. Suchow, Xiao-Yang Liu, Benyou Wang,
Alejandro Lopez-Lira, Qianqian Xie, Sophia Ananiadou and Junichi Tsujii | Open-FinLLMs: Open Multimodal Large Language Models for Financial
Applications | 33 pages, 13 figures | null | null | null | cs.CL cs.CE q-fin.CP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Financial LLMs hold promise for advancing financial tasks and domain-specific
applications. However, they are limited by scarce corpora, weak multimodal
capabilities, and narrow evaluations, making them less suited for real-world
application. To address this, we introduce \textit{Open-FinLLMs}, the first
open-source multimodal financial LLMs designed to handle diverse tasks across
text, tabular, time-series, and chart data, excelling in zero-shot, few-shot,
and fine-tuning settings. The suite includes FinLLaMA, pre-trained on a
comprehensive 52-billion-token corpus; FinLLaMA-Instruct, fine-tuned with 573K
financial instructions; and FinLLaVA, enhanced with 1.43M multimodal tuning
pairs for strong cross-modal reasoning. We comprehensively evaluate
Open-FinLLMs across 14 financial tasks, 30 datasets, and 4 multimodal tasks in
zero-shot, few-shot, and supervised fine-tuning settings, introducing two new
multimodal evaluation datasets. Our results show that Open-FinLLMs outperforms
advanced financial and general LLMs such as GPT-4, across financial NLP,
decision-making, and multi-modal tasks, highlighting their potential to tackle
real-world challenges. To foster innovation and collaboration across academia
and industry, we release all codes
(https://anonymous.4open.science/r/PIXIU2-0D70/B1D7/LICENSE) and models under
OSI-approved licenses.
| [
{
"version": "v1",
"created": "Tue, 20 Aug 2024 16:15:28 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 14:18:35 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Huang",
"Jimin",
""
],
[
"Xiao",
"Mengxi",
""
],
[
"Li",
"Dong",
""
],
[
"Jiang",
"Zihao",
""
],
[
"Yang",
"Yuzhe",
""
],
[
"Zhang",
"Yifei",
""
],
[
"Qian",
"Lingfei",
""
],
[
"Wang",
"Yan",
""
],
[
"Peng",
"Xueqing",
""
],
[
"Ren",
"Yang",
""
],
[
"Xiang",
"Ruoyu",
""
],
[
"Chen",
"Zhengyu",
""
],
[
"Zhang",
"Xiao",
""
],
[
"He",
"Yueru",
""
],
[
"Han",
"Weiguang",
""
],
[
"Chen",
"Shunian",
""
],
[
"Shen",
"Lihang",
""
],
[
"Kim",
"Daniel",
""
],
[
"Yu",
"Yangyang",
""
],
[
"Cao",
"Yupeng",
""
],
[
"Deng",
"Zhiyang",
""
],
[
"Li",
"Haohang",
""
],
[
"Feng",
"Duanyu",
""
],
[
"Dai",
"Yongfu",
""
],
[
"Somasundaram",
"VijayaSai",
""
],
[
"Lu",
"Peng",
""
],
[
"Xiong",
"Guojun",
""
],
[
"Liu",
"Zhiwei",
""
],
[
"Luo",
"Zheheng",
""
],
[
"Yao",
"Zhiyuan",
""
],
[
"Weng",
"Ruey-Ling",
""
],
[
"Qiu",
"Meikang",
""
],
[
"Smith",
"Kaleb E",
""
],
[
"Yu",
"Honghai",
""
],
[
"Lai",
"Yanzhao",
""
],
[
"Peng",
"Min",
""
],
[
"Nie",
"Jian-Yun",
""
],
[
"Suchow",
"Jordan W.",
""
],
[
"Liu",
"Xiao-Yang",
""
],
[
"Wang",
"Benyou",
""
],
[
"Lopez-Lira",
"Alejandro",
""
],
[
"Xie",
"Qianqian",
""
],
[
"Ananiadou",
"Sophia",
""
],
[
"Tsujii",
"Junichi",
""
]
] | TITLE: Open-FinLLMs: Open Multimodal Large Language Models for Financial
Applications
ABSTRACT: Financial LLMs hold promise for advancing financial tasks and domain-specific
applications. However, they are limited by scarce corpora, weak multimodal
capabilities, and narrow evaluations, making them less suited for real-world
application. To address this, we introduce \textit{Open-FinLLMs}, the first
open-source multimodal financial LLMs designed to handle diverse tasks across
text, tabular, time-series, and chart data, excelling in zero-shot, few-shot,
and fine-tuning settings. The suite includes FinLLaMA, pre-trained on a
comprehensive 52-billion-token corpus; FinLLaMA-Instruct, fine-tuned with 573K
financial instructions; and FinLLaVA, enhanced with 1.43M multimodal tuning
pairs for strong cross-modal reasoning. We comprehensively evaluate
Open-FinLLMs across 14 financial tasks, 30 datasets, and 4 multimodal tasks in
zero-shot, few-shot, and supervised fine-tuning settings, introducing two new
multimodal evaluation datasets. Our results show that Open-FinLLMs outperforms
advanced financial and general LLMs such as GPT-4, across financial NLP,
decision-making, and multi-modal tasks, highlighting their potential to tackle
real-world challenges. To foster innovation and collaboration across academia
and industry, we release all codes
(https://anonymous.4open.science/r/PIXIU2-0D70/B1D7/LICENSE) and models under
OSI-approved licenses.
|
2409.00822 | Xi Xie | Xi Xie, Yuebo Luo, Hongwu Peng, and Caiwen Ding | RTop-K: Ultra-Fast Row-Wise Top-K Selection for Neural Network
Acceleration on GPUs | ICLR 2025 Conference | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Top-k selection algorithms are fundamental in a wide range of applications,
including high-performance computing, information retrieval, big data
processing, and neural network model training. In this paper, we present
RTop-K, a highly efficient parallel row-wise top-k selection algorithm
specifically designed for GPUs. RTop-K leverages a binary search-based approach
to optimize row-wise top-k selection, providing a scalable and accelerated
solution. We conduct a detailed analysis of early stopping in our algorithm,
showing that it effectively maintains the testing accuracy of neural network
models while substantially improving performance. Our GPU implementation of
RTop-K demonstrates superior performance over state-of-the-art row-wise top-k
GPU implementations, achieving an average speed-up of up to 11.49$\times$ with
early stopping and 7.29$\times$ without early stopping. Moreover, RTop-K
accelerates the overall training workflow of MaxK-GNNs, delivering speed-ups
ranging from 11.97% to 33.29% across different models and datasets.
| [
{
"version": "v1",
"created": "Sun, 1 Sep 2024 19:43:40 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Sep 2024 16:24:05 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 21:35:56 GMT"
},
{
"version": "v4",
"created": "Wed, 2 Apr 2025 06:22:29 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Xie",
"Xi",
""
],
[
"Luo",
"Yuebo",
""
],
[
"Peng",
"Hongwu",
""
],
[
"Ding",
"Caiwen",
""
]
] | TITLE: RTop-K: Ultra-Fast Row-Wise Top-K Selection for Neural Network
Acceleration on GPUs
ABSTRACT: Top-k selection algorithms are fundamental in a wide range of applications,
including high-performance computing, information retrieval, big data
processing, and neural network model training. In this paper, we present
RTop-K, a highly efficient parallel row-wise top-k selection algorithm
specifically designed for GPUs. RTop-K leverages a binary search-based approach
to optimize row-wise top-k selection, providing a scalable and accelerated
solution. We conduct a detailed analysis of early stopping in our algorithm,
showing that it effectively maintains the testing accuracy of neural network
models while substantially improving performance. Our GPU implementation of
RTop-K demonstrates superior performance over state-of-the-art row-wise top-k
GPU implementations, achieving an average speed-up of up to 11.49$\times$ with
early stopping and 7.29$\times$ without early stopping. Moreover, RTop-K
accelerates the overall training workflow of MaxK-GNNs, delivering speed-ups
ranging from 11.97% to 33.29% across different models and datasets.
|
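Editor's note on the RTop-K record above: the abstract describes a binary-search-based row-wise top-k selection with a capped iteration count ("early stopping"). The sketch below illustrates that idea on the CPU with NumPy; it is a didactic approximation, not the paper's GPU kernel, and the function name and tolerance handling are assumptions.

```python
import numpy as np

def rowwise_topk_binary_search(x, k, max_iters=32):
    """Illustrative row-wise top-k via binary search on a per-row threshold.

    For each row, search for a threshold t so that roughly k elements are
    >= t, then gather those elements. Capping max_iters mirrors the
    "early stopping" discussed in the abstract.
    """
    n_rows, _ = x.shape
    lo, hi = x.min(axis=1), x.max(axis=1)
    for _ in range(max_iters):
        mid = (lo + hi) / 2.0
        counts = (x >= mid[:, None]).sum(axis=1)
        hi = np.where(counts < k, mid, hi)    # threshold too high -> lower it
        lo = np.where(counts >= k, mid, lo)   # enough elements -> raise it
    # Gather (approximately) the k largest entries per row using the threshold.
    out_vals = np.full((n_rows, k), -np.inf)
    out_idx = np.full((n_rows, k), -1, dtype=np.int64)
    for r in range(n_rows):
        idx = np.nonzero(x[r] >= lo[r])[0]
        idx = idx[np.argsort(-x[r, idx])][:k]     # resolve ties / overshoot
        out_vals[r, :len(idx)], out_idx[r, :len(idx)] = x[r, idx], idx
    return out_vals, out_idx

vals, idx = rowwise_topk_binary_search(np.random.randn(4, 1000), k=8)
print(vals.shape, idx.shape)
```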
2409.15273 | Yehonathan Litman | Yehonathan Litman, Or Patashnik, Kangle Deng, Aviral Agrawal,
Rushikesh Zawar, Fernando De la Torre, Shubham Tulsiani | MaterialFusion: Enhancing Inverse Rendering with Material Diffusion
Priors | 3DV 2025. Project Page, Data, & Code:
https://yehonathanlitman.github.io/material_fusion | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works in inverse rendering have shown promise in using multi-view
images of an object to recover shape, albedo, and materials. However, the
recovered components often fail to render accurately under new lighting
conditions due to the intrinsic challenge of disentangling albedo and material
properties from input images. To address this challenge, we introduce
MaterialFusion, an enhanced conventional 3D inverse rendering pipeline that
incorporates a 2D prior on texture and material properties. We present
StableMaterial, a 2D diffusion model prior that refines multi-lit data to
estimate the most likely albedo and material from given input appearances. This
model is trained on albedo, material, and relit image data derived from a
curated dataset of approximately ~12K artist-designed synthetic Blender objects
called BlenderVault. We incorporate this diffusion prior with an inverse
rendering framework where we use score distillation sampling (SDS) to guide the
optimization of the albedo and materials, improving relighting performance in
comparison with previous work. We validate MaterialFusion's relighting
performance on 4 datasets of synthetic and real objects under diverse
illumination conditions, showing our diffusion-aided approach significantly
improves the appearance of reconstructed objects under novel lighting
conditions. We intend to publicly release our BlenderVault dataset to support
further research in this field.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 17:59:06 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 22:35:49 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Litman",
"Yehonathan",
""
],
[
"Patashnik",
"Or",
""
],
[
"Deng",
"Kangle",
""
],
[
"Agrawal",
"Aviral",
""
],
[
"Zawar",
"Rushikesh",
""
],
[
"De la Torre",
"Fernando",
""
],
[
"Tulsiani",
"Shubham",
""
]
] | TITLE: MaterialFusion: Enhancing Inverse Rendering with Material Diffusion
Priors
ABSTRACT: Recent works in inverse rendering have shown promise in using multi-view
images of an object to recover shape, albedo, and materials. However, the
recovered components often fail to render accurately under new lighting
conditions due to the intrinsic challenge of disentangling albedo and material
properties from input images. To address this challenge, we introduce
MaterialFusion, an enhanced conventional 3D inverse rendering pipeline that
incorporates a 2D prior on texture and material properties. We present
StableMaterial, a 2D diffusion model prior that refines multi-lit data to
estimate the most likely albedo and material from given input appearances. This
model is trained on albedo, material, and relit image data derived from a
curated dataset of approximately ~12K artist-designed synthetic Blender objects
called BlenderVault. We incorporate this diffusion prior with an inverse
rendering framework where we use score distillation sampling (SDS) to guide the
optimization of the albedo and materials, improving relighting performance in
comparison with previous work. We validate MaterialFusion's relighting
performance on 4 datasets of synthetic and real objects under diverse
illumination conditions, showing our diffusion-aided approach significantly
improves the appearance of reconstructed objects under novel lighting
conditions. We intend to publicly release our BlenderVault dataset to support
further research in this field.
|
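Editor's note on the MaterialFusion record above: the pipeline guides albedo/material optimisation with score distillation sampling (SDS). The block below restates the standard SDS gradient from the text-to-image diffusion literature for orientation; the symbols follow the usual convention and are not necessarily the paper's exact notation.

```latex
% Standard score distillation sampling (SDS) gradient (DreamFusion-style);
% MaterialFusion applies SDS with its StableMaterial prior to guide the
% albedo and material optimisation.
\begin{equation}
  \nabla_{\theta}\,\mathcal{L}_{\mathrm{SDS}}
    \;=\;
  \mathbb{E}_{t,\epsilon}\!\left[
     w(t)\,\bigl(\hat{\epsilon}_{\phi}(x_t;\, y,\, t) - \epsilon\bigr)\,
     \frac{\partial x}{\partial \theta}
  \right]
\end{equation}
```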
2409.16902 | Chunhui Zhang | Chunhui Zhang, Li Liu, Guanjie Huang, Zhipeng Zhang, Hao Wen, Xi Zhou,
Shiming Ge, Yanfeng Wang | Underwater Camouflaged Object Tracking Meets Vision-Language SAM2 | Preprint.
https://github.com/983632847/Awesome-Multimodal-Object-Tracking | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decade, significant progress has been made in visual object
tracking, largely due to the availability of large-scale datasets. However,
these datasets have primarily focused on open-air scenarios and have largely
overlooked underwater animal tracking-especially the complex challenges posed
by camouflaged marine animals. To bridge this gap, we take a step forward by
proposing the first large-scale multi-modal underwater camouflaged object
tracking dataset, namely UW-COT220. Based on the proposed dataset, this work
first comprehensively evaluates current advanced visual object tracking
methods, including SAM- and SAM2-based trackers, in challenging underwater
environments, e.g., coral reefs. Our findings highlight the improvements of SAM2
over SAM, demonstrating its enhanced ability to handle the complexities of
underwater camouflaged objects. Furthermore, we propose a novel vision-language
tracking framework called VL-SAM2, based on the video foundation model SAM2.
Experimental results demonstrate that our VL-SAM2 achieves state-of-the-art
performance on the UW-COT220 dataset. The dataset and codes are available
at https://github.com/983632847/Awesome-Multimodal-Object-Tracking.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 13:10:03 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jan 2025 13:01:46 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 09:15:59 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhang",
"Chunhui",
""
],
[
"Liu",
"Li",
""
],
[
"Huang",
"Guanjie",
""
],
[
"Zhang",
"Zhipeng",
""
],
[
"Wen",
"Hao",
""
],
[
"Zhou",
"Xi",
""
],
[
"Ge",
"Shiming",
""
],
[
"Wang",
"Yanfeng",
""
]
] | TITLE: Underwater Camouflaged Object Tracking Meets Vision-Language SAM2
ABSTRACT: Over the past decade, significant progress has been made in visual object
tracking, largely due to the availability of large-scale datasets. However,
these datasets have primarily focused on open-air scenarios and have largely
overlooked underwater animal tracking-especially the complex challenges posed
by camouflaged marine animals. To bridge this gap, we take a step forward by
proposing the first large-scale multi-modal underwater camouflaged object
tracking dataset, namely UW-COT220. Based on the proposed dataset, this work
first comprehensively evaluates current advanced visual object tracking
methods, including SAM- and SAM2-based trackers, in challenging underwater
environments, e.g., coral reefs. Our findings highlight the improvements of SAM2
over SAM, demonstrating its enhanced ability to handle the complexities of
underwater camouflaged objects. Furthermore, we propose a novel vision-language
tracking framework called VL-SAM2, based on the video foundation model SAM2.
Experimental results demonstrate that our VL-SAM2 achieves state-of-the-art
performance on the UW-COT220 dataset. The dataset and codes are available
at https://github.com/983632847/Awesome-Multimodal-Object-Tracking.
|
2409.17004 | Fethiye Irmak Do\u{g}an | Fethiye Irmak Dogan, Maithili Patel, Weiyu Liu, Iolanda Leite, Sonia
Chernova | A Model-Agnostic Approach for Semantically Driven Disambiguation in
Human-Robot Interaction | Under review for 2025 IEEE International Conference on Robot & Human
Interactive Communication (RO-MAN), Supplementary video:
https://youtu.be/_P0v07Xc24Y, Dataset publicly available:
https://github.com/IrmakDogan/ExpressionDataset | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ambiguities are inevitable in human-robot interaction, especially when a
robot follows user instructions in a large, shared space. For example, if a
user asks the robot to find an object in a home environment with underspecified
instructions, the object could be in multiple locations depending on missing
factors. For instance, a bowl might be in the kitchen cabinet or on the dining
room table, depending on whether it is clean or dirty, full or empty, and the
presence of other objects around it. Previous works on object search have
assumed that the queried object is immediately visible to the robot or have
predicted object locations using one-shot inferences, which are likely to fail
for ambiguous or partially understood instructions. This paper focuses on these
gaps and presents a novel model-agnostic approach leveraging semantically
driven clarifications to enhance the robot's ability to locate queried objects
in fewer attempts. Specifically, we leverage different knowledge embedding
models, and when ambiguities arise, we propose an informative clarification
method, which follows an iterative prediction process. The user experiment
evaluation of our method shows that our approach is applicable to different
custom semantic encoders as well as LLMs, and informative clarifications
improve performance, enabling the robot to locate objects on its first
attempts. The user experiment data is publicly available at
https://github.com/IrmakDogan/ExpressionDataset.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 15:07:47 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 13:51:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Dogan",
"Fethiye Irmak",
""
],
[
"Patel",
"Maithili",
""
],
[
"Liu",
"Weiyu",
""
],
[
"Leite",
"Iolanda",
""
],
[
"Chernova",
"Sonia",
""
]
] | TITLE: A Model-Agnostic Approach for Semantically Driven Disambiguation in
Human-Robot Interaction
ABSTRACT: Ambiguities are inevitable in human-robot interaction, especially when a
robot follows user instructions in a large, shared space. For example, if a
user asks the robot to find an object in a home environment with underspecified
instructions, the object could be in multiple locations depending on missing
factors. For instance, a bowl might be in the kitchen cabinet or on the dining
room table, depending on whether it is clean or dirty, full or empty, and the
presence of other objects around it. Previous works on object search have
assumed that the queried object is immediately visible to the robot or have
predicted object locations using one-shot inferences, which are likely to fail
for ambiguous or partially understood instructions. This paper focuses on these
gaps and presents a novel model-agnostic approach leveraging semantically
driven clarifications to enhance the robot's ability to locate queried objects
in fewer attempts. Specifically, we leverage different knowledge embedding
models, and when ambiguities arise, we propose an informative clarification
method, which follows an iterative prediction process. The user experiment
evaluation of our method shows that our approach is applicable to different
custom semantic encoders as well as LLMs, and informative clarifications
improve performance, enabling the robot to locate objects on its first
attempts. The user experiment data is publicly available at
https://github.com/IrmakDogan/ExpressionDataset.
|
2410.03862 | Kaleb Domenico Ruscitti | Kaleb D. Ruscitti and Leland McInnes | Improving Mapper's Robustness by Varying Resolution According to
Lens-Space Density | 35 pages, 9 figures | null | null | null | cs.LG math.AT stat.ML | http://creativecommons.org/licenses/by/4.0/ | We propose a modification of the Mapper algorithm that removes the assumption
of a single resolution scale across semantic space and improves the robustness
of the results under change of parameters. Our work is motivated by datasets
where the density in the image of the Morse-type function (the lens-space
density) varies widely. For such datasets, tuning the resolution parameter of
Mapper is difficult because small changes can lead to significant variations in
the output. By improving the robustness of the output under these variations,
our method makes it easier to tune the resolution for datasets with highly
variable lens-space density. This improvement is achieved by generalising the
type of permitted cover for Mapper and incorporating the lens-space density
into the cover. Furthermore, we prove that for covers satisfying natural
assumptions, the graph produced by Mapper still converges in bottleneck
distance to the Reeb graph of the Rips complex of the data, while possibly
capturing more topological features than a standard Mapper cover. Finally, we
discuss implementation details and present the results of computational
experiments. We also provide an accompanying reference implementation.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 18:51:44 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 20:21:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ruscitti",
"Kaleb D.",
""
],
[
"McInnes",
"Leland",
""
]
] | TITLE: Improving Mapper's Robustness by Varying Resolution According to
Lens-Space Density
ABSTRACT: We propose a modification of the Mapper algorithm that removes the assumption
of a single resolution scale across semantic space and improves the robustness
of the results under change of parameters. Our work is motivated by datasets
where the density in the image of the Morse-type function (the lens-space
density) varies widely. For such datasets, tuning the resolution parameter of
Mapper is difficult because small changes can lead to significant variations in
the output. By improving the robustness of the output under these variations,
our method makes it easier to tune the resolution for datasets with highly
variable lens-space density. This improvement is achieved by generalising the
type of permitted cover for Mapper and incorporating the lens-space density
into the cover. Furthermore, we prove that for covers satisfying natural
assumptions, the graph produced by Mapper still converges in bottleneck
distance to the Reeb graph of the Rips complex of the data, while possibly
capturing more topological features than a standard Mapper cover. Finally, we
discuss implementation details and present the results of computational
experiments. We also provide an accompanying reference implementation.
|
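Editor's note on the Mapper record above: the paper generalises the cover used by the Mapper algorithm. As context, the sketch below is the fixed-resolution vanilla Mapper (uniform overlapping interval cover of the lens, clustering of each preimage, nerve graph); the density-adaptive cover that is the paper's contribution is not implemented, and all parameter values are illustrative.

```python
import itertools
import numpy as np
from sklearn.cluster import DBSCAN

def vanilla_mapper(X, lens, n_intervals=10, overlap=0.3, eps=0.5):
    """Minimal standard Mapper with a uniform interval cover of the lens.

    Every interval has the same width regardless of lens-space density,
    which is exactly the assumption the abstract argues against.
    """
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes, memberships = [], []
    for i in range(n_intervals):
        a = lo + i * width - overlap * width
        b = lo + (i + 1) * width + overlap * width
        mask = np.nonzero((lens >= a) & (lens <= b))[0]
        if len(mask) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=2).fit_predict(X[mask])
        for lab in set(labels) - {-1}:            # skip DBSCAN noise points
            nodes.append((i, lab))
            memberships.append(set(mask[labels == lab]))
    # Edge between two nodes whenever their clusters share a data point.
    edges = [(u, v) for u, v in itertools.combinations(range(len(nodes)), 2)
             if memberships[u] & memberships[v]]
    return nodes, edges

X = np.random.randn(500, 3)
lens = X[:, 0]                                    # a simple coordinate lens
nodes, edges = vanilla_mapper(X, lens)
print(len(nodes), "nodes,", len(edges), "edges")
```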
2410.05939 | Chao Sun | Chao Sun, Yaobo Liang, Yaming Yang, Shilin Xu, Tianmeng Yang, Yunhai
Tong | Direct Preference Optimization for LLM-Enhanced Recommendation Systems | This paper has been accepted to ICME 2025 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have exhibited remarkable performance across a
wide range of domains, motivating research into their potential for
recommendation systems. Early efforts have leveraged LLMs' rich knowledge and
strong generalization capabilities via in-context learning, where
recommendation tasks are framed as prompts. However, LLM performance in
recommendation scenarios remains limited due to the mismatch between their
pretraining objectives and recommendation tasks, as well as the lack of
recommendation-specific data during pretraining. To address these challenges,
we propose DPO4Rec, a novel framework that integrates Direct Preference
Optimization (DPO) into LLM-enhanced recommendation systems. First, we prompt
the LLM to infer user preferences from historical interactions, which are then
used to augment traditional ID-based sequential recommendation models. Next, we
train a reward model based on knowledge-augmented recommendation architectures
to assess the quality of LLM-generated reasoning. Using this, we select the
highest- and lowest-ranked responses from N samples to construct a dataset for
LLM fine-tuning. Finally, we apply a structure alignment strategy via DPO to
align the LLM's outputs with desirable recommendation behavior. Extensive
experiments show that DPO4Rec significantly improves re-ranking performance
over strong baselines, demonstrating enhanced instruction-following
capabilities of LLMs in recommendation tasks.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 11:42:37 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 06:22:49 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Sun",
"Chao",
""
],
[
"Liang",
"Yaobo",
""
],
[
"Yang",
"Yaming",
""
],
[
"Xu",
"Shilin",
""
],
[
"Yang",
"Tianmeng",
""
],
[
"Tong",
"Yunhai",
""
]
] | TITLE: Direct Preference Optimization for LLM-Enhanced Recommendation Systems
ABSTRACT: Large Language Models (LLMs) have exhibited remarkable performance across a
wide range of domains, motivating research into their potential for
recommendation systems. Early efforts have leveraged LLMs' rich knowledge and
strong generalization capabilities via in-context learning, where
recommendation tasks are framed as prompts. However, LLM performance in
recommendation scenarios remains limited due to the mismatch between their
pretraining objectives and recommendation tasks, as well as the lack of
recommendation-specific data during pretraining. To address these challenges,
we propose DPO4Rec, a novel framework that integrates Direct Preference
Optimization (DPO) into LLM-enhanced recommendation systems. First, we prompt
the LLM to infer user preferences from historical interactions, which are then
used to augment traditional ID-based sequential recommendation models. Next, we
train a reward model based on knowledge-augmented recommendation architectures
to assess the quality of LLM-generated reasoning. Using this, we select the
highest- and lowest-ranked responses from N samples to construct a dataset for
LLM fine-tuning. Finally, we apply a structure alignment strategy via DPO to
align the LLM's outputs with desirable recommendation behavior. Extensive
experiments show that DPO4Rec significantly improves re-ranking performance
over strong baselines, demonstrating enhanced instruction-following
capabilities of LLMs in recommendation tasks.
|
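Editor's note on the DPO4Rec record above: the fine-tuning stage uses the standard direct preference optimization objective on (chosen, rejected) response pairs. The snippet below computes that loss from summed token log-probabilities; the pair-construction pipeline (reward model, highest/lowest of N samples) described in the abstract is not reproduced, and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective on (chosen w, rejected l) response pairs.

    Inputs are summed token log-probabilities under the trainable policy
    and the frozen reference model.
    """
    chosen_logratio = policy_logp_w - ref_logp_w
    rejected_logratio = policy_logp_l - ref_logp_l
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Toy usage with made-up log-probabilities for a batch of 4 pairs.
lw, ll = torch.randn(4) - 1.0, torch.randn(4) - 2.0
rw, rl = torch.randn(4) - 1.5, torch.randn(4) - 1.5
print(dpo_loss(lw, ll, rw, rl).item())
```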
2410.07022 | Mohd Omama | Mohammad Omama, Po-han Li, Sandeep P. Chinchali | Exploiting Distribution Constraints for Scalable and Efficient Image
Retrieval | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Image retrieval is crucial in robotics and computer vision, with downstream
applications in robot place recognition and vision-based product
recommendations. Modern retrieval systems face two key challenges: scalability
and efficiency. State-of-the-art image retrieval systems train specific neural
networks for each dataset, an approach that lacks scalability. Furthermore,
since retrieval speed is directly proportional to embedding size, existing
systems that use large embeddings lack efficiency. To tackle scalability,
recent works propose using off-the-shelf foundation models. However, these
models, though applicable across datasets, fall short in achieving performance
comparable to that of dataset-specific models. Our key observation is that,
while foundation models capture necessary subtleties for effective retrieval,
the underlying distribution of their embedding space can negatively impact
cosine similarity searches. We introduce Autoencoders with Strong Variance
Constraints (AE-SVC), which, when used for projection, significantly improves
the performance of foundation models. We provide an in-depth theoretical
analysis of AE-SVC. Addressing efficiency, we introduce Single-shot Similarity
Space Distillation ((SS)$_2$D), a novel approach to learn embeddings with
adaptive sizes that offers a better trade-off between size and performance. We
conducted extensive experiments on four retrieval datasets, including Stanford
Online Products (SoP) and Pittsburgh30k, using four different off-the-shelf
foundation models, including DinoV2 and CLIP. AE-SVC demonstrates up to a
$16\%$ improvement in retrieval performance, while (SS)$_2$D shows a further
$10\%$ improvement for smaller embedding sizes.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 16:05:16 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 19:31:31 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 22:45:37 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Omama",
"Mohammad",
""
],
[
"Li",
"Po-han",
""
],
[
"Chinchali",
"Sandeep P.",
""
]
] | TITLE: Exploiting Distribution Constraints for Scalable and Efficient Image
Retrieval
ABSTRACT: Image retrieval is crucial in robotics and computer vision, with downstream
applications in robot place recognition and vision-based product
recommendations. Modern retrieval systems face two key challenges: scalability
and efficiency. State-of-the-art image retrieval systems train specific neural
networks for each dataset, an approach that lacks scalability. Furthermore,
since retrieval speed is directly proportional to embedding size, existing
systems that use large embeddings lack efficiency. To tackle scalability,
recent works propose using off-the-shelf foundation models. However, these
models, though applicable across datasets, fall short in achieving performance
comparable to that of dataset-specific models. Our key observation is that,
while foundation models capture necessary subtleties for effective retrieval,
the underlying distribution of their embedding space can negatively impact
cosine similarity searches. We introduce Autoencoders with Strong Variance
Constraints (AE-SVC), which, when used for projection, significantly improves
the performance of foundation models. We provide an in-depth theoretical
analysis of AE-SVC. Addressing efficiency, we introduce Single-shot Similarity
Space Distillation ((SS)$_2$D), a novel approach to learn embeddings with
adaptive sizes that offers a better trade-off between size and performance. We
conducted extensive experiments on four retrieval datasets, including Stanford
Online Products (SoP) and Pittsburgh30k, using four different off-the-shelf
foundation models, including DinoV2 and CLIP. AE-SVC demonstrates up to a
$16\%$ improvement in retrieval performance, while (SS)$_2$D shows a further
$10\%$ improvement for smaller embedding sizes.
|
2410.08407 | Aida Mohammadshahi | Aida Mohammadshahi, Yani Ioannou | What is Left After Distillation? How Knowledge Transfer Impacts Fairness
and Bias | Published in Transactions on Machine Learning Research (TMLR), March
2024. https://openreview.net/forum?id=xBbj46Y2fN | Transactions on Machine Learning Research, 2835-8856, March 2025.
https://openreview.net/forum?id=xBbj46Y2fN | null | null | cs.LG cs.CY stat.ML | http://creativecommons.org/licenses/by/4.0/ | Knowledge Distillation is a commonly used Deep Neural Network (DNN)
compression method, which often maintains overall generalization performance.
However, we show that even for balanced image classification datasets, such as
CIFAR-100, Tiny ImageNet and ImageNet, as many as 41% of the classes are
statistically significantly affected by distillation when comparing class-wise
accuracy (i.e. class bias) between a teacher/distilled student or distilled
student/non-distilled student model. Changes in class bias are not necessarily
an undesirable outcome when considered outside of the context of a model's
usage. Using two common fairness metrics, Demographic Parity Difference (DPD)
and Equalized Odds Difference (EOD) on models trained with the CelebA,
Trifeature, and HateXplain datasets, our results suggest that increasing the
distillation temperature improves the distilled student model's fairness, and
the distilled student fairness can even surpass the fairness of the teacher
model at high temperatures. Additionally, we examine individual fairness,
ensuring similar instances receive similar predictions. Our results confirm
that higher temperatures also improve the distilled student model's individual
fairness. This study highlights the uneven effects of distillation on certain
classes and its potentially significant role in fairness, emphasizing that
caution is warranted when using distilled models for sensitive application
domains.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 22:43:00 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 00:08:06 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Mohammadshahi",
"Aida",
""
],
[
"Ioannou",
"Yani",
""
]
] | TITLE: What is Left After Distillation? How Knowledge Transfer Impacts Fairness
and Bias
ABSTRACT: Knowledge Distillation is a commonly used Deep Neural Network (DNN)
compression method, which often maintains overall generalization performance.
However, we show that even for balanced image classification datasets, such as
CIFAR-100, Tiny ImageNet and ImageNet, as many as 41% of the classes are
statistically significantly affected by distillation when comparing class-wise
accuracy (i.e. class bias) between a teacher/distilled student or distilled
student/non-distilled student model. Changes in class bias are not necessarily
an undesirable outcome when considered outside of the context of a model's
usage. Using two common fairness metrics, Demographic Parity Difference (DPD)
and Equalized Odds Difference (EOD) on models trained with the CelebA,
Trifeature, and HateXplain datasets, our results suggest that increasing the
distillation temperature improves the distilled student model's fairness, and
the distilled student fairness can even surpass the fairness of the teacher
model at high temperatures. Additionally, we examine individual fairness,
ensuring similar instances receive similar predictions. Our results confirm
that higher temperatures also improve the distilled student model's individual
fairness. This study highlights the uneven effects of distillation on certain
classes and its potentially significant role in fairness, emphasizing that
caution is warranted when using distilled models for sensitive application
domains.
|
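Editor's note on the distillation-fairness record above: the abstract reports Demographic Parity Difference (DPD) and Equalized Odds Difference (EOD). The snippet below computes both metrics from binary predictions and a sensitive attribute using their standard definitions; the paper's exact evaluation protocol (models, datasets, thresholds) is not reproduced.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """DPD: largest gap in positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """EOD: largest gap in TPR or FPR across groups."""
    gaps = []
    for y in (0, 1):                      # condition on the true label
        rates = [y_pred[(group == g) & (y_true == y)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy usage with binary predictions and a binary sensitive attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(demographic_parity_difference(y_pred, group))
print(equalized_odds_difference(y_true, y_pred, group))
```

Comparing a teacher and a distilled student on these two quantities reproduces the kind of class-bias and fairness measurement the abstract describes.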
2410.10166 | Yongjin Yang | Yongjin Yang, Sihyeon Kim, Hojung Jung, Sangmin Bae, SangMook Kim,
Se-Young Yun, Kimin Lee | Automated Filtering of Human Feedback Data for Aligning Text-to-Image
Diffusion Models | ICLR 2025; Project Page available at :
https://sprain02.github.io/FiFA/ | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Fine-tuning text-to-image diffusion models with human feedback is an
effective method for aligning model behavior with human intentions. However,
this alignment process often suffers from slow convergence due to the large
size and noise present in human feedback datasets. In this work, we propose
FiFA, a novel automated data filtering algorithm designed to enhance the
fine-tuning of diffusion models using human feedback datasets with direct
preference optimization (DPO). Specifically, our approach selects data by
solving an optimization problem to maximize three components: preference
margin, text quality, and text diversity. The concept of preference margin is
used to identify samples that are highly informative in addressing the noisy
nature of the feedback dataset; this margin is calculated using a proxy reward model.
Additionally, we incorporate text quality, assessed by large language models to
prevent harmful contents, and consider text diversity through a k-nearest
neighbor entropy estimator to improve generalization. Finally, we integrate all
these components into an optimization process, approximating the solution
by assigning an importance score to each data pair and selecting the most
important ones. As a result, our method efficiently filters data automatically,
without the need for manual intervention, and can be applied to any large-scale
dataset. Experimental results show that FiFA significantly enhances training
stability and achieves better performance, being preferred by humans 17% more,
while using less than 0.5% of the full data and thus 1% of the GPU hours
compared to utilizing full human feedback datasets.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 05:18:07 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 08:25:01 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Yang",
"Yongjin",
""
],
[
"Kim",
"Sihyeon",
""
],
[
"Jung",
"Hojung",
""
],
[
"Bae",
"Sangmin",
""
],
[
"Kim",
"SangMook",
""
],
[
"Yun",
"Se-Young",
""
],
[
"Lee",
"Kimin",
""
]
] | TITLE: Automated Filtering of Human Feedback Data for Aligning Text-to-Image
Diffusion Models
ABSTRACT: Fine-tuning text-to-image diffusion models with human feedback is an
effective method for aligning model behavior with human intentions. However,
this alignment process often suffers from slow convergence due to the large
size and noise present in human feedback datasets. In this work, we propose
FiFA, a novel automated data filtering algorithm designed to enhance the
fine-tuning of diffusion models using human feedback datasets with direct
preference optimization (DPO). Specifically, our approach selects data by
solving an optimization problem to maximize three components: preference
margin, text quality, and text diversity. The concept of preference margin is
used to identify samples that are highly informative in addressing the noisy
nature of the feedback dataset; this margin is calculated using a proxy reward model.
Additionally, we incorporate text quality, assessed by large language models to
prevent harmful contents, and consider text diversity through a k-nearest
neighbor entropy estimator to improve generalization. Finally, we integrate all
these components into an optimization process, approximating the solution
by assigning an importance score to each data pair and selecting the most
important ones. As a result, our method efficiently filters data automatically,
without the need for manual intervention, and can be applied to any large-scale
dataset. Experimental results show that FiFA significantly enhances training
stability and achieves better performance, being preferred by humans 17% more,
while using less than 0.5% of the full data and thus 1% of the GPU hours
compared to utilizing full human feedback datasets.
|
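Editor's note on the FiFA record above: one of its three selection signals is the preference margin under a proxy reward model, with only a small fraction of pairs kept. The sketch below shows margin-only selection; `reward_fn` and the tuple layout are hypothetical stand-ins, and the text-quality and diversity terms plus the joint optimization from the abstract are omitted.

```python
import numpy as np

def select_by_preference_margin(pairs, reward_fn, keep_fraction=0.005):
    """Keep only the pairs with the largest proxy-reward margin.

    `pairs` holds (prompt, preferred_image, rejected_image) tuples and
    `reward_fn` stands in for the proxy reward model; both are illustrative
    assumptions, not the paper's interfaces. keep_fraction=0.005 echoes the
    "less than 0.5% of the full data" figure in the abstract.
    """
    margins = np.array([reward_fn(p, win) - reward_fn(p, lose)
                        for p, win, lose in pairs])
    n_keep = max(1, int(keep_fraction * len(pairs)))
    keep_idx = np.argsort(-margins)[:n_keep]      # most informative pairs first
    return [pairs[i] for i in keep_idx]

# Toy usage with a random stand-in reward model.
rng = np.random.default_rng(0)
toy_pairs = [(f"prompt {i}", f"img_w_{i}", f"img_l_{i}") for i in range(1000)]
toy_reward = lambda prompt, img: rng.normal()
kept = select_by_preference_margin(toy_pairs, toy_reward)
print(len(kept), "pairs kept")
```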
2410.11247 | Medha Sawhney | Naveen Gupta, Medha Sawhney, Arka Daw, Youzuo Lin, Anuj Karpatne | A Unified Framework for Forward and Inverse Problems in Subsurface
Imaging using Latent Space Translations | Accepted at ICLR 2025 | null | null | null | cs.LG math-ph math.MP physics.geo-ph | http://creativecommons.org/licenses/by/4.0/ | In subsurface imaging, learning the mapping from velocity maps to seismic
waveforms (forward problem) and waveforms to velocity (inverse problem) is
important for several applications. While traditional techniques for solving
forward and inverse problems are computationally prohibitive, there is a
growing interest in leveraging recent advances in deep learning to learn the
mapping between velocity maps and seismic waveform images directly from data.
Despite the variety of architectures explored in previous works, several open
questions still remain unanswered such as the effect of latent space sizes, the
importance of manifold learning, the complexity of translation models, and the
value of jointly solving forward and inverse problems. We propose a unified
framework to systematically characterize prior research in this area termed the
Generalized Forward-Inverse (GFI) framework, building on the assumption of
manifolds and latent space translations. We show that GFI encompasses previous
works in deep learning for subsurface imaging, which can be viewed as specific
instantiations of GFI. We also propose two new model architectures within the
framework of GFI: Latent U-Net and Invertible X-Net, leveraging the power of
U-Nets for domain translation and the ability of IU-Nets to simultaneously
learn forward and inverse translations, respectively. We show that our proposed
models achieve state-of-the-art (SOTA) performance for forward and inverse
problems on a wide range of synthetic datasets, and also investigate their
zero-shot effectiveness on two real-world-like datasets. Our code is available
at https://github.com/KGML-lab/Generalized-Forward-Inverse-Framework-for-DL4SI
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 04:07:25 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2024 01:41:49 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 03:10:37 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Gupta",
"Naveen",
""
],
[
"Sawhney",
"Medha",
""
],
[
"Daw",
"Arka",
""
],
[
"Lin",
"Youzuo",
""
],
[
"Karpatne",
"Anuj",
""
]
] | TITLE: A Unified Framework for Forward and Inverse Problems in Subsurface
Imaging using Latent Space Translations
ABSTRACT: In subsurface imaging, learning the mapping from velocity maps to seismic
waveforms (forward problem) and waveforms to velocity (inverse problem) is
important for several applications. While traditional techniques for solving
forward and inverse problems are computationally prohibitive, there is a
growing interest in leveraging recent advances in deep learning to learn the
mapping between velocity maps and seismic waveform images directly from data.
Despite the variety of architectures explored in previous works, several open
questions still remain unanswered such as the effect of latent space sizes, the
importance of manifold learning, the complexity of translation models, and the
value of jointly solving forward and inverse problems. We propose a unified
framework to systematically characterize prior research in this area termed the
Generalized Forward-Inverse (GFI) framework, building on the assumption of
manifolds and latent space translations. We show that GFI encompasses previous
works in deep learning for subsurface imaging, which can be viewed as specific
instantiations of GFI. We also propose two new model architectures within the
framework of GFI: Latent U-Net and Invertible X-Net, leveraging the power of
U-Nets for domain translation and the ability of IU-Nets to simultaneously
learn forward and inverse translations, respectively. We show that our proposed
models achieve state-of-the-art (SOTA) performance for forward and inverse
problems on a wide range of synthetic datasets, and also investigate their
zero-shot effectiveness on two real-world-like datasets. Our code is available
at https://github.com/KGML-lab/Generalized-Forward-Inverse-Framework-for-DL4SI
|
2410.12082 | Christiaan Geldenhuys | Christiaan M. Geldenhuys, Thomas R. Niesler | Learning to rumble: Automated elephant call classification, detection
and endpointing using deep architectures | null | null | 10.1080/09524622.2025.2487099 | null | cs.SD cs.LG eess.AS q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We consider the problem of detecting, isolating and classifying elephant
calls in continuously recorded audio. Such automatic call characterisation can
assist conservation efforts and inform environmental management strategies. In
contrast to previous work in which call detection was performed at a segment
level, we perform call detection at a frame level which implicitly also allows
call endpointing, the isolation of a call in a longer recording. For
experimentation, we employ two annotated datasets, one containing Asian and the
other African elephant vocalisations. We evaluate several shallow and deep
classifier models, and show that the current best performance can be improved
by using an audio spectrogram transformer (AST), a neural architecture which
has not been used for this purpose before, and which we have configured in a
novel sequence-to-sequence manner. We also show that using transfer learning by
pre-training leads to further improvements both in terms of computational
complexity and performance. Finally, we consider sub-call classification using
an accepted taxonomy of call types, a task which has not previously been
considered. We show that also in this case the transformer architectures
provide the best performance. Our best classifiers achieve an average precision
(AP) of 0.962 for framewise binary call classification, and an area under the
receiver operating characteristic (AUC) of 0.957 and 0.979 for call
classification with 5 classes and sub-call classification with 7 classes
respectively. All of these represent either new benchmarks (sub-call
classifications) or improvements on previously best systems. We conclude that a
fully-automated elephant call detection and subcall classification system is
within reach. Such a system would provide valuable information on the behaviour
and state of elephant herds for the purposes of conservation and management.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 21:56:40 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Geldenhuys",
"Christiaan M.",
""
],
[
"Niesler",
"Thomas R.",
""
]
] | TITLE: Learning to rumble: Automated elephant call classification, detection
and endpointing using deep architectures
ABSTRACT: We consider the problem of detecting, isolating and classifying elephant
calls in continuously recorded audio. Such automatic call characterisation can
assist conservation efforts and inform environmental management strategies. In
contrast to previous work in which call detection was performed at a segment
level, we perform call detection at a frame level which implicitly also allows
call endpointing, the isolation of a call in a longer recording. For
experimentation, we employ two annotated datasets, one containing Asian and the
other African elephant vocalisations. We evaluate several shallow and deep
classifier models, and show that the current best performance can be improved
by using an audio spectrogram transformer (AST), a neural architecture which
has not been used for this purpose before, and which we have configured in a
novel sequence-to-sequence manner. We also show that using transfer learning by
pre-training leads to further improvements both in terms of computational
complexity and performance. Finally, we consider sub-call classification using
an accepted taxonomy of call types, a task which has not previously been
considered. We show that also in this case the transformer architectures
provide the best performance. Our best classifiers achieve an average precision
(AP) of 0.962 for framewise binary call classification, and an area under the
receiver operating characteristic (AUC) of 0.957 and 0.979 for call
classification with 5 classes and sub-call classification with 7 classes
respectively. All of these represent either new benchmarks (sub-call
classifications) or improvements on previously best systems. We conclude that a
fully-automated elephant call detection and subcall classification system is
within reach. Such a system would provide valuable information on the behaviour
and state of elephant herds for the purposes of conservation and management.
|
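The frame-level evaluation described above can be mirrored with standard metrics; the sketch below scores synthetic per-frame call probabilities with average precision and AUC, then groups thresholded frames into call segments (endpointing). All labels and scores are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Toy frame-level evaluation for binary call detection. y_true marks frames
# inside an elephant call, y_score is a classifier's per-frame call probability.
rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.2).astype(int)                    # ~20% call frames
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, 1000), 0, 1)

print(f"AP  = {average_precision_score(y_true, y_score):.3f}")   # framewise AP
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")             # framewise AUC

# Endpointing: threshold the framewise scores and group consecutive positive
# frames into (start, end) call segments.
frames = (y_score > 0.5).astype(np.int8)
edges = np.flatnonzero(np.diff(np.concatenate(([0], frames, [0]))))
segments = list(zip(edges[::2], edges[1::2]))
print(f"{len(segments)} detected call segments")
```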
2410.12836 | Kaizhi Zheng | Kaizhi Zheng, Xiaotong Chen, Xuehai He, Jing Gu, Linjie Li, Zhengyuan
Yang, Kevin Lin, Jianfeng Wang, Lijuan Wang, Xin Eric Wang | EditRoom: LLM-parameterized Graph Diffusion for Composable 3D Room
Layout Editing | null | null | null | null | cs.GR cs.AI cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | Given the steep learning curve of professional 3D software and the
time-consuming process of managing large 3D assets, language-guided 3D scene
editing has significant potential in fields such as virtual reality, augmented
reality, and gaming. However, recent approaches to language-guided 3D scene
editing either require manual interventions or focus only on appearance
modifications without supporting comprehensive scene layout changes. In
response, we propose EditRoom, a unified framework capable of executing a
variety of layout edits through natural language commands, without requiring
manual intervention. Specifically, EditRoom leverages Large Language Models
(LLMs) for command planning and generates target scenes using a diffusion-based
method, enabling six types of edits: rotate, translate, scale, replace, add,
and remove. To address the lack of data for language-guided 3D scene editing,
we have developed an automatic pipeline to augment existing 3D scene synthesis
datasets and introduced EditRoom-DB, a large-scale dataset with 83k editing
pairs, for training and evaluation. Our experiments demonstrate that our
approach consistently outperforms other baselines across all metrics,
indicating higher accuracy and coherence in language-guided scene layout
editing.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 17:42:24 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 23:38:07 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zheng",
"Kaizhi",
""
],
[
"Chen",
"Xiaotong",
""
],
[
"He",
"Xuehai",
""
],
[
"Gu",
"Jing",
""
],
[
"Li",
"Linjie",
""
],
[
"Yang",
"Zhengyuan",
""
],
[
"Lin",
"Kevin",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Wang",
"Lijuan",
""
],
[
"Wang",
"Xin Eric",
""
]
] | TITLE: EditRoom: LLM-parameterized Graph Diffusion for Composable 3D Room
Layout Editing
ABSTRACT: Given the steep learning curve of professional 3D software and the
time-consuming process of managing large 3D assets, language-guided 3D scene
editing has significant potential in fields such as virtual reality, augmented
reality, and gaming. However, recent approaches to language-guided 3D scene
editing either require manual interventions or focus only on appearance
modifications without supporting comprehensive scene layout changes. In
response, we propose EditRoom, a unified framework capable of executing a
variety of layout edits through natural language commands, without requiring
manual intervention. Specifically, EditRoom leverages Large Language Models
(LLMs) for command planning and generates target scenes using a diffusion-based
method, enabling six types of edits: rotate, translate, scale, replace, add,
and remove. To address the lack of data for language-guided 3D scene editing,
we have developed an automatic pipeline to augment existing 3D scene synthesis
datasets and introduced EditRoom-DB, a large-scale dataset with 83k editing
pairs, for training and evaluation. Our experiments demonstrate that our
approach consistently outperforms other baselines across all metrics,
indicating higher accuracy and coherence in language-guided scene layout
editing.
|
2410.13798 | Kaveh Hassani | Limei Wang, Kaveh Hassani, Si Zhang, Dongqi Fu, Baichuan Yuan, Weilin
Cong, Zhigang Hua, Hao Wu, Ning Yao, Bo Long | Learning Graph Quantized Tokenizers | ICLR 2025 | null | null | null | cs.NE cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Transformers serve as the backbone architectures of Foundational Models,
where domain-specific tokenizers allow them to adapt to various domains. Graph
Transformers (GTs) have recently emerged as leading models in geometric deep
learning, outperforming Graph Neural Networks (GNNs) in various graph learning
tasks. However, the development of tokenizers for graphs has lagged behind
other modalities. To address this, we introduce GQT (\textbf{G}raph
\textbf{Q}uantized \textbf{T}okenizer), which decouples tokenizer training from
Transformer training by leveraging multi-task graph self-supervised learning,
yielding robust and generalizable graph tokens. Furthermore, the GQT utilizes
Residual Vector Quantization (RVQ) to learn hierarchical discrete tokens,
resulting in significantly reduced memory requirements and improved
generalization capabilities. By combining the GQT with token modulation, a
Transformer encoder achieves state-of-the-art performance on 20 out of 22
benchmarks, including large-scale homophilic and heterophilic datasets.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 17:38:24 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 03:04:44 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Limei",
""
],
[
"Hassani",
"Kaveh",
""
],
[
"Zhang",
"Si",
""
],
[
"Fu",
"Dongqi",
""
],
[
"Yuan",
"Baichuan",
""
],
[
"Cong",
"Weilin",
""
],
[
"Hua",
"Zhigang",
""
],
[
"Wu",
"Hao",
""
],
[
"Yao",
"Ning",
""
],
[
"Long",
"Bo",
""
]
] | TITLE: Learning Graph Quantized Tokenizers
ABSTRACT: Transformers serve as the backbone architectures of Foundational Models,
where domain-specific tokenizers allow them to adapt to various domains. Graph
Transformers (GTs) have recently emerged as leading models in geometric deep
learning, outperforming Graph Neural Networks (GNNs) in various graph learning
tasks. However, the development of tokenizers for graphs has lagged behind
other modalities. To address this, we introduce GQT (\textbf{G}raph
\textbf{Q}uantized \textbf{T}okenizer), which decouples tokenizer training from
Transformer training by leveraging multi-task graph self-supervised learning,
yielding robust and generalizable graph tokens. Furthermore, the GQT utilizes
Residual Vector Quantization (RVQ) to learn hierarchical discrete tokens,
resulting in significantly reduced memory requirements and improved
generalization capabilities. By combining the GQT with token modulation, a
Transformer encoder achieves state-of-the-art performance on 20 out of 22
benchmarks, including large-scale homophilic and heterophilic datasets.
|
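Residual Vector Quantization, the mechanism GQT uses to produce hierarchical discrete tokens, is easy to illustrate: each codebook quantizes the residual left by the previous stage. The codebook sizes and the random embedding below are assumptions for demonstration, not the GQT configuration.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: each codebook quantizes the residual left
    by the previous stage, yielding one discrete token per stage."""
    tokens, quantized = [], np.zeros_like(x)
    residual = x.copy()
    for cb in codebooks:                         # cb has shape (codebook_size, dim)
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        tokens.append(idx)
        quantized = quantized + cb[idx]
        residual = x - quantized                 # what is still left to explain
    return tokens, quantized

rng = np.random.default_rng(0)
dim, n_codes, n_stages = 8, 16, 3                # illustrative sizes
codebooks = [rng.normal(size=(n_codes, dim)) for _ in range(n_stages)]

node_embedding = rng.normal(size=dim)            # e.g. a self-supervised node embedding
tokens, approx = rvq_encode(node_embedding, codebooks)
print("hierarchical tokens:", tokens)
print("reconstruction error:", round(float(np.linalg.norm(node_embedding - approx)), 3))
```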
2410.15912 | Zhengming Wang | Zhengming Wang, Junli Wang, Pengfei Li, Zhaohan Li, Chunyang Liu, Bo
Zhang, Peng Li, Yilun Chen | Bench4Merge: A Comprehensive Benchmark for Merging in Realistic Dense
Traffic with Micro-Interactive Vehicles | 6 pages, 8 figures, on submitted | null | null | null | cs.RO cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | While the capabilities of autonomous driving have advanced rapidly, merging
into dense traffic remains a significant challenge. Many motion planning
methods have been proposed for this scenario, but they are hard to evaluate.

Most existing closed-loop simulators rely on rule-based controls for other
vehicles, which results in a lack of diversity and randomness, thus failing to
accurately assess the motion planning capabilities in highly interactive
scenarios. Moreover, traditional evaluation metrics are insufficient for
comprehensively evaluating the performance of merging in dense traffic. In
response, we propose a closed-loop evaluation benchmark for assessing motion
planning capabilities in merging scenarios. Our approach involves other
vehicles trained on large-scale datasets with micro-behavioral characteristics
that significantly enhance the complexity and diversity. Additionally, we have
restructured the evaluation mechanism by leveraging Large Language Models
(LLMs) to assess each autonomous vehicle merging onto the main lane. Extensive
experiments and test-vehicle deployment have demonstrated the progressiveness
of this benchmark. Through this benchmark, we have obtained an evaluation of
existing methods and identified common issues. The simulation environment and
evaluation process can be accessed at https://github.com/WZM5853/Bench4Merge.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 11:35:33 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Feb 2025 16:05:26 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 09:02:05 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Zhengming",
""
],
[
"Wang",
"Junli",
""
],
[
"Li",
"Pengfei",
""
],
[
"Li",
"Zhaohan",
""
],
[
"Liu",
"Chunyang",
""
],
[
"Zhang",
"Bo",
""
],
[
"Li",
"Peng",
""
],
[
"Chen",
"Yilun",
""
]
] | TITLE: Bench4Merge: A Comprehensive Benchmark for Merging in Realistic Dense
Traffic with Micro-Interactive Vehicles
ABSTRACT: While the capabilities of autonomous driving have advanced rapidly, merging
into dense traffic remains a significant challenge. Many motion planning
methods have been proposed for this scenario, but they are hard to evaluate.
Most existing closed-loop simulators rely on rule-based controls for other
vehicles, which results in a lack of diversity and randomness, thus failing to
accurately assess the motion planning capabilities in highly interactive
scenarios. Moreover, traditional evaluation metrics are insufficient for
comprehensively evaluating the performance of merging in dense traffic. In
response, we propose a closed-loop evaluation benchmark for assessing motion
planning capabilities in merging scenarios. Our approach involves other
vehicles trained on large-scale datasets with micro-behavioral characteristics
that significantly enhance the complexity and diversity. Additionally, we have
restructured the evaluation mechanism by leveraging Large Language Models
(LLMs) to assess each autonomous vehicle merging onto the main lane. Extensive
experiments and test-vehicle deployment have demonstrated the progressiveness
of this benchmark. Through this benchmark, we have obtained an evaluation of
existing methods and identified common issues. The simulation environment and
evaluation process can be accessed at https://github.com/WZM5853/Bench4Merge.
|
2411.04371 | Yonas Sium | Yonas Sium, Qi Li | ComFairGNN: Community Fair Graph Neural Network | Published at PAKDD 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Graph Neural Networks (GNNs) have become the leading approach for addressing
graph analytical problems in various real-world scenarios. However, GNNs may
produce biased predictions against certain demographic subgroups due to node
attributes and neighbors surrounding a node. Most current research on GNN
fairness focuses predominantly on debiasing GNNs using oversimplified fairness
evaluation metrics, which can give a misleading impression of fairness.
Understanding the potential evaluation paradoxes due to the complicated nature
of the graph structure is crucial for developing effective GNN debiasing
mechanisms. In this paper, we examine the effectiveness of current GNN
debiasing methods in terms of unfairness evaluation. Specifically, we introduce
a community-level strategy to measure bias in GNNs and evaluate debiasing
methods at this level. Further, We introduce ComFairGNN, a novel framework
designed to mitigate community-level bias in GNNs. Our approach employs a
learnable coreset-based debiasing function that addresses bias arising from
diverse local neighborhood distributions during GNNs neighborhood aggregation.
Comprehensive evaluations on three benchmark datasets demonstrate our model's
effectiveness in both accuracy and fairness metrics.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 02:04:34 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 21:14:17 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Sium",
"Yonas",
""
],
[
"Li",
"Qi",
""
]
] | TITLE: ComFairGNN: Community Fair Graph Neural Network
ABSTRACT: Graph Neural Networks (GNNs) have become the leading approach for addressing
graph analytical problems in various real-world scenarios. However, GNNs may
produce biased predictions against certain demographic subgroups due to node
attributes and neighbors surrounding a node. Most current research on GNN
fairness focuses predominantly on debiasing GNNs using oversimplified fairness
evaluation metrics, which can give a misleading impression of fairness.
Understanding the potential evaluation paradoxes due to the complicated nature
of the graph structure is crucial for developing effective GNN debiasing
mechanisms. In this paper, we examine the effectiveness of current GNN
debiasing methods in terms of unfairness evaluation. Specifically, we introduce
a community-level strategy to measure bias in GNNs and evaluate debiasing
methods at this level. Further, we introduce ComFairGNN, a novel framework
designed to mitigate community-level bias in GNNs. Our approach employs a
learnable coreset-based debiasing function that addresses bias arising from
diverse local neighborhood distributions during GNN neighborhood aggregation.
Comprehensive evaluations on three benchmark datasets demonstrate our model's
effectiveness in both accuracy and fairness metrics.
|
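The exact community-level metric is defined in the paper; as a rough illustration of the idea of measuring bias inside each community rather than globally, the sketch below computes a per-community demographic-parity gap on synthetic predictions.

```python
import numpy as np

def community_parity_gaps(y_pred, sensitive, community):
    """Per-community demographic parity gap |P(y=1|s=0) - P(y=1|s=1)|,
    a toy stand-in for community-level bias measurement."""
    gaps = []
    for c in np.unique(community):
        m = community == c
        p0 = y_pred[m & (sensitive == 0)].mean()
        p1 = y_pred[m & (sensitive == 1)].mean()
        gaps.append(abs(p0 - p1))
    return np.array(gaps)

rng = np.random.default_rng(0)
n = 1000
community = rng.integers(0, 4, n)     # e.g. obtained from a graph partitioning method
sensitive = rng.integers(0, 2, n)     # binary sensitive attribute (placeholder)
y_pred = rng.integers(0, 2, n)        # GNN predictions (placeholder)

gaps = community_parity_gaps(y_pred, sensitive, community)
global_gap = abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())
print("per-community gaps:", np.round(gaps, 3))
print("mean community-level gap:", round(float(gaps.mean()), 3))
print("global gap:", round(float(global_gap), 3))
```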
2411.07751 | Jiaran Gao | Xinyuan Qian, Jiaran Gao, Yaodan Zhang, Qiquan Zhang, Hexin Liu,
Leibny Paola Garcia, Haizhou Li | SAV-SE: Scene-aware Audio-Visual Speech Enhancement with Selective State
Space Model | accepted by IEEE Journal of Selected Topics in Signal Processing | null | null | null | cs.SD cs.AI cs.CV cs.MM eess.AS | http://creativecommons.org/licenses/by/4.0/ | Speech enhancement plays an essential role in various applications, and the
integration of visual information has been demonstrated to bring substantial
advantages. However, the majority of current research concentrates on the
examination of facial and lip movements, which can be compromised or entirely
inaccessible in scenarios where occlusions occur or when the camera view is
distant. Whereas contextual visual cues from the surrounding environment have
been overlooked: for example, when we see a dog bark, our brain has the innate
ability to discern and filter out the barking noise. To this end, in this
paper, we introduce a novel task, i.e. SAV-SE. To our best knowledge, this is
the first proposal to use rich contextual information from synchronized video
as auxiliary cues to indicate the type of noise, which eventually improves the
speech enhancement performance. Specifically, we propose the VC-S$^2$E method,
which incorporates the Conformer and Mamba modules for their complementary
strengths. Extensive experiments are conducted on public MUSIC, AVSpeech and
AudioSet datasets, where the results demonstrate the superiority of VC-S$^2$E
over other competitive methods. We will make the source code publicly
available. Project demo page: https://AVSEPage.github.io/
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 12:23:41 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 10:39:14 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Qian",
"Xinyuan",
""
],
[
"Gao",
"Jiaran",
""
],
[
"Zhang",
"Yaodan",
""
],
[
"Zhang",
"Qiquan",
""
],
[
"Liu",
"Hexin",
""
],
[
"Garcia",
"Leibny Paola",
""
],
[
"Li",
"Haizhou",
""
]
] | TITLE: SAV-SE: Scene-aware Audio-Visual Speech Enhancement with Selective State
Space Model
ABSTRACT: Speech enhancement plays an essential role in various applications, and the
integration of visual information has been demonstrated to bring substantial
advantages. However, the majority of current research concentrates on the
examination of facial and lip movements, which can be compromised or entirely
inaccessible in scenarios where occlusions occur or when the camera view is
distant. Whereas contextual visual cues from the surrounding environment have
been overlooked: for example, when we see a dog bark, our brain has the innate
ability to discern and filter out the barking noise. To this end, in this
paper, we introduce a novel task, i.e. SAV-SE. To our best knowledge, this is
the first proposal to use rich contextual information from synchronized video
as auxiliary cues to indicate the type of noise, which eventually improves the
speech enhancement performance. Specifically, we propose the VC-S$^2$E method,
which incorporates the Conformer and Mamba modules for their complementary
strengths. Extensive experiments are conducted on public MUSIC, AVSpeech and
AudioSet datasets, where the results demonstrate the superiority of VC-S$^2$E
over other competitive methods. We will make the source code publicly
available. Project demo page: https://AVSEPage.github.io/
|
2412.07360 | Xuerui Qiu | Xuerui Qiu, Man Yao, Jieyuan Zhang, Yuhong Chou, Ning Qiao, Shibo
Zhou, Bo Xu, Guoqi Li | Efficient 3D Recognition with Event-driven Spike Sparse Convolution | Accepted by AAAI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs) provide an energy-efficient way to extract 3D
spatio-temporal features. Point clouds are sparse 3D spatial data, which
suggests that SNNs should be well-suited for processing them. However, when
applying SNNs to point clouds, they often exhibit limited performance and fewer
application scenarios. We attribute this to inappropriate preprocessing and
feature extraction methods. To address this issue, we first introduce the Spike
Voxel Coding (SVC) scheme, which encodes the 3D point clouds into a sparse
spike train space, reducing the storage requirements and saving time on point
cloud preprocessing. Then, we propose a Spike Sparse Convolution (SSC) model
for efficiently extracting 3D sparse point cloud features. Combining SVC and
SSC, we design an efficient 3D SNN backbone (E-3DSNN), which is friendly to
neuromorphic hardware. For instance, SSC can be implemented on neuromorphic
chips with only minor modifications to the addressing function of vanilla spike
convolution. Experiments on ModelNet40, KITTI, and Semantic KITTI datasets
demonstrate that E-3DSNN achieves state-of-the-art (SOTA) results with
remarkable efficiency. Notably, our E-3DSNN (1.87M) obtained 91.7\% top-1
accuracy on ModelNet40, surpassing the current best SNN baselines (14.3M) by
3.0\%. To the best of our knowledge, it is the first directly trained 3D SNN backbone
that can simultaneously handle various 3D computer vision tasks (e.g.,
classification, detection, and segmentation) with an event-driven nature. Code
is available: https://github.com/bollossom/E-3DSNN/.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 09:55:15 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Feb 2025 02:52:37 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 10:05:10 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Qiu",
"Xuerui",
""
],
[
"Yao",
"Man",
""
],
[
"Zhang",
"Jieyuan",
""
],
[
"Chou",
"Yuhong",
""
],
[
"Qiao",
"Ning",
""
],
[
"Zhou",
"Shibo",
""
],
[
"Xu",
"Bo",
""
],
[
"Li",
"Guoqi",
""
]
] | TITLE: Efficient 3D Recognition with Event-driven Spike Sparse Convolution
ABSTRACT: Spiking Neural Networks (SNNs) provide an energy-efficient way to extract 3D
spatio-temporal features. Point clouds are sparse 3D spatial data, which
suggests that SNNs should be well-suited for processing them. However, when
applying SNNs to point clouds, they often exhibit limited performance and fewer
application scenarios. We attribute this to inappropriate preprocessing and
feature extraction methods. To address this issue, we first introduce the Spike
Voxel Coding (SVC) scheme, which encodes the 3D point clouds into a sparse
spike train space, reducing the storage requirements and saving time on point
cloud preprocessing. Then, we propose a Spike Sparse Convolution (SSC) model
for efficiently extracting 3D sparse point cloud features. Combining SVC and
SSC, we design an efficient 3D SNN backbone (E-3DSNN), which is friendly to
neuromorphic hardware. For instance, SSC can be implemented on neuromorphic
chips with only minor modifications to the addressing function of vanilla spike
convolution. Experiments on ModelNet40, KITTI, and Semantic KITTI datasets
demonstrate that E-3DSNN achieves state-of-the-art (SOTA) results with
remarkable efficiency. Notably, our E-3DSNN (1.87M) obtained 91.7\% top-1
accuracy on ModelNet40, surpassing the current best SNN baselines (14.3M) by
3.0\%. To the best of our knowledge, it is the first directly trained 3D SNN backbone
that can simultaneously handle various 3D computer vision tasks (e.g.,
classification, detection, and segmentation) with an event-driven nature. Code
is available: https://github.com/bollossom/E-3DSNN/.
|
2412.09756 | Rayne Holland Ph. D | Rayne Holland, Seyit Camtepe, Chandra Thapa, Minhui Xue | Private Synthetic Data Generation in Small Memory | 24 Pages, 1 Table, 3 Figures, 3 Algorithms | null | null | null | cs.CR cs.DS | http://creativecommons.org/licenses/by/4.0/ | We propose $\mathtt{PrivHP}$, a lightweight synthetic data generator with
\textit{differential privacy} guarantees. $\mathtt{PrivHP}$ uses a novel
hierarchical decomposition that approximates the input's cumulative
distribution function (CDF) in bounded memory. It balances hierarchy depth,
noise addition, and pruning of low-frequency subdomains while preserving
frequent ones. Private sketches estimate subdomain frequencies efficiently
without full data access.
A key feature is the pruning parameter $k$, which controls the trade-off
between space and utility. We define the skew measure $\mathtt{tail}_k$,
capturing all but the top $k$ subdomain frequencies. Given a dataset
$\mathcal{X}$, $\mathtt{PrivHP}$ uses $M=\mathcal{O}(k\log^2 |X|)$ space and,
for input domain $\Omega = [0,1]$, ensures $\varepsilon$-differential privacy.
It yields a generator with expected Wasserstein distance: \[
\mathcal{O}\left(\frac{\log^2 M}{\varepsilon n} +
\frac{||\mathtt{tail}_k(\mathcal{X})||_1}{M n}\right) \] from the empirical
distribution. This parameterized trade-off offers a level of flexibility
unavailable in prior work. We also provide interpretable utility bounds that
account for hierarchy depth, privacy noise, pruning, and frequency estimation
errors.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 23:24:05 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 00:58:38 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 05:01:51 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Holland",
"Rayne",
""
],
[
"Camtepe",
"Seyit",
""
],
[
"Thapa",
"Chandra",
""
],
[
"Xue",
"Minhui",
""
]
] | TITLE: Private Synthetic Data Generation in Small Memory
ABSTRACT: We propose $\mathtt{PrivHP}$, a lightweight synthetic data generator with
\textit{differential privacy} guarantees. $\mathtt{PrivHP}$ uses a novel
hierarchical decomposition that approximates the input's cumulative
distribution function (CDF) in bounded memory. It balances hierarchy depth,
noise addition, and pruning of low-frequency subdomains while preserving
frequent ones. Private sketches estimate subdomain frequencies efficiently
without full data access.
A key feature is the pruning parameter $k$, which controls the trade-off
between space and utility. We define the skew measure $\mathtt{tail}_k$,
capturing all but the top $k$ subdomain frequencies. Given a dataset
$\mathcal{X}$, $\mathtt{PrivHP}$ uses $M=\mathcal{O}(k\log^2 |X|)$ space and,
for input domain $\Omega = [0,1]$, ensures $\varepsilon$-differential privacy.
It yields a generator with expected Wasserstein distance: \[
\mathcal{O}\left(\frac{\log^2 M}{\varepsilon n} +
\frac{||\mathtt{tail}_k(\mathcal{X})||_1}{M n}\right) \] from the empirical
distribution. This parameterized trade-off offers a level of flexibility
unavailable in prior work. We also provide interpretable utility bounds that
account for hierarchy depth, privacy noise, pruning, and frequency estimation
errors.
|
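The skew measure $\mathtt{tail}_k$ has a direct computational reading: the $\ell_1$ mass of all but the $k$ most frequent subdomains, which drives the second term of the utility bound. A toy numpy sketch with synthetic frequencies:

```python
import numpy as np

def tail_k_l1(frequencies, k):
    """L1 mass of all but the top-k subdomain frequencies, i.e. ||tail_k(X)||_1,
    the quantity appearing in the second term of the utility bound."""
    freqs = np.sort(np.asarray(frequencies, dtype=float))[::-1]
    return freqs[k:].sum()

# Synthetic, heavy-tailed subdomain frequencies over a partition of [0, 1].
rng = np.random.default_rng(0)
freqs = rng.zipf(2.0, size=64).astype(float)
freqs /= freqs.sum()

for k in (4, 8, 16):
    print(f"k={k:2d}  ||tail_k||_1 = {tail_k_l1(freqs, k):.3f}")
# Larger k (hence larger memory M ~ k log^2 |X|) leaves less unexplained mass,
# tightening the bound at the cost of space.
```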
2412.14123 | Guillaume Astruc | Guillaume Astruc, Nicolas Gonthier, Clement Mallet, Loic Landrieu | AnySat: One Earth Observation Model for Many Resolutions, Scales, and
Modalities | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Geospatial models must adapt to the diversity of Earth observation data in
terms of resolutions, scales, and modalities. However, existing approaches
expect fixed input configurations, which limits their practical applicability.
We propose AnySat, a multimodal model based on joint embedding predictive
architecture (JEPA) and scale-adaptive spatial encoders, allowing us to train a
single model on highly heterogeneous data in a self-supervised manner. To
demonstrate the advantages of this unified approach, we compile GeoPlex, a
collection of $5$ multimodal datasets with varying characteristics and $11$
distinct sensors. We then train a single powerful model on these diverse
datasets simultaneously. Once fine-tuned or probed, we reach state-of-the-art
results on the test sets of GeoPlex and for $6$ external datasets across
various environment monitoring tasks: land cover mapping, tree species
identification, crop type classification, change detection, climate type
classification, and segmentation of flood, burn scar, and deforestation. The
code and models are available at https://github.com/gastruc/AnySat.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 18:11:53 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 08:19:39 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Astruc",
"Guillaume",
""
],
[
"Gonthier",
"Nicolas",
""
],
[
"Mallet",
"Clement",
""
],
[
"Landrieu",
"Loic",
""
]
] | TITLE: AnySat: One Earth Observation Model for Many Resolutions, Scales, and
Modalities
ABSTRACT: Geospatial models must adapt to the diversity of Earth observation data in
terms of resolutions, scales, and modalities. However, existing approaches
expect fixed input configurations, which limits their practical applicability.
We propose AnySat, a multimodal model based on joint embedding predictive
architecture (JEPA) and scale-adaptive spatial encoders, allowing us to train a
single model on highly heterogeneous data in a self-supervised manner. To
demonstrate the advantages of this unified approach, we compile GeoPlex, a
collection of $5$ multimodal datasets with varying characteristics and $11$
distinct sensors. We then train a single powerful model on these diverse
datasets simultaneously. Once fine-tuned or probed, we reach state-of-the-art
results on the test sets of GeoPlex and for $6$ external datasets across
various environment monitoring tasks: land cover mapping, tree species
identification, crop type classification, change detection, climate type
classification, and segmentation of flood, burn scar, and deforestation. The
code and models are available at https://github.com/gastruc/AnySat.
|
2412.20727 | Xiaoqiang Wang | Gaoxiang Zhao, Li Zhou, Xiaoqiang Wang | AverageTime: Enhance Long-Term Time Series Forecasting with Simple
Averaging | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-term time series forecasting focuses on leveraging historical data to
predict future trends. The core challenge lies in effectively modeling
dependencies both within sequences and channels. Convolutional Neural Networks
and Linear models often excel in sequence modeling but frequently fall short in
capturing complex channel dependencies. In contrast, Transformer-based models,
with their attention mechanisms applied to both sequences and channels, have
demonstrated strong predictive performance. Our research proposes a new
approach for capturing sequence and channel dependencies: AverageTime, an
exceptionally simple yet effective structure. By employing mixed channel
embedding and averaging operations, AverageTime separately captures
correlations for sequences and channels through channel mapping and result
averaging. In addition, we integrate clustering methods to further accelerate
the model's training process. Experiments on real-world datasets demonstrate
that AverageTime surpasses state-of-the-art models in predictive performance
while maintaining efficiency comparable to lightweight linear models. This
provides a new and effective framework for modeling long time series.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 05:56:25 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 01:13:27 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 09:14:55 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhao",
"Gaoxiang",
""
],
[
"Zhou",
"Li",
""
],
[
"Wang",
"Xiaoqiang",
""
]
] | TITLE: AverageTime: Enhance Long-Term Time Series Forecasting with Simple
Averaging
ABSTRACT: Long-term time series forecasting focuses on leveraging historical data to
predict future trends. The core challenge lies in effectively modeling
dependencies both within sequences and channels. Convolutional Neural Networks
and Linear models often excel in sequence modeling but frequently fall short in
capturing complex channel dependencies. In contrast, Transformer-based models,
with their attention mechanisms applied to both sequences and channels, have
demonstrated strong predictive performance. Our research proposes a new
approach for capturing sequence and channel dependencies: AverageTime, an
exceptionally simple yet effective structure. By employing mixed channel
embedding and averaging operations, AverageTime separately captures
correlations for sequences and channels through channel mapping and result
averaging. In addition, we integrate clustering methods to further accelerate
the model's training process. Experiments on real-world datasets demonstrate
that AverageTime surpasses state-of-the-art models in predictive performance
while maintaining efficiency comparable to lightweight linear models. This
provides a new and effective framework for modeling long time series.
|
2501.05396 | Yongkang Du | Yongkang Du, Jen-tse Huang, Jieyu Zhao, Lu Lin | FairCoder: Evaluating Social Bias of LLMs in Code Generation | null | null | null | null | cs.CL cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have been widely deployed in coding tasks,
drawing increasing attention to the evaluation of the quality and safety of
LLMs' outputs. However, research on bias in code generation remains limited.
Existing studies typically identify bias by applying malicious prompts or
reusing tasks and datasets originally designed for discriminative models. Given
that prior datasets are not fully optimized for code-related tasks, there is a
pressing need for benchmarks specifically designed for evaluating code models.
In this study, we introduce FairCoder, a novel benchmark for evaluating social
bias in code generation. FairCoder explores the bias issue following the
pipeline in software development, from function implementation to unit test,
with diverse real-world scenarios. Additionally, three metrics are designed to
assess fairness performance on this benchmark. We conduct experiments on widely
used LLMs and provide a comprehensive analysis of the results. The findings
reveal that all tested LLMs exhibit social bias.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 17:42:23 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 19:17:32 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Du",
"Yongkang",
""
],
[
"Huang",
"Jen-tse",
""
],
[
"Zhao",
"Jieyu",
""
],
[
"Lin",
"Lu",
""
]
] | TITLE: FairCoder: Evaluating Social Bias of LLMs in Code Generation
ABSTRACT: Large language models (LLMs) have been widely deployed in coding tasks,
drawing increasing attention to the evaluation of the quality and safety of
LLMs' outputs. However, research on bias in code generation remains limited.
Existing studies typically identify bias by applying malicious prompts or
reusing tasks and datasets originally designed for discriminative models. Given
that prior datasets are not fully optimized for code-related tasks, there is a
pressing need for benchmarks specifically designed for evaluating code models.
In this study, we introduce FairCoder, a novel benchmark for evaluating social
bias in code generation. FairCoder explores the bias issue following the
pipeline in software development, from function implementation to unit test,
with diverse real-world scenarios. Additionally, three metrics are designed to
assess fairness performance on this benchmark. We conduct experiments on widely
used LLMs and provide a comprehensive analysis of the results. The findings
reveal that all tested LLMs exhibit social bias.
|
2501.07171 | Alejandro Lozano | Alejandro Lozano, Min Woo Sun, James Burgess, Liangyu Chen, Jeffrey J
Nirschl, Jeffrey Gu, Ivan Lopez, Josiah Aklilu, Austin Wolfgang Katzer,
Collin Chiu, Anita Rau, Xiaohan Wang, Yuhui Zhang, Alfred Seunghoon Song,
Robert Tibshirani, Serena Yeung-Levy | BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and
Vision-Language Models Derived from Scientific Literature | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | The development of vision-language models (VLMs) is driven by large-scale and
diverse multimodal datasets. However, progress toward generalist biomedical
VLMs is limited by the lack of annotated, publicly accessible datasets across
biology and medicine. Existing efforts are restricted to narrow domains,
missing the full diversity of biomedical knowledge encoded in scientific
literature. To address this gap, we introduce BIOMEDICA, a scalable,
open-source framework to extract, annotate, and serialize the entirety of the
PubMed Central Open Access subset into an easy-to-use, publicly accessible
dataset. Our framework produces a comprehensive archive with over 24 million
unique image-text pairs from over 6 million articles. Metadata and
expert-guided annotations are also provided. We demonstrate the utility and
accessibility of our resource by releasing BMCA-CLIP, a suite of CLIP-style
models continuously pre-trained on the BIOMEDICA dataset via streaming,
eliminating the need to download 27 TB of data locally. On average, our models
achieve state-of-the-art performance across 40 tasks - spanning pathology,
radiology, ophthalmology, dermatology, surgery, molecular biology,
parasitology, and cell biology - excelling in zero-shot classification with a
6.56% average improvement (as high as 29.8% and 17.5% in dermatology and
ophthalmology, respectively), and stronger image-text retrieval, all while
using 10x less compute. To foster reproducibility and collaboration, we release
our codebase and dataset for the broader research community.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2025 09:58:03 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jan 2025 06:46:14 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 19:50:25 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Lozano",
"Alejandro",
""
],
[
"Sun",
"Min Woo",
""
],
[
"Burgess",
"James",
""
],
[
"Chen",
"Liangyu",
""
],
[
"Nirschl",
"Jeffrey J",
""
],
[
"Gu",
"Jeffrey",
""
],
[
"Lopez",
"Ivan",
""
],
[
"Aklilu",
"Josiah",
""
],
[
"Katzer",
"Austin Wolfgang",
""
],
[
"Chiu",
"Collin",
""
],
[
"Rau",
"Anita",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Zhang",
"Yuhui",
""
],
[
"Song",
"Alfred Seunghoon",
""
],
[
"Tibshirani",
"Robert",
""
],
[
"Yeung-Levy",
"Serena",
""
]
] | TITLE: BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and
Vision-Language Models Derived from Scientific Literature
ABSTRACT: The development of vision-language models (VLMs) is driven by large-scale and
diverse multimodal datasets. However, progress toward generalist biomedical
VLMs is limited by the lack of annotated, publicly accessible datasets across
biology and medicine. Existing efforts are restricted to narrow domains,
missing the full diversity of biomedical knowledge encoded in scientific
literature. To address this gap, we introduce BIOMEDICA, a scalable,
open-source framework to extract, annotate, and serialize the entirety of the
PubMed Central Open Access subset into an easy-to-use, publicly accessible
dataset. Our framework produces a comprehensive archive with over 24 million
unique image-text pairs from over 6 million articles. Metadata and
expert-guided annotations are also provided. We demonstrate the utility and
accessibility of our resource by releasing BMCA-CLIP, a suite of CLIP-style
models continuously pre-trained on the BIOMEDICA dataset via streaming,
eliminating the need to download 27 TB of data locally. On average, our models
achieve state-of-the-art performance across 40 tasks - spanning pathology,
radiology, ophthalmology, dermatology, surgery, molecular biology,
parasitology, and cell biology - excelling in zero-shot classification with a
6.56% average improvement (as high as 29.8% and 17.5% in dermatology and
ophthalmology, respectively), and stronger image-text retrieval, all while
using 10x less compute. To foster reproducibility and collaboration, we release
our codebase and dataset for the broader research community.
|
2501.13432 | Samer Attrah | Samer Attrah | Emotion estimation from video footage with LSTM | 12 pages, 5 figures, 34 references, 4 tables, 3 equations | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Emotion estimation in general is a field that has been studied for a long
time, and several approaches exist using machine learning. In this paper, we
present an LSTM model that processes the blend-shapes produced by the MediaPipe
library for a face detected in a live camera stream, in order to estimate the
main emotion from the facial expressions. The model is trained on the FER2013
dataset and delivers 71% accuracy and a 62% F1-score, which meets the
accuracy benchmark of the FER2013 dataset with significantly reduced
computation costs.
https://github.com/Samir-atra/Emotion_estimation_from_video_footage_with_LSTM_ML_algorithm
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 07:35:47 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Feb 2025 18:37:12 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 23:11:09 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Attrah",
"Samer",
""
]
] | TITLE: Emotion estimation from video footage with LSTM
ABSTRACT: Emotion estimation in general is a field that has been studied for a long
time, and several approaches exist using machine learning. In this paper, we
present an LSTM model that processes the blend-shapes produced by the MediaPipe
library for a face detected in a live camera stream, in order to estimate the
main emotion from the facial expressions. The model is trained on the FER2013
dataset and delivers 71% accuracy and a 62% F1-score, which meets the
accuracy benchmark of the FER2013 dataset with significantly reduced
computation costs.
https://github.com/Samir-atra/Emotion_estimation_from_video_footage_with_LSTM_ML_algorithm
|
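A minimal PyTorch sketch of the described pipeline: an LSTM over per-frame MediaPipe blendshape vectors feeding a small emotion classifier. The 52-dimensional blendshape input and the 7 FER2013 emotion classes are standard; the hidden size, sequence length, and random tensors are illustrative assumptions, not the paper's trained model.

```python
import torch
import torch.nn as nn

class BlendshapeLSTM(nn.Module):
    """LSTM over a sequence of per-frame blendshape vectors -> emotion logits."""
    def __init__(self, n_blendshapes=52, hidden=64, n_emotions=7):
        super().__init__()
        self.lstm = nn.LSTM(n_blendshapes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_emotions)

    def forward(self, x):                # x: (batch, frames, n_blendshapes)
        _, (h_n, _) = self.lstm(x)       # final hidden state: (1, batch, hidden)
        return self.head(h_n[-1])        # logits over the emotion classes

model = BlendshapeLSTM()
clips = torch.randn(8, 30, 52)           # 8 clips, 30 frames of blendshapes each
logits = model(clips)
print(logits.shape)                      # torch.Size([8, 7])

# Training would minimize cross-entropy against FER2013-style emotion labels.
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 7, (8,)))
loss.backward()
```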
2501.15738 | Shun Ishihara | Shun Ishihara and Taka Matsutsuka | Towards Interoperable Data Spaces: Comparative Analysis of Data Space
Implementations between Japan and Europe | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data spaces are evolving rapidly. In Europe, the concept of data spaces,
which emphasises the importance of trust, sovereignty, and interoperability, is
being implemented as a platform such as Catena-X. Meanwhile, Japan has been
developing its approach to data sharing, in line with global trends but also to
address unique domestic challenges, resulting in a platform such as DATA-EX.
Achieving interoperability between European and Japanese data spaces remains a
critical challenge due to the differences created by these parallel advances.
Although interoperability between data spaces has several aspects,
compatibility of trust in the participating entities and the data exchanged is
a significant aspect due to its influence on business. This paper undertakes a
comparative analysis of DATA-EX and Catena-X while focusing on aspect of trust,
to explore the challenges and opportunities for achieving interoperability
between Japanese and European data spaces. By examining common data exchange
processes, key objects such as datasets, and specific evaluation criteria, the
study identifies gaps, challenges, and proposes actionable solutions such as
inter-exchangeable topology. Through this analysis, the paper aims to
contribute to the ongoing discourse on global data interoperability.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 02:56:17 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 04:41:51 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ishihara",
"Shun",
""
],
[
"Matsutsuka",
"Taka",
""
]
] | TITLE: Towards Interoperable Data Spaces: Comparative Analysis of Data Space
Implementations between Japan and Europe
ABSTRACT: Data spaces are evolving rapidly. In Europe, the concept of data spaces,
which emphasises the importance of trust, sovereignty, and interoperability, is
being implemented as a platform such as Catena-X. Meanwhile, Japan has been
developing its approach to data sharing, in line with global trends but also to
address unique domestic challenges, resulting in a platform such as DATA-EX.
Achieving interoperability between European and Japanese data spaces remains a
critical challenge due to the differences created by these parallel advances.
Although interoperability between data spaces has several aspects,
compatibility of trust in the participating entities and the data exchanged is
a significant aspect due to its influence on business. This paper undertakes a
comparative analysis of DATA-EX and Catena-X while focusing on aspect of trust,
to explore the challenges and opportunities for achieving interoperability
between Japanese and European data spaces. By examining common data exchange
processes, key objects such as datasets, and specific evaluation criteria, the
study identifies gaps, challenges, and proposes actionable solutions such as
inter-exchangeable topology. Through this analysis, the paper aims to
contribute to the ongoing discourse on global data interoperability.
|
2501.19347 | Svein Anders Tunheim | Svein Anders Tunheim, Yujin Zheng, Lei Jiao, Rishad Shafik, Alex
Yakovlev, Ole-Christoffer Granmo | An All-digital 8.6-nJ/Frame 65-nm Tsetlin Machine Image Classification
Accelerator | This work has been submitted to the IEEE for possible publication | null | null | null | cs.LG cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an all-digital programmable machine learning accelerator chip for
image classification, underpinned by the Tsetlin machine (TM) principles. The
TM is an emerging machine learning algorithm founded on propositional logic,
utilizing sub-pattern recognition expressions called clauses. The accelerator
implements the coalesced TM version with convolution, and classifies
booleanized images of 28$\times$28 pixels with 10 categories. A configuration
with 128 clauses is used in a highly parallel architecture. Fast clause
evaluation is achieved by keeping all clause weights and Tsetlin automata (TA)
action signals in registers. The chip is implemented in a 65 nm low-leakage
CMOS technology, and occupies an active area of 2.7 mm$^2$. At a clock
frequency of 27.8 MHz, the accelerator achieves 60.3k classifications per
second, and consumes 8.6 nJ per classification. This demonstrates the
energy-efficiency of the TM, which was the main motivation for developing this
chip. The latency for classifying a single image is 25.4 $\mu$s which includes
system timing overhead. The accelerator achieves 97.42%, 84.54% and 82.55% test
accuracies for the datasets MNIST, Fashion-MNIST and Kuzushiji-MNIST,
respectively, matching the TM software models.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 17:51:46 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 09:46:06 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Tunheim",
"Svein Anders",
""
],
[
"Zheng",
"Yujin",
""
],
[
"Jiao",
"Lei",
""
],
[
"Shafik",
"Rishad",
""
],
[
"Yakovlev",
"Alex",
""
],
[
"Granmo",
"Ole-Christoffer",
""
]
] | TITLE: An All-digital 8.6-nJ/Frame 65-nm Tsetlin Machine Image Classification
Accelerator
ABSTRACT: We present an all-digital programmable machine learning accelerator chip for
image classification, underpinned by the Tsetlin machine (TM) principles. The
TM is an emerging machine learning algorithm founded on propositional logic,
utilizing sub-pattern recognition expressions called clauses. The accelerator
implements the coalesced TM version with convolution, and classifies
booleanized images of 28$\times$28 pixels with 10 categories. A configuration
with 128 clauses is used in a highly parallel architecture. Fast clause
evaluation is achieved by keeping all clause weights and Tsetlin automata (TA)
action signals in registers. The chip is implemented in a 65 nm low-leakage
CMOS technology, and occupies an active area of 2.7 mm$^2$. At a clock
frequency of 27.8 MHz, the accelerator achieves 60.3k classifications per
second, and consumes 8.6 nJ per classification. This demonstrates the
energy-efficiency of the TM, which was the main motivation for developing this
chip. The latency for classifying a single image is 25.4 $\mu$s which includes
system timing overhead. The accelerator achieves 97.42%, 84.54% and 82.55% test
accuracies for the datasets MNIST, Fashion-MNIST and Kuzushiji-MNIST,
respectively, matching the TM software models.
|
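Inference in a coalesced Tsetlin machine reduces to evaluating Boolean clauses over the booleanized input and summing per-class clause weights, which is what the chip parallelizes. The sketch below shows that datapath with random include masks and weights (not a trained machine), and it omits the convolutional patch evaluation used on the chip.

```python
import numpy as np

def tm_classify(x_bits, include, weights):
    """Coalesced Tsetlin machine inference sketch.
    x_bits:  (n_features,) booleanized input
    include: (n_clauses, 2 * n_features) literal-include mask; literal j is x_j
             for j < n_features and NOT x_j otherwise
    weights: (n_classes, n_clauses) signed clause weights per class
    """
    literals = np.concatenate([x_bits, 1 - x_bits])
    # A clause fires iff every literal it includes evaluates to 1.
    clause_out = np.all((literals == 1) | (include == 0), axis=1).astype(int)
    class_sums = weights @ clause_out
    return int(np.argmax(class_sums)), class_sums

rng = np.random.default_rng(0)
n_features, n_clauses, n_classes = 28 * 28, 128, 10     # sizes matching the chip
x_bits = rng.integers(0, 2, n_features)                 # booleanized image (placeholder)
include = (rng.random((n_clauses, 2 * n_features)) < 0.01).astype(int)
weights = rng.integers(-8, 9, (n_classes, n_clauses))   # random weights (placeholder)

pred, sums = tm_classify(x_bits, include, weights)
print("predicted class:", pred)
```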
2502.02624 | William O'Donnell | William O'Donnell, David Mahon, Guangliang Yang, Simon Gardner | Muographic Image Upsampling with Machine Learning for Built
Infrastructure Applications | null | ODonnell, W.; Mahon, D.; Yang, G.; Gardner, S. Muographic Image
Upsampling with Machine Learning for Built Infrastructure Applications.
Particles 2025, 8, 33 | 10.3390/particles8010033 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The civil engineering industry faces a critical need for innovative
non-destructive evaluation methods, particularly for ageing critical
infrastructure, such as bridges, where current techniques fall short.
Muography, a non-invasive imaging technique, constructs three-dimensional
density maps by detecting interactions of naturally occurring cosmic-ray muons
within the scanned volume. Cosmic-ray muons provide deep penetration and
inherent safety due to their high momenta and natural source. However, the
technology's reliance on this source results in constrained muon flux, leading
to prolonged acquisition times, noisy reconstructions and image interpretation
challenges. To address these limitations, we developed a two-model deep
learning approach. First, we employed a conditional Wasserstein generative
adversarial network with gradient penalty (cWGAN-GP) to perform predictive
upsampling of undersampled muography images. Using the Structural Similarity
Index Measure (SSIM), 1-day sampled images matched the perceptual qualities of
a 21-day image, while the Peak Signal-to-Noise Ratio (PSNR) indicated noise
improvement equivalent to 31 days of sampling. A second cWGAN-GP model, trained
for semantic segmentation, quantitatively assessed the upsampling model's
impact on concrete sample features. This model achieved segmentation of rebar
grids and tendon ducts, with Dice-S{\o}rensen accuracy coefficients of 0.8174
and 0.8663. Notably, it could mitigate or remove z-plane smearing artifacts
caused by muography's inverse imaging problem. Both models were trained on a
comprehensive Geant4 Monte-Carlo simulation dataset reflecting realistic civil
infrastructure scenarios. Our results demonstrate significant improvements in
acquisition speed and image quality, marking a substantial step toward making
muography more practical for reinforced concrete infrastructure monitoring
applications.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 14:37:37 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 08:33:01 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"O'Donnell",
"William",
""
],
[
"Mahon",
"David",
""
],
[
"Yang",
"Guangliang",
""
],
[
"Gardner",
"Simon",
""
]
] | TITLE: Muographic Image Upsampling with Machine Learning for Built
Infrastructure Applications
ABSTRACT: The civil engineering industry faces a critical need for innovative
non-destructive evaluation methods, particularly for ageing critical
infrastructure, such as bridges, where current techniques fall short.
Muography, a non-invasive imaging technique, constructs three-dimensional
density maps by detecting interactions of naturally occurring cosmic-ray muons
within the scanned volume. Cosmic-ray muons provide deep penetration and
inherent safety due to their high momenta and natural source. However, the
technology's reliance on this source results in constrained muon flux, leading
to prolonged acquisition times, noisy reconstructions and image interpretation
challenges. To address these limitations, we developed a two-model deep
learning approach. First, we employed a conditional Wasserstein generative
adversarial network with gradient penalty (cWGAN-GP) to perform predictive
upsampling of undersampled muography images. Using the Structural Similarity
Index Measure (SSIM), 1-day sampled images matched the perceptual qualities of
a 21-day image, while the Peak Signal-to-Noise Ratio (PSNR) indicated noise
improvement equivalent to 31 days of sampling. A second cWGAN-GP model, trained
for semantic segmentation, quantitatively assessed the upsampling model's
impact on concrete sample features. This model achieved segmentation of rebar
grids and tendon ducts, with Dice-S{\o}rensen accuracy coefficients of 0.8174
and 0.8663. Notably, it could mitigate or remove z-plane smearing artifacts
caused by muography's inverse imaging problem. Both models were trained on a
comprehensive Geant4 Monte-Carlo simulation dataset reflecting realistic civil
infrastructure scenarios. Our results demonstrate significant improvements in
acquisition speed and image quality, marking a substantial step toward making
muography more practical for reinforced concrete infrastructure monitoring
applications.
|
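The gradient-penalty term that gives cWGAN-GP its name is a standard construction; the sketch below is the generic formulation on toy single-channel images, not the authors' training code.

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake):
    """Standard WGAN-GP penalty: push the critic's gradient norm toward 1 on
    random interpolates between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()

# Toy usage with a trivial critic on single-channel 32x32 "density map" tensors.
critic = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 1))
real = torch.randn(4, 1, 32, 32)
fake = torch.randn(4, 1, 32, 32)
gp = gradient_penalty(critic, real, fake)
(10.0 * gp).backward()        # usually added to the critic loss with weight ~10
print(float(gp))
```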
2502.03160 | Boyin Tan | Boyin Tan and Junjielong Xu and Zhouruixing Zhu and Pinjia He | AL-Bench: A Benchmark for Automatic Logging | 20pages | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logging, the practice of inserting log statements into source code, is
critical for improving software reliability. Recently, language model-based
techniques have been developed to automate log statement generation based on
input code. While these tools show promising results in prior studies, the
fairness of their results comparisons is not guaranteed due to the use of ad
hoc datasets. In addition, existing evaluation approaches exclusively dependent
on code similarity metrics fail to capture the impact of code diff on runtime
logging behavior, as minor code modifications can render a program uncompilable
or induce substantial discrepancies in log output semantics. To enhance the
consistency and reproducibility of logging evaluation, we introduce AL-Bench, a
comprehensive benchmark designed specifically for automatic logging tools.
AL-Bench includes a large-scale, high-quality, diverse dataset collected from
10 widely recognized projects with varying logging requirements. Moreover, it
introduces a novel dynamic evaluation methodology to provide a run-time
perspective of logging quality in addition to the traditional static evaluation
at source code level. Specifically, AL-Bench not only evaluates the similarity
between the oracle and predicted log statements in source code, but also
evaluates the difference between the log files printed by both log statements
during runtime. AL-Bench reveals significant limitations in existing static
evaluation, as all logging tools show average accuracy drops of 37.49%, 23.43%,
and 15.80% in predicting log position, level, and message compared to their
reported results. Furthermore, with dynamic evaluation, AL-Bench reveals that
20.1%-83.6% of these generated log statements are unable to compile. Moreover,
the best-performing tool achieves only 21.32% cosine similarity between the log
files of the oracle and generated log statements.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 13:32:39 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Feb 2025 13:46:57 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 04:13:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Tan",
"Boyin",
""
],
[
"Xu",
"Junjielong",
""
],
[
"Zhu",
"Zhouruixing",
""
],
[
"He",
"Pinjia",
""
]
] | TITLE: AL-Bench: A Benchmark for Automatic Logging
ABSTRACT: Logging, the practice of inserting log statements into source code, is
critical for improving software reliability. Recently, language model-based
techniques have been developed to automate log statement generation based on
input code. While these tools show promising results in prior studies, the
fairness of their results comparisons is not guaranteed due to the use of ad
hoc datasets. In addition, existing evaluation approaches exclusively dependent
on code similarity metrics fail to capture the impact of code diff on runtime
logging behavior, as minor code modifications can render a program uncompilable
or induce substantial discrepancies in log output semantics. To enhance the
consistency and reproducibility of logging evaluation, we introduce AL-Bench, a
comprehensive benchmark designed specifically for automatic logging tools.
AL-Bench includes a large-scale, high-quality, diverse dataset collected from
10 widely recognized projects with varying logging requirements. Moreover, it
introduces a novel dynamic evaluation methodology to provide a run-time
perspective of logging quality in addition to the traditional static evaluation
at source code level. Specifically, AL-Bench not only evaluates the similarity
between the oracle and predicted log statements in source code, but also
evaluates the difference between the log files printed by both log statements
during runtime. AL-Bench reveals significant limitations in existing static
evaluation, as all logging tools show average accuracy drops of 37.49%, 23.43%,
and 15.80% in predicting log position, level, and message compared to their
reported results. Furthermore, with dynamic evaluation, AL-Bench reveals that
20.1%-83.6% of these generated log statements fail to compile. Moreover,
the best-performing tool achieves only 21.32% cosine similarity between the log
files of the oracle and generated log statements.
|
2502.06682 | Tai-Yu Pan | Tai-Yu Pan, Sooyoung Jeon, Mengdi Fan, Jinsu Yoo, Zhenyang Feng, Mark
Campbell, Kilian Q. Weinberger, Bharath Hariharan, Wei-Lun Chao | Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint
in a Driving Scene | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-driving cars relying solely on ego-centric perception face limitations
in sensing, often failing to detect occluded, faraway objects. Collaborative
autonomous driving (CAV) seems like a promising direction, but collecting data
for development is non-trivial. It requires placing multiple sensor-equipped
agents in a real-world driving scene, simultaneously! As such, existing
datasets are limited in locations and agents. We introduce a novel surrogate to
the rescue, which is to generate realistic perception from different viewpoints
in a driving scene, conditioned on a real-world sample - the ego-car's sensory
data. This surrogate has huge potential: it could potentially turn any ego-car
dataset into a collaborative driving one to scale up the development of CAV. We
present the very first solution, using a combination of simulated collaborative
data and real ego-car data. Our method, Transfer Your Perspective (TYP), learns
a conditioned diffusion model whose output samples are not only realistic but
also consistent in both semantics and layouts with the given ego-car data.
Empirical results demonstrate TYP's effectiveness in aiding in a CAV setting.
In particular, TYP enables us to (pre-)train collaborative perception
algorithms like early and late fusion with little or no real-world
collaborative data, greatly facilitating downstream CAV applications.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 17:07:53 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 19:10:21 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Pan",
"Tai-Yu",
""
],
[
"Jeon",
"Sooyoung",
""
],
[
"Fan",
"Mengdi",
""
],
[
"Yoo",
"Jinsu",
""
],
[
"Feng",
"Zhenyang",
""
],
[
"Campbell",
"Mark",
""
],
[
"Weinberger",
"Kilian Q.",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Chao",
"Wei-Lun",
""
]
] | TITLE: Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint
in a Driving Scene
ABSTRACT: Self-driving cars relying solely on ego-centric perception face limitations
in sensing, often failing to detect occluded, faraway objects. Collaborative
autonomous driving (CAV) seems like a promising direction, but collecting data
for development is non-trivial. It requires placing multiple sensor-equipped
agents in a real-world driving scene, simultaneously! As such, existing
datasets are limited in locations and agents. We introduce a novel surrogate to
the rescue, which is to generate realistic perception from different viewpoints
in a driving scene, conditioned on a real-world sample - the ego-car's sensory
data. This surrogate has huge potential: it could potentially turn any ego-car
dataset into a collaborative driving one to scale up the development of CAV. We
present the very first solution, using a combination of simulated collaborative
data and real ego-car data. Our method, Transfer Your Perspective (TYP), learns
a conditioned diffusion model whose output samples are not only realistic but
also consistent in both semantics and layouts with the given ego-car data.
Empirical results demonstrate TYP's effectiveness in aiding in a CAV setting.
In particular, TYP enables us to (pre-)train collaborative perception
algorithms like early and late fusion with little or no real-world
collaborative data, greatly facilitating downstream CAV applications.
|
2502.07531 | Sixiao Zheng | Sixiao Zheng, Zimian Peng, Yanpeng Zhou, Yi Zhu, Hang Xu, Xiangru
Huang, Yanwei Fu | VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video
Generation | null | null | null | null | cs.CV cs.AI cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent image-to-video generation methods have demonstrated success in
enabling control over one or two visual elements, such as camera motion or
object motion. However, these methods are unable to offer control over multiple
visual elements due to limitations in data and network efficacy. In this paper,
we introduce VidCRAFT3, a novel framework for precise image-to-video generation
that enables control over camera motion, object motion, and lighting direction
simultaneously. VidCRAFT3 integrates three core components: Image2Cloud
generates a 3D point cloud from a reference image; ObjMotionNet encodes sparse
object trajectories using multi-scale optical flow features; and Spatial
Triple-Attention Transformer incorporates lighting direction embeddings via
parallel cross-attention modules. Additionally, we introduce the
VideoLightingDirection dataset, providing synthetic yet realistic video clips
with accurate per-frame lighting direction annotations, effectively mitigating
the lack of annotated real-world datasets. We further adopt a three-stage
training strategy, ensuring robust learning even without joint multi-element
annotations. Extensive experiments show that VidCRAFT3 produces high-quality
video content, outperforming state-of-the-art methods in control granularity
and visual coherence. Code and data will be publicly available.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 13:11:59 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Feb 2025 07:35:56 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 03:56:07 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zheng",
"Sixiao",
""
],
[
"Peng",
"Zimian",
""
],
[
"Zhou",
"Yanpeng",
""
],
[
"Zhu",
"Yi",
""
],
[
"Xu",
"Hang",
""
],
[
"Huang",
"Xiangru",
""
],
[
"Fu",
"Yanwei",
""
]
] | TITLE: VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video
Generation
ABSTRACT: Recent image-to-video generation methods have demonstrated success in
enabling control over one or two visual elements, such as camera motion or
object motion. However, these methods are unable to offer control over multiple
visual elements due to limitations in data and network efficacy. In this paper,
we introduce VidCRAFT3, a novel framework for precise image-to-video generation
that enables control over camera motion, object motion, and lighting direction
simultaneously. VidCRAFT3 integrates three core components: Image2Cloud
generates a 3D point cloud from a reference image; ObjMotionNet encodes sparse
object trajectories using multi-scale optical flow features; and Spatial
Triple-Attention Transformer incorporates lighting direction embeddings via
parallel cross-attention modules. Additionally, we introduce the
VideoLightingDirection dataset, providing synthetic yet realistic video clips
with accurate per-frame lighting direction annotations, effectively mitigating
the lack of annotated real-world datasets. We further adopt a three-stage
training strategy, ensuring robust learning even without joint multi-element
annotations. Extensive experiments show that VidCRAFT3 produces high-quality
video content, outperforming state-of-the-art methods in control granularity
and visual coherence. Code and data will be publicly available.
|
2502.07631 | Yinzhe Shen | Yinzhe Shen, Omer Sahin Tas, Kaiwen Wang, Royden Wagner, Christoph
Stiller | Divide and Merge: Motion and Semantic Learning in End-to-End Autonomous
Driving | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Perceiving the environment and its changes over time corresponds to two
fundamental yet heterogeneous types of information: semantics and motion.
Previous end-to-end autonomous driving works represent both types of
information in a single feature vector. However, including motion related
tasks, such as prediction and planning, impairs detection and tracking
performance, a phenomenon known as negative transfer in multi-task learning. To
address this issue, we propose Neural-Bayes motion decoding, a novel parallel
detection, tracking, and prediction method that separates semantic and motion
learning. Specifically, we employ a set of learned motion queries that operate
in parallel with detection and tracking queries, sharing a unified set of
recursively updated reference points. Moreover, we employ interactive semantic
decoding to enhance information exchange in semantic tasks, promoting positive
transfer. Experiments on the nuScenes dataset with UniAD and SparseDrive
confirm the effectiveness of our divide and merge approach, resulting in
performance improvements across perception, prediction, and planning. Our code
is available at https://github.com/shenyinzhe/DMAD.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 15:21:31 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 09:10:39 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Shen",
"Yinzhe",
""
],
[
"Tas",
"Omer Sahin",
""
],
[
"Wang",
"Kaiwen",
""
],
[
"Wagner",
"Royden",
""
],
[
"Stiller",
"Christoph",
""
]
] | TITLE: Divide and Merge: Motion and Semantic Learning in End-to-End Autonomous
Driving
ABSTRACT: Perceiving the environment and its changes over time corresponds to two
fundamental yet heterogeneous types of information: semantics and motion.
Previous end-to-end autonomous driving works represent both types of
information in a single feature vector. However, including motion related
tasks, such as prediction and planning, impairs detection and tracking
performance, a phenomenon known as negative transfer in multi-task learning. To
address this issue, we propose Neural-Bayes motion decoding, a novel parallel
detection, tracking, and prediction method that separates semantic and motion
learning. Specifically, we employ a set of learned motion queries that operate
in parallel with detection and tracking queries, sharing a unified set of
recursively updated reference points. Moreover, we employ interactive semantic
decoding to enhance information exchange in semantic tasks, promoting positive
transfer. Experiments on the nuScenes dataset with UniAD and SparseDrive
confirm the effectiveness of our divide and merge approach, resulting in
performance improvements across perception, prediction, and planning. Our code
is available at https://github.com/shenyinzhe/DMAD.
|
2502.09980 | Hsu-Kuang Chiu | Hsu-kuang Chiu, Ryo Hachiuma, Chien-Yi Wang, Stephen F. Smith,
Yu-Chiang Frank Wang, Min-Hung Chen | V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with
Multi-Modal Large Language Models | Our project website: https://eddyhkchiu.github.io/v2vllm.github.io/ | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Current autonomous driving vehicles rely mainly on their individual sensors
to understand surrounding scenes and plan for future trajectories, which can be
unreliable when the sensors are malfunctioning or occluded. To address this
problem, cooperative perception methods via vehicle-to-vehicle (V2V)
communication have been proposed, but they have tended to focus on perception
tasks like detection or tracking. How those approaches contribute to overall
cooperative planning performance is still under-explored. Inspired by recent
progress using Large Language Models (LLMs) to build autonomous driving
systems, we propose a novel problem setting that integrates a Multi-Modal LLM
into cooperative autonomous driving, with the proposed Vehicle-to-Vehicle
Question-Answering (V2V-QA) dataset and benchmark. We also propose our baseline
method Vehicle-to-Vehicle Multi-Modal Large Language Model (V2V-LLM), which
uses an LLM to fuse perception information from multiple connected autonomous
vehicles (CAVs) and answer various types of driving-related questions:
grounding, notable object identification, and planning. Experimental results
show that our proposed V2V-LLM can be a promising unified model architecture
for performing various tasks in cooperative autonomous driving, and outperforms
other baseline methods that use different fusion approaches. Our work also
creates a new research direction that can improve the safety of future
autonomous driving systems. The code and data will be released to the public to
facilitate open-source research in this field. Our project website:
https://eddyhkchiu.github.io/v2vllm.github.io/ .
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 08:05:41 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2025 19:34:15 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 20:13:32 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chiu",
"Hsu-kuang",
""
],
[
"Hachiuma",
"Ryo",
""
],
[
"Wang",
"Chien-Yi",
""
],
[
"Smith",
"Stephen F.",
""
],
[
"Wang",
"Yu-Chiang Frank",
""
],
[
"Chen",
"Min-Hung",
""
]
] | TITLE: V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with
Multi-Modal Large Language Models
ABSTRACT: Current autonomous driving vehicles rely mainly on their individual sensors
to understand surrounding scenes and plan for future trajectories, which can be
unreliable when the sensors are malfunctioning or occluded. To address this
problem, cooperative perception methods via vehicle-to-vehicle (V2V)
communication have been proposed, but they have tended to focus on perception
tasks like detection or tracking. How those approaches contribute to overall
cooperative planning performance is still under-explored. Inspired by recent
progress using Large Language Models (LLMs) to build autonomous driving
systems, we propose a novel problem setting that integrates a Multi-Modal LLM
into cooperative autonomous driving, with the proposed Vehicle-to-Vehicle
Question-Answering (V2V-QA) dataset and benchmark. We also propose our baseline
method Vehicle-to-Vehicle Multi-Modal Large Language Model (V2V-LLM), which
uses an LLM to fuse perception information from multiple connected autonomous
vehicles (CAVs) and answer various types of driving-related questions:
grounding, notable object identification, and planning. Experimental results
show that our proposed V2V-LLM can be a promising unified model architecture
for performing various tasks in cooperative autonomous driving, and outperforms
other baseline methods that use different fusion approaches. Our work also
creates a new research direction that can improve the safety of future
autonomous driving systems. The code and data will be released to the public to
facilitate open-source research in this field. Our project website:
https://eddyhkchiu.github.io/v2vllm.github.io/ .
|
2502.11570 | Arnaud Bougaham | Arnaud Bougaham and Beno\^it Fr\'enay | Towards a Trustworthy Anomaly Detection for Critical Applications
through Approximated Partial AUC Loss | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Anomaly Detection is a crucial step for critical applications such as those in the
industrial, medical or cybersecurity domains. These sectors share the same
requirement of handling the different types of classification errors
differently. Indeed, even if false positives are acceptable, false negatives are
not, because they would reflect a missed detection of a quality issue, a disease
or a cyber threat. To fulfill this requirement, we propose a method that
dynamically applies a trustworthy approximated partial AUC ROC loss (tapAUC). A
binary classifier is trained to optimize the specific range of the AUC ROC
curve that prevents the True Positive Rate (TPR) from reaching 100% while minimizing
the False Positive Rate (FPR). The optimal threshold that does not trigger any
false negative is then kept and used at the test step. The results show a TPR
of 92.52% at a 20.43% FPR for an average across 6 datasets, representing a TPR
improvement of 4.3% for an FPR cost of 12.2% against other state-of-the-art
methods. The code is available at https://github.com/ArnaudBougaham/tapAUC.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 08:59:59 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 20:27:35 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Bougaham",
"Arnaud",
""
],
[
"Frénay",
"Benoît",
""
]
] | TITLE: Towards a Trustworthy Anomaly Detection for Critical Applications
through Approximated Partial AUC Loss
ABSTRACT: Anomaly Detection is a crucial step for critical applications such as those in the
industrial, medical or cybersecurity domains. These sectors share the same
requirement of handling the different types of classification errors
differently. Indeed, even if false positives are acceptable, false negatives are
not, because they would reflect a missed detection of a quality issue, a disease
or a cyber threat. To fulfill this requirement, we propose a method that
dynamically applies a trustworthy approximated partial AUC ROC loss (tapAUC). A
binary classifier is trained to optimize the specific range of the AUC ROC
curve that prevents the True Positive Rate (TPR) from reaching 100% while minimizing
the False Positive Rate (FPR). The optimal threshold that does not trigger any
false negative is then kept and used at the test step. The results show a TPR
of 92.52% at a 20.43% FPR for an average across 6 datasets, representing a TPR
improvement of 4.3% for an FPR cost of 12.2% against other state-of-the-art
methods. The code is available at https://github.com/ArnaudBougaham/tapAUC.
|
2502.12895 | Georg Rehm | Fabio Barth, Georg Rehm | Multilingual European Language Models: Benchmarking Approaches and
Challenges | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The breakthrough of generative large language models (LLMs) that can solve
different tasks through chat interaction has led to a significant increase in
the use of general benchmarks to assess the quality or performance of these
models beyond individual applications. There is also a need for better methods
to evaluate and compare models due to the ever-increasing number of new
models being published. However, most of the established benchmarks revolve around
the English language. This paper analyses the benefits and limitations of
current evaluation datasets, focusing on multilingual European benchmarks. We
analyse seven multilingual benchmarks and identify four major challenges.
Furthermore, we discuss potential solutions to enhance translation quality and
mitigate cultural biases, including human-in-the-loop verification and
iterative translation ranking. Our analysis highlights the need for culturally
aware and rigorously validated benchmarks to assess the reasoning and
question-answering capabilities of multilingual LLMs accurately.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 14:32:17 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 16:57:12 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Barth",
"Fabio",
""
],
[
"Rehm",
"Georg",
""
]
] | TITLE: Multilingual European Language Models: Benchmarking Approaches and
Challenges
ABSTRACT: The breakthrough of generative large language models (LLMs) that can solve
different tasks through chat interaction has led to a significant increase in
the use of general benchmarks to assess the quality or performance of these
models beyond individual applications. There is also a need for better methods
to evaluate and compare models due to the ever-increasing number of new
models being published. However, most of the established benchmarks revolve around
the English language. This paper analyses the benefits and limitations of
current evaluation datasets, focusing on multilingual European benchmarks. We
analyse seven multilingual benchmarks and identify four major challenges.
Furthermore, we discuss potential solutions to enhance translation quality and
mitigate cultural biases, including human-in-the-loop verification and
iterative translation ranking. Our analysis highlights the need for culturally
aware and rigorously validated benchmarks to assess the reasoning and
question-answering capabilities of multilingual LLMs accurately.
|
2502.13820 | Aleksander Ficek | Aleksander Ficek, Somshubra Majumdar, Vahid Noroozi, Boris Ginsburg | Scoring Verifiers: Evaluating Synthetic Verification for Code and
Reasoning | null | null | null | null | cs.AI cs.CL cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | Synthetic verification techniques such as generating test cases and reward
modelling are common ways to enhance the coding capabilities of large language
models (LLMs) beyond predefined tests. Additionally, code verification has
recently found great success as a critical component in improving reasoning
capability of LLMs via reinforcement learning. In this paper, we propose an
approach that can transform existing coding benchmarks into scoring and
ranking datasets to evaluate the effectiveness of synthetic verifiers. We also
propose multiple metrics to measure different aspects of the synthetic
verifiers with the proposed benchmarks. By employing the proposed approach, we
release four new benchmarks (HE-R, HE-R+, MBPP-R, and MBPP-R+), and analyze
synthetic verification methods with standard, reasoning-based, and reward-based
LLMs. Our experiments show that reasoning can significantly improve test case
generation and that scaling the number of test cases enhances the verification
accuracy.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 15:32:11 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 18:19:14 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ficek",
"Aleksander",
""
],
[
"Majumdar",
"Somshubra",
""
],
[
"Noroozi",
"Vahid",
""
],
[
"Ginsburg",
"Boris",
""
]
] | TITLE: Scoring Verifiers: Evaluating Synthetic Verification for Code and
Reasoning
ABSTRACT: Synthetic verification techniques such as generating test cases and reward
modelling are common ways to enhance the coding capabilities of large language
models (LLMs) beyond predefined tests. Additionally, code verification has
recently found great success as a critical component in improving reasoning
capability of LLMs via reinforcement learning. In this paper, we propose an
approach that can transform existing coding benchmarks into scoring and
ranking datasets to evaluate the effectiveness of synthetic verifiers. We also
propose multiple metrics to measure different aspects of the synthetic
verifiers with the proposed benchmarks. By employing the proposed approach, we
release four new benchmarks (HE-R, HE-R+, MBPP-R, and MBPP-R+), and analyze
synthetic verification methods with standard, reasoning-based, and reward-based
LLMs. Our experiments show that reasoning can significantly improve test case
generation and that scaling the number of test cases enhances the verification
accuracy.
|
2502.18227 | Yachao Yuan Dr. | Yachao Yuan, Xiao Tang, Yu Huang, Jin Wang | Local Differential Privacy for Tensors in Distributed Computing Systems | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-sa/4.0/ | Tensor-valued data, increasingly common in distributed big data applications
like autonomous driving and smart healthcare, poses unique challenges for
privacy protection due to its multidimensional structure and the risk of losing
critical structural information. Traditional local differential privacy
methods, designed for scalars and matrices, are insufficient for tensors, as
they fail to preserve essential relationships among tensor elements. We
introduce TLDP, a novel LDP algorithm for Tensors, which employs a randomized
response mechanism to perturb tensor components while maintaining structural
integrity. To strike a better balance between utility and privacy, we
incorporate a weight matrix that selectively protects sensitive regions. Both
theoretical analysis and empirical findings from real-world datasets show that
TLDP achieves superior utility while preserving privacy, making it a robust
solution for high-dimensional tensor data.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 14:11:45 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 08:25:43 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Yuan",
"Yachao",
""
],
[
"Tang",
"Xiao",
""
],
[
"Huang",
"Yu",
""
],
[
"Wang",
"Jin",
""
]
] | TITLE: Local Differential Privacy for Tensors in Distributed Computing Systems
ABSTRACT: Tensor-valued data, increasingly common in distributed big data applications
like autonomous driving and smart healthcare, poses unique challenges for
privacy protection due to its multidimensional structure and the risk of losing
critical structural information. Traditional local differential privacy
methods, designed for scalars and matrices, are insufficient for tensors, as
they fail to preserve essential relationships among tensor elements. We
introduce TLDP, a novel LDP algorithm for Tensors, which employs a randomized
response mechanism to perturb tensor components while maintaining structural
integrity. To strike a better balance between utility and privacy, we
incorporate a weight matrix that selectively protects sensitive regions. Both
theoretical analysis and empirical findings from real-world datasets show that
TLDP achieves superior utility while preserving privacy, making it a robust
solution for high-dimensional tensor data.
|
2503.01845 | Vladislav Golyanik | Aleksei Zhuravlev and Zorah L\"ahner and Vladislav Golyanik | Denoising Functional Maps: Diffusion Models for Shape Correspondence | CVPR 2025; Project page:
https://alekseizhuravlev.github.io/denoising-functional-maps/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating correspondences between pairs of deformable shapes remains a
challenging problem. Despite substantial progress, existing methods lack broad
generalization capabilities and require category-specific training data. To
address these limitations, we propose a fundamentally new approach to shape
correspondence based on denoising diffusion models. In our method, a diffusion
model learns to directly predict the functional map, a low-dimensional
representation of a point-wise map between shapes. We use a large dataset of
synthetic human meshes for training and employ two steps to reduce the number
of functional maps that need to be learned. First, the maps refer to a template
rather than shape pairs. Second, the functional map is defined in a basis of
eigenvectors of the Laplacian, which is not unique due to sign ambiguity.
Therefore, we introduce an unsupervised approach to select a specific basis by
correcting the signs of eigenvectors based on surface features. Our model
achieves competitive performance on standard human datasets, meshes with
anisotropic connectivity, non-isometric humanoid shapes, as well as animals
compared to existing descriptor-based and large-scale shape deformation
methods. See our project page for the source code and the datasets.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:59:56 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 14:01:32 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhuravlev",
"Aleksei",
""
],
[
"Lähner",
"Zorah",
""
],
[
"Golyanik",
"Vladislav",
""
]
] | TITLE: Denoising Functional Maps: Diffusion Models for Shape Correspondence
ABSTRACT: Estimating correspondences between pairs of deformable shapes remains a
challenging problem. Despite substantial progress, existing methods lack broad
generalization capabilities and require category-specific training data. To
address these limitations, we propose a fundamentally new approach to shape
correspondence based on denoising diffusion models. In our method, a diffusion
model learns to directly predict the functional map, a low-dimensional
representation of a point-wise map between shapes. We use a large dataset of
synthetic human meshes for training and employ two steps to reduce the number
of functional maps that need to be learned. First, the maps refer to a template
rather than shape pairs. Second, the functional map is defined in a basis of
eigenvectors of the Laplacian, which is not unique due to sign ambiguity.
Therefore, we introduce an unsupervised approach to select a specific basis by
correcting the signs of eigenvectors based on surface features. Our model
achieves competitive performance on standard human datasets, meshes with
anisotropic connectivity, non-isometric humanoid shapes, as well as animals
compared to existing descriptor-based and large-scale shape deformation
methods. See our project page for the source code and the datasets.
|
2503.02175 | Saeed Ranjbar Alvar | Saeed Ranjbar Alvar, Gursimran Singh, Mohammad Akbari, Yong Zhang | DivPrune: Diversity-based Visual Token Pruning for Large Multimodal
Models | Accepted to CVPR 2025 | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Multimodal Models (LMMs) have emerged as powerful models capable of
understanding various data modalities, including text, images, and videos. LMMs
encode both text and visual data into tokens that are then combined and
processed by an integrated Large Language Model (LLM). Including visual tokens
substantially increases the total token count, often by thousands. The
increased input length for the LLM significantly raises the complexity of
inference, resulting in high latency in LMMs. To address this issue, token
pruning methods, which remove part of the visual tokens, are proposed. The
existing token pruning methods either require extensive calibration and
fine-tuning or rely on suboptimal importance metrics, which results in increased
redundancy among the retained tokens. In this paper, we first formulate token
pruning as a Max-Min Diversity Problem (MMDP), where the goal is to select a
subset such that the diversity among the selected tokens is maximized. Then,
we solve the MMDP to obtain the selected subset and prune the rest. The
proposed method, DivPrune, reduces redundancy and achieves the highest
diversity of the selected tokens. By ensuring high diversity, the selected
tokens better represent the original tokens, enabling effective performance
even at high pruning ratios without requiring fine-tuning. Extensive
experiments with various LMMs show that DivPrune achieves state-of-the-art
accuracy over 16 image- and video-language datasets. Additionally, DivPrune
reduces both the end-to-end latency and GPU memory usage for the tested models.
The code is available $\href{https://github.com/vbdi/divprune}{\text{here}}$.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 01:33:14 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 19:02:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Alvar",
"Saeed Ranjbar",
""
],
[
"Singh",
"Gursimran",
""
],
[
"Akbari",
"Mohammad",
""
],
[
"Zhang",
"Yong",
""
]
] | TITLE: DivPrune: Diversity-based Visual Token Pruning for Large Multimodal
Models
ABSTRACT: Large Multimodal Models (LMMs) have emerged as powerful models capable of
understanding various data modalities, including text, images, and videos. LMMs
encode both text and visual data into tokens that are then combined and
processed by an integrated Large Language Model (LLM). Including visual tokens
substantially increases the total token count, often by thousands. The
increased input length for the LLM significantly raises the complexity of
inference, resulting in high latency in LMMs. To address this issue, token
pruning methods, which remove part of the visual tokens, are proposed. The
existing token pruning methods either require extensive calibration and
fine-tuning or rely on suboptimal importance metrics, which results in increased
redundancy among the retained tokens. In this paper, we first formulate token
pruning as a Max-Min Diversity Problem (MMDP), where the goal is to select a
subset such that the diversity among the selected tokens is maximized. Then,
we solve the MMDP to obtain the selected subset and prune the rest. The
proposed method, DivPrune, reduces redundancy and achieves the highest
diversity of the selected tokens. By ensuring high diversity, the selected
tokens better represent the original tokens, enabling effective performance
even at high pruning ratios without requiring fine-tuning. Extensive
experiments with various LMMs show that DivPrune achieves state-of-the-art
accuracy over 16 image- and video-language datasets. Additionally, DivPrune
reduces both the end-to-end latency and GPU memory usage for the tested models.
The code is available $\href{https://github.com/vbdi/divprune}{\text{here}}$.
|
2503.04530 | Chen Li | Chen Li, Yinyi Luo, Anudeep Bolimera, Uzair Ahmed, Shri Kiran
Srinivasan, Hrishikesh Gokhale, Marios Savvides | SOLAR: Scalable Optimization of Large-scale Architecture for Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models excel in reasoning yet often rely on Chain-of-Thought
prompts, limiting performance on tasks demanding more nuanced topological
structures. We present SOLAR (Scalable Optimization of Large-scale Architecture
for Reasoning), a framework that dynamically optimizes Chain-of-Thought (CoT),
Tree-of-Thought (ToT), and Graph-of-Thought (GoT) topologies to boost accuracy
and efficiency. Our Topological-Annotation-Generation (TAG) system automates
dataset creation, annotation, and difficulty segmentation, leading to stronger
post-training and test-time performance. We also propose Topological-Scaling, a
curriculum-learning-based approach that adaptively combines post-training and
inference scaling for each task. On MATH and GSM8K, SOLAR delivers notable
gains: +5% accuracy with Topological Tuning, +9% with Topological Rewarding,
and +10.02% with Hybrid Scaling, while reducing response length by over 5%,
lowering inference latency. To further enhance efficiency, we introduce a
multi-task Topological Reward Model (M-TRM) that selects both the optimal
reasoning topology and final answer in a single pass, eliminating multiple
single-task TRMs. Remarkably, M-TRM also surpasses all single-task TRMs,
improving accuracy by +10% and rank correlation by +9%. Overall, SOLAR
establishes a new benchmark for scalable, high-precision LLM reasoning and
introduces a fully automated, dynamic topology competition mechanism.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 15:19:17 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 04:51:45 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Li",
"Chen",
""
],
[
"Luo",
"Yinyi",
""
],
[
"Bolimera",
"Anudeep",
""
],
[
"Ahmed",
"Uzair",
""
],
[
"Srinivasan",
"Shri Kiran",
""
],
[
"Gokhale",
"Hrishikesh",
""
],
[
"Savvides",
"Marios",
""
]
] | TITLE: SOLAR: Scalable Optimization of Large-scale Architecture for Reasoning
ABSTRACT: Large Language Models excel in reasoning yet often rely on Chain-of-Thought
prompts, limiting performance on tasks demanding more nuanced topological
structures. We present SOLAR (Scalable Optimization of Large-scale Architecture
for Reasoning), a framework that dynamically optimizes Chain-of-Thought (CoT),
Tree-of-Thought (ToT), and Graph-of-Thought (GoT) topologies to boost accuracy
and efficiency. Our Topological-Annotation-Generation (TAG) system automates
dataset creation, annotation, and difficulty segmentation, leading to stronger
post-training and test-time performance. We also propose Topological-Scaling, a
curriculum-learning-based approach that adaptively combines post-training and
inference scaling for each task. On MATH and GSM8K, SOLAR delivers notable
gains: +5% accuracy with Topological Tuning, +9% with Topological Rewarding,
and +10.02% with Hybrid Scaling, while reducing response length by over 5%,
lowering inference latency. To further enhance efficiency, we introduce a
multi-task Topological Reward Model (M-TRM) that selects both the optimal
reasoning topology and final answer in a single pass, eliminating multiple
single-task TRMs. Remarkably, M-TRM also surpasses all single-task TRMs,
improving accuracy by +10% and rank correlation by +9%. Overall, SOLAR
establishes a new benchmark for scalable, high-precision LLM reasoning and
introduces a fully automated, dynamic topology competition mechanism.
|
2503.07649 | Kanghui Ning | Kanghui Ning, Zijie Pan, Yu Liu, Yushan Jiang, James Y. Zhang, Kashif
Rasul, Anderson Schneider, Lintao Ma, Yuriy Nevmyvaka, Dongjin Song | TS-RAG: Retrieval-Augmented Generation based Time Series Foundation
Models are Stronger Zero-Shot Forecaster | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, Large Language Models (LLMs) and Foundation Models (FMs) have
become prevalent for time series forecasting tasks. However, fine-tuning large
language models (LLMs) for forecasting enables the adaptation to specific
domains but may not generalize well across diverse, unseen datasets. Meanwhile,
existing time series foundation models (TSFMs) lack inherent mechanisms for
domain adaptation and suffer from limited interpretability, making them
suboptimal for zero-shot forecasting. To this end, we present TS-RAG, a
retrieval-augmented generation based time series forecasting framework that
enhances the generalization capability and interpretability of TSFMs.
Specifically, TS-RAG leverages pre-trained time series encoders to retrieve
semantically relevant time series segments from a dedicated knowledge database,
incorporating contextual patterns for the given time series query. Next, we
develop a learnable Mixture-of-Experts (MoE)-based augmentation module, which
dynamically fuses retrieved time series patterns with the TSFM's representation
of the input query, improving forecasting accuracy without requiring
task-specific fine-tuning. Thorough empirical studies on seven public benchmark
datasets demonstrate that TS-RAG achieves state-of-the-art zero-shot
forecasting performance, outperforming TSFMs by up to 6.51% across diverse
domains and showcasing desired interpretability.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:48:48 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 21:23:59 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ning",
"Kanghui",
""
],
[
"Pan",
"Zijie",
""
],
[
"Liu",
"Yu",
""
],
[
"Jiang",
"Yushan",
""
],
[
"Zhang",
"James Y.",
""
],
[
"Rasul",
"Kashif",
""
],
[
"Schneider",
"Anderson",
""
],
[
"Ma",
"Lintao",
""
],
[
"Nevmyvaka",
"Yuriy",
""
],
[
"Song",
"Dongjin",
""
]
] | TITLE: TS-RAG: Retrieval-Augmented Generation based Time Series Foundation
Models are Stronger Zero-Shot Forecaster
ABSTRACT: Recently, Large Language Models (LLMs) and Foundation Models (FMs) have
become prevalent for time series forecasting tasks. However, fine-tuning large
language models (LLMs) for forecasting enables the adaptation to specific
domains but may not generalize well across diverse, unseen datasets. Meanwhile,
existing time series foundation models (TSFMs) lack inherent mechanisms for
domain adaptation and suffer from limited interpretability, making them
suboptimal for zero-shot forecasting. To this end, we present TS-RAG, a
retrieval-augmented generation based time series forecasting framework that
enhances the generalization capability and interpretability of TSFMs.
Specifically, TS-RAG leverages pre-trained time series encoders to retrieve
semantically relevant time series segments from a dedicated knowledge database,
incorporating contextual patterns for the given time series query. Next, we
develop a learnable Mixture-of-Experts (MoE)-based augmentation module, which
dynamically fuses retrieved time series patterns with the TSFM's representation
of the input query, improving forecasting accuracy without requiring
task-specific fine-tuning. Thorough empirical studies on seven public benchmark
datasets demonstrate that TS-RAG achieves state-of-the-art zero-shot
forecasting performance, outperforming TSFMs by up to 6.51% across diverse
domains and showcasing desired interpretability.
|
2503.09423 | Kechun Xu | Kechun Xu, Xunlong Xia, Kaixuan Wang, Yifei Yang, Yunxuan Mao, Bing
Deng, Rong Xiong, Yue Wang | Efficient Alignment of Unconditioned Action Prior for
Language-conditioned Pick and Place in Clutter | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the task of language-conditioned pick and place in clutter, where a
robot should grasp a target object in open clutter and move it to a specified
place. Some approaches learn end-to-end policies with features from vision
foundation models, requiring large datasets. Others combine foundation models
in a zero-shot setting, suffering from cascading errors. In addition, they
primarily leverage vision and language foundation models, focusing less on
action priors. In this paper, we aim to develop an effective policy by
integrating foundation priors from vision, language, and action. We propose
A$^2$, an action prior alignment method that aligns unconditioned action priors
with 3D vision-language priors by learning one attention layer. The alignment
formulation enables our policy to train with less data and preserve zero-shot
generalization capabilities. We show that a shared policy for both pick and
place actions enhances the performance for each task, and introduce a policy
adaptation scheme to accommodate the multi-modal nature of actions. Extensive
experiments in simulation and the real world show that our policy achieves
higher task success rates with fewer steps for both pick and place tasks in
clutter, effectively generalizing to unseen objects and language instructions.
Videos and codes are available at https://xukechun.github.io/papers/A2.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 14:20:33 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 09:52:34 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Xu",
"Kechun",
""
],
[
"Xia",
"Xunlong",
""
],
[
"Wang",
"Kaixuan",
""
],
[
"Yang",
"Yifei",
""
],
[
"Mao",
"Yunxuan",
""
],
[
"Deng",
"Bing",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
]
] | TITLE: Efficient Alignment of Unconditioned Action Prior for
Language-conditioned Pick and Place in Clutter
ABSTRACT: We study the task of language-conditioned pick and place in clutter, where a
robot should grasp a target object in open clutter and move it to a specified
place. Some approaches learn end-to-end policies with features from vision
foundation models, requiring large datasets. Others combine foundation models
in a zero-shot setting, suffering from cascading errors. In addition, they
primarily leverage vision and language foundation models, focusing less on
action priors. In this paper, we aim to develop an effective policy by
integrating foundation priors from vision, language, and action. We propose
A$^2$, an action prior alignment method that aligns unconditioned action priors
with 3D vision-language priors by learning one attention layer. The alignment
formulation enables our policy to train with less data and preserve zero-shot
generalization capabilities. We show that a shared policy for both pick and
place actions enhances the performance for each task, and introduce a policy
adaptation scheme to accommodate the multi-modal nature of actions. Extensive
experiments in simulation and the real world show that our policy achieves
higher task success rates with fewer steps for both pick and place tasks in
clutter, effectively generalizing to unseen objects and language instructions.
Videos and codes are available at https://xukechun.github.io/papers/A2.
|
2503.10732 | Shima Shabani | Shima Shabani, Mohammadsadegh Khoshghiaferezaee and Michael Breu{\ss} | Sparse Dictionary Learning for Image Recovery by Iterative Shrinkage | 19 pages, 5 Figures, IntelliSys 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper we study the sparse coding problem in the context of sparse
dictionary learning for image recovery. To this end, we consider and compare
several state-of-the-art sparse optimization methods constructed using the
shrinkage operation. As the mathematical setting of these methods, we consider
an online approach as the algorithmic basis, together with the basis pursuit
denoising problem that arises from the convex optimization approach to the
dictionary learning problem.
By a dedicated construction of datasets and corresponding dictionaries, we
study the effect of enlarging the underlying learning database on
reconstruction quality making use of several error measures. Our study
illuminates that the choice of the optimization method may be practically
important depending on the availability of training data. For the different
training-data settings considered in our study, we also examine the
computational efficiency of the assessed optimization methods.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 13:45:37 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 08:08:10 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Shabani",
"Shima",
""
],
[
"Khoshghiaferezaee",
"Mohammadsadegh",
""
],
[
"Breuß",
"Michael",
""
]
] | TITLE: Sparse Dictionary Learning for Image Recovery by Iterative Shrinkage
ABSTRACT: In this paper we study the sparse coding problem in the context of sparse
dictionary learning for image recovery. To this end, we consider and compare
several state-of-the-art sparse optimization methods constructed using the
shrinkage operation. As the mathematical setting of these methods, we consider
an online approach as the algorithmic basis, together with the basis pursuit
denoising problem that arises from the convex optimization approach to the
dictionary learning problem.
By a dedicated construction of datasets and corresponding dictionaries, we
study the effect of enlarging the underlying learning database on
reconstruction quality making use of several error measures. Our study
illuminates that the choice of the optimization method may be practically
important depending on the availability of training data. For the different
training-data settings considered in our study, we also examine the
computational efficiency of the assessed optimization methods.
|
2503.11206 | Javier Naranjo-Alcazar | Andres Larroza, Javier Naranjo-Alcazar, Vicent Ortiz Castell\'o, Pedro
Zuccarello | Comparative Study of Spike Encoding Methods for Environmental Sound
Classification | Under review EUSIPCO 2025 | null | null | null | cs.SD cs.ET eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs) offer a promising approach to reduce energy
consumption and computational demands, making them particularly beneficial for
embedded machine learning in edge applications. However, data from conventional
digital sensors must first be converted into spike trains to be processed using
neuromorphic computing technologies. The classification of environmental sounds
presents unique challenges due to the high variability of frequencies,
background noise, and overlapping acoustic events. Despite these challenges,
most studies on spike-based audio encoding focus on speech processing, leaving
non-speech environmental sounds underexplored. In this work, we conduct a
comprehensive comparison of widely used spike encoding techniques, evaluating
their effectiveness on the ESC-10 dataset. By understanding the impact of
encoding choices on environmental sound processing, researchers and
practitioners can select the most suitable approach for real-world applications
such as smart surveillance, environmental monitoring, and industrial acoustic
analysis. This study serves as a benchmark for spike encoding in environmental
sound classification, providing a foundational reference for future research in
neuromorphic audio processing.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:52:04 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 10:12:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Larroza",
"Andres",
""
],
[
"Naranjo-Alcazar",
"Javier",
""
],
[
"Castelló",
"Vicent Ortiz",
""
],
[
"Zuccarello",
"Pedro",
""
]
] | TITLE: Comparative Study of Spike Encoding Methods for Environmental Sound
Classification
ABSTRACT: Spiking Neural Networks (SNNs) offer a promising approach to reduce energy
consumption and computational demands, making them particularly beneficial for
embedded machine learning in edge applications. However, data from conventional
digital sensors must first be converted into spike trains to be processed using
neuromorphic computing technologies. The classification of environmental sounds
presents unique challenges due to the high variability of frequencies,
background noise, and overlapping acoustic events. Despite these challenges,
most studies on spike-based audio encoding focus on speech processing, leaving
non-speech environmental sounds underexplored. In this work, we conduct a
comprehensive comparison of widely used spike encoding techniques, evaluating
their effectiveness on the ESC-10 dataset. By understanding the impact of
encoding choices on environmental sound processing, researchers and
practitioners can select the most suitable approach for real-world applications
such as smart surveillance, environmental monitoring, and industrial acoustic
analysis. This study serves as a benchmark for spike encoding in environmental
sound classification, providing a foundational reference for future research in
neuromorphic audio processing.
|
2503.14485 | Yiqun Mei | Yiqun Mei, Mingming He, Li Ma, Julien Philip, Wenqi Xian, David M
George, Xueming Yu, Gabriel Dedic, Ahmet Levent Ta\c{s}el, Ning Yu, Vishal M.
Patel and Paul Debevec | Lux Post Facto: Learning Portrait Performance Relighting with
Conditional Video Diffusion and a Hybrid Dataset | CVPR 2025 | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video portrait relighting remains challenging because the results need to be
both photorealistic and temporally stable. This typically requires a strong
model design that can capture complex facial reflections as well as intensive
training on a high-quality paired video dataset, such as dynamic
one-light-at-a-time (OLAT). In this work, we introduce Lux Post Facto, a novel
portrait video relighting method that produces both photorealistic and
temporally consistent lighting effects. From the model side, we design a new
conditional video diffusion model built upon a state-of-the-art pre-trained
video diffusion model, alongside a new lighting injection mechanism to enable precise
control. This way we leverage strong spatial and temporal generative capability
to generate plausible solutions to the ill-posed relighting problem. Our
technique uses a hybrid dataset consisting of static expression OLAT data and
in-the-wild portrait performance videos to jointly learn relighting and
temporal modeling. This avoids the need to acquire paired video data in
different lighting conditions. Our extensive experiments show that our model
produces state-of-the-art results both in terms of photorealism and temporal
consistency.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:55:22 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 02:46:45 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Mei",
"Yiqun",
""
],
[
"He",
"Mingming",
""
],
[
"Ma",
"Li",
""
],
[
"Philip",
"Julien",
""
],
[
"Xian",
"Wenqi",
""
],
[
"George",
"David M",
""
],
[
"Yu",
"Xueming",
""
],
[
"Dedic",
"Gabriel",
""
],
[
"Taşel",
"Ahmet Levent",
""
],
[
"Yu",
"Ning",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Debevec",
"Paul",
""
]
] | TITLE: Lux Post Facto: Learning Portrait Performance Relighting with
Conditional Video Diffusion and a Hybrid Dataset
ABSTRACT: Video portrait relighting remains challenging because the results need to be
both photorealistic and temporally stable. This typically requires a strong
model design that can capture complex facial reflections as well as intensive
training on a high-quality paired video dataset, such as dynamic
one-light-at-a-time (OLAT). In this work, we introduce Lux Post Facto, a novel
portrait video relighting method that produces both photorealistic and
temporally consistent lighting effects. From the model side, we design a new
conditional video diffusion model built upon a state-of-the-art pre-trained
video diffusion model, alongside a new lighting injection mechanism to enable precise
control. This way we leverage strong spatial and temporal generative capability
to generate plausible solutions to the ill-posed relighting problem. Our
technique uses a hybrid dataset consisting of static expression OLAT data and
in-the-wild portrait performance videos to jointly learn relighting and
temporal modeling. This avoids the need to acquire paired video data in
different lighting conditions. Our extensive experiments show that our model
produces state-of-the-art results both in terms of photorealism and temporal
consistency.
|
2503.14489 | Jinghao Zhou | Jensen Zhou, Hang Gao, Vikram Voleti, Aaryaman Vasishta, Chun-Han Yao,
Mark Boss, Philip Torr, Christian Rupprecht, Varun Jampani | Stable Virtual Camera: Generative View Synthesis with Diffusion Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present Stable Virtual Camera (Seva), a generalist diffusion model that
creates novel views of a scene, given any number of input views and target
cameras. Existing works struggle to generate either large viewpoint changes or
temporally smooth samples, while relying on specific task configurations. Our
approach overcomes these limitations through simple model design, optimized
training recipe, and flexible sampling strategy that generalize across view
synthesis tasks at test time. As a result, our samples maintain high
consistency without requiring additional 3D representation-based distillation,
thus streamlining view synthesis in the wild. Furthermore, we show that our
method can generate high-quality videos lasting up to half a minute with
seamless loop closure. Extensive benchmarking demonstrates that Seva
outperforms existing methods across different datasets and settings. Project
page with code and model: https://stable-virtual-camera.github.io/.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:57:22 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 18:22:54 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhou",
"Jensen",
""
],
[
"Gao",
"Hang",
""
],
[
"Voleti",
"Vikram",
""
],
[
"Vasishta",
"Aaryaman",
""
],
[
"Yao",
"Chun-Han",
""
],
[
"Boss",
"Mark",
""
],
[
"Torr",
"Philip",
""
],
[
"Rupprecht",
"Christian",
""
],
[
"Jampani",
"Varun",
""
]
] | TITLE: Stable Virtual Camera: Generative View Synthesis with Diffusion Models
ABSTRACT: We present Stable Virtual Camera (Seva), a generalist diffusion model that
creates novel views of a scene, given any number of input views and target
cameras. Existing works struggle to generate either large viewpoint changes or
temporally smooth samples, while relying on specific task configurations. Our
approach overcomes these limitations through simple model design, optimized
training recipe, and flexible sampling strategy that generalize across view
synthesis tasks at test time. As a result, our samples maintain high
consistency without requiring additional 3D representation-based distillation,
thus streamlining view synthesis in the wild. Furthermore, we show that our
method can generate high-quality videos lasting up to half a minute with
seamless loop closure. Extensive benchmarking demonstrates that Seva
outperforms existing methods across different datasets and settings. Project
page with code and model: https://stable-virtual-camera.github.io/.
|
2503.18950 | Taeksoo Kim | Taeksoo Kim and Hanbyul Joo | Target-Aware Video Diffusion Models | The project page is available at https://taeksuu.github.io/tavid/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a target-aware video diffusion model that generates videos from an
input image in which an actor interacts with a specified target while
performing a desired action. The target is defined by a segmentation mask and
the desired action is described via a text prompt. Unlike existing controllable
image-to-video diffusion models that often rely on dense structural or motion
cues to guide the actor's movements toward the target, our target-aware model
requires only a simple mask to indicate the target, leveraging the
generalization capabilities of pretrained models to produce plausible actions.
This makes our method particularly effective for human-object interaction (HOI)
scenarios, where providing precise action guidance is challenging, and further
enables the use of video diffusion models for high-level action planning in
applications such as robotics. We build our target-aware model by extending a
baseline model to incorporate the target mask as an additional input. To
enforce target awareness, we introduce a special token that encodes the
target's spatial information within the text prompt. We then fine-tune the
model with our curated dataset using a novel cross-attention loss that aligns
the cross-attention maps associated with this token with the input target mask.
To further improve performance, we selectively apply this loss to the most
semantically relevant transformer blocks and attention regions. Experimental
results show that our target-aware model outperforms existing solutions in
generating videos where actors interact accurately with the specified targets.
We further demonstrate its efficacy in two downstream applications: video
content creation and zero-shot 3D HOI motion synthesis.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 17:59:59 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 14:11:15 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Kim",
"Taeksoo",
""
],
[
"Joo",
"Hanbyul",
""
]
] | TITLE: Target-Aware Video Diffusion Models
ABSTRACT: We present a target-aware video diffusion model that generates videos from an
input image in which an actor interacts with a specified target while
performing a desired action. The target is defined by a segmentation mask and
the desired action is described via a text prompt. Unlike existing controllable
image-to-video diffusion models that often rely on dense structural or motion
cues to guide the actor's movements toward the target, our target-aware model
requires only a simple mask to indicate the target, leveraging the
generalization capabilities of pretrained models to produce plausible actions.
This makes our method particularly effective for human-object interaction (HOI)
scenarios, where providing precise action guidance is challenging, and further
enables the use of video diffusion models for high-level action planning in
applications such as robotics. We build our target-aware model by extending a
baseline model to incorporate the target mask as an additional input. To
enforce target awareness, we introduce a special token that encodes the
target's spatial information within the text prompt. We then fine-tune the
model with our curated dataset using a novel cross-attention loss that aligns
the cross-attention maps associated with this token with the input target mask.
To further improve performance, we selectively apply this loss to the most
semantically relevant transformer blocks and attention regions. Experimental
results show that our target-aware model outperforms existing solutions in
generating videos where actors interact accurately with the specified targets.
We further demonstrate its efficacy in two downstream applications: video
content creation and zero-shot 3D HOI motion synthesis.
|
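As a rough illustration of the cross-attention alignment idea described in the Target-Aware Video Diffusion abstract above, the sketch below computes a simple loss that pushes the attention map of a designated target token toward a binary target mask. All names (`target_attention_loss`, the token index, the min-max normalization, and the binary cross-entropy form) are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch only: align one token's cross-attention map with a target mask.
import numpy as np

def target_attention_loss(attn, mask, target_token, eps=1e-8):
    """attn: (tokens, H, W) cross-attention maps; mask: (H, W) binary target mask."""
    a = attn[target_token]
    a = (a - a.min()) / (a.max() - a.min() + eps)   # normalize the map to [0, 1]
    # Binary cross-entropy between the normalized map and the target mask.
    return float(-np.mean(mask * np.log(a + eps) + (1 - mask) * np.log(1 - a + eps)))

rng = np.random.default_rng(0)
attn = rng.random((4, 16, 16))      # 4 text tokens, 16x16 spatial attention (hypothetical shapes)
mask = np.zeros((16, 16))
mask[4:9, 4:9] = 1.0                # hypothetical target region
print("loss for token 2:", target_attention_loss(attn, mask, target_token=2))
```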
2503.20087 | Dmitry Rokhlin B. | Dmitry B. Rokhlin and Olga V. Gurtovaya | Random feature-based double Vovk-Azoury-Warmuth algorithm for online
multi-kernel learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel multi-kernel learning algorithm, VAW$^2$, for online
least squares regression in reproducing kernel Hilbert spaces (RKHS). VAW$^2$
leverages random Fourier feature-based functional approximation and the
Vovk-Azoury-Warmuth (VAW) method in a two-level procedure: VAW is used to
construct expert strategies from random features generated for each kernel at
the first level, and then again to combine their predictions at the second
level. A theoretical analysis yields a regret bound of $O(T^{1/2}\ln T)$ in
expectation with respect to artificial randomness, when the number of random
features scales as $T^{1/2}$. Empirical results on some benchmark datasets
demonstrate that VAW$^2$ achieves superior performance compared to the existing
online multi-kernel learning algorithms: Raker and OMKL-GF, and to other
theoretically grounded methods involving convex combination of expert
predictions at the second level.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 21:57:35 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 18:53:42 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Rokhlin",
"Dmitry B.",
""
],
[
"Gurtovaya",
"Olga V.",
""
]
] | TITLE: Random feature-based double Vovk-Azoury-Warmuth algorithm for online
multi-kernel learning
ABSTRACT: We introduce a novel multi-kernel learning algorithm, VAW$^2$, for online
least squares regression in reproducing kernel Hilbert spaces (RKHS). VAW$^2$
leverages random Fourier feature-based functional approximation and the
Vovk-Azoury-Warmuth (VAW) method in a two-level procedure: VAW is used to
construct expert strategies from random features generated for each kernel at
the first level, and then again to combine their predictions at the second
level. A theoretical analysis yields a regret bound of $O(T^{1/2}\ln T)$ in
expectation with respect to artificial randomness, when the number of random
features scales as $T^{1/2}$. Empirical results on some benchmark datasets
demonstrate that VAW$^2$ achieves superior performance compared to the existing
online multi-kernel learning algorithms: Raker and OMKL-GF, and to other
theoretically grounded methods involving convex combination of expert
predictions at the second level.
|
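To make the two ingredients named in the VAW$^2$ abstract above more concrete, here is a minimal, unofficial sketch: random Fourier features approximating an RBF kernel, fed into a single Vovk-Azoury-Warmuth online least-squares forecaster. The bandwidth, regularization, feature count, and function names are illustrative assumptions, and the two-level expert-combination step of VAW$^2$ is deliberately omitted.

```python
# Unofficial sketch: random Fourier features + a single-level VAW forecaster.
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=50, bandwidth=1.0):
    """Map inputs to random Fourier features approximating an RBF kernel."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def vaw_online(Phi, y, reg=1.0):
    """Vovk-Azoury-Warmuth forecaster over a stream of (feature, target) pairs.

    Round t predicts phi_t^T (reg*I + sum_{s<=t} phi_s phi_s^T)^{-1} sum_{s<t} y_s phi_s,
    i.e. the current features enter the Gram matrix before the target is revealed.
    """
    n, p = Phi.shape
    A = reg * np.eye(p)          # regularized Gram matrix
    b = np.zeros(p)              # accumulated y_s * phi_s
    preds = np.empty(n)
    for t in range(n):
        phi = Phi[t]
        A += np.outer(phi, phi)  # include current features (the VAW trick)
        preds[t] = phi @ np.linalg.solve(A, b)
        b += y[t] * phi          # reveal the target only after predicting
    return preds

# Tiny synthetic stream to exercise the sketch.
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print("mean squared online error:", np.mean((vaw_online(rff_features(X), y) - y) ** 2))
```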
2503.20204 | Yuto Nakamura Mr. | Yuto Nakamura, Yuma Kuroda, Shintaro Sato, Naofumi Ohnishi | Energy transfer and budget analysis for transient process with
operator-driven reduced-order model | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present the possibility of energy transfer and budget analysis for
transient flow using eigenmodes of the operator from the Navier-Stokes
equation. We derive the energy transfer equation, which provides the energy
budget for the eigenmodes, through the Galerkin projection of the equation
using the bi-orthogonality of the eigenmodes and the adjoint mode. Energy
budget and transfer analysis between modes were conducted for two-dimensional
flow around a cylinder with eigenmodes growing or decaying from a steady flow.
Using the linearized energy transfer equation and eigenmodes from global
stability analysis, we identify the energy budget and spatial distribution that
determine mode growth rates. Moreover, energy budget and transfer analysis are
realized by considering the time evolution of the eigenmodes, even during the
nonlinear development of the eigenmodes. By introducing time-varying dynamic
mode decomposition with a phase-control strategy for multiple time-series
datasets from numerical simulations of the phase-controlled initial flow,
time-varying eigenmodes are extracted in the transient two-dimensional cylinder
flow. With the extracted time-dependent modes and the derived energy transfer
equations, the time evolution of the energy budget and spatial distribution of
energy transfer can be computed until the eigenmodes developed from the steady
field reach the post-transient periodic flow. From the time variation of the
energy budget and the transfer distribution, the transient process of the
cylinder flow can be characterized by the ratio of the magnitude of viscous
diffusion for the eigenmode and energy transfer from the base flow.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:00:47 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 04:53:41 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Nakamura",
"Yuto",
""
],
[
"Kuroda",
"Yuma",
""
],
[
"Sato",
"Shintaro",
""
],
[
"Ohnishi",
"Naofumi",
""
]
] | TITLE: Energy transfer and budget analysis for transient process with
operator-driven reduced-order model
ABSTRACT: We present the possibility of energy transfer and budget analysis for
transient flow using eigenmodes of the operator from the Navier-Stokes
equation. We derive the energy transfer equation, which provides the energy
budget for the eigenmodes, through the Galerkin projection of the equation
using the bi-orthogonality of the eigenmodes and the adjoint mode. Energy
budget and transfer analysis between modes were conducted for two-dimensional
flow around a cylinder with eigenmodes growing or decaying from a steady flow.
Using the linearized energy transfer equation and eigenmodes from global
stability analysis, we identify the energy budget and spatial distribution that
determine mode growth rates. Moreover, energy budget and transfer analysis are
realized by considering the time evolution of the eigenmodes, even during the
nonlinear development of the eigenmodes. By introducing time-varying dynamic
mode decomposition with a phase-control strategy for multiple time-series
datasets from numerical simulations of the phase-controlled initial flow,
time-varying eigenmodes are extracted in the transient two-dimensional cylinder
flow. With the extracted time-dependent modes and the derived energy transfer
equations, the time evolution of the energy budget and spatial distribution of
energy transfer can be computed until the eigenmodes developed from the steady
field reach the post-transient periodic flow. From the time variation of the
energy budget and the transfer distribution, the transient process of the
cylinder flow can be characterized by the ratio of the magnitude of viscous
diffusion for the eigenmode and energy transfer from the base flow.
|
2503.22288 | Min Fang | Ruiguang Pei, Junjie Wu, Dan Peng, Min Fang, Jianan Zhang, Zhihui Fu,
Jun Wang | SimDC: A High-Fidelity Device Simulation Platform for Device-Cloud
Collaborative Computing | Accepted by ICDCS 2025 | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | The advent of edge intelligence and escalating concerns for data privacy
protection have sparked a surge of interest in device-cloud collaborative
computing. Large-scale device deployments to validate prototype solutions are
often prohibitively expensive and practically challenging, resulting in a
pronounced demand for simulation tools that can emulate real-world scenarios.
However, existing simulators predominantly rely solely on high-performance
servers to emulate edge computing devices, overlooking (1) the discrepancies
between virtual computing units and actual heterogeneous computing devices and
(2) the simulation of device behaviors in real-world environments. In this
paper, we propose a high-fidelity device simulation platform, called SimDC,
which uses a hybrid heterogeneous resource and integrates high-performance
servers and physical mobile phones. Utilizing this platform, developers can
simulate numerous devices for functional testing cost-effectively and capture
precise operational responses from varied real devices. To simulate real
behaviors of heterogeneous devices, we offer a configurable device behavior
traffic controller that dispatches results on devices to the cloud using a
user-defined operation strategy. Comprehensive experiments on the public
dataset show the effectiveness of our simulation platform and its great
potential for application. The code is available at
https://github.com/opas-lab/olearning-sim.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 10:04:40 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 04:07:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Pei",
"Ruiguang",
""
],
[
"Wu",
"Junjie",
""
],
[
"Peng",
"Dan",
""
],
[
"Fang",
"Min",
""
],
[
"Zhang",
"Jianan",
""
],
[
"Fu",
"Zhihui",
""
],
[
"Wang",
"Jun",
""
]
] | TITLE: SimDC: A High-Fidelity Device Simulation Platform for Device-Cloud
Collaborative Computing
ABSTRACT: The advent of edge intelligence and escalating concerns for data privacy
protection have sparked a surge of interest in device-cloud collaborative
computing. Large-scale device deployments to validate prototype solutions are
often prohibitively expensive and practically challenging, resulting in a
pronounced demand for simulation tools that can emulate real-world scenarios.
However, existing simulators predominantly rely solely on high-performance
servers to emulate edge computing devices, overlooking (1) the discrepancies
between virtual computing units and actual heterogeneous computing devices and
(2) the simulation of device behaviors in real-world environments. In this
paper, we propose a high-fidelity device simulation platform, called SimDC,
which uses a hybrid heterogeneous resource and integrates high-performance
servers and physical mobile phones. Utilizing this platform, developers can
simulate numerous devices for functional testing cost-effectively and capture
precise operational responses from varied real devices. To simulate real
behaviors of heterogeneous devices, we offer a configurable device behavior
traffic controller that dispatches results on devices to the cloud using a
user-defined operation strategy. Comprehensive experiments on the public
dataset show the effectiveness of our simulation platform and its great
potential for application. The code is available at
https://github.com/opas-lab/olearning-sim.
|
2503.22346 | Zhengjie Liu | Ruifeng Luo, Zhengjie Liu, Tianxiao Cheng, Jie Wang, Tongjie Wang,
Xingguang Wei, Haomin Wang, YanPeng Li, Fu Chai, Fei Cheng, Shenglong Ye,
Wenhai Wang, Yanting Zhang, Yu Qiao, Hongjie Zhang, Xianzhong Zhao | ArchCAD-400K: An Open Large-Scale Architectural CAD Dataset and New
Baseline for Panoptic Symbol Spotting | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recognizing symbols in architectural CAD drawings is critical for various
advanced engineering applications. In this paper, we propose a novel CAD data
annotation engine that leverages intrinsic attributes from systematically
archived CAD drawings to automatically generate high-quality annotations, thus
significantly reducing manual labeling efforts. Utilizing this engine, we
construct ArchCAD-400K, a large-scale CAD dataset consisting of 413,062 chunks
from 5538 highly standardized drawings, making it over 26 times larger than the
largest existing CAD dataset. ArchCAD-400K boasts an extended drawing diversity
and broader categories, offering line-grained annotations. Furthermore, we
present a new baseline model for panoptic symbol spotting, termed Dual-Pathway
Symbol Spotter (DPSS). It incorporates an adaptive fusion module to enhance
primitive features with complementary image features, achieving
state-of-the-art performance and enhanced robustness. Extensive experiments
validate the effectiveness of DPSS, demonstrating the value of ArchCAD-400K and
its potential to drive innovation in architectural design and construction.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:40:53 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 06:24:01 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Luo",
"Ruifeng",
""
],
[
"Liu",
"Zhengjie",
""
],
[
"Cheng",
"Tianxiao",
""
],
[
"Wang",
"Jie",
""
],
[
"Wang",
"Tongjie",
""
],
[
"Wei",
"Xingguang",
""
],
[
"Wang",
"Haomin",
""
],
[
"Li",
"YanPeng",
""
],
[
"Chai",
"Fu",
""
],
[
"Cheng",
"Fei",
""
],
[
"Ye",
"Shenglong",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Zhang",
"Yanting",
""
],
[
"Qiao",
"Yu",
""
],
[
"Zhang",
"Hongjie",
""
],
[
"Zhao",
"Xianzhong",
""
]
] | TITLE: ArchCAD-400K: An Open Large-Scale Architectural CAD Dataset and New
Baseline for Panoptic Symbol Spotting
ABSTRACT: Recognizing symbols in architectural CAD drawings is critical for various
advanced engineering applications. In this paper, we propose a novel CAD data
annotation engine that leverages intrinsic attributes from systematically
archived CAD drawings to automatically generate high-quality annotations, thus
significantly reducing manual labeling efforts. Utilizing this engine, we
construct ArchCAD-400K, a large-scale CAD dataset consisting of 413,062 chunks
from 5538 highly standardized drawings, making it over 26 times larger than the
largest existing CAD dataset. ArchCAD-400K boasts an extended drawing diversity
and broader categories, offering line-grained annotations. Furthermore, we
present a new baseline model for panoptic symbol spotting, termed Dual-Pathway
Symbol Spotter (DPSS). It incorporates an adaptive fusion module to enhance
primitive features with complementary image features, achieving
state-of-the-art performance and enhanced robustness. Extensive experiments
validate the effectiveness of DPSS, demonstrating the value of ArchCAD-400K and
its potential to drive innovation in architectural design and construction.
|
2503.22368 | Marc Hellmuth | Johannes B.S. Petersen, Akbar Davoodi, Thomas G\"artner, Marc Hellmuth
and Daniel Merkle | On Finding All Connected Maximum-Sized Common Subgraphs in Multiple
Labeled Graphs | null | null | null | null | cs.DS cs.DM math.CO q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an exact algorithm for computing all common subgraphs with the
maximum number of vertices across multiple graphs. Our approach is further
extended to handle the connected Maximum Common Subgraph (MCS), identifying the
largest common subgraph in terms of either vertices or edges across multiple
graphs, where edges or vertices may additionally be labeled to account for
possible atom types or bond types, a classical labeling used in molecular
graphs. Our approach leverages modular product graphs and a modified
Bron-Kerbosch algorithm to enumerate maximal cliques, ensuring all intermediate
solutions are retained. A pruning heuristic efficiently reduces the modular
product size, improving computational feasibility. Additionally, we introduce a
graph ordering strategy based on graph-kernel similarity measures to optimize
the search process. Our method is particularly relevant for bioinformatics and
cheminformatics, where identifying conserved structural motifs in molecular
graphs is crucial. Empirical results on molecular datasets demonstrate that our
approach is scalable and fast.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:20:05 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 16:26:54 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Petersen",
"Johannes B. S.",
""
],
[
"Davoodi",
"Akbar",
""
],
[
"Gärtner",
"Thomas",
""
],
[
"Hellmuth",
"Marc",
""
],
[
"Merkle",
"Daniel",
""
]
] | TITLE: On Finding All Connected Maximum-Sized Common Subgraphs in Multiple
Labeled Graphs
ABSTRACT: We present an exact algorithm for computing all common subgraphs with the
maximum number of vertices across multiple graphs. Our approach is further
extended to handle the connected Maximum Common Subgraph (MCS), identifying the
largest common subgraph in terms of either vertices or edges across multiple
graphs, where edges or vertices may additionally be labeled to account for
possible atom types or bond types, a classical labeling used in molecular
graphs. Our approach leverages modular product graphs and a modified
Bron-Kerbosch algorithm to enumerate maximal cliques, ensuring all intermediate
solutions are retained. A pruning heuristic efficiently reduces the modular
product size, improving computational feasibility. Additionally, we introduce a
graph ordering strategy based on graph-kernel similarity measures to optimize
the search process. Our method is particularly relevant for bioinformatics and
cheminformatics, where identifying conserved structural motifs in molecular
graphs is crucial. Empirical results on molecular datasets demonstrate that our
approach is scalable and fast.
|
2503.22727 | Alejandro Lozano | Alejandro Lozano, Min Woo Sun, James Burgess, Jeffrey J. Nirschl,
Christopher Polzak, Yuhui Zhang, Liangyu Chen, Jeffrey Gu, Ivan Lopez, Josiah
Aklilu, Anita Rau, Austin Wolfgang Katzer, Collin Chiu, Orr Zohar, Xiaohan
Wang, Alfred Seunghoon Song, Chiang Chia-Chun, Robert Tibshirani, Serena
Yeung-Levy | A Large-Scale Vision-Language Dataset Derived from Open Scientific
Literature to Advance Biomedical Generalist AI | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite the excitement behind biomedical artificial intelligence (AI), access
to high-quality, diverse, and large-scale data - the foundation for modern AI
systems - is still a bottleneck to unlocking its full potential. To address
this gap, we introduce Biomedica, an open-source dataset derived from the
PubMed Central Open Access subset, containing over 6 million scientific
articles and 24 million image-text pairs, along with 27 metadata fields
(including expert human annotations). To overcome the challenges of accessing
our large-scale dataset, we provide scalable streaming and search APIs through
a web server, facilitating seamless integration with AI systems. We demonstrate
the utility of the Biomedica dataset by building embedding models, chat-style
models, and retrieval-augmented chat agents. Notably, all our AI models surpass
previous open systems in their respective categories, underscoring the critical
role of diverse, high-quality, and large-scale biomedical data.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 05:56:46 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 19:34:20 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Lozano",
"Alejandro",
""
],
[
"Sun",
"Min Woo",
""
],
[
"Burgess",
"James",
""
],
[
"Nirschl",
"Jeffrey J.",
""
],
[
"Polzak",
"Christopher",
""
],
[
"Zhang",
"Yuhui",
""
],
[
"Chen",
"Liangyu",
""
],
[
"Gu",
"Jeffrey",
""
],
[
"Lopez",
"Ivan",
""
],
[
"Aklilu",
"Josiah",
""
],
[
"Rau",
"Anita",
""
],
[
"Katzer",
"Austin Wolfgang",
""
],
[
"Chiu",
"Collin",
""
],
[
"Zohar",
"Orr",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Song",
"Alfred Seunghoon",
""
],
[
"Chia-Chun",
"Chiang",
""
],
[
"Tibshirani",
"Robert",
""
],
[
"Yeung-Levy",
"Serena",
""
]
] | TITLE: A Large-Scale Vision-Language Dataset Derived from Open Scientific
Literature to Advance Biomedical Generalist AI
ABSTRACT: Despite the excitement behind biomedical artificial intelligence (AI), access
to high-quality, diverse, and large-scale data - the foundation for modern AI
systems - is still a bottleneck to unlocking its full potential. To address
this gap, we introduce Biomedica, an open-source dataset derived from the
PubMed Central Open Access subset, containing over 6 million scientific
articles and 24 million image-text pairs, along with 27 metadata fields
(including expert human annotations). To overcome the challenges of accessing
our large-scale dataset, we provide scalable streaming and search APIs through
a web server, facilitating seamless integration with AI systems. We demonstrate
the utility of the Biomedica dataset by building embedding models, chat-style
models, and retrieval-augmented chat agents. Notably, all our AI models surpass
previous open systems in their respective categories, underscoring the critical
role of diverse, high-quality, and large-scale biomedical data.
|
2503.22876 | Kushagra Srivastava | Kushagra Srivastava, Rutwik Kulkarni, Manoj Velmurugan, Nitin J.
Sanket | VizFlyt: Perception-centric Pedagogical Framework For Autonomous Aerial
Robots | Accepted at ICRA 2025. Project Page:
https://pear.wpi.edu/research/vizflyt.html | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Autonomous aerial robots are becoming commonplace in our lives. Hands-on
aerial robotics courses are pivotal in training the next-generation workforce
to meet the growing market demands. Such an efficient and compelling course
depends on a reliable testbed. In this paper, we present VizFlyt, an
open-source perception-centric Hardware-In-The-Loop (HITL) photorealistic
testing framework for aerial robotics courses. We utilize pose from an external
localization system to hallucinate real-time and photorealistic visual sensors
using 3D Gaussian Splatting. This enables stress-free testing of autonomy
algorithms on aerial robots without the risk of crashing into obstacles. We
achieve over 100Hz of system update rate. Lastly, we build upon our past
experiences of offering hands-on aerial robotics courses and propose a new
open-source and open-hardware curriculum based on VizFlyt for the future. We
test our framework on various course projects in real-world HITL experiments
and present the results showing the efficacy of such a system and its large
potential use cases. Code, datasets, hardware guides and demo videos are
available at https://pear.wpi.edu/research/vizflyt.html
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 21:03:30 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 22:39:54 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Srivastava",
"Kushagra",
""
],
[
"Kulkarni",
"Rutwik",
""
],
[
"Velmurugan",
"Manoj",
""
],
[
"Sanket",
"Nitin J.",
""
]
] | TITLE: VizFlyt: Perception-centric Pedagogical Framework For Autonomous Aerial
Robots
ABSTRACT: Autonomous aerial robots are becoming commonplace in our lives. Hands-on
aerial robotics courses are pivotal in training the next-generation workforce
to meet the growing market demands. Such an efficient and compelling course
depends on a reliable testbed. In this paper, we present VizFlyt, an
open-source perception-centric Hardware-In-The-Loop (HITL) photorealistic
testing framework for aerial robotics courses. We utilize pose from an external
localization system to hallucinate real-time and photorealistic visual sensors
using 3D Gaussian Splatting. This enables stress-free testing of autonomy
algorithms on aerial robots without the risk of crashing into obstacles. We
achieve over 100Hz of system update rate. Lastly, we build upon our past
experiences of offering hands-on aerial robotics courses and propose a new
open-source and open-hardware curriculum based on VizFlyt for the future. We
test our framework on various course projects in real-world HITL experiments
and present the results showing the efficacy of such a system and its large
potential use cases. Code, datasets, hardware guides and demo videos are
available at https://pear.wpi.edu/research/vizflyt.html
|
2503.23339 | Neil Mallinar | Neil Mallinar, A. Ali Heydari, Xin Liu, Anthony Z. Faranesh, Brent
Winslow, Nova Hammerquist, Benjamin Graef, Cathy Speed, Mark Malhotra,
Shwetak Patel, Javier L. Prieto, Daniel McDuff, Ahmed A. Metwally | A Scalable Framework for Evaluating Health Language Models | null | null | null | null | cs.AI cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have emerged as powerful tools for analyzing
complex datasets. Recent studies demonstrate their potential to generate
useful, personalized responses when provided with patient-specific health
information that encompasses lifestyle, biomarkers, and context. As LLM-driven
health applications are increasingly adopted, rigorous and efficient one-sided
evaluation methodologies are crucial to ensure response quality across multiple
dimensions, including accuracy, personalization and safety. Current evaluation
practices for open-ended text responses heavily rely on human experts. This
approach introduces human factors and is often cost-prohibitive,
labor-intensive, and hinders scalability, especially in complex domains like
healthcare where response assessment necessitates domain expertise and
considers multifaceted patient data. In this work, we introduce Adaptive
Precise Boolean rubrics: an evaluation framework that streamlines human and
automated evaluation of open-ended questions by identifying gaps in model
responses using a minimal set of targeted rubrics questions. Our approach is
based on recent work in more general evaluation settings that contrasts a
smaller set of complex evaluation targets with a larger set of more precise,
granular targets answerable with simple boolean responses. We validate this
approach in metabolic health, a domain encompassing diabetes, cardiovascular
disease, and obesity. Our results demonstrate that Adaptive Precise Boolean
rubrics yield higher inter-rater agreement among expert and non-expert human
evaluators, and in automated assessments, compared to traditional Likert
scales, while requiring approximately half the evaluation time of Likert-based
methods. This enhanced efficiency, particularly in automated evaluation and
non-expert contributions, paves the way for more extensive and cost-effective
evaluation of LLMs in health.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 06:47:57 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 21:17:55 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Mallinar",
"Neil",
""
],
[
"Heydari",
"A. Ali",
""
],
[
"Liu",
"Xin",
""
],
[
"Faranesh",
"Anthony Z.",
""
],
[
"Winslow",
"Brent",
""
],
[
"Hammerquist",
"Nova",
""
],
[
"Graef",
"Benjamin",
""
],
[
"Speed",
"Cathy",
""
],
[
"Malhotra",
"Mark",
""
],
[
"Patel",
"Shwetak",
""
],
[
"Prieto",
"Javier L.",
""
],
[
"McDuff",
"Daniel",
""
],
[
"Metwally",
"Ahmed A.",
""
]
] | TITLE: A Scalable Framework for Evaluating Health Language Models
ABSTRACT: Large language models (LLMs) have emerged as powerful tools for analyzing
complex datasets. Recent studies demonstrate their potential to generate
useful, personalized responses when provided with patient-specific health
information that encompasses lifestyle, biomarkers, and context. As LLM-driven
health applications are increasingly adopted, rigorous and efficient one-sided
evaluation methodologies are crucial to ensure response quality across multiple
dimensions, including accuracy, personalization and safety. Current evaluation
practices for open-ended text responses heavily rely on human experts. This
approach introduces human factors and is often cost-prohibitive,
labor-intensive, and hinders scalability, especially in complex domains like
healthcare where response assessment necessitates domain expertise and
considers multifaceted patient data. In this work, we introduce Adaptive
Precise Boolean rubrics: an evaluation framework that streamlines human and
automated evaluation of open-ended questions by identifying gaps in model
responses using a minimal set of targeted rubrics questions. Our approach is
based on recent work in more general evaluation settings that contrasts a
smaller set of complex evaluation targets with a larger set of more precise,
granular targets answerable with simple boolean responses. We validate this
approach in metabolic health, a domain encompassing diabetes, cardiovascular
disease, and obesity. Our results demonstrate that Adaptive Precise Boolean
rubrics yield higher inter-rater agreement among expert and non-expert human
evaluators, and in automated assessments, compared to traditional Likert
scales, while requiring approximately half the evaluation time of Likert-based
methods. This enhanced efficiency, particularly in automated evaluation and
non-expert contributions, paves the way for more extensive and cost-effective
evaluation of LLMs in health.
|
2503.23671 | Tongke Ni | Tongke Ni, Yang Fan, Junru Zhou, Xiangping Wu, Qingcai Chen | CrossFormer: Cross-Segment Semantic Fusion for Document Segmentation | 10 pages, 4 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Text semantic segmentation involves partitioning a document into multiple
paragraphs with continuous semantics based on the subject matter, contextual
information, and document structure. Traditional approaches have typically
relied on preprocessing documents into segments to address input length
constraints, resulting in the loss of critical semantic information across
segments. To address this, we present CrossFormer, a transformer-based model
featuring a novel cross-segment fusion module that dynamically models latent
semantic dependencies across document segments, substantially elevating
segmentation accuracy. Additionally, CrossFormer can replace rule-based chunk
methods within the Retrieval-Augmented Generation (RAG) system, producing more
semantically coherent chunks that enhance its efficacy. Comprehensive
evaluations confirm CrossFormer's state-of-the-art performance on public text
semantic segmentation datasets, alongside considerable gains on RAG benchmarks.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 02:27:49 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 07:47:56 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ni",
"Tongke",
""
],
[
"Fan",
"Yang",
""
],
[
"Zhou",
"Junru",
""
],
[
"Wu",
"Xiangping",
""
],
[
"Chen",
"Qingcai",
""
]
] | TITLE: CrossFormer: Cross-Segment Semantic Fusion for Document Segmentation
ABSTRACT: Text semantic segmentation involves partitioning a document into multiple
paragraphs with continuous semantics based on the subject matter, contextual
information, and document structure. Traditional approaches have typically
relied on preprocessing documents into segments to address input length
constraints, resulting in the loss of critical semantic information across
segments. To address this, we present CrossFormer, a transformer-based model
featuring a novel cross-segment fusion module that dynamically models latent
semantic dependencies across document segments, substantially elevating
segmentation accuracy. Additionally, CrossFormer can replace rule-based chunk
methods within the Retrieval-Augmented Generation (RAG) system, producing more
semantically coherent chunks that enhance its efficacy. Comprehensive
evaluations confirm CrossFormer's state-of-the-art performance on public text
semantic segmentation datasets, alongside considerable gains on RAG benchmarks.
|
2503.23927 | Sebastian Springer | Sebastian Springer and Andre Scaffidi and Maximilian Autenrieth and
Gabriella Contardo and Alessandro Laio and Roberto Trotta and Heikki Haario | Detecting Localized Density Anomalies in Multivariate Data via Coin-Flip
Statistics | Code Availability: The code used to generate the results of this
study is available at GitHub via the link:
https://github.com/sspring137/EagleEye | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Detecting localized density differences in multivariate data is a crucial
task in computational science. Such anomalies can indicate a critical system
failure, lead to a groundbreaking scientific discovery, or reveal unexpected
changes in data distribution. We introduce EagleEye, an anomaly detection
method to compare two multivariate datasets with the aim of identifying local
density anomalies, namely over- or under-densities affecting only localised
regions of the feature space. Anomalies are detected by modelling, for each
point, the ordered sequence of its neighbours' membership label as a
coin-flipping process and monitoring deviations from the expected behaviour of
such process. A unique advantage of our method is its ability to provide an
accurate, entirely unsupervised estimate of the local signal purity. We
demonstrate its effectiveness through experiments on both synthetic and
real-world datasets. In synthetic data, EagleEye accurately detects anomalies
in multiple dimensions even when they affect a tiny fraction of the data. When
applied to a challenging resonant anomaly detection benchmark task in simulated
Large Hadron Collider data, EagleEye successfully identifies particle decay
events present in just 0.3% of the dataset. In global temperature data,
EagleEye uncovers previously unidentified, geographically localised changes in
temperature fields that occurred in the most recent years. Thanks to its key
advantages of conceptual simplicity, computational efficiency, trivial
parallelisation, and scalability, EagleEye is widely applicable across many
fields.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 10:20:04 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 10:07:05 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Springer",
"Sebastian",
""
],
[
"Scaffidi",
"Andre",
""
],
[
"Autenrieth",
"Maximilian",
""
],
[
"Contardo",
"Gabriella",
""
],
[
"Laio",
"Alessandro",
""
],
[
"Trotta",
"Roberto",
""
],
[
"Haario",
"Heikki",
""
]
] | TITLE: Detecting Localized Density Anomalies in Multivariate Data via Coin-Flip
Statistics
ABSTRACT: Detecting localized density differences in multivariate data is a crucial
task in computational science. Such anomalies can indicate a critical system
failure, lead to a groundbreaking scientific discovery, or reveal unexpected
changes in data distribution. We introduce EagleEye, an anomaly detection
method to compare two multivariate datasets with the aim of identifying local
density anomalies, namely over- or under-densities affecting only localised
regions of the feature space. Anomalies are detected by modelling, for each
point, the ordered sequence of its neighbours' membership label as a
coin-flipping process and monitoring deviations from the expected behaviour of
such process. A unique advantage of our method is its ability to provide an
accurate, entirely unsupervised estimate of the local signal purity. We
demonstrate its effectiveness through experiments on both synthetic and
real-world datasets. In synthetic data, EagleEye accurately detects anomalies
in multiple dimensions even when they affect a tiny fraction of the data. When
applied to a challenging resonant anomaly detection benchmark task in simulated
Large Hadron Collider data, EagleEye successfully identifies particle decay
events present in just 0.3% of the dataset. In global temperature data,
EagleEye uncovers previously unidentified, geographically localised changes in
temperature fields that occurred in the most recent years. Thanks to its key
advantages of conceptual simplicity, computational efficiency, trivial
parallelisation, and scalability, EagleEye is widely applicable across many
fields.
|
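The coin-flip intuition in the EagleEye abstract above can be sketched very roughly as follows: pool a reference and a test dataset, count how many of each test point's nearest neighbours carry the test label, and score the deviation from the expected mixing proportion with a binomial tail. The neighbour count `k`, the plain binomial survival function, and all identifiers are assumptions for illustration, not the method's actual statistic or purity estimate.

```python
# Rough, unofficial sketch of a coin-flip style local density-anomaly score.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import binom

def coinflip_scores(reference, test, k=20):
    """Small p-values flag test points sitting in a local over-density."""
    pooled = np.vstack([reference, test])
    labels = np.r_[np.zeros(len(reference)), np.ones(len(test))]  # 0 = reference, 1 = test
    p_test = labels.mean()              # null "coin" probability of a test-labelled neighbour
    tree = cKDTree(pooled)
    _, idx = tree.query(test, k=k + 1)  # first neighbour is the query point itself
    n_test_neighbours = labels[idx[:, 1:]].sum(axis=1)
    # P(at least this many test-labelled neighbours) under the fair-mixing null.
    return binom.sf(n_test_neighbours - 1, k, p_test)

rng = np.random.default_rng(1)
reference = rng.normal(size=(2000, 2))
test = np.vstack([rng.normal(size=(1950, 2)),
                  rng.normal(loc=3.0, scale=0.2, size=(50, 2))])  # injected local over-density
scores = coinflip_scores(reference, test)
print("points flagged at p < 1e-3:", int((scores < 1e-3).sum()))
```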
2503.24115 | Peidong Wang | Zhiming Ma, Peidong Wang, Minhua Huang, Jingpeng Wang, Kai Wu,
Xiangzhao Lv, Yachun Pang, Yin Yang, Wenjie Tang, Yuchen Kang | TeleAntiFraud-28k: An Audio-Text Slow-Thinking Dataset for Telecom Fraud
Detection | null | null | null | null | cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection of telecom fraud faces significant challenges due to the lack
of high-quality multimodal training data that integrates audio signals with
reasoning-oriented textual analysis. To address this gap, we present
TeleAntiFraud-28k, the first open-source audio-text slow-thinking dataset
specifically designed for automated telecom fraud analysis. Our dataset is
constructed through three strategies: (1) Privacy-preserved text-truth sample
generation using automatic speech recognition (ASR)-transcribed call
recordings (with anonymized original audio), ensuring real-world consistency
through text-to-speech (TTS) model regeneration; (2) Semantic enhancement via
large language model (LLM)-based self-instruction sampling on authentic ASR
outputs to expand scenario coverage; (3) Multi-agent adversarial synthesis that
simulates emerging fraud tactics through predefined communication scenarios and
fraud typologies. The generated dataset contains 28,511 rigorously processed
speech-text pairs, complete with detailed annotations for fraud reasoning. The
dataset is divided into three tasks: scenario classification, fraud detection,
and fraud type classification. Furthermore, we construct TeleAntiFraud-Bench, a
standardized evaluation benchmark comprising proportionally sampled instances
from the dataset, to facilitate systematic testing of model performance on
telecom fraud detection tasks. We also contribute a production-optimized
supervised fine-tuning (SFT) model trained on hybrid real/synthetic data, while
open-sourcing the data processing framework to enable community-driven dataset
expansion. This work establishes a foundational framework for multimodal
anti-fraud research while addressing critical challenges in data privacy and
scenario diversity. The project will be released at
https://github.com/JimmyMa99/TeleAntiFraud.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:06:17 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 14:04:47 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 13:32:22 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ma",
"Zhiming",
""
],
[
"Wang",
"Peidong",
""
],
[
"Huang",
"Minhua",
""
],
[
"Wang",
"Jingpeng",
""
],
[
"Wu",
"Kai",
""
],
[
"Lv",
"Xiangzhao",
""
],
[
"Pang",
"Yachun",
""
],
[
"Yang",
"Yin",
""
],
[
"Tang",
"Wenjie",
""
],
[
"Kang",
"Yuchen",
""
]
] | TITLE: TeleAntiFraud-28k: An Audio-Text Slow-Thinking Dataset for Telecom Fraud
Detection
ABSTRACT: The detection of telecom fraud faces significant challenges due to the lack
of high-quality multimodal training data that integrates audio signals with
reasoning-oriented textual analysis. To address this gap, we present
TeleAntiFraud-28k, the first open-source audio-text slow-thinking dataset
specifically designed for automated telecom fraud analysis. Our dataset is
constructed through three strategies: (1) Privacy-preserved text-truth sample
generation using automatic speech recognition (ASR)-transcribed call
recordings (with anonymized original audio), ensuring real-world consistency
through text-to-speech (TTS) model regeneration; (2) Semantic enhancement via
large language model (LLM)-based self-instruction sampling on authentic ASR
outputs to expand scenario coverage; (3) Multi-agent adversarial synthesis that
simulates emerging fraud tactics through predefined communication scenarios and
fraud typologies. The generated dataset contains 28,511 rigorously processed
speech-text pairs, complete with detailed annotations for fraud reasoning. The
dataset is divided into three tasks: scenario classification, fraud detection,
and fraud type classification. Furthermore, we construct TeleAntiFraud-Bench, a
standardized evaluation benchmark comprising proportionally sampled instances
from the dataset, to facilitate systematic testing of model performance on
telecom fraud detection tasks. We also contribute a production-optimized
supervised fine-tuning (SFT) model trained on hybrid real/synthetic data, while
open-sourcing the data processing framework to enable community-driven dataset
expansion. This work establishes a foundational framework for multimodal
anti-fraud research while addressing critical challenges in data privacy and
scenario diversity. The project will be released at
https://github.com/JimmyMa99/TeleAntiFraud.
|
2503.24187 | James Gardner | James A. D. Gardner, Will Rowan, William A. P. Smith | NeuRaLaTeX: A machine learning library written in pure LaTeX | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce NeuRaLaTeX, which we believe to be the first deep
learning library written entirely in LaTeX. As part of your LaTeX document you
can specify the architecture of a neural network and its loss functions, define
how to generate or load training data, and specify training hyperparameters and
experiments. When the document is compiled, the LaTeX compiler will generate or
load training data, train the network, run experiments, and generate figures.
This paper generates a random 100 point spiral dataset, trains a two layer MLP
on it, evaluates on a different random spiral dataset, produces plots and
tables of results. The paper took 48 hours to compile and the entire source
code for NeuRaLaTeX is contained within the source code of the paper. We
propose two new metrics: the Written In Latex (WIL) metric measures the
proportion of a machine learning library that is written in pure LaTeX, while
the Source Code Of Method in Source Code of Paper (SCOMISCOP) metric measures
the proportion of a paper's implementation that is contained within the paper
source. We are state-of-the-art for both metrics, outperforming the ResNet and
Transformer papers, as well as the PyTorch and Tensorflow libraries. Source
code, documentation, videos, crypto scams and an invitation to invest in the
commercialisation of NeuRaLaTeX are available at https://www.neuralatex.com
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:05:19 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 10:46:42 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Gardner",
"James A. D.",
""
],
[
"Rowan",
"Will",
""
],
[
"Smith",
"William A. P.",
""
]
] | TITLE: NeuRaLaTeX: A machine learning library written in pure LaTeX
ABSTRACT: In this paper, we introduce NeuRaLaTeX, which we believe to be the first deep
learning library written entirely in LaTeX. As part of your LaTeX document you
can specify the architecture of a neural network and its loss functions, define
how to generate or load training data, and specify training hyperparameters and
experiments. When the document is compiled, the LaTeX compiler will generate or
load training data, train the network, run experiments, and generate figures.
This paper generates a random 100 point spiral dataset, trains a two layer MLP
on it, evaluates on a different random spiral dataset, produces plots and
tables of results. The paper took 48 hours to compile and the entire source
code for NeuRaLaTeX is contained within the source code of the paper. We
propose two new metrics: the Written In Latex (WIL) metric measures the
proportion of a machine learning library that is written in pure LaTeX, while
the Source Code Of Method in Source Code of Paper (SCOMISCOP) metric measures
the proportion of a paper's implementation that is contained within the paper
source. We are state-of-the-art for both metrics, outperforming the ResNet and
Transformer papers, as well as the PyTorch and Tensorflow libraries. Source
code, documentation, videos, crypto scams and an invitation to invest in the
commercialisation of NeuRaLaTeX are available at https://www.neuralatex.com
|
2503.24193 | Enrico Palumbo | Enrico Palumbo, Gustavo Penha, Andreas Damianou, Jos\'e Luis Redondo
Garc\'ia, Timothy Christopher Heath, Alice Wang, Hugues Bouchard, and Mounia
Lalmas | Text2Tracks: Prompt-based Music Recommendation via Generative Retrieval | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years, Large Language Models (LLMs) have enabled users to provide
highly specific music recommendation requests using natural language prompts
(e.g. "Can you recommend some old classics for slow dancing?"). In this setup,
the recommended tracks are predicted by the LLM in an autoregressive way, i.e.
the LLM generates the track titles one token at a time. While intuitive, this
approach has several limitations. First, it is based on a general-purpose
tokenization that is optimized for words rather than for track titles. Second,
it necessitates an additional entity resolution layer that matches the track
title to the actual track identifier. Third, the number of decoding steps
scales linearly with the length of the track title, slowing down inference. In
this paper, we propose to address the task of prompt-based music recommendation
as a generative retrieval task. Within this setting, we introduce novel,
effective, and efficient representations of track identifiers that
significantly outperform commonly used strategies. We introduce Text2Tracks, a
generative retrieval model that learns a mapping from a user's music
recommendation prompt to the relevant track IDs directly. Through an offline
evaluation on a dataset of playlists with language inputs, we find that (1) the
strategy to create IDs for music tracks is the most important factor for the
effectiveness of Text2Tracks and semantic IDs significantly outperform commonly
used strategies that rely on song titles as identifiers (2) provided with the
right choice of track identifiers, Text2Tracks outperforms sparse and dense
retrieval solutions trained to retrieve tracks from language prompts.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:09:19 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 14:08:21 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Palumbo",
"Enrico",
""
],
[
"Penha",
"Gustavo",
""
],
[
"Damianou",
"Andreas",
""
],
[
"García",
"José Luis Redondo",
""
],
[
"Heath",
"Timothy Christopher",
""
],
[
"Wang",
"Alice",
""
],
[
"Bouchard",
"Hugues",
""
],
[
"Lalmas",
"Mounia",
""
]
] | TITLE: Text2Tracks: Prompt-based Music Recommendation via Generative Retrieval
ABSTRACT: In recent years, Large Language Models (LLMs) have enabled users to provide
highly specific music recommendation requests using natural language prompts
(e.g. "Can you recommend some old classics for slow dancing?"). In this setup,
the recommended tracks are predicted by the LLM in an autoregressive way, i.e.
the LLM generates the track titles one token at a time. While intuitive, this
approach has several limitations. First, it is based on a general-purpose
tokenization that is optimized for words rather than for track titles. Second,
it necessitates an additional entity resolution layer that matches the track
title to the actual track identifier. Third, the number of decoding steps
scales linearly with the length of the track title, slowing down inference. In
this paper, we propose to address the task of prompt-based music recommendation
as a generative retrieval task. Within this setting, we introduce novel,
effective, and efficient representations of track identifiers that
significantly outperform commonly used strategies. We introduce Text2Tracks, a
generative retrieval model that learns a mapping from a user's music
recommendation prompt to the relevant track IDs directly. Through an offline
evaluation on a dataset of playlists with language inputs, we find that (1) the
strategy to create IDs for music tracks is the most important factor for the
effectiveness of Text2Tracks and semantic IDs significantly outperform commonly
used strategies that rely on song titles as identifiers (2) provided with the
right choice of track identifiers, Text2Tracks outperforms sparse and dense
retrieval solutions trained to retrieve tracks from language prompts.
|
2503.24361 | Zhenyu Jiang | Abhiram Maddukuri, Zhenyu Jiang, Lawrence Yunliang Chen, Soroush
Nasiriany, Yuqi Xie, Yu Fang, Wenqi Huang, Zu Wang, Zhenjia Xu, Nikita
Chernyadev, Scott Reed, Ken Goldberg, Ajay Mandlekar, Linxi Fan, Yuke Zhu | Sim-and-Real Co-Training: A Simple Recipe for Vision-Based Robotic
Manipulation | Project website: https://co-training.github.io/ | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large real-world robot datasets hold great potential to train generalist
robot models, but scaling real-world human data collection is time-consuming
and resource-intensive. Simulation has great potential in supplementing
large-scale data, especially with recent advances in generative AI and
automated data generation tools that enable scalable creation of robot behavior
datasets. However, training a policy solely in simulation and transferring it
to the real world often demands substantial human effort to bridge the reality
gap. A compelling alternative is to co-train the policy on a mixture of
simulation and real-world datasets. Preliminary studies have recently shown
this strategy to substantially improve the performance of a policy over one
trained on a limited amount of real-world data. Nonetheless, the community
lacks a systematic understanding of sim-and-real co-training and what it takes
to reap the benefits of simulation data for real-robot learning. This work
presents a simple yet effective recipe for utilizing simulation data to solve
vision-based robotic manipulation tasks. We derive this recipe from
comprehensive experiments that validate the co-training strategy on various
simulation and real-world datasets. Using two domains--a robot arm and a
humanoid--across diverse tasks, we demonstrate that simulation data can enhance
real-world task performance by an average of 38%, even with notable differences
between the simulation and real-world data. Videos and additional results can
be found at https://co-training.github.io/
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:39:38 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 16:40:11 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Maddukuri",
"Abhiram",
""
],
[
"Jiang",
"Zhenyu",
""
],
[
"Chen",
"Lawrence Yunliang",
""
],
[
"Nasiriany",
"Soroush",
""
],
[
"Xie",
"Yuqi",
""
],
[
"Fang",
"Yu",
""
],
[
"Huang",
"Wenqi",
""
],
[
"Wang",
"Zu",
""
],
[
"Xu",
"Zhenjia",
""
],
[
"Chernyadev",
"Nikita",
""
],
[
"Reed",
"Scott",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Mandlekar",
"Ajay",
""
],
[
"Fan",
"Linxi",
""
],
[
"Zhu",
"Yuke",
""
]
] | TITLE: Sim-and-Real Co-Training: A Simple Recipe for Vision-Based Robotic
Manipulation
ABSTRACT: Large real-world robot datasets hold great potential to train generalist
robot models, but scaling real-world human data collection is time-consuming
and resource-intensive. Simulation has great potential in supplementing
large-scale data, especially with recent advances in generative AI and
automated data generation tools that enable scalable creation of robot behavior
datasets. However, training a policy solely in simulation and transferring it
to the real world often demands substantial human effort to bridge the reality
gap. A compelling alternative is to co-train the policy on a mixture of
simulation and real-world datasets. Preliminary studies have recently shown
this strategy to substantially improve the performance of a policy over one
trained on a limited amount of real-world data. Nonetheless, the community
lacks a systematic understanding of sim-and-real co-training and what it takes
to reap the benefits of simulation data for real-robot learning. This work
presents a simple yet effective recipe for utilizing simulation data to solve
vision-based robotic manipulation tasks. We derive this recipe from
comprehensive experiments that validate the co-training strategy on various
simulation and real-world datasets. Using two domains--a robot arm and a
humanoid--across diverse tasks, we demonstrate that simulation data can enhance
real-world task performance by an average of 38%, even with notable differences
between the simulation and real-world data. Videos and additional results can
be found at https://co-training.github.io/
|
2504.00022 | Anandakumar D | Bargava Subramanian, Shajeev Jaikumar, Praveen Shastry, Naveen
Kumarasami, Kalyan Sivasailam, Anandakumar D, Keerthana R, Mounigasri M,
Kishore Prasath Venkatesh | Autonomous AI for Multi-Pathology Detection in Chest X-Rays: A
Multi-Site Study in the Indian Healthcare System | 27 pages, 8 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Study Design: The study outlines the development of an autonomous AI system
for chest X-ray (CXR) interpretation, trained on a vast dataset of over 5
million X-rays sourced from healthcare systems across India. This AI system
integrates advanced architectures including Vision Transformers, Faster R-CNN,
and various U-Net models (such as Attention U-Net, U-Net++, and Dense U-Net) to
enable comprehensive classification, detection, and segmentation of 75 distinct
pathologies. To ensure robustness, the study design includes subgroup analyses
across age, gender, and equipment type, validating the model's adaptability and
performance across diverse patient demographics and imaging environments.
Performance: The AI system achieved up to 98% precision and over 95% recall
for multi pathology classification, with stable performance across demographic
and equipment subgroups. For normal vs. abnormal classification, it reached
99.8% precision, 99.6% recall, and 99.9% negative predictive value (NPV). It
was deployed in 17 major healthcare systems in India including diagnostic
centers, large hospitals, and government hospitals. Over the deployment period,
the system processed over 150,000 scans, averaging 2,000 chest X-rays daily,
resulting in reduced reporting times and improved diagnostic accuracy.
Conclusion: The high precision and recall validate the AI's capability as a
reliable tool for autonomous normal vs. abnormal classification, pathology
localization, and segmentation. This scalable AI model addresses diagnostic
gaps in underserved areas, optimizing radiology workflows and enhancing patient
care across diverse healthcare settings in India.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:07:17 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 08:36:56 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Subramanian",
"Bargava",
""
],
[
"Jaikumar",
"Shajeev",
""
],
[
"Shastry",
"Praveen",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"Sivasailam",
"Kalyan",
""
],
[
"D",
"Anandakumar",
""
],
[
"R",
"Keerthana",
""
],
[
"M",
"Mounigasri",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
]
] | TITLE: Autonomous AI for Multi-Pathology Detection in Chest X-Rays: A
Multi-Site Study in the Indian Healthcare System
ABSTRACT: Study Design: The study outlines the development of an autonomous AI system
for chest X-ray (CXR) interpretation, trained on a vast dataset of over 5
million X-rays sourced from healthcare systems across India. This AI system
integrates advanced architectures including Vision Transformers, Faster R-CNN,
and various U-Net models (such as Attention U-Net, U-Net++, and Dense U-Net) to
enable comprehensive classification, detection, and segmentation of 75 distinct
pathologies. To ensure robustness, the study design includes subgroup analyses
across age, gender, and equipment type, validating the model's adaptability and
performance across diverse patient demographics and imaging environments.
Performance: The AI system achieved up to 98% precision and over 95% recall
for multi-pathology classification, with stable performance across demographic
and equipment subgroups. For normal vs. abnormal classification, it reached
99.8% precision, 99.6% recall, and 99.9% negative predictive value (NPV). It
was deployed in 17 major healthcare systems in India including diagnostic
centers, large hospitals, and government hospitals. Over the deployment period,
the system processed over 150,000 scans, averaging 2,000 chest X-rays daily,
resulting in reduced reporting times and improved diagnostic accuracy.
Conclusion: The high precision and recall validate the AI's capability as a
reliable tool for autonomous normal vs. abnormal classification, pathology
localization, and segmentation. This scalable AI model addresses diagnostic
gaps in underserved areas, optimizing radiology workflows and enhancing patient
care across diverse healthcare settings in India.
|
2504.00176 | Marco Canducci | Marco Canducci, Lida Abdi, Alessandro Prete, Roland J. Veen, Michael
Biehl, Wiebke Arlt, Peter Tino | Discriminative Subspace Emersion from learning feature relevances across
different populations | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In a given classification task, the accuracy of the learner is often hampered
by finiteness of the training set, high-dimensionality of the feature space and
severe overlap between classes. In the context of interpretable learners, with
(piecewise) linear separation boundaries, these issues can be mitigated by
careful construction of optimization procedures and/or estimation of relevant
features for the task. However, when the task is shared across two disjoint
populations, the main interest is shifted towards estimating a set of features
that discriminate the most between the two, when performing classification. We
propose a new Discriminative Subspace Emersion (DSE) method to extend subspace
learning toward a general relevance learning framework. DSE allows us to
identify the most relevant features in distinguishing the classification task
across two populations, even in cases of high overlap between classes. The
proposed methodology is designed to work with multiple sets of labels and is
derived in principle without being tied to a specific choice of base learner.
Theoretical and empirical investigations over synthetic and real-world datasets
indicate that DSE accurately identifies a common subspace for the
classification across different populations. This is shown to be true for a
surprisingly high degree of overlap between classes.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 19:33:39 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 12:00:53 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Canducci",
"Marco",
""
],
[
"Abdi",
"Lida",
""
],
[
"Prete",
"Alessandro",
""
],
[
"Veen",
"Roland J.",
""
],
[
"Biehl",
"Michael",
""
],
[
"Arlt",
"Wiebke",
""
],
[
"Tino",
"Peter",
""
]
] | TITLE: Discriminative Subspace Emersion from learning feature relevances across
different populations
ABSTRACT: In a given classification task, the accuracy of the learner is often hampered
by finiteness of the training set, high-dimensionality of the feature space and
severe overlap between classes. In the context of interpretable learners, with
(piecewise) linear separation boundaries, these issues can be mitigated by
careful construction of optimization procedures and/or estimation of relevant
features for the task. However, when the task is shared across two disjoint
populations, the main interest is shifted towards estimating a set of features
that discriminate the most between the two, when performing classification. We
propose a new Discriminative Subspace Emersion (DSE) method to extend subspace
learning toward a general relevance learning framework. DSE allows us to
identify the most relevant features in distinguishing the classification task
across two populations, even in cases of high overlap between classes. The
proposed methodology is designed to work with multiple sets of labels and is
derived in principle without being tied to a specific choice of base learner.
Theoretical and empirical investigations over synthetic and real-world datasets
indicate that DSE accurately identifies a common subspace for the
classification across different populations. This is shown to be true for a
surprisingly high degree of overlap between classes.
|
2504.00336 | Kerui Wu | Kerui Wu, Ziyue Zhao, B\"ulent Yener | SeizureTransformer: Scaling U-Net with Transformer for Simultaneous
Time-Step Level Seizure Detection from Long EEG Recordings | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Epilepsy is a common neurological disorder that affects around 65 million
people worldwide. Detecting seizures quickly and accurately is vital, given the
prevalence and severity of the associated complications. Recently, deep
learning-based automated seizure detection methods have emerged as solutions;
however, most existing methods require extensive post-processing and do not
effectively handle the crucial long-range patterns in EEG data. In this work,
we propose SeizureTransformer, a simple model composed of (i) a deep encoder
comprising 1D convolutions, (ii) a residual CNN stack and a transformer encoder
to embed the previous output into a high-level representation with contextual
information, and (iii) a streamlined decoder which converts these features into a
sequence of probabilities, directly indicating the presence or absence of
seizures at every time step. Extensive experiments on public and private EEG
seizure detection datasets demonstrate that our model significantly outperforms
existing approaches (ranked in the first place in the 2025 "seizure detection
challenge" organized in the International Conference on Artificial Intelligence
in Epilepsy and Other Neurological Disorders), underscoring its potential for
real-time, precise seizure detection.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 01:33:42 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 16:23:11 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wu",
"Kerui",
""
],
[
"Zhao",
"Ziyue",
""
],
[
"Yener",
"Bülent",
""
]
] | TITLE: SeizureTransformer: Scaling U-Net with Transformer for Simultaneous
Time-Step Level Seizure Detection from Long EEG Recordings
ABSTRACT: Epilepsy is a common neurological disorder that affects around 65 million
people worldwide. Detecting seizures quickly and accurately is vital, given the
prevalence and severity of the associated complications. Recently, deep
learning-based automated seizure detection methods have emerged as solutions;
however, most existing methods require extensive post-processing and do not
effectively handle the crucial long-range patterns in EEG data. In this work,
we propose SeizureTransformer, a simple model composed of (i) a deep encoder
comprising 1D convolutions, (ii) a residual CNN stack and a transformer encoder
to embed the previous output into a high-level representation with contextual
information, and (iii) a streamlined decoder which converts these features into a
sequence of probabilities, directly indicating the presence or absence of
seizures at every time step. Extensive experiments on public and private EEG
seizure detection datasets demonstrate that our model significantly outperforms
existing approaches (ranked in the first place in the 2025 "seizure detection
challenge" organized in the International Conference on Artificial Intelligence
in Epilepsy and Other Neurological Disorders), underscoring its potential for
real-time, precise seizure detection.
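A minimal PyTorch sketch of the three-stage design described in the abstract (1D-convolutional encoder, transformer encoder for long-range context, per-time-step probability head); the channel count, layer sizes, and omission of the residual CNN stack are simplifications, not the published configuration:

```python
import torch
import torch.nn as nn

class SeizureTransformerSketch(nn.Module):
    """Illustrative three-stage sketch; all sizes are placeholders and the
    residual CNN stack mentioned in the abstract is omitted for brevity."""

    def __init__(self, in_channels=19, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.encoder = nn.Sequential(                      # (i) deep 1D-conv encoder
            nn.Conv1d(in_channels, d_model, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, padding=3), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)  # (ii) context
        self.head = nn.Linear(d_model, 1)                  # (iii) streamlined decoder

    def forward(self, x):                      # x: (batch, eeg_channels, time)
        h = self.encoder(x).transpose(1, 2)    # -> (batch, time, d_model)
        h = self.transformer(h)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # seizure probability per time step
```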
|
2504.00487 | Jie Ma | Jie Ma, Zhitao Gao, Qi Chai, Jun Liu, Pinghui Wang, Jing Tao, Zhou Su | FortisAVQA and MAVEN: a Benchmark Dataset and Debiasing Framework for
Robust Multimodal Reasoning | Under Review | null | null | null | cs.MM cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-Visual Question Answering (AVQA) is a challenging multimodal reasoning
task requiring intelligent systems to answer natural language queries based on
paired audio-video inputs accurately. However, existing AVQA approaches often
suffer from overfitting to dataset biases, leading to poor robustness.
Moreover, current datasets may not effectively diagnose these methods. To
address these challenges, we first introduce a novel dataset, FortisAVQA,
constructed in two stages: (1) rephrasing questions in the test split of the
public MUSIC-AVQA dataset and (2) introducing distribution shifts across
questions. The first stage expands the test space with greater diversity, while
the second enables a refined robustness evaluation across rare, frequent, and
overall question distributions. Second, we introduce a robust Multimodal
Audio-Visual Epistemic Network (MAVEN) that leverages a multifaceted cycle
collaborative debiasing strategy to mitigate bias learning. Experimental
results demonstrate that our architecture achieves state-of-the-art performance
on FortisAVQA, with a notable improvement of 7.81%. Extensive ablation studies
on both datasets validate the effectiveness of our debiasing components.
Additionally, our evaluation reveals the limited robustness of existing
multimodal QA methods. We also verify the plug-and-play capability of our
strategy by integrating it with various baseline models across both datasets.
Our dataset and code are available at https://github.com/reml-group/fortisavqa.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:23:50 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 09:19:00 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ma",
"Jie",
""
],
[
"Gao",
"Zhitao",
""
],
[
"Chai",
"Qi",
""
],
[
"Liu",
"Jun",
""
],
[
"Wang",
"Pinghui",
""
],
[
"Tao",
"Jing",
""
],
[
"Su",
"Zhou",
""
]
] | TITLE: FortisAVQA and MAVEN: a Benchmark Dataset and Debiasing Framework for
Robust Multimodal Reasoning
ABSTRACT: Audio-Visual Question Answering (AVQA) is a challenging multimodal reasoning
task requiring intelligent systems to answer natural language queries based on
paired audio-video inputs accurately. However, existing AVQA approaches often
suffer from overfitting to dataset biases, leading to poor robustness.
Moreover, current datasets may not effectively diagnose these methods. To
address these challenges, we first introduce a novel dataset, FortisAVQA,
constructed in two stages: (1) rephrasing questions in the test split of the
public MUSIC-AVQA dataset and (2) introducing distribution shifts across
questions. The first stage expands the test space with greater diversity, while
the second enables a refined robustness evaluation across rare, frequent, and
overall question distributions. Second, we introduce a robust Multimodal
Audio-Visual Epistemic Network (MAVEN) that leverages a multifaceted cycle
collaborative debiasing strategy to mitigate bias learning. Experimental
results demonstrate that our architecture achieves state-of-the-art performance
on FortisAVQA, with a notable improvement of 7.81%. Extensive ablation studies
on both datasets validate the effectiveness of our debiasing components.
Additionally, our evaluation reveals the limited robustness of existing
multimodal QA methods. We also verify the plug-and-play capability of our
strategy by integrating it with various baseline models across both datasets.
Our dataset and code are available at https://github.com/reml-group/fortisavqa.
|
2504.00595 | Weizhi Wang | Weizhi Wang, Yu Tian, Linjie Yang, Heng Wang, Xifeng Yan | Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal
LLMs on Academic Resources | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reproduction of state-of-the-art multimodal LLM pre-training faces
barriers at every stage of the pipeline, including high-quality data filtering,
multimodal data mixture strategies, sequence packing techniques, and training
frameworks. We introduce Open-Qwen2VL, a fully open-source 2B-parameter
Multimodal Large Language Model pre-trained efficiently on 29M image-text pairs
using only 220 A100-40G GPU hours. Our approach employs low-to-high dynamic
image resolution and multimodal sequence packing to significantly enhance
pre-training efficiency. The training dataset was carefully curated using both
MLLM-based filtering techniques (e.g., MLM-Filter) and conventional CLIP-based
filtering methods, substantially improving data quality and training
efficiency. The Open-Qwen2VL pre-training is conducted on academic-level
8xA100-40G GPUs at UCSB on 5B packed multimodal tokens, which is 0.36% of 1.4T
multimodal pre-training tokens of Qwen2-VL. The final instruction-tuned
Open-Qwen2VL outperforms partially-open state-of-the-art MLLM Qwen2-VL-2B on
various multimodal benchmarks of MMBench, SEEDBench, MMstar, and MathVista,
indicating the remarkable training efficiency of Open-Qwen2VL. We open-source
all aspects of our work, including compute-efficient and data-efficient
training details, data filtering methods, sequence packing scripts,
pre-training data in WebDataset format, FSDP-based training codebase, and both
base and instruction-tuned model checkpoints. We redefine "fully open" for
multimodal LLMs as the complete release of: 1) the training codebase, 2)
detailed data filtering techniques, and 3) all pre-training and supervised
fine-tuning data used to develop the model.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 09:54:00 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 11:17:09 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Weizhi",
""
],
[
"Tian",
"Yu",
""
],
[
"Yang",
"Linjie",
""
],
[
"Wang",
"Heng",
""
],
[
"Yan",
"Xifeng",
""
]
] | TITLE: Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal
LLMs on Academic Resources
ABSTRACT: The reproduction of state-of-the-art multimodal LLM pre-training faces
barriers at every stage of the pipeline, including high-quality data filtering,
multimodal data mixture strategies, sequence packing techniques, and training
frameworks. We introduce Open-Qwen2VL, a fully open-source 2B-parameter
Multimodal Large Language Model pre-trained efficiently on 29M image-text pairs
using only 220 A100-40G GPU hours. Our approach employs low-to-high dynamic
image resolution and multimodal sequence packing to significantly enhance
pre-training efficiency. The training dataset was carefully curated using both
MLLM-based filtering techniques (e.g., MLM-Filter) and conventional CLIP-based
filtering methods, substantially improving data quality and training
efficiency. The Open-Qwen2VL pre-training is conducted on academic-level
8xA100-40G GPUs at UCSB on 5B packed multimodal tokens, which is 0.36% of 1.4T
multimodal pre-training tokens of Qwen2-VL. The final instruction-tuned
Open-Qwen2VL outperforms partially-open state-of-the-art MLLM Qwen2-VL-2B on
various multimodal benchmarks of MMBench, SEEDBench, MMstar, and MathVista,
indicating the remarkable training efficiency of Open-Qwen2VL. We open-source
all aspects of our work, including compute-efficient and data-efficient
training details, data filtering methods, sequence packing scripts,
pre-training data in WebDataset format, FSDP-based training codebase, and both
base and instruction-tuned model checkpoints. We redefine "fully open" for
multimodal LLMs as the complete release of: 1) the training codebase, 2)
detailed data filtering techniques, and 3) all pre-training and supervised
fine-tuning data used to develop the model.
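A small sketch of the sequence-packing idea mentioned above, using a greedy first-fit-decreasing heuristic to group samples into fixed-length packed sequences; the 4096-token budget and the heuristic itself are illustrative assumptions, not necessarily what Open-Qwen2VL uses:

```python
def pack_sequences(sample_lengths, max_len=4096):
    """Return lists of sample indices such that the token lengths in each packed
    sequence sum to at most max_len, reducing padding waste during pre-training."""
    bins = []  # each bin: [remaining_capacity, [sample indices]]
    for idx, length in sorted(enumerate(sample_lengths), key=lambda t: -t[1]):
        for b in bins:
            if length <= b[0]:      # first bin with room: place the sample there
                b[0] -= length
                b[1].append(idx)
                break
        else:                       # no bin fits: open a new packed sequence
            bins.append([max_len - length, [idx]])
    return [indices for _, indices in bins]
```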
|
2504.00762 | Jianhao Chen | Jianhao Chen, Zishuo Xun, Bocheng Zhou, Han Qi, Qiaosheng Zhang, Yang
Chen, Wei Hu, Yuzhong Qu, Wanli Ouyang, Shuyue Hu | Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling
Efficiently Scales Test-Time Compute | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents a simple, effective, and cost-efficient strategy to
improve LLM performance by scaling test-time compute. Our strategy builds upon
the repeated-sampling-then-voting framework, with a novel twist: incorporating
multiple models, even weaker ones, to leverage their complementary strengths
that potentially arise from diverse training data and paradigms. By using
consistency as a signal, our strategy dynamically switches between models.
Theoretical analysis highlights the efficiency and performance advantages of
our strategy. Extensive experiments on six datasets demonstrate that our
strategy not only outperforms self-consistency and state-of-the-art multi-agent
debate approaches, but also significantly reduces inference costs.
Additionally, ModelSwitch requires only a few comparable LLMs to achieve
optimal performance and can be extended with verification methods,
demonstrating the potential of leveraging multiple LLMs in the
generation-verification paradigm.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:13:43 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 08:55:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chen",
"Jianhao",
""
],
[
"Xun",
"Zishuo",
""
],
[
"Zhou",
"Bocheng",
""
],
[
"Qi",
"Han",
""
],
[
"Zhang",
"Qiaosheng",
""
],
[
"Chen",
"Yang",
""
],
[
"Hu",
"Wei",
""
],
[
"Qu",
"Yuzhong",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Hu",
"Shuyue",
""
]
] | TITLE: Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling
Efficiently Scales Test-Time Compute
ABSTRACT: This paper presents a simple, effective, and cost-efficient strategy to
improve LLM performance by scaling test-time compute. Our strategy builds upon
the repeated-sampling-then-voting framework, with a novel twist: incorporating
multiple models, even weaker ones, to leverage their complementary strengths
that potentially arise from diverse training data and paradigms. By using
consistency as a signal, our strategy dynamically switches between models.
Theoretical analysis highlights the efficiency and performance advantages of
our strategy. Extensive experiments on six datasets demonstrate that our
strategy not only outperforms self-consistency and state-of-the-art multi-agent
debate approaches, but also significantly reduces inference costs.
Additionally, ModelSwitch requires only a few comparable LLMs to achieve
optimal performance and can be extended with verification methods,
demonstrating the potential of leveraging multiple LLMs in the
generation-verification paradigm.
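A minimal sketch of the consistency-gated switching idea described above; `models` (a non-empty list of callables returning answer strings), the sample count k, and the 0.6 threshold are illustrative assumptions rather than the paper's settings:

```python
from collections import Counter

def model_switch(models, question, k=5, consistency_threshold=0.6):
    """Repeatedly sample each model and accept its majority vote once the
    answers are consistent enough; otherwise switch to the next model."""
    answer = None
    for generate in models:                            # try models in a fixed order
        answers = [generate(question) for _ in range(k)]
        answer, votes = Counter(answers).most_common(1)[0]
        if votes / k >= consistency_threshold:         # consistent enough: accept the vote
            return answer
    return answer                                      # fall back to the last majority vote
```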
|
2504.01023 | Wu Chaofan | Chaofan Wu, Jiaheng Li, Jinghao Cao, Ming Li, Yongkang Feng, Jiayu Wu,
Shuwen Xu, Zihang Gao, Sidan Du, Yang Li | Omnidirectional Depth-Aided Occupancy Prediction based on Cylindrical
Voxel for Autonomous Driving | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate 3D perception is essential for autonomous driving. Traditional
methods often struggle with geometric ambiguity due to a lack of geometric
prior. To address these challenges, we use omnidirectional depth estimation to
introduce geometric prior. Based on the depth information, we propose a
Sketch-Coloring framework OmniDepth-Occ. Additionally, our approach introduces
a cylindrical voxel representation based on polar coordinates to better align
with the radial nature of panoramic camera views. To address the lack of
fisheye camera datasets in autonomous driving tasks, we also build a virtual
scene dataset with six fisheye cameras, and the data volume has reached twice
that of SemanticKITTI. Experimental results demonstrate that our
Sketch-Coloring network significantly enhances 3D perception performance.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 00:07:21 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wu",
"Chaofan",
""
],
[
"Li",
"Jiaheng",
""
],
[
"Cao",
"Jinghao",
""
],
[
"Li",
"Ming",
""
],
[
"Feng",
"Yongkang",
""
],
[
"Wu",
"Jiayu",
""
],
[
"Xu",
"Shuwen",
""
],
[
"Gao",
"Zihang",
""
],
[
"Du",
"Sidan",
""
],
[
"Li",
"Yang",
""
]
] | TITLE: Omnidirectional Depth-Aided Occupancy Prediction based on Cylindrical
Voxel for Autonomous Driving
ABSTRACT: Accurate 3D perception is essential for autonomous driving. Traditional
methods often struggle with geometric ambiguity due to a lack of geometric
prior. To address these challenges, we use omnidirectional depth estimation to
introduce geometric prior. Based on the depth information, we propose a
Sketch-Coloring framework OmniDepth-Occ. Additionally, our approach introduces
a cylindrical voxel representation based on polar coordinates to better align
with the radial nature of panoramic camera views. To address the lack of
fisheye camera datasets in autonomous driving tasks, we also build a virtual
scene dataset with six fisheye cameras, and the data volume has reached twice
that of SemanticKITTI. Experimental results demonstrate that our
Sketch-Coloring network significantly enhances 3D perception performance.
|
2504.01024 | Yufei He | Yufei He, Xucong Zhang, and Arno H. A. Stienen | Gaze-Guided 3D Hand Motion Prediction for Detecting Intent in Egocentric
Grasping Tasks | null | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human intention detection with hand motion prediction is critical to drive
the upper-extremity assistive robots in neurorehabilitation applications.
However, the traditional methods relying on physiological signal measurement
are restrictive and often lack environmental context. We propose a novel
approach that predicts future sequences of both hand poses and joint positions.
This method integrates gaze information, historical hand motion sequences, and
environmental object data, adapting dynamically to the assistive needs of the
patient without prior knowledge of the intended object for grasping.
Specifically, we use a vector-quantized variational autoencoder for robust hand
pose encoding with an autoregressive generative transformer for effective hand
motion sequence prediction. We demonstrate the usability of these novel
techniques in a pilot study with healthy subjects. To train and evaluate the
proposed method, we collect a dataset consisting of various types of grasp
actions on different objects from multiple subjects. Through extensive
experiments, we demonstrate that the proposed method can successfully predict
sequential hand movement. Especially, the gaze information shows significant
enhancements in prediction capabilities, particularly with fewer input frames,
highlighting the potential of the proposed method for real-world applications.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:26:41 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"He",
"Yufei",
""
],
[
"Zhang",
"Xucong",
""
],
[
"Stienen",
"Arno H. A.",
""
]
] | TITLE: Gaze-Guided 3D Hand Motion Prediction for Detecting Intent in Egocentric
Grasping Tasks
ABSTRACT: Human intention detection with hand motion prediction is critical to drive
the upper-extremity assistive robots in neurorehabilitation applications.
However, the traditional methods relying on physiological signal measurement
are restrictive and often lack environmental context. We propose a novel
approach that predicts future sequences of both hand poses and joint positions.
This method integrates gaze information, historical hand motion sequences, and
environmental object data, adapting dynamically to the assistive needs of the
patient without prior knowledge of the intended object for grasping.
Specifically, we use a vector-quantized variational autoencoder for robust hand
pose encoding with an autoregressive generative transformer for effective hand
motion sequence prediction. We demonstrate the usability of these novel
techniques in a pilot study with healthy subjects. To train and evaluate the
proposed method, we collect a dataset consisting of various types of grasp
actions on different objects from multiple subjects. Through extensive
experiments, we demonstrate that the proposed method can successfully predict
sequential hand movement. Especially, the gaze information shows significant
enhancements in prediction capabilities, particularly with fewer input frames,
highlighting the potential of the proposed method for real-world applications.
|
2504.01028 | Malte Prie{\ss} | Anket Mehra, Malte Prie{\ss}, Marian Himstedt | Improving Applicability of Deep Learning based Token Classification
models during Training | null | null | null | null | cs.CV cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows that further evaluation metrics are needed during model
training to decide on a model's applicability in inference. As an example, a
LayoutLM-based model is trained for token classification in documents. The
documents are German receipts. We show that conventional classification
metrics, represented by the F1-Score in our experiments, are insufficient for
evaluating the applicability of machine learning models in practice. To address
this problem, we introduce a novel metric, Document Integrity Precision (DIP),
as a solution for visual document understanding and the token classification
task. To the best of our knowledge, nothing comparable has been introduced in
this context. DIP is a rigorous metric, describing how many documents of the
test dataset require manual interventions. It enables AI researchers and
software developers to conduct an in-depth investigation of the level of
process automation in business software. In order to validate DIP, we conduct
experiments with our created models to highlight and analyze the impact and
relevance of DIP in evaluating whether the model should be deployed in
different training settings. Our results demonstrate that existing metrics
barely change for isolated model impairments, whereas DIP indicates that the
model requires substantial human interventions in deployment. The larger the
set of entities being predicted, the less sensitive conventional metrics are,
entailing poor automation quality. DIP, in contrast, remains a single value to
be interpreted for entire entity sets. This highlights the importance of having
metrics that focus on the business task for model training in production. Since
DIP is created for the token classification task, more research is needed to
find suitable metrics for other training tasks.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 17:01:19 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Mehra",
"Anket",
""
],
[
"Prieß",
"Malte",
""
],
[
"Himstedt",
"Marian",
""
]
] | TITLE: Improving Applicability of Deep Learning based Token Classification
models during Training
ABSTRACT: This paper shows that further evaluation metrics are needed during model
training to decide on a model's applicability in inference. As an example, a
LayoutLM-based model is trained for token classification in documents. The
documents are German receipts. We show that conventional classification
metrics, represented by the F1-Score in our experiments, are insufficient for
evaluating the applicability of machine learning models in practice. To address
this problem, we introduce a novel metric, Document Integrity Precision (DIP),
as a solution for visual document understanding and the token classification
task. To the best of our knowledge, nothing comparable has been introduced in
this context. DIP is a rigorous metric, describing how many documents of the
test dataset require manual interventions. It enables AI researchers and
software developers to conduct an in-depth investigation of the level of
process automation in business software. In order to validate DIP, we conduct
experiments with our created models to highlight and analyze the impact and
relevance of DIP in evaluating whether the model should be deployed in
different training settings. Our results demonstrate that existing metrics
barely change for isolated model impairments, whereas DIP indicates that the
model requires substantial human interventions in deployment. The larger the
set of entities being predicted, the less sensitive conventional metrics are,
entailing poor automation quality. DIP, in contrast, remains a single value to
be interpreted for entire entity sets. This highlights the importance of having
metrics that focus on the business task for model training in production. Since
DIP is created for the token classification task, more research is needed to
find suitable metrics for other training tasks.
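One plausible reading of the DIP metric described above, as the share of test documents whose token-level predictions are entirely correct and therefore need no manual intervention; this is an illustrative interpretation, not necessarily the paper's exact definition:

```python
def document_integrity_precision(docs):
    """docs: list of (predicted_labels, true_labels) pairs, one pair per document.
    Returns the fraction of documents whose predictions are entirely correct,
    i.e. documents that would require no manual intervention."""
    intact = sum(1 for pred, true in docs if list(pred) == list(true))
    return intact / len(docs)
```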
|
2504.01030 | Jian Huang | Xueyu Zhou, Chun Yin IP, and Jian Huang | Fair Sufficient Representation Learning | 35 pages, 11 figures, and 6 tables (1 in the main text, 5 in the
appendix) | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | The main objective of fair statistical modeling and machine learning is to
minimize or eliminate biases that may arise from the data or the model itself,
ensuring that predictions and decisions are not unjustly influenced by
sensitive attributes such as race, gender, age, or other protected
characteristics. In this paper, we introduce a Fair Sufficient Representation
Learning (FSRL) method that balances sufficiency and fairness. Sufficiency
ensures that the representation should capture all necessary information about
the target variables, while fairness requires that the learned representation
remains independent of sensitive attributes. FSRL is based on a convex
combination of an objective function for learning a sufficient representation
and an objective function that ensures fairness. Our approach manages fairness
and sufficiency at the representation level, offering a novel perspective on
fair representation learning. We implement this method using distance
covariance, which is effective for characterizing independence between random
variables. We establish the convergence properties of the learned
representations. Experiments conducted on healthcare and text datasets with
diverse structures demonstrate that FSRL achieves a superior trade-off between
fairness and accuracy compared to existing approaches.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 10:37:49 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhou",
"Xueyu",
""
],
[
"IP",
"Chun Yin",
""
],
[
"Huang",
"Jian",
""
]
] | TITLE: Fair Sufficient Representation Learning
ABSTRACT: The main objective of fair statistical modeling and machine learning is to
minimize or eliminate biases that may arise from the data or the model itself,
ensuring that predictions and decisions are not unjustly influenced by
sensitive attributes such as race, gender, age, or other protected
characteristics. In this paper, we introduce a Fair Sufficient Representation
Learning (FSRL) method that balances sufficiency and fairness. Sufficiency
ensures that the representation should capture all necessary information about
the target variables, while fairness requires that the learned representation
remains independent of sensitive attributes. FSRL is based on a convex
combination of an objective function for learning a sufficient representation
and an objective function that ensures fairness. Our approach manages fairness
and sufficiency at the representation level, offering a novel perspective on
fair representation learning. We implement this method using distance
covariance, which is effective for characterizing independence between random
variables. We establish the convergence properties of the learned
representations. Experiments conducted on healthcare and text datasets with
diverse structures demonstrate that FSRL achieves a superior trade-off between
fairness and accuracy compared to existing approaches.
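A minimal sketch of the convex combination behind FSRL, pairing a sufficiency term with a distance-covariance fairness penalty; only the use of distance covariance and the convex combination follow the abstract, while the MSE sufficiency term and alpha=0.7 are placeholders:

```python
import torch
import torch.nn.functional as F

def dcov_sq(x, s):
    """Squared sample distance covariance between a representation batch x (n, p)
    and sensitive attributes s (n, q); small values indicate near-independence."""
    a = torch.cdist(x, x)  # pairwise Euclidean distances within x
    b = torch.cdist(s, s)  # pairwise Euclidean distances within s
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
    return (A * B).mean()

def fsrl_loss(z, y_pred, y, s, alpha=0.7):
    """Convex combination of a sufficiency term and a fairness term."""
    sufficiency = F.mse_loss(y_pred, y)   # placeholder sufficiency objective
    fairness = dcov_sq(z, s)              # penalize dependence of z on s
    return alpha * sufficiency + (1.0 - alpha) * fairness
```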
|
2504.01041 | Jun Cui | Jun Cui | Empirical Analysis of Digital Innovations Impact on Corporate ESG
Performance: The Mediating Role of GAI Technology | null | null | null | null | econ.GN cs.CY q-fin.EC | http://creativecommons.org/publicdomain/zero/1.0/ | This study investigates the relationship between corporate digital innovation
and Environmental, Social, and Governance (ESG) performance, with a specific
focus on the mediating role of Generative artificial intelligence technology
adoption. Using a comprehensive panel dataset of 8,000 observations from the
CMARS and WIND database spanning from 2015 to 2023, we employ multiple
econometric techniques to examine this relationship. Our findings reveal that
digital innovation significantly enhances corporate ESG performance, with GAI
technology adoption serving as a crucial mediating mechanism. Specifically,
digital innovation positively influences GAI technology adoption, which
subsequently improves ESG performance. Furthermore, our heterogeneity analysis
indicates that this relationship varies across firm size, industry type, and
ownership structure. Finally, our results remain robust after addressing
potential endogeneity concerns through instrumental variable estimation,
propensity score matching, and difference-in-differences approaches. This
research contributes to the growing literature on technology-driven
sustainability transformations and offers practical implications for corporate
strategy and policy development in promoting sustainable business practices
through technological advancement.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 12:34:02 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Cui",
"Jun",
""
]
] | TITLE: Empirical Analysis of Digital Innovations Impact on Corporate ESG
Performance: The Mediating Role of GAI Technology
ABSTRACT: This study investigates the relationship between corporate digital innovation
and Environmental, Social, and Governance (ESG) performance, with a specific
focus on the mediating role of Generative artificial intelligence technology
adoption. Using a comprehensive panel dataset of 8,000 observations from the
CMARS and WIND database spanning from 2015 to 2023, we employ multiple
econometric techniques to examine this relationship. Our findings reveal that
digital innovation significantly enhances corporate ESG performance, with GAI
technology adoption serving as a crucial mediating mechanism. Specifically,
digital innovation positively influences GAI technology adoption, which
subsequently improves ESG performance. Furthermore, our heterogeneity analysis
indicates that this relationship varies across firm size, industry type, and
ownership structure. Finally, our results remain robust after addressing
potential endogeneity concerns through instrumental variable estimation,
propensity score matching, and difference-in-differences approaches. This
research contributes to the growing literature on technology-driven
sustainability transformations and offers practical implications for corporate
strategy and policy development in promoting sustainable business practices
through technological advancement.
|
2504.01047 | Asraa Muayed | Asraa Muayed Abdalah, Noor Redha Alkazaz | Predicting Movie Production Years through Facial Recognition of Actors
with Machine Learning | null | null | 10.21123/bsj.2024.8996 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study used machine learning algorithms to identify actors and extract
the age of actors from images taken randomly from movies. The use of images
taken from Arab movies includes challenges such as non-uniform lighting,
different and multiple poses for the actors and multiple elements with the
actor or a group of actors. Additionally, the use of make-up, wigs, beards, and
wearing different accessories and costumes made it difficult for the system to
identify the personality of the same actor. The Arab Actors Dataset-AAD
comprises 574 images sourced from various movies, encompassing both black and
white as well as color compositions. The images depict complete scenes or
fragments thereof. Multiple models were employed for feature extraction, and
diverse machine learning algorithms were utilized during the classification and
prediction stages to determine the most effective algorithm for handling such
image types. The study demonstrated that the Logistic Regression model
exhibited the best performance compared to other models in the
training phase, as evidenced by its AUC, precision, CA, and F1-score values of
99%, 86%, 85.5% and 84.2% respectively. The findings of this study can be used
to improve the precision and reliability of facial recognition technology for
various uses as with movies search services, movie suggestion algorithms, and
genre classification of movies.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 04:46:05 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Abdalah",
"Asraa Muayed",
""
],
[
"Alkazaz",
"Noor Redha",
""
]
] | TITLE: Predicting Movie Production Years through Facial Recognition of Actors
with Machine Learning
ABSTRACT: This study used machine learning algorithms to identify actors and extract
the age of actors from images taken randomly from movies. The use of images
taken from Arab movies includes challenges such as non-uniform lighting,
different and multiple poses for the actors and multiple elements with the
actor or a group of actors. Additionally, the use of make-up, wigs, beards, and
wearing different accessories and costumes made it difficult for the system to
identify the personality of the same actor. The Arab Actors Dataset-AAD
comprises 574 images sourced from various movies, encompassing both black and
white as well as color compositions. The images depict complete scenes or
fragments thereof. Multiple models were employed for feature extraction, and
diverse machine learning algorithms were utilized during the classification and
prediction stages to determine the most effective algorithm for handling such
image types. The study demonstrated that the Logistic Regression model
exhibited the best performance compared to other models in the
training phase, as evidenced by its AUC, precision, CA, and F1-score values of
99%, 86%, 85.5% and 84.2% respectively. The findings of this study can be used
to improve the precision and reliability of facial recognition technology for
various uses as with movies search services, movie suggestion algorithms, and
genre classification of movies.
|
2504.01089 | James Mullen Jr | James F. Mullen Jr, Dhruva Kumar, Xuewei Qi, Rajasimman Madhivanan,
Arnie Sen, Dinesh Manocha, Richard Kim | HomeEmergency -- Using Audio to Find and Respond to Emergencies in the
Home | null | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the United States alone accidental home deaths exceed 128,000 per year.
Our work aims to enable home robots that respond to emergency scenarios in the
home, preventing injuries and deaths. We introduce a new dataset of household
emergencies based in the ThreeDWorld simulator. Each scenario in our dataset
begins with an instantaneous or periodic sound which may or may not be an
emergency. The agent must navigate the multi-room home scene using prior
observations, alongside audio signals and images from the simulator, to
determine if there is an emergency or not.
In addition to our new dataset, we present a modular approach for localizing
and identifying potential home emergencies. Underpinning our approach is a
novel probabilistic dynamic scene graph (P-DSG), where our key insight is that
graph nodes corresponding to agents can be represented with a probabilistic
edge. This edge, when refined using Bayesian inference, enables efficient and
effective localization of agents in the scene. We also utilize multi-modal
vision-language models (VLMs) as a component in our approach, determining
object traits (e.g. flammability) and identifying emergencies. We present a
demonstration of our method completing a real-world version of our task on a
consumer robot, showing the transferability of both our task and our method.
Our dataset will be released to the public upon this paper's publication.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 18:07:25 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Mullen",
"James F.",
"Jr"
],
[
"Kumar",
"Dhruva",
""
],
[
"Qi",
"Xuewei",
""
],
[
"Madhivanan",
"Rajasimman",
""
],
[
"Sen",
"Arnie",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Kim",
"Richard",
""
]
] | TITLE: HomeEmergency -- Using Audio to Find and Respond to Emergencies in the
Home
ABSTRACT: In the United States alone accidental home deaths exceed 128,000 per year.
Our work aims to enable home robots that respond to emergency scenarios in the
home, preventing injuries and deaths. We introduce a new dataset of household
emergencies based in the ThreeDWorld simulator. Each scenario in our dataset
begins with an instantaneous or periodic sound which may or may not be an
emergency. The agent must navigate the multi-room home scene using prior
observations, alongside audio signals and images from the simulator, to
determine if there is an emergency or not.
In addition to our new dataset, we present a modular approach for localizing
and identifying potential home emergencies. Underpinning our approach is a
novel probabilistic dynamic scene graph (P-DSG), where our key insight is that
graph nodes corresponding to agents can be represented with a probabilistic
edge. This edge, when refined using Bayesian inference, enables efficient and
effective localization of agents in the scene. We also utilize multi-modal
vision-language models (VLMs) as a component in our approach, determining
object traits (e.g. flammability) and identifying emergencies. We present a
demonstration of our method completing a real-world version of our task on a
consumer robot, showing the transferability of both our task and our method.
Our dataset will be released to the public upon this paper's publication.
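A minimal sketch of the Bayesian refinement behind the probabilistic agent edge described above: a categorical belief over rooms is updated with the likelihood of an audio observation; the room names and probability values are invented for illustration:

```python
def bayes_update(prior, likelihood):
    """prior: room -> P(agent in room); likelihood: room -> P(observed sound | agent in room).
    Returns the posterior belief over rooms after one observation."""
    posterior = {room: prior[room] * likelihood.get(room, 0.0) for room in prior}
    total = sum(posterior.values())
    if total == 0.0:
        return dict(prior)            # uninformative observation: keep the prior
    return {room: p / total for room, p in posterior.items()}

# Illustrative numbers only: a crash heard near the kitchen sharpens the belief.
prior = {"kitchen": 0.25, "bathroom": 0.25, "bedroom": 0.25, "living_room": 0.25}
likelihood = {"kitchen": 0.7, "bathroom": 0.2, "bedroom": 0.05, "living_room": 0.05}
print(bayes_update(prior, likelihood))  # kitchen now dominates the posterior
```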
|
2504.01094 | Jaechul Roh | Jaechul Roh, Virat Shejwalkar, Amir Houmansadr | Multilingual and Multi-Accent Jailbreaking of Audio LLMs | 21 pages, 6 figures, 15 tables | null | null | null | cs.SD cs.AI cs.CL cs.CR eess.AS | http://creativecommons.org/licenses/by/4.0/ | Large Audio Language Models (LALMs) have significantly advanced audio
understanding but introduce critical security risks, particularly through audio
jailbreaks. While prior work has focused on English-centric attacks, we expose
a far more severe vulnerability: adversarial multilingual and multi-accent
audio jailbreaks, where linguistic and acoustic variations dramatically amplify
attack success. In this paper, we introduce Multi-AudioJail, the first
systematic framework to exploit these vulnerabilities through (1) a novel
dataset of adversarially perturbed multilingual/multi-accent audio jailbreaking
prompts, and (2) a hierarchical evaluation pipeline revealing how acoustic
perturbations (e.g., reverberation, echo, and whisper effects) interact with
cross-lingual phonetics to cause jailbreak success rates (JSRs) to surge by up
to +57.25 percentage points (e.g., reverberated Kenyan-accented attack on
MERaLiON). Crucially, our work further reveals that multimodal LLMs are
inherently more vulnerable than unimodal systems: attackers need only exploit
the weakest link (e.g., non-English audio inputs) to compromise the entire
model, which we empirically show by multilingual audio-only attacks achieving
3.1x higher success rates than text-only attacks. We plan to release our
dataset to spur research into cross-modal defenses, urging the community to
address this expanding attack surface in multimodality as LALMs evolve.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 18:12:23 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Roh",
"Jaechul",
""
],
[
"Shejwalkar",
"Virat",
""
],
[
"Houmansadr",
"Amir",
""
]
] | TITLE: Multilingual and Multi-Accent Jailbreaking of Audio LLMs
ABSTRACT: Large Audio Language Models (LALMs) have significantly advanced audio
understanding but introduce critical security risks, particularly through audio
jailbreaks. While prior work has focused on English-centric attacks, we expose
a far more severe vulnerability: adversarial multilingual and multi-accent
audio jailbreaks, where linguistic and acoustic variations dramatically amplify
attack success. In this paper, we introduce Multi-AudioJail, the first
systematic framework to exploit these vulnerabilities through (1) a novel
dataset of adversarially perturbed multilingual/multi-accent audio jailbreaking
prompts, and (2) a hierarchical evaluation pipeline revealing how acoustic
perturbations (e.g., reverberation, echo, and whisper effects) interact with
cross-lingual phonetics to cause jailbreak success rates (JSRs) to surge by up
to +57.25 percentage points (e.g., reverberated Kenyan-accented attack on
MERaLiON). Crucially, our work further reveals that multimodal LLMs are
inherently more vulnerable than unimodal systems: attackers need only exploit
the weakest link (e.g., non-English audio inputs) to compromise the entire
model, which we empirically show by multilingual audio-only attacks achieving
3.1x higher success rates than text-only attacks. We plan to release our
dataset to spur research into cross-modal defenses, urging the community to
address this expanding attack surface in multimodality as LALMs evolve.
|
2504.01127 | Ziyi Liu | Ziyi Liu, Priyanka Dey, Zhenyu Zhao, Jen-tse Huang, Rahul Gupta, Yang
Liu, Jieyu Zhao | Can LLMs Grasp Implicit Cultural Values? Benchmarking LLMs'
Metacognitive Cultural Intelligence with CQ-Bench | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Cultural Intelligence (CQ) refers to the ability to understand unfamiliar
cultural contexts-a crucial skill for large language models (LLMs) to
effectively engage with globally diverse users. While existing research often
focuses on explicitly stated cultural norms, such approaches fail to capture
the subtle, implicit values that underlie real-world conversations. To address
this gap, we introduce CQ-Bench, a benchmark specifically designed to assess
LLMs' capability to infer implicit cultural values from natural conversational
contexts. We generate a multi-character conversation-based stories dataset
using values from the World Value Survey and GlobalOpinions datasets, with
topics including ethical, religious, social, and political. Our dataset
construction pipeline includes rigorous validation procedures-incorporation,
consistency, and implicitness checks-using GPT-4o, with 98.2% human-model
agreement in the final validation. Our benchmark consists of three tasks of
increasing complexity: attitude detection, value selection, and value
extraction. We find that while o1 and Deepseek-R1 models reach human-level
performance in value selection (0.809 and 0.814), they still fall short in
nuanced attitude detection, with F1 scores of 0.622 and 0.635, respectively. In
the value extraction task, GPT-4o-mini and o3-mini score 0.602 and 0.598,
highlighting the difficulty of open-ended cultural reasoning. Notably,
fine-tuning smaller models (e.g., LLaMA-3.2-3B) on only 500 culturally rich
examples improves performance by over 10%, even outperforming stronger
baselines (o3-mini) in some cases. Using CQ-Bench, we provide insights into the
current challenges in LLMs' CQ research and suggest practical pathways for
enhancing LLMs' cross-cultural reasoning abilities.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 18:54:47 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Liu",
"Ziyi",
""
],
[
"Dey",
"Priyanka",
""
],
[
"Zhao",
"Zhenyu",
""
],
[
"Huang",
"Jen-tse",
""
],
[
"Gupta",
"Rahul",
""
],
[
"Liu",
"Yang",
""
],
[
"Zhao",
"Jieyu",
""
]
] | TITLE: Can LLMs Grasp Implicit Cultural Values? Benchmarking LLMs'
Metacognitive Cultural Intelligence with CQ-Bench
ABSTRACT: Cultural Intelligence (CQ) refers to the ability to understand unfamiliar
cultural contexts-a crucial skill for large language models (LLMs) to
effectively engage with globally diverse users. While existing research often
focuses on explicitly stated cultural norms, such approaches fail to capture
the subtle, implicit values that underlie real-world conversations. To address
this gap, we introduce CQ-Bench, a benchmark specifically designed to assess
LLMs' capability to infer implicit cultural values from natural conversational
contexts. We generate a dataset of multi-character, conversation-based stories
using values from the World Values Survey and GlobalOpinions datasets, covering
ethical, religious, social, and political topics. Our dataset
construction pipeline includes rigorous validation procedures (incorporation,
consistency, and implicitness checks) using GPT-4o, with 98.2% human-model
agreement in the final validation. Our benchmark consists of three tasks of
increasing complexity: attitude detection, value selection, and value
extraction. We find that while o1 and Deepseek-R1 models reach human-level
performance in value selection (0.809 and 0.814), they still fall short in
nuanced attitude detection, with F1 scores of 0.622 and 0.635, respectively. In
the value extraction task, GPT-4o-mini and o3-mini score 0.602 and 0.598,
highlighting the difficulty of open-ended cultural reasoning. Notably,
fine-tuning smaller models (e.g., LLaMA-3.2-3B) on only 500 culturally rich
examples improves performance by over 10%, even outperforming stronger
baselines (o3-mini) in some cases. Using CQ-Bench, we provide insights into the
current challenges in LLMs' CQ research and suggest practical pathways for
enhancing LLMs' cross-cultural reasoning abilities.
|
2504.01142 | Tiantian Liu | Tiantian Liu, Hengyu Liu, Tianyi Li, Kristian Torp, Christian S.
Jensen | ACTIVE: Continuous Similarity Search for Vessel Trajectories | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Publicly available vessel trajectory data is emitted continuously from the
global AIS system. Continuous trajectory similarity search on this data has
applications in, e.g., maritime navigation and safety. Existing proposals
typically assume an offline setting and focus on finding similarities between
complete trajectories. Such proposals are less effective when applied to online
scenarios, where similarity comparisons must be performed continuously as new
trajectory data arrives and trajectories evolve. We therefore propose a
real-time continuous trajectory similarity search method for vessels (ACTIVE).
We introduce a novel similarity measure, object-trajectory real-time distance,
that emphasizes the anticipated future movement trends of vessels, enabling
more predictive and forward-looking comparisons. Next, we propose a
segment-based vessel trajectory index structure that organizes historical
trajectories into smaller and manageable segments, facilitating accelerated
similarity computations. Leveraging this index, we propose an efficient
continuous similar trajectory search (CSTS) algorithm together with a variety
of search space pruning strategies that reduce unnecessary computations during
the continuous similarity search, thereby further improving efficiency.
Extensive experiments on two large real-world AIS datasets offer evidence that
ACTIVE is capable of outperforming state-of-the-art methods considerably.
ACTIVE significantly reduces index construction costs and index size while
achieving a 70% reduction in terms of query time and a 60% increase in terms of
hit rate.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 19:25:27 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Liu",
"Tiantian",
""
],
[
"Liu",
"Hengyu",
""
],
[
"Li",
"Tianyi",
""
],
[
"Torp",
"Kristian",
""
],
[
"Jensen",
"Christian S.",
""
]
] | TITLE: ACTIVE: Continuous Similarity Search for Vessel Trajectories
ABSTRACT: Publicly available vessel trajectory data is emitted continuously from the
global AIS system. Continuous trajectory similarity search on this data has
applications in, e.g., maritime navigation and safety. Existing proposals
typically assume an offline setting and focus on finding similarities between
complete trajectories. Such proposals are less effective when applied to online
scenarios, where similarity comparisons must be performed continuously as new
trajectory data arrives and trajectories evolve. We therefore propose a
real-time continuous trajectory similarity search method for vessels (ACTIVE).
We introduce a novel similarity measure, object-trajectory real-time distance,
that emphasizes the anticipated future movement trends of vessels, enabling
more predictive and forward-looking comparisons. Next, we propose a
segment-based vessel trajectory index structure that organizes historical
trajectories into smaller and manageable segments, facilitating accelerated
similarity computations. Leveraging this index, we propose an efficient
continuous similar trajectory search (CSTS) algorithm together with a variety
of search space pruning strategies that reduce unnecessary computations during
the continuous similarity search, thereby further improving efficiency.
Extensive experiments on two large real-world AIS datasets offer evidence that
ACTIVE is capable of outperforming state-of-the-art methods considerably.
ACTIVE significantly reduces index construction costs and index size while
achieving a 70% reduction in terms of query time and a 60% increase in terms of
hit rate.
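An illustrative, heavily simplified reading of the object-trajectory real-time distance described above, in which the query vessel's next positions are linearly extrapolated from its latest fixes and compared against a candidate trajectory; the paper's formal definition may differ:

```python
import math

def object_trajectory_realtime_distance(vessel, candidate, horizon_steps=3):
    """vessel, candidate: lists of (x, y) points in a projected coordinate system;
    vessel must contain at least two fixes. Extrapolates the vessel's next positions
    from its latest velocity and averages the per-step minimum distance to the candidate."""
    (x0, y0), (x1, y1) = vessel[-2], vessel[-1]
    vx, vy = x1 - x0, y1 - y0                                   # latest velocity estimate
    predicted = [(x1 + k * vx, y1 + k * vy) for k in range(1, horizon_steps + 1)]
    nearest = lambda p: min(math.hypot(p[0] - qx, p[1] - qy) for qx, qy in candidate)
    return sum(nearest(p) for p in predicted) / len(predicted)
```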
|
2504.01145 | Bikash Saha | Bikash Saha, Nanda Rani, Sandeep Kumar Shukla | MaLAware: Automating the Comprehension of Malicious Software Behaviours
using Large Language Models (LLMs) | Accepted at MSR 2025 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current malware (malicious software) analysis tools focus on detection and
family classification but fail to provide clear and actionable narrative
insights into the malignant activity of the malware. Therefore, there is a need
for a tool that translates raw malware data into human-readable descriptions.
Developing such a tool accelerates incident response, reduces malware analysts'
cognitive load, and enables individuals with limited technical expertise to
understand malicious software behaviour. With this objective, we present
MaLAware, which automatically summarizes the full spectrum of malicious
activity of malware executables. MaLAware processes Cuckoo Sandbox-generated
reports using large language models (LLMs) to correlate malignant activities
and generate concise summaries explaining malware behaviour. We evaluate the
tool's performance on five open-source LLMs. The evaluation uses the
human-written malware behaviour description dataset as ground truth. The
model's performance is measured using 11 extensive performance metrics, which
strengthen confidence in MaLAware's effectiveness. The current version of the
tool, i.e., MaLAware, supports Qwen2.5-7B, Llama2-7B, Llama3.1-8B, Mistral-7B,
and Falcon-7B, along with the quantization feature for resource-constrained
environments. MaLAware lays a foundation for future research in malware
behavior explanation, and its extensive evaluation demonstrates LLMs' ability
to narrate malware behavior in an actionable and comprehensive manner.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 19:27:17 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Saha",
"Bikash",
""
],
[
"Rani",
"Nanda",
""
],
[
"Shukla",
"Sandeep Kumar",
""
]
] | TITLE: MaLAware: Automating the Comprehension of Malicious Software Behaviours
using Large Language Models (LLMs)
ABSTRACT: Current malware (malicious software) analysis tools focus on detection and
family classification but fail to provide clear and actionable narrative
insights into the malignant activity of the malware. Therefore, there is a need
for a tool that translates raw malware data into human-readable descriptions.
Developing such a tool accelerates incident response, reduces malware analysts'
cognitive load, and enables individuals with limited technical expertise to
understand malicious software behaviour. With this objective, we present
MaLAware, which automatically summarizes the full spectrum of malicious
activity of malware executables. MaLAware processes Cuckoo Sandbox-generated
reports using large language models (LLMs) to correlate malignant activities
and generate concise summaries explaining malware behaviour. We evaluate the
tool's performance on five open-source LLMs. The evaluation uses the
human-written malware behaviour description dataset as ground truth. The
model's performance is measured using 11 extensive performance metrics, which
strengthen confidence in MaLAware's effectiveness. The current version of the
tool, i.e., MaLAware, supports Qwen2.5-7B, Llama2-7B, Llama3.1-8B, Mistral-7B,
and Falcon-7B, along with the quantization feature for resource-constrained
environments. MaLAware lays a foundation for future research in malware
behavior explanation, and its extensive evaluation demonstrates LLMs' ability
to narrate malware behavior in an actionable and comprehensive manner.
|
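As a rough illustration of the kind of pipeline the MaLAware record above describes (LLM-based summarization of Cuckoo Sandbox reports), the sketch below builds a prompt from a few report fields and feeds it to an off-the-shelf instruction-tuned model. This is not the authors' implementation: the report field names ("signatures", "behavior" -> "summary") follow a typical Cuckoo layout but are assumptions, and any locally available instruction-tuned model could be substituted.

```python
# Minimal sketch (not MaLAware itself): summarize selected fields of a Cuckoo Sandbox
# JSON report with a generic instruction-tuned LLM via the transformers pipeline API.
import json
from transformers import pipeline

def build_prompt(report_path: str, max_items: int = 20) -> str:
    # Field names below are assumptions based on the usual Cuckoo report structure.
    with open(report_path) as fh:
        report = json.load(fh)
    signatures = [s.get("description", "") for s in report.get("signatures", [])][:max_items]
    summary = report.get("behavior", {}).get("summary", {})
    files = summary.get("files", [])[:max_items]
    return (
        "Summarize the behaviour of this malware sample in plain English.\n"
        f"Triggered signatures: {signatures}\n"
        f"Files touched: {files}\n"
        "Summary:"
    )

# Any locally available instruction-tuned model could be used here.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct", device_map="auto")
print(generator(build_prompt("report.json"), max_new_tokens=256)[0]["generated_text"])
```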
2504.01159 | Noah Schnitzer | Noah Schnitzer, Lopa Bhatt, Ismail El Baggari, Robert Hovden, Benjamin
H. Savitzky, Michelle A. Smeaton, Berit H. Goodge | Quantitative approaches for multi-scale structural analysis with atomic
resolution electron microscopy | 18 pages, 13 figures | null | null | null | cond-mat.mtrl-sci physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Atomic-resolution imaging with scanning transmission electron microscopy is a
powerful tool for characterizing the nanoscale structure of materials, in
particular features such as defects, local strains, and symmetry-breaking
distortions. In addition to advanced instrumentation, the effectiveness of the
technique depends on computational image analysis to extract meaningful
features from complex datasets recorded in experiments, which can be
complicated by the presence of noise and artifacts, small or overlapping
features, and the need to scale analysis over large representative areas. Here,
we present image analysis approaches which synergize real and reciprocal space
information to efficiently and reliably obtain meaningful structural
information with picometer scale precision across hundreds of nanometers of
material from atomic-resolution electron microscope images. Damping
superstructure peaks in reciprocal space allows symmetry-breaking structural
distortions to be disentangled from other sources of inhomogeneity and measured
with high precision. Real space fitting of the wave-like signals resulting from
Fourier filtering enables absolute quantification of lattice parameter
variations and strain, as well as the uncertainty associated with these
measurements. Implementations of these algorithms are made available as an open
source Python package.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 19:53:23 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Schnitzer",
"Noah",
""
],
[
"Bhatt",
"Lopa",
""
],
[
"Baggari",
"Ismail El",
""
],
[
"Hovden",
"Robert",
""
],
[
"Savitzky",
"Benjamin H.",
""
],
[
"Smeaton",
"Michelle A.",
""
],
[
"Goodge",
"Berit H.",
""
]
] | TITLE: Quantitative approaches for multi-scale structural analysis with atomic
resolution electron microscopy
ABSTRACT: Atomic-resolution imaging with scanning transmission electron microscopy is a
powerful tool for characterizing the nanoscale structure of materials, in
particular features such as defects, local strains, and symmetry-breaking
distortions. In addition to advanced instrumentation, the effectiveness of the
technique depends on computational image analysis to extract meaningful
features from complex datasets recorded in experiments, which can be
complicated by the presence of noise and artifacts, small or overlapping
features, and the need to scale analysis over large representative areas. Here,
we present image analysis approaches which synergize real and reciprocal space
information to efficiently and reliably obtain meaningful structural
information with picometer scale precision across hundreds of nanometers of
material from atomic-resolution electron microscope images. Damping
superstructure peaks in reciprocal space allows symmetry-breaking structural
distortions to be disentangled from other sources of inhomogeneity and measured
with high precision. Real space fitting of the wave-like signals resulting from
Fourier filtering enables absolute quantification of lattice parameter
variations and strain, as well as the uncertainty associated with these
measurements. Implementations of these algorithms are made available as an open
source Python package.
|
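The record above describes recovering structural information by Fourier filtering individual reciprocal-space peaks and analysing the resulting wave-like real-space signals. The NumPy sketch below is only a schematic of that general idea, not the authors' open-source package; the function name, the Gaussian mask width, and the toy test image are invented for illustration.

```python
# Minimal sketch: Fourier-filter an atomic-resolution image around one chosen peak to
# recover the corresponding real-space lattice fringes, whose phase tracks local
# lattice displacement (and hence lattice-parameter variation).
import numpy as np

def fourier_filter(image: np.ndarray, peak: tuple[int, int], sigma: float = 3.0) -> np.ndarray:
    """Complex fringe image obtained by masking one peak in the centred FFT.

    `peak` is the (row, col) position of the chosen Bragg/superstructure peak;
    `sigma` is the Gaussian mask width in pixels.
    """
    fft = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = np.indices(image.shape)
    mask = np.exp(-((rows - peak[0]) ** 2 + (cols - peak[1]) ** 2) / (2 * sigma**2))
    return np.fft.ifft2(np.fft.ifftshift(fft * mask))

# Toy example: a noisy cosine "lattice" with an 8-pixel period along x.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:256, 0:256]
image = np.cos(2 * np.pi * x / 8) + 0.1 * rng.standard_normal((256, 256))
fringes = fourier_filter(image, peak=(128, 128 + 256 // 8))
local_phase = np.angle(fringes)  # spatial variation of this phase encodes displacement
```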
2504.01169 | Alberto D\'iaz-\'Alvarez | V\'ictor Ramos-Osuna and Alberto D\'iaz-\'Alvarez and Ra\'ul
Lara-Cabrera | Efficient n-body simulations using physics informed graph neural
networks | 10 pages, 6 figures, 3 tables, accepted in conference MAEB 2025 (more
info at
https://www.uik.eus/es/curso/xvi-congreso-espanol-metaheuristicas-algoritmos-evolutivos-bioinspirados) | null | null | null | cs.LG physics.comp-ph | http://creativecommons.org/licenses/by-sa/4.0/ | This paper presents a novel approach for accelerating n-body simulations by
integrating a physics-informed graph neural network (GNN) with traditional
numerical methods. Our method implements a leapfrog-based simulation engine to
generate datasets from diverse astrophysical scenarios which are then
transformed into graph representations. A custom-designed GNN is trained to
predict particle accelerations with high precision. Experiments, conducted on
60 training and 6 testing simulations spanning from 3 to 500 bodies over 1000
time steps, demonstrate that the proposed model achieves extremely low
prediction errors (loss values) while maintaining robust long-term stability,
with accumulated errors in position, velocity, and acceleration remaining
insignificant. Furthermore, our method yields a modest speedup of approximately
17% over conventional simulation techniques. These results indicate that the
integration of deep learning with traditional physical simulation methods
offers a promising pathway to significantly enhance computational efficiency
without compromising accuracy.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 20:23:34 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ramos-Osuna",
"Víctor",
""
],
[
"Díaz-Álvarez",
"Alberto",
""
],
[
"Lara-Cabrera",
"Raúl",
""
]
] | TITLE: Efficient n-body simulations using physics informed graph neural
networks
ABSTRACT: This paper presents a novel approach for accelerating n-body simulations by
integrating a physics-informed graph neural network (GNN) with traditional
numerical methods. Our method implements a leapfrog-based simulation engine to
generate datasets from diverse astrophysical scenarios which are then
transformed into graph representations. A custom-designed GNN is trained to
predict particle accelerations with high precision. Experiments, conducted on
60 training and 6 testing simulations spanning from 3 to 500 bodies over 1000
time steps, demonstrate that the proposed model achieves extremely low
prediction errors (loss values) while maintaining robust long-term stability,
with accumulated errors in position, velocity, and acceleration remaining
insignificant. Furthermore, our method yields a modest speedup of approximately
17% over conventional simulation techniques. These results indicate that the
integration of deep learning with traditional physical simulation methods
offers a promising pathway to significantly enhance computational efficiency
without compromising accuracy.
|
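To make the data-generation step of the preceding record concrete, here is a minimal kick-drift-kick leapfrog step for a softened gravitational N-body system; the returned accelerations are the kind of per-particle target an acceleration-predicting GNN could be trained on. The function names, softening length, and toy initial conditions are assumptions, not the paper's code.

```python
# Minimal sketch of a leapfrog (kick-drift-kick) N-body step with Plummer softening.
import numpy as np

def accelerations(pos: np.ndarray, mass: np.ndarray, G: float = 1.0, eps: float = 1e-3) -> np.ndarray:
    """Pairwise gravitational accelerations a_i = G * sum_j m_j (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^{3/2}."""
    diff = pos[None, :, :] - pos[:, None, :]      # element [i, j] = r_j - r_i, shape (N, N, 3)
    dist2 = np.sum(diff**2, axis=-1) + eps**2
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                  # remove self-interaction
    return G * np.sum(diff * (mass[None, :, None] * inv_d3[:, :, None]), axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick update; returns new positions, velocities, and accelerations."""
    acc = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * acc
    pos_new = pos + dt * vel_half
    acc_new = accelerations(pos_new, mass)
    vel_new = vel_half + 0.5 * dt * acc_new
    return pos_new, vel_new, acc_new

rng = np.random.default_rng(0)
pos, vel = rng.standard_normal((16, 3)), np.zeros((16, 3))
pos, vel, acc = leapfrog_step(pos, vel, np.ones(16), dt=1e-3)  # `acc` could serve as a GNN training target
```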
2504.01170 | Huan Ning | Huan Ning, Zhenlong Li, Manzhu Yu, Shiyan Zhang, Shan Qiao | Estimating Hourly Neighborhood Population Using Mobile Phone Data in the
United States | null | null | null | null | cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Traditional population estimation techniques often fail to capture the
dynamic fluctuations inherent in urban and rural population movements.
Recognizing the need for a high spatiotemporal dynamic population dataset, we
propose a method using smartphone-based human mobility data to reconstruct the
hourly population for each neighborhood across the US. We quantify population
fluctuations on an hourly, diurnal, daily, and seasonal basis, and compare
these with static population data to highlight the limitations of traditional
models in capturing temporal dynamics. This study produces one of the first hourly
population products at a large geographic extent (US), contributing to various
studies that involve dynamic populations with high spatiotemporal resolution,
such as air pollution exposure analysis and emergency response.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 20:25:32 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ning",
"Huan",
""
],
[
"Li",
"Zhenlong",
""
],
[
"Yu",
"Manzhu",
""
],
[
"Zhang",
"Shiyan",
""
],
[
"Qiao",
"Shan",
""
]
] | TITLE: Estimating Hourly Neighborhood Population Using Mobile Phone Data in the
United States
ABSTRACT: Traditional population estimation techniques often fail to capture the
dynamic fluctuations inherent in urban and rural population movements.
Recognizing the need for a high spatiotemporal dynamic population dataset, we
propose a method using smartphone-based human mobility data to reconstruct the
hourly population for each neighborhood across the US. We quantify population
fluctuations on an hourly, diurnal, daily, and seasonal basis, and compare
these with static population data to highlight the limitations of traditional
models in capturing temporal dynamics. This study produces one of the first hourly
population products at a large geographic extent (US), contributing to various
studies that involve dynamic populations with high spatiotemporal resolution,
such as air pollution exposure analysis and emergency response.
|
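For the population record above, the following pandas sketch shows one plausible, heavily simplified way to turn raw mobility pings into hourly neighborhood estimates. The table schema (device_id, neighborhood, timestamp), the census series, and the scaling rule are all assumptions; the paper's actual reconstruction method is not specified here.

```python
# Minimal sketch: hourly unique-device counts per neighborhood, crudely scaled to
# population using a static census total per neighborhood.
import pandas as pd

def hourly_population(pings: pd.DataFrame, census: pd.Series) -> pd.DataFrame:
    """pings: columns device_id, neighborhood, timestamp; census: population indexed by neighborhood."""
    pings = pings.assign(hour=pings["timestamp"].dt.floor("h"))
    devices = (
        pings.groupby(["neighborhood", "hour"])["device_id"]
        .nunique()
        .rename("devices")
        .reset_index()
    )
    # Scale observed device counts by each neighborhood's census total relative to its
    # average observed device count (a rough correction for panel sampling rate).
    avg_devices = devices.groupby("neighborhood")["devices"].transform("mean")
    scale = census.reindex(devices["neighborhood"]).to_numpy() / avg_devices
    devices["population_est"] = devices["devices"] * scale
    return devices
```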
2504.01190 | Jingwen Zhu | Jingwen Zhu and Yixu Chen and Hai Wei and Sriram Sethuraman and
Yongjun Wu | Video Quality Assessment for Resolution Cross-Over in Live Sports | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In adaptive bitrate streaming, resolution cross-over refers to the point on
the convex hull where the encoding resolution should switch to achieve better
quality. Accurate cross-over prediction is crucial for streaming providers to
optimize resolution at given bandwidths. Most existing works rely on objective
Video Quality Metrics (VQM), particularly VMAF, to determine the resolution
cross-over. However, these metrics have limitations in accurately predicting
resolution cross-overs. Furthermore, widely used VQMs are often trained on
subjective datasets collected using the Absolute Category Rating (ACR)
methodology, which we demonstrate introduces significant uncertainty and
errors in resolution cross-over predictions. To address these problems, we
first investigate different subjective methodologies and demonstrate that
Pairwise Comparison (PC) achieves better cross-over accuracy than ACR. We then
propose a novel metric, Resolution Cross-over Quality Loss (RCQL), to measure
the quality loss caused by resolution cross-over errors. Furthermore, we
collected a new subjective dataset (LSCO) focusing on live streaming scenarios
and evaluated widely used VQMs by benchmarking their resolution cross-over
accuracy.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 21:12:02 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhu",
"Jingwen",
""
],
[
"Chen",
"Yixu",
""
],
[
"Wei",
"Hai",
""
],
[
"Sethuraman",
"Sriram",
""
],
[
"Wu",
"Yongjun",
""
]
] | TITLE: Video Quality Assessment for Resolution Cross-Over in Live Sports
ABSTRACT: In adaptive bitrate streaming, resolution cross-over refers to the point on
the convex hull where the encoding resolution should switch to achieve better
quality. Accurate cross-over prediction is crucial for streaming providers to
optimize resolution at given bandwidths. Most existing works rely on objective
Video Quality Metrics (VQM), particularly VMAF, to determine the resolution
cross-over. However, these metrics have limitations in accurately predicting
resolution cross-overs. Furthermore, widely used VQMs are often trained on
subjective datasets collected using the Absolute Category Rating (ACR)
methodology, which we demonstrate introduces significant uncertainty and
errors in resolution cross-over predictions. To address these problems, we
first investigate different subjective methodologies and demonstrate that
Pairwise Comparison (PC) achieves better cross-over accuracy than ACR. We then
propose a novel metric, Resolution Cross-over Quality Loss (RCQL), to measure
the quality loss caused by resolution cross-over errors. Furthermore, we
collected a new subjective dataset (LSCO) focusing on live streaming scenarios
and evaluated widely used VQMs by benchmarking their resolution cross-over
accuracy.
|
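Finally, for the resolution cross-over record above, this sketch illustrates the underlying selection problem: given per-resolution (bitrate, quality) points, pick the best resolution at each bandwidth and report where the choice flips. The ladder values and function names are invented for illustration and do not reflect the paper's data, subjective scores, or proposed RCQL metric.

```python
# Minimal sketch: locate the resolution cross-over on a hypothetical bitrate-quality ladder.
# Hypothetical per-resolution rate-quality points (bitrate in kbps, quality score).
ladder = {
    "720p":  [(1000, 78.0), (2000, 86.0), (4000, 90.0)],
    "1080p": [(1000, 70.0), (2000, 84.0), (4000, 93.0)],
}

def best_resolution(bandwidth_kbps: float) -> str | None:
    """Resolution whose best affordable encode has the highest quality at this bandwidth."""
    candidates = {}
    for res, points in ladder.items():
        affordable = [quality for rate, quality in points if rate <= bandwidth_kbps]
        if affordable:
            candidates[res] = max(affordable)
    return max(candidates, key=candidates.get) if candidates else None

# Scan bandwidths and report where the preferred resolution switches (the cross-over).
previous = None
for bw in range(500, 5001, 100):
    choice = best_resolution(bw)
    if previous is not None and choice != previous:
        print(f"cross-over near {bw} kbps: {previous} -> {choice}")
    previous = choice
```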