Column types: id string (9–16 chars); submitter string (3–64 chars, nullable); authors string (5–6.63k chars); title string (7–245 chars); comments string (1–482 chars, nullable); journal-ref string (4–382 chars, nullable); doi string (9–151 chars, nullable); report-no string (984 classes); categories string (5–108 chars); license string (9 classes); abstract string (83–3.41k chars); versions list (length 1–20); update_date timestamp[s] (2007-05-23 to 2025-04-11); authors_parsed list (length 1–427); prompt string (166–3.49k chars); label string (2 classes); prob float64 (0.5–0.98).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.03848 | Zhu Shizhan | Daniel C. Moura, Shizhan Zhu, Orly Zvitia | Nexar Dashcam Collision Prediction Dataset and Challenge | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper presents the Nexar Dashcam Collision Prediction Dataset and
Challenge, designed to support research in traffic event analysis, collision
prediction, and autonomous vehicle safety. The dataset consists of 1,500
annotated video clips, each approximately 40 seconds long, capturing a diverse
range of real-world traffic scenarios. Videos are labeled with event type
(collision/near-collision vs. normal driving), environmental conditions
(lighting conditions and weather), and scene type (urban, rural, highway,
etc.). For collision and near-collision cases, additional temporal labels are
provided, including the precise moment of the event and the alert time, marking
when the collision first becomes predictable.
To advance research on accident prediction, we introduce the Nexar Dashcam
Collision Prediction Challenge, a public competition on top of this dataset.
Participants are tasked with developing machine learning models that predict
the likelihood of an imminent collision, given an input video. Model
performance is evaluated using the average precision (AP) computed across
multiple intervals before the accident (i.e. 500 ms, 1000 ms, and 1500 ms prior
to the event), emphasizing the importance of early and reliable predictions.
The dataset is released under an open license with restrictions on unethical
use, ensuring responsible research and innovation.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 19:20:28 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Moura",
"Daniel C.",
""
],
[
"Zhu",
"Shizhan",
""
],
[
"Zvitia",
"Orly",
""
]
]
| TITLE: Nexar Dashcam Collision Prediction Dataset and Challenge
ABSTRACT: This paper presents the Nexar Dashcam Collision Prediction Dataset and
Challenge, designed to support research in traffic event analysis, collision
prediction, and autonomous vehicle safety. The dataset consists of 1,500
annotated video clips, each approximately 40 seconds long, capturing a diverse
range of real-world traffic scenarios. Videos are labeled with event type
(collision/near-collision vs. normal driving), environmental conditions
(lighting conditions and weather), and scene type (urban, rural, highway,
etc.). For collision and near-collision cases, additional temporal labels are
provided, including the precise moment of the event and the alert time, marking
when the collision first becomes predictable.
To advance research on accident prediction, we introduce the Nexar Dashcam
Collision Prediction Challenge, a public competition on top of this dataset.
Participants are tasked with developing machine learning models that predict
the likelihood of an imminent collision, given an input video. Model
performance is evaluated using the average precision (AP) computed across
multiple intervals before the accident (i.e. 500 ms, 1000 ms, and 1500 ms prior
to the event), emphasizing the importance of early and reliable predictions.
The dataset is released under an open license with restrictions on unethical
use, ensuring responsible research and innovation.
| new_dataset | 0.958615 |
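The evaluation protocol in the Nexar row above (AP at 500 ms, 1000 ms, and 1500 ms before the event) can be made concrete with a small sketch. The filtering rule, field names, and toy data below are assumptions for illustration, not the official challenge code.

```python
# Hypothetical sketch of the challenge metric: average precision (AP) at
# several time-to-event cutoffs, then averaged. Not the official evaluation.
import numpy as np
from sklearn.metrics import average_precision_score

def ap_before_event(scores, is_positive, time_to_event_ms, cutoff_ms):
    # For positive clips, count only predictions made at least `cutoff_ms`
    # before the annotated event; negatives are always kept.
    keep = (~is_positive) | (time_to_event_ms >= cutoff_ms)
    return average_precision_score(is_positive[keep], scores[keep])

# Toy data: one collision-likelihood score per clip.
scores = np.array([0.9, 0.2, 0.7, 0.4])
is_positive = np.array([True, False, True, False])
time_to_event_ms = np.array([2000.0, np.inf, 600.0, np.inf])

aps = [ap_before_event(scores, is_positive, time_to_event_ms, c)
       for c in (500, 1000, 1500)]
print(dict(zip((500, 1000, 1500), aps)), "mean AP:", float(np.mean(aps)))
```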
2503.03882 | Jiangtong Zhu | Jiangtong Zhu, Zhao Yang, Yinan Shi, Jianwu Fang, Jianru Xue | IC-Mapper: Instance-Centric Spatio-Temporal Modeling for Online
Vectorized Map Construction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Online vector map construction based on visual data can bypass the processes
of data collection, post-processing, and manual annotation required by
traditional map construction, which significantly enhances map-building
efficiency. However, existing work treats the online mapping task as a local
range perception task, overlooking the spatial scalability required for map
construction. We propose IC-Mapper, an instance-centric online mapping
framework, which comprises two primary components: 1) Instance-centric temporal
association module: For the detection queries of adjacent frames, we measure
them in both feature and geometric dimensions to obtain the matching
correspondence between instances across frames. 2) Instance-centric spatial
fusion module: We perform point sampling on the historical global map from a
spatial dimension and integrate it with the detection results of instances
corresponding to the current frame to achieve real-time expansion and update of
the map. Based on the nuScenes dataset, we evaluate our approach on detection,
tracking, and global mapping metrics. Experimental results demonstrate the
superiority of IC-Mapper against other state-of-the-art methods. Code will be
released on https://github.com/Brickzhuantou/IC-Mapper.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 20:28:34 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhu",
"Jiangtong",
""
],
[
"Yang",
"Zhao",
""
],
[
"Shi",
"Yinan",
""
],
[
"Fang",
"Jianwu",
""
],
[
"Xue",
"Jianru",
""
]
]
| TITLE: IC-Mapper: Instance-Centric Spatio-Temporal Modeling for Online
Vectorized Map Construction
ABSTRACT: Online vector map construction based on visual data can bypass the processes
of data collection, post-processing, and manual annotation required by
traditional map construction, which significantly enhances map-building
efficiency. However, existing work treats the online mapping task as a local
range perception task, overlooking the spatial scalability required for map
construction. We propose IC-Mapper, an instance-centric online mapping
framework, which comprises two primary components: 1) Instance-centric temporal
association module: For the detection queries of adjacent frames, we measure
them in both feature and geometric dimensions to obtain the matching
correspondence between instances across frames. 2) Instance-centric spatial
fusion module: We perform point sampling on the historical global map from a
spatial dimension and integrate it with the detection results of instances
corresponding to the current frame to achieve real-time expansion and update of
the map. Based on the nuScenes dataset, we evaluate our approach on detection,
tracking, and global mapping metrics. Experimental results demonstrate the
superiority of IC-Mapper against other state-of-the-art methods. Code will be
released on https://github.com/Brickzhuantou/IC-Mapper.
| no_new_dataset | 0.952042 |
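As a reading aid for the instance-centric temporal association module described in the IC-Mapper row above, here is a minimal sketch of matching instances across adjacent frames by combining feature and geometric costs; the cost weighting and the Hungarian solver are assumptions, not the paper's exact formulation.

```python
# Minimal instance association across adjacent frames: combine a cosine
# feature cost with a centroid-distance cost and solve the assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_feats, prev_centers, cur_feats, cur_centers, w_geo=0.5):
    a = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    b = cur_feats / np.linalg.norm(cur_feats, axis=1, keepdims=True)
    feat_cost = 1.0 - a @ b.T                     # cosine distance
    geo_cost = np.linalg.norm(                    # pairwise centroid distance
        prev_centers[:, None, :] - cur_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(feat_cost + w_geo * geo_cost)
    return list(zip(rows.tolist(), cols.tolist()))
```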
2503.03885 | Edoardo Zorzi | Edoardo Zorzi, Alberto Castellini, Leonidas Bakopoulos, Georgios
Chalkiadakis, Alessandro Farinelli | Seldonian Reinforcement Learning for Ad Hoc Teamwork | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most offline RL algorithms return optimal policies but do not provide
statistical guarantees on undesirable behaviors. This could generate
reliability issues in safety-critical applications, such as in some multiagent
domains where agents, and possibly humans, need to interact to reach their
goals without harming each other. In this work, we propose a novel offline RL
approach, inspired by Seldonian optimization, which returns policies with good
performance and statistically guaranteed properties with respect to predefined
undesirable behaviors. In particular, our focus is on Ad Hoc Teamwork settings,
where agents must collaborate with new teammates without prior coordination.
Our method requires only a pre-collected dataset, a set of candidate policies
for our agent, and a specification about the possible policies followed by the
other players -- it does not require further interactions, training, or
assumptions on the type and architecture of the policies. We test our algorithm
in Ad Hoc Teamwork problems and show that it consistently finds reliable
policies while improving sample efficiency with respect to standard ML
baselines.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 20:37:02 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zorzi",
"Edoardo",
""
],
[
"Castellini",
"Alberto",
""
],
[
"Bakopoulos",
"Leonidas",
""
],
[
"Chalkiadakis",
"Georgios",
""
],
[
"Farinelli",
"Alessandro",
""
]
]
| TITLE: Seldonian Reinforcement Learning for Ad Hoc Teamwork
ABSTRACT: Most offline RL algorithms return optimal policies but do not provide
statistical guarantees on undesirable behaviors. This could generate
reliability issues in safety-critical applications, such as in some multiagent
domains where agents, and possibly humans, need to interact to reach their
goals without harming each other. In this work, we propose a novel offline RL
approach, inspired by Seldonian optimization, which returns policies with good
performance and statistically guaranteed properties with respect to predefined
undesirable behaviors. In particular, our focus is on Ad Hoc Teamwork settings,
where agents must collaborate with new teammates without prior coordination.
Our method requires only a pre-collected dataset, a set of candidate policies
for our agent, and a specification about the possible policies followed by the
other players -- it does not require further interactions, training, or
assumptions on the type and architecture of the policies. We test our algorithm
in Ad Hoc Teamwork problems and show that it consistently finds reliable
policies while improving sample efficiency with respect to standard ML
baselines.
| no_new_dataset | 0.942665 |
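The Seldonian idea in the abstract above, returning a policy only when an undesirable-behavior rate is statistically bounded, can be sketched as below; the Hoeffding bound and the best-first loop are illustrative assumptions, not the paper's exact procedure.

```python
# Schematic Seldonian safety test: accept a candidate policy only if a
# high-confidence upper bound on its rate of undesirable behavior is low.
import numpy as np

def hoeffding_upper_bound(samples, delta):
    # One-sided (1 - delta) upper confidence bound on the mean of
    # [0, 1]-valued samples.
    return samples.mean() + np.sqrt(np.log(1.0 / delta) / (2.0 * len(samples)))

def seldonian_select(candidates, eval_undesirable, delta=0.05, threshold=0.1):
    # candidates: policies ordered best-first by estimated return;
    # eval_undesirable(pi): array of 0/1 indicators, one per held-out episode.
    for pi in candidates:
        if hoeffding_upper_bound(eval_undesirable(pi), delta) <= threshold:
            return pi
    return None  # the Seldonian "No Solution Found" outcome
```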
2503.03921 | Arthur Zhang | Arthur Zhang, Harshit Sikchi, Amy Zhang, Joydeep Biswas | CREStE: Scalable Mapless Navigation with Internet Scale Priors and
Counterfactual Guidance | 19 pages, 10 figures, 5 tables | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | We address the long-horizon mapless navigation problem: enabling robots to
traverse novel environments without relying on high-definition maps or precise
waypoints that specify exactly where to navigate. Achieving this requires
overcoming two major challenges -- learning robust, generalizable perceptual
representations of the environment without pre-enumerating all possible
navigation factors and forms of perceptual aliasing, and utilizing these learned
representations to plan human-aligned navigation paths. Existing solutions
struggle to generalize due to their reliance on hand-curated object lists that
overlook unforeseen factors, end-to-end learning of navigation features from
scarce large-scale robot datasets, and handcrafted reward functions that scale
poorly to diverse scenarios. To overcome these limitations, we propose CREStE,
the first method that learns representations and rewards for addressing the
full mapless navigation problem without relying on large-scale robot datasets
or manually curated features. CREStE leverages visual foundation models trained
on internet-scale data to learn continuous bird's-eye-view representations
capturing elevation, semantics, and instance-level features. To utilize learned
representations for planning, we propose a counterfactual-based loss and active
learning procedure that focuses on the most salient perceptual cues by querying
humans for counterfactual trajectory annotations in challenging scenes. We
evaluate CREStE in kilometer-scale navigation tasks across six distinct urban
environments. CREStE significantly outperforms all state-of-the-art approaches
with 70% fewer human interventions per mission, including a 2-kilometer mission
in an unseen environment with just 1 intervention; showcasing its robustness
and effectiveness for long-horizon mapless navigation. For videos and
additional materials, see https://amrl.cs.utexas.edu/creste .
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 21:42:46 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhang",
"Arthur",
""
],
[
"Sikchi",
"Harshit",
""
],
[
"Zhang",
"Amy",
""
],
[
"Biswas",
"Joydeep",
""
]
]
| TITLE: CREStE: Scalable Mapless Navigation with Internet Scale Priors and
Counterfactual Guidance
ABSTRACT: We address the long-horizon mapless navigation problem: enabling robots to
traverse novel environments without relying on high-definition maps or precise
waypoints that specify exactly where to navigate. Achieving this requires
overcoming two major challenges -- learning robust, generalizable perceptual
representations of the environment without pre-enumerating all possible
navigation factors and forms of perceptual aliasing, and utilizing these learned
representations to plan human-aligned navigation paths. Existing solutions
struggle to generalize due to their reliance on hand-curated object lists that
overlook unforeseen factors, end-to-end learning of navigation features from
scarce large-scale robot datasets, and handcrafted reward functions that scale
poorly to diverse scenarios. To overcome these limitations, we propose CREStE,
the first method that learns representations and rewards for addressing the
full mapless navigation problem without relying on large-scale robot datasets
or manually curated features. CREStE leverages visual foundation models trained
on internet-scale data to learn continuous bird's-eye-view representations
capturing elevation, semantics, and instance-level features. To utilize learned
representations for planning, we propose a counterfactual-based loss and active
learning procedure that focuses on the most salient perceptual cues by querying
humans for counterfactual trajectory annotations in challenging scenes. We
evaluate CREStE in kilometer-scale navigation tasks across six distinct urban
environments. CREStE significantly outperforms all state-of-the-art approaches
with 70% fewer human interventions per mission, including a 2-kilometer mission
in an unseen environment with just 1 intervention; showcasing its robustness
and effectiveness for long-horizon mapless navigation. For videos and
additional materials, see https://amrl.cs.utexas.edu/creste .
| no_new_dataset | 0.951369 |
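One plausible reading of the counterfactual-based loss mentioned in the CREStE row above is a margin ranking objective: the learned reward should score the demonstrated trajectory above each human-annotated counterfactual. The reward model interface, inputs, and margin below are assumptions, not CREStE's exact objective.

```python
# Illustrative counterfactual ranking loss: prefer the demonstrated
# trajectory's reward over each counterfactual's by a margin.
import torch
import torch.nn.functional as F

def counterfactual_ranking_loss(reward_model, bev_feats, demo, counterfactuals,
                                margin=1.0):
    r_demo = reward_model(bev_feats, demo)  # scalar reward per sample
    losses = [F.relu(margin - (r_demo - reward_model(bev_feats, cf)))
              for cf in counterfactuals]
    return torch.stack(losses).mean()
```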
2503.03932 | Sabur Butt | Sabur Butt, Hector G. Ceballos and Diana P. Madera | Tec-Habilidad: Skill Classification for Bridging Education and
Employment | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Job application and assessment processes have evolved significantly in recent
years, largely due to advancements in technology and changes in the way
companies operate. Skill extraction and classification remain an important
component of the modern hiring process as it provides a more objective way to
evaluate candidates and automatically align their skills with the job
requirements. However, to effectively evaluate the skills, the skill extraction
tools must recognize varied mentions of skills on resumes, including direct
mentions, implications, synonyms, acronyms, phrases, and proficiency levels,
and differentiate between hard and soft skills. While tools like LLMs (Large
Language Models) help extract and categorize skills from job applications, there's
a lack of comprehensive datasets for evaluating the effectiveness of these
models in accurately identifying and classifying skills in Spanish-language job
applications. This gap hinders our ability to assess the reliability and
precision of the models, which is crucial for ensuring that the selected
candidates truly possess the required skills for the job. In this paper, we
develop a Spanish language dataset for skill extraction and classification,
provide an annotation methodology to distinguish between knowledge, skills, and
abilities, and provide deep learning baselines to advance robust solutions for
skill classification.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 22:05:42 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Butt",
"Sabur",
""
],
[
"Ceballos",
"Hector G.",
""
],
[
"Madera",
"Diana P.",
""
]
]
| TITLE: Tec-Habilidad: Skill Classification for Bridging Education and
Employment
ABSTRACT: Job application and assessment processes have evolved significantly in recent
years, largely due to advancements in technology and changes in the way
companies operate. Skill extraction and classification remain an important
component of the modern hiring process as it provides a more objective way to
evaluate candidates and automatically align their skills with the job
requirements. However, to effectively evaluate the skills, the skill extraction
tools must recognize varied mentions of skills on resumes, including direct
mentions, implications, synonyms, acronyms, phrases, and proficiency levels,
and differentiate between hard and soft skills. While tools like LLMs (Large
Language Models) help extract and categorize skills from job applications, there's
a lack of comprehensive datasets for evaluating the effectiveness of these
models in accurately identifying and classifying skills in Spanish-language job
applications. This gap hinders our ability to assess the reliability and
precision of the models, which is crucial for ensuring that the selected
candidates truly possess the required skills for the job. In this paper, we
develop a Spanish language dataset for skill extraction and classification,
provide an annotation methodology to distinguish between knowledge, skills, and
abilities, and provide deep learning baselines to advance robust solutions for
skill classification.
| new_dataset | 0.95846 |
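For the skill classification task described in the Tec-Habilidad row above, a tiny baseline might look like the following; the toy Spanish examples and the TF-IDF pipeline are illustrative stand-ins for the paper's deep learning baselines and annotation scheme.

```python
# Toy baseline: classify Spanish skill mentions as knowledge / skill / ability.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = ["conocimiento de SQL", "liderazgo de equipos", "capacidad de análisis"]
y = ["knowledge", "skill", "ability"]  # labels follow a KSA-style scheme

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X, y)
print(clf.predict(["manejo de Python"]))
```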
2503.03942 | Devanish Kamtam | Devanish N. Kamtam, Joseph B. Shrager, Satya Deepya Malla, Xiaohan
Wang, Nicole Lin, Juan J. Cardona, Serena Yeung-Levy, Clarence Hu | SurgiSAM2: Fine-tuning a foundational model for surgical video anatomy
segmentation and detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Background: We evaluate SAM 2 for surgical scene understanding by examining
its semantic segmentation capabilities for organs/tissues both in zero-shot
scenarios and after fine-tuning. Methods: We utilized five public datasets to
evaluate and fine-tune SAM 2 for segmenting anatomical tissues in surgical
videos/images. Fine-tuning was applied to the image encoder and mask decoder.
We limited training subsets from 50 to 400 samples per class to better model
real-world data-acquisition constraints. The impact of dataset size on
fine-tuning performance was evaluated with weighted mean Dice coefficient
(WMDC), and the results were also compared against previously reported
state-of-the-art (SOTA) results. Results: SurgiSAM 2, a fine-tuned SAM 2 model,
demonstrated significant improvements in segmentation performance, achieving a
17.9% relative WMDC gain compared to the baseline SAM 2. Increasing prompt
points from 1 to 10 and training data scale from 50/class to 400/class enhanced
performance; the best WMDC of 0.92 on the validation subset was achieved with
10 prompt points and 400 samples per class. On the test subset, this model
outperformed prior SOTA methods in 24/30 (80%) of the classes with a WMDC of
0.91 using 10-point prompts. Notably, SurgiSAM 2 generalized effectively to
unseen organ classes, achieving SOTA on 7/9 (77.8%) of them. Conclusion: SAM 2
achieves remarkable zero-shot and fine-tuned performance for surgical scene
segmentation, surpassing prior SOTA models across several organ classes of
diverse datasets. This suggests immense potential for enabling
automated/semi-automated annotation pipelines, thereby decreasing the burden of
annotations and facilitating several surgical applications.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 22:18:32 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Kamtam",
"Devanish N.",
""
],
[
"Shrager",
"Joseph B.",
""
],
[
"Malla",
"Satya Deepya",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Lin",
"Nicole",
""
],
[
"Cardona",
"Juan J.",
""
],
[
"Yeung-Levy",
"Serena",
""
],
[
"Hu",
"Clarence",
""
]
]
| TITLE: SurgiSAM2: Fine-tuning a foundational model for surgical video anatomy
segmentation and detection
ABSTRACT: Background: We evaluate SAM 2 for surgical scene understanding by examining
its semantic segmentation capabilities for organs/tissues both in zero-shot
scenarios and after fine-tuning. Methods: We utilized five public datasets to
evaluate and fine-tune SAM 2 for segmenting anatomical tissues in surgical
videos/images. Fine-tuning was applied to the image encoder and mask decoder.
We limited training subsets from 50 to 400 samples per class to better model
real-world data-acquisition constraints. The impact of dataset size on
fine-tuning performance was evaluated with weighted mean Dice coefficient
(WMDC), and the results were also compared against previously reported
state-of-the-art (SOTA) results. Results: SurgiSAM 2, a fine-tuned SAM 2 model,
demonstrated significant improvements in segmentation performance, achieving a
17.9% relative WMDC gain compared to the baseline SAM 2. Increasing prompt
points from 1 to 10 and training data scale from 50/class to 400/class enhanced
performance; the best WMDC of 0.92 on the validation subset was achieved with
10 prompt points and 400 samples per class. On the test subset, this model
outperformed prior SOTA methods in 24/30 (80%) of the classes with a WMDC of
0.91 using 10-point prompts. Notably, SurgiSAM 2 generalized effectively to
unseen organ classes, achieving SOTA on 7/9 (77.8%) of them. Conclusion: SAM 2
achieves remarkable zero-shot and fine-tuned performance for surgical scene
segmentation, surpassing prior SOTA models across several organ classes of
diverse datasets. This suggests immense potential for enabling
automated/semi-automated annotation pipelines, thereby decreasing the burden of
annotations and facilitating several surgical applications.
| no_new_dataset | 0.956675 |
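The weighted mean Dice coefficient (WMDC) used in the SurgiSAM2 row above can be sketched as follows; the per-class pixel-frequency weighting is an assumption, since the abstract does not spell out the weights.

```python
# Sketch of a weighted mean Dice coefficient over anatomical classes.
import numpy as np

def dice(pred, gt, eps=1e-7):
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def weighted_mean_dice(pred_labels, gt_labels, classes):
    # pred_labels, gt_labels: integer label maps of identical shape.
    scores, weights = [], []
    for c in classes:
        gt_c = gt_labels == c
        if gt_c.sum() == 0:
            continue  # class absent from ground truth: skip
        scores.append(dice(pred_labels == c, gt_c))
        weights.append(gt_c.sum())  # assumed weight: class pixel frequency
    return float(np.average(scores, weights=weights))
```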
2503.03947 | Aurelio Noca | Aurelio Noca, Xianmei Lei, Jonathan Becktor, Jeffrey Edlund, Anna
Sabel, Patrick Spieler, Curtis Padgett, Alexandre Alahi, Deegan Atha | COARSE: Collaborative Pseudo-Labeling with Coarse Real Labels for
Off-Road Semantic Segmentation | preprint, 8 pages | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Autonomous off-road navigation faces challenges due to diverse, unstructured
environments, requiring robust perception with both geometric and semantic
understanding. However, scarce densely labeled semantic data limits
generalization across domains. Simulated data helps, but introduces domain
adaptation issues. We propose COARSE, a semi-supervised domain adaptation
framework for off-road semantic segmentation, leveraging sparse, coarse
in-domain labels and densely labeled out-of-domain data. Using pretrained
vision transformers, we bridge domain gaps with complementary pixel-level and
patch-level decoders, enhanced by a collaborative pseudo-labeling strategy on
unlabeled data. Evaluations on RUGD and Rellis-3D datasets show significant
improvements of 9.7\% and 8.4\% respectively, versus only using coarse data.
Tests on real-world off-road vehicle data in a multi-biome setting further
demonstrate COARSE's applicability.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 22:25:54 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Noca",
"Aurelio",
""
],
[
"Lei",
"Xianmei",
""
],
[
"Becktor",
"Jonathan",
""
],
[
"Edlund",
"Jeffrey",
""
],
[
"Sabel",
"Anna",
""
],
[
"Spieler",
"Patrick",
""
],
[
"Padgett",
"Curtis",
""
],
[
"Alahi",
"Alexandre",
""
],
[
"Atha",
"Deegan",
""
]
]
| TITLE: COARSE: Collaborative Pseudo-Labeling with Coarse Real Labels for
Off-Road Semantic Segmentation
ABSTRACT: Autonomous off-road navigation faces challenges due to diverse, unstructured
environments, requiring robust perception with both geometric and semantic
understanding. However, scarce densely labeled semantic data limits
generalization across domains. Simulated data helps, but introduces domain
adaptation issues. We propose COARSE, a semi-supervised domain adaptation
framework for off-road semantic segmentation, leveraging sparse, coarse
in-domain labels and densely labeled out-of-domain data. Using pretrained
vision transformers, we bridge domain gaps with complementary pixel-level and
patch-level decoders, enhanced by a collaborative pseudo-labeling strategy on
unlabeled data. Evaluations on RUGD and Rellis-3D datasets show significant
improvements of 9.7\% and 8.4\% respectively, versus only using coarse data.
Tests on real-world off-road vehicle data in a multi-biome setting further
demonstrate COARSE's applicability.
| no_new_dataset | 0.950915 |
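A minimal version of the collaborative pseudo-labeling idea in the COARSE row above: keep a pixel's pseudo-label only where the pixel-level and patch-level decoders agree confidently. The threshold and agreement rule are assumptions; the paper's actual strategy may differ.

```python
# Agreement-based pseudo-labels from two segmentation decoders.
import torch

def collaborative_pseudo_labels(logits_pixel, logits_patch, tau=0.9, ignore=255):
    # logits_*: (B, C, H, W) class scores from the two decoders.
    conf_pix, lab_pix = logits_pixel.softmax(dim=1).max(dim=1)
    conf_pat, lab_pat = logits_patch.softmax(dim=1).max(dim=1)
    keep = (lab_pix == lab_pat) & (conf_pix > tau) & (conf_pat > tau)
    # Pixels without confident agreement get an ignore index for the loss.
    return torch.where(keep, lab_pix, torch.full_like(lab_pix, ignore))
```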
2503.03965 | Chaitanya K. Joshi | Chaitanya K. Joshi, Xiang Fu, Yi-Lun Liao, Vahe Gharakhanyan, Benjamin
Kurt Miller, Anuroop Sriram, Zachary W. Ulissi | All-atom Diffusion Transformers: Unified generative modelling of
molecules and materials | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Diffusion models are the standard toolkit for generative modelling of 3D
atomic systems. However, for different types of atomic systems - such as
molecules and materials - the generative processes are usually highly specific
to the target system despite the underlying physics being the same. We
introduce the All-atom Diffusion Transformer (ADiT), a unified latent diffusion
framework for jointly generating both periodic materials and non-periodic
molecular systems using the same model: (1) An autoencoder maps a unified,
all-atom representation of molecules and materials to a shared latent
embedding space; and (2) A diffusion model is trained to generate new latent
embeddings that the autoencoder can decode to sample new molecules or
materials. Experiments on QM9 and MP20 datasets demonstrate that jointly
trained ADiT generates realistic and valid molecules as well as materials,
exceeding state-of-the-art results from molecule and crystal-specific models.
ADiT uses standard Transformers for both the autoencoder and diffusion model,
resulting in significant speedups during training and inference compared to
equivariant diffusion models. Scaling ADiT up to half a billion parameters
predictably improves performance, representing a step towards broadly
generalizable foundation models for generative chemistry. Open source code:
https://github.com/facebookresearch/all-atom-diffusion-transformer
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 23:35:44 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Joshi",
"Chaitanya K.",
""
],
[
"Fu",
"Xiang",
""
],
[
"Liao",
"Yi-Lun",
""
],
[
"Gharakhanyan",
"Vahe",
""
],
[
"Miller",
"Benjamin Kurt",
""
],
[
"Sriram",
"Anuroop",
""
],
[
"Ulissi",
"Zachary W.",
""
]
]
| TITLE: All-atom Diffusion Transformers: Unified generative modelling of
molecules and materials
ABSTRACT: Diffusion models are the standard toolkit for generative modelling of 3D
atomic systems. However, for different types of atomic systems - such as
molecules and materials - the generative processes are usually highly specific
to the target system despite the underlying physics being the same. We
introduce the All-atom Diffusion Transformer (ADiT), a unified latent diffusion
framework for jointly generating both periodic materials and non-periodic
molecular systems using the same model: (1) An autoencoder maps a unified,
all-atom representation of molecules and materials to a shared latent
embedding space; and (2) A diffusion model is trained to generate new latent
embeddings that the autoencoder can decode to sample new molecules or
materials. Experiments on QM9 and MP20 datasets demonstrate that jointly
trained ADiT generates realistic and valid molecules as well as materials,
exceeding state-of-the-art results from molecule and crystal-specific models.
ADiT uses standard Transformers for both the autoencoder and diffusion model,
resulting in significant speedups during training and inference compared to
equivariant diffusion models. Scaling ADiT up to half a billion parameters
predictably improves performance, representing a step towards broadly
generalizable foundation models for generative chemistry. Open source code:
https://github.com/facebookresearch/all-atom-diffusion-transformer
| no_new_dataset | 0.952086 |
2503.03967 | Soya Park | Soya Park, J.D. Zamfirescu-Pereira, Chinmay Kulkarni | Model Behavior Specification by Leveraging LLM Self-Playing and
Self-Improving | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Training AI models is challenging, particularly when crafting behavior
instructions. Traditional methods rely on machines (supervised learning) or
manual pattern discovery, which results in uninterpretable models or a
significant time sink. While Large Language Models (LLMs) simplify instruction
writing through natural language, articulating intended model behavior still
remains difficult. We introduce Visionary Tuning, a human-in-the-loop
self-playing process followed by automatic self-refinement to improve behavior
specification. Our system helps users clarify desired behavior through
self-playing and generates prompts through self-improving. Our first evaluation
involves a user study conducted on a system implementation of Visionary Tuning
in the context of chatbot behavior. The system plays against itself by
simulating user interactions to identify patterns and create effective prompts
based on those patterns. In a within-subject study (N=12), participants
pinpointed more patterns through self-playing and crafted better prompts.
Surprisingly, users reported varying levels of success in specifying the model
behavior. Follow-up crowd studies (N=60) confirmed that the chatbot adhered to
instructions without sacrificing quality. Our second evaluation is a case study
of a real-world implementation that applies Visionary Tuning to a movie rating
dataset, demonstrating its effectiveness and robustness in modeling a critic's
preferences across the spectrum of low to highly rated movies.
Together, these results suggest how AI improves the design process of
interactive AI systems. Furthermore, they suggest how the benefits of these
tools may be non-obvious to end-users. We reflect on these findings and suggest
future directions.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 23:39:51 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Park",
"Soya",
""
],
[
"Zamfirescu-Pereira",
"J. D.",
""
],
[
"Kulkarni",
"Chinmay",
""
]
]
| TITLE: Model Behavior Specification by Leveraging LLM Self-Playing and
Self-Improving
ABSTRACT: Training AI models is challenging, particularly when crafting behavior
instructions. Traditional methods rely on machines (supervised learning) or
manual pattern discovery, which results in uninterpretable models or a
significant time sink. While Large Language Models (LLMs) simplify instruction
writing through natural language, articulating intended model behavior still
remains difficult. We introduce Visionary Tuning, a human-in-the-loop
self-playing process followed by automatic self-refinement to improve behavior
specification. Our system helps users clarify desired behavior through
self-playing and generates prompts through self-improving. Our first evaluation
involves a user study conducted on a system implementation of Visionary Tuning
in the context of chatbot behavior. The system plays against itself by
simulating user interactions to identify patterns and create effective prompts
based on those patterns. In a within-subject study (N=12), participants
pinpointed more patterns through self-playing and crafted better prompts.
Surprisingly, users reported varying levels of success in specifying the model
behavior. Follow-up crowd studies (N=60) confirmed that the chatbot adhered to
instructions without sacrificing quality. Our second evaluation is a case study
of a real-world implementation that applies Visionary Tuning to a movie rating
dataset, demonstrating its effectiveness and robustness in modeling a critic's
preferences across the spectrum of low to highly rated movies.
Together, these results suggest how AI improves the design process of
interactive AI systems. Furthermore, they suggest how the benefits of these
tools may be non-obvious to end-users. We reflect on these findings and suggest
future directions.
| no_new_dataset | 0.945951 |
2503.03973 | Yixiao Ge Mr. | Yixiao Ge, Arthur Pearce, Pieter van Goor, Robert Mahony | Equivariant Filter Design for Range-only SLAM | 11 pages, 5 figures, accepted for presentation at IEEE International
Conference on Robotics and Automation 2025 | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Range-only Simultaneous Localisation and Mapping (RO-SLAM) is of interest due
to its practical applications in ultra-wideband (UWB) and Bluetooth Low Energy
(BLE) localisation in terrestrial and aerial applications and acoustic beacon
localisation in submarine applications. In this work, we consider a mobile
robot equipped with an inertial measurement unit (IMU) and a range sensor that
measures distances to a collection of fixed landmarks. We derive an equivariant
filter (EqF) for the RO-SLAM problem based on a symmetry Lie group that is
compatible with the range measurements. The proposed filter does not require
bootstrapping or initialisation of landmark positions, and demonstrates
robustness to the no-prior situation. The filter is demonstrated on a
real-world dataset, and it is shown to significantly outperform a
state-of-the-art EKF alternative in terms of both accuracy and robustness.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 23:48:32 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ge",
"Yixiao",
""
],
[
"Pearce",
"Arthur",
""
],
[
"van Goor",
"Pieter",
""
],
[
"Mahony",
"Robert",
""
]
]
| TITLE: Equivariant Filter Design for Range-only SLAM
ABSTRACT: Range-only Simultaneous Localisation and Mapping (RO-SLAM) is of interest due
to its practical applications in ultra-wideband (UWB) and Bluetooth Low Energy
(BLE) localisation in terrestrial and aerial applications and acoustic beacon
localisation in submarine applications. In this work, we consider a mobile
robot equipped with an inertial measurement unit (IMU) and a range sensor that
measures distances to a collection of fixed landmarks. We derive an equivariant
filter (EqF) for the RO-SLAM problem based on a symmetry Lie group that is
compatible with the range measurements. The proposed filter does not require
bootstrapping or initialisation of landmark positions, and demonstrates
robustness to the no-prior situation. The filter is demonstrated on a
real-world dataset, and it is shown to significantly outperform a
state-of-the-art EKF alternative in terms of both accuracy and robustness.
| no_new_dataset | 0.943712 |
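For readers skimming the RO-SLAM setup in the row above, the range measurement underlying the filter can be written compactly; the notation here is our own shorthand, not the paper's.

```latex
% Range-only measurement model: robot position p, fixed landmarks l_i,
% measurement noise eta_i. Symbols are our own shorthand, not the paper's.
\[
  y_i \;=\; \lVert l_i - p \rVert \;+\; \eta_i, \qquad i = 1, \dots, N .
\]
```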
2503.03983 | Zhifeng Kong | Sreyan Ghosh, Zhifeng Kong, Sonal Kumar, S Sakshi, Jaehyeon Kim, Wei
Ping, Rafael Valle, Dinesh Manocha, Bryan Catanzaro | Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding
and Expert Reasoning Abilities | null | null | null | null | cs.SD cs.CL cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Understanding and reasoning over non-speech sounds and music are crucial for
both humans and AI agents to interact effectively with their environments. In
this paper, we introduce Audio Flamingo 2 (AF2), an Audio-Language Model (ALM)
with advanced audio understanding and reasoning capabilities. AF2 leverages (i)
a custom CLAP model, (ii) synthetic Audio QA data for fine-grained audio
reasoning, and (iii) a multi-stage curriculum learning strategy. AF2 achieves
state-of-the-art performance with only a 3B parameter small language model,
surpassing large open-source and proprietary models across over 20 benchmarks.
Next, for the first time, we extend audio understanding to long audio segments
(30 secs to 5 mins) and propose LongAudio, a large and novel dataset for
training ALMs on long audio captioning and question-answering tasks.
Fine-tuning AF2 on LongAudio leads to exceptional performance on our proposed
LongAudioBench, an expert annotated benchmark for evaluating ALMs on long audio
understanding capabilities. We conduct extensive ablation studies to confirm
the efficacy of our approach. Project Website:
https://research.nvidia.com/labs/adlr/AF2/.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 00:10:26 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ghosh",
"Sreyan",
""
],
[
"Kong",
"Zhifeng",
""
],
[
"Kumar",
"Sonal",
""
],
[
"Sakshi",
"S",
""
],
[
"Kim",
"Jaehyeon",
""
],
[
"Ping",
"Wei",
""
],
[
"Valle",
"Rafael",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Catanzaro",
"Bryan",
""
]
]
| TITLE: Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding
and Expert Reasoning Abilities
ABSTRACT: Understanding and reasoning over non-speech sounds and music are crucial for
both humans and AI agents to interact effectively with their environments. In
this paper, we introduce Audio Flamingo 2 (AF2), an Audio-Language Model (ALM)
with advanced audio understanding and reasoning capabilities. AF2 leverages (i)
a custom CLAP model, (ii) synthetic Audio QA data for fine-grained audio
reasoning, and (iii) a multi-stage curriculum learning strategy. AF2 achieves
state-of-the-art performance with only a 3B parameter small language model,
surpassing large open-source and proprietary models across over 20 benchmarks.
Next, for the first time, we extend audio understanding to long audio segments
(30 secs to 5 mins) and propose LongAudio, a large and novel dataset for
training ALMs on long audio captioning and question-answering tasks.
Fine-tuning AF2 on LongAudio leads to exceptional performance on our proposed
LongAudioBench, an expert annotated benchmark for evaluating ALMs on long audio
understanding capabilities. We conduct extensive ablation studies to confirm
the efficacy of our approach. Project Website:
https://research.nvidia.com/labs/adlr/AF2/.
| new_dataset | 0.964085 |
2503.03987 | Wenhui Zhu | Wenhui Zhu, Xin Li, Xiwen Chen, Peijie Qiu, Vamsi Krishna Vasa,
Xuanzhao Dong, Yanxi Chen, Natasha Lepore, Oana Dumitrascu, Yi Su, Yalin Wang | RetinalGPT: A Retinal Clinical Preference Conversational Assistant
Powered by Large Vision-Language Models | null | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recently, Multimodal Large Language Models (MLLMs) have gained significant
attention for their remarkable ability to process and analyze non-textual data,
such as images, videos, and audio. Notably, several adaptations of
general-domain MLLMs to the medical field have been explored, including
LLaVA-Med. However, these medical adaptations remain insufficiently advanced in
understanding and interpreting retinal images. In contrast, medical experts
emphasize the importance of quantitative analyses for disease detection and
interpretation. This underscores a gap between general-domain and
medical-domain MLLMs: while general-domain MLLMs excel in broad applications,
they lack the specialized knowledge necessary for precise diagnostic and
interpretative tasks in the medical field. To address these challenges, we
introduce \textit{RetinalGPT}, a multimodal conversational assistant for
clinically preferred quantitative analysis of retinal images. Specifically, we
achieve this by compiling a large retinal image dataset, developing a novel
data pipeline, and employing customized visual instruction tuning to enhance
retinal analysis and enrich medical knowledge. In particular, RetinalGPT
outperforms general-domain MLLMs by a large margin in the diagnosis of
retinal diseases in 8 benchmark retinal datasets. Beyond disease diagnosis,
RetinalGPT features quantitative analyses and lesion localization, representing
a pioneering step in leveraging LLMs for an interpretable and end-to-end
clinical research framework. The code is available at
https://github.com/Retinal-Research/RetinalGPT
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 00:19:54 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhu",
"Wenhui",
""
],
[
"Li",
"Xin",
""
],
[
"Chen",
"Xiwen",
""
],
[
"Qiu",
"Peijie",
""
],
[
"Vasa",
"Vamsi Krishna",
""
],
[
"Dong",
"Xuanzhao",
""
],
[
"Chen",
"Yanxi",
""
],
[
"Lepore",
"Natasha",
""
],
[
"Dumitrascu",
"Oana",
""
],
[
"Su",
"Yi",
""
],
[
"Wang",
"Yalin",
""
]
]
| TITLE: RetinalGPT: A Retinal Clinical Preference Conversational Assistant
Powered by Large Vision-Language Models
ABSTRACT: Recently, Multimodal Large Language Models (MLLMs) have gained significant
attention for their remarkable ability to process and analyze non-textual data,
such as images, videos, and audio. Notably, several adaptations of
general-domain MLLMs to the medical field have been explored, including
LLaVA-Med. However, these medical adaptations remain insufficiently advanced in
understanding and interpreting retinal images. In contrast, medical experts
emphasize the importance of quantitative analyses for disease detection and
interpretation. This underscores a gap between general-domain and
medical-domain MLLMs: while general-domain MLLMs excel in broad applications,
they lack the specialized knowledge necessary for precise diagnostic and
interpretative tasks in the medical field. To address these challenges, we
introduce \textit{RetinalGPT}, a multimodal conversational assistant for
clinically preferred quantitative analysis of retinal images. Specifically, we
achieve this by compiling a large retinal image dataset, developing a novel
data pipeline, and employing customized visual instruction tuning to enhance
retinal analysis and enrich medical knowledge. In particular, RetinalGPT
outperforms general-domain MLLMs by a large margin in the diagnosis of
retinal diseases in 8 benchmark retinal datasets. Beyond disease diagnosis,
RetinalGPT features quantitative analyses and lesion localization, representing
a pioneering step in leveraging LLMs for an interpretable and end-to-end
clinical research framework. The code is available at
https://github.com/Retinal-Research/RetinalGPT
| no_new_dataset | 0.785391 |
2503.03989 | Xiangxin Zhou | Xiangxin Zhou, Yi Xiao, Haowei Lin, Xinheng He, Jiaqi Guan, Yang Wang,
Qiang Liu, Feng Zhou, Liang Wang, Jianzhu Ma | Integrating Protein Dynamics into Structure-Based Drug Design via
Full-Atom Stochastic Flows | Accepted to ICLR 2025 | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The dynamic nature of proteins, influenced by ligand interactions, is
essential for comprehending protein function and progressing drug discovery.
Traditional structure-based drug design (SBDD) approaches typically target
binding sites with rigid structures, limiting their practical application in
drug development. While molecular dynamics simulation can theoretically capture
all the biologically relevant conformations, the transition rate is dictated by
the intrinsic energy barrier between them, making the sampling process
computationally expensive. To overcome the aforementioned challenges, we
propose to use generative modeling for SBDD considering conformational changes
of protein pockets. We curate a dataset of apo and multiple holo states of
protein-ligand complexes, simulated by molecular dynamics, and propose a
full-atom flow model (and a stochastic version), named DynamicFlow, that learns
to transform apo pockets and noisy ligands into holo pockets and corresponding
3D ligand molecules. Our method uncovers promising ligand molecules and
corresponding holo conformations of pockets. Additionally, the resultant
holo-like states provide superior inputs for traditional SBDD approaches,
playing a significant role in practical drug discovery.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 00:34:44 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhou",
"Xiangxin",
""
],
[
"Xiao",
"Yi",
""
],
[
"Lin",
"Haowei",
""
],
[
"He",
"Xinheng",
""
],
[
"Guan",
"Jiaqi",
""
],
[
"Wang",
"Yang",
""
],
[
"Liu",
"Qiang",
""
],
[
"Zhou",
"Feng",
""
],
[
"Wang",
"Liang",
""
],
[
"Ma",
"Jianzhu",
""
]
]
| TITLE: Integrating Protein Dynamics into Structure-Based Drug Design via
Full-Atom Stochastic Flows
ABSTRACT: The dynamic nature of proteins, influenced by ligand interactions, is
essential for comprehending protein function and progressing drug discovery.
Traditional structure-based drug design (SBDD) approaches typically target
binding sites with rigid structures, limiting their practical application in
drug development. While molecular dynamics simulation can theoretically capture
all the biologically relevant conformations, the transition rate is dictated by
the intrinsic energy barrier between them, making the sampling process
computationally expensive. To overcome the aforementioned challenges, we
propose to use generative modeling for SBDD considering conformational changes
of protein pockets. We curate a dataset of apo and multiple holo states of
protein-ligand complexes, simulated by molecular dynamics, and propose a
full-atom flow model (and a stochastic version), named DynamicFlow, that learns
to transform apo pockets and noisy ligands into holo pockets and corresponding
3D ligand molecules. Our method uncovers promising ligand molecules and
corresponding holo conformations of pockets. Additionally, the resultant
holo-like states provide superior inputs for traditional SBDD approaches,
playing a significant role in practical drug discovery.
| new_dataset | 0.955194 |
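The apo-to-holo transformation described in the DynamicFlow row above is a flow model; a generic conditional flow-matching objective gives the flavor. The straight-line path and MSE target below follow standard flow matching; everything model-specific is omitted, so treat this as a sketch, not the paper's implementation.

```python
# Generic flow-matching loss: learn a velocity field that transports paired
# apo-state coordinates x0 to holo-state coordinates x1.
import torch

def flow_matching_loss(model, x0, x1):
    # x0, x1: (B, N, 3) paired atom coordinates; model(x_t, t) -> (B, N, 3).
    t = torch.rand(x0.shape[0], 1, 1, device=x0.device)  # one time per sample
    x_t = (1.0 - t) * x0 + t * x1          # straight-line probability path
    target = x1 - x0                       # d x_t / d t along that path
    return ((model(x_t, t.view(-1)) - target) ** 2).mean()
```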
2503.03995 | Sungwon Kim | Sungwon Kim, Yoonho Lee, Yunhak Oh, Namkyeong Lee, Sukwon Yun, Junseok
Lee, Sein Kim, Carl Yang, Chanyoung Park | Subgraph Federated Learning for Local Generalization | ICLR 2025 (oral) | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) on graphs enables collaborative model training to
enhance performance without compromising the privacy of each client. However,
existing methods often overlook the mutable nature of graph data, which
frequently introduces new nodes and leads to shifts in label distribution.
Since they focus solely on performing well on each client's local data, they
are prone to overfitting to their local distributions (i.e., local
overfitting), which hinders their ability to generalize to unseen data with
diverse label distributions. In contrast, our proposed method, FedLoG,
effectively tackles this issue by mitigating local overfitting. Our model
generates global synthetic data by condensing the reliable information from
each class representation and its structural information across clients. Using
these synthetic data as a training set, we alleviate the local overfitting
problem by adaptively generalizing the absent knowledge within each local
dataset. This enhances the generalization capabilities of local models,
enabling them to handle unseen data effectively. Our model outperforms
baselines in our proposed experimental settings, which are designed to measure
generalization power to unseen data in practical scenarios. Our code is
available at https://github.com/sung-won-kim/FedLoG
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 01:08:01 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Kim",
"Sungwon",
""
],
[
"Lee",
"Yoonho",
""
],
[
"Oh",
"Yunhak",
""
],
[
"Lee",
"Namkyeong",
""
],
[
"Yun",
"Sukwon",
""
],
[
"Lee",
"Junseok",
""
],
[
"Kim",
"Sein",
""
],
[
"Yang",
"Carl",
""
],
[
"Park",
"Chanyoung",
""
]
]
| TITLE: Subgraph Federated Learning for Local Generalization
ABSTRACT: Federated Learning (FL) on graphs enables collaborative model training to
enhance performance without compromising the privacy of each client. However,
existing methods often overlook the mutable nature of graph data, which
frequently introduces new nodes and leads to shifts in label distribution.
Since they focus solely on performing well on each client's local data, they
are prone to overfitting to their local distributions (i.e., local
overfitting), which hinders their ability to generalize to unseen data with
diverse label distributions. In contrast, our proposed method, FedLoG,
effectively tackles this issue by mitigating local overfitting. Our model
generates global synthetic data by condensing the reliable information from
each class representation and its structural information across clients. Using
these synthetic data as a training set, we alleviate the local overfitting
problem by adaptively generalizing the absent knowledge within each local
dataset. This enhances the generalization capabilities of local models,
enabling them to handle unseen data effectively. Our model outperforms
baselines in our proposed experimental settings, which are designed to measure
generalization power to unseen data in practical scenarios. Our code is
available at https://github.com/sung-won-kim/FedLoG
| no_new_dataset | 0.948251 |
2503.04002 | Md Nizam Uddin | Md Nizam Uddin, Yihe Zhang, and Xiali Hei | Deep Learning Aided Software Vulnerability Detection: A Survey | null | null | null | null | cs.SE cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The pervasive nature of software vulnerabilities has emerged as a primary
factor for the surge in cyberattacks. Traditional vulnerability detection
methods, including rule-based, signature-based, manual review, static, and
dynamic analysis, often exhibit limitations when encountering increasingly
complex systems and a fast-evolving attack landscape. Deep learning (DL)
methods excel at automatically learning and identifying complex patterns in
code, enabling more effective detection of emerging vulnerabilities. This
survey analyzes 34 relevant studies from high-impact journals and conferences
between 2017 and 2024. This survey introduces the conceptual framework
Vulnerability Detection Lifecycle for the first time to systematically analyze
and compare various DL-based vulnerability detection methods and unify them
into the same analysis perspective. The framework includes six phases: (1)
Dataset Construction, (2) Vulnerability Granularity Definition, (3) Code
Representation, (4) Model Design, (5) Model Performance Evaluation, and (6)
Real-world Project Implementation. For each phase of the framework, we identify
and explore key issues through in-depth analysis of existing research while
also highlighting challenges that remain inadequately addressed. This survey
provides guidelines for future software vulnerability detection, facilitating
the further application of deep learning techniques in this field.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 01:35:16 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Uddin",
"Md Nizam",
""
],
[
"Zhang",
"Yihe",
""
],
[
"Hei",
"Xiali",
""
]
]
| TITLE: Deep Learning Aided Software Vulnerability Detection: A Survey
ABSTRACT: The pervasive nature of software vulnerabilities has emerged as a primary
factor for the surge in cyberattacks. Traditional vulnerability detection
methods, including rule-based, signature-based, manual review, static, and
dynamic analysis, often exhibit limitations when encountering increasingly
complex systems and a fast-evolving attack landscape. Deep learning (DL)
methods excel at automatically learning and identifying complex patterns in
code, enabling more effective detection of emerging vulnerabilities. This
survey analyzes 34 relevant studies from high-impact journals and conferences
between 2017 and 2024. This survey introduces the conceptual framework
Vulnerability Detection Lifecycle for the first time to systematically analyze
and compare various DL-based vulnerability detection methods and unify them
into the same analysis perspective. The framework includes six phases: (1)
Dataset Construction, (2) Vulnerability Granularity Definition, (3) Code
Representation, (4) Model Design, (5) Model Performance Evaluation, and (6)
Real-world Project Implementation. For each phase of the framework, we identify
and explore key issues through in-depth analysis of existing research while
also highlighting challenges that remain inadequately addressed. This survey
provides guidelines for future software vulnerability detection, facilitating
the further application of deep learning techniques in this field.
| no_new_dataset | 0.942718 |
2503.04003 | Umar Farooq | Moshood Fakorede, Umar Farooq | Understanding and Detecting Compatibility Issues in Android Auto Apps | 12 pages, 9 tables | null | null | null | cs.SE cs.PL | http://creativecommons.org/licenses/by/4.0/ | Mobile platforms now power not only smartphones but also in-vehicle systems
like Android Auto and CarPlay. Despite an ecosystem of over 3.5 million Android
apps and more than 200 million Android Auto-compatible vehicles, only a few
hundred apps have been adapted for automotive use. To better understand this
gap, we studied 147 reported issues related to Android Auto and identified
their root causes. We found that more than 70% of issues result from UI
incompatibilities, 24% from media playback errors, and around 5% from failures
in voice command handling, showing a lack of effective tools for developers. We
introduce CarCompat, a static analysis framework that detects compatibility
problems in Android Auto apps. CarCompat constructs a Car-Control Flow Graph
(CCFG) to capture interactions among app components, lifecycle methods, and
platform-specific callbacks. It applies specialized checkers to detect UI
violations, media playback errors, and issues with voice command handling. We
evaluated CarCompat on a dataset of 54 Android Auto apps and detected 25 new
issues, 4 of which were confirmed by developers, and 2 developers have already
released their fixes. The results show that CarCompat helps developers identify
and fix compatibility issues, improving the in-vehicle experience.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 01:37:02 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Fakorede",
"Moshood",
""
],
[
"Farooq",
"Umar",
""
]
]
| TITLE: Understanding and Detecting Compatibility Issues in Android Auto Apps
ABSTRACT: Mobile platforms now power not only smartphones but also in-vehicle systems
like Android Auto and CarPlay. Despite an ecosystem of over 3.5 million Android
apps and more than 200 million Android Auto-compatible vehicles, only a few
hundred apps have been adapted for automotive use. To better understand this
gap, we studied 147 reported issues related to Android Auto and identified
their root causes. We found that more than 70% of issues result from UI
incompatibilities, 24% from media playback errors, and around 5% from failures
in voice command handling, showing a lack of effective tools for developers. We
introduce CarCompat, a static analysis framework that detects compatibility
problems in Android Auto apps. CarCompat constructs a Car-Control Flow Graph
(CCFG) to capture interactions among app components, lifecycle methods, and
platform-specific callbacks. It applies specialized checkers to detect UI
violations, media playback errors, and issues with voice command handling. We
evaluated CarCompat on a dataset of 54 Android Auto apps and detected 25 new
issues, 4 of which were confirmed by developers, and 2 developers have already
released their fixes. The results show that CarCompat helps developers identify
and fix compatibility issues, improving the in-vehicle experience.
| no_new_dataset | 0.884888 |
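To make the Car-Control Flow Graph (CCFG) idea in the CarCompat row above concrete, here is a toy graph over lifecycle methods and callbacks; the node names echo the androidx.car.app API but are illustrative, and the real tool analyzes compiled app code rather than hand-built graphs.

```python
# Toy CCFG: components, lifecycle methods, and callbacks as a directed graph
# that a checker can traverse with reachability-style queries.
import networkx as nx

ccfg = nx.DiGraph()
ccfg.add_edge("CarAppService.onCreateSession", "Session.onCreateScreen")
ccfg.add_edge("Session.onCreateScreen", "MainScreen.onGetTemplate")
ccfg.add_edge("MediaSession.onPlayFromMediaId", "Player.prepare")

entry = "CarAppService.onCreateSession"
print("UI entry reaches template construction:",
      "MainScreen.onGetTemplate" in nx.descendants(ccfg, entry))
```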
2503.04006 | Amin Karimi | Amin Karimi, Charalambos Poullis | DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for
Robust Few-Shot Segmentation | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Few-shot semantic segmentation (FSS) aims to enable models to segment
novel/unseen object classes using only a limited number of labeled examples.
However, current FSS methods frequently struggle with generalization due to
incomplete and biased feature representations, especially when support images
do not capture the full appearance variability of the target class. To improve
the FSS pipeline, we propose a novel framework that utilizes large language
models (LLMs) to adapt general class semantic information to the query image.
Furthermore, the framework employs dense pixel-wise matching to identify
similarities between query and support images, resulting in enhanced FSS
performance. Inspired by reasoning-based segmentation frameworks, our method,
named DSV-LFS, introduces an additional token into the LLM vocabulary, allowing
a multimodal LLM to generate a "semantic prompt" from class descriptions. In
parallel, a dense matching module identifies visual similarities between the
query and support images, generating a "visual prompt". These prompts are then
jointly employed to guide the prompt-based decoder for accurate segmentation of
the query image. Comprehensive experiments on the benchmark datasets
Pascal-$5^{i}$ and COCO-$20^{i}$ demonstrate that our framework achieves
state-of-the-art performance, by a significant margin, demonstrating superior
generalization to novel classes and robustness across diverse scenarios. The
source code is available at
\href{https://github.com/aminpdik/DSV-LFS}{https://github.com/aminpdik/DSV-LFS}
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 01:42:28 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Karimi",
"Amin",
""
],
[
"Poullis",
"Charalambos",
""
]
]
| TITLE: DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for
Robust Few-Shot Segmentation
ABSTRACT: Few-shot semantic segmentation (FSS) aims to enable models to segment
novel/unseen object classes using only a limited number of labeled examples.
However, current FSS methods frequently struggle with generalization due to
incomplete and biased feature representations, especially when support images
do not capture the full appearance variability of the target class. To improve
the FSS pipeline, we propose a novel framework that utilizes large language
models (LLMs) to adapt general class semantic information to the query image.
Furthermore, the framework employs dense pixel-wise matching to identify
similarities between query and support images, resulting in enhanced FSS
performance. Inspired by reasoning-based segmentation frameworks, our method,
named DSV-LFS, introduces an additional token into the LLM vocabulary, allowing
a multimodal LLM to generate a "semantic prompt" from class descriptions. In
parallel, a dense matching module identifies visual similarities between the
query and support images, generating a "visual prompt". These prompts are then
jointly employed to guide the prompt-based decoder for accurate segmentation of
the query image. Comprehensive experiments on the benchmark datasets
Pascal-$5^{i}$ and COCO-$20^{i}$ demonstrate that our framework achieves
state-of-the-art performance, by a significant margin, demonstrating superior
generalization to novel classes and robustness across diverse scenarios. The
source code is available at
\href{https://github.com/aminpdik/DSV-LFS}{https://github.com/aminpdik/DSV-LFS}
| no_new_dataset | 0.948822 |
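A minimal sketch of the dense pixel-wise matching idea described above: every query-pixel feature is correlated against the masked support features, and the best foreground match per pixel forms a "visual prompt" map. Shapes and the cosine-similarity score are generic assumptions, not DSV-LFS's exact module.

```python
import numpy as np

def dense_visual_prompt(query_feat, support_feat, support_mask):
    """Correlate every query pixel with masked support pixels (cosine similarity).

    query_feat:   (C, Hq, Wq) features of the query image
    support_feat: (C, Hs, Ws) features of the support image
    support_mask: (Hs, Ws) binary foreground mask of the support class
    Returns a (Hq, Wq) map: each query pixel's best match to the support foreground.
    """
    C, Hq, Wq = query_feat.shape
    q = query_feat.reshape(C, -1)                      # (C, Hq*Wq)
    s = support_feat.reshape(C, -1)                    # (C, Hs*Ws)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + 1e-8)
    sim = q.T @ s                                      # all pairwise cosine similarities
    fg = support_mask.reshape(-1).astype(bool)
    prompt = sim[:, fg].max(axis=1)                    # best foreground match per query pixel
    return prompt.reshape(Hq, Wq)

rng = np.random.default_rng(0)
vp = dense_visual_prompt(rng.normal(size=(64, 32, 32)),
                         rng.normal(size=(64, 32, 32)),
                         rng.integers(0, 2, size=(32, 32)))
print(vp.shape)  # (32, 32)
```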
2503.04007 | Burak Aksoy | Burak Aksoy, John Wen | Planning and Control for Deformable Linear Object Manipulation | SUBMITTED TO IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
(T-ASE) | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Manipulating a deformable linear object (DLO) such as a wire, cable, or rope
is a common yet challenging task due to its high degrees of freedom and
complex deformation behaviors, especially in an environment with obstacles.
Existing local control methods are efficient but prone to failure in complex
scenarios, while precise global planners are computationally intensive and
difficult to deploy. This paper presents an efficient, easy-to-deploy framework
for collision-free DLO manipulation using mobile manipulators. We demonstrate
the effectiveness of leveraging standard planning tools for high-dimensional
DLO manipulation without requiring custom planners or extensive data-driven
models. Our approach combines an off-the-shelf global planner with a real-time
local controller. The global planner approximates the DLO as a series of rigid
links connected by spherical joints, enabling rapid path planning without the
need for problem-specific planners or large datasets. The local controller
employs control barrier functions (CBFs) to enforce safety constraints,
maintain the DLO integrity, prevent overstress, and handle obstacle avoidance.
It compensates for modeling inaccuracies by using a state-of-the-art
position-based dynamics technique that approximates physical properties like
Young's and shear moduli. We validate our framework through extensive
simulations and real-world demonstrations. In complex obstacle
scenarios-including tent pole transport, corridor navigation, and tasks
requiring varied stiffness-our method achieves a 100% success rate over
thousands of trials, with significantly reduced planning times compared to
state-of-the-art techniques. Real-world experiments include transportation of a
tent pole and a rope using mobile manipulators. We share our ROS-based
implementation to facilitate adoption in various applications.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 01:44:36 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Aksoy",
"Burak",
""
],
[
"Wen",
"John",
""
]
]
| TITLE: Planning and Control for Deformable Linear Object Manipulation
ABSTRACT: Manipulating a deformable linear object (DLO) such as a wire, cable, or rope
is a common yet challenging task due to its high degrees of freedom and
complex deformation behaviors, especially in an environment with obstacles.
Existing local control methods are efficient but prone to failure in complex
scenarios, while precise global planners are computationally intensive and
difficult to deploy. This paper presents an efficient, easy-to-deploy framework
for collision-free DLO manipulation using mobile manipulators. We demonstrate
the effectiveness of leveraging standard planning tools for high-dimensional
DLO manipulation without requiring custom planners or extensive data-driven
models. Our approach combines an off-the-shelf global planner with a real-time
local controller. The global planner approximates the DLO as a series of rigid
links connected by spherical joints, enabling rapid path planning without the
need for problem-specific planners or large datasets. The local controller
employs control barrier functions (CBFs) to enforce safety constraints,
maintain the DLO integrity, prevent overstress, and handle obstacle avoidance.
It compensates for modeling inaccuracies by using a state-of-the-art
position-based dynamics technique that approximates physical properties like
Young's and shear moduli. We validate our framework through extensive
simulations and real-world demonstrations. In complex obstacle
scenarios-including tent pole transport, corridor navigation, and tasks
requiring varied stiffness-our method achieves a 100% success rate over
thousands of trials, with significantly reduced planning times compared to
state-of-the-art techniques. Real-world experiments include transportation of a
tent pole and a rope using mobile manipulators. We share our ROS-based
implementation to facilitate adoption in various applications.
| no_new_dataset | 0.949059 |
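A minimal control-barrier-function safety filter, assuming a single-integrator model and a circular obstacle; the paper's controller additionally handles DLO integrity and overstress constraints, which are omitted here.

```python
import cvxpy as cp
import numpy as np

def cbf_safe_velocity(x, u_nominal, obstacle, radius, alpha=1.0):
    """Filter a nominal velocity command through a control barrier function QP.

    Single-integrator model x_dot = u with barrier h(x) = ||x - obstacle||^2 - radius^2.
    Safety requires h_dot >= -alpha * h, i.e. 2 (x - obstacle)^T u >= -alpha * h(x).
    """
    h = float(np.sum((x - obstacle) ** 2) - radius ** 2)
    grad_h = 2.0 * (x - obstacle)
    u = cp.Variable(len(x))
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nominal)),
                      [grad_h @ u >= -alpha * h])
    prob.solve()
    return u.value

x = np.array([1.0, 0.0])
u_nom = np.array([-1.0, 0.0])   # nominal command heads straight at the obstacle
print(cbf_safe_velocity(x, u_nom, obstacle=np.zeros(2), radius=0.5))
```

The QP stays as close as possible to the nominal command while deflecting it just enough to keep the barrier nonnegative, which is why CBF filters are attractive as lightweight wrappers around any planner.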
2503.04018 | Kequan Chen | Kequan Chen, Pan Liu, Yuxuan Wang, David Z. W. Wang, Yifan Dai, Zhibin
Li | NsBM-GAT: A Non-stationary Block Maximum and Graph Attention Framework
for General Traffic Crash Risk Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate prediction of traffic crash risks for individual vehicles is
essential for enhancing vehicle safety. While significant attention has been
given to traffic crash risk prediction, existing studies face two main
challenges: First, due to the scarcity of individual vehicle data before
crashes, most models rely on hypothetical scenarios deemed dangerous by
researchers. This raises doubts about their applicability to actual pre-crash
conditions. Second, some crash risk prediction frameworks were learned from
dashcam videos. Although such videos capture the pre-crash behavior of
individual vehicles, they often lack critical information about the movements
of surrounding vehicles. However, the interaction between a vehicle and its
surrounding vehicles is highly influential in crash occurrences. To overcome
these challenges, we propose a novel non-stationary extreme value theory (EVT)
model, in which the covariate function is optimized in a nonlinear fashion using a graph
attention network. The EVT component incorporates the stochastic nature of
crashes through probability distribution, which enhances model
interpretability. Notably, the nonlinear covariate function enables the model
to capture the interactive behavior between the target vehicle and its multiple
surrounding vehicles, facilitating crash risk prediction across different
driving tasks. We train and test our model using 100 sets of vehicle trajectory
data before real crashes, collected via drones over three years from merging
and weaving segments. We demonstrate that our model successfully learns
micro-level precursors of crashes and fits a more accurate distribution with
the aid of the nonlinear covariate function. Our experiments on the testing
dataset show that the proposed model outperforms existing models by providing
more accurate predictions for both rear-end and sideswipe crashes
simultaneously.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:12:40 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Chen",
"Kequan",
""
],
[
"Liu",
"Pan",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Wang",
"David Z. W.",
""
],
[
"Dai",
"Yifan",
""
],
[
"Li",
"Zhibin",
""
]
]
| TITLE: NsBM-GAT: A Non-stationary Block Maximum and Graph Attention Framework
for General Traffic Crash Risk Prediction
ABSTRACT: Accurate prediction of traffic crash risks for individual vehicles is
essential for enhancing vehicle safety. While significant attention has been
given to traffic crash risk prediction, existing studies face two main
challenges: First, due to the scarcity of individual vehicle data before
crashes, most models rely on hypothetical scenarios deemed dangerous by
researchers. This raises doubts about their applicability to actual pre-crash
conditions. Second, some crash risk prediction frameworks were learned from
dashcam videos. Although such videos capture the pre-crash behavior of
individual vehicles, they often lack critical information about the movements
of surrounding vehicles. However, the interaction between a vehicle and its
surrounding vehicles is highly influential in crash occurrences. To overcome
these challenges, we propose a novel non-stationary extreme value theory (EVT)
model, in which the covariate function is optimized in a nonlinear fashion using a graph
attention network. The EVT component incorporates the stochastic nature of
crashes through probability distribution, which enhances model
interpretability. Notably, the nonlinear covariate function enables the model
to capture the interactive behavior between the target vehicle and its multiple
surrounding vehicles, facilitating crash risk prediction across different
driving tasks. We train and test our model using 100 sets of vehicle trajectory
data before real crashes, collected via drones over three years from merging
and weaving segments. We demonstrate that our model successfully learns
micro-level precursors of crashes and fits a more accurate distribution with
the aid of the nonlinear covariate function. Our experiments on the testing
dataset show that the proposed model outperforms existing models by providing
more accurate predictions for both rear-end and sideswipe crashes
simultaneously.
| no_new_dataset | 0.943086 |
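To make the non-stationary EVT idea concrete, the sketch below fits a generalized extreme value (GEV) distribution to block maxima with a covariate-dependent location parameter. A linear covariate function stands in for the paper's graph-attention covariate, purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def nonstationary_gev_nll(params, block_maxima, covariate):
    """Negative log-likelihood of a GEV whose location varies with a covariate.

    The paper learns the covariate function with a graph attention network; a
    linear mu(z) = b0 + b1 * z stands in here to keep the sketch self-contained.
    """
    b0, b1, log_scale, shape = params
    loc = b0 + b1 * covariate
    # scipy's genextreme uses c = -shape relative to the usual GEV convention.
    logpdf = genextreme.logpdf(block_maxima, c=-shape, loc=loc, scale=np.exp(log_scale))
    return -np.sum(logpdf)

rng = np.random.default_rng(1)
z = rng.uniform(0, 1, size=500)            # e.g. an interaction-risk covariate
maxima = genextreme.rvs(c=-0.1, loc=1.0 + 2.0 * z, scale=0.5, random_state=2)
fit = minimize(nonstationary_gev_nll, x0=[0.0, 0.0, 0.0, 0.05],
               args=(maxima, z), method="Nelder-Mead")
print(fit.x)  # roughly [1.0, 2.0, log(0.5), 0.1]
```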
2503.04021 | Wanglong Lu | Wanglong Lu, Lingming Su, Jingjing Zheng, Vin\'icius Veloso de Melo,
Farzaneh Shoeleh, John Hawkin, Terrence Tricco, Hanli Zhao, Xianta Jiang | TextDoctor: Unified Document Image Inpainting via Patch Pyramid
Diffusion Models | 28 pages, 25 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital versions of real-world text documents often suffer from issues like
environmental corrosion of the original document, low-quality scanning, or
human interference. Existing document restoration and inpainting methods
typically struggle with generalizing to unseen document styles and handling
high-resolution images. To address these challenges, we introduce TextDoctor, a
novel unified document image inpainting method. Inspired by human reading
behavior, TextDoctor restores fundamental text elements from patches and then
applies diffusion models to entire document images instead of training models
on specific document types. To handle varying text sizes and avoid
out-of-memory issues, common in high-resolution documents, we propose using
structure pyramid prediction and patch pyramid diffusion models. These
techniques leverage multiscale inputs and pyramid patches to enhance the
quality of inpainting both globally and locally. Extensive qualitative and
quantitative experiments on seven public datasets validated that TextDoctor
outperforms state-of-the-art methods in restoring various types of
high-resolution document images.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:16:35 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Lu",
"Wanglong",
""
],
[
"Su",
"Lingming",
""
],
[
"Zheng",
"Jingjing",
""
],
[
"de Melo",
"Vinícius Veloso",
""
],
[
"Shoeleh",
"Farzaneh",
""
],
[
"Hawkin",
"John",
""
],
[
"Tricco",
"Terrence",
""
],
[
"Zhao",
"Hanli",
""
],
[
"Jiang",
"Xianta",
""
]
]
| TITLE: TextDoctor: Unified Document Image Inpainting via Patch Pyramid
Diffusion Models
ABSTRACT: Digital versions of real-world text documents often suffer from issues like
environmental corrosion of the original document, low-quality scanning, or
human interference. Existing document restoration and inpainting methods
typically struggle with generalizing to unseen document styles and handling
high-resolution images. To address these challenges, we introduce TextDoctor, a
novel unified document image inpainting method. Inspired by human reading
behavior, TextDoctor restores fundamental text elements from patches and then
applies diffusion models to entire document images instead of training models
on specific document types. To handle varying text sizes and avoid
out-of-memory issues, common in high-resolution documents, we propose using
structure pyramid prediction and patch pyramid diffusion models. These
techniques leverage multiscale inputs and pyramid patches to enhance the
quality of inpainting both globally and locally. Extensive qualitative and
quantitative experiments on seven public datasets validated that TextDoctor
outperforms state-of-the-art methods in restoring various types of
high-resolution document images.
| no_new_dataset | 0.953275 |
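A sketch of the patch-pyramid decomposition underlying this kind of pipeline: overlapping patches extracted at several resolutions. Patch size, stride, and the number of levels are illustrative choices, not TextDoctor's configuration.

```python
import numpy as np

def patch_pyramid(image, patch=256, stride=192, levels=3):
    """Decompose a high-resolution document image into overlapping patches at
    several scales. Each level halves the resolution; overlap (stride < patch)
    lets a downstream model blend seams when patches are recomposed. Edge
    coverage is handled loosely here -- this is a sketch, not a full tiler."""
    pyramid = []
    img = image
    for level in range(levels):
        H, W = img.shape[:2]
        coords = [(y, x) for y in range(0, max(H - patch, 0) + 1, stride)
                          for x in range(0, max(W - patch, 0) + 1, stride)]
        patches = [img[y:y + patch, x:x + patch] for (y, x) in coords]
        pyramid.append((level, coords, patches))
        img = img[::2, ::2]  # crude 2x downsample; real pipelines would low-pass first
    return pyramid

doc = np.zeros((1024, 768), dtype=np.uint8)
for level, coords, patches in patch_pyramid(doc):
    print(level, len(patches), patches[0].shape if patches else None)
```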
2503.04024 | Philip Charles | Philip Charles, Deep Ray, Yue Yu, Joost Prins, Hugo Melchers, Michael
R. A. Abdelmalik, Jeffrey Cochran, Assad A. Oberai, Thomas J. R. Hughes, Mats
G. Larson | An optimal Petrov-Galerkin framework for operator networks | 39 pages, 22 figures, 5 tables | null | null | null | math.NA cs.LG cs.NA | http://creativecommons.org/licenses/by/4.0/ | The optimal Petrov-Galerkin formulation to solve partial differential
equations (PDEs) recovers the best approximation in a specified
finite-dimensional (trial) space with respect to a suitable norm. However, the
recovery of this optimal solution is contingent on being able to construct the
optimal weighting functions associated with the trial basis. While explicit
constructions are available for simple one- and two-dimensional problems, such
constructions for a general multidimensional problem remain elusive. In the
present work, we revisit the optimal Petrov-Galerkin formulation through the
lens of deep learning. We propose an operator network framework called
Petrov-Galerkin Variationally Mimetic Operator Network (PG-VarMiON), which
emulates the optimal Petrov-Galerkin weak form of the underlying PDE. The
PG-VarMiON is trained in a supervised manner using a labeled dataset comprising
the PDE data and the corresponding PDE solution, with the training loss
depending on the choice of the optimal norm. The special architecture of the
PG-VarMiON allows it to implicitly learn the optimal weighting functions, thus
endowing the proposed operator network with the ability to generalize well
beyond the training set. We derive approximation error estimates for
PG-VarMiON, highlighting the contributions of various error sources,
particularly the error in learning the true weighting functions. Several
numerical results are presented for the advection-diffusion equation to
demonstrate the efficacy of the proposed method. By embedding the
Petrov-Galerkin structure into the network architecture, PG-VarMiON exhibits
greater robustness and improved generalization compared to other popular deep
operator frameworks, particularly when the training data is limited.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:21:32 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Charles",
"Philip",
""
],
[
"Ray",
"Deep",
""
],
[
"Yu",
"Yue",
""
],
[
"Prins",
"Joost",
""
],
[
"Melchers",
"Hugo",
""
],
[
"Abdelmalik",
"Michael R. A.",
""
],
[
"Cochran",
"Jeffrey",
""
],
[
"Oberai",
"Assad A.",
""
],
[
"Hughes",
"Thomas J. R.",
""
],
[
"Larson",
"Mats G.",
""
]
]
| TITLE: An optimal Petrov-Galerkin framework for operator networks
ABSTRACT: The optimal Petrov-Galerkin formulation to solve partial differential
equations (PDEs) recovers the best approximation in a specified
finite-dimensional (trial) space with respect to a suitable norm. However, the
recovery of this optimal solution is contingent on being able to construct the
optimal weighting functions associated with the trial basis. While explicit
constructions are available for simple one- and two-dimensional problems, such
constructions for a general multidimensional problem remain elusive. In the
present work, we revisit the optimal Petrov-Galerkin formulation through the
lens of deep learning. We propose an operator network framework called
Petrov-Galerkin Variationally Mimetic Operator Network (PG-VarMiON), which
emulates the optimal Petrov-Galerkin weak form of the underlying PDE. The
PG-VarMiON is trained in a supervised manner using a labeled dataset comprising
the PDE data and the corresponding PDE solution, with the training loss
depending on the choice of the optimal norm. The special architecture of the
PG-VarMiON allows it to implicitly learn the optimal weighting functions, thus
endowing the proposed operator network with the ability to generalize well
beyond the training set. We derive approximation error estimates for
PG-VarMiON, highlighting the contributions of various error sources,
particularly the error in learning the true weighting functions. Several
numerical results are presented for the advection-diffusion equation to
demonstrate the efficacy of the proposed method. By embedding the
Petrov-Galerkin structure into the network architecture, PG-VarMiON exhibits
greater robustness and improved generalization compared to other popular deep
operator frameworks, particularly when the training data is limited.
| no_new_dataset | 0.9455 |
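A toy illustration of "training loss depending on the choice of the optimal norm": an H1-type discrete loss that penalizes both values and finite-difference gradients. The tiny MLP and synthetic data are generic stand-ins, not the PG-VarMiON architecture.

```python
import torch

def h1_loss(u_pred, u_true, dx):
    """Discrete H1-type loss on a 1D grid: L2 error of values plus L2 error of
    first differences. Swapping this for plain MSE recovers an L2-norm training."""
    value_err = torch.mean((u_pred - u_true) ** 2)
    grad_err = torch.mean(((u_pred[:, 1:] - u_pred[:, :-1]) / dx
                           - (u_true[:, 1:] - u_true[:, :-1]) / dx) ** 2)
    return value_err + grad_err

# Generic stand-in network mapping sampled PDE data (e.g. a forcing term on the
# grid) to the solution on the same grid -- not the actual PG-VarMiON design.
n = 64
net = torch.nn.Sequential(torch.nn.Linear(n, 128), torch.nn.Tanh(),
                          torch.nn.Linear(128, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = torch.randn(32, n)                 # batch of PDE data samples
u = torch.cumsum(f, dim=1) / n         # synthetic "solutions" for the sketch
for step in range(200):
    opt.zero_grad()
    loss = h1_loss(net(f), u, dx=1.0 / n)
    loss.backward()
    opt.step()
print(float(loss))
```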
2503.04034 | Xihan Wang | Xihan Wang, Dianyi Yang, Yu Gao, Yufeng Yue, Yi Yang, Mengyin Fu | GaussianGraph: 3D Gaussian-based Scene Graph Generation for Open-world
Scene Understanding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in 3D Gaussian Splatting(3DGS) have significantly
improved semantic scene understanding, enabling natural language queries to
localize objects within a scene. However, existing methods primarily focus on
embedding compressed CLIP features into 3D Gaussians, suffering from low object
segmentation accuracy and lacking spatial reasoning capabilities. To address these
limitations, we propose GaussianGraph, a novel framework that enhances
3DGS-based scene understanding by integrating adaptive semantic clustering and
scene graph generation. We introduce a "Control-Follow" clustering strategy,
which dynamically adapts to scene scale and feature distribution, avoiding
feature compression and significantly improving segmentation accuracy.
Additionally, we enrich scene representation by integrating object attributes
and spatial relations extracted from 2D foundation models. To address
inaccuracies in spatial relationships, we propose 3D correction modules that
filter implausible relations through spatial consistency verification, ensuring
reliable scene graph construction. Extensive experiments on three datasets
demonstrate that GaussianGraph outperforms state-of-the-art methods in both
semantic segmentation and object grounding tasks, providing a robust solution
for complex scene understanding and interaction.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:36:59 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Wang",
"Xihan",
""
],
[
"Yang",
"Dianyi",
""
],
[
"Gao",
"Yu",
""
],
[
"Yue",
"Yufeng",
""
],
[
"Yang",
"Yi",
""
],
[
"Fu",
"Mengyin",
""
]
]
| TITLE: GaussianGraph: 3D Gaussian-based Scene Graph Generation for Open-world
Scene Understanding
ABSTRACT: Recent advancements in 3D Gaussian Splatting(3DGS) have significantly
improved semantic scene understanding, enabling natural language queries to
localize objects within a scene. However, existing methods primarily focus on
embedding compressed CLIP features into 3D Gaussians, suffering from low object
segmentation accuracy and lacking spatial reasoning capabilities. To address these
limitations, we propose GaussianGraph, a novel framework that enhances
3DGS-based scene understanding by integrating adaptive semantic clustering and
scene graph generation. We introduce a "Control-Follow" clustering strategy,
which dynamically adapts to scene scale and feature distribution, avoiding
feature compression and significantly improving segmentation accuracy.
Additionally, we enrich scene representation by integrating object attributes
and spatial relations extracted from 2D foundation models. To address
inaccuracies in spatial relationships, we propose 3D correction modules that
filter implausible relations through spatial consistency verification, ensuring
reliable scene graph construction. Extensive experiments on three datasets
demonstrate that GaussianGraph outperforms state-of-the-art methods in both
semantic segmentation and object grounding tasks, providing a robust solution
for complex scene understanding and interaction.
| no_new_dataset | 0.94625 |
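A toy version of 3D spatial-consistency verification: candidate relations from a 2D model are checked against 3D object centers and implausible ones are dropped. Only a single "above" predicate is implemented, and the z-up convention is an assumption of the sketch.

```python
import numpy as np

def verify_spatial_relations(centers, relations):
    """Filter implausible spatial relations against 3D object centers.

    centers:   {name: (x, y, z)} object centers (z-up convention assumed)
    relations: [(subject, predicate, object), ...] candidates from a 2D model
    Real correction modules verify many relation types; only 'above' is shown.
    """
    kept = []
    for subj, pred, obj in relations:
        if pred == "above":
            if centers[subj][2] > centers[obj][2]:
                kept.append((subj, pred, obj))
        else:
            kept.append((subj, pred, obj))   # unknown predicates pass through
    return kept

centers = {"lamp": np.array([0.0, 0.0, 1.8]), "table": np.array([0.0, 0.0, 0.7])}
print(verify_spatial_relations(centers, [("table", "above", "lamp"),
                                         ("lamp", "above", "table")]))
# [('lamp', 'above', 'table')]
```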
2503.04037 | Yifei Gao | Yifei Gao, Jun Huang, Lei Wang, Ruiting Dai, Jun Cheng | Beyond Existence: Fulfill 3D Reconstructed Scenes with Pseudo Details | null | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of 3D Gaussian Splatting (3D-GS) has significantly advanced 3D
reconstruction by providing high fidelity and fast training speeds across
various scenarios. While recent efforts have mainly focused on improving model
structures to compress data volume or reduce artifacts during zoom-in and
zoom-out operations, they often overlook an underlying issue: training sampling
deficiency. In zoomed-in views, Gaussian primitives can appear unregulated and
distorted due to their dilation limitations and the insufficient availability
of scale-specific training samples. Consequently, incorporating pseudo-details
that ensure the completeness and alignment of the scene becomes essential. In
this paper, we introduce a new training method that integrates diffusion models
and multi-scale training using pseudo-ground-truth data. This approach not only
notably mitigates the dilation and zoomed-in artifacts but also enriches
reconstructed scenes with precise details out of existing scenarios. Our method
achieves state-of-the-art performance across various benchmarks and extends the
capabilities of 3D reconstruction beyond training datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:46:10 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Gao",
"Yifei",
""
],
[
"Huang",
"Jun",
""
],
[
"Wang",
"Lei",
""
],
[
"Dai",
"Ruiting",
""
],
[
"Cheng",
"Jun",
""
]
]
| TITLE: Beyond Existence: Fulfill 3D Reconstructed Scenes with Pseudo Details
ABSTRACT: The emergence of 3D Gaussian Splatting (3D-GS) has significantly advanced 3D
reconstruction by providing high fidelity and fast training speeds across
various scenarios. While recent efforts have mainly focused on improving model
structures to compress data volume or reduce artifacts during zoom-in and
zoom-out operations, they often overlook an underlying issue: training sampling
deficiency. In zoomed-in views, Gaussian primitives can appear unregulated and
distorted due to their dilation limitations and the insufficient availability
of scale-specific training samples. Consequently, incorporating pseudo-details
that ensure the completeness and alignment of the scene becomes essential. In
this paper, we introduce a new training method that integrates diffusion models
and multi-scale training using pseudo-ground-truth data. This approach not only
notably mitigates the dilation and zoomed-in artifacts but also enriches
reconstructed scenes with precise details out of existing scenarios. Our method
achieves state-of-the-art performance across various benchmarks and extends the
capabilities of 3D reconstruction beyond training datasets.
| no_new_dataset | 0.948585 |
2503.04046 | Zhipeng Zhou | Zhipeng Zhou, Ziqiao Meng, Pengcheng Wu, Peilin Zhao, Chunyan Miao | Continual Optimization with Symmetry Teleportation for Multi-Task
Learning | 10 pages,8 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multi-task learning (MTL) is a widely explored paradigm that enables the
simultaneous learning of multiple tasks using a single model. Despite numerous
solutions, the key issues of optimization conflict and task imbalance remain
under-addressed, limiting performance. Unlike existing optimization-based
approaches that typically reweight task losses or gradients to mitigate
conflicts or promote progress, we propose a novel approach based on Continual
Optimization with Symmetry Teleportation (COST). During MTL optimization, when
an optimization conflict arises, we seek an alternative loss-equivalent point
on the loss landscape to reduce conflict. Specifically, we utilize a low-rank
adapter (LoRA) to facilitate this practical teleportation by designing
convergent, loss-invariant objectives. Additionally, we introduce a historical
trajectory reuse strategy to continually leverage the benefits of advanced
optimizers. Extensive experiments on multiple mainstream datasets demonstrate
the effectiveness of our approach. COST is a plug-and-play solution that
enhances a wide range of existing MTL methods. When integrated with
state-of-the-art methods, COST achieves superior performance.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:58:09 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhou",
"Zhipeng",
""
],
[
"Meng",
"Ziqiao",
""
],
[
"Wu",
"Pengcheng",
""
],
[
"Zhao",
"Peilin",
""
],
[
"Miao",
"Chunyan",
""
]
]
| TITLE: Continual Optimization with Symmetry Teleportation for Multi-Task
Learning
ABSTRACT: Multi-task learning (MTL) is a widely explored paradigm that enables the
simultaneous learning of multiple tasks using a single model. Despite numerous
solutions, the key issues of optimization conflict and task imbalance remain
under-addressed, limiting performance. Unlike existing optimization-based
approaches that typically reweight task losses or gradients to mitigate
conflicts or promote progress, we propose a novel approach based on Continual
Optimization with Symmetry Teleportation (COST). During MTL optimization, when
an optimization conflict arises, we seek an alternative loss-equivalent point
on the loss landscape to reduce conflict. Specifically, we utilize a low-rank
adapter (LoRA) to facilitate this practical teleportation by designing
convergent, loss-invariant objectives. Additionally, we introduce a historical
trajectory reuse strategy to continually leverage the benefits of advanced
optimizers. Extensive experiments on multiple mainstream datasets demonstrate
the effectiveness of our approach. COST is a plug-and-play solution that
enhances a wide range of existing MTL methods. When integrated with
state-of-the-art methods, COST achieves superior performance.
| no_new_dataset | 0.944842 |
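Since COST realizes teleportation through a low-rank adapter, a minimal generic LoRA layer is sketched below; it shows only the adapter mechanics (frozen base weight plus trainable low-rank update), not COST's loss-invariant teleportation objective.

```python
import torch

class LoRALinear(torch.nn.Module):
    """A frozen linear layer plus a trainable low-rank update W + (alpha/r) * B @ A."""

    def __init__(self, base: torch.nn.Linear, r=4, alpha=8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # the pretrained weight stays fixed
        self.A = torch.nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r               # B starts at zero, so the initial update is a no-op

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(torch.nn.Linear(32, 16))
print(layer(torch.randn(4, 32)).shape)       # torch.Size([4, 16])
```

Because B is zero-initialized, training begins exactly at the base model's loss, which is precisely what makes low-rank adapters a natural vehicle for loss-preserving parameter moves.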
2503.04050 | Weilong Cao | Zhong Ji, Weilong Cao, Yan Zhang, Yanwei Pang, Jungong Han, Xuelong Li | Underlying Semantic Diffusion for Effective and Efficient In-Context
Learning | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models have emerged as a powerful framework for tasks like image
controllable generation and dense prediction. However, existing models often
struggle to capture underlying semantics (e.g., edges, textures, shapes) and
effectively utilize in-context learning, limiting their contextual
understanding and image generation quality. Additionally, high computational
costs and slow inference speeds hinder their real-time applicability. To
address these challenges, we propose Underlying Semantic Diffusion
(US-Diffusion), an enhanced diffusion model that boosts underlying semantics
learning, computational efficiency, and in-context learning capabilities on
multi-task scenarios. We introduce Separate & Gather Adapter (SGA), which
decouples input conditions for different tasks while sharing the architecture,
enabling better in-context learning and generalization across diverse visual
domains. We also present a Feedback-Aided Learning (FAL) framework, which
leverages feedback signals to guide the model in capturing semantic details and
dynamically adapting to task-specific contextual cues. Furthermore, we propose
a plug-and-play Efficient Sampling Strategy (ESS) for dense sampling at time
steps with high-noise levels, which aims at optimizing training and inference
efficiency while maintaining strong in-context learning performance.
Experimental results demonstrate that US-Diffusion outperforms the
state-of-the-art method, achieving an average reduction of 7.47 in FID on
Map2Image tasks and an average reduction of 0.026 in RMSE on Image2Map tasks,
while achieving approximately 9.45 times faster inference speed. Our method
also demonstrates superior training efficiency and in-context learning
capabilities, excelling in new datasets and tasks, highlighting its robustness
and adaptability across diverse visual domains.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 03:06:22 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ji",
"Zhong",
""
],
[
"Cao",
"Weilong",
""
],
[
"Zhang",
"Yan",
""
],
[
"Pang",
"Yanwei",
""
],
[
"Han",
"Jungong",
""
],
[
"Li",
"Xuelong",
""
]
]
| TITLE: Underlying Semantic Diffusion for Effective and Efficient In-Context
Learning
ABSTRACT: Diffusion models have emerged as a powerful framework for tasks like image
controllable generation and dense prediction. However, existing models often
struggle to capture underlying semantics (e.g., edges, textures, shapes) and
effectively utilize in-context learning, limiting their contextual
understanding and image generation quality. Additionally, high computational
costs and slow inference speeds hinder their real-time applicability. To
address these challenges, we propose Underlying Semantic Diffusion
(US-Diffusion), an enhanced diffusion model that boosts underlying semantics
learning, computational efficiency, and in-context learning capabilities on
multi-task scenarios. We introduce Separate & Gather Adapter (SGA), which
decouples input conditions for different tasks while sharing the architecture,
enabling better in-context learning and generalization across diverse visual
domains. We also present a Feedback-Aided Learning (FAL) framework, which
leverages feedback signals to guide the model in capturing semantic details and
dynamically adapting to task-specific contextual cues. Furthermore, we propose
a plug-and-play Efficient Sampling Strategy (ESS) for dense sampling at time
steps with high-noise levels, which aims at optimizing training and inference
efficiency while maintaining strong in-context learning performance.
Experimental results demonstrate that US-Diffusion outperforms the
state-of-the-art method, achieving an average reduction of 7.47 in FID on
Map2Image tasks and an average reduction of 0.026 in RMSE on Image2Map tasks,
while achieving approximately 9.45 times faster inference speed. Our method
also demonstrates superior training efficiency and in-context learning
capabilities, excelling in new datasets and tasks, highlighting its robustness
and adaptability across diverse visual domains.
| no_new_dataset | 0.948775 |
2503.04058 | Haiyang Yu | Haiyang Yu, Jinghui Lu, Yanjie Wang, Yang Li, Han Wang, Can Huang, Bin
Li | EVE: Towards End-to-End Video Subtitle Extraction with Vision-Language
Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of Large Vision-Language Models (LVLMs) has advanced
video-based tasks, such as video captioning and video understanding. Some
previous research indicates that taking texts in videos as input can further
improve the performance of video understanding. As a type of indispensable
information in short videos or movies, subtitles can assist LVLMs to better
understand videos. Most existing methods for video subtitle extraction are
based on a multi-stage framework, handling each frame independently. They can
hardly exploit the temporal information of videos. Although some LVLMs exhibit
robust OCR capability, predicting accurate timestamps for subtitle texts is
still challenging. In this paper, we propose an End-to-end Video Subtitle
Extraction method, called EVE, which consists of three modules: a vision
encoder, an adapter module, and a large language model. To effectively compress
the visual tokens from the vision encoder, we propose a novel adapter
InterleavedVT to interleave two modalities. It contains a visual compressor and
a textual region compressor. The proposed InterleavedVT exploits both the
merits of average pooling and Q-Former in token compression. Taking the
temporal information of videos into account, we introduce a sliding-window
mechanism in the textual region compressor. To benchmark the video subtitle
extraction task, we propose a large dataset ViSa including 2.5M videos.
Extensive experiments on ViSa demonstrate that the proposed EVE can outperform
existing open-sourced tools and LVLMs.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 03:19:56 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Yu",
"Haiyang",
""
],
[
"Lu",
"Jinghui",
""
],
[
"Wang",
"Yanjie",
""
],
[
"Li",
"Yang",
""
],
[
"Wang",
"Han",
""
],
[
"Huang",
"Can",
""
],
[
"Li",
"Bin",
""
]
]
| TITLE: EVE: Towards End-to-End Video Subtitle Extraction with Vision-Language
Models
ABSTRACT: The advent of Large Vision-Language Models (LVLMs) has advanced
video-based tasks, such as video captioning and video understanding. Some
previous research indicates that taking texts in videos as input can further
improve the performance of video understanding. As a type of indispensable
information in short videos or movies, subtitles can assist LVLMs to better
understand videos. Most existing methods for video subtitle extraction are
based on a multi-stage framework, handling each frame independently. They can
hardly exploit the temporal information of videos. Although some LVLMs exhibit
robust OCR capability, predicting accurate timestamps for subtitle texts is
still challenging. In this paper, we propose an End-to-end Video Subtitle
Extraction method, called EVE, which consists of three modules: a vision
encoder, an adapter module, and a large language model. To effectively compress
the visual tokens from the vision encoder, we propose a novel adapter
InterleavedVT to interleave two modalities. It contains a visual compressor and
a textual region compressor. The proposed InterleavedVT exploits both the
merits of average pooling and Q-Former in token compression. Taking the
temporal information of videos into account, we introduce a sliding-window
mechanism in the textual region compressor. To benchmark the video subtitle
extraction task, we propose a large dataset ViSa including 2.5M videos.
Extensive experiments on ViSa demonstrate that the proposed EVE can outperform
existing open-sourced tools and LVLMs.
| new_dataset | 0.958304 |
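A rough sketch of sliding-window compression over per-frame text-region tokens: frames inside a temporal window are pooled together so repeated subtitles share tokens. Window size and pooling budget are illustrative, not EVE's actual settings.

```python
import numpy as np

def compress_text_regions(frame_tokens, window=8, pooled_per_window=4):
    """Sliding-window average pooling over per-frame text-region tokens.

    frame_tokens: (T, N, C) -- T frames, N region tokens each, C channels.
    Consecutive frames within a window are pooled together so nearby frames
    showing the same subtitle share a small token budget.
    """
    T, N, C = frame_tokens.shape
    out = []
    for start in range(0, T, window):
        chunk = frame_tokens[start:start + window].reshape(-1, C)
        # Average-pool groups of tokens down to a fixed budget per window.
        groups = np.array_split(chunk, pooled_per_window, axis=0)
        out.append(np.stack([g.mean(axis=0) for g in groups]))
    return np.concatenate(out)              # (num_windows * pooled_per_window, C)

tokens = np.random.rand(32, 16, 64)
print(compress_text_regions(tokens).shape)  # (16, 64)
```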
2503.04070 | Aaron Kaplan | Aaron D. Kaplan (1), Runze Liu (2), Ji Qi (2), Tsz Wai Ko (2), Bowen
Deng (1 and 3), Janosh Riebesell (1 and 4), Gerbrand Ceder (1), Kristin A.
Persson (1 and 3), Shyue Ping Ong (2) ((1) Lawrence Berkeley National
Laboratory, (2) University of California San Diego, (3) University of
California Berkeley, (4) University of Cambridge) | A Foundational Potential Energy Surface Dataset for Materials | The first three listed authors contributed equally to this work. For
training data, see http://matpes.ai or
https://materialsproject-contribs.s3.amazonaws.com/index.html#MatPES_2025_1/ | null | null | null | cond-mat.mtrl-sci physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate potential energy surface (PES) descriptions are essential for
atomistic simulations of materials. Universal machine learning interatomic
potentials (UMLIPs)$^{1-3}$ offer a computationally efficient alternative to
density functional theory (DFT)$^4$ for PES modeling across the periodic table.
However, their accuracy today is fundamentally constrained due to a reliance on
DFT relaxation data.$^{5,6}$ Here, we introduce MatPES, a foundational PES
dataset comprising $\sim 400,000$ structures carefully sampled from 281 million
molecular dynamics snapshots that span 16 billion atomic environments. We
demonstrate that UMLIPs trained on the modestly sized MatPES dataset can rival,
or even outperform, prior models trained on much larger datasets across a broad
range of equilibrium, near-equilibrium, and molecular dynamics property
benchmarks. We also introduce the first high-fidelity PES dataset based on the
revised regularized strongly constrained and appropriately normed (r$^2$SCAN)
functional$^7$ with greatly improved descriptions of interatomic bonding. The
open source MatPES initiative emphasizes the importance of data quality over
quantity in materials science and enables broad community-driven advancements
toward more reliable, generalizable, and efficient UMLIPs for large-scale
materials discovery and design.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 04:06:59 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Kaplan",
"Aaron D.",
"",
"1 and 3"
],
[
"Liu",
"Runze",
"",
"1 and 3"
],
[
"Qi",
"Ji",
"",
"1 and 3"
],
[
"Ko",
"Tsz Wai",
"",
"1 and 3"
],
[
"Deng",
"Bowen",
"",
"1 and 3"
],
[
"Riebesell",
"Janosh",
"",
"1 and 4"
],
[
"Ceder",
"Gerbrand",
"",
"1 and 3"
],
[
"Persson",
"Kristin A.",
"",
"1 and 3"
],
[
"Ong",
"Shyue Ping",
""
]
]
| TITLE: A Foundational Potential Energy Surface Dataset for Materials
ABSTRACT: Accurate potential energy surface (PES) descriptions are essential for
atomistic simulations of materials. Universal machine learning interatomic
potentials (UMLIPs)$^{1-3}$ offer a computationally efficient alternative to
density functional theory (DFT)$^4$ for PES modeling across the periodic table.
However, their accuracy today is fundamentally constrained due to a reliance on
DFT relaxation data.$^{5,6}$ Here, we introduce MatPES, a foundational PES
dataset comprising $\sim 400,000$ structures carefully sampled from 281 million
molecular dynamics snapshots that span 16 billion atomic environments. We
demonstrate that UMLIPs trained on the modestly sized MatPES dataset can rival,
or even outperform, prior models trained on much larger datasets across a broad
range of equilibrium, near-equilibrium, and molecular dynamics property
benchmarks. We also introduce the first high-fidelity PES dataset based on the
revised regularized strongly constrained and appropriately normed (r$^2$SCAN)
functional$^7$ with greatly improved descriptions of interatomic bonding. The
open source MatPES initiative emphasizes the importance of data quality over
quantity in materials science and enables broad community-driven advancements
toward more reliable, generalizable, and efficient UMLIPs for large-scale
materials discovery and design.
| new_dataset | 0.965932 |
2503.04071 | Miao Li | Miao Li, Michael Klamkin, Mathieu Tanneau, Reza Zandehshahvar, and
Pascal Van Hentenryck | Conformal Prediction with Upper and Lower Bound Models | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies a Conformal Prediction (CP) methodology for building
prediction intervals in a regression setting, given only deterministic lower
and upper bounds on the target variable. It proposes a new CP mechanism (CPUL)
that goes beyond post-processing by adopting a model selection approach over
multiple nested interval construction methods. Paradoxically, many
well-established CP methods, including CPUL, may fail to provide adequate
coverage in regions where the bounds are tight. To remedy this limitation, the
paper proposes an optimal thresholding mechanism, OMLT, that adjusts CPUL
intervals in tight regions with undercoverage. The combined CPUL-OMLT is
validated on large-scale learning tasks where the goal is to bound the optimal
value of a parametric optimization problem. The experimental results
demonstrate substantial improvements over baseline methods across various
datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 04:07:25 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Li",
"Miao",
""
],
[
"Klamkin",
"Michael",
""
],
[
"Tanneau",
"Mathieu",
""
],
[
"Zandehshahvar",
"Reza",
""
],
[
"Van Hentenryck",
"Pascal",
""
]
]
| TITLE: Conformal Prediction with Upper and Lower Bound Models
ABSTRACT: This paper studies a Conformal Prediction (CP) methodology for building
prediction intervals in a regression setting, given only deterministic lower
and upper bounds on the target variable. It proposes a new CP mechanism (CPUL)
that goes beyond post-processing by adopting a model selection approach over
multiple nested interval construction methods. Paradoxically, many
well-established CP methods, including CPUL, may fail to provide adequate
coverage in regions where the bounds are tight. To remedy this limitation, the
paper proposes an optimal thresholding mechanism, OMLT, that adjusts CPUL
intervals in tight regions with undercoverage. The combined CPUL-OMLT is
validated on large-scale learning tasks where the goal is to bound the optimal
value of a parametric optimization problem. The experimental results
demonstrate substantial improvements over baseline methods across various
datasets.
| no_new_dataset | 0.950549 |
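A minimal split-conformal sketch for the bounds-only setting: the bound midpoint serves as the point prediction and half-width-normalized residuals as nonconformity scores. This shows the generic mechanism only; CPUL's model selection and the OMLT thresholding are not reproduced.

```python
import numpy as np

def conformal_from_bounds(lower, upper, y_cal, lower_test, upper_test, alpha=0.1):
    """Split conformal prediction using only lower/upper bound models.

    Half-width-normalized residuals make intervals adapt to bound tightness,
    which matters because tight-bound regions are exactly where naive CP
    methods can undercover.
    """
    mid_cal = 0.5 * (lower + upper)
    half_cal = np.maximum(0.5 * (upper - lower), 1e-8)
    scores = np.abs(y_cal - mid_cal) / half_cal
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    mid = 0.5 * (lower_test + upper_test)
    half = np.maximum(0.5 * (upper_test - lower_test), 1e-8)
    return mid - q * half, mid + q * half

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
lo_m = y - rng.uniform(0.1, 1.0, 1000)
up_m = y + rng.uniform(0.1, 1.0, 1000)
lo_hat, up_hat = conformal_from_bounds(lo_m[:500], up_m[:500], y[:500],
                                       lo_m[500:], up_m[500:])
print(np.mean((y[500:] >= lo_hat) & (y[500:] <= up_hat)))  # >= 0.9 for alpha=0.1
```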
2503.04076 | Yiwen Dong | Yiwen Dong, Zhenyang Xu, Yongqiang Tian, Chengnian Sun | Beyond Memorization: Evaluating the True Type Inference Capabilities of
LLMs for Java Code Snippets | under review | null | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Type inference is a crucial task for reusing online code snippets, often
found on platforms like StackOverflow, which frequently lack essential type
information such as fully qualified names (FQNs) and required libraries. Recent
studies have leveraged Large Language Models (LLMs) for type inference on code
snippets, showing promising results. However, these results are potentially
affected by data leakage, as the benchmark suite (StatType-SO) has been public
on GitHub since 2017 (full suite in 2023). Thus, it is uncertain whether LLMs'
strong performance reflects genuine code semantics understanding or a mere
retrieval of ground truth from training data.
To comprehensively assess LLMs' type inference capabilities on Java code
snippets, we conducted a three-pronged evaluation. First, utilizing Thalia, a
program synthesis technique, we created ThaliaType--a new, unseen dataset for
type inference evaluation. On unseen snippets, LLM performance dropped
significantly, with up to a 59% decrease in precision and 72% in recall.
Second, we developed semantic-preserving transformations that significantly
degraded LLMs' type inference performance, revealing weaknesses in
understanding code semantics. Third, we used delta debugging to identify the
minimal syntax elements sufficient for LLM inference. While type inference
primarily involves inferring FQNs for types in the code snippet, LLMs correctly
infer FQNs even when the types were absent from the snippets, suggesting a
reliance on knowledge from training instead of thoroughly analyzing the
snippets.
Our findings indicate that LLMs' strong past performance likely stemmed from
data leakage, rather than a genuine understanding of the semantics of code
snippets. Our findings highlight the crucial need for carefully designed
benchmarks using unseen code snippets to assess the true capabilities of LLMs
for type inference tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 04:13:40 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Dong",
"Yiwen",
""
],
[
"Xu",
"Zhenyang",
""
],
[
"Tian",
"Yongqiang",
""
],
[
"Sun",
"Chengnian",
""
]
]
| TITLE: Beyond Memorization: Evaluating the True Type Inference Capabilities of
LLMs for Java Code Snippets
ABSTRACT: Type inference is a crucial task for reusing online code snippets, often
found on platforms like StackOverflow, which frequently lack essential type
information such as fully qualified names (FQNs) and required libraries. Recent
studies have leveraged Large Language Models (LLMs) for type inference on code
snippets, showing promising results. However, these results are potentially
affected by data leakage, as the benchmark suite (StatType-SO) has been public
on GitHub since 2017 (full suite in 2023). Thus, it is uncertain whether LLMs'
strong performance reflects genuine code semantics understanding or a mere
retrieval of ground truth from training data.
To comprehensively assess LLMs' type inference capabilities on Java code
snippets, we conducted a three-pronged evaluation. First, utilizing Thalia, a
program synthesis technique, we created ThaliaType--a new, unseen dataset for
type inference evaluation. On unseen snippets, LLM performance dropped
significantly, with up to a 59% decrease in precision and 72% in recall.
Second, we developed semantic-preserving transformations that significantly
degraded LLMs' type inference performance, revealing weaknesses in
understanding code semantics. Third, we used delta debugging to identify the
minimal syntax elements sufficient for LLM inference. While type inference
primarily involves inferring FQNs for types in the code snippet, LLMs correctly
infer FQNs even when the types were absent from the snippets, suggesting a
reliance on knowledge from training instead of thoroughly analyzing the
snippets.
Our findings indicate that LLMs' strong past performance likely stemmed from
data leakage, rather than a genuine understanding of the semantics of code
snippets. Our findings highlight the crucial need for carefully designed
benchmarks using unseen code snippets to assess the true capabilities of LLMs
for type inference tasks.
| new_dataset | 0.963882 |
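One of the simplest semantic-preserving transformations used in this kind of study is alpha-renaming identifiers. The regex-based sketch below conveys the idea but deliberately ignores string literals and comments; a real tool would rename via the AST.

```python
import re

def alpha_rename(java_snippet, mapping):
    """Rename identifiers in a Java snippet (semantic-preserving for local names).

    Regex word-boundary matching is a rough stand-in for AST-based renaming;
    it would incorrectly rewrite occurrences inside strings or comments.
    """
    for old, new in mapping.items():
        java_snippet = re.sub(rf"\b{re.escape(old)}\b", new, java_snippet)
    return java_snippet

snippet = """
List<String> names = new ArrayList<>();
names.add(reader.readLine());
"""
print(alpha_rename(snippet, {"names": "v0", "reader": "v1"}))
```

If a model's type-inference accuracy drops sharply under such meaning-preserving rewrites, that is evidence it is matching memorized surface forms rather than analyzing semantics.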
2503.04079 | Idris Sunmola | Idris O. Sunmola, Zhenjun Zhao, Samuel Schmidgall, Yumeng Wang, Paul
Maria Scheikl, and Axel Krieger | Surgical Gaussian Surfels: Highly Accurate Real-time Surgical Scene
Rendering | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Accurate geometric reconstruction of deformable tissues in monocular
endoscopic video remains a fundamental challenge in robot-assisted minimally
invasive surgery. Although recent volumetric and point primitive methods based
on neural radiance fields (NeRF) and 3D Gaussian primitives have efficiently
rendered surgical scenes, they still struggle with handling artifact-free tool
occlusions and preserving fine anatomical details. These limitations stem from
unrestricted Gaussian scaling and insufficient surface alignment constraints
during reconstruction. To address these issues, we introduce Surgical Gaussian
Surfels (SGS), which transforms anisotropic point primitives into
surface-aligned elliptical splats by constraining the scale component of the
Gaussian covariance matrix along the view-aligned axis. We predict accurate
surfel motion fields using a lightweight Multi-Layer Perceptron (MLP) coupled
with locality constraints to handle complex tissue deformations. We use
homodirectional view-space positional gradients to capture fine image details
by splitting Gaussian Surfels in over-reconstructed regions. In addition, we
define surface normals as the direction of the steepest density change within
each Gaussian surfel primitive, enabling accurate normal estimation without
requiring monocular normal priors. We evaluate our method on two in-vivo
surgical datasets, where it outperforms current state-of-the-art methods in
surface geometry, normal map quality, and rendering efficiency, while remaining
competitive in real-time rendering performance. We make our code available at
https://github.com/aloma85/SurgicalGaussianSurfels
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 04:33:19 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Sunmola",
"Idris O.",
""
],
[
"Zhao",
"Zhenjun",
""
],
[
"Schmidgall",
"Samuel",
""
],
[
"Wang",
"Yumeng",
""
],
[
"Scheikl",
"Paul Maria",
""
],
[
"Krieger",
"Axel",
""
]
]
| TITLE: Surgical Gaussian Surfels: Highly Accurate Real-time Surgical Scene
Rendering
ABSTRACT: Accurate geometric reconstruction of deformable tissues in monocular
endoscopic video remains a fundamental challenge in robot-assisted minimally
invasive surgery. Although recent volumetric and point primitive methods based
on neural radiance fields (NeRF) and 3D Gaussian primitives have efficiently
rendered surgical scenes, they still struggle with handling artifact-free tool
occlusions and preserving fine anatomical details. These limitations stem from
unrestricted Gaussian scaling and insufficient surface alignment constraints
during reconstruction. To address these issues, we introduce Surgical Gaussian
Surfels (SGS), which transforms anisotropic point primitives into
surface-aligned elliptical splats by constraining the scale component of the
Gaussian covariance matrix along the view-aligned axis. We predict accurate
surfel motion fields using a lightweight Multi-Layer Perceptron (MLP) coupled
with locality constraints to handle complex tissue deformations. We use
homodirectional view-space positional gradients to capture fine image details
by splitting Gaussian Surfels in over-reconstructed regions. In addition, we
define surface normals as the direction of the steepest density change within
each Gaussian surfel primitive, enabling accurate normal estimation without
requiring monocular normal priors. We evaluate our method on two in-vivo
surgical datasets, where it outperforms current state-of-the-art methods in
surface geometry, normal map quality, and rendering efficiency, while remaining
competitive in real-time rendering performance. We make our code available at
https://github.com/aloma85/SurgicalGaussianSurfels
| no_new_dataset | 0.950319 |
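A simplified version of the view-aligned scale constraint: the Gaussian axis most aligned with the viewing direction is collapsed, turning the ellipsoid into a surface-aligned splat whose normal is the steepest-density axis. The epsilon floor and the alignment rule are assumptions for the sketch.

```python
import numpy as np

def surfelize(rotation, scales, view_dir, eps=1e-4):
    """Flatten a 3D Gaussian into a surface-aligned surfel.

    rotation: (3, 3) orthonormal matrix whose columns are the Gaussian's axes.
    scales:   (3,) per-axis standard deviations.
    Returns the flattened covariance and the surfel normal.
    """
    v = view_dir / np.linalg.norm(view_dir)
    alignment = np.abs(rotation.T @ v)
    k = int(np.argmax(alignment))      # axis closest to the view direction
    s = scales.copy()
    s[k] = eps                         # collapse that axis: ellipsoid -> elliptical splat
    cov = rotation @ np.diag(s ** 2) @ rotation.T
    normal = rotation[:, k]            # direction of steepest density change
    return cov, normal

cov, n = surfelize(np.eye(3), np.array([0.2, 0.1, 0.15]),
                   view_dir=np.array([0.0, 0.0, 1.0]))
print(np.round(cov, 6), n)
```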
2503.04085 | Arash Mozhdehi | Arash Mozhdehi, Yunli Wang, Sun Sun, Xin Wang | SED2AM: Solving Multi-Trip Time-Dependent Vehicle Routing Problem using
Deep Reinforcement Learning | Accepted by ACM TKDD: https://dl.acm.org/doi/10.1145/3721983 | null | 10.1145/3721983 | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep reinforcement learning (DRL)-based frameworks, featuring
Transformer-style policy networks, have demonstrated their efficacy across
various vehicle routing problem (VRP) variants. However, the application of
these methods to the multi-trip time-dependent vehicle routing problem
(MTTDVRP) with maximum working hours constraints -- a pivotal element of urban
logistics -- remains largely unexplored. This paper introduces a DRL-based
method called the Simultaneous Encoder and Dual Decoder Attention Model
(SED2AM), tailored for the MTTDVRP with maximum working hours constraints. The
proposed method introduces a temporal locality inductive bias to the encoding
module of the policy networks, enabling it to effectively account for the
time-dependency in travel distance or time. The decoding module of SED2AM
includes a vehicle selection decoder that selects a vehicle from the fleet,
effectively associating trips with vehicles for functional multi-trip routing.
Additionally, this decoding module is equipped with a trip construction decoder
leveraged for constructing trips for the vehicles. This policy model is
equipped with two classes of state representations, fleet state and routing
state, providing the information needed for effective route construction in the
presence of maximum working hours constraints. Experimental results using
real-world datasets from two major Canadian cities not only show that SED2AM
outperforms the current state-of-the-art DRL-based and metaheuristic-based
baselines but also demonstrate its generalizability to solve larger-scale
problems.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 04:47:49 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Mozhdehi",
"Arash",
""
],
[
"Wang",
"Yunli",
""
],
[
"Sun",
"Sun",
""
],
[
"Wang",
"Xin",
""
]
]
| TITLE: SED2AM: Solving Multi-Trip Time-Dependent Vehicle Routing Problem using
Deep Reinforcement Learning
ABSTRACT: Deep reinforcement learning (DRL)-based frameworks, featuring
Transformer-style policy networks, have demonstrated their efficacy across
various vehicle routing problem (VRP) variants. However, the application of
these methods to the multi-trip time-dependent vehicle routing problem
(MTTDVRP) with maximum working hours constraints -- a pivotal element of urban
logistics -- remains largely unexplored. This paper introduces a DRL-based
method called the Simultaneous Encoder and Dual Decoder Attention Model
(SED2AM), tailored for the MTTDVRP with maximum working hours constraints. The
proposed method introduces a temporal locality inductive bias to the encoding
module of the policy networks, enabling it to effectively account for the
time-dependency in travel distance or time. The decoding module of SED2AM
includes a vehicle selection decoder that selects a vehicle from the fleet,
effectively associating trips with vehicles for functional multi-trip routing.
Additionally, this decoding module is equipped with a trip construction decoder
leveraged for constructing trips for the vehicles. This policy model is
equipped with two classes of state representations, fleet state and routing
state, providing the information needed for effective route construction in the
presence of maximum working hours constraints. Experimental results using
real-world datasets from two major Canadian cities not only show that SED2AM
outperforms the current state-of-the-art DRL-based and metaheuristic-based
baselines but also demonstrate its generalizability to solve larger-scale
problems.
| no_new_dataset | 0.944995 |
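A toy feasibility mask illustrating how maximum working hours interact with time-dependent travel times during decoding: a customer remains selectable only if the vehicle can serve it and still return to the depot within its shift. The travel-time function is an invented stand-in, not SED2AM's learned state encoding.

```python
import numpy as np

def feasible_actions(current, time_now, travel_time, service, depot, max_hours):
    """Mask customer nodes a vehicle can still visit and return from in time.

    travel_time(i, j, t) -> hours; a stand-in for a tabulated time-dependent
    travel-time matrix. Times are clock hours within one working day.
    """
    n = len(service)
    mask = np.zeros(n, dtype=bool)
    for j in range(n):
        arrive = time_now + travel_time(current, j, time_now)
        done = arrive + service[j]
        back = done + travel_time(j, depot, done)
        mask[j] = back <= max_hours
    return mask

# Toy time-dependent travel time: base distance inflated during a "rush hour".
tt = lambda i, j, t: abs(i - j) * 0.1 * (1.5 if 8 <= t % 24 <= 10 else 1.0)
print(feasible_actions(current=0, time_now=7.5, travel_time=tt,
                       service=np.full(5, 0.5), depot=0, max_hours=9.0))
```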
2503.04094 | Seth Karten | Seth Karten, Andy Luu Nguyen, Chi Jin | Pok\'eChamp: an Expert-level Minimax Language Agent | 24 pages, 13 figures | null | null | null | cs.LG cs.MA | http://creativecommons.org/licenses/by/4.0/ | We introduce Pok\'eChamp, a minimax agent powered by Large Language Models
(LLMs) for Pok\'emon battles. Built on a general framework for two-player
competitive games, Pok\'eChamp leverages the generalist capabilities of LLMs to
enhance minimax tree search. Specifically, LLMs replace three key modules: (1)
player action sampling, (2) opponent modeling, and (3) value function
estimation, enabling the agent to effectively utilize gameplay history and
human knowledge to reduce the search space and address partial observability.
Notably, our framework requires no additional LLM training. We evaluate
Pok\'eChamp in the popular Gen 9 OU format. When powered by GPT-4o, it achieves
a win rate of 76% against the best existing LLM-based bot and 84% against the
strongest rule-based bot, demonstrating its superior performance. Even with an
open-source 8-billion-parameter Llama 3.1 model, Pok\'eChamp consistently
outperforms the previous best LLM-based bot, Pok\'ellmon powered by GPT-4o,
with a 64% win rate. Pok\'eChamp attains a projected Elo of 1300-1500 on the
Pok\'emon Showdown online ladder, placing it among the top 30%-10% of human
players. In addition, this work compiles the largest real-player Pok\'emon
battle dataset, featuring over 3 million games, including more than 500k
high-Elo matches. Based on this dataset, we establish a series of battle
benchmarks and puzzles to evaluate specific battling skills. We further provide
key updates to the local game engine. We hope this work fosters further
research that leverages Pok\'emon battles as a benchmark to integrate LLM
technologies with game-theoretic algorithms addressing general multiagent
problems. Videos, code, and dataset available at
https://sites.google.com/view/pokechamp-llm.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 05:06:27 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Karten",
"Seth",
""
],
[
"Nguyen",
"Andy Luu",
""
],
[
"Jin",
"Chi",
""
]
]
| TITLE: Pok\'eChamp: an Expert-level Minimax Language Agent
ABSTRACT: We introduce Pok\'eChamp, a minimax agent powered by Large Language Models
(LLMs) for Pok\'emon battles. Built on a general framework for two-player
competitive games, Pok\'eChamp leverages the generalist capabilities of LLMs to
enhance minimax tree search. Specifically, LLMs replace three key modules: (1)
player action sampling, (2) opponent modeling, and (3) value function
estimation, enabling the agent to effectively utilize gameplay history and
human knowledge to reduce the search space and address partial observability.
Notably, our framework requires no additional LLM training. We evaluate
Pok\'eChamp in the popular Gen 9 OU format. When powered by GPT-4o, it achieves
a win rate of 76% against the best existing LLM-based bot and 84% against the
strongest rule-based bot, demonstrating its superior performance. Even with an
open-source 8-billion-parameter Llama 3.1 model, Pok\'eChamp consistently
outperforms the previous best LLM-based bot, Pok\'ellmon powered by GPT-4o,
with a 64% win rate. Pok\'eChamp attains a projected Elo of 1300-1500 on the
Pok\'emon Showdown online ladder, placing it among the top 30%-10% of human
players. In addition, this work compiles the largest real-player Pok\'emon
battle dataset, featuring over 3 million games, including more than 500k
high-Elo matches. Based on this dataset, we establish a series of battle
benchmarks and puzzles to evaluate specific battling skills. We further provide
key updates to the local game engine. We hope this work fosters further
research that leverages Pok\'emon battles as a benchmark for integrating LLM
technologies with game-theoretic algorithms that address general multiagent
problems. Videos, code, and dataset available at
https://sites.google.com/view/pokechamp-llm.
| new_dataset | 0.943191 |
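The record above replaces the value function inside a minimax search with an LLM estimate. A minimal sketch of that pattern only, assuming a toy Nim game and a stub heuristic in place of the LLM value estimator; none of this is PokéChamp's actual code.

```python
class NimGame:
    """Toy two-player game: remove 1 or 2 stones; whoever takes the last stone wins."""
    def legal_actions(self, state):
        return [a for a in (1, 2) if a <= state]

    def apply(self, state, action):
        return state - action

    def is_terminal(self, state):
        return state == 0

def stub_value(state, maximizing):
    # Stand-in for the LLM value estimator: +1.0 means the maximizing player wins.
    if state == 0:
        return -1.0 if maximizing else 1.0  # the player to move at 0 has already lost
    return 0.0                              # non-terminal horizon -> neutral estimate

def minimax(state, depth, maximizing, value_fn, game):
    """Depth-limited minimax with a pluggable leaf evaluator."""
    if depth == 0 or game.is_terminal(state):
        return value_fn(state, maximizing)
    children = [minimax(game.apply(state, a), depth - 1, not maximizing, value_fn, game)
                for a in game.legal_actions(state)]
    return max(children) if maximizing else min(children)

game = NimGame()
best = max(game.legal_actions(5),
           key=lambda a: minimax(game.apply(5, a), 5, False, stub_value, game))
print("best opening move from 5 stones:", best)  # 2, leaving a multiple of 3
```

Swapping `stub_value` for a learned or LLM-backed estimator changes nothing structurally, which is the point of the pattern.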
2503.04096 | Beverley Gorry Miss | Beverley Gorry, Tobias Fischer, Michael Milford, Alejandro Fontan | Image-Based Relocalization and Alignment for Long-Term Monitoring of
Dynamic Underwater Environments | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective monitoring of underwater ecosystems is crucial for tracking
environmental changes, guiding conservation efforts, and ensuring long-term
ecosystem health. However, automating underwater ecosystem management with
robotic platforms remains challenging due to the complexities of underwater
imagery, which pose significant difficulties for traditional visual
localization methods. We propose an integrated pipeline that combines Visual
Place Recognition (VPR), feature matching, and image segmentation on
video-derived images. This method enables robust identification of revisited
areas, estimation of rigid transformations, and downstream analysis of
ecosystem changes. Furthermore, we introduce the SQUIDLE+ VPR Benchmark-the
first large-scale underwater VPR benchmark designed to leverage an extensive
collection of unstructured data from multiple robotic platforms, spanning time
intervals from days to years. The dataset encompasses diverse trajectories,
arbitrary overlap and diverse seafloor types captured under varying
environmental conditions, including differences in depth, lighting, and
turbidity. Our code is available at: https://github.com/bev-gorry/underloc
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 05:13:19 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Gorry",
"Beverley",
""
],
[
"Fischer",
"Tobias",
""
],
[
"Milford",
"Michael",
""
],
[
"Fontan",
"Alejandro",
""
]
]
| TITLE: Image-Based Relocalization and Alignment for Long-Term Monitoring of
Dynamic Underwater Environments
ABSTRACT: Effective monitoring of underwater ecosystems is crucial for tracking
environmental changes, guiding conservation efforts, and ensuring long-term
ecosystem health. However, automating underwater ecosystem management with
robotic platforms remains challenging due to the complexities of underwater
imagery, which pose significant difficulties for traditional visual
localization methods. We propose an integrated pipeline that combines Visual
Place Recognition (VPR), feature matching, and image segmentation on
video-derived images. This method enables robust identification of revisited
areas, estimation of rigid transformations, and downstream analysis of
ecosystem changes. Furthermore, we introduce the SQUIDLE+ VPR Benchmark-the
first large-scale underwater VPR benchmark designed to leverage an extensive
collection of unstructured data from multiple robotic platforms, spanning time
intervals from days to years. The dataset encompasses diverse trajectories,
arbitrary overlap and diverse seafloor types captured under varying
environmental conditions, including differences in depth, lighting, and
turbidity. Our code is available at: https://github.com/bev-gorry/underloc
| new_dataset | 0.960287 |
2503.04106 | Haoran Wang | Haoran Wang, Lian Huai, Wenbin Li, Lei Qi, Xingqun Jiang, Yinghuan Shi | WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM with
Sub-Class Exploration and Prompt Affinity Mining | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have witnessed remarkable progress in foundation models in vision tasks.
Currently, several recent works have utilized the Segment Anything Model
(SAM) to boost the segmentation performance in medical images, where most of
them focus on training an adaptor for fine-tuning a large amount of pixel-wise
annotated medical images in a fully supervised manner. In this paper, to
reduce the labeling cost, we investigate a novel weakly-supervised SAM-based
segmentation model, namely WeakMedSAM. Specifically, our proposed WeakMedSAM
contains two modules: 1) to mitigate severe co-occurrence in medical images, a
sub-class exploration module is introduced to learn accurate feature
representations. 2) to improve the quality of the class activation maps, our
prompt affinity mining module utilizes the prompt capability of SAM to obtain
an affinity map for random-walk refinement. Our method can be applied to any
SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM. The
experimental results on three widely used benchmark datasets, i.e., BraTS
2019, AbdomenCT-1K, and MSD Cardiac dataset, show the promising results of our
proposed WeakMedSAM. Our code is available at
https://github.com/wanghr64/WeakMedSAM.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 05:28:44 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Wang",
"Haoran",
""
],
[
"Huai",
"Lian",
""
],
[
"Li",
"Wenbin",
""
],
[
"Qi",
"Lei",
""
],
[
"Jiang",
"Xingqun",
""
],
[
"Shi",
"Yinghuan",
""
]
]
| TITLE: WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM with
Sub-Class Exploration and Prompt Affinity Mining
ABSTRACT: We have witnessed remarkable progress in foundation models in vision tasks.
Currently, several recent works have utilized the Segment Anything Model
(SAM) to boost the segmentation performance in medical images, where most of
them focus on training an adaptor for fine-tuning a large amount of pixel-wise
annotated medical images in a fully supervised manner. In this paper, to
reduce the labeling cost, we investigate a novel weakly-supervised SAM-based
segmentation model, namely WeakMedSAM. Specifically, our proposed WeakMedSAM
contains two modules: 1) to mitigate severe co-occurrence in medical images, a
sub-class exploration module is introduced to learn accurate feature
representations. 2) to improve the quality of the class activation maps, our
prompt affinity mining module utilizes the prompt capability of SAM to obtain
an affinity map for random-walk refinement. Our method can be applied to any
SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM. The
experimental results on three widely used benchmark datasets, i.e., BraTS
2019, AbdomenCT-1K, and MSD Cardiac dataset, show the promising results of our
proposed WeakMedSAM. Our code is available at
https://github.com/wanghr64/WeakMedSAM.
| no_new_dataset | 0.948106 |
2503.04118 | Congxi Xiao | Congxi Xiao, Jingbo Zhou, Yixiong Xiao, Xinjiang Lu, Le Zhang, Hui
Xiong | TimeFound: A Foundation Model for Time Series Forecasting | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present TimeFound, an encoder-decoder transformer-based time series
foundation model for out-of-the-box zero-shot forecasting. To handle time
series data from various domains, TimeFound employs a multi-resolution patching
strategy to capture complex temporal patterns at multiple scales. We pre-train
our model with two sizes (200M and 710M parameters) on a large time-series
corpus comprising both real-world and synthetic datasets. Over a collection of
unseen datasets across diverse domains and forecasting horizons, our empirical
evaluations suggest that TimeFound can achieve superior or competitive
zero-shot forecasting performance, compared to state-of-the-art time series
foundation models.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 05:55:45 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Xiao",
"Congxi",
""
],
[
"Zhou",
"Jingbo",
""
],
[
"Xiao",
"Yixiong",
""
],
[
"Lu",
"Xinjiang",
""
],
[
"Zhang",
"Le",
""
],
[
"Xiong",
"Hui",
""
]
]
| TITLE: TimeFound: A Foundation Model for Time Series Forecasting
ABSTRACT: We present TimeFound, an encoder-decoder transformer-based time series
foundation model for out-of-the-box zero-shot forecasting. To handle time
series data from various domains, TimeFound employs a multi-resolution patching
strategy to capture complex temporal patterns at multiple scales. We pre-train
our model with two sizes (200M and 710M parameters) on a large time-series
corpus comprising both real-world and synthetic datasets. Over a collection of
unseen datasets across diverse domains and forecasting horizons, our empirical
evaluations suggest that TimeFound can achieve superior or competitive
zero-shot forecasting performance, compared to state-of-the-art time series
foundation models.
| no_new_dataset | 0.949576 |
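The record above mentions a multi-resolution patching strategy. A minimal sketch of one plausible reading, slicing a series into non-overlapping patches at several patch lengths, with arbitrary toy patch sizes; TimeFound's actual configuration is not specified in the abstract.

```python
import numpy as np

def multi_resolution_patches(series, patch_sizes=(8, 16, 32)):
    """Split a 1-D series into non-overlapping patches at several resolutions."""
    out = {}
    for p in patch_sizes:
        n = len(series) // p
        out[p] = series[: n * p].reshape(n, p)  # drop any remainder at the tail
    return out

x = np.sin(np.linspace(0, 20, 128))             # toy input series
for size, patches in multi_resolution_patches(x).items():
    print(f"patch size {size}: {patches.shape[0]} patches")
```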
2503.04121 | Alan Luo | Alan Luo, Kaiwen Yuan | Simple Self Organizing Map with Visual Transformer | 5 pages, 4 figures. Submitted to IEEE. All experiments and code work
were performed by the first author, with the second author serving in a
PI/mentor role, guiding the progression of the work | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformers (ViTs) have demonstrated exceptional performance in
various vision tasks. However, they tend to underperform on smaller datasets
due to their inherent lack of inductive biases. Current approaches address this
limitation implicitly-often by pairing ViTs with pretext tasks or by distilling
knowledge from convolutional neural networks (CNNs) to strengthen the prior. In
contrast, Self-Organizing Maps (SOMs), a widely adopted self-supervised
framework, are inherently structured to preserve topology and spatial
organization, making them a promising candidate to directly address the
limitations of ViTs in limited or small training datasets. Despite this
potential, equipping SOMs with modern deep learning architectures remains
largely unexplored. In this study, we conduct a novel exploration on how Vision
Transformers (ViTs) and Self-Organizing Maps (SOMs) can empower each other,
aiming to bridge this critical research gap. Our findings demonstrate that
these architectures can synergistically enhance each other, leading to
significantly improved performance in both unsupervised and supervised tasks.
Code will be publicly available.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 05:58:41 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Luo",
"Alan",
""
],
[
"Yuan",
"Kaiwen",
""
]
]
| TITLE: Simple Self Organizing Map with Visual Transformer
ABSTRACT: Vision Transformers (ViTs) have demonstrated exceptional performance in
various vision tasks. However, they tend to underperform on smaller datasets
due to their inherent lack of inductive biases. Current approaches address this
limitation implicitly-often by pairing ViTs with pretext tasks or by distilling
knowledge from convolutional neural networks (CNNs) to strengthen the prior. In
contrast, Self-Organizing Maps (SOMs), a widely adopted self-supervised
framework, are inherently structured to preserve topology and spatial
organization, making them a promising candidate to directly address the
limitations of ViTs in limited or small training datasets. Despite this
potential, equipping SOMs with modern deep learning architectures remains
largely unexplored. In this study, we conduct a novel exploration of how Vision
Transformers (ViTs) and Self-Organizing Maps (SOMs) can empower each other,
aiming to bridge this critical research gap. Our findings demonstrate that
these architectures can synergistically enhance each other, leading to
significantly improved performance in both unsupervised and supervised tasks.
Code will be publicly available.
| no_new_dataset | 0.944536 |
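The record above builds on Self-Organizing Maps. For reference, a minimal sketch of the classical online SOM update the paper starts from, with arbitrary toy grid size and hyperparameters; how the paper couples this with a ViT encoder is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 16
weights = rng.normal(size=(grid_h, grid_w, dim))        # codebook on a 2-D grid
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)  # (h, w, 2) unit positions

def som_step(x, weights, lr=0.1, sigma=1.5):
    """One online SOM update: find the best-matching unit, pull its neighbors toward x."""
    bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                           (grid_h, grid_w))
    grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]  # Gaussian neighborhood
    return weights + lr * h * (x - weights)

for _ in range(100):                                    # toy training loop
    weights = som_step(rng.normal(size=dim), weights)
print("codebook shape:", weights.shape)
```

The topology preservation the abstract relies on comes from the Gaussian neighborhood term: nearby grid units move together.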
2503.04133 | Kamal Choudhary | Kamal Choudhary | The JARVIS Infrastructure is All You Need for Materials Design | null | null | null | null | cond-mat.mtrl-sci physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Joint Automated Repository for Various Integrated Simulations (JARVIS) is a
comprehensive infrastructure offering databases, tools, tutorials, and
benchmarks for multiscale, multimodal, forward, and inverse materials design.
Emphasizing open access principles and reproducibility, it integrates
theoretical and experimental methodologies such as density functional theory,
quantum Monte Carlo, tight-binding, classical force fields, and
machine-learning approaches-including fingerprinting, graph neural networks,
and transformer models. Its experimental data collection spans cryogenics,
microscopy, and diffraction, covering materials like metals, semiconductors,
insulators, superconductors, carbon capture systems, high-strength compounds,
and low-dimensional materials, heterostructures and defects. JARVIS
disseminates resources via open datasets, web applications, executable scripts,
and peer-reviewed publications, ensuring broad accessibility and
reproducibility. Widely adopted worldwide, it has facilitated millions of data
and tool downloads. By unifying diverse methods and data under one platform,
JARVIS drives both fundamental discoveries and real-world innovations,
advancing conventional and data-driven materials design.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 06:26:32 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Choudhary",
"Kamal",
""
]
]
| TITLE: The JARVIS Infrastructure is All You Need for Materials Design
ABSTRACT: Joint Automated Repository for Various Integrated Simulations (JARVIS) is a
comprehensive infrastructure offering databases, tools, tutorials, and
benchmarks for multiscale, multimodal, forward, and inverse materials design.
Emphasizing open access principles and reproducibility, it integrates
theoretical and experimental methodologies such as density functional theory,
quantum Monte Carlo, tight-binding, classical force fields, and
machine-learning approaches-including fingerprinting, graph neural networks,
and transformer models. Its experimental data collection spans cryogenics,
microscopy, and diffraction, covering materials like metals, semiconductors,
insulators, superconductors, carbon capture systems, high-strength compounds,
and low-dimensional materials, heterostructures and defects. JARVIS
disseminates resources via open datasets, web applications, executable scripts,
and peer-reviewed publications, ensuring broad accessibility and
reproducibility. Widely adopted worldwide, it has facilitated millions of data
and tool downloads. By unifying diverse methods and data under one platform,
JARVIS drives both fundamental discoveries and real-world innovations,
advancing conventional and data-driven materials design.
| no_new_dataset | 0.942188 |
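The record above describes the JARVIS open datasets. A minimal sketch of pulling one dataset with the jarvis-tools package; the dataset name "dft_3d" is one documented example, and the available fields should be checked against the current JARVIS documentation.

```python
from jarvis.db.figshare import data  # pip install jarvis-tools

dft_3d = data("dft_3d")              # downloads and caches one open JARVIS dataset
print(len(dft_3d), "entries")
print(sorted(dft_3d[0].keys())[:5])  # peek at a few available fields
```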
2503.04137 | Yan Zhang | Bruce Nguyen and Yan Zhang | A Comparative Study of Diabetes Prediction Based on Lifestyle Factors
Using Machine Learning | 5 pages, 2 figures, submitted CSCSU 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Diabetes is a prevalent chronic disease with significant health and economic
burdens worldwide. Early prediction and diagnosis can aid in effective
management and prevention of complications. This study explores the use of
machine learning models to predict diabetes based on lifestyle factors using
data from the Behavioral Risk Factor Surveillance System (BRFSS) 2015 survey.
The dataset consists of 21 lifestyle and health-related features, capturing
aspects such as physical activity, diet, mental health, and socioeconomic
status. Three classification models, Decision Tree, K-Nearest Neighbors (KNN),
and Logistic Regression, are implemented and evaluated to determine their
predictive performance. The models are trained and tested using a balanced
dataset, and their performances are assessed based on accuracy, precision,
recall, and F1-score. The results indicate that the Decision Tree, KNN, and
Logistic Regression achieve an accuracy of 0.74, 0.72, and 0.75, respectively,
with varying strengths in precision and recall. The findings highlight the
potential of machine learning in diabetes prediction and suggest future
improvements through feature selection and ensemble learning techniques.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 06:31:40 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Nguyen",
"Bruce",
""
],
[
"Zhang",
"Yan",
""
]
]
| TITLE: A Comparative Study of Diabetes Prediction Based on Lifestyle Factors
Using Machine Learning
ABSTRACT: Diabetes is a prevalent chronic disease with significant health and economic
burdens worldwide. Early prediction and diagnosis can aid in effective
management and prevention of complications. This study explores the use of
machine learning models to predict diabetes based on lifestyle factors using
data from the Behavioral Risk Factor Surveillance System (BRFSS) 2015 survey.
The dataset consists of 21 lifestyle and health-related features, capturing
aspects such as physical activity, diet, mental health, and socioeconomic
status. Three classification models, Decision Tree, K-Nearest Neighbors (KNN),
and Logistic Regression, are implemented and evaluated to determine their
predictive performance. The models are trained and tested using a balanced
dataset, and their performances are assessed based on accuracy, precision,
recall, and F1-score. The results indicate that the Decision Tree, KNN, and
Logistic Regression achieve an accuracy of 0.74, 0.72, and 0.75, respectively,
with varying strengths in precision and recall. The findings highlight the
potential of machine learning in diabetes prediction and suggest future
improvements through feature selection and ensemble learning techniques.
| no_new_dataset | 0.909747 |
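The record above compares three standard classifiers. A minimal scikit-learn sketch of the same comparison, with synthetic data standing in for the BRFSS 2015 features; the study's actual preprocessing and balanced dataset are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for 21 lifestyle/health features with a binary outcome.
X, y = make_classification(n_samples=2000, n_features=21, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=6, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=15),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.2f}, f1={f1_score(y_te, pred):.2f}")
```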
2503.04143 | Fengchen Gu | Fengchen Gu, Zhengyong Jiang, \'Angel F. Garc\'ia-Fern\'andez, Angelos
Stefanidis, Jionglong Su, Huakang Li | MTS: A Deep Reinforcement Learning Portfolio Management Framework with
Time-Awareness and Short-Selling | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Portfolio management remains a crucial challenge in finance, with traditional
methods often falling short in complex and volatile market environments. While
deep reinforcement learning approaches have shown promise, they still face limitations
in dynamic risk management, exploitation of temporal markets, and incorporation
of complex trading strategies such as short-selling. These limitations can lead
to suboptimal portfolio performance, increased vulnerability to market
volatility, and missed opportunities in capturing potential returns from
diverse market conditions. This paper introduces a Deep Reinforcement Learning
Portfolio Management Framework with Time-Awareness and Short-Selling (MTS),
offering a robust and adaptive strategy for sustainable investment performance.
This framework utilizes a novel encoder-attention mechanism to address the
limitations by incorporating temporal market characteristics, a parallel
strategy for automated short-selling based on market trends, and risk
management through innovative Incremental Conditional Value at Risk, enhancing
adaptability and performance. Experimental validation on five diverse datasets
from 2019 to 2023 demonstrates MTS's superiority over traditional algorithms
and advanced machine learning techniques. MTS consistently achieves higher
cumulative returns, Sharpe, Omega, and Sortino ratios, underscoring its
effectiveness in balancing risk and return while adapting to market dynamics.
MTS demonstrates an average relative increase of 30.67% in cumulative returns
and 29.33% in Sharpe ratio compared to the next best-performing strategies
across various datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 06:41:17 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Gu",
"Fengchen",
""
],
[
"Jiang",
"Zhengyong",
""
],
[
"García-Fernández",
"Ángel F.",
""
],
[
"Stefanidis",
"Angelos",
""
],
[
"Su",
"Jionglong",
""
],
[
"Li",
"Huakang",
""
]
]
| TITLE: MTS: A Deep Reinforcement Learning Portfolio Management Framework with
Time-Awareness and Short-Selling
ABSTRACT: Portfolio management remains a crucial challenge in finance, with traditional
methods often falling short in complex and volatile market environments. While
deep reinforcement learning approaches have shown promise, they still face limitations
in dynamic risk management, exploitation of temporal markets, and incorporation
of complex trading strategies such as short-selling. These limitations can lead
to suboptimal portfolio performance, increased vulnerability to market
volatility, and missed opportunities in capturing potential returns from
diverse market conditions. This paper introduces a Deep Reinforcement Learning
Portfolio Management Framework with Time-Awareness and Short-Selling (MTS),
offering a robust and adaptive strategy for sustainable investment performance.
This framework utilizes a novel encoder-attention mechanism to address the
limitations by incorporating temporal market characteristics, a parallel
strategy for automated short-selling based on market trends, and risk
management through innovative Incremental Conditional Value at Risk, enhancing
adaptability and performance. Experimental validation on five diverse datasets
from 2019 to 2023 demonstrates MTS's superiority over traditional algorithms
and advanced machine learning techniques. MTS consistently achieves higher
cumulative returns, Sharpe, Omega, and Sortino ratios, underscoring its
effectiveness in balancing risk and return while adapting to market dynamics.
MTS demonstrates an average relative increase of 30.67% in cumulative returns
and 29.33% in Sharpe ratio compared to the next best-performing strategies
across various datasets.
| no_new_dataset | 0.947575 |
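The record above introduces an Incremental Conditional Value at Risk. That variant is the paper's own contribution and is not reproduced here; the sketch below shows only the standard CVaR it builds on, computed from a toy return sample at an assumed 95% level.

```python
import numpy as np

def cvar(returns, alpha=0.95):
    """Expected loss in the worst (1 - alpha) tail of a return sample."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)      # Value at Risk threshold
    return float(losses[losses >= var].mean())

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=1000)  # toy return series
print(f"95% CVaR: {cvar(daily_returns):.4f}")
```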
2503.04149 | Simin Chen | Simin Chen, Pranav Pusarla, Baishakhi Ray | Dynamic Benchmarking of Reasoning Capabilities in Code Large Language
Models Under Data Contamination | https://codekaleidoscope.github.io/dycodeeval.html | null | null | null | cs.SE cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid evolution of code large language models underscores the need for
effective and transparent benchmarking of their reasoning capabilities.
However, the current benchmarking approach heavily depends on publicly
available, human-created datasets. The widespread use of these fixed benchmark
datasets makes the benchmarking process static and thus particularly
susceptible to data contamination, an unavoidable consequence of the extensive
data collection processes used to train Code LLMs. Existing approaches that
address data contamination often suffer from human effort limitations and
imbalanced problem complexity. To tackle these challenges, we propose \tool, a
novel benchmarking suite for evaluating Code LLMs under potential data
contamination. Given a seed programming problem, \tool employs multiple agents
to extract and modify the context without altering the core logic, generating
semantically equivalent variations. We introduce a dynamic data generation
method and conduct empirical studies on two seed datasets across 21 Code LLMs.
Results show that \tool effectively benchmarks reasoning capabilities under
contamination risks while generating diverse problem sets to ensure consistent
and reliable evaluations.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 06:56:59 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Chen",
"Simin",
""
],
[
"Pusarla",
"Pranav",
""
],
[
"Ray",
"Baishakhi",
""
]
]
| TITLE: Dynamic Benchmarking of Reasoning Capabilities in Code Large Language
Models Under Data Contamination
ABSTRACT: The rapid evolution of code large language models underscores the need for
effective and transparent benchmarking of their reasoning capabilities.
However, the current benchmarking approach heavily depends on publicly
available, human-created datasets. The widespread use of these fixed benchmark
datasets makes the benchmarking process static and thus particularly
susceptible to data contamination, an unavoidable consequence of the extensive
data collection processes used to train Code LLMs. Existing approaches that
address data contamination often suffer from human effort limitations and
imbalanced problem complexity. To tackle these challenges, we propose \tool, a
novel benchmarking suite for evaluating Code LLMs under potential data
contamination. Given a seed programming problem, \tool employs multiple agents
to extract and modify the context without altering the core logic, generating
semantically equivalent variations. We introduce a dynamic data generation
method and conduct empirical studies on two seed datasets across 21 Code LLMs.
Results show that \tool effectively benchmarks reasoning capabilities under
contamination risks while generating diverse problem sets to ensure consistent
and reliable evaluations.
| no_new_dataset | 0.895477 |
2503.04151 | Jie Xu | Jie Xu, Na Zhao, Gang Niu, Masashi Sugiyama, Xiaofeng Zhu | Robust Multi-View Learning via Representation Fusion of Sample-Level
Attention and Alignment of Simulated Perturbation | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, multi-view learning (MVL) has garnered significant attention due to
its ability to fuse discriminative information from multiple views. However,
real-world multi-view datasets are often heterogeneous and imperfect, which
usually leaves MVL methods designed for specific combinations of views with
limited applicability and effectiveness. To address this issue, we
propose a novel robust MVL method (namely RML) with simultaneous representation
fusion and alignment. Specifically, we introduce a simple yet effective
multi-view transformer fusion network where we transform heterogeneous
multi-view data into homogeneous word embeddings, and then integrate multiple
views by the sample-level attention mechanism to obtain a fused representation.
Furthermore, we propose a simulated perturbation based multi-view contrastive
learning framework that dynamically generates the noise and unusable
perturbations for simulating imperfect data conditions. The simulated noisy and
unusable data obtain two distinct fused representations, and we utilize
contrastive learning to align them for learning discriminative and robust
representations. Our RML is self-supervised and can also be applied for
downstream tasks as a regularization. In experiments, we employ it in
unsupervised multi-view clustering, noise-label classification, and as a
plug-and-play module for cross-modal hashing retrieval. Extensive comparison
experiments and ablation studies validate the effectiveness of RML.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:01:08 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Xu",
"Jie",
""
],
[
"Zhao",
"Na",
""
],
[
"Niu",
"Gang",
""
],
[
"Sugiyama",
"Masashi",
""
],
[
"Zhu",
"Xiaofeng",
""
]
]
| TITLE: Robust Multi-View Learning via Representation Fusion of Sample-Level
Attention and Alignment of Simulated Perturbation
ABSTRACT: Recently, multi-view learning (MVL) has garnered significant attention due to
its ability to fuse discriminative information from multiple views. However,
real-world multi-view datasets are often heterogeneous and imperfect, which
usually leaves MVL methods designed for specific combinations of views with
limited applicability and effectiveness. To address this issue, we
propose a novel robust MVL method (namely RML) with simultaneous representation
fusion and alignment. Specifically, we introduce a simple yet effective
multi-view transformer fusion network where we transform heterogeneous
multi-view data into homogeneous word embeddings, and then integrate multiple
views by the sample-level attention mechanism to obtain a fused representation.
Furthermore, we propose a simulated perturbation based multi-view contrastive
learning framework that dynamically generates the noise and unusable
perturbations for simulating imperfect data conditions. The simulated noisy and
unusable data obtain two distinct fused representations, and we utilize
contrastive learning to align them for learning discriminative and robust
representations. Our RML is self-supervised and can also be applied for
downstream tasks as a regularization. In experiments, we employ it in
unsupervised multi-view clustering, noise-label classification, and as a
plug-and-play module for cross-modal hashing retrieval. Extensive comparison
experiments and ablation studies validate the effectiveness of RML.
| no_new_dataset | 0.94366 |
2503.04155 | Chi Hang | Chi Hang, Ruiqi Deng, Lavender Yao Jiang, Zihao Yang, Anton Alyakin,
Daniel Alber, Eric Karl Oermann | BPQA Dataset: Evaluating How Well Language Models Leverage Blood
Pressures to Answer Biomedical Questions | 9 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Clinical measurements such as blood pressures and respiration rates are
critical in diagnosing and monitoring patient outcomes. They are an important
component of biomedical data, which can be used to train transformer-based
language models (LMs) for improving healthcare delivery. It is, however,
unclear whether LMs can effectively interpret and use clinical measurements. We
investigate two questions: First, can LMs effectively leverage clinical
measurements to answer related medical questions? Second, how to enhance an
LM's performance on medical question-answering (QA) tasks that involve
measurements? We performed a case study on blood pressure readings (BPs), a
vital sign routinely monitored by medical professionals. We evaluated the
performance of four LMs: BERT, BioBERT, MedAlpaca, and GPT-3.5, on our newly
developed dataset, BPQA (Blood Pressure Question Answering). BPQA contains
$100$ medical QA pairs that were verified by medical students and designed to
rely on BPs . We found that GPT-3.5 and MedAlpaca (larger and medium sized LMs)
benefit more from the inclusion of BPs than BERT and BioBERT (small sized LMs).
Further, augmenting measurements with labels improves the performance of
BioBERT and Medalpaca (domain specific LMs), suggesting that retrieval may be
useful for improving domain-specific LMs.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:06:46 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Hang",
"Chi",
""
],
[
"Deng",
"Ruiqi",
""
],
[
"Jiang",
"Lavender Yao",
""
],
[
"Yang",
"Zihao",
""
],
[
"Alyakin",
"Anton",
""
],
[
"Alber",
"Daniel",
""
],
[
"Oermann",
"Eric Karl",
""
]
]
| TITLE: BPQA Dataset: Evaluating How Well Language Models Leverage Blood
Pressures to Answer Biomedical Questions
ABSTRACT: Clinical measurements such as blood pressures and respiration rates are
critical in diagnosing and monitoring patient outcomes. They are an important
component of biomedical data, which can be used to train transformer-based
language models (LMs) for improving healthcare delivery. It is, however,
unclear whether LMs can effectively interpret and use clinical measurements. We
investigate two questions: First, can LMs effectively leverage clinical
measurements to answer related medical questions? Second, how to enhance an
LM's performance on medical question-answering (QA) tasks that involve
measurements? We performed a case study on blood pressure readings (BPs), a
vital sign routinely monitored by medical professionals. We evaluated the
performance of four LMs: BERT, BioBERT, MedAlpaca, and GPT-3.5, on our newly
developed dataset, BPQA (Blood Pressure Question Answering). BPQA contains
$100$ medical QA pairs that were verified by medical students and designed to
rely on BPs. We found that GPT-3.5 and MedAlpaca (large- and medium-sized LMs)
benefit more from the inclusion of BPs than BERT and BioBERT (small-sized LMs).
Further, augmenting measurements with labels improves the performance of
BioBERT and MedAlpaca (domain-specific LMs), suggesting that retrieval may be
useful for improving domain-specific LMs.
| new_dataset | 0.95995 |
2503.04156 | Yuan Liao | Yuan Liao, Yuhong Zhang, Qiushi Han, Yuhang Yang, Weiwei Ding, Yuzhe
Gu, Hengxin Yang, and Liya Huang | Frequency-Based Alignment of EEG and Audio Signals Using Contrastive
Learning and SincNet for Auditory Attention Detection | null | null | null | null | eess.SP cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Humans exhibit a remarkable ability to focus auditory attention in complex
acoustic environments, such as cocktail parties. Auditory attention detection
(AAD) aims to identify the attended speaker by analyzing brain signals, such as
electroencephalography (EEG) data. Existing AAD algorithms often leverage deep
learning's powerful nonlinear modeling capabilities, but few consider the neural
mechanisms underlying auditory processing in the brain. In this paper, we
propose SincAlignNet, a novel network based on an improved SincNet and
contrastive learning, designed to align audio and EEG features for auditory
attention detection. The SincNet component simulates the brain's processing of
audio during auditory attention, while contrastive learning guides the model to
learn the relationship between EEG signals and attended speech. During
inference, we calculate the cosine similarity between EEG and audio features
and also explore direct inference of the attended speaker using EEG data.
Cross-trial evaluation results demonstrate that SincAlignNet outperforms
state-of-the-art AAD methods on two publicly available datasets, KUL and DTU,
achieving average accuracies of 78.3% and 92.2%, respectively, with a 1-second
decision window. The model exhibits strong interpretability, revealing that the
left and right temporal lobes are more active during both male and female
speaker scenarios. Furthermore, we found that using data from only six
electrodes near the temporal lobes maintains similar or even better performance
compared to using 64 electrodes. These findings indicate that efficient
low-density EEG online decoding is achievable, marking an important step toward
the practical implementation of neuro-guided hearing aids in real-world
applications. Code is available at: https://github.com/LiaoEuan/SincAlignNet.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:11:01 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Liao",
"Yuan",
""
],
[
"Zhang",
"Yuhong",
""
],
[
"Han",
"Qiushi",
""
],
[
"Yang",
"Yuhang",
""
],
[
"Ding",
"Weiwei",
""
],
[
"Gu",
"Yuzhe",
""
],
[
"Yang",
"Hengxin",
""
],
[
"Huang",
"Liya",
""
]
]
| TITLE: Frequency-Based Alignment of EEG and Audio Signals Using Contrastive
Learning and SincNet for Auditory Attention Detection
ABSTRACT: Humans exhibit a remarkable ability to focus auditory attention in complex
acoustic environments, such as cocktail parties. Auditory attention detection
(AAD) aims to identify the attended speaker by analyzing brain signals, such as
electroencephalography (EEG) data. Existing AAD algorithms often leverage deep
learning's powerful nonlinear modeling capabilities, but few consider the neural
mechanisms underlying auditory processing in the brain. In this paper, we
propose SincAlignNet, a novel network based on an improved SincNet and
contrastive learning, designed to align audio and EEG features for auditory
attention detection. The SincNet component simulates the brain's processing of
audio during auditory attention, while contrastive learning guides the model to
learn the relationship between EEG signals and attended speech. During
inference, we calculate the cosine similarity between EEG and audio features
and also explore direct inference of the attended speaker using EEG data.
Cross-trial evaluation results demonstrate that SincAlignNet outperforms
state-of-the-art AAD methods on two publicly available datasets, KUL and DTU,
achieving average accuracies of 78.3% and 92.2%, respectively, with a 1-second
decision window. The model exhibits strong interpretability, revealing that the
left and right temporal lobes are more active during both male and female
speaker scenarios. Furthermore, we found that using data from only six
electrodes near the temporal lobes maintains similar or even better performance
compared to using 64 electrodes. These findings indicate that efficient
low-density EEG online decoding is achievable, marking an important step toward
the practical implementation of neuro-guided hearing aids in real-world
applications. Code is available at: https://github.com/LiaoEuan/SincAlignNet.
| no_new_dataset | 0.946001 |
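The record above scores candidate speakers by cosine similarity between EEG and audio features at inference time. A minimal sketch of that step, with random vectors standing in for the encoder outputs and arbitrary embedding sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
eeg_emb = rng.normal(size=128)            # stand-in for the EEG encoder output
audio_embs = rng.normal(size=(2, 128))    # stand-ins for two candidate speakers

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(eeg_emb, s) for s in audio_embs]
print("similarities:", [round(s, 3) for s in scores])
print("attended speaker:", int(np.argmax(scores)))
```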
2503.04160 | Shuzhi Gong | Shuzhi Gong, Richard Sinnott, Jianzhong Qi, Cecile Paris | Unseen Fake News Detection Through Causal Debiasing | 2025 The Web Conference, 6 pages, 4 figures | null | null | null | cs.SI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread dissemination of fake news on social media poses significant
risks, necessitating timely and accurate detection. However, existing methods
struggle with unseen news due to their reliance on training data from past
events and domains, leaving the challenge of detecting novel fake news largely
unresolved. To address this, we identify biases in training data tied to
specific domains and propose a debiasing solution FNDCD. Originating from
causal analysis, FNDCD employs a reweighting strategy based on classification
confidence and propagation structure regularization to reduce the influence of
domain-specific biases, enhancing the detection of unseen fake news.
Experiments on real-world datasets with non-overlapping news domains
demonstrate FNDCD's effectiveness in improving generalization across domains.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:23:44 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Gong",
"Shuzhi",
""
],
[
"Sinnott",
"Richard",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Paris",
"Cecile",
""
]
]
| TITLE: Unseen Fake News Detection Through Causal Debiasing
ABSTRACT: The widespread dissemination of fake news on social media poses significant
risks, necessitating timely and accurate detection. However, existing methods
struggle with unseen news due to their reliance on training data from past
events and domains, leaving the challenge of detecting novel fake news largely
unresolved. To address this, we identify biases in training data tied to
specific domains and propose a debiasing solution FNDCD. Originating from
causal analysis, FNDCD employs a reweighting strategy based on classification
confidence and propagation structure regularization to reduce the influence of
domain-specific biases, enhancing the detection of unseen fake news.
Experiments on real-world datasets with non-overlapping news domains
demonstrate FNDCD's effectiveness in improving generalization across domains.
| no_new_dataset | 0.949669 |
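The record above reweights training samples by classification confidence to reduce domain bias. A minimal PyTorch sketch of one common confidence-based reweighting (down-weighting over-confident samples); FNDCD's exact weighting scheme and its propagation-structure regularizer are not reproduced here.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels):
    """Cross-entropy where each sample is weighted by 1 - its predicted confidence."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    with torch.no_grad():
        conf = F.softmax(logits, dim=-1).gather(1, labels[:, None]).squeeze(1)
        weights = 1.0 - conf              # over-confident samples count less
    return (weights * per_sample).mean()

logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
reweighted_loss(logits, labels).backward()  # gradients flow as usual
print(logits.grad.shape)
```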
2503.04162 | Ziqiang Cui | Ziqiang Cui, Yunpeng Weng, Xing Tang, Xiaokun Zhang, Dugang Liu,
Shiwei Li, Peiyang Liu, Bowei He, Weihong Luo, Xiuqiang He, Chen Ma | Semantic Retrieval Augmented Contrastive Learning for Sequential
Recommendation | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential recommendation aims to model user preferences based on historical
behavior sequences, which is crucial for various online platforms. Data
sparsity remains a significant challenge in this area as most users have
limited interactions and many items receive little attention. To mitigate this
issue, contrastive learning has been widely adopted. By constructing positive
sample pairs from the data itself and maximizing their agreement in the
embedding space,it can leverage available data more effectively. Constructing
reasonable positive sample pairs is crucial for the success of contrastive
learning. However, current approaches struggle to generate reliable positive
pairs as they either rely on representations learned from inherently sparse
collaborative signals or use random perturbations which introduce significant
uncertainty. To address these limitations, we propose a novel approach named
Semantic Retrieval Augmented Contrastive Learning (SRA-CL), which leverages
semantic information to improve the reliability of contrastive samples. SRA-CL
comprises two main components: (1) Cross-Sequence Contrastive Learning via User
Semantic Retrieval, which utilizes large language models (LLMs) to understand
diverse user preferences and retrieve semantically similar users to form
reliable positive samples through a learnable sample synthesis method; and (2)
Intra-Sequence Contrastive Learning via Item Semantic Retrieval, which employs
LLMs to comprehend items and retrieve similar items to perform semantic-based
item substitution, thereby creating semantically consistent augmented views for
contrastive learning. SRA-CL is plug-and-play and can be integrated into
standard sequential recommendation models. Extensive experiments on four public
datasets demonstrate the effectiveness and generalizability of the proposed
approach.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:25:19 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Cui",
"Ziqiang",
""
],
[
"Weng",
"Yunpeng",
""
],
[
"Tang",
"Xing",
""
],
[
"Zhang",
"Xiaokun",
""
],
[
"Liu",
"Dugang",
""
],
[
"Li",
"Shiwei",
""
],
[
"Liu",
"Peiyang",
""
],
[
"He",
"Bowei",
""
],
[
"Luo",
"Weihong",
""
],
[
"He",
"Xiuqiang",
""
],
[
"Ma",
"Chen",
""
]
]
| TITLE: Semantic Retrieval Augmented Contrastive Learning for Sequential
Recommendation
ABSTRACT: Sequential recommendation aims to model user preferences based on historical
behavior sequences, which is crucial for various online platforms. Data
sparsity remains a significant challenge in this area as most users have
limited interactions and many items receive little attention. To mitigate this
issue, contrastive learning has been widely adopted. By constructing positive
sample pairs from the data itself and maximizing their agreement in the
embedding space, it can leverage available data more effectively. Constructing
reasonable positive sample pairs is crucial for the success of contrastive
learning. However, current approaches struggle to generate reliable positive
pairs as they either rely on representations learned from inherently sparse
collaborative signals or use random perturbations which introduce significant
uncertainty. To address these limitations, we propose a novel approach named
Semantic Retrieval Augmented Contrastive Learning (SRA-CL), which leverages
semantic information to improve the reliability of contrastive samples. SRA-CL
comprises two main components: (1) Cross-Sequence Contrastive Learning via User
Semantic Retrieval, which utilizes large language models (LLMs) to understand
diverse user preferences and retrieve semantically similar users to form
reliable positive samples through a learnable sample synthesis method; and (2)
Intra-Sequence Contrastive Learning via Item Semantic Retrieval, which employs
LLMs to comprehend items and retrieve similar items to perform semantic-based
item substitution, thereby creating semantically consistent augmented views for
contrastive learning. SRA-CL is plug-and-play and can be integrated into
standard sequential recommendation models. Extensive experiments on four public
datasets demonstrate the effectiveness and generalizability of the proposed
approach.
| no_new_dataset | 0.951233 |
2503.04165 | Bodong Zhang | Bodong Zhang, Hamid Manoochehri, Beatrice S. Knudsen, Tolga Tasdizen | WeakSupCon: Weakly Supervised Contrastive Learning for Encoder
Pre-training | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly supervised multiple instance learning (MIL) is a challenging task
given that only bag-level labels are provided, while each bag typically
contains multiple instances. This topic has been extensively studied in
histopathological image analysis, where labels are usually available only at
the whole slide image (WSI) level, while each whole slide image can be divided
into thousands of small image patches for training. The dominant MIL approaches
take fixed patch features as inputs to address computational constraints and
ensure model stability. These features are commonly generated by encoders
pre-trained on ImageNet, foundation encoders pre-trained on large datasets, or
through self-supervised learning on local datasets. While the self-supervised
encoder pre-training on the same dataset as downstream MIL tasks helps mitigate
domain shift and generate better features, the bag-level labels are not
utilized during the process, and the features of patches from different
categories may cluster together, reducing classification performance on MIL
tasks. Recently, pre-training with supervised contrastive learning (SupCon) has
demonstrated superior performance compared to self-supervised contrastive
learning and even end-to-end training on traditional image classification
tasks. In this paper, we propose a novel encoder pre-training method for
downstream MIL tasks called Weakly Supervised Contrastive Learning (WeakSupCon)
that utilizes bag-level labels. In our method, we employ multi-task learning
and define distinct contrastive learning losses for samples with different bag
labels. Our experiments demonstrate that the features generated using
WeakSupCon significantly enhance MIL classification performance compared to
self-supervised approaches across three datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:25:43 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhang",
"Bodong",
""
],
[
"Manoochehri",
"Hamid",
""
],
[
"Knudsen",
"Beatrice S.",
""
],
[
"Tasdizen",
"Tolga",
""
]
]
| TITLE: WeakSupCon: Weakly Supervised Contrastive Learning for Encoder
Pre-training
ABSTRACT: Weakly supervised multiple instance learning (MIL) is a challenging task
given that only bag-level labels are provided, while each bag typically
contains multiple instances. This topic has been extensively studied in
histopathological image analysis, where labels are usually available only at
the whole slide image (WSI) level, while each whole slide image can be divided
into thousands of small image patches for training. The dominant MIL approaches
take fixed patch features as inputs to address computational constraints and
ensure model stability. These features are commonly generated by encoders
pre-trained on ImageNet, foundation encoders pre-trained on large datasets, or
through self-supervised learning on local datasets. While the self-supervised
encoder pre-training on the same dataset as downstream MIL tasks helps mitigate
domain shift and generate better features, the bag-level labels are not
utilized during the process, and the features of patches from different
categories may cluster together, reducing classification performance on MIL
tasks. Recently, pre-training with supervised contrastive learning (SupCon) has
demonstrated superior performance compared to self-supervised contrastive
learning and even end-to-end training on traditional image classification
tasks. In this paper, we propose a novel encoder pre-training method for
downstream MIL tasks called Weakly Supervised Contrastive Learning (WeakSupCon)
that utilizes bag-level labels. In our method, we employ multi-task learning
and define distinct contrastive learning losses for samples with different bag
labels. Our experiments demonstrate that the features generated using
WeakSupCon significantly enhance MIL classification performance compared to
self-supervised approaches across three datasets.
| no_new_dataset | 0.953101 |
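The record above defines contrastive losses over bag-level labels. For reference, a minimal sketch of the standard supervised contrastive (SupCon) loss it builds on, where samples sharing a label are positives; the paper's multi-task variant with distinct per-bag-label losses is not reproduced here.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive loss; features are (N, D) L2-normalized embeddings."""
    sim = features @ features.T / tau
    eye = torch.eye(len(features), dtype=torch.bool)
    sim = sim.masked_fill(eye, float("-inf"))            # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]).float().masked_fill(eye, 0.0)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

feats = F.normalize(torch.randn(16, 64), dim=1)  # toy patch embeddings
bags = torch.randint(0, 2, (16,))                # bag-level labels per patch
print(float(supcon_loss(feats, bags)))
```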
2503.04167 | Yufang Liu | Yufang Liu, Yao Du, Tao Ji, Jianing Wang, Yang Liu, Yuanbin Wu, Aimin
Zhou, Mengdi Zhang, Xunliang Cai | The Role of Visual Modality in Multimodal Mathematical Reasoning:
Challenges and Insights | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent research has increasingly focused on multimodal mathematical
reasoning, particularly emphasizing the creation of relevant datasets and
benchmarks. Despite this, the role of visual information in reasoning has been
underexplored. Our findings show that existing multimodal mathematical models
minimally leverage visual information, and model performance remains largely
unaffected by changes to or removal of images in the dataset. We attribute this
to the dominance of textual information and answer options that inadvertently
guide the model to correct answers. To improve evaluation methods, we introduce
the HC-M3D dataset, specifically designed to require image reliance for
problem-solving and to challenge models with similar, yet distinct, images that
change the correct answer. In testing leading models, their failure to detect
these subtle visual differences suggests limitations in current visual
perception capabilities. Additionally, we observe that the common approach of
improving general VQA capabilities by combining various types of image encoders
does not contribute to math reasoning performance. This finding also presents a
challenge to enhancing visual reliance during math reasoning. Our benchmark and
code will be available at
\href{https://github.com/Yufang-Liu/visual_modality_role}{https://github.com/Yufang-Liu/visual\_modality\_role}.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:29:33 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Liu",
"Yufang",
""
],
[
"Du",
"Yao",
""
],
[
"Ji",
"Tao",
""
],
[
"Wang",
"Jianing",
""
],
[
"Liu",
"Yang",
""
],
[
"Wu",
"Yuanbin",
""
],
[
"Zhou",
"Aimin",
""
],
[
"Zhang",
"Mengdi",
""
],
[
"Cai",
"Xunliang",
""
]
]
| TITLE: The Role of Visual Modality in Multimodal Mathematical Reasoning:
Challenges and Insights
ABSTRACT: Recent research has increasingly focused on multimodal mathematical
reasoning, particularly emphasizing the creation of relevant datasets and
benchmarks. Despite this, the role of visual information in reasoning has been
underexplored. Our findings show that existing multimodal mathematical models
minimally leverage visual information, and model performance remains largely
unaffected by changes to or removal of images in the dataset. We attribute this
to the dominance of textual information and answer options that inadvertently
guide the model to correct answers. To improve evaluation methods, we introduce
the HC-M3D dataset, specifically designed to require image reliance for
problem-solving and to challenge models with similar, yet distinct, images that
change the correct answer. In testing leading models, their failure to detect
these subtle visual differences suggests limitations in current visual
perception capabilities. Additionally, we observe that the common approach of
improving general VQA capabilities by combining various types of image encoders
does not contribute to math reasoning performance. This finding also presents a
challenge to enhancing visual reliance during math reasoning. Our benchmark and
code will be available at
\href{https://github.com/Yufang-Liu/visual_modality_role}{https://github.com/Yufang-Liu/visual\_modality\_role}.
| new_dataset | 0.966315 |
2503.04178 | Evgeniy Eremin | Evgeniy Eremin | Unsupervised anomaly detection on cybersecurity data streams: a case
with BETH dataset | null | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the modern world, the importance of cybersecurity for various systems is
increasing year by year. The number of information security events
generated by information security tools grows with the development of the IT
infrastructure. At the same time, the cyber threat landscape does not remain
constant, and monitoring should take into account both already known attack
indicators and those for which there are no signature rules in information
security products of various classes yet. Detecting anomalies in large
cybersecurity data streams is a complex task that, if properly addressed, can
allow for timely response to atypical and previously unknown cyber threats. The
possibility of using offline algorithms may be limited for a number of
reasons related to the time of training and the frequency of retraining. Using
stream learning algorithms for this task can provide
near-real-time data processing. This article examines the results of ten
algorithms from three Python stream machine-learning libraries on BETH dataset
with cybersecurity events, which contains information about the creation,
cloning, and destruction of operating system processes collected using extended
eBPF. The ROC-AUC metric and total processing time for these
algorithms are presented. Several combinations of features and the order of
events are considered. In conclusion, the most promising algorithms are
highlighted and possible directions for further research are outlined.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 07:45:48 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Eremin",
"Evgeniy",
""
]
]
| TITLE: Unsupervised anomaly detection on cybersecurity data streams: a case
with BETH dataset
ABSTRACT: In the modern world, the importance of cybersecurity for various systems is
increasing year by year. The number of information security events
generated by information security tools grows with the development of the IT
infrastructure. At the same time, the cyber threat landscape does not remain
constant, and monitoring should take into account both already known attack
indicators and those for which there are no signature rules in information
security products of various classes yet. Detecting anomalies in large
cybersecurity data streams is a complex task that, if properly addressed, can
allow for timely response to atypical and previously unknown cyber threats. The
possibility of using offline algorithms may be limited for a number of
reasons related to the time of training and the frequency of retraining. Using
stream learning algorithms for this task can provide
near-real-time data processing. This article examines the results of ten
algorithms from three Python stream machine-learning libraries on BETH dataset
with cybersecurity events, which contains information about the creation,
cloning, and destruction of operating system processes collected using extended
eBPF. The ROC-AUC metric and total processing time for these
algorithms are presented. Several combinations of features and the order of
events are considered. In conclusion, the most promising algorithms are
highlighted and possible directions for further research are outlined.
| no_new_dataset | 0.841435 |
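The record above benchmarks stream-learning anomaly detectors. A minimal sketch with the river library (pip install river) using one such detector, HalfSpaceTrees; the two-feature toy stream below is a made-up stand-in for BETH's process-event features, not the dataset's actual schema.

```python
import random
from river import anomaly, metrics

random.seed(0)
detector = anomaly.HalfSpaceTrees(seed=0)  # expects features scaled to [0, 1]
auc = metrics.ROCAUC()

for i in range(1000):
    is_anom = i % 50 == 0                  # rare injected anomalies
    lo, hi = (0.8, 1.0) if is_anom else (0.0, 0.4)
    x = {"f1": random.uniform(lo, hi), "f2": random.uniform(lo, hi)}
    score = detector.score_one(x)          # score before learning (prequential)
    detector.learn_one(x)
    auc.update(is_anom, score)

print(auc)
```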
2503.04190 | Yuyan Wu | Yuyan Wu, Yiwen Dong, Sumer Vaid, Gabriella M. Harari and Hae Young
Noh | Personalized Emotion Detection from Floor Vibrations Induced by
Footsteps | null | null | null | null | eess.SY cs.HC cs.SY eess.SP | http://creativecommons.org/licenses/by/4.0/ | Emotion recognition is critical for various applications such as early
detection of mental health disorders and emotion-based smart home systems.
Previous studies used various sensing methods for emotion recognition, such as
wearable sensors, cameras, and microphones. However, these methods have
limitations in long-term domestic settings, including intrusiveness and privacy
concerns. To overcome these limitations, this paper introduces a nonintrusive
and privacy-friendly personalized emotion recognition system, EmotionVibe,
which leverages footstep induced floor vibrations for emotion recognition. The
main idea of EmotionVibe is that individuals' emotional states influence their
gait patterns, subsequently affecting the floor vibrations induced by their
footsteps. However, there are two main research challenges: 1) the complex and
indirect relationship between human emotions and footstep induced floor
vibrations and 2) the large between-person variations within the relationship
between emotions and gait patterns. To address these challenges, we first
empirically characterize this complex relationship and develop an emotion
sensitive feature set including gait related and vibration related features
from footstep induced floor vibrations. Furthermore, we personalize the emotion
recognition system for each user by calculating gait similarities between the
target person (i.e., the person whose emotions we aim to recognize) and those
in the training dataset and assigning greater weights to training people with
similar gait patterns in the loss function. We evaluated our system in a
real-world walking experiment with 20 participants, summing up to 37,001
footstep samples. EmotionVibe achieved the mean absolute error (MAE) of 1.11
and 1.07 for valence and arousal score estimations, respectively, reflecting
19.0% and 25.7% error reduction compared to the baseline method.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 08:04:43 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Wu",
"Yuyan",
""
],
[
"Dong",
"Yiwen",
""
],
[
"Vaid",
"Sumer",
""
],
[
"Harari",
"Gabriella M.",
""
],
[
"Noh",
"Hae Young",
""
]
]
| TITLE: Personalized Emotion Detection from Floor Vibrations Induced by
Footsteps
ABSTRACT: Emotion recognition is critical for various applications such as early
detection of mental health disorders and emotion-based smart home systems.
Previous studies used various sensing methods for emotion recognition, such as
wearable sensors, cameras, and microphones. However, these methods have
limitations in long-term domestic settings, including intrusiveness and privacy
concerns. To overcome these limitations, this paper introduces a non-intrusive
and privacy-friendly personalized emotion recognition system, EmotionVibe,
which leverages footstep-induced floor vibrations for emotion recognition. The
main idea of EmotionVibe is that individuals' emotional states influence their
gait patterns, subsequently affecting the floor vibrations induced by their
footsteps. However, there are two main research challenges: 1) the complex and
indirect relationship between human emotions and footstep-induced floor
vibrations and 2) the large between-person variations in the relationship
between emotions and gait patterns. To address these challenges, we first
empirically characterize this complex relationship and develop an
emotion-sensitive feature set including gait-related and vibration-related
features from footstep-induced floor vibrations. Furthermore, we personalize
the emotion recognition system for each user by calculating gait similarities
between the target person (i.e., the person whose emotions we aim to recognize)
and those in the training dataset and assigning greater weights to training
people with similar gait patterns in the loss function. We evaluated our system
in a real-world walking experiment with 20 participants, amounting to 37,001
footstep samples. EmotionVibe achieved mean absolute errors (MAE) of 1.11 and
1.07 for valence and arousal score estimations, respectively, reflecting 19.0%
and 25.7% error reductions compared to the baseline method.
| no_new_dataset | 0.927888 |
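A minimal sketch of the personalization idea described above, i.e., weighting training samples by gait similarity to the target person. The softmax weighting, the temperature `tau`, and the tensor shapes are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def personalized_weights(target_gait, train_gait, tau=1.0):
    # Cosine similarity between the target person's gait feature vector (D,)
    # and each training person's gait features (N, D), softmax-normalized.
    sims = F.cosine_similarity(train_gait, target_gait.unsqueeze(0), dim=1)
    return torch.softmax(sims / tau, dim=0)           # (N,)

def weighted_mae(pred, target, person_ids, weights):
    # Per-sample MAE over the (valence, arousal) dimensions, weighted by the
    # gait similarity of the person each sample belongs to.
    err = (pred - target).abs().mean(dim=1)           # (B,)
    w = weights[person_ids]                           # (B,)
    return (w * err).sum() / w.sum()
```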
2503.04201 | Bin Chen | Bin Chen, Yu Zhang, Hongfei Ye, Ziyi Huang, Hongyang Chen | Knowledge-Decoupled Synergetic Learning: An MLLM based Collaborative
Approach to Few-shot Multimodal Dialogue Intention Recognition | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Few-shot multimodal dialogue intention recognition is a critical challenge in
the e-commerce domain. Previous methods have primarily enhanced model
classification capabilities through post-training techniques. However, our
analysis reveals that training for few-shot multimodal dialogue intention
recognition involves two interconnected tasks, leading to a seesaw effect in
multi-task learning. This phenomenon is attributed to knowledge interference
stemming from the superposition of weight matrix updates during the training
process. To address these challenges, we propose Knowledge-Decoupled Synergetic
Learning (KDSL), which mitigates these issues by utilizing smaller models to
transform knowledge into interpretable rules, while applying the post-training
of larger models. By facilitating collaboration between the large and small
multimodal large language models for prediction, our approach demonstrates
significant improvements. Notably, we achieve outstanding results on two real
Taobao datasets, with enhancements of 6.37\% and 6.28\% in online weighted F1
scores compared to the state-of-the-art method, thereby validating the efficacy
of our framework.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 08:28:44 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Chen",
"Bin",
""
],
[
"Zhang",
"Yu",
""
],
[
"Ye",
"Hongfei",
""
],
[
"Huang",
"Ziyi",
""
],
[
"Chen",
"Hongyang",
""
]
]
| TITLE: Knowledge-Decoupled Synergetic Learning: An MLLM based Collaborative
Approach to Few-shot Multimodal Dialogue Intention Recognition
ABSTRACT: Few-shot multimodal dialogue intention recognition is a critical challenge in
the e-commerce domain. Previous methods have primarily enhanced model
classification capabilities through post-training techniques. However, our
analysis reveals that training for few-shot multimodal dialogue intention
recognition involves two interconnected tasks, leading to a seesaw effect in
multi-task learning. This phenomenon is attributed to knowledge interference
stemming from the superposition of weight matrix updates during the training
process. To address these challenges, we propose Knowledge-Decoupled Synergetic
Learning (KDSL), which mitigates these issues by utilizing smaller models to
transform knowledge into interpretable rules, while applying the post-training
of larger models. By facilitating collaboration between the large and small
multimodal large language models for prediction, our approach demonstrates
significant improvements. Notably, we achieve outstanding results on two real
Taobao datasets, with enhancements of 6.37\% and 6.28\% in online weighted F1
scores compared to the state-of-the-art method, thereby validating the efficacy
of our framework.
| no_new_dataset | 0.945851 |
2503.04204 | Md Zahid Hasan | Zhanhong Jiang, Md Zahid Hasan, Aditya Balu, Joshua R. Waite, Genyi
Huang, Soumik Sarkar | FUSE: First-Order and Second-Order Unified SynthEsis in Stochastic
Optimization | 6 pages, 7 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic optimization methods have actively been playing a critical role in
modern machine learning algorithms to deliver decent performance. While
numerous works have proposed and developed diverse approaches, first-order and
second-order methods occupy entirely different positions. The former is
pivotal and dominant in modern deep learning but only guarantees
convergence to a stationary point. However, second-order methods are less
popular due to their computational intensity in large-dimensional problems.
This paper presents a novel method that leverages both the first-order and
second-order methods in a unified algorithmic framework, termed FUSE, from
which a practical version (PV) is derived accordingly. FUSE-PV stands as a
simple yet efficient optimization method involving a switch-over between first
and second orders. Additionally, we develop different criteria that determine
when to switch. FUSE-PV has provably shown a smaller computational complexity
than SGD and Adam. To validate our proposed scheme, we present an ablation
study on several simple test functions and show a comparison with baselines for
benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 08:30:18 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Jiang",
"Zhanhong",
""
],
[
"Hasan",
"Md Zahid",
""
],
[
"Balu",
"Aditya",
""
],
[
"Waite",
"Joshua R.",
""
],
[
"Huang",
"Genyi",
""
],
[
"Sarkar",
"Soumik",
""
]
]
| TITLE: FUSE: First-Order and Second-Order Unified SynthEsis in Stochastic
Optimization
ABSTRACT: Stochastic optimization methods have actively been playing a critical role in
modern machine learning algorithms to deliver decent performance. While
numerous works have proposed and developed diverse approaches, first-order and
second-order methods occupy entirely different positions. The former is
pivotal and dominant in modern deep learning but only guarantees
convergence to a stationary point. However, second-order methods are less
popular due to their computational intensity in large-dimensional problems.
This paper presents a novel method that leverages both the first-order and
second-order methods in a unified algorithmic framework, termed FUSE, from
which a practical version (PV) is derived accordingly. FUSE-PV stands as a
simple yet efficient optimization method involving a switch-over between first
and second orders. Additionally, we develop different criteria that determine
when to switch. FUSE-PV is shown to have provably smaller computational complexity
than SGD and Adam. To validate our proposed scheme, we present an ablation
study on several simple test functions and show a comparison with baselines for
benchmark datasets.
| no_new_dataset | 0.946448 |
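A toy PyTorch illustration of the switch-over idea the abstract describes. The gradient-norm criterion, the learning rates, and the optimizer pairing (SGD then L-BFGS) are placeholder assumptions rather than FUSE-PV's actual rules.

```python
import torch

def train_switch_over(model, loss_fn, loader, grad_tol=1e-2, epochs=5):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)   # first-order phase
    switched = False
    for _ in range(epochs):
        for x, y in loader:
            def closure():
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                return loss
            opt.step(closure)   # SGD and LBFGS both accept a closure
            if not switched:
                gnorm = torch.sqrt(sum(p.grad.pow(2).sum()
                                       for p in model.parameters()
                                       if p.grad is not None))
                if gnorm < grad_tol:        # placeholder switch criterion
                    opt = torch.optim.LBFGS(model.parameters(), lr=0.5)
                    switched = True         # second-order phase from here on
    return model
```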
2503.04205 | Xingcan Hu | Xingcan Hu and Wei Wang and Li Xiao | Learning 3D Medical Image Models From Brain Functional Connectivity
Network Supervision For Mental Disorder Diagnosis | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In MRI-based mental disorder diagnosis, most previous studies focus on
functional connectivity network (FCN) derived from functional MRI (fMRI).
However, the small size of annotated fMRI datasets restricts its wide
application. Meanwhile, structural MRIs (sMRIs), such as 3D T1-weighted (T1w)
MRI, which are commonly used and readily accessible in clinical settings, are
often overlooked. To integrate the complementary information from both function
and structure for improved diagnostic accuracy, we propose CINP (Contrastive
Image-Network Pre-training), a framework that employs contrastive learning
between sMRI and FCN. During pre-training, we incorporate masked image modeling
and network-image matching to enhance visual representation learning and
modality alignment. Since the CINP facilitates knowledge transfer from FCN to
sMRI, we introduce network prompting. It utilizes only sMRI from suspected
patients and a small amount of FCNs from different patient classes for
diagnosing mental disorders, which is practical in real-world clinical
scenarios. The competitive performance on three mental disorder diagnosis tasks
demonstrates the effectiveness of the CINP in integrating multimodal MRI
information, as well as the potential of incorporating sMRI into clinical
diagnosis using network prompting.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 08:30:33 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Hu",
"Xingcan",
""
],
[
"Wang",
"Wei",
""
],
[
"Xiao",
"Li",
""
]
]
| TITLE: Learning 3D Medical Image Models From Brain Functional Connectivity
Network Supervision For Mental Disorder Diagnosis
ABSTRACT: In MRI-based mental disorder diagnosis, most previous studies focus on
functional connectivity network (FCN) derived from functional MRI (fMRI).
However, the small size of annotated fMRI datasets restricts its wide
application. Meanwhile, structural MRIs (sMRIs), such as 3D T1-weighted (T1w)
MRI, which are commonly used and readily accessible in clinical settings, are
often overlooked. To integrate the complementary information from both function
and structure for improved diagnostic accuracy, we propose CINP (Contrastive
Image-Network Pre-training), a framework that employs contrastive learning
between sMRI and FCN. During pre-training, we incorporate masked image modeling
and network-image matching to enhance visual representation learning and
modality alignment. Since the CINP facilitates knowledge transfer from FCN to
sMRI, we introduce network prompting. It utilizes only sMRI from suspected
patients and a small amount of FCNs from different patient classes for
diagnosing mental disorders, which is practical in real-world clinical
scenarios. The competitive performance on three mental disorder diagnosis tasks
demonstrates the effectiveness of the CINP in integrating multimodal MRI
information, as well as the potential of incorporating sMRI into clinical
diagnosis using network prompting.
| no_new_dataset | 0.951414 |
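The contrastive objective between paired sMRI and FCN embeddings is presumably CLIP-style; below is a standard symmetric InfoNCE sketch. The temperature and the assumption that batch rows are subject-aligned are illustrative, not confirmed details of CINP.

```python
import torch
import torch.nn.functional as F

def contrastive_image_network_loss(smri_emb, fcn_emb, temperature=0.07):
    # Row i of each batch comes from the same subject; all other rows in the
    # batch act as negatives (symmetric InfoNCE, as in CLIP).
    z_img = F.normalize(smri_emb, dim=1)
    z_net = F.normalize(fcn_emb, dim=1)
    logits = z_img @ z_net.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```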
2503.04222 | Yang Ziyi | Ziyi Yang, Fanqi Wan, Longguang Zhong, Canbin Huang, Guosheng Liang,
Xiaojun Quan | FuseChat-3.0: Preference Optimization Meets Heterogeneous Model Fusion | Technical report | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce FuseChat-3.0, a suite of large language models (LLMs) developed
by integrating the strengths of heterogeneous source LLMs into more compact
target LLMs. Our source models include the powerful Gemma-2-27B-it,
Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct.
For target models, we focus on three widely-used smaller
variants (Llama-3.1-8B-Instruct, Gemma-2-9B-it, and Qwen-2.5-7B-Instruct), along
with two ultra-compact options, Llama-3.2-3B-Instruct and
Llama-3.2-1B-Instruct. To leverage the diverse capabilities of these source
models, we develop a specialized data construction protocol tailored to various
tasks and domains. The FuseChat-3.0 training pipeline consists of two key
stages: (1) supervised fine-tuning (SFT) to align the target and source model
distributions, and (2) Direct Preference Optimization (DPO) to apply
preferences from multiple source LLMs to fine-tune the target model. The
resulting FuseChat-3.0 models exhibit significant performance gains across
tasks such as instruction following, general knowledge, mathematics, and
coding. As illustrated in Figure 1, using Llama-3.1-8B-Instruct as the target
model, our fusion approach achieves an average improvement of 6.8 points across
14 benchmarks. Moreover, it demonstrates remarkable gains of 37.1 points and
30.1 points on the instruction-following benchmarks AlpacaEval-2 and
Arena-Hard, respectively. Our code, models, and datasets are available at
https://github.com/SLIT-AI/FuseChat-3.0.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:03:36 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Yang",
"Ziyi",
""
],
[
"Wan",
"Fanqi",
""
],
[
"Zhong",
"Longguang",
""
],
[
"Huang",
"Canbin",
""
],
[
"Liang",
"Guosheng",
""
],
[
"Quan",
"Xiaojun",
""
]
]
| TITLE: FuseChat-3.0: Preference Optimization Meets Heterogeneous Model Fusion
ABSTRACT: We introduce FuseChat-3.0, a suite of large language models (LLMs) developed
by integrating the strengths of heterogeneous source LLMs into more compact
target LLMs. Our source models include the powerful Gemma-2-27B-it,
Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct.
For target models, we focus on three widely-used smaller
variants (Llama-3.1-8B-Instruct, Gemma-2-9B-it, and Qwen-2.5-7B-Instruct), along
with two ultra-compact options, Llama-3.2-3B-Instruct and
Llama-3.2-1B-Instruct. To leverage the diverse capabilities of these source
models, we develop a specialized data construction protocol tailored to various
tasks and domains. The FuseChat-3.0 training pipeline consists of two key
stages: (1) supervised fine-tuning (SFT) to align the target and source model
distributions, and (2) Direct Preference Optimization (DPO) to apply
preferences from multiple source LLMs to fine-tune the target model. The
resulting FuseChat-3.0 models exhibit significant performance gains across
tasks such as instruction following, general knowledge, mathematics, and
coding. As illustrated in Figure 1, using Llama-3.1-8B-Instruct as the target
model, our fusion approach achieves an average improvement of 6.8 points across
14 benchmarks. Moreover, it demonstrates remarkable gains of 37.1 points and
30.1 points on the instruction-following benchmarks AlpacaEval-2 and
Arena-Hard, respectively. Our code, models, and datasets are available at
https://github.com/SLIT-AI/FuseChat-3.0.
| no_new_dataset | 0.944893 |
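Stage 2 of the pipeline uses DPO; the standard DPO objective, computed from sequence log-probabilities under the trained policy and a frozen reference model, looks like the following. This is the textbook formulation, not FuseChat's training code, and `beta` is an assumed hyperparameter.

```python
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta=0.1):
    # Sequence log-probabilities of the preferred (chosen) and dispreferred
    # (rejected) responses under the policy and a frozen reference model.
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```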
2503.04231 | Roberto Pellungrini | Maciej Krzysztof Zuziak and Roberto Pellungrini and Salvatore
Rinzivillo | One-Shot Clustering for Federated Learning | null | 2024 IEEE International Conference on Big Data (BigData) | 10.1109/BigData62323.2024.10825763 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated Learning (FL) is a widespread and well-adopted paradigm of
decentralized learning that allows training one model from multiple sources
without the need to directly transfer data between participating clients. Since
its inception in 2015, it has been divided into numerous sub-fields that deal
with application-specific issues, be it data heterogeneity or resource
allocation. One such sub-field, Clustered Federated Learning (CFL), deals
with the problem of clustering the population of clients into separate cohorts
to deliver personalized models. Although a few remarkable works have been
published in this domain, the problem is still largely unexplored, as its basic
assumptions and settings are slightly different from standard FL. In this work,
we present One-Shot Clustered Federated Learning (OCFL), a clustering-agnostic
algorithm that can automatically detect the earliest suitable moment for
clustering. Our algorithm is based on the computation of cosine similarity
between gradients of the clients and a temperature measure that detects when
the federated model starts to converge. We empirically evaluate our methodology
by testing various one-shot clustering algorithms for over thirty different
tasks on three benchmark datasets. Our experiments showcase the good
performance of our approach when used to perform CFL in an automated manner
without the need to adjust hyperparameters.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:12:43 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zuziak",
"Maciej Krzysztof",
""
],
[
"Pellungrini",
"Roberto",
""
],
[
"Rinzivillo",
"Salvatore",
""
]
]
| TITLE: One-Shot Clustering for Federated Learning
ABSTRACT: Federated Learning (FL) is a widespread and well-adopted paradigm of
decentralized learning that allows training one model from multiple sources
without the need to directly transfer data between participating clients. Since
its inception in 2015, it has been divided into numerous sub-fields that deal
with application-specific issues, be it data heterogeneity or resource
allocation. One such sub-field, Clustered Federated Learning (CFL), deals
with the problem of clustering the population of clients into separate cohorts
to deliver personalized models. Although a few remarkable works have been
published in this domain, the problem is still largely unexplored, as its basic
assumptions and settings are slightly different from standard FL. In this work,
we present One-Shot Clustered Federated Learning (OCFL), a clustering-agnostic
algorithm that can automatically detect the earliest suitable moment for
clustering. Our algorithm is based on the computation of cosine similarity
between gradients of the clients and a temperature measure that detects when
the federated model starts to converge. We empirically evaluate our methodology
by testing various one-shot clustering algorithms for over thirty different
tasks on three benchmark datasets. Our experiments showcase the good
performance of our approach when used to perform CFL in an automated manner
without the need to adjust hyperparameters.
| no_new_dataset | 0.944177 |
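A compact sketch of the two ingredients named in the abstract: pairwise cosine similarity between client gradients, and a convergence check that decides when clustering is triggered. The moving-average plateau criterion here is a stand-in for the paper's temperature measure, which is not spelled out in the abstract.

```python
import numpy as np

def gradient_similarity_matrix(client_grads):
    # client_grads: list of flattened per-client gradient vectors.
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in client_grads])
    return G @ G.T                      # pairwise cosine similarities

def ready_to_cluster(loss_history, window=5, tol=1e-3):
    # Trigger clustering once the federated loss has plateaued over a short
    # window (a stand-in for the paper's temperature measure).
    if len(loss_history) < window:
        return False
    recent = np.asarray(loss_history[-window:])
    return float(np.abs(np.diff(recent)).mean()) < tol
```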
2503.04232 | Jie He | Jie He, Bo Peng, Yi Liao, Qun Liu, Deyi Xiong | Tgea: An error-annotated dataset and benchmark tasks for text generation
from pretrained language models | ACL 2021 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In order to deeply understand the capability of pretrained language models in
text generation and conduct a diagnostic evaluation, we propose TGEA, an
error-annotated dataset with multiple benchmark tasks for text generation from
pretrained language models (PLMs). We use carefully selected prompt words to
guide GPT-2 to generate candidate sentences, from which we select 47K for error
annotation. Crowdsourced workers manually check each of these sentences and
detect 12K erroneous sentences. We create an error taxonomy to cover 24 types
of errors occurring in these erroneous sentences according to the nature of
errors with respect to linguistics and knowledge (e.g., common sense). For each
erroneous span in PLM-generated sentences, we also detect another span that is
closely associated with it. Each error is hence manually labeled with
comprehensive annotations, including the span of the error, the associated
span, minimal correction to the error, the type of the error, and rationale
behind the error. Apart from the fully annotated dataset, we also present a
detailed description of the data collection procedure, statistics and analysis
of the dataset. This is the first dataset with comprehensive annotations for
PLM-generated texts, which facilitates the diagnostic evaluation of PLM-based
text generation. Furthermore, we use TGEA as a benchmark dataset and propose a
series of automatic diagnosis tasks, including error detection, error type
classification, associated span detection, and error rationale generation, to
further promote future study on automatic error detection and correction of
texts generated by pretrained language models.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:14:02 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"He",
"Jie",
""
],
[
"Peng",
"Bo",
""
],
[
"Liao",
"Yi",
""
],
[
"Liu",
"Qun",
""
],
[
"Xiong",
"Deyi",
""
]
]
| TITLE: Tgea: An error-annotated dataset and benchmark tasks for text generation
from pretrained language models
ABSTRACT: In order to deeply understand the capability of pretrained language models in
text generation and conduct a diagnostic evaluation, we propose TGEA, an
error-annotated dataset with multiple benchmark tasks for text generation from
pretrained language models (PLMs). We use carefully selected prompt words to
guide GPT-2 to generate candidate sentences, from which we select 47K for error
annotation. Crowdsourced workers manually check each of these sentences and
detect 12K erroneous sentences. We create an error taxonomy to cover 24 types
of errors occurring in these erroneous sentences according to the nature of
errors with respect to linguistics and knowledge (e.g., common sense). For each
erroneous span in PLM-generated sentences, we also detect another span that is
closely associated with it. Each error is hence manually labeled with
comprehensive annotations, including the span of the error, the associated
span, minimal correction to the error, the type of the error, and rationale
behind the error. Apart from the fully annotated dataset, we also present a
detailed description of the data collection procedure, statistics and analysis
of the dataset. This is the first dataset with comprehensive annotations for
PLM-generated texts, which facilitates the diagnostic evaluation of PLM-based
text generation. Furthermore, we use TGEA as a benchmark dataset and propose a
series of automatic diagnosis tasks, including error detection, error type
classification, associated span detection, and error rationale generation, to
further promote future study on automatic error detection and correction of
texts generated by pretrained language models.
| new_dataset | 0.97377 |
2503.04234 | Jianzhong Qi | Zesong Zhang, Jianzhong Qi, Xin Cao, Christian S. Jensen | SemaSK: Answering Semantics-aware Spatial Keyword Queries with Large
Language Models | Accepted for publication at EDBT'25 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geo-textual objects, i.e., objects with both spatial and textual attributes,
such as points-of-interest or web documents with location tags, are prevalent
and fuel a range of location-based services. Existing spatial keyword querying
methods that target such data have focused primarily on efficiency and often
involve proposals for index structures for efficient query processing. In these
studies, due to challenges in measuring the semantic relevance of textual data,
query constraints on the textual attributes are largely treated as a keyword
matching process, ignoring richer query and data semantics. To advance the
semantic aspects, we propose a system named SemaSK that exploits the semantic
capabilities of large language models to retrieve geo-textual objects that are
more semantically relevant to a query. Experimental results on a real dataset
offer evidence of the effectiveness of the system, and a system demonstration
is presented in this paper.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:15:11 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhang",
"Zesong",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Cao",
"Xin",
""
],
[
"Jensen",
"Christian S.",
""
]
]
| TITLE: SemaSK: Answering Semantics-aware Spatial Keyword Queries with Large
Language Models
ABSTRACT: Geo-textual objects, i.e., objects with both spatial and textual attributes,
such as points-of-interest or web documents with location tags, are prevalent
and fuel a range of location-based services. Existing spatial keyword querying
methods that target such data have focused primarily on efficiency and often
involve proposals for index structures for efficient query processing. In these
studies, due to challenges in measuring the semantic relevance of textual data,
query constraints on the textual attributes are largely treated as a keyword
matching process, ignoring richer query and data semantics. To advance the
semantic aspects, we propose a system named SemaSK that exploits the semantic
capabilities of large language models to retrieve geo-textual objects that are
more semantically relevant to a query. Experimental results on a real dataset
offer evidence of the effectiveness of the system, and a system demonstration
is presented in this paper.
| no_new_dataset | 0.949201 |
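One way a system like this could blend spatial proximity with LLM-judged semantic relevance is sketched below. The linear blend, the `alpha` weight, and the `llm_score` callable (any function returning a relevance in [0, 1]) are purely illustrative assumptions, not SemaSK's actual scoring.

```python
import math

def rank_candidates(query_loc, query_text, candidates, llm_score, alpha=0.5):
    # candidates: dicts with "loc" (x, y) and "text" fields.
    def spatial(obj):
        return 1.0 / (1.0 + math.dist(query_loc, obj["loc"]))
    scored = [
        (alpha * spatial(o) + (1 - alpha) * llm_score(query_text, o["text"]), o)
        for o in candidates
    ]
    return [o for _, o in sorted(scored, key=lambda t: t[0], reverse=True)]
```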
2503.04242 | Manh Cuong Dao | Manh Cuong Dao, Phi Le Nguyen, Thao Nguyen Truong, Trong Nghia Hoang | Incorporating Surrogate Gradient Norm to Improve Offline Optimization
Techniques | null | The Thirty-eighth Annual Conference on Neural Information
Processing Systems, 2024 | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Offline optimization has recently emerged as an increasingly popular approach
to mitigate the prohibitively expensive cost of online experimentation. The key
idea is to learn a surrogate of the black-box function that underlies the
target experiment using a static (offline) dataset of its previous input-output
queries. Such an approach is, however, fraught with an out-of-distribution
issue where the learned surrogate becomes inaccurate outside the offline data
regimes. To mitigate this, existing offline optimizers have proposed numerous
conditioning techniques to prevent the learned surrogate from being too
erratic. Nonetheless, such conditioning strategies are often specific to
particular surrogate or search models, which might not generalize to a
different model choice. This motivates us to develop a model-agnostic approach
instead, which incorporates a notion of model sharpness into the training loss
of the surrogate as a regularizer. Our approach is supported by a new
theoretical analysis demonstrating that reducing surrogate sharpness on the
offline dataset provably reduces its generalized sharpness on unseen data. Our
analysis extends existing theories from bounding generalized prediction loss
(on unseen data) with loss sharpness to bounding the worst-case generalized
surrogate sharpness with its empirical estimate on training data, providing a
new perspective on sharpness regularization. Our extensive experimentation on a
diverse range of optimization tasks also shows that reducing surrogate
sharpness often leads to significant improvement, marking (up to) a noticeable
9.6% performance boost. Our code is publicly available at
https://github.com/cuong-dm/IGNITE
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:24:23 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Dao",
"Manh Cuong",
""
],
[
"Nguyen",
"Phi Le",
""
],
[
"Truong",
"Thao Nguyen",
""
],
[
"Hoang",
"Trong Nghia",
""
]
]
| TITLE: Incorporating Surrogate Gradient Norm to Improve Offline Optimization
Techniques
ABSTRACT: Offline optimization has recently emerged as an increasingly popular approach
to mitigate the prohibitively expensive cost of online experimentation. The key
idea is to learn a surrogate of the black-box function that underlies the
target experiment using a static (offline) dataset of its previous input-output
queries. Such an approach is, however, fraught with an out-of-distribution
issue where the learned surrogate becomes inaccurate outside the offline data
regimes. To mitigate this, existing offline optimizers have proposed numerous
conditioning techniques to prevent the learned surrogate from being too
erratic. Nonetheless, such conditioning strategies are often specific to
particular surrogate or search models, which might not generalize to a
different model choice. This motivates us to develop a model-agnostic approach
instead, which incorporates a notion of model sharpness into the training loss
of the surrogate as a regularizer. Our approach is supported by a new
theoretical analysis demonstrating that reducing surrogate sharpness on the
offline dataset provably reduces its generalized sharpness on unseen data. Our
analysis extends existing theories from bounding generalized prediction loss
(on unseen data) with loss sharpness to bounding the worst-case generalized
surrogate sharpness with its empirical estimate on training data, providing a
new perspective on sharpness regularization. Our extensive experimentation on a
diverse range of optimization tasks also shows that reducing surrogate
sharpness often leads to significant improvement, marking (up to) a noticeable
9.6% performance boost. Our code is publicly available at
https://github.com/cuong-dm/IGNITE
| no_new_dataset | 0.944331 |
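The regularizer the title refers to, penalizing the surrogate's gradient norm during training, can be sketched as below. Treating the parameter-gradient norm of the fitting loss as the sharpness proxy, and the weight `lam`, are assumptions rather than the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def gradnorm_regularized_loss(surrogate, x, y, lam=0.1):
    # Standard fitting loss on the offline data.
    fit = F.mse_loss(surrogate(x), y)
    # Norm of the loss gradient w.r.t. the surrogate's weights, kept in the
    # graph (create_graph=True) so the penalty itself is differentiable.
    params = [p for p in surrogate.parameters() if p.requires_grad]
    grads = torch.autograd.grad(fit, params, create_graph=True)
    gnorm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    return fit + lam * gnorm
```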
2503.04252 | Biao Ouyang | Biao Ouyang, Yingying Zhang, Hanyin Cheng, Yang Shu, Chenjuan Guo, Bin
Yang, Qingsong Wen, Lunting Fan, Christian S. Jensen | RCRank: Multimodal Ranking of Root Causes of Slow Queries in Cloud
Database Systems | Accepted by VLDB 2025 | null | null | null | cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the continued migration of storage to cloud database systems, the impact
of slow queries in such systems on services and user experience is increasing.
Root-cause diagnosis plays an indispensable role in facilitating slow-query
detection and revision. This paper proposes a method capable of both
identifying possible root cause types for slow queries and ranking these
according to their potential for accelerating slow queries. This enables
prioritizing root causes with the highest impact, in turn improving slow-query
revision effectiveness. To enable more accurate and detailed diagnoses, we
propose the multimodal Ranking for the Root Causes of slow queries (RCRank)
framework, which formulates root cause analysis as a multimodal machine
learning problem and leverages multimodal information from query statements,
execution plans, execution logs, and key performance indicators. To obtain
expressive embeddings from its heterogeneous multimodal input, RCRank
integrates self-supervised pre-training that enhances cross-modal alignment and
task relevance. Next, the framework integrates root-cause-adaptive cross
Transformers that enable adaptive fusion of multimodal features with varying
characteristics. Finally, the framework offers a unified model that features an
impact-aware training objective for identifying and ranking root causes. We
report on experiments on real and synthetic datasets, finding that RCRank is
capable of consistently outperforming the state-of-the-art methods at root
cause identification and ranking according to a range of metrics.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:35:20 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ouyang",
"Biao",
""
],
[
"Zhang",
"Yingying",
""
],
[
"Cheng",
"Hanyin",
""
],
[
"Shu",
"Yang",
""
],
[
"Guo",
"Chenjuan",
""
],
[
"Yang",
"Bin",
""
],
[
"Wen",
"Qingsong",
""
],
[
"Fan",
"Lunting",
""
],
[
"Jensen",
"Christian S.",
""
]
]
| TITLE: RCRank: Multimodal Ranking of Root Causes of Slow Queries in Cloud
Database Systems
ABSTRACT: With the continued migration of storage to cloud database systems, the impact
of slow queries in such systems on services and user experience is increasing.
Root-cause diagnosis plays an indispensable role in facilitating slow-query
detection and revision. This paper proposes a method capable of both
identifying possible root cause types for slow queries and ranking these
according to their potential for accelerating slow queries. This enables
prioritizing root causes with the highest impact, in turn improving slow-query
revision effectiveness. To enable more accurate and detailed diagnoses, we
propose the multimodal Ranking for the Root Causes of slow queries (RCRank)
framework, which formulates root cause analysis as a multimodal machine
learning problem and leverages multimodal information from query statements,
execution plans, execution logs, and key performance indicators. To obtain
expressive embeddings from its heterogeneous multimodal input, RCRank
integrates self-supervised pre-training that enhances cross-modal alignment and
task relevance. Next, the framework integrates root-cause-adaptive cross
Transformers that enable adaptive fusion of multimodal features with varying
characteristics. Finally, the framework offers a unified model that features an
impact-aware training objective for identifying and ranking root causes. We
report on experiments on real and synthetic datasets, finding that RCRank is
capable of consistently outperforming the state-of-the-art methods at root
cause identification and ranking according to a range of metrics.
| no_new_dataset | 0.946597 |
2503.04257 | Wonkwang Lee | Wonkwang Lee, Jongwon Jeong, Taehong Moon, Hyeon-Jong Kim, Jaehyeon
Kim, Gunhee Kim, Byeong-Uk Lee | How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary
Objects | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Motion synthesis for diverse object categories holds great potential for 3D
content creation but remains underexplored due to two key challenges: (1) the
lack of comprehensive motion datasets that include a wide range of high-quality
motions and annotations, and (2) the absence of methods capable of handling
heterogeneous skeletal templates from diverse objects. To address these
challenges, we contribute the following: First, we augment the Truebones Zoo
dataset, a high-quality animal motion dataset covering over 70 species, by
annotating it with detailed text descriptions, making it suitable for
text-based motion synthesis. Second, we introduce rig augmentation techniques
that generate diverse motion data while preserving consistent dynamics,
enabling models to adapt to various skeletal configurations. Finally, we
redesign existing motion diffusion models to dynamically adapt to arbitrary
skeletal templates, enabling motion synthesis for a diverse range of objects
with varying structures. Experiments show that our method learns to generate
high-fidelity motions from textual descriptions for diverse and even unseen
objects, setting a strong foundation for motion synthesis across diverse object
categories and skeletal templates. Qualitative results are available on this
link: t2m4lvo.github.io
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:39:09 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Lee",
"Wonkwang",
""
],
[
"Jeong",
"Jongwon",
""
],
[
"Moon",
"Taehong",
""
],
[
"Kim",
"Hyeon-Jong",
""
],
[
"Kim",
"Jaehyeon",
""
],
[
"Kim",
"Gunhee",
""
],
[
"Lee",
"Byeong-Uk",
""
]
]
| TITLE: How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary
Objects
ABSTRACT: Motion synthesis for diverse object categories holds great potential for 3D
content creation but remains underexplored due to two key challenges: (1) the
lack of comprehensive motion datasets that include a wide range of high-quality
motions and annotations, and (2) the absence of methods capable of handling
heterogeneous skeletal templates from diverse objects. To address these
challenges, we contribute the following: First, we augment the Truebones Zoo
dataset, a high-quality animal motion dataset covering over 70 species, by
annotating it with detailed text descriptions, making it suitable for
text-based motion synthesis. Second, we introduce rig augmentation techniques
that generate diverse motion data while preserving consistent dynamics,
enabling models to adapt to various skeletal configurations. Finally, we
redesign existing motion diffusion models to dynamically adapt to arbitrary
skeletal templates, enabling motion synthesis for a diverse range of objects
with varying structures. Experiments show that our method learns to generate
high-fidelity motions from textual descriptions for diverse and even unseen
objects, setting a strong foundation for motion synthesis across diverse object
categories and skeletal templates. Qualitative results are available on this
link: t2m4lvo.github.io
| no_new_dataset | 0.619443 |
2503.04258 | Xu Gu | Yingfei Sun, Xu Gu, Wei Ji, Hanbin Zhao, Hao Fei, Yifang Yin, Roger
Zimmermann | TAIL: Text-Audio Incremental Learning | 4 figures, 5 tables | null | null | null | cs.SD cs.AI cs.CV eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many studies combine text and audio to capture multi-modal information but
they overlook the model's generalization ability on new datasets. Introducing
new datasets may affect the feature space of the original dataset, leading to
catastrophic forgetting. Meanwhile, large model parameters can significantly
impact training performance. To address these limitations, we introduce a novel
task called Text-Audio Incremental Learning (TAIL) task for text-audio
retrieval, and propose a new method, PTAT, Prompt Tuning for Audio-Text
incremental learning. This method utilizes prompt tuning to optimize the model
parameters while incorporating an audio-text similarity and feature
distillation module to effectively mitigate catastrophic forgetting. We
benchmark our method and previous incremental learning methods on AudioCaps,
Clotho, BBC Sound Effects and Audioset datasets, and our method outperforms
previous methods significantly, particularly demonstrating stronger resistance
to forgetting on older datasets. Compared to the full-parameters Finetune
(Sequential) method, our model only requires 2.42\% of its parameters,
achieving 4.46\% higher performance.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:39:36 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Sun",
"Yingfei",
""
],
[
"Gu",
"Xu",
""
],
[
"Ji",
"Wei",
""
],
[
"Zhao",
"Hanbin",
""
],
[
"Fei",
"Hao",
""
],
[
"Yin",
"Yifang",
""
],
[
"Zimmermann",
"Roger",
""
]
]
| TITLE: TAIL: Text-Audio Incremental Learning
ABSTRACT: Many studies combine text and audio to capture multi-modal information but
they overlook the model's generalization ability on new datasets. Introducing
new datasets may affect the feature space of the original dataset, leading to
catastrophic forgetting. Meanwhile, large model parameters can significantly
impact training performance. To address these limitations, we introduce a novel
task called Text-Audio Incremental Learning (TAIL) task for text-audio
retrieval, and propose a new method, PTAT, Prompt Tuning for Audio-Text
incremental learning. This method utilizes prompt tuning to optimize the model
parameters while incorporating an audio-text similarity and feature
distillation module to effectively mitigate catastrophic forgetting. We
benchmark our method and previous incremental learning methods on AudioCaps,
Clotho, BBC Sound Effects and Audioset datasets, and our method outperforms
previous methods significantly, particularly demonstrating stronger resistance
to forgetting on older datasets. Compared to the full-parameters Finetune
(Sequential) method, our model only requires 2.42\% of its parameters,
achieving 4.46\% higher performance.
| no_new_dataset | 0.943295 |
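A minimal sketch of prompt tuning with feature distillation in the spirit of PTAT. The encoder interface (embedded token sequences in, features out), the prompt count, and the cosine distillation loss are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedEncoder(nn.Module):
    # Frozen backbone with a few learnable prompt tokens prepended to the
    # input sequence; only the prompts receive gradients.
    def __init__(self, encoder, dim, n_prompts=8):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.prompts = nn.Parameter(0.02 * torch.randn(n_prompts, dim))

    def forward(self, tokens):                       # tokens: (B, T, dim)
        prompts = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return self.encoder(torch.cat([prompts, tokens], dim=1))

def feature_distillation_loss(new_feat, old_feat):
    # Pull current features toward the previous task's frozen features
    # to resist catastrophic forgetting.
    return 1.0 - F.cosine_similarity(new_feat, old_feat, dim=-1).mean()
```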
2503.04261 | Georgios Makridis | Georgios Makridis, Vasileios Koukos, Georgios Fatouros, Dimosthenis
Kyriazis | VirtualXAI: A User-Centric Framework for Explainability Assessment
Leveraging GPT-Generated Personas | 8 pages, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In today's data-driven era, computational systems generate vast amounts of
data that drive the digital transformation of industries, where Artificial
Intelligence (AI) plays a key role. Currently, the demand for eXplainable AI
(XAI) has increased to enhance the interpretability, transparency, and
trustworthiness of AI models. However, evaluating XAI methods remains
challenging: existing evaluation frameworks typically focus on quantitative
properties such as fidelity, consistency, and stability without taking into
account qualitative characteristics such as satisfaction and interpretability.
In addition, practitioners face a lack of guidance in selecting appropriate
datasets, AI models, and XAI methods, a major hurdle in human-AI collaboration.
To address these gaps, we propose a framework that integrates quantitative
benchmarking with qualitative user assessments through virtual personas based
on the "Anthology" of backstories of the Large Language Model (LLM). Our
framework also incorporates a content-based recommender system that leverages
dataset-specific characteristics to match new input data with a repository of
benchmarked datasets. This yields an estimated XAI score and provides tailored
recommendations for both the optimal AI model and the XAI method for a given
scenario.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 09:44:18 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Makridis",
"Georgios",
""
],
[
"Koukos",
"Vasileios",
""
],
[
"Fatouros",
"Georgios",
""
],
[
"Kyriazis",
"Dimosthenis",
""
]
]
| TITLE: VirtualXAI: A User-Centric Framework for Explainability Assessment
Leveraging GPT-Generated Personas
ABSTRACT: In today's data-driven era, computational systems generate vast amounts of
data that drive the digital transformation of industries, where Artificial
Intelligence (AI) plays a key role. Currently, the demand for eXplainable AI
(XAI) has increased to enhance the interpretability, transparency, and
trustworthiness of AI models. However, evaluating XAI methods remains
challenging: existing evaluation frameworks typically focus on quantitative
properties such as fidelity, consistency, and stability without taking into
account qualitative characteristics such as satisfaction and interpretability.
In addition, practitioners face a lack of guidance in selecting appropriate
datasets, AI models, and XAI methods, a major hurdle in human-AI collaboration.
To address these gaps, we propose a framework that integrates quantitative
benchmarking with qualitative user assessments through virtual personas based
on the "Anthology" of backstories of the Large Language Model (LLM). Our
framework also incorporates a content-based recommender system that leverages
dataset-specific characteristics to match new input data with a repository of
benchmarked datasets. This yields an estimated XAI score and provides tailored
recommendations for both the optimal AI model and the XAI method for a given
scenario.
| no_new_dataset | 0.945851 |
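The content-based recommender described above can be sketched as nearest-neighbor matching over dataset meta-features. The particular meta-features and the cosine-similarity matching are assumptions about how such a component might work, not the paper's implementation.

```python
import numpy as np

def recommend_benchmarks(new_meta, repo_meta, repo_names, k=3):
    # new_meta: meta-feature vector of the incoming dataset (e.g., sample
    # count, feature count, class balance); repo_meta: one row per
    # benchmarked dataset. Returns the k closest matches by cosine similarity.
    q = new_meta / (np.linalg.norm(new_meta) + 1e-12)
    R = repo_meta / (np.linalg.norm(repo_meta, axis=1, keepdims=True) + 1e-12)
    top = np.argsort(R @ q)[::-1][:k]
    return [repo_names[i] for i in top]
```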
2503.04279 | Muhammad Amien Ibrahim | Muhammad Amien Ibrahim, Faisal, Tora Sangputra Yopie Winarto, Zefanya
Delvin Sulistiya | Dual-Class Prompt Generation: Enhancing Indonesian Gender-Based Hate
Speech Detection through Data Augmentation | Accepted to the 8th World Conference on Computing and Communication
Technologies (WCCCT 2025) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Detecting gender-based hate speech in Indonesian social media remains
challenging due to limited labeled datasets. While binary hate speech
classification has advanced, a more granular category like gender-targeted hate
speech is understudied because of class imbalance issues. This paper addresses
this gap by comparing three data augmentation techniques for Indonesian
gender-based hate speech detection. We evaluate backtranslation, single-class
prompt generation (using only hate speech examples), and our proposed
dual-class prompt generation (using both hate speech and non-hate speech
examples). Experiments show all augmentation methods improve classification
performance, with our dual-class approach achieving the best results (88.5%
accuracy, 88.1% F1-score using Random Forest). Semantic similarity analysis
reveals dual-class prompt generation produces the most novel content, while
T-SNE visualizations confirm these samples occupy distinct feature space
regions while maintaining class characteristics. Our findings suggest that
incorporating examples from both classes helps language models generate more
diverse yet representative samples, effectively addressing limited data
challenges in specialized hate speech detection.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 10:07:51 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ibrahim",
"Muhammad Amien",
""
],
[
"Faisal",
"",
""
],
[
"Winarto",
"Tora Sangputra Yopie",
""
],
[
"Sulistiya",
"Zefanya Delvin",
""
]
]
| TITLE: Dual-Class Prompt Generation: Enhancing Indonesian Gender-Based Hate
Speech Detection through Data Augmentation
ABSTRACT: Detecting gender-based hate speech in Indonesian social media remains
challenging due to limited labeled datasets. While binary hate speech
classification has advanced, a more granular category like gender-targeted hate
speech is understudied because of class imbalance issues. This paper addresses
this gap by comparing three data augmentation techniques for Indonesian
gender-based hate speech detection. We evaluate backtranslation, single-class
prompt generation (using only hate speech examples), and our proposed
dual-class prompt generation (using both hate speech and non-hate speech
examples). Experiments show all augmentation methods improve classification
performance, with our dual-class approach achieving the best results (88.5%
accuracy, 88.1% F1-score using Random Forest). Semantic similarity analysis
reveals dual-class prompt generation produces the most novel content, while
T-SNE visualizations confirm these samples occupy distinct feature space
regions while maintaining class characteristics. Our findings suggest that
incorporating examples from both classes helps language models generate more
diverse yet representative samples, effectively addressing limited data
challenges in specialized hate speech detection.
| no_new_dataset | 0.956268 |
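The dual-class idea reduces to a prompt that shows the generator both classes at once. A hypothetical template is sketched below; the exact wording used in the paper may differ.

```python
def build_dual_class_prompt(hate_examples, non_hate_examples, n_new=5):
    hate = "\n".join(f"- {t}" for t in hate_examples)
    safe = "\n".join(f"- {t}" for t in non_hate_examples)
    return (
        "You write synthetic Indonesian social-media posts for research on "
        "gender-based hate speech detection.\n\n"
        f"Examples labeled HATE:\n{hate}\n\n"
        f"Examples labeled NOT-HATE:\n{safe}\n\n"
        f"Generate {n_new} new posts labeled HATE that match the style of the "
        "HATE examples while staying clearly distinct from the NOT-HATE ones."
    )
```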
2503.04290 | Alexander Nolte | Jeanette Falk, Yiyi Chen, Janet Rafner, Mike Zhang, Johannes Bjerva,
Alexander Nolte | How Do Hackathons Foster Creativity? Towards AI Collaborative Evaluation
of Creativity at Scale | Accepted in Proceedings of the 2025 CHI Conference on Human Factors
in Computing Systems | null | null | null | cs.HC cs.AI cs.SE | http://creativecommons.org/licenses/by/4.0/ | Hackathons have become popular collaborative events for accelerating the
development of creative ideas and prototypes. There are several case studies
showcasing creative outcomes across domains such as industry, education, and
research. However, there are no large-scale studies on creativity in hackathons
which can advance theory on how hackathon formats lead to creative outcomes. We
conducted a computational analysis of 193,353 hackathon projects. By
operationalizing creativity through usefulness and novelty, we refined our
dataset to 10,363 projects, allowing us to analyze how participant
characteristics, collaboration patterns, and hackathon setups influence the
development of creative projects. The contribution of our paper is twofold: We
identified means for organizers to foster creativity in hackathons. We also
explore the use of large language models (LLMs) to augment the evaluation of
creative outcomes and discuss challenges and opportunities of doing this, which
has implications for creativity research at large.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 10:17:52 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Falk",
"Jeanette",
""
],
[
"Chen",
"Yiyi",
""
],
[
"Rafner",
"Janet",
""
],
[
"Zhang",
"Mike",
""
],
[
"Bjerva",
"Johannes",
""
],
[
"Nolte",
"Alexander",
""
]
]
| TITLE: How Do Hackathons Foster Creativity? Towards AI Collaborative Evaluation
of Creativity at Scale
ABSTRACT: Hackathons have become popular collaborative events for accelerating the
development of creative ideas and prototypes. There are several case studies
showcasing creative outcomes across domains such as industry, education, and
research. However, there are no large-scale studies on creativity in hackathons
which can advance theory on how hackathon formats lead to creative outcomes. We
conducted a computational analysis of 193,353 hackathon projects. By
operationalizing creativity through usefulness and novelty, we refined our
dataset to 10,363 projects, allowing us to analyze how participant
characteristics, collaboration patterns, and hackathon setups influence the
development of creative projects. The contribution of our paper is twofold: We
identified means for organizers to foster creativity in hackathons. We also
explore the use of large language models (LLMs) to augment the evaluation of
creative outcomes and discuss challenges and opportunities of doing this, which
has implications for creativity research at large.
| no_new_dataset | 0.916185 |
2503.04302 | Christian Rondanini | Christian Rondanini, Barbara Carminati, Elena Ferrari, Antonio
Gaudiano, Ashish Kundu | Malware Detection at the Edge with Lightweight LLMs: A Performance
Evaluation | null | null | null | null | cs.CR cs.AI cs.DC | http://creativecommons.org/licenses/by/4.0/ | The rapid evolution of malware attacks calls for the development of
innovative detection methods, especially in resource-constrained edge
computing. Traditional detection techniques struggle to keep up with modern
malware's sophistication and adaptability, prompting a shift towards advanced
methodologies like those leveraging Large Language Models (LLMs) for enhanced
malware detection. However, deploying LLMs for malware detection directly at
edge devices raises several challenges, including ensuring accuracy in
constrained environments and addressing edge devices' energy and computational
limits. To tackle these challenges, this paper proposes an architecture
leveraging lightweight LLMs' strengths while addressing limitations like
reduced accuracy and insufficient computational power. To evaluate the
effectiveness of the proposed lightweight LLM-based approach for edge
computing, we perform an extensive experimental evaluation using several
state-of-the-art lightweight LLMs. We test them with several publicly available
datasets specifically designed for edge and IoT scenarios and different edge
nodes with varying computational power and characteristics.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 10:42:18 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Rondanini",
"Christian",
""
],
[
"Carminati",
"Barbara",
""
],
[
"Ferrari",
"Elena",
""
],
[
"Gaudiano",
"Antonio",
""
],
[
"Kundu",
"Ashish",
""
]
]
| TITLE: Malware Detection at the Edge with Lightweight LLMs: A Performance
Evaluation
ABSTRACT: The rapid evolution of malware attacks calls for the development of
innovative detection methods, especially in resource-constrained edge
computing. Traditional detection techniques struggle to keep up with modern
malware's sophistication and adaptability, prompting a shift towards advanced
methodologies like those leveraging Large Language Models (LLMs) for enhanced
malware detection. However, deploying LLMs for malware detection directly at
edge devices raises several challenges, including ensuring accuracy in
constrained environments and addressing edge devices' energy and computational
limits. To tackle these challenges, this paper proposes an architecture
leveraging lightweight LLMs' strengths while addressing limitations like
reduced accuracy and insufficient computational power. To evaluate the
effectiveness of the proposed lightweight LLM-based approach for edge
computing, we perform an extensive experimental evaluation using several
state-of-the-art lightweight LLMs. We test them with several publicly available
datasets specifically designed for edge and IoT scenarios and different edge
nodes with varying computational power and characteristics.
| new_dataset | 0.971293 |
2503.04308 | Luk\'a\v{s} Gajdo\v{s}ech | Luk\'a\v{s} Gajdo\v{s}ech, Hassan Ali, Jan-Gerrit Habekost, Martin
Madaras, Matthias Kerzel, Stefan Wermter | Shaken, Not Stirred: A Novel Dataset for Visual Understanding of Glasses
in Human-Robot Bartending Tasks | Submitted to IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS) 2025 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Datasets for object detection often do not account for enough variety of
glasses, due to their transparent and reflective properties. Specifically,
open-vocabulary object detectors, widely used in embodied robotic agents, fail
to distinguish subclasses of glasses. This scientific gap poses an issue for
robotic applications that suffer from accumulating errors between detection,
planning, and action execution. The paper introduces a novel method for the
acquisition of real-world data from RGB-D sensors that minimizes human effort.
We propose an auto-labeling pipeline that generates labels for all the acquired
frames based on the depth measurements. We provide a novel real-world glass
object dataset that was collected on the Neuro-Inspired COLlaborator (NICOL), a
humanoid robot platform. The dataset consists of 7850 images recorded from
five different cameras. We show that our trained baseline model outperforms
state-of-the-art open-vocabulary approaches. In addition, we deploy our
baseline model in an embodied agent approach to the NICOL platform, on which it
achieves a success rate of 81% in a human-robot bartending scenario.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 10:51:04 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Gajdošech",
"Lukáš",
""
],
[
"Ali",
"Hassan",
""
],
[
"Habekost",
"Jan-Gerrit",
""
],
[
"Madaras",
"Martin",
""
],
[
"Kerzel",
"Matthias",
""
],
[
"Wermter",
"Stefan",
""
]
]
| TITLE: Shaken, Not Stirred: A Novel Dataset for Visual Understanding of Glasses
in Human-Robot Bartending Tasks
ABSTRACT: Datasets for object detection often do not account for enough variety of
glasses, due to their transparent and reflective properties. Specifically,
open-vocabulary object detectors, widely used in embodied robotic agents, fail
to distinguish subclasses of glasses. This scientific gap poses an issue for
robotic applications that suffer from accumulating errors between detection,
planning, and action execution. The paper introduces a novel method for the
acquisition of real-world data from RGB-D sensors that minimizes human effort.
We propose an auto-labeling pipeline that generates labels for all the acquired
frames based on the depth measurements. We provide a novel real-world glass
object dataset that was collected on the Neuro-Inspired COLlaborator (NICOL), a
humanoid robot platform. The dataset consists of 7850 images recorded from
five different cameras. We show that our trained baseline model outperforms
state-of-the-art open-vocabulary approaches. In addition, we deploy our
baseline model in an embodied agent approach to the NICOL platform, on which it
achieves a success rate of 81% in a human-robot bartending scenario.
| new_dataset | 0.958654 |
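The auto-labeling pipeline derives labels from depth measurements; a simplified background-subtraction sketch is shown below. The thresholds and the assumption of a captured empty-scene background frame are illustrative, not the paper's pipeline.

```python
import numpy as np

def auto_label_mask(depth, background_depth, min_diff_mm=15):
    # Pixels closer to the camera than the empty-scene background by more
    # than the threshold are foreground; depth == 0 marks invalid pixels.
    diff = background_depth.astype(np.int32) - depth.astype(np.int32)
    return (depth > 0) & (diff > min_diff_mm)

def mask_to_bbox(mask):
    # Tight bounding box around the foreground mask, or None if empty.
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```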
2503.04316 | Robert Jankowski | Robert Jankowski, Roya Aliakbarisani, M. \'Angeles Serrano, Mari\'an
Bogu\~n\'a | Mapping bipartite networks into multidimensional hyperbolic spaces | null | null | null | null | physics.soc-ph cs.SI | http://creativecommons.org/licenses/by/4.0/ | Bipartite networks appear in many real-world contexts, linking entities
across two distinct sets. They are often analyzed via one-mode projections, but
such projections can introduce artificial correlations and inflated clustering,
obscuring the true underlying structure. In this paper, we propose a geometric
model for bipartite networks that leverages the high levels of bipartite
four-cycles as a measure of clustering to place both node types in the same
similarity space, where link probabilities decrease with distance.
Additionally, we introduce B-Mercator, an algorithm that infers node positions
from the bipartite structure. We evaluate its performance on diverse datasets,
illustrating how the resulting embeddings improve downstream tasks such as node
classification and distance-based link prediction in machine learning. These
hyperbolic embeddings also enable the generation of synthetic networks with
node features closely resembling real-world ones, thereby safeguarding
sensitive information while allowing secure data sharing. In addition, we show
how preserving bipartite structure avoids the pitfalls of projection-based
techniques, yielding more accurate descriptions and better performance. Our
method provides a robust framework for uncovering hidden geometry in complex
bipartite systems.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 10:59:26 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Jankowski",
"Robert",
""
],
[
"Aliakbarisani",
"Roya",
""
],
[
"Serrano",
"M. Ángeles",
""
],
[
"Boguñá",
"Marián",
""
]
]
| TITLE: Mapping bipartite networks into multidimensional hyperbolic spaces
ABSTRACT: Bipartite networks appear in many real-world contexts, linking entities
across two distinct sets. They are often analyzed via one-mode projections, but
such projections can introduce artificial correlations and inflated clustering,
obscuring the true underlying structure. In this paper, we propose a geometric
model for bipartite networks that leverages the high levels of bipartite
four-cycles as a measure of clustering to place both node types in the same
similarity space, where link probabilities decrease with distance.
Additionally, we introduce B-Mercator, an algorithm that infers node positions
from the bipartite structure. We evaluate its performance on diverse datasets,
illustrating how the resulting embeddings improve downstream tasks such as node
classification and distance-based link prediction in machine learning. These
hyperbolic embeddings also enable the generation of synthetic networks with
node features closely resembling real-world ones, thereby safeguarding
sensitive information while allowing secure data sharing. In addition, we show
how preserving bipartite structure avoids the pitfalls of projection-based
techniques, yielding more accurate descriptions and better performance. Our
method provides a robust framework for uncovering hidden geometry in complex
bipartite systems.
| no_new_dataset | 0.951774 |
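For context on the geometric model in the record above: similarity-space network models in this line of work (e.g., the S^1 model) typically use a gravity-law connection probability in which links become less likely with distance. Whether B-Mercator adopts exactly this parameterization is an assumption here; a sketch of the standard form:

```latex
% Connection probability in an S^1-type similarity space:
% d_ij is the distance between nodes i and j in the similarity space,
% kappa_i, kappa_j their hidden degrees, mu fixes the average degree,
% and beta > 1 controls the level of clustering.
p_{ij} \;=\; \frac{1}{1 + \left( \dfrac{d_{ij}}{\mu\,\kappa_i \kappa_j} \right)^{\beta}}
```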
2503.04318 | Abdulrahman Mohamed Selim | Tim Maurer, Abdulrahman Mohamed Selim, Hasan Md Tusfiqur Alam,
Matthias Eiletz, Michael Barz, Daniel Sonntag | InFL-UX: A Toolkit for Web-Based Interactive Federated Learning | null | null | null | null | cs.LG cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents InFL-UX, an interactive, proof-of-concept browser-based
Federated Learning (FL) toolkit designed to integrate user contributions
seamlessly into the machine learning (ML) workflow. InFL-UX enables users
across multiple devices to upload datasets, define classes, and collaboratively
train classification models directly in the browser using modern web
technologies. Unlike traditional FL toolkits, which often focus on backend
simulations, InFL-UX provides a simple user interface for researchers to
explore how users interact with and contribute to FL systems in real-world,
interactive settings. By prioritising usability and decentralised model
training, InFL-UX bridges the gap between FL and Interactive Machine Learning
(IML), empowering non-technical users to actively participate in ML
classification tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:00:18 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Maurer",
"Tim",
""
],
[
"Selim",
"Abdulrahman Mohamed",
""
],
[
"Alam",
"Hasan Md Tusfiqur",
""
],
[
"Eiletz",
"Matthias",
""
],
[
"Barz",
"Michael",
""
],
[
"Sonntag",
"Daniel",
""
]
]
| TITLE: InFL-UX: A Toolkit for Web-Based Interactive Federated Learning
ABSTRACT: This paper presents InFL-UX, an interactive, proof-of-concept browser-based
Federated Learning (FL) toolkit designed to integrate user contributions
seamlessly into the machine learning (ML) workflow. InFL-UX enables users
across multiple devices to upload datasets, define classes, and collaboratively
train classification models directly in the browser using modern web
technologies. Unlike traditional FL toolkits, which often focus on backend
simulations, InFL-UX provides a simple user interface for researchers to
explore how users interact with and contribute to FL systems in real-world,
interactive settings. By prioritising usability and decentralised model
training, InFL-UX bridges the gap between FL and Interactive Machine Learning
(IML), empowering non-technical users to actively participate in ML
classification tasks.
| no_new_dataset | 0.947817 |
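As background for how a browser-based FL toolkit can combine updates from many devices, the sketch below shows plain federated averaging (FedAvg), weighting each client by its local dataset size. The abstract above does not specify InFL-UX's aggregation rule, so FedAvg is an illustrative assumption.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client parameter vectors into one
    global model, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)               # (n_clients, n_params)
    coeffs = np.array(client_sizes, dtype=float) / total
    return (coeffs[:, None] * stacked).sum(axis=0)   # weighted mean

# Three hypothetical browser clients with different amounts of local data:
w = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
print(fed_avg(w, client_sizes=[10, 30, 60]))         # -> [1.0, 0.8]
```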
2503.04322 | Lars Bredereke | Lars Bredereke, Yale Hartmann, Tanja Schultz | A Modular Pipeline for 3D Object Tracking Using RGB Cameras | 9 pages, 11 figures, original paper not to be published anywhere else | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object tracking is a key challenge of computer vision with various
applications that all require different architectures. Most tracking systems
have limitations such as constraining all movement to a 2D plane and they often
track only one object. In this paper, we present a new modular pipeline that
calculates 3D trajectories of multiple objects. It is adaptable to various
settings where multiple time-synced and stationary cameras record moving
objects, using off-the-shelf webcams. Our pipeline was tested on the Table
Setting Dataset, where participants are recorded with various sensors as they
set a table with tableware objects. We need to track these manipulated objects,
using 6 RGB webcams. Challenges include: detecting small objects in 9.874.699
camera frames, determining camera poses, discriminating between nearby and
overlapping objects, temporary occlusions, and finally calculating a 3D
trajectory using the right subset of an average of 11.12.456 pixel coordinates
per 3-minute trial. We implement a robust pipeline that results in accurate
trajectories with covariance of x,y,z-position as a confidence metric. It deals
dynamically with appearing and disappearing objects, instantiating new Extended
Kalman Filters. It scales to hundreds of table-setting trials with very little
human annotation input, even with the camera poses of each trial unknown. The
code is available at https://github.com/LarsBredereke/object_tracking
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:14:59 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Bredereke",
"Lars",
""
],
[
"Hartmann",
"Yale",
""
],
[
"Schultz",
"Tanja",
""
]
]
| TITLE: A Modular Pipeline for 3D Object Tracking Using RGB Cameras
ABSTRACT: Object tracking is a key challenge of computer vision with various
applications that all require different architectures. Most tracking systems
have limitations such as constraining all movement to a 2D plane and they often
track only one object. In this paper, we present a new modular pipeline that
calculates 3D trajectories of multiple objects. It is adaptable to various
settings where multiple time-synced and stationary cameras record moving
objects, using off-the-shelf webcams. Our pipeline was tested on the Table
Setting Dataset, where participants are recorded with various sensors as they
set a table with tableware objects. We need to track these manipulated objects,
using 6 RGB webcams. Challenges include: detecting small objects in 9.874.699
camera frames, determining camera poses, discriminating between nearby and
overlapping objects, temporary occlusions, and finally calculating a 3D
trajectory using the right subset of an average of 11.12.456 pixel coordinates
per 3-minute trial. We implement a robust pipeline that results in accurate
trajectories with covariance of x,y,z-position as a confidence metric. It deals
dynamically with appearing and disappearing objects, instantiating new Extended
Kalman Filters. It scales to hundreds of table-setting trials with very little
human annotation input, even with the camera poses of each trial unknown. The
code is available at https://github.com/LarsBredereke/object_tracking
| no_new_dataset | 0.93511 |
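The per-object filtering step mentioned in the record above (one filter instantiated per tracked object, with the x,y,z position covariance as a confidence metric) can be sketched with a constant-velocity Kalman filter over 3D position. With a linear motion and measurement model, the Extended Kalman Filter reduces to this standard form, so the code below is a simplification; the frame rate and noise values are assumptions.

```python
import numpy as np

dt = 1 / 30.0                                 # assumed frame interval (30 fps)
# State: [x, y, z, vx, vy, vz]; measurement: triangulated 3D position.
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
Q = 1e-3 * np.eye(6)                          # process noise (a tuning choice)
R = 1e-2 * np.eye(3)                          # measurement noise

def kf_step(x, P, z):
    """One predict/update cycle; returns filtered state and covariance.
    The diagonal of P[:3, :3] is the position covariance that serves as
    the confidence metric described in the abstract."""
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # update with measurement z
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)
x, P = kf_step(x, P, z=np.array([0.5, 0.2, 0.9]))
print(x[:3], np.diag(P)[:3])                  # filtered position and variance
```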
2503.04324 | Robin Haunschild | Robin Haunschild and Lutz Bornmann | Paper self-citation: An unexplored phenomenon | 12 pages, 4 tables, and 4 figures | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we investigated a phenomenon that one intuitively would assume
does not exist: self-citations at the paper level. Actually, papers citing
themselves do exist in the Web of Science (WoS) database. In total, we obtained
44,857 papers that have self-citation relations in the WoS raw dataset. In
part, they are database artefacts but in part they are due to papers citing
themselves in the conclusion or appendix. We also found cases where paper
self-citations occur due to publisher-made highlights promoting and citing the
paper. We analyzed the self-citing papers according to selected metadata. We
observed accumulations of the number of self-citing papers across publication
years. We found a skewed distribution across countries, journals, authors,
fields, and document types. Finally, we discuss the implications of paper
self-citations for bibliometric indicators.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:17:23 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Haunschild",
"Robin",
""
],
[
"Bornmann",
"Lutz",
""
]
]
| TITLE: Paper self-citation: An unexplored phenomenon
ABSTRACT: In this study, we investigated a phenomenon that one intuitively would assume
does not exist: self-citations at the paper level. Actually, papers citing
themselves do exist in the Web of Science (WoS) database. In total, we obtained
44,857 papers that have self-citation relations in the WoS raw dataset. In
part, they are database artefacts but in part they are due to papers citing
themselves in the conclusion or appendix. We also found cases where paper
self-citations occur due to publisher-made highlights promoting and citing the
paper. We analyzed the self-citing papers according to selected metadata. We
observed accumulations of the number of self-citing papers across publication
years. We found a skewed distribution across countries, journals, authors,
fields, and document types. Finally, we discuss the implications of paper
self-citations for bibliometric indicators.
| no_new_dataset | 0.949995 |
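Operationally, a paper self-citation is a citation edge whose two endpoints coincide, so detecting them in an edge list is a one-line filter. The sketch below uses a hypothetical (citing_id, cited_id) table, not the actual WoS schema.

```python
import pandas as pd

# Hypothetical citation edge list; in WoS this would come from the raw
# citation records, one row per (citing paper, cited paper) pair.
edges = pd.DataFrame({
    "citing_id": ["A", "B", "C", "C"],
    "cited_id":  ["B", "B", "A", "C"],
})

# A paper self-citation is simply an edge whose endpoints coincide.
self_cites = edges[edges.citing_id == edges.cited_id]
print(self_cites)        # -> the row where paper C cites itself
print(len(self_cites))   # count, analogous to the 44,857 papers found in WoS
```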
2503.04328 | Tadej \v{S}kvorc | Tadej \v{S}kvorc and Marko Robnik-\v{S}ikonja | Solving Word-Sense Disambiguation and Word-Sense Induction with
Dictionary Examples | 12 pages, 1 figure | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many less-resourced languages struggle with a lack of large, task-specific
datasets that are required for solving relevant tasks with modern
transformer-based large language models (LLMs). On the other hand, many
linguistic resources, such as dictionaries, are rarely used in this context
despite their large information content. We show how LLMs can be used to
extend existing language resources in less-resourced languages for two
important tasks: word-sense disambiguation (WSD) and word-sense induction
(WSI). We approach the two tasks through the related but much more accessible
word-in-context (WiC) task where, given a pair of sentences and a target word,
a classification model is tasked with predicting whether the sense of a given
word differs between sentences. We demonstrate that a well-trained model for
this task can distinguish between different word senses and can be adapted to
solve the WSD and WSI tasks. The advantage of using the WiC task, instead of
directly predicting senses, is that the WiC task does not need pre-constructed
sense inventories with a sufficient number of examples for each sense, which
are rarely available in less-resourced languages. We show that sentence pairs
for the WiC task can be successfully generated from dictionary examples using
LLMs. The resulting prediction models outperform existing models on WiC, WSD,
and WSI tasks. We demonstrate our methodology on the Slovene language, where a
monolingual dictionary is available, but word-sense resources are tiny.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:27:55 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Škvorc",
"Tadej",
""
],
[
"Robnik-Šikonja",
"Marko",
""
]
]
| TITLE: Solving Word-Sense Disambiguation and Word-Sense Induction with
Dictionary Examples
ABSTRACT: Many less-resourced languages struggle with a lack of large, task-specific
datasets that are required for solving relevant tasks with modern
transformer-based large language models (LLMs). On the other hand, many
linguistic resources, such as dictionaries, are rarely used in this context
despite their large information content. We show how LLMs can be used to
extend existing language resources in less-resourced languages for two
important tasks: word-sense disambiguation (WSD) and word-sense induction
(WSI). We approach the two tasks through the related but much more accessible
word-in-context (WiC) task where, given a pair of sentences and a target word,
a classification model is tasked with predicting whether the sense of a given
word differs between sentences. We demonstrate that a well-trained model for
this task can distinguish between different word senses and can be adapted to
solve the WSD and WSI tasks. The advantage of using the WiC task, instead of
directly predicting senses, is that the WiC task does not need pre-constructed
sense inventories with a sufficient number of examples for each sense, which
are rarely available in less-resourced languages. We show that sentence pairs
for the WiC task can be successfully generated from dictionary examples using
LLMs. The resulting prediction models outperform existing models on WiC, WSD,
and WSI tasks. We demonstrate our methodology on the Slovene language, where a
monolingual dictionary is available, but word-sense resources are tiny.
| no_new_dataset | 0.947088 |
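The pair-construction logic behind the WiC framing above can be sketched as follows: positive pairs take two example sentences of the same dictionary sense, negative pairs take examples from different senses of the same lemma. The dictionary structure here is hypothetical, and the LLM-based generation of additional examples is abstracted away.

```python
from itertools import combinations

# Hypothetical dictionary: lemma -> list of senses, each a list of example
# sentences (these may come from the dictionary or be generated by an LLM).
senses = {
    "bank": [
        ["She deposited cash at the bank.", "The bank approved the loan."],
        ["They picnicked on the river bank.", "The bank eroded after the flood."],
    ],
}

def make_wic_pairs(senses):
    pairs = []  # (sentence_1, sentence_2, target_word, same_sense_label)
    for lemma, sense_list in senses.items():
        for examples in sense_list:                    # positives: same sense
            for s1, s2 in combinations(examples, 2):
                pairs.append((s1, s2, lemma, True))
        for ex_a, ex_b in combinations(sense_list, 2): # negatives: cross-sense
            for s1 in ex_a:
                for s2 in ex_b:
                    pairs.append((s1, s2, lemma, False))
    return pairs

for p in make_wic_pairs(senses):
    print(p)
```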
2503.04338 | Yingli Zhou | Yingli Zhou, Yaodong Su, Youran Sun, Shu Wang, Taotao Wang, Runyuan
He, Yongwei Zhang, Sicong Liang, Xilin Liu, Yuchi Ma, Yixiang Fang | In-depth Analysis of Graph-based RAG in a Unified Framework | null | null | null | null | cs.IR cs.CL cs.DB | http://creativecommons.org/licenses/by/4.0/ | Graph-based Retrieval-Augmented Generation (RAG) has proven effective in
integrating external knowledge into large language models (LLMs), improving
their factual accuracy, adaptability, interpretability, and trustworthiness. A
number of graph-based RAG methods have been proposed in the literature.
However, these methods have not been systematically and comprehensively
compared under the same experimental settings. In this paper, we first
summarize a unified framework to incorporate all graph-based RAG methods from a
high-level perspective. We then extensively compare representative graph-based
RAG methods over a range of question-answering (QA) datasets -- from specific
questions to abstract questions -- and examine the effectiveness of all
methods, providing a thorough analysis of graph-based RAG approaches. As a
byproduct of our experimental analysis, we are also able to identify new
variants of the graph-based RAG methods over specific QA and abstract QA tasks
respectively, by combining existing techniques, which outperform the
state-of-the-art methods. Finally, based on these findings, we offer promising
research opportunities. We believe that a deeper understanding of the behavior
of existing methods can provide new valuable insights for future research.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:34:49 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhou",
"Yingli",
""
],
[
"Su",
"Yaodong",
""
],
[
"Sun",
"Youran",
""
],
[
"Wang",
"Shu",
""
],
[
"Wang",
"Taotao",
""
],
[
"He",
"Runyuan",
""
],
[
"Zhang",
"Yongwei",
""
],
[
"Liang",
"Sicong",
""
],
[
"Liu",
"Xilin",
""
],
[
"Ma",
"Yuchi",
""
],
[
"Fang",
"Yixiang",
""
]
]
| TITLE: In-depth Analysis of Graph-based RAG in a Unified Framework
ABSTRACT: Graph-based Retrieval-Augmented Generation (RAG) has proven effective in
integrating external knowledge into large language models (LLMs), improving
their factual accuracy, adaptability, interpretability, and trustworthiness. A
number of graph-based RAG methods have been proposed in the literature.
However, these methods have not been systematically and comprehensively
compared under the same experimental settings. In this paper, we first
summarize a unified framework to incorporate all graph-based RAG methods from a
high-level perspective. We then extensively compare representative graph-based
RAG methods over a range of question-answering (QA) datasets -- from specific
questions to abstract questions -- and examine the effectiveness of all
methods, providing a thorough analysis of graph-based RAG approaches. As a
byproduct of our experimental analysis, we are also able to identify new
variants of the graph-based RAG methods over specific QA and abstract QA tasks
respectively, by combining existing techniques, which outperform the
state-of-the-art methods. Finally, based on these findings, we offer promising
research opportunities. We believe that a deeper understanding of the behavior
of existing methods can provide new valuable insights for future research.
| no_new_dataset | 0.941708 |
2503.04342 | Ivan Oleksiyuk | Ivan Oleksiyuk, Svyatoslav Voloshynovskiy, Tobias Golling | TRANSIT your events into a new mass: Fast background interpolation for
weakly-supervised anomaly searches | 34 pages, 14 figures | null | null | null | hep-ph cs.LG hep-ex | http://creativecommons.org/licenses/by/4.0/ | We introduce a new model for conditional and continuous data morphing called
TRansport Adversarial Network for Smooth InTerpolation (TRANSIT). We apply it
to create a background data template for weakly-supervised searches at the LHC.
The method smoothly transforms sideband events to match signal region mass
distributions. We demonstrate the performance of TRANSIT using the LHC Olympics
R\&D dataset. The model captures non-linear mass correlations of features and
produces a template that offers a competitive anomaly sensitivity compared to
state-of-the-art transport-based template generators. Moreover, the
computational training time required for TRANSIT is an order of magnitude lower
than that of competing deep learning methods. This makes it ideal for analyses
that iterate over many signal regions and signal models. Unlike generative
models, which must learn a full probability density distribution, i.e., the
correlations between all the variables, the proposed transport model only has
to learn a smooth conditional shift of the distribution. This allows for a
simpler, more efficient residual architecture, enabling mass uncorrelated
features to pass the network unchanged while the mass correlated features are
adjusted accordingly. Furthermore, we show that the latent space of the model
provides a set of mass decorrelated features useful for anomaly detection
without background sculpting.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:39:07 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Oleksiyuk",
"Ivan",
""
],
[
"Voloshynovskiy",
"Svyatoslav",
""
],
[
"Golling",
"Tobias",
""
]
]
| TITLE: TRANSIT your events into a new mass: Fast background interpolation for
weakly-supervised anomaly searches
ABSTRACT: We introduce a new model for conditional and continuous data morphing called
TRansport Adversarial Network for Smooth InTerpolation (TRANSIT). We apply it
to create a background data template for weakly-supervised searches at the LHC.
The method smoothly transforms sideband events to match signal region mass
distributions. We demonstrate the performance of TRANSIT using the LHC Olympics
R\&D dataset. The model captures non-linear mass correlations of features and
produces a template that offers a competitive anomaly sensitivity compared to
state-of-the-art transport-based template generators. Moreover, the
computational training time required for TRANSIT is an order of magnitude lower
than that of competing deep learning methods. This makes it ideal for analyses
that iterate over many signal regions and signal models. Unlike generative
models, which must learn a full probability density distribution, i.e., the
correlations between all the variables, the proposed transport model only has
to learn a smooth conditional shift of the distribution. This allows for a
simpler, more efficient residual architecture, enabling mass uncorrelated
features to pass the network unchanged while the mass correlated features are
adjusted accordingly. Furthermore, we show that the latent space of the model
provides a set of mass decorrelated features useful for anomaly detection
without background sculpting.
| no_new_dataset | 0.954351 |
2503.04350 | Joana Sim\~oes | Joana Sim\~oes and Jo\~ao Correia | EDCA -- An Evolutionary Data-Centric AutoML Framework for Efficient
Pipelines | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated Machine Learning (AutoML) gained popularity due to the increased
demand for Machine Learning (ML) specialists, allowing them to apply ML
techniques effortlessly and quickly. AutoML implementations use optimisation
methods to identify the most effective ML solution for a given dataset, aiming
to improve one or more predefined metrics. However, most implementations focus
on model selection and hyperparameter tuning. Despite being an important factor
in obtaining high-performance ML systems, data quality is usually an overlooked
part of AutoML and continues to be a manual and time-consuming task. This work
presents EDCA, an Evolutionary Data Centric AutoML framework. In addition to
the traditional tasks such as selecting the best models and hyperparameters,
EDCA enhances the given data by optimising data processing tasks such as data
reduction and cleaning according to the problems' needs. All these steps create
an ML pipeline that is optimised by an evolutionary algorithm. To assess its
effectiveness, EDCA was compared to FLAML and TPOT, two frameworks at the top
of the AutoML benchmarks. The frameworks were evaluated in the same conditions
using datasets from AMLB classification benchmarks. EDCA achieved statistically
similar results in performance to FLAML and TPOT but used significantly less
data to train the final solutions. Moreover, EDCA's experimental results reveal
that good performance can be achieved using less data and efficient ML
algorithm choices that align with Green AutoML guidelines.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:46:07 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Simões",
"Joana",
""
],
[
"Correia",
"João",
""
]
]
| TITLE: EDCA -- An Evolutionary Data-Centric AutoML Framework for Efficient
Pipelines
ABSTRACT: Automated Machine Learning (AutoML) gained popularity due to the increased
demand for Machine Learning (ML) specialists, allowing them to apply ML
techniques effortlessly and quickly. AutoML implementations use optimisation
methods to identify the most effective ML solution for a given dataset, aiming
to improve one or more predefined metrics. However, most implementations focus
on model selection and hyperparameter tuning. Despite being an important factor
in obtaining high-performance ML systems, data quality is usually an overlooked
part of AutoML and continues to be a manual and time-consuming task. This work
presents EDCA, an Evolutionary Data Centric AutoML framework. In addition to
the traditional tasks such as selecting the best models and hyperparameters,
EDCA enhances the given data by optimising data processing tasks such as data
reduction and cleaning according to the problems' needs. All these steps create
an ML pipeline that is optimised by an evolutionary algorithm. To assess its
effectiveness, EDCA was compared to FLAML and TPOT, two frameworks at the top
of the AutoML benchmarks. The frameworks were evaluated in the same conditions
using datasets from AMLB classification benchmarks. EDCA achieved statistically
similar results in performance to FLAML and TPOT but used significantly less
data to train the final solutions. Moreover, EDCA's experimental results reveal
that good performance can be achieved using less data and efficient ML
algorithm choices that align with Green AutoML guidelines.
| no_new_dataset | 0.947721 |
2503.04355 | Changze Lv | Zhenghua Wang, Yiran Ding, Changze Lv, Zhibo Xu, Tianlong Li, Tianyuan
Shi, Xiaoqing Zheng, Xuanjing Huang | Layer-Specific Scaling of Positional Encodings for Superior Long-Context
Modeling | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although large language models (LLMs) have achieved significant progress in
handling long-context inputs, they still suffer from the ``lost-in-the-middle''
problem, where crucial information in the middle of the context is often
underrepresented or lost. Our extensive experiments reveal that this issue may
arise from the rapid long-term decay in Rotary Position Embedding (RoPE). To
address this problem, we propose a layer-specific positional encoding scaling
method that assigns distinct scaling factors to each layer, slowing down the
decay rate caused by RoPE to make the model pay more attention to the middle
context. A specially designed genetic algorithm is employed to efficiently
select the optimal scaling factors for each layer by incorporating Bezier
curves to reduce the search space. Through comprehensive experimentation, we
demonstrate that our method significantly alleviates the ``lost-in-the-middle''
problem. Our approach results in an average accuracy improvement of up to 20%
on the Key-Value Retrieval dataset. Furthermore, we show that layer-specific
interpolation, as opposed to uniform interpolation across all layers, enhances
the model's extrapolation capabilities when combined with PI and Dynamic-NTK
positional encoding schemes.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 11:59:55 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Wang",
"Zhenghua",
""
],
[
"Ding",
"Yiran",
""
],
[
"Lv",
"Changze",
""
],
[
"Xu",
"Zhibo",
""
],
[
"Li",
"Tianlong",
""
],
[
"Shi",
"Tianyuan",
""
],
[
"Zheng",
"Xiaoqing",
""
],
[
"Huang",
"Xuanjing",
""
]
]
| TITLE: Layer-Specific Scaling of Positional Encodings for Superior Long-Context
Modeling
ABSTRACT: Although large language models (LLMs) have achieved significant progress in
handling long-context inputs, they still suffer from the ``lost-in-the-middle''
problem, where crucial information in the middle of the context is often
underrepresented or lost. Our extensive experiments reveal that this issue may
arise from the rapid long-term decay in Rotary Position Embedding (RoPE). To
address this problem, we propose a layer-specific positional encoding scaling
method that assigns distinct scaling factors to each layer, slowing down the
decay rate caused by RoPE to make the model pay more attention to the middle
context. A specially designed genetic algorithm is employed to efficiently
select the optimal scaling factors for each layer by incorporating Bezier
curves to reduce the search space. Through comprehensive experimentation, we
demonstrate that our method significantly alleviates the ``lost-in-the-middle''
problem. Our approach results in an average accuracy improvement of up to 20%
on the Key-Value Retrieval dataset. Furthermore, we show that layer-specific
interpolation, as opposed to uniform interpolation across all layers, enhances
the model's extrapolation capabilities when combined with PI and Dynamic-NTK
positional encoding schemes.
| no_new_dataset | 0.946498 |
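To make the proposal above concrete: standard RoPE rotates each query/key coordinate pair by angles m·θ_i, and a layer-specific factor s_ℓ < 1 slows the effective rotation, and with it RoPE's long-term decay, at that layer. The sketch below applies such a factor to a minimal RoPE implementation; exactly how the paper injects s_ℓ is an assumption.

```python
import numpy as np

def rope(x, positions, layer_scale=1.0, base=10000.0):
    """Apply rotary position embedding with a per-layer scaling factor.

    x: (seq_len, dim) queries or keys, dim even.
    layer_scale: s_l in (0, 1]; smaller values slow the rotation at this
    layer, weakening RoPE's long-term decay (the paper searches these
    factors per layer with a genetic algorithm over Bezier-parameterized
    curves, which is not reproduced here).
    """
    seq_len, dim = x.shape
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # theta_i per pair
    ang = layer_scale * positions[:, None] * inv_freq[None, :]
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                 # 2D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = np.random.randn(8, 64)
pos = np.arange(8, dtype=float)
q_rot = rope(q, pos, layer_scale=0.5)   # half-speed rotation for this layer
```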
2503.04357 | Zhen Yu | Zhen Yu, Jianan Han, Yang Liu, Qingchao Chen | scDD: Latent Codes Based scRNA-seq Dataset Distillation with Foundation
Model Knowledge | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Single-cell RNA sequencing (scRNA-seq) technology has profiled hundreds of
millions of human cells across organs, diseases, development and perturbations
to date. However, the high-dimensional sparsity, batch effect noise, category
imbalance, and ever-increasing data scale of the original sequencing data pose
significant challenges for multi-center knowledge transfer, data fusion, and
cross-validation between scRNA-seq datasets. To address these barriers, (1) we
first propose a latent codes-based scRNA-seq dataset distillation framework
named scDD, which transfers and distills foundation model knowledge and
original dataset information into a compact latent space and generates a
synthetic scRNA-seq dataset by a generator to replace the original dataset.
Then, (2) we propose a single-step conditional diffusion generator named SCDG,
which performs single-step gradient back-propagation to help scDD optimize
distillation quality and avoid gradient decay caused by multi-step
back-propagation. Meanwhile, SCDG ensures the scRNA-seq data characteristics
and inter-class discriminability of the synthetic dataset through flexible
conditional control and generation quality assurance. Finally, we propose a
comprehensive benchmark to evaluate the performance of scRNA-seq dataset
distillation in different data analysis tasks. It is validated that our
proposed method can achieve 7.61% absolute and 15.70% relative improvement over
previous state-of-the-art methods on average across tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:01:20 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Yu",
"Zhen",
""
],
[
"Han",
"Jianan",
""
],
[
"Liu",
"Yang",
""
],
[
"Chen",
"Qingchao",
""
]
]
| TITLE: scDD: Latent Codes Based scRNA-seq Dataset Distillation with Foundation
Model Knowledge
ABSTRACT: Single-cell RNA sequencing (scRNA-seq) technology has profiled hundreds of
millions of human cells across organs, diseases, development and perturbations
to date. However, the high-dimensional sparsity, batch effect noise, category
imbalance, and ever-increasing data scale of the original sequencing data pose
significant challenges for multi-center knowledge transfer, data fusion, and
cross-validation between scRNA-seq datasets. To address these barriers, (1) we
first propose a latent codes-based scRNA-seq dataset distillation framework
named scDD, which transfers and distills foundation model knowledge and
original dataset information into a compact latent space and generates a
synthetic scRNA-seq dataset by a generator to replace the original dataset.
Then, (2) we propose a single-step conditional diffusion generator named SCDG,
which performs single-step gradient back-propagation to help scDD optimize
distillation quality and avoid gradient decay caused by multi-step
back-propagation. Meanwhile, SCDG ensures the scRNA-seq data characteristics
and inter-class discriminability of the synthetic dataset through flexible
conditional control and generation quality assurance. Finally, we propose a
comprehensive benchmark to evaluate the performance of scRNA-seq dataset
distillation in different data analysis tasks. It is validated that our
proposed method can achieve 7.61% absolute and 15.70% relative improvement over
previous state-of-the-art methods on average across tasks.
| no_new_dataset | 0.933975 |
2503.04370 | Antonio Guill\'en Teruel | Antonio Guill\'en-Teruel (1), Marcos Caracena (1), Jose A. Pardo (1),
Fernando de-la-G\'andara (1), Jos\'e Palma (1), Juan A. Bot\'ia (1,2) ((1)
Departamento de Ingenier\'ia de la Informaci\'on y Las Comunicaciones,
Universidad de Murcia, Murcia, 30100, Murcia, Spain, (2) Department of
Neurodegenerative Disease, Institute of Neurology, University College London,
London, WC1N 3BG, UK.) | FILM: Framework for Imbalanced Learning Machines based on a new unbiased
performance measure and a new ensemble-based technique | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This research addresses the challenges of handling unbalanced datasets for
binary classification tasks. In such scenarios, standard evaluation metrics are
often biased by the disproportionate representation of the minority class.
Conducting experiments across seven datasets, we uncovered inconsistencies in
evaluation metrics when determining the model that outperforms others for each
binary classification problem. This justifies the need for a metric that
provides a more consistent and unbiased evaluation across unbalanced datasets,
thereby supporting robust model selection. To mitigate this problem, we propose
a novel metric, the Unbiased Integration Coefficients (UIC), which exhibits
significantly reduced bias ($p < 10^{-4}$) towards the minority class compared
to conventional metrics. The UIC is constructed by aggregating existing metrics
while penalising those more prone to imbalance. In addition, we introduce the
Identical Partitions for Imbalance Problems (IPIP) algorithm for imbalanced ML
problems, an ensemble-based approach. Our experimental results show that IPIP
outperforms other baseline imbalance-aware approaches using Random Forest and
Logistic Regression models in three out of seven datasets as assessed by the
UIC metric, demonstrating its effectiveness in addressing imbalanced data
challenges in binary classification tasks. This new framework for dealing with
imbalanced datasets is materialized in the FILM (Framework for Imbalanced
Learning Machines) R Package, accessible at https://github.com/antoniogt/FILM.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:15:56 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Guillén-Teruel",
"Antonio",
""
],
[
"Caracena",
"Marcos",
""
],
[
"Pardo",
"Jose A.",
""
],
[
"de-la-Gándara",
"Fernando",
""
],
[
"Palma",
"José",
""
],
[
"Botía",
"Juan A.",
""
]
]
| TITLE: FILM: Framework for Imbalanced Learning Machines based on a new unbiased
performance measure and a new ensemble-based technique
ABSTRACT: This research addresses the challenges of handling unbalanced datasets for
binary classification tasks. In such scenarios, standard evaluation metrics are
often biased by the disproportionate representation of the minority class.
Conducting experiments across seven datasets, we uncovered inconsistencies in
evaluation metrics when determining the model that outperforms others for each
binary classification problem. This justifies the need for a metric that
provides a more consistent and unbiased evaluation across unbalanced datasets,
thereby supporting robust model selection. To mitigate this problem, we propose
a novel metric, the Unbiased Integration Coefficients (UIC), which exhibits
significantly reduced bias ($p < 10^{-4}$) towards the minority class compared
to conventional metrics. The UIC is constructed by aggregating existing metrics
while penalising those more prone to imbalance. In addition, we introduce the
Identical Partitions for Imbalance Problems (IPIP) algorithm for imbalanced ML
problems, an ensemble-based approach. Our experimental results show that IPIP
outperforms other baseline imbalance-aware approaches using Random Forest and
Logistic Regression models in three out of seven datasets as assessed by the
UIC metric, demonstrating its effectiveness in addressing imbalanced data
challenges in binary classification tasks. This new framework for dealing with
imbalanced datasets is materialized in the FILM (Framework for Imbalanced
Learning Machines) R Package, accessible at https://github.com/antoniogt/FILM.
| no_new_dataset | 0.949809 |
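The abstract above does not spell out IPIP's construction, but ensembles over balanced partitions of the majority class are a common pattern for imbalance problems; the sketch below illustrates that pattern (one model per majority-class partition paired with the minority class, probabilities averaged) and should not be read as the exact IPIP algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def balanced_ensemble_fit(X, y, n_models=5, seed=0):
    """Train one model per balanced partition: each partition pairs the full
    minority class with an equally sized slice of the majority class."""
    rng = np.random.default_rng(seed)
    min_idx = np.where(y == 1)[0]
    maj_idx = rng.permutation(np.where(y == 0)[0])
    models = []
    for chunk in np.array_split(maj_idx, n_models):
        idx = np.concatenate([min_idx, chunk[: len(min_idx)]])
        models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))
    return models

def balanced_ensemble_predict_proba(models, X):
    # Average the per-model minority-class probabilities.
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.1).astype(int)      # ~10% minority class
models = balanced_ensemble_fit(X, y)
print(balanced_ensemble_predict_proba(models, X[:3]))
```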
2503.04372 | Orfeas Menis-Mastromichalakis | Orfeas Menis Mastromichalakis, Giorgos Filandrianos, Maria Symeonaki
and Giorgos Stamou | Assumed Identities: Quantifying Gender Bias in Machine Translation of
Ambiguous Occupational Terms | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Machine Translation (MT) systems frequently encounter ambiguous scenarios
where they must assign gender to certain occupations when translating without
explicit guidance or contextual cues. While individual translations in such
cases may not be inherently biased, systematic patterns-such as the repeated
association of certain professions with specific genders-can emerge, reflecting
and perpetuating societal stereotypes. This ambiguity challenges traditional
instance-level single-answer evaluation approaches, as no single gold standard
translation exists. To address this, we propose an approach that evaluates
gender bias through aggregated model responses. Specifically, we introduce a
methodology to detect gender imbalances between source texts and translations,
a benchmarking dataset with ambiguous English inputs, and probability-based
metrics to quantify a model's divergence from normative standards or reference
distributions.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:16:14 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Mastromichalakis",
"Orfeas Menis",
""
],
[
"Filandrianos",
"Giorgos",
""
],
[
"Symeonaki",
"Maria",
""
],
[
"Stamou",
"Giorgos",
""
]
]
| TITLE: Assumed Identities: Quantifying Gender Bias in Machine Translation of
Ambiguous Occupational Terms
ABSTRACT: Machine Translation (MT) systems frequently encounter ambiguous scenarios
where they must assign gender to certain occupations when translating without
explicit guidance or contextual cues. While individual translations in such
cases may not be inherently biased, systematic patterns-such as the repeated
association of certain professions with specific genders-can emerge, reflecting
and perpetuating societal stereotypes. This ambiguity challenges traditional
instance-level single-answer evaluation approaches, as no single gold standard
translation exists. To address this, we propose an approach that evaluates
gender bias through aggregated model responses. Specifically, we introduce a
methodology to detect gender imbalances between source texts and translations,
a benchmarking dataset with ambiguous English inputs, and probability-based
metrics to quantify a model's divergence from normative standards or reference
distributions.
| no_new_dataset | 0.919823 |
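One simple instance of the probability-based comparison described above is a KL divergence between a model's aggregated gender distribution for an occupation and a reference distribution; the paper's actual metrics and reference standards may differ, and the numbers below are hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over gender labels."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Aggregated model behavior for one ambiguous occupation ("the nurse"),
# estimated from many sampled translations: [feminine, masculine].
model_dist = [0.92, 0.08]        # hypothetical normalized counts
uniform_ref = [0.5, 0.5]         # a normative 50/50 reference

print(kl_divergence(model_dist, uniform_ref))  # > 0: divergence from parity
```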
2503.04376 | Peng Xu | Peng Xu, Zhiyu Xiang, Jingyun Fu, Tianyu Pu, Hanzhi Zhong, Eryun Liu | MIDAS: Modeling Ground-Truth Distributions with Dark Knowledge for
Domain Generalized Stereo Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the significant advances in domain generalized stereo matching,
existing methods still exhibit domain-specific preferences when transferring
from synthetic to real domains, hindering their practical applications in
complex and diverse scenarios. The probability distributions predicted by the
stereo network naturally encode rich similarity and uncertainty information.
Inspired by this observation, we propose to extract these two types of dark
knowledge from the pre-trained network to model intuitive multi-modal
ground-truth distributions for both edge and non-edge regions. To mitigate the
inherent domain preferences of a single network, we adopt network ensemble and
further distinguish between objective and biased knowledge in the Laplace
parameter space. Finally, the objective knowledge and the original disparity
labels are jointly modeled as a mixture of Laplacians to provide fine-grained
supervision for the stereo network training. Extensive experiments demonstrate
that: 1) Our method is generic and effectively improves the generalization of
existing networks. 2) PCWNet with our method achieves the state-of-the-art
generalization performance on both KITTI 2015 and 2012 datasets. 3) Our method
outperforms existing methods in comprehensive ranking across four popular
real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:27:58 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Xu",
"Peng",
""
],
[
"Xiang",
"Zhiyu",
""
],
[
"Fu",
"Jingyun",
""
],
[
"Pu",
"Tianyu",
""
],
[
"Zhong",
"Hanzhi",
""
],
[
"Liu",
"Eryun",
""
]
]
| TITLE: MIDAS: Modeling Ground-Truth Distributions with Dark Knowledge for
Domain Generalized Stereo Matching
ABSTRACT: Despite the significant advances in domain generalized stereo matching,
existing methods still exhibit domain-specific preferences when transferring
from synthetic to real domains, hindering their practical applications in
complex and diverse scenarios. The probability distributions predicted by the
stereo network naturally encode rich similarity and uncertainty information.
Inspired by this observation, we propose to extract these two types of dark
knowledge from the pre-trained network to model intuitive multi-modal
ground-truth distributions for both edge and non-edge regions. To mitigate the
inherent domain preferences of a single network, we adopt network ensemble and
further distinguish between objective and biased knowledge in the Laplace
parameter space. Finally, the objective knowledge and the original disparity
labels are jointly modeled as a mixture of Laplacians to provide fine-grained
supervision for the stereo network training. Extensive experiments demonstrate
that: 1) Our method is generic and effectively improves the generalization of
existing networks. 2) PCWNet with our method achieves the state-of-the-art
generalization performance on both KITTI 2015 and 2012 datasets. 3) Our method
outperforms existing methods in comprehensive ranking across four popular
real-world datasets.
| no_new_dataset | 0.949201 |
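For concreteness, the per-pixel ground-truth disparity distribution described above can be written as a mixture of Laplace components with weights w_k, locations mu_k, and scales b_k; the stereo network is then supervised against this distribution. How the components are built from dark knowledge is specific to the paper and not reproduced here.

```latex
% Mixture-of-Laplacians model of the ground-truth disparity d at a pixel:
% w_k >= 0 sum to one; mu_k are component centers (e.g., the disparity
% label and similarity modes), b_k their scales.
p(d) \;=\; \sum_{k=1}^{K} \frac{w_k}{2 b_k}\,
           \exp\!\left(-\frac{\lvert d - \mu_k \rvert}{b_k}\right),
\qquad w_k \ge 0,\ \ \sum_{k=1}^{K} w_k = 1
```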
2503.04381 | Cheng-Han Chiang | Cheng-Han Chiang, Hung-yi Lee, Michal Lukasik | TRACT: Regression-Aware Fine-tuning Meets Chain-of-Thought Reasoning for
LLM-as-a-Judge | Codes and models are available at https://github.com/d223302/TRACT | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The LLM-as-a-judge paradigm uses large language models (LLMs) for automated
text evaluation, where a numerical assessment is assigned by an LLM to the
input text following scoring rubrics. Existing methods for LLM-as-a-judge use
cross-entropy (CE) loss for fine-tuning, which neglects the numeric nature of
score prediction. Recent work addresses numerical prediction limitations of LLM
fine-tuning through regression-aware fine-tuning, which, however, does not
consider chain-of-thought (CoT) reasoning for score prediction. In this paper,
we introduce TRACT (Two-stage Regression-Aware fine-tuning with CoT), a method
combining CoT reasoning with regression-aware training. TRACT consists of two
stages: first, a seed LLM is fine-tuned to generate CoTs, which serve as
supervision for the second-stage fine-tuning. The training objective of TRACT
combines the CE loss for learning the CoT reasoning capabilities, and the
regression-aware loss for the score prediction. Experiments across four
LLM-as-a-judge datasets and two LLMs show that TRACT significantly outperforms
existing methods. Extensive ablation studies validate the importance of each
component in TRACT.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:33:20 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Chiang",
"Cheng-Han",
""
],
[
"Lee",
"Hung-yi",
""
],
[
"Lukasik",
"Michal",
""
]
]
| TITLE: TRACT: Regression-Aware Fine-tuning Meets Chain-of-Thought Reasoning for
LLM-as-a-Judge
ABSTRACT: The LLM-as-a-judge paradigm uses large language models (LLMs) for automated
text evaluation, where a numerical assessment is assigned by an LLM to the
input text following scoring rubrics. Existing methods for LLM-as-a-judge use
cross-entropy (CE) loss for fine-tuning, which neglects the numeric nature of
score prediction. Recent work addresses numerical prediction limitations of LLM
fine-tuning through regression-aware fine-tuning, which, however, does not
consider chain-of-thought (CoT) reasoning for score prediction. In this paper,
we introduce TRACT (Two-stage Regression-Aware fine-tuning with CoT), a method
combining CoT reasoning with regression-aware training. TRACT consists of two
stages: first, a seed LLM is fine-tuned to generate CoTs, which serve as
supervision for the second-stage fine-tuning. The training objective of TRACT
combines the CE loss for learning the CoT reasoning capabilities, and the
regression-aware loss for the score prediction. Experiments across four
LLM-as-a-judge datasets and two LLMs show that TRACT significantly outperforms
existing methods. Extensive ablation studies validate the importance of each
component in TRACT.
| no_new_dataset | 0.947769 |
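The two-part objective described above can be sketched as cross-entropy over the generated CoT tokens plus a regression-aware term on the score: form the expected score under the model's distribution over score tokens and penalize its squared distance to the label. The score-token vocabulary (here 1 to 5) and the weighting lam are assumptions.

```python
import torch
import torch.nn.functional as F

def tract_style_loss(cot_logits, cot_targets, score_logits, score_label,
                     score_values=torch.arange(1.0, 6.0), lam=1.0):
    """CE on chain-of-thought tokens + regression-aware loss on the score.

    cot_logits: (T, V) logits for the CoT token sequence.
    score_logits: (K,) logits restricted to the K score tokens (here 1..5).
    """
    ce = F.cross_entropy(cot_logits, cot_targets)      # learn the CoT
    probs = torch.softmax(score_logits, dim=-1)
    expected_score = (probs * score_values).sum()      # E[score] under model
    reg = (expected_score - score_label) ** 2          # numeric-aware term
    return ce + lam * reg

T, V, K = 12, 100, 5
loss = tract_style_loss(torch.randn(T, V), torch.randint(0, V, (T,)),
                        torch.randn(K), score_label=torch.tensor(4.0))
print(loss)
```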
2503.04388 | Shahar Levy | Shahar Levy, Nir Mazor, Lihi Shalmon, Michael Hassid, Gabriel
Stanovsky | More Documents, Same Length: Isolating the Challenge of Multiple
Documents in RAG | Preprint | null | null | null | cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | Retrieval-augmented generation (RAG) provides LLMs with relevant documents.
Although previous studies noted that retrieving many documents can degrade
performance, they did not isolate how the quantity of documents affects
performance while controlling for context length. We evaluate various language
models on custom datasets derived from a multi-hop QA task. We keep the context
length and position of relevant information constant while varying the number
of documents, and find that increasing the document count in RAG settings poses
significant challenges for LLMs. Additionally, our results indicate that
processing multiple documents is a separate challenge from handling long
contexts. We also make the datasets and code available:
https://github.com/shaharl6000/MoreDocsSameLen.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:38:17 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Levy",
"Shahar",
""
],
[
"Mazor",
"Nir",
""
],
[
"Shalmon",
"Lihi",
""
],
[
"Hassid",
"Michael",
""
],
[
"Stanovsky",
"Gabriel",
""
]
]
| TITLE: More Documents, Same Length: Isolating the Challenge of Multiple
Documents in RAG
ABSTRACT: Retrieval-augmented generation (RAG) provides LLMs with relevant documents.
Although previous studies noted that retrieving many documents can degrade
performance, they did not isolate how the quantity of documents affects
performance while controlling for context length. We evaluate various language
models on custom datasets derived from a multi-hop QA task. We keep the context
length and position of relevant information constant while varying the number
of documents, and find that increasing the document count in RAG settings poses
significant challenges for LLMs. Additionally, our results indicate that
processing multiple documents is a separate challenge from handling long
contexts. We also make the datasets and code available:
https://github.com/shaharl6000/MoreDocsSameLen.
| new_dataset | 0.959383 |
2503.04396 | Xinyi He | Xinyi He, Yihao Liu, Mengyu Zhou, Yeye He, Haoyu Dong, Shi Han, Zejian
Yuan, Dongmei Zhang | TableLoRA: Low-rank Adaptation on Table Structure Understanding for
Large Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tabular data are crucial in many fields and their understanding by large
language models (LLMs) under a high parameter-efficiency paradigm is important.
However, directly applying parameter-efficient fine-tuning (PEFT) techniques to
tabular tasks presents significant challenges, particularly in terms of better
table serialization and the representation of two-dimensional structured
information within a one-dimensional sequence. To address this, we propose
TableLoRA, a module designed to improve LLMs' understanding of table structure
during PEFT. It incorporates special tokens for serializing tables with a
special token encoder and uses 2D LoRA to encode low-rank information on cell
positions. Experiments on four tabular-related datasets demonstrate that
TableLoRA consistently outperforms vanilla LoRA and surpasses various table
encoding methods tested in control experiments. These findings reveal that
TableLoRA, as a table-specific LoRA, enhances the ability of LLMs to process
tabular data effectively, especially in low-parameter settings, demonstrating
its potential as a robust solution for handling table-related tasks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:50:14 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"He",
"Xinyi",
""
],
[
"Liu",
"Yihao",
""
],
[
"Zhou",
"Mengyu",
""
],
[
"He",
"Yeye",
""
],
[
"Dong",
"Haoyu",
""
],
[
"Han",
"Shi",
""
],
[
"Yuan",
"Zejian",
""
],
[
"Zhang",
"Dongmei",
""
]
]
| TITLE: TableLoRA: Low-rank Adaptation on Table Structure Understanding for
Large Language Models
ABSTRACT: Tabular data are crucial in many fields and their understanding by large
language models (LLMs) under a high parameter-efficiency paradigm is important.
However, directly applying parameter-efficient fine-tuning (PEFT) techniques to
tabular tasks presents significant challenges, particularly in terms of better
table serialization and the representation of two-dimensional structured
information within a one-dimensional sequence. To address this, we propose
TableLoRA, a module designed to improve LLMs' understanding of table structure
during PEFT. It incorporates special tokens for serializing tables with a
special token encoder and uses 2D LoRA to encode low-rank information on cell
positions. Experiments on four tabular-related datasets demonstrate that
TableLoRA consistently outperforms vanilla LoRA and surpasses various table
encoding methods tested in control experiments. These findings reveal that
TableLoRA, as a table-specific LoRA, enhances the ability of LLMs to process
tabular data effectively, especially in low-parameter settings, demonstrating
its potential as a robust solution for handling table-related tasks.
| no_new_dataset | 0.946399 |
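As background, a vanilla LoRA adapter adds a trainable low-rank update BA to a frozen weight matrix; the sketch below shows that baseline. TableLoRA's special-token encoder and its 2D positional LoRA are the paper's contributions and are not reproduced here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (vanilla LoRA):
    y = base(x) + (alpha / r) * x A^T B^T."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                # freeze the backbone weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r                     # B starts at 0: no-op init

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)             # torch.Size([2, 64])
```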
2503.04406 | Won-Yong Shin | Yu-Seung Roh, Joo-Young Kim, Jin-Duk Park, Won-Yong Shin | Training-Free Graph Filtering via Multimodal Feature Refinement for
Extremely Fast Multimodal Recommendation | 10 pages, 6 figures, 6 tables | null | null | null | cs.IR cs.AI cs.IT cs.LG cs.SI math.IT | http://creativecommons.org/licenses/by/4.0/ | Multimodal recommender systems improve the performance of canonical
recommender systems with no item features by utilizing diverse content types
such as text, images, and videos, while alleviating inherent sparsity of
user-item interactions and accelerating user engagement. However, current
neural network-based models often incur significant computational overhead due
to the complex training process required to learn and integrate information
from multiple modalities. To overcome this limitation, we propose
MultiModal-Graph Filtering (MM-GF), a training-free method based on the notion
of graph filtering (GF) for efficient and accurate multimodal recommendations.
Specifically, MM-GF first constructs multiple similarity graphs through
nontrivial multimodal feature refinement such as robust scaling and vector
shifting by addressing the heterogeneous characteristics across modalities.
Then, MM-GF optimally fuses multimodal information using linear low-pass
filters across different modalities. Extensive experiments on real-world
benchmark datasets demonstrate that MM-GF not only improves recommendation
accuracy by up to 13.35% compared to the best competitor but also dramatically
reduces computational costs by achieving the runtime of less than 10 seconds.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 13:00:53 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Roh",
"Yu-Seung",
""
],
[
"Kim",
"Joo-Young",
""
],
[
"Park",
"Jin-Duk",
""
],
[
"Shin",
"Won-Yong",
""
]
]
| TITLE: Training-Free Graph Filtering via Multimodal Feature Refinement for
Extremely Fast Multimodal Recommendation
ABSTRACT: Multimodal recommender systems improve the performance of canonical
recommender systems with no item features by utilizing diverse content types
such as text, images, and videos, while alleviating inherent sparsity of
user-item interactions and accelerating user engagement. However, current
neural network-based models often incur significant computational overhead due
to the complex training process required to learn and integrate information
from multiple modalities. To overcome this limitation, we propose
MultiModal-Graph Filtering (MM-GF), a training-free method based on the notion
of graph filtering (GF) for efficient and accurate multimodal recommendations.
Specifically, MM-GF first constructs multiple similarity graphs through
nontrivial multimodal feature refinement such as robust scaling and vector
shifting by addressing the heterogeneous characteristics across modalities.
Then, MM-GF optimally fuses multimodal information using linear low-pass
filters across different modalities. Extensive experiments on real-world
benchmark datasets demonstrate that MM-GF not only improves recommendation
accuracy by up to 13.35% compared to the best competitor but also dramatically
reduces computational costs by achieving the runtime of less than 10 seconds.
| no_new_dataset | 0.946498 |
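A training-free graph-filtering recommender in its simplest form scores items by passing the interaction matrix through a degree-normalized item-item graph, a linear low-pass filter. MM-GF builds one such graph per modality (after the feature refinement described above) and fuses them, so the single-graph sketch below, on toy data, only illustrates the core operation.

```python
import numpy as np

def lowpass_scores(R):
    """Linear low-pass graph filtering over the item-item graph.

    R: (users x items) binary interaction matrix. Items are scored by
    propagating each user's history through a degree-normalized
    item-item co-occurrence graph; no training is involved."""
    d_u = np.maximum(R.sum(axis=1, keepdims=True), 1)   # user degrees
    d_i = np.maximum(R.sum(axis=0, keepdims=True), 1)   # item degrees
    R_norm = R / np.sqrt(d_u) / np.sqrt(d_i)
    item_graph = R_norm.T @ R_norm                      # smoothed similarity
    return R @ item_graph                               # filtered scores

R = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
scores = lowpass_scores(R)
scores[R > 0] = -np.inf                  # mask already-seen items
print(scores.argmax(axis=1))             # top new recommendation per user
```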
2503.04420 | Harry Owen Dr | Harry J. F. Owen, Matthew J. A. Allen, Stuart W. D. Grieve, Phill
Wilkes, Emily R. Lines | PointsToWood: A deep learning framework for complete canopy leaf-wood
segmentation of TLS data across diverse European forests | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Point clouds from Terrestrial Laser Scanning (TLS) are an increasingly
popular source of data for studying plant structure and function but typically
require extensive manual processing to extract ecologically important
information. One key task is the accurate semantic segmentation of different
plant material within point clouds, particularly wood and leaves, which is
required to understand plant productivity, architecture and physiology.
Existing automated semantic segmentation methods are primarily developed for
single ecosystem types, and whilst they show good accuracy for biomass
assessment from the trunk and large branches, they often perform less well within
the crown. In this study, we demonstrate a new framework that uses a deep
learning architecture newly developed from PointNet and pointNEXT for
processing 3D point clouds to provide a reliable semantic segmentation of wood
and leaf in TLS point clouds from the tree base to branch tips, trained on data
from diverse mature European forests. Our model uses meticulously labelled data
combined with voxel-based sampling, neighbourhood rescaling, and a novel gated
reflectance integration module embedded throughout the feature extraction
layers. We evaluate its performance across open datasets from boreal,
temperate, Mediterranean and tropical regions, encompassing diverse ecosystem
types and sensor characteristics. Our results show consistent outperformance
against the most widely used PointNet-based approach for leaf/wood segmentation
on our high-density TLS dataset collected across diverse mixed forest plots
across all major biomes in Europe. We also find consistently strong performance
when tested on other open datasets from China, Eastern Cameroon, Germany and Finland,
collected using both time-of-flight and phase-shift sensors, showcasing the
transferability of our model to a wide range of ecosystems and sensors.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 13:23:03 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Owen",
"Harry J. F.",
""
],
[
"Allen",
"Matthew J. A.",
""
],
[
"Grieve",
"Stuart W. D.",
""
],
[
"Wilkes",
"Phill",
""
],
[
"Lines",
"Emily R.",
""
]
]
| TITLE: PointsToWood: A deep learning framework for complete canopy leaf-wood
segmentation of TLS data across diverse European forests
ABSTRACT: Point clouds from Terrestrial Laser Scanning (TLS) are an increasingly
popular source of data for studying plant structure and function but typically
require extensive manual processing to extract ecologically important
information. One key task is the accurate semantic segmentation of different
plant material within point clouds, particularly wood and leaves, which is
required to understand plant productivity, architecture and physiology.
Existing automated semantic segmentation methods are primarily developed for
single ecosystem types, and whilst they show good accuracy for biomass
assessment from the trunk and large branches, they often perform less well within
the crown. In this study, we demonstrate a new framework that uses a deep
learning architecture newly developed from PointNet and pointNEXT for
processing 3D point clouds to provide a reliable semantic segmentation of wood
and leaf in TLS point clouds from the tree base to branch tips, trained on data
from diverse mature European forests. Our model uses meticulously labelled data
combined with voxel-based sampling, neighbourhood rescaling, and a novel gated
reflectance integration module embedded throughout the feature extraction
layers. We evaluate its performance across open datasets from boreal,
temperate, Mediterranean and tropical regions, encompassing diverse ecosystem
types and sensor characteristics. Our results show consistent outperformance
against the most widely used PointNet-based approach for leaf/wood segmentation
on our high-density TLS dataset collected across diverse mixed forest plots
across all major biomes in Europe. We also find consistently strong performance
when tested on other open datasets from China, Eastern Cameroon, Germany and Finland,
collected using both time-of-flight and phase-shift sensors, showcasing the
transferability of our model to a wide range of ecosystems and sensors.
| no_new_dataset | 0.940735 |
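The voxel-based sampling step mentioned in the PointsToWood entry above can be sketched in a few lines. This is not the authors' implementation; the 5 cm voxel size and the random test cloud are assumptions for illustration:

```python
# Minimal sketch of voxel-based sampling for a TLS point cloud: keep the
# first point encountered in each occupied voxel. Voxel size is assumed.
import numpy as np

def voxel_sample(points, voxel=0.05):
    """points: (N, 3) array in metres; returns one point per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)   # integer voxel indices
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

pts = np.random.default_rng(0).uniform(0.0, 1.0, (10000, 3))
print(voxel_sample(pts).shape)  # fewer than 10000 points remain
```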
2503.04424 | Siavash Ameli | Siavash Ameli, Chris van der Heide, Liam Hodgkinson, Fred Roosta,
Michael W. Mahoney | Determinant Estimation under Memory Constraints and Neural Scaling Laws | null | null | null | null | stat.ML cs.LG cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Calculating or accurately estimating log-determinants of large positive
semi-definite matrices is of fundamental importance in many machine learning
tasks. While its cubic computational complexity can already be prohibitive, in
modern applications, even storing the matrices themselves can pose a memory
bottleneck. To address this, we derive a novel hierarchical algorithm based on
block-wise computation of the LDL decomposition for large-scale log-determinant
calculation in memory-constrained settings. In extreme cases where matrices are
highly ill-conditioned, accurately computing the full matrix itself may be
infeasible. This is particularly relevant when considering kernel matrices at
scale, including the empirical Neural Tangent Kernel (NTK) of neural networks
trained on large datasets. Under the assumption of neural scaling laws in the
test error, we show that the ratio of pseudo-determinants satisfies a power-law
relationship, allowing us to derive corresponding scaling laws. This enables
accurate estimation of NTK log-determinants from a tiny fraction of the full
dataset; in our experiments, this results in a $\sim$100,000$\times$ speedup
with improved accuracy over competing approximations. Using these techniques,
we successfully estimate log-determinants for dense matrices of extreme sizes,
which were previously deemed intractable and inaccessible due to their enormous
scale and computational demands.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 13:32:13 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ameli",
"Siavash",
""
],
[
"van der Heide",
"Chris",
""
],
[
"Hodgkinson",
"Liam",
""
],
[
"Roosta",
"Fred",
""
],
[
"Mahoney",
"Michael W.",
""
]
]
| TITLE: Determinant Estimation under Memory Constraints and Neural Scaling Laws
ABSTRACT: Calculating or accurately estimating log-determinants of large positive
semi-definite matrices is of fundamental importance in many machine learning
tasks. While its cubic computational complexity can already be prohibitive, in
modern applications, even storing the matrices themselves can pose a memory
bottleneck. To address this, we derive a novel hierarchical algorithm based on
block-wise computation of the LDL decomposition for large-scale log-determinant
calculation in memory-constrained settings. In extreme cases where matrices are
highly ill-conditioned, accurately computing the full matrix itself may be
infeasible. This is particularly relevant when considering kernel matrices at
scale, including the empirical Neural Tangent Kernel (NTK) of neural networks
trained on large datasets. Under the assumption of neural scaling laws in the
test error, we show that the ratio of pseudo-determinants satisfies a power-law
relationship, allowing us to derive corresponding scaling laws. This enables
accurate estimation of NTK log-determinants from a tiny fraction of the full
dataset; in our experiments, this results in a $\sim$100,000$\times$ speedup
with improved accuracy over competing approximations. Using these techniques,
we successfully estimate log-determinants for dense matrices of extreme sizes,
which were previously deemed intractable and inaccessible due to their enormous
scale and computational demands.
| no_new_dataset | 0.944228 |
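The block-wise factorization idea in the determinant-estimation entry above can be illustrated with the Schur-complement identity log det [[A, B], [B^T, C]] = log det A + log det(C - B^T A^{-1} B). The sketch below uses Cholesky on SPD blocks in place of LDL and still holds the full matrix in memory, so it demonstrates only the recursion, not the paper's memory-constrained algorithm:

```python
# Sketch of the Schur-complement recursion: for an SPD matrix M = [[A, B],
# [B^T, C]], log det M = log det A + log det(C - B^T A^{-1} B). Cholesky
# stands in for LDL here; a memory-constrained version would stream blocks.
import numpy as np

def blockwise_logdet(M, block=256):
    n = M.shape[0]
    if n <= block:
        L = np.linalg.cholesky(M)            # base case on a small block
        return 2.0 * np.sum(np.log(np.diag(L)))
    k = n // 2
    A, B, C = M[:k, :k], M[:k, k:], M[k:, k:]
    S = C - B.T @ np.linalg.solve(A, B)      # Schur complement of A
    return blockwise_logdet(A, block) + blockwise_logdet(S, block)

rng = np.random.default_rng(0)
X = rng.standard_normal((512, 512))
M = X @ X.T + 512 * np.eye(512)              # well-conditioned SPD test matrix
print(np.allclose(blockwise_logdet(M), np.linalg.slogdet(M)[1]))  # True
```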
2503.04439 | Owen Cook | Owen Cook, Yida Mu, Xinye Yang, Xingyi Song and Kalina Bontcheva | A Dataset for Analysing News Framing in Chinese Media | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Framing is an essential device in news reporting, allowing the writer to
influence public perceptions of current affairs. While there are existing
automatic news framing detection datasets in various languages, none of them
focus on news framing in the Chinese language, which has complex character
meanings and unique linguistic features. This study introduces the first
Chinese News Framing dataset, to be used as either a stand-alone dataset or a
supplementary resource to the SemEval-2023 task 3 dataset. We detail its
creation and we run baseline experiments to highlight the need for such a
dataset and create benchmarks for future research, providing results obtained
through fine-tuning XLM-RoBERTa-Base and using GPT-4o in the zero-shot setting.
We find that GPT-4o performs significantly worse than fine-tuned XLM-RoBERTa
across all languages. For the Chinese language, we obtain an F1-micro (the
performance metric for SemEval task 3, subtask 2) score of 0.719 using only
samples from our Chinese News Framing dataset and a score of 0.753 when we
augment the SemEval dataset with Chinese news framing samples. With positive
news frame detection results, this dataset is a valuable resource for detecting
news frames in the Chinese language and is a valuable supplement to the
SemEval-2023 task 3 dataset.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 13:55:33 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Cook",
"Owen",
""
],
[
"Mu",
"Yida",
""
],
[
"Yang",
"Xinye",
""
],
[
"Song",
"Xingyi",
""
],
[
"Bontcheva",
"Kalina",
""
]
]
| TITLE: A Dataset for Analysing News Framing in Chinese Media
ABSTRACT: Framing is an essential device in news reporting, allowing the writer to
influence public perceptions of current affairs. While there are existing
automatic news framing detection datasets in various languages, none of them
focus on news framing in the Chinese language, which has complex character
meanings and unique linguistic features. This study introduces the first
Chinese News Framing dataset, to be used as either a stand-alone dataset or a
supplementary resource to the SemEval-2023 task 3 dataset. We detail its
creation and we run baseline experiments to highlight the need for such a
dataset and create benchmarks for future research, providing results obtained
through fine-tuning XLM-RoBERTa-Base and using GPT-4o in the zero-shot setting.
We find that GPT-4o performs significantly worse than fine-tuned XLM-RoBERTa
across all languages. For the Chinese language, we obtain an F1-micro (the
performance metric for SemEval task 3, subtask 2) score of 0.719 using only
samples from our Chinese News Framing dataset and a score of 0.753 when we
augment the SemEval dataset with Chinese news framing samples. With positive
news frame detection results, this dataset is a valuable resource for detecting
news frames in the Chinese language and is a valuable supplement to the
SemEval-2023 task 3 dataset.
| new_dataset | 0.973139 |
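The F1-micro metric reported in the news-framing entry above (the SemEval-2023 task 3, subtask 2 score) treats multi-label frame predictions as a flat set of binary decisions. A toy computation with placeholder labels:

```python
# F1-micro over multi-label frame indicators; labels here are toy data.
from sklearn.metrics import f1_score
import numpy as np

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])  # gold frames per article
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 1]])  # predicted frames
print(f1_score(y_true, y_pred, average="micro"))       # 0.8 on this toy set
```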
2503.04447 | Wei Liu | Wei Liu, Xin Liu, Michael K. Ng, Zaikun Zhang | A Graph-Partitioning Based Continuous Optimization Approach to
Semi-supervised Clustering Problems | null | null | null | null | math.OC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised clustering is a basic problem in various applications. Most
existing methods require knowledge of the ideal cluster number, which is often
difficult to obtain in practice. Besides, satisfying the must-link constraints
is another major challenge for these methods. In this work, we view the
semi-supervised clustering task as a partitioning problem on a graph associated
with the given dataset, where the similarity matrix includes a scaling
parameter to reflect the must-link constraints. Utilizing a relaxation
technique, we formulate the graph partitioning problem into a continuous
optimization model that does not require the exact cluster number, but only an
overestimate of it. We then propose a block coordinate descent algorithm to
efficiently solve this model, and establish its convergence result. Based on
the obtained solution, we can construct the clusters that theoretically meet
the must-link constraints under mild assumptions. Furthermore, we verify the
effectiveness and efficiency of our proposed method through comprehensive
numerical experiments.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:02:28 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Liu",
"Wei",
""
],
[
"Liu",
"Xin",
""
],
[
"Ng",
"Michael K.",
""
],
[
"Zhang",
"Zaikun",
""
]
]
| TITLE: A Graph-Partitioning Based Continuous Optimization Approach to
Semi-supervised Clustering Problems
ABSTRACT: Semi-supervised clustering is a basic problem in various applications. Most
existing methods require knowledge of the ideal cluster number, which is often
difficult to obtain in practice. Besides, satisfying the must-link constraints
is another major challenge for these methods. In this work, we view the
semi-supervised clustering task as a partitioning problem on a graph associated
with the given dataset, where the similarity matrix includes a scaling
parameter to reflect the must-link constraints. Utilizing a relaxation
technique, we formulate the graph partitioning problem into a continuous
optimization model that does not require the exact cluster number, but only an
overestimate of it. We then propose a block coordinate descent algorithm to
efficiently solve this model, and establish its convergence result. Based on
the obtained solution, we can construct the clusters that theoretically meet
the must-link constraints under mild assumptions. Furthermore, we verify the
effectiveness and efficiency of our proposed method through comprehensive
numerical experiments.
| no_new_dataset | 0.949669 |
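The clustering entry above folds must-link constraints into the similarity matrix through a scaling parameter. The exact construction is not spelled out in the abstract, so the sketch below makes a plain assumption: boost the Gaussian-kernel affinity of every must-link pair by a factor rho:

```python
# Gaussian-kernel similarity with boosted must-link entries (rho is an
# assumed scaling parameter; the paper's exact construction may differ).
import numpy as np

def scaled_similarity(X, must_link, sigma=1.0, rho=10.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    for i, j in must_link:
        W[i, j] = W[j, i] = rho * W[i, j]    # strengthen must-link edges
    return W

X = np.random.default_rng(0).normal(size=(6, 2))
print(scaled_similarity(X, must_link=[(0, 1), (2, 3)]).round(2))
```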
2503.04451 | Mert Cihangiroglu | Marco Arazzi, Mert Cihangiroglu, Antonino Nocera | Privacy Preserving and Robust Aggregation for Cross-Silo Federated
Learning in Non-IID Settings | null | null | null | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by/4.0/ | Federated Averaging remains the most widely used aggregation strategy in
federated learning due to its simplicity and scalability. However, its
performance degrades significantly in non-IID data settings, where client
distributions are highly imbalanced or skewed. Additionally, it relies on
clients transmitting metadata, specifically the number of training samples,
which introduces privacy risks and may conflict with regulatory frameworks like
the European GDPR. In this paper, we propose a novel aggregation strategy that
addresses these challenges by introducing class-aware gradient masking. Unlike
traditional approaches, our method relies solely on gradient updates,
eliminating the need for any additional client metadata, thereby enhancing
privacy protection. Furthermore, our approach validates and dynamically weights
client contributions based on class-specific importance, ensuring robustness
against non-IID distributions, convergence prevention, and backdoor attacks.
Extensive experiments on benchmark datasets demonstrate that our method not
only outperforms FedAvg and other widely accepted aggregation strategies in
non-IID settings but also preserves model integrity in adversarial scenarios.
Our results establish the effectiveness of gradient masking as a practical and
secure solution for federated learning.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:06:20 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Arazzi",
"Marco",
""
],
[
"Cihangiroglu",
"Mert",
""
],
[
"Nocera",
"Antonino",
""
]
]
| TITLE: Privacy Preserving and Robust Aggregation for Cross-Silo Federated
Learning in Non-IID Settings
ABSTRACT: Federated Averaging remains the most widely used aggregation strategy in
federated learning due to its simplicity and scalability. However, its
performance degrades significantly in non-IID data settings, where client
distributions are highly imbalanced or skewed. Additionally, it relies on
clients transmitting metadata, specifically the number of training samples,
which introduces privacy risks and may conflict with regulatory frameworks like
the European GDPR. In this paper, we propose a novel aggregation strategy that
addresses these challenges by introducing class-aware gradient masking. Unlike
traditional approaches, our method relies solely on gradient updates,
eliminating the need for any additional client metadata, thereby enhancing
privacy protection. Furthermore, our approach validates and dynamically weights
client contributions based on class-specific importance, ensuring robustness
against non-IID distributions, convergence prevention, and backdoor attacks.
Extensive experiments on benchmark datasets demonstrate that our method not
only outperforms FedAvg and other widely accepted aggregation strategies in
non-IID settings but also preserves model integrity in adversarial scenarios.
Our results establish the effectiveness of gradient masking as a practical and
secure solution for federated learning.
| no_new_dataset | 0.948917 |
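The federated-learning entry above removes the need for clients to report sample counts. The paper's class-aware gradient masking is more involved than this; the sketch shows only the metadata-free part, deriving aggregation weights on the server from the updates themselves (here, their norms):

```python
# Metadata-free aggregation: weights come from the updates themselves
# (their norms), not from client-reported sample counts.
import numpy as np

def aggregate(updates):
    """updates: list of 1-D parameter-delta arrays, one per client."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    w = norms / norms.sum()                  # server-side weights only
    return sum(wi * ui for wi, ui in zip(w, updates))

clients = [np.random.default_rng(i).normal(size=10) for i in range(5)]
print(aggregate(clients))
```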
2503.04452 | Xuerui Zhang | Xuerui Zhang | A lightweight model FDM-YOLO for small target improvement based on
YOLOv8 | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Small targets are particularly difficult to detect due to their low pixel
count, complex backgrounds, and varying shooting angles, which make it hard for
models to extract effective features. While some large-scale models offer high
accuracy, their long inference times make them unsuitable for real-time
deployment on edge devices. On the other hand, models designed for low
computational power often suffer from poor detection accuracy. This paper
focuses on small target detection and explores methods for object detection
under low computational constraints. Building on the YOLOv8 model, we propose a
new network architecture called FDM-YOLO. Our research includes the following
key contributions: We introduce FDM-YOLO by analyzing the output of the YOLOv8
detection head. We add a high-resolution layer and remove the large target
detection layer to better handle small targets. Based on PConv, we propose a
lightweight network structure called Fast-C2f, which is integrated into the PAN
module of the model. To mitigate the accuracy loss caused by model
lightweighting, we employ dynamic upsampling (Dysample) and a lightweight EMA
attention mechanism. The FDM-YOLO model was validated on the VisDrone dataset,
achieving a 38% reduction in parameter count and improving the mAP0.5 score
from 38.4% to 42.5%, all while maintaining nearly the same inference speed.
This demonstrates the effectiveness of our approach in balancing accuracy and
efficiency for edge device deployment.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:06:35 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhang",
"Xuerui",
""
]
]
| TITLE: A lightweight model FDM-YOLO for small target improvement based on
YOLOv8
ABSTRACT: Small targets are particularly difficult to detect due to their low pixel
count, complex backgrounds, and varying shooting angles, which make it hard for
models to extract effective features. While some large-scale models offer high
accuracy, their long inference times make them unsuitable for real-time
deployment on edge devices. On the other hand, models designed for low
computational power often suffer from poor detection accuracy. This paper
focuses on small target detection and explores methods for object detection
under low computational constraints. Building on the YOLOv8 model, we propose a
new network architecture called FDM-YOLO. Our research includes the following
key contributions: We introduce FDM-YOLO by analyzing the output of the YOLOv8
detection head. We add a high-resolution layer and remove the large target
detection layer to better handle small targets. Based on PConv, we propose a
lightweight network structure called Fast-C2f, which is integrated into the PAN
module of the model. To mitigate the accuracy loss caused by model
lightweighting, we employ dynamic upsampling (Dysample) and a lightweight EMA
attention mechanism. The FDM-YOLO model was validated on the VisDrone dataset,
achieving a 38% reduction in parameter count and improving the mAP0.5 score
from 38.4% to 42.5%, all while maintaining nearly the same inference speed.
This demonstrates the effectiveness of our approach in balancing accuracy and
efficiency for edge device deployment.
| no_new_dataset | 0.944125 |
2503.04470 | Edoardo Bianchi | Edoardo Bianchi and Oswald Lanz | Gate-Shift-Pose: Enhancing Action Recognition in Sports with Skeleton
Information | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper introduces Gate-Shift-Pose, an enhanced version of Gate-Shift-Fuse
networks, designed for athlete fall classification in figure skating by
integrating skeleton pose data alongside RGB frames. We evaluate two fusion
strategies: early-fusion, which combines RGB frames with Gaussian heatmaps of
pose keypoints at the input stage, and late-fusion, which employs a
multi-stream architecture with attention mechanisms to combine RGB and pose
features. Experiments on the FR-FS dataset demonstrate that Gate-Shift-Pose
significantly outperforms the RGB-only baseline, improving accuracy by up to
40% with ResNet18 and 20% with ResNet50. Early-fusion achieves the highest
accuracy (98.08%) with ResNet50, leveraging the model's capacity for effective
multimodal integration, while late-fusion is better suited for lighter
backbones like ResNet18. These results highlight the potential of multimodal
architectures for sports action recognition and the critical role of skeleton
pose information in capturing complex motion patterns.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:21:43 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Bianchi",
"Edoardo",
""
],
[
"Lanz",
"Oswald",
""
]
]
| TITLE: Gate-Shift-Pose: Enhancing Action Recognition in Sports with Skeleton
Information
ABSTRACT: This paper introduces Gate-Shift-Pose, an enhanced version of Gate-Shift-Fuse
networks, designed for athlete fall classification in figure skating by
integrating skeleton pose data alongside RGB frames. We evaluate two fusion
strategies: early-fusion, which combines RGB frames with Gaussian heatmaps of
pose keypoints at the input stage, and late-fusion, which employs a
multi-stream architecture with attention mechanisms to combine RGB and pose
features. Experiments on the FR-FS dataset demonstrate that Gate-Shift-Pose
significantly outperforms the RGB-only baseline, improving accuracy by up to
40% with ResNet18 and 20% with ResNet50. Early-fusion achieves the highest
accuracy (98.08%) with ResNet50, leveraging the model's capacity for effective
multimodal integration, while late-fusion is better suited for lighter
backbones like ResNet18. These results highlight the potential of multimodal
architectures for sports action recognition and the critical role of skeleton
pose information in capturing complex motion patterns.
| no_new_dataset | 0.950824 |
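The early-fusion input in the Gate-Shift-Pose entry above combines RGB frames with Gaussian heatmaps of pose keypoints. A sketch of that input construction, with the frame size, keypoints, and sigma chosen arbitrarily:

```python
# Render keypoints as a Gaussian heatmap channel and stack it with RGB.
import numpy as np

def keypoint_heatmap(h, w, kpts, sigma=4.0):
    ys, xs = np.mgrid[0:h, 0:w]
    maps = [np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
            for x, y in kpts]
    return np.max(maps, axis=0)              # one channel combining joints

frame = np.zeros((64, 64, 3))                # placeholder RGB frame
heat = keypoint_heatmap(64, 64, [(20, 30), (40, 50)])
fused = np.concatenate([frame, heat[..., None]], axis=-1)
print(fused.shape)                           # (64, 64, 4): RGB + pose channel
```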
2503.04472 | Wenjing Zhang | Yi Shen, Jian Zhang, Jieyun Huang, Shuming Shi, Wenjing Zhang, Jiangze
Yan, Ning Wang, Kai Wang and Shiguo Lian | DAST: Difficulty-Adaptive Slow-Thinking for Large Reasoning Models | work in progress | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in slow-thinking reasoning models have shown exceptional
performance in complex reasoning tasks. However, these models often exhibit
overthinking, generating redundant reasoning steps for simple problems, leading
to excessive computational resource usage. While current mitigation strategies
uniformly reduce reasoning tokens, they risk degrading performance on
challenging tasks that require extended reasoning. This paper introduces
Difficulty-Adaptive Slow-Thinking (DAST), a novel framework that enables models
to autonomously adjust the length of Chain-of-Thought (CoT) based on problem
difficulty. We first propose a Token Length Budget (TLB) metric to quantify
difficulty, then leverage length-aware reward shaping and length preference
optimization to implement DAST. DAST penalizes overlong responses for simple
tasks while incentivizing sufficient reasoning for complex problems.
Experiments on diverse datasets and model scales demonstrate that DAST
effectively mitigates overthinking (reducing token usage by over 30\% on
average) while preserving reasoning accuracy on complex problems.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:23:06 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Shen",
"Yi",
""
],
[
"Zhang",
"Jian",
""
],
[
"Huang",
"Jieyun",
""
],
[
"Shi",
"Shuming",
""
],
[
"Zhang",
"Wenjing",
""
],
[
"Yan",
"Jiangze",
""
],
[
"Wang",
"Ning",
""
],
[
"Wang",
"Kai",
""
],
[
"Lian",
"Shiguo",
""
]
]
| TITLE: DAST: Difficulty-Adaptive Slow-Thinking for Large Reasoning Models
ABSTRACT: Recent advancements in slow-thinking reasoning models have shown exceptional
performance in complex reasoning tasks. However, these models often exhibit
overthinking, generating redundant reasoning steps for simple problems, leading
to excessive computational resource usage. While current mitigation strategies
uniformly reduce reasoning tokens, they risk degrading performance on
challenging tasks that require extended reasoning. This paper introduces
Difficulty-Adaptive Slow-Thinking (DAST), a novel framework that enables models
to autonomously adjust the length of Chain-of-Thought (CoT) based on problem
difficulty. We first propose a Token Length Budget (TLB) metric to quantify
difficulty, then leverage length-aware reward shaping and length preference
optimization to implement DAST. DAST penalizes overlong responses for simple
tasks while incentivizing sufficient reasoning for complex problems.
Experiments on diverse datasets and model scales demonstrate that DAST
effectively mitigates overthinking (reducing token usage by over 30\% on
average) while preserving reasoning accuracy on complex problems.
| no_new_dataset | 0.947478 |
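The length-aware reward shaping in the DAST entry above can be caricatured as a correctness reward minus a penalty for exceeding the per-problem Token Length Budget. The functional form and the 0.5 coefficient below are assumptions, not the paper's formulation:

```python
# Assumed shape of a difficulty-adaptive length reward: correctness reward
# minus a penalty proportional to overshoot of the token budget.
def length_reward(n_tokens, budget, correct, penalty=0.5):
    base = 1.0 if correct else 0.0
    over = max(0.0, (n_tokens - budget) / budget)  # fractional overshoot
    return base - penalty * over

print(length_reward(n_tokens=1200, budget=800, correct=True))  # 0.75
```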
2503.04474 | Francisco Eiras | Francisco Eiras, Eliott Zemour, Eric Lin, Vaikkunth Mugunthan | Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | Accepted to the ICBINB Workshop at ICLR'25 | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Large Language Model (LLM) based judges form the underpinnings of key safety
evaluation processes such as offline benchmarking, automated red-teaming, and
online guardrailing. This widespread requirement raises the crucial question:
can we trust the evaluations of these evaluators? In this paper, we highlight
two critical challenges that are typically overlooked: (i) evaluations in the
wild where factors like prompt sensitivity and distribution shifts can affect
performance and (ii) adversarial attacks that target the judge. We highlight
the importance of these through a study of commonly used safety judges, showing
that small changes such as the style of the model output can lead to jumps of
up to 0.24 in the false negative rate on the same dataset, whereas adversarial
attacks on the model generation can fool some judges into misclassifying 100%
of harmful generations as safe ones. These findings reveal gaps in commonly
used meta-evaluation benchmarks and weaknesses in the robustness of current LLM
judges, indicating that low attack success under certain judges could create a
false sense of security.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:24:12 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Eiras",
"Francisco",
""
],
[
"Zemour",
"Eliott",
""
],
[
"Lin",
"Eric",
""
],
[
"Mugunthan",
"Vaikkunth",
""
]
]
| TITLE: Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges
ABSTRACT: Large Language Model (LLM) based judges form the underpinnings of key safety
evaluation processes such as offline benchmarking, automated red-teaming, and
online guardrailing. This widespread requirement raises the crucial question:
can we trust the evaluations of these evaluators? In this paper, we highlight
two critical challenges that are typically overlooked: (i) evaluations in the
wild where factors like prompt sensitivity and distribution shifts can affect
performance and (ii) adversarial attacks that target the judge. We highlight
the importance of these through a study of commonly used safety judges, showing
that small changes such as the style of the model output can lead to jumps of
up to 0.24 in the false negative rate on the same dataset, whereas adversarial
attacks on the model generation can fool some judges into misclassifying 100%
of harmful generations as safe ones. These findings reveal gaps in commonly
used meta-evaluation benchmarks and weaknesses in the robustness of current LLM
judges, indicating that low attack success under certain judges could create a
false sense of security.
| no_new_dataset | 0.950365 |
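The 0.24 false-negative-rate jump in the judge-robustness entry above is measured per dataset; the toy sketch below just shows how such a comparison is computed. All verdicts here are made-up placeholders:

```python
# False negative rate of a safety judge: harmful items (label 1) that the
# judge marked safe (prediction 0). All verdicts below are placeholders.
def fnr(labels, preds):
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    pos = sum(labels)
    return fn / pos if pos else 0.0

harmful = [1] * 10
judge_plain = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]    # verdicts on plain outputs
judge_styled = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # same items, restyled outputs
print(fnr(harmful, judge_plain), fnr(harmful, judge_styled))  # 0.2 0.6
```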
2503.04475 | Yanqing Shen | Yanqing Shen, Turcan Tuna, Marco Hutter, Cesar Cadena, Nanning Zheng | ForestLPR: LiDAR Place Recognition in Forests Attentioning Multiple BEV
Density Images | accepted by CVPR2025 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Place recognition is essential to maintain global consistency in large-scale
localization systems. While research in urban environments has progressed
significantly using LiDARs or cameras, applications in natural forest-like
environments remain largely under-explored. Furthermore, forests present
particular challenges due to high self-similarity and substantial variations in
vegetation growth over time. In this work, we propose a robust LiDAR-based
place recognition method for natural forests, ForestLPR. We hypothesize that a
set of cross-sectional images of the forest's geometry at different heights
contains the information needed to recognize revisiting a place. The
cross-sectional images are represented by \ac{bev} density images of horizontal
slices of the point cloud at different heights. Our approach utilizes a visual
transformer as the shared backbone to produce sets of local descriptors and
introduces a multi-BEV interaction module to attend to information at different
heights adaptively. It is followed by an aggregation layer that produces a
rotation-invariant place descriptor. We evaluated the efficacy of our method
extensively on real-world data from public benchmarks as well as robotic
datasets and compared it against the state-of-the-art (SOTA) methods. The
results indicate that ForestLPR has consistently good performance on all
evaluations and achieves an average increase of 7.38\% and 9.11\% on Recall@1
over the closest competitor on intra-sequence loop closure detection and
inter-sequence re-localization, respectively, validating our hypothesis
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:24:22 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Shen",
"Yanqing",
""
],
[
"Tuna",
"Turcan",
""
],
[
"Hutter",
"Marco",
""
],
[
"Cadena",
"Cesar",
""
],
[
"Zheng",
"Nanning",
""
]
]
| TITLE: ForestLPR: LiDAR Place Recognition in Forests Attentioning Multiple BEV
Density Images
ABSTRACT: Place recognition is essential to maintain global consistency in large-scale
localization systems. While research in urban environments has progressed
significantly using LiDARs or cameras, applications in natural forest-like
environments remain largely under-explored. Furthermore, forests present
particular challenges due to high self-similarity and substantial variations in
vegetation growth over time. In this work, we propose a robust LiDAR-based
place recognition method for natural forests, ForestLPR. We hypothesize that a
set of cross-sectional images of the forest's geometry at different heights
contains the information needed to recognize revisiting a place. The
cross-sectional images are represented by bird's-eye-view (BEV) density images of horizontal
slices of the point cloud at different heights. Our approach utilizes a visual
transformer as the shared backbone to produce sets of local descriptors and
introduces a multi-BEV interaction module to attend to information at different
heights adaptively. It is followed by an aggregation layer that produces a
rotation-invariant place descriptor. We evaluated the efficacy of our method
extensively on real-world data from public benchmarks as well as robotic
datasets and compared it against the state-of-the-art (SOTA) methods. The
results indicate that ForestLPR has consistently good performance on all
evaluations and achieves an average increase of 7.38\% and 9.11\% on Recall@1
over the closest competitor on intra-sequence loop closure detection and
inter-sequence re-localization, respectively, validating our hypothesis.
| no_new_dataset | 0.951414 |
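The multi-height BEV density representation in the ForestLPR entry above slices the point cloud into horizontal bands and rasterizes each band. Grid resolution, extent, and band edges below are assumptions:

```python
# Slice a point cloud into height bands and rasterize each band into a
# 2-D density histogram; resolutions and band edges are assumptions.
import numpy as np

def bev_density_images(points, z_edges, grid=64, extent=20.0):
    """points: (N, 3) array; z_edges: height band boundaries in metres."""
    bins = np.linspace(-extent, extent, grid + 1)
    images = []
    for lo, hi in zip(z_edges[:-1], z_edges[1:]):
        band = points[(points[:, 2] >= lo) & (points[:, 2] < hi)]
        img, _, _ = np.histogram2d(band[:, 0], band[:, 1], bins=(bins, bins))
        images.append(img)
    return np.stack(images)                  # (n_bands, grid, grid)

pts = np.random.default_rng(0).uniform([-20, -20, 0], [20, 20, 30], (5000, 3))
print(bev_density_images(pts, z_edges=[0, 5, 10, 20, 30]).shape)  # (4, 64, 64)
```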
2503.04478 | Maxime Di Folco | Maxime Di Folco, Emily Chan, Marta Hasny, Cosmin I. Bercea, Julia A.
Schnabel | Semantic Alignment of Unimodal Medical Text and Vision Representations | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | General-purpose AI models, particularly those designed for text and vision,
demonstrate impressive versatility across a wide range of deep-learning tasks.
However, they often underperform in specialised domains like medical imaging,
where domain-specific solutions or alternative knowledge transfer approaches
are typically required. Recent studies have noted that general-purpose models
can exhibit similar latent spaces when processing semantically related data,
although this alignment does not occur naturally. Building on this insight, it
has been shown that applying a simple transformation - at most affine -
estimated from a subset of semantically corresponding samples, known as
anchors, enables model stitching across diverse training paradigms,
architectures, and modalities. In this paper, we explore how semantic alignment
- estimating transformations between anchors - can bridge general-purpose AI
with specialised medical knowledge. Using multiple public chest X-ray datasets,
we demonstrate that model stitching across model architectures allows general
models to integrate domain-specific knowledge without additional training,
leading to improved performance on medical tasks. Furthermore, we introduce a
novel zero-shot classification approach for unimodal vision encoders that
leverages semantic alignment across modalities. Our results show that our
method not only outperforms general multimodal models but also approaches the
performance levels of fully trained, medical-specific multimodal solutions.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:28:17 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Di Folco",
"Maxime",
""
],
[
"Chan",
"Emily",
""
],
[
"Hasny",
"Marta",
""
],
[
"Bercea",
"Cosmin I.",
""
],
[
"Schnabel",
"Julia A.",
""
]
]
| TITLE: Semantic Alignment of Unimodal Medical Text and Vision Representations
ABSTRACT: General-purpose AI models, particularly those designed for text and vision,
demonstrate impressive versatility across a wide range of deep-learning tasks.
However, they often underperform in specialised domains like medical imaging,
where domain-specific solutions or alternative knowledge transfer approaches
are typically required. Recent studies have noted that general-purpose models
can exhibit similar latent spaces when processing semantically related data,
although this alignment does not occur naturally. Building on this insight, it
has been shown that applying a simple transformation - at most affine -
estimated from a subset of semantically corresponding samples, known as
anchors, enables model stitching across diverse training paradigms,
architectures, and modalities. In this paper, we explore how semantic alignment
- estimating transformations between anchors - can bridge general-purpose AI
with specialised medical knowledge. Using multiple public chest X-ray datasets,
we demonstrate that model stitching across model architectures allows general
models to integrate domain-specific knowledge without additional training,
leading to improved performance on medical tasks. Furthermore, we introduce a
novel zero-shot classification approach for unimodal vision encoders that
leverages semantic alignment across modalities. Our results show that our
method not only outperforms general multimodal models but also approaches the
performance levels of fully trained, medical-specific multimodal solutions.
| no_new_dataset | 0.946745 |
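The anchor-based alignment in the entry above estimates an (at most affine) map between latent spaces from corresponding samples. A least-squares sketch on synthetic embeddings, with dimensions chosen arbitrarily:

```python
# Fit an affine map Z_tgt ~= Z_src @ A + b from anchor pairs by least squares.
import numpy as np

rng = np.random.default_rng(0)
Z_src = rng.normal(size=(100, 32))           # anchors in the source space
A_true, b_true = rng.normal(size=(32, 16)), rng.normal(size=16)
Z_tgt = Z_src @ A_true + b_true              # corresponding target anchors

X = np.hstack([Z_src, np.ones((100, 1))])    # append a bias column
W, *_ = np.linalg.lstsq(X, Z_tgt, rcond=None)
A_hat, b_hat = W[:-1], W[-1]
print(np.allclose(Z_src @ A_hat + b_hat, Z_tgt, atol=1e-6))  # True
```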
2503.04483 | Tianyu Cui | Tianyu Cui, Song-Jun Xu, Artem Moskalev, Shuwei Li, Tommaso Mansi,
Mangal Prakash, Rui Liao | InfoSEM: A Deep Generative Model with Informative Priors for Gene
Regulatory Network Inference | ICLR 2025 AI4NA Oral, ICLR 2025 MLGenX Spotlight, ICLR 2025 LMRL | null | null | null | stat.ML cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Inferring Gene Regulatory Networks (GRNs) from gene expression data is
crucial for understanding biological processes. While supervised models are
reported to achieve high performance for this task, they rely on costly ground
truth (GT) labels and risk learning gene-specific biases, such as class
imbalances of GT interactions, rather than true regulatory mechanisms. To
address these issues, we introduce InfoSEM, an unsupervised generative model
that leverages textual gene embeddings as informative priors, improving GRN
inference without GT labels. InfoSEM can also integrate GT labels as an
additional prior when available, avoiding biases and further enhancing
performance. Additionally, we propose a biologically motivated benchmarking
framework that better reflects real-world applications such as biomarker
discovery and reveals learned biases of existing supervised methods. InfoSEM
outperforms existing models by 38.5% across four datasets using textual
embeddings as priors and further boosts performance by 11.1% when integrating
labeled data as priors.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:32:00 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Cui",
"Tianyu",
""
],
[
"Xu",
"Song-Jun",
""
],
[
"Moskalev",
"Artem",
""
],
[
"Li",
"Shuwei",
""
],
[
"Mansi",
"Tommaso",
""
],
[
"Prakash",
"Mangal",
""
],
[
"Liao",
"Rui",
""
]
]
| TITLE: InfoSEM: A Deep Generative Model with Informative Priors for Gene
Regulatory Network Inference
ABSTRACT: Inferring Gene Regulatory Networks (GRNs) from gene expression data is
crucial for understanding biological processes. While supervised models are
reported to achieve high performance for this task, they rely on costly ground
truth (GT) labels and risk learning gene-specific biases, such as class
imbalances of GT interactions, rather than true regulatory mechanisms. To
address these issues, we introduce InfoSEM, an unsupervised generative model
that leverages textual gene embeddings as informative priors, improving GRN
inference without GT labels. InfoSEM can also integrate GT labels as an
additional prior when available, avoiding biases and further enhancing
performance. Additionally, we propose a biologically motivated benchmarking
framework that better reflects real-world applications such as biomarker
discovery and reveals learned biases of existing supervised methods. InfoSEM
outperforms existing models by 38.5% across four datasets using textual
embeddings as priors and further boosts performance by 11.1% when integrating
labeled data as priors.
| no_new_dataset | 0.94743 |
2503.04492 | Joohwi Lee | Joohwi Lee, Kaito Miyamoto | Accurate predictive model of band gap with selected important features
based on explainable machine learning | 9 pages, 4 figures, SI is included | null | null | null | cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the rapidly advancing field of materials informatics, nonlinear machine
learning models have demonstrated exceptional predictive capabilities for
material properties. However, their black-box nature limits interpretability,
and they may incorporate features that do not contribute to, or even
deteriorate, model performance. This study employs explainable ML (XML)
techniques, including permutation feature importance and the SHapley Additive
exPlanation, applied to a pristine support vector regression model designed to
predict band gaps at the GW level using 18 input features. Guided by
XML-derived individual feature importance, a simple framework is proposed to
construct reduced-feature predictive models. Model evaluations indicate that an
XML-guided compact model, consisting of the top five features, achieves
comparable accuracy to the pristine model on in-domain datasets while
demonstrating superior generalization with lower prediction errors on
out-of-domain data. Additionally, the study underscores the necessity for
eliminating strongly correlated features to prevent misinterpretation and
overestimation of feature importance before applying XML. This study highlights
XML's effectiveness in developing simplified yet highly accurate machine
learning models by clarifying feature roles.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:40:21 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Lee",
"Joohwi",
""
],
[
"Miyamoto",
"Kaito",
""
]
]
| TITLE: Accurate predictive model of band gap with selected important features
based on explainable machine learning
ABSTRACT: In the rapidly advancing field of materials informatics, nonlinear machine
learning models have demonstrated exceptional predictive capabilities for
material properties. However, their black-box nature limits interpretability,
and they may incorporate features that do not contribute to, or even
deteriorate, model performance. This study employs explainable ML (XML)
techniques, including permutation feature importance and the SHapley Additive
exPlanation, applied to a pristine support vector regression model designed to
predict band gaps at the GW level using 18 input features. Guided by
XML-derived individual feature importance, a simple framework is proposed to
construct reduced-feature predictive models. Model evaluations indicate that an
XML-guided compact model, consisting of the top five features, achieves
comparable accuracy to the pristine model on in-domain datasets while
demonstrating superior generalization with lower prediction errors on
out-of-domain data. Additionally, the study underscores the necessity for
eliminating strongly correlated features to prevent misinterpretation and
overestimation of feature importance before applying XML. This study highlights
XML's effectiveness in developing simplified yet highly accurate machine
learning models by clarifying feature roles.
| no_new_dataset | 0.944893 |
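The XML-guided reduction in the band-gap entry above can be sketched with scikit-learn: rank features by permutation importance for an SVR and retrain on the top five. Synthetic regression data stands in for the 18 physical descriptors:

```python
# Rank features by permutation importance for an SVR, keep the top five,
# and retrain; synthetic data replaces the 18 band-gap descriptors.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=18, n_informative=5,
                       random_state=0)
model = SVR().fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top5 = np.argsort(imp.importances_mean)[::-1][:5]
compact = SVR().fit(X[:, top5], y)           # reduced-feature model
print(sorted(top5.tolist()))
```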
2503.04496 | Adrian Chang | Adrian Chang, Kai Wang, Yuanbo Li, Manolis Savva, Angel X. Chang,
Daniel Ritchie | Learning Object Placement Programs for Indoor Scene Synthesis with
Iterative Self Training | 21 pages, 20 figures Subjects: Graphics (cs.GR), Computer Vision and
Pattern Recognition (cs.CV), Machine Learning (cs.LG) | null | null | null | cs.GR cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data driven and autoregressive indoor scene synthesis systems generate indoor
scenes automatically by suggesting and then placing objects one at a time.
Empirical observations show that current systems tend to produce incomplete
next object location distributions. We introduce a system which addresses this
problem. We design a Domain Specific Language (DSL) that specifies functional
constraints. Programs from our language take as input a partial scene and
object to place. Upon execution they predict possible object placements. We
design a generative model which writes these programs automatically. Available
3D scene datasets do not contain programs to train on, so we build upon
previous work in unsupervised program induction to introduce a new program
bootstrapping algorithm. In order to quantify our empirical observations we
introduce a new evaluation procedure which captures how well a system models
per-object location distributions. We ask human annotators to label all the
possible places an object can go in a scene and show that our system produces
per-object location distributions more consistent with human annotators. Our
system also generates indoor scenes of comparable quality to previous systems
and while previous systems degrade in performance when training data is sparse,
our system does not degrade to the same degree.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:44:25 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Chang",
"Adrian",
""
],
[
"Wang",
"Kai",
""
],
[
"Li",
"Yuanbo",
""
],
[
"Savva",
"Manolis",
""
],
[
"Chang",
"Angel X.",
""
],
[
"Ritchie",
"Daniel",
""
]
]
| TITLE: Learning Object Placement Programs for Indoor Scene Synthesis with
Iterative Self Training
ABSTRACT: Data driven and autoregressive indoor scene synthesis systems generate indoor
scenes automatically by suggesting and then placing objects one at a time.
Empirical observations show that current systems tend to produce incomplete
next object location distributions. We introduce a system which addresses this
problem. We design a Domain Specific Language (DSL) that specifies functional
constraints. Programs from our language take as input a partial scene and
object to place. Upon execution they predict possible object placements. We
design a generative model which writes these programs automatically. Available
3D scene datasets do not contain programs to train on, so we build upon
previous work in unsupervised program induction to introduce a new program
bootstrapping algorithm. In order to quantify our empirical observations we
introduce a new evaluation procedure which captures how well a system models
per-object location distributions. We ask human annotators to label all the
possible places an object can go in a scene and show that our system produces
per-object location distributions more consistent with human annotators. Our
system also generates indoor scenes of comparable quality to previous systems
and while previous systems degrade in performance when training data is sparse,
our system does not degrade to the same degree.
| no_new_dataset | 0.951997 |
2503.04502 | Osnat Mokryn | Osnat Mokryn, Teddy Lazebnik, Hagit Ben Shoshan | Interpretable Transformation and Analysis of Timelines through Learning
via Surprisability | null | null | null | null | stat.ME cs.AI cs.IT math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The analysis of high-dimensional timeline data and the identification of
outliers and anomalies is critical across diverse domains, including sensor
readings, biological and medical data, historical records, and global
statistics. However, conventional analysis techniques often struggle with
challenges such as high dimensionality, complex distributions, and sparsity.
These limitations hinder the ability to extract meaningful insights from
complex temporal datasets, making it difficult to identify trending features,
outliers, and anomalies effectively. Inspired by surprisability -- a cognitive
science concept describing how humans instinctively focus on unexpected
deviations -- we propose Learning via Surprisability (LvS), a novel approach for
transforming high-dimensional timeline data. LvS quantifies and prioritizes
anomalies in time-series data by formalizing deviations from expected behavior.
LvS bridges cognitive theories of attention with computational methods,
enabling the detection of anomalies and shifts in a way that preserves critical
context, offering a new lens for interpreting complex datasets. We demonstrate
the usefulness of LvS on three high-dimensional timeline use cases: a time
series of sensor data, a global dataset of mortality causes over multiple
years, and a textual corpus containing over two centuries of State of the Union
Addresses by U.S. presidents. Our results show that the LvS transformation
enables efficient and interpretable identification of outliers, anomalies, and
the most variable features along the timeline.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:50:29 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Mokryn",
"Osnat",
""
],
[
"Lazebnik",
"Teddy",
""
],
[
"Shoshan",
"Hagit Ben",
""
]
]
| TITLE: Interpretable Transformation and Analysis of Timelines through Learning
via Surprisability
ABSTRACT: The analysis of high-dimensional timeline data and the identification of
outliers and anomalies is critical across diverse domains, including sensor
readings, biological and medical data, historical records, and global
statistics. However, conventional analysis techniques often struggle with
challenges such as high dimensionality, complex distributions, and sparsity.
These limitations hinder the ability to extract meaningful insights from
complex temporal datasets, making it difficult to identify trending features,
outliers, and anomalies effectively. Inspired by surprisability -- a cognitive
science concept describing how humans instinctively focus on unexpected
deviations -- we propose Learning via Surprisability (LvS), a novel approach for
transforming high-dimensional timeline data. LvS quantifies and prioritizes
anomalies in time-series data by formalizing deviations from expected behavior.
LvS bridges cognitive theories of attention with computational methods,
enabling the detection of anomalies and shifts in a way that preserves critical
context, offering a new lens for interpreting complex datasets. We demonstrate
the usefulness of LvS on three high-dimensional timeline use cases: a time
series of sensor data, a global dataset of mortality causes over multiple
years, and a textual corpus containing over two centuries of State of the Union
Addresses by U.S. presidents. Our results show that the LvS transformation
enables efficient and interpretable identification of outliers, anomalies, and
the most variable features along the timeline.
| no_new_dataset | 0.939582 |
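The LvS entry above scores deviations from expected behavior. As a loose stand-in (not the paper's formal definition), the sketch below scores each timestep by its negative log-likelihood under a Gaussian fitted to the recent past:

```python
# Score each timestep by its negative log-likelihood under a Gaussian
# fitted to a trailing window; high scores flag surprising observations.
import numpy as np

def surprisal(x, window=20):
    s = np.zeros_like(x, dtype=float)
    for t in range(window, len(x)):
        mu = x[t - window:t].mean()
        sd = x[t - window:t].std() + 1e-8
        z = (x[t] - mu) / sd
        s[t] = 0.5 * z ** 2 + np.log(sd) + 0.5 * np.log(2 * np.pi)
    return s

x = np.sin(np.linspace(0, 20, 200))
x[150] += 4.0                                # injected anomaly
print(int(np.argmax(surprisal(x))))          # 150
```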
2503.04504 | Sunghyun Ahn | Sunghyun Ahn, Youngwan Jo, Kijung Lee, Sein Kwon, Inpyo Hong, Sanghyun
Park | AnyAnomaly: Zero-Shot Customizable Video Anomaly Detection with LVLM | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video anomaly detection (VAD) is crucial for video analysis and surveillance
in computer vision. However, existing VAD models rely on learned normal
patterns, which makes them difficult to apply to diverse environments.
Consequently, users must retrain models or develop separate AI models for new
environments, which requires expertise in machine learning, high-performance
hardware, and extensive data collection, limiting the practical usability of
VAD. To address these challenges, this study proposes a customizable video
anomaly detection (C-VAD) technique and the AnyAnomaly model. C-VAD considers
user-defined text as an abnormal event and detects frames containing a
specified event in a video. We effectively implemented AnyAnomaly using
context-aware visual question answering without fine-tuning the large vision
language model. To validate the effectiveness of the proposed model, we
constructed C-VAD datasets and demonstrated the superiority of AnyAnomaly.
Furthermore, our approach showed competitive performance on VAD benchmark
datasets, achieving state-of-the-art results on the UBnormal dataset and
outperforming other methods in generalization across all datasets. Our code is
available online at github.com/SkiddieAhn/Paper-AnyAnomaly.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:52:34 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ahn",
"Sunghyun",
""
],
[
"Jo",
"Youngwan",
""
],
[
"Lee",
"Kijung",
""
],
[
"Kwon",
"Sein",
""
],
[
"Hong",
"Inpyo",
""
],
[
"Park",
"Sanghyun",
""
]
]
| TITLE: AnyAnomaly: Zero-Shot Customizable Video Anomaly Detection with LVLM
ABSTRACT: Video anomaly detection (VAD) is crucial for video analysis and surveillance
in computer vision. However, existing VAD models rely on learned normal
patterns, which makes them difficult to apply to diverse environments.
Consequently, users must retrain models or develop separate AI models for new
environments, which requires expertise in machine learning, high-performance
hardware, and extensive data collection, limiting the practical usability of
VAD. To address these challenges, this study proposes a customizable video
anomaly detection (C-VAD) technique and the AnyAnomaly model. C-VAD considers
user-defined text as an abnormal event and detects frames containing a
specified event in a video. We effectively implemented AnyAnomaly using
context-aware visual question answering without fine-tuning the large vision
language model. To validate the effectiveness of the proposed model, we
constructed C-VAD datasets and demonstrated the superiority of AnyAnomaly.
Furthermore, our approach showed competitive performance on VAD benchmark
datasets, achieving state-of-the-art results on the UBnormal dataset and
outperforming other methods in generalization across all datasets. Our code is
available online at github.com/SkiddieAhn/Paper-AnyAnomaly.
| new_dataset | 0.580828 |
2503.04507 | Alexander Tanaka | Alexander M. Tanaka, Aras T. Asaad, Richard Cooper and Vidit Nanda | A Morse Transform for Drug Discovery | 25 pages, 5 main figures, 2 main tables, 6 supplementary figures and
4 supplementary tables | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce a new ligand-based virtual screening (LBVS) framework that uses
piecewise linear (PL) Morse theory to predict ligand binding potential. We
model ligands as simplicial complexes via a pruned Delaunay triangulation, and
catalogue the critical points across multiple directional height functions.
This produces a rich feature vector, consisting of crucial topological features
-- peaks, troughs, and saddles -- that characterise ligand surfaces relevant to
binding interactions. Unlike contemporary LBVS methods that rely on
computationally-intensive deep neural networks, we require only a lightweight
classifier. The Morse theoretic approach achieves state-of-the-art performance
on standard datasets while offering an interpretable feature vector and
scalable method for ligand prioritization in early-stage drug discovery.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:54:28 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Tanaka",
"Alexander M.",
""
],
[
"Asaad",
"Aras T.",
""
],
[
"Cooper",
"Richard",
""
],
[
"Nanda",
"Vidit",
""
]
]
| TITLE: A Morse Transform for Drug Discovery
ABSTRACT: We introduce a new ligand-based virtual screening (LBVS) framework that uses
piecewise linear (PL) Morse theory to predict ligand binding potential. We
model ligands as simplicial complexes via a pruned Delaunay triangulation, and
catalogue the critical points across multiple directional height functions.
This produces a rich feature vector, consisting of crucial topological features
-- peaks, troughs, and saddles -- that characterise ligand surfaces relevant to
binding interactions. Unlike contemporary LBVS methods that rely on
computationally-intensive deep neural networks, we require only a lightweight
classifier. The Morse theoretic approach achieves state-of-the-art performance
on standard datasets while offering an interpretable feature vector and
scalable method for ligand prioritization in early-stage drug discovery.
| no_new_dataset | 0.948298 |
2503.04513 | Jiageng Zhong | Jiageng Zhong, Qi Zhou, Ming Li, Armin Gruen, Xuan Liao | A Novel Solution for Drone Photogrammetry with Low-overlap Aerial Images
using Monocular Depth Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Low-overlap aerial imagery poses significant challenges to traditional
photogrammetric methods, which rely heavily on high image overlap to produce
accurate and complete mapping products. In this study, we propose a novel
workflow based on monocular depth estimation to address the limitations of
conventional techniques. Our method leverages tie points obtained from aerial
triangulation to establish a relationship between monocular depth and metric
depth, thus transforming the original depth map into a metric depth map,
enabling the generation of dense depth information and the comprehensive
reconstruction of the scene. For the experiments, a high-overlap drone dataset
containing 296 images is processed using Metashape to generate depth maps and
DSMs as ground truth. Subsequently, we create a low-overlap dataset by
selecting 20 images for experimental evaluation. Results demonstrate that while
the recovered depth maps and resulting DSMs achieve meter-level accuracy, they
provide significantly better completeness compared to traditional methods,
particularly in regions covered by single images. This study showcases the
potential of monocular depth estimation in low-overlap aerial photogrammetry.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 14:59:38 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhong",
"Jiageng",
""
],
[
"Zhou",
"Qi",
""
],
[
"Li",
"Ming",
""
],
[
"Gruen",
"Armin",
""
],
[
"Liao",
"Xuan",
""
]
]
| TITLE: A Novel Solution for Drone Photogrammetry with Low-overlap Aerial Images
using Monocular Depth Estimation
ABSTRACT: Low-overlap aerial imagery poses significant challenges to traditional
photogrammetric methods, which rely heavily on high image overlap to produce
accurate and complete mapping products. In this study, we propose a novel
workflow based on monocular depth estimation to address the limitations of
conventional techniques. Our method leverages tie points obtained from aerial
triangulation to establish a relationship between monocular depth and metric
depth, thus transforming the original depth map into a metric depth map,
enabling the generation of dense depth information and the comprehensive
reconstruction of the scene. For the experiments, a high-overlap drone dataset
containing 296 images is processed using Metashape to generate depth maps and
DSMs as ground truth. Subsequently, we create a low-overlap dataset by
selecting 20 images for experimental evaluation. Results demonstrate that while
the recovered depth maps and resulting DSMs achieve meter-level accuracy, they
provide significantly better completeness compared to traditional methods,
particularly in regions covered by single images. This study showcases the
potential of monocular depth estimation in low-overlap aerial photogrammetry.
| new_dataset | 0.960212 |
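The scale-recovery step in the photogrammetry entry above maps monocular (relative) depth to metric depth using tie points from aerial triangulation. The sketch assumes a simple linear relation fitted by least squares; real pipelines would likely use a robust fit and possibly an inverse-depth model:

```python
# Fit metric ~= a * mono + b at tie points, then rescale a dense depth map.
import numpy as np

mono = np.array([0.21, 0.35, 0.48, 0.62, 0.80])      # relative depths at ties
metric = np.array([31.5, 52.4, 71.9, 93.0, 120.1])   # triangulated depths (m)

a, b = np.polyfit(mono, metric, deg=1)               # least-squares line
depth_map = np.random.default_rng(0).uniform(0.2, 0.8, (4, 4))
metric_map = a * depth_map + b                        # dense metric depth
print(round(a, 2), round(b, 2))
```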