id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (list, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
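Each row below is one arXiv record augmented with a classifier output: `prompt` packs the title and abstract into a single text field, `label` takes one of two classes (`new_dataset` / `no_new_dataset`), and `prob` is the classifier's confidence. The sketch below shows one way such records could be loaded and filtered with pandas; the file name `arxiv_new_dataset_labels.jsonl` is a hypothetical placeholder, not the actual storage location of this data.

```python
import json

import pandas as pd

# Hypothetical export path for the records shown below (an assumption,
# not the dataset's real location).
RECORDS_PATH = "arxiv_new_dataset_labels.jsonl"

# Assume one JSON object per line, with the columns listed in the header:
# id, submitter, authors, title, ..., prompt, label, prob.
with open(RECORDS_PATH, encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

df = pd.DataFrame(records)

# Keep the papers the classifier marked as introducing a new dataset,
# with reasonably high confidence.
new_dataset_papers = df[(df["label"] == "new_dataset") & (df["prob"] >= 0.9)]

print(new_dataset_papers[["id", "title", "prob"]].head())
```

With the rows shown here, a 0.9 threshold would keep, for example, 2503.04635 (3HANDS, prob 0.962) and 2503.04615 (HalluCounter, prob 0.955), while the survey and method-only papers labeled no_new_dataset would be filtered out.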
2503.04528 | Faicel Chamroukhi | Thien Pham, Angelo Furno, Fa\"icel Chamroukhi, Latifa Oukhellou | Federated Dynamic Modeling and Learning for Spatiotemporal Data
Forecasting | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an advanced Federated Learning (FL) framework for
forecasting complex spatiotemporal data, improving upon recent state-of-the-art
models. In the proposed approach, the original Gated Recurrent Unit (GRU)
module within previous Dynamic Spatial--Temporal Graph Convolutional Recurrent
Network (DSTGCRN) modeling is first replaced with a Long Short-Term Memory
(LSTM) network, enabling the resulting model to more effectively capture
long-term dependencies inherent to time series data. The resulting architecture
significantly improves the model's capacity to handle complex temporal patterns
in diverse forecasting applications. Furthermore, the proposed FL framework
integrates a novel Client-Side Validation (CSV) mechanism, introducing a
critical validation step at the client level before incorporating aggregated
parameters from the central server into local models. This ensures that only
the most effective updates are adopted, improving both the robustness and
accuracy of the forecasting model across clients. The efficiency of our
approach is demonstrated through extensive experiments on real-world
applications, including public datasets for multimodal transport demand
forecasting and private datasets for Origin-Destination (OD) matrix forecasting
in urban areas. The results demonstrate substantial improvements over
conventional methods, highlighting the framework's ability to capture complex
spatiotemporal dependencies while preserving data privacy. This work not only
provides a scalable and privacy-preserving solution for real-time,
region-specific forecasting and management but also underscores the potential
of leveraging distributed data sources in an FL context. We provide our
algorithms as open-source on GitHub.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 15:16:57 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Pham",
"Thien",
""
],
[
"Furno",
"Angelo",
""
],
[
"Chamroukhi",
"Faïcel",
""
],
[
"Oukhellou",
"Latifa",
""
]
]
| TITLE: Federated Dynamic Modeling and Learning for Spatiotemporal Data
Forecasting
ABSTRACT: This paper presents an advanced Federated Learning (FL) framework for
forecasting complex spatiotemporal data, improving upon recent state-of-the-art
models. In the proposed approach, the original Gated Recurrent Unit (GRU)
module within previous Dynamic Spatial--Temporal Graph Convolutional Recurrent
Network (DSTGCRN) modeling is first replaced with a Long Short-Term Memory
(LSTM) network, enabling the resulting model to more effectively capture
long-term dependencies inherent to time series data. The resulting architecture
significantly improves the model's capacity to handle complex temporal patterns
in diverse forecasting applications. Furthermore, the proposed FL framework
integrates a novel Client-Side Validation (CSV) mechanism, introducing a
critical validation step at the client level before incorporating aggregated
parameters from the central server into local models. This ensures that only
the most effective updates are adopted, improving both the robustness and
accuracy of the forecasting model across clients. The efficiency of our
approach is demonstrated through extensive experiments on real-world
applications, including public datasets for multimodal transport demand
forecasting and private datasets for Origin-Destination (OD) matrix forecasting
in urban areas. The results demonstrate substantial improvements over
conventional methods, highlighting the framework's ability to capture complex
spatiotemporal dependencies while preserving data privacy. This work not only
provides a scalable and privacy-preserving solution for real-time,
region-specific forecasting and management but also underscores the potential
of leveraging distributed data sources in an FL context. We provide our
algorithms as open-source on GitHub.
| no_new_dataset | 0.948202 |
2503.04543 | Wenke Huang | Wenke Huang, Jian Liang, Xianda Guo, Yiyang Fang, Guancheng Wan,
Xuankun Rong, Chi Wen, Zekun Shi, Qingyun Li, Didi Zhu, Yanbiao Ma, Ke Liang,
Bin Yang, He Li, Jiawei Shao, Mang Ye, Bo Du | Keeping Yourself is Important in Downstream Tuning Multimodal Large
Language Model | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal Large Language Models (MLLMs) integrate visual and linguistic
reasoning to address complex tasks such as image captioning and visual question
answering. While MLLMs demonstrate remarkable versatility, they exhibit limited
performance on specialized applications. Moreover, tuning MLLMs for downstream
tasks encounters two key challenges: Task-Expert Specialization, where
distribution shifts between pre-training and target datasets constrain target
performance, and Open-World Stabilization, where catastrophic forgetting erases
the model's general knowledge. In this work, we systematically review recent
advancements in MLLM tuning methodologies, classifying them into three
paradigms: (I) Selective Tuning, (II) Additive Tuning, and (III)
Reparameterization Tuning. Furthermore, we benchmark these tuning strategies
across popular MLLM architectures and diverse downstream tasks to establish
standardized evaluation analysis and systematic tuning principles. Finally, we
highlight several open challenges in this domain and propose future research
directions. To facilitate ongoing progress in this rapidly evolving field, we
provide a public repository that continuously tracks developments:
https://github.com/WenkeHuang/Awesome-MLLM-Tuning.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 15:29:13 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Huang",
"Wenke",
""
],
[
"Liang",
"Jian",
""
],
[
"Guo",
"Xianda",
""
],
[
"Fang",
"Yiyang",
""
],
[
"Wan",
"Guancheng",
""
],
[
"Rong",
"Xuankun",
""
],
[
"Wen",
"Chi",
""
],
[
"Shi",
"Zekun",
""
],
[
"Li",
"Qingyun",
""
],
[
"Zhu",
"Didi",
""
],
[
"Ma",
"Yanbiao",
""
],
[
"Liang",
"Ke",
""
],
[
"Yang",
"Bin",
""
],
[
"Li",
"He",
""
],
[
"Shao",
"Jiawei",
""
],
[
"Ye",
"Mang",
""
],
[
"Du",
"Bo",
""
]
]
| TITLE: Keeping Yourself is Important in Downstream Tuning Multimodal Large
Language Model
ABSTRACT: Multi-modal Large Language Models (MLLMs) integrate visual and linguistic
reasoning to address complex tasks such as image captioning and visual question
answering. While MLLMs demonstrate remarkable versatility, they exhibit limited
performance on specialized applications. Moreover, tuning MLLMs for downstream
tasks encounters two key challenges: Task-Expert Specialization, where
distribution shifts between pre-training and target datasets constrain target
performance, and Open-World Stabilization, where catastrophic forgetting erases
the model's general knowledge. In this work, we systematically review recent
advancements in MLLM tuning methodologies, classifying them into three
paradigms: (I) Selective Tuning, (II) Additive Tuning, and (III)
Reparameterization Tuning. Furthermore, we benchmark these tuning strategies
across popular MLLM architectures and diverse downstream tasks to establish
standardized evaluation analysis and systematic tuning principles. Finally, we
highlight several open challenges in this domain and propose future research
directions. To facilitate ongoing progress in this rapidly evolving field, we
provide a public repository that continuously tracks developments:
https://github.com/WenkeHuang/Awesome-MLLM-Tuning.
| no_new_dataset | 0.9434 |
2503.04550 | Tong Yu | Tong Yu, Yongcheng Jing, Xikun Zhang, Wentao Jiang, Wenjie Wu, Yingjie
Wang, Wenbin Hu, Bo Du, Dacheng Tao | Benchmarking Reasoning Robustness in Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent success of large language models (LLMs) in reasoning such
as DeepSeek, we for the first time identify a key dilemma in reasoning
robustness and generalization: significant performance degradation on novel or
incomplete data, suggesting a reliance on memorized patterns rather than
systematic reasoning. Our closer examination reveals four key unique
limitations underlying this issue: (1) Positional bias--models favor earlier
queries in multi-query inputs but answer later ones incorrectly (e.g.,
GPT-4o's accuracy drops from 75.8 percent to 72.8 percent); (2) Instruction
sensitivity--performance declines by 5.0 to 7.5 percent in the Qwen2.5 Series
and by 5.0 percent in DeepSeek-V3 with auxiliary guidance; (3) Numerical
fragility--value substitution sharply reduces accuracy (e.g., GPT-4o drops from
97.5 percent to 82.5 percent, GPT-o1-mini drops from 97.5 percent to 92.5
percent); and (4) Memory dependence--models resort to guesswork when missing
critical data. These findings further highlight the reliance on heuristic
recall over rigorous logical inference, demonstrating challenges in reasoning
robustness. To comprehensively investigate these robustness challenges, this
paper introduces a novel benchmark, termed as Math-RoB, that exploits
hallucinations triggered by missing information to expose reasoning gaps. This
is achieved by an instruction-based approach to generate diverse datasets that
closely resemble training distributions, facilitating a holistic robustness
assessment and advancing the development of more robust reasoning frameworks.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 15:36:06 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Yu",
"Tong",
""
],
[
"Jing",
"Yongcheng",
""
],
[
"Zhang",
"Xikun",
""
],
[
"Jiang",
"Wentao",
""
],
[
"Wu",
"Wenjie",
""
],
[
"Wang",
"Yingjie",
""
],
[
"Hu",
"Wenbin",
""
],
[
"Du",
"Bo",
""
],
[
"Tao",
"Dacheng",
""
]
]
| TITLE: Benchmarking Reasoning Robustness in Large Language Models
ABSTRACT: Despite the recent success of large language models (LLMs) in reasoning such
as DeepSeek, we for the first time identify a key dilemma in reasoning
robustness and generalization: significant performance degradation on novel or
incomplete data, suggesting a reliance on memorized patterns rather than
systematic reasoning. Our closer examination reveals four key unique
limitations underlying this issue: (1) Positional bias--models favor earlier
queries in multi-query inputs but answer later ones incorrectly (e.g.,
GPT-4o's accuracy drops from 75.8 percent to 72.8 percent); (2) Instruction
sensitivity--performance declines by 5.0 to 7.5 percent in the Qwen2.5 Series
and by 5.0 percent in DeepSeek-V3 with auxiliary guidance; (3) Numerical
fragility--value substitution sharply reduces accuracy (e.g., GPT-4o drops from
97.5 percent to 82.5 percent, GPT-o1-mini drops from 97.5 percent to 92.5
percent); and (4) Memory dependence--models resort to guesswork when missing
critical data. These findings further highlight the reliance on heuristic
recall over rigorous logical inference, demonstrating challenges in reasoning
robustness. To comprehensively investigate these robustness challenges, this
paper introduces a novel benchmark, termed as Math-RoB, that exploits
hallucinations triggered by missing information to expose reasoning gaps. This
is achieved by an instruction-based approach to generate diverse datasets that
closely resemble training distributions, facilitating a holistic robustness
assessment and advancing the development of more robust reasoning frameworks.
| no_new_dataset | 0.948058 |
2503.04569 | Yitong Luo | Yitong Luo, Hou Hei Lam, Ziang Chen, Zhenliang Zhang, Xue Feng | ValuePilot: A Two-Phase Framework for Value-Driven Decision-Making | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent advances in artificial intelligence (AI), ensuring personalized
decision-making in tasks that are not covered by training datasets remains
challenging. To address this issue, we propose ValuePilot, a two-phase
value-driven decision-making framework comprising a dataset generation toolkit
DGT and a decision-making module DMM trained on the generated data. DGT is
capable of generating scenarios based on value dimensions and closely mirroring
real-world tasks, with automated filtering techniques and human curation to
ensure the validity of the dataset. In the generated dataset, DMM learns to
recognize the inherent values of scenarios, computes action feasibility and
navigates the trade-offs between multiple value dimensions to make personalized
decisions. Extensive experiments demonstrate that, given human value
preferences, our DMM most closely aligns with human decisions, outperforming
Claude-3.5-Sonnet, Gemini-2-flash, Llama-3.1-405b and GPT-4o. This research is
a preliminary exploration of value-driven decision-making. We hope it will
stimulate interest in value-driven decision-making and personalized
decision-making within the community.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:02:53 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Luo",
"Yitong",
""
],
[
"Lam",
"Hou Hei",
""
],
[
"Chen",
"Ziang",
""
],
[
"Zhang",
"Zhenliang",
""
],
[
"Feng",
"Xue",
""
]
]
| TITLE: ValuePilot: A Two-Phase Framework for Value-Driven Decision-Making
ABSTRACT: Despite recent advances in artificial intelligence (AI), ensuring
personalized decision-making in tasks that are not covered by training datasets
remains challenging. To address this issue, we propose ValuePilot, a two-phase
value-driven decision-making framework comprising a dataset generation toolkit
DGT and a decision-making module DMM trained on the generated data. DGT is
capable of generating scenarios based on value dimensions and closely mirroring
real-world tasks, with automated filtering techniques and human curation to
ensure the validity of the dataset. In the generated dataset, DMM learns to
recognize the inherent values of scenarios, computes action feasibility and
navigates the trade-offs between multiple value dimensions to make personalized
decisions. Extensive experiments demonstrate that, given human value
preferences, our DMM most closely aligns with human decisions, outperforming
Claude-3.5-Sonnet, Gemini-2-flash, Llama-3.1-405b and GPT-4o. This research is
a preliminary exploration of value-driven decision-making. We hope it will
stimulate interest in value-driven decision-making and personalized
decision-making within the community.
| new_dataset | 0.968856 |
2503.04580 | Yibin Wu | Yibin Wu, Jian Kuang, Shahram Khorshidi, Xiaoji Niu, Lasse Klingbeil,
Maren Bennewitz, and Heiner Kuhlmann | DogLegs: Robust Proprioceptive State Estimation for Legged Robots Using
Multiple Leg-Mounted IMUs | 8 pages, 8 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust and accurate proprioceptive state estimation of the main body is
crucial for legged robots to execute tasks in extreme environments where
exteroceptive sensors, such as LiDARs and cameras, may become unreliable. In
this paper, we propose DogLegs, a state estimation system for legged robots
that fuses the measurements from a body-mounted inertial measurement unit
(Body-IMU), joint encoders, and multiple leg-mounted IMUs (Leg-IMU) using an
extended Kalman filter (EKF). The filter system contains the error states of
all IMU frames. The Leg-IMUs are used to detect foot contact, thereby providing
zero velocity measurements to update the state of the Leg-IMU frames.
Additionally, we compute the relative position constraints between the Body-IMU
and Leg-IMUs by the leg kinematics and use them to update the main body state
and reduce the error drift of the individual IMU frames. Field experimental
results have shown that our proposed system can achieve better state estimation
accuracy compared to the traditional leg odometry method (using only Body-IMU
and joint encoders) across different terrains. We make our datasets publicly
available to benefit the research community.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:17:48 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Wu",
"Yibin",
""
],
[
"Kuang",
"Jian",
""
],
[
"Khorshidi",
"Shahram",
""
],
[
"Niu",
"Xiaoji",
""
],
[
"Klingbeil",
"Lasse",
""
],
[
"Bennewitz",
"Maren",
""
],
[
"Kuhlmann",
"Heiner",
""
]
]
| TITLE: DogLegs: Robust Proprioceptive State Estimation for Legged Robots Using
Multiple Leg-Mounted IMUs
ABSTRACT: Robust and accurate proprioceptive state estimation of the main body is
crucial for legged robots to execute tasks in extreme environments where
exteroceptive sensors, such as LiDARs and cameras, may become unreliable. In
this paper, we propose DogLegs, a state estimation system for legged robots
that fuses the measurements from a body-mounted inertial measurement unit
(Body-IMU), joint encoders, and multiple leg-mounted IMUs (Leg-IMU) using an
extended Kalman filter (EKF). The filter system contains the error states of
all IMU frames. The Leg-IMUs are used to detect foot contact, thereby providing
zero velocity measurements to update the state of the Leg-IMU frames.
Additionally, we compute the relative position constraints between the Body-IMU
and Leg-IMUs by the leg kinematics and use them to update the main body state
and reduce the error drift of the individual IMU frames. Field experimental
results have shown that our proposed system can achieve better state estimation
accuracy compared to the traditional leg odometry method (using only Body-IMU
and joint encoders) across different terrains. We make our datasets publicly
available to benefit the research community.
| no_new_dataset | 0.942929 |
2503.04582 | Th\'eo Gnassounou | Th\'eo Gnassounou and Antoine Collas and R\'emi Flamary and Alexandre
Gramfort | PSDNorm: Test-Time Temporal Normalization for Deep Learning on EEG
Signals | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Distribution shift poses a significant challenge in machine learning,
particularly in biomedical applications such as EEG signals collected across
different subjects, institutions, and recording devices. While existing
normalization layers, Batch-Norm, LayerNorm and InstanceNorm, help address
distribution shifts, they fail to capture the temporal dependencies inherent in
temporal signals. In this paper, we propose PSDNorm, a layer that leverages
Monge mapping and temporal context to normalize feature maps in deep learning
models. Notably, the proposed method operates as a test-time domain adaptation
technique, addressing distribution shifts without additional training.
Evaluations on 10 sleep staging datasets using the U-Time model demonstrate
that PSDNorm achieves state-of-the-art performance at test time on datasets not
seen during training while being 4x more data-efficient than the best baseline.
Additionally, PSDNorm provides a significant improvement in robustness,
achieving markedly higher F1 scores for the 20% hardest subjects.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:20:25 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Gnassounou",
"Théo",
""
],
[
"Collas",
"Antoine",
""
],
[
"Flamary",
"Rémi",
""
],
[
"Gramfort",
"Alexandre",
""
]
]
| TITLE: PSDNorm: Test-Time Temporal Normalization for Deep Learning on EEG
Signals
ABSTRACT: Distribution shift poses a significant challenge in machine learning,
particularly in biomedical applications such as EEG signals collected across
different subjects, institutions, and recording devices. While existing
normalization layers, Batch-Norm, LayerNorm and InstanceNorm, help address
distribution shifts, they fail to capture the temporal dependencies inherent in
temporal signals. In this paper, we propose PSDNorm, a layer that leverages
Monge mapping and temporal context to normalize feature maps in deep learning
models. Notably, the proposed method operates as a test-time domain adaptation
technique, addressing distribution shifts without additional training.
Evaluations on 10 sleep staging datasets using the U-Time model demonstrate
that PSDNorm achieves state-of-the-art performance at test time on datasets not
seen during training while being 4x more data-efficient than the best baseline.
Additionally, PSDNorm provides a significant improvement in robustness,
achieving markedly higher F1 scores for the 20% hardest subjects.
| no_new_dataset | 0.948106 |
2503.04592 | Qing Zhou | Qing Zhou, Tao Yang, Junyu Gao, Weiping Ni, Junzheng Wu and Qi Wang | A Benchmark for Multi-Lingual Vision-Language Learning in Remote Sensing
Image Captioning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Remote Sensing Image Captioning (RSIC) is a cross-modal field bridging vision
and language, aimed at automatically generating natural language descriptions
of features and scenes in remote sensing imagery. Despite significant advances
in developing sophisticated methods and large-scale datasets for training
vision-language models (VLMs), two critical challenges persist: the scarcity of
non-English descriptive datasets and the lack of multilingual capability
evaluation for models. These limitations fundamentally impede the progress and
practical deployment of RSIC, particularly in the era of large VLMs. To address
these challenges, this paper presents several significant contributions to the
field. First, we introduce and analyze BRSIC (Bilingual Remote Sensing Image
Captioning), a comprehensive bilingual dataset that enriches three established
English RSIC datasets with Chinese descriptions, encompassing 13,634 images
paired with 68,170 bilingual captions. Building upon this foundation, we
develop a systematic evaluation framework that addresses the prevalent
inconsistency in evaluation protocols, enabling rigorous assessment of model
performance through standardized retraining procedures on BRSIC. Furthermore,
we present an extensive empirical study of eight state-of-the-art large
vision-language models (LVLMs), examining their capabilities across multiple
paradigms including zero-shot inference, supervised fine-tuning, and
multi-lingual training. This comprehensive evaluation provides crucial insights
into the strengths and limitations of current LVLMs in handling multilingual
remote sensing tasks. Additionally, our cross-dataset transfer experiments
reveal interesting findings. The code and data will be available at
https://github.com/mrazhou/BRSIC.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:31:34 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhou",
"Qing",
""
],
[
"Yang",
"Tao",
""
],
[
"Gao",
"Junyu",
""
],
[
"Ni",
"Weiping",
""
],
[
"Wu",
"Junzheng",
""
],
[
"Wang",
"Qi",
""
]
]
| TITLE: A Benchmark for Multi-Lingual Vision-Language Learning in Remote Sensing
Image Captioning
ABSTRACT: Remote Sensing Image Captioning (RSIC) is a cross-modal field bridging vision
and language, aimed at automatically generating natural language descriptions
of features and scenes in remote sensing imagery. Despite significant advances
in developing sophisticated methods and large-scale datasets for training
vision-language models (VLMs), two critical challenges persist: the scarcity of
non-English descriptive datasets and the lack of multilingual capability
evaluation for models. These limitations fundamentally impede the progress and
practical deployment of RSIC, particularly in the era of large VLMs. To address
these challenges, this paper presents several significant contributions to the
field. First, we introduce and analyze BRSIC (Bilingual Remote Sensing Image
Captioning), a comprehensive bilingual dataset that enriches three established
English RSIC datasets with Chinese descriptions, encompassing 13,634 images
paired with 68,170 bilingual captions. Building upon this foundation, we
develop a systematic evaluation framework that addresses the prevalent
inconsistency in evaluation protocols, enabling rigorous assessment of model
performance through standardized retraining procedures on BRSIC. Furthermore,
we present an extensive empirical study of eight state-of-the-art large
vision-language models (LVLMs), examining their capabilities across multiple
paradigms including zero-shot inference, supervised fine-tuning, and
multi-lingual training. This comprehensive evaluation provides crucial insights
into the strengths and limitations of current LVLMs in handling multilingual
remote sensing tasks. Additionally, our cross-dataset transfer experiments
reveal interesting findings. The code and data will be available at
https://github.com/mrazhou/BRSIC.
| no_new_dataset | 0.939582 |
2503.04611 | Mohammad Amin Ghanizadeh | Mohammad Amin Ghanizadeh, Mohammad Javad Dousti | Towards Data-Efficient Language Models: A Child-Inspired Approach to
Language Learning | 5 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this work, we explain our approach employed in the BabyLM Challenge, which
uses various methods of training language models (LMs) with significantly less
data compared to traditional large language models (LLMs) and is inspired by
how human children learn. While a human child is exposed to far less linguistic
input than an LLM, they still achieve remarkable language understanding and
generation abilities. To this end, we develop a model trained on a curated
dataset consisting of 10 million words, primarily sourced from child-directed
transcripts. The 2024 BabyLM Challenge initial dataset of 10M words is filtered
to 8.5M. Next, it is supplemented with a randomly selected subset of TVR
dataset consisting of 1.5M words of television dialogues. The latter dataset
ensures that similar to children, the model is also exposed to language through
media. Furthermore, we reduce the vocabulary size to 32,000 tokens, aligning it
with the limited vocabulary of children in the early stages of language
acquisition. We use curriculum learning and are able to match the baseline on
certain benchmarks while surpassing the baseline on others. Additionally,
incorporating common LLM training datasets, such as MADLAD-400, degrades
performance. These findings underscore the importance of dataset selection,
vocabulary scaling, and curriculum learning in creating more data-efficient
language models that better mimic human learning processes.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:57:26 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Ghanizadeh",
"Mohammad Amin",
""
],
[
"Dousti",
"Mohammad Javad",
""
]
]
| TITLE: Towards Data-Efficient Language Models: A Child-Inspired Approach to
Language Learning
ABSTRACT: In this work, we explain our approach employed in the BabyLM Challenge, which
uses various methods of training language models (LMs) with significantly less
data compared to traditional large language models (LLMs) and is inspired by
how human children learn. While a human child is exposed to far less linguistic
input than an LLM, they still achieve remarkable language understanding and
generation abilities. To this end, we develop a model trained on a curated
dataset consisting of 10 million words, primarily sourced from child-directed
transcripts. The 2024 BabyLM Challenge initial dataset of 10M words is filtered
to 8.5M. Next, it is supplemented with a randomly selected subset of TVR
dataset consisting of 1.5M words of television dialogues. The latter dataset
ensures that similar to children, the model is also exposed to language through
media. Furthermore, we reduce the vocabulary size to 32,000 tokens, aligning it
with the limited vocabulary of children in the early stages of language
acquisition. We use curriculum learning and are able to match the baseline on
certain benchmarks while surpassing the baseline on others. Additionally,
incorporating common LLM training datasets, such as MADLAD-400, degrades
performance. These findings underscore the importance of dataset selection,
vocabulary scaling, and curriculum learning in creating more data-efficient
language models that better mimic human learning processes.
| new_dataset | 0.971375 |
2503.04615 | Ashok Urlana | Ashok Urlana, Gopichand Kanumolu, Charaka Vinayak Kumar, Bala
Mallikarjunarao Garlapati, Rahul Mishra | HalluCounter: Reference-free LLM Hallucination Detection in the Wild! | 30 pages, 4 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Response consistency-based, reference-free hallucination detection (RFHD)
methods do not depend on internal model states, such as generation
probabilities or gradients, which Grey-box models typically rely on but are
inaccessible in closed-source LLMs. However, their inability to capture
query-response alignment patterns often results in lower detection accuracy.
Additionally, the lack of large-scale benchmark datasets spanning diverse
domains remains a challenge, as most existing datasets are limited in size and
scope. To this end, we propose HalluCounter, a novel reference-free
hallucination detection method that utilizes both response-response and
query-response consistency and alignment patterns. This enables the training of
a classifier that detects hallucinations and provides a confidence score and an
optimal response for user queries. Furthermore, we introduce HalluCounterEval,
a benchmark dataset comprising both synthetically generated and human-curated
samples across multiple domains. Our method outperforms state-of-the-art
approaches by a significant margin, achieving over 90\% average confidence in
hallucination detection across datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 16:59:18 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Urlana",
"Ashok",
""
],
[
"Kanumolu",
"Gopichand",
""
],
[
"Kumar",
"Charaka Vinayak",
""
],
[
"Garlapati",
"Bala Mallikarjunarao",
""
],
[
"Mishra",
"Rahul",
""
]
]
| TITLE: HalluCounter: Reference-free LLM Hallucination Detection in the Wild!
ABSTRACT: Response consistency-based, reference-free hallucination detection (RFHD)
methods do not depend on internal model states, such as generation
probabilities or gradients, which Grey-box models typically rely on but are
inaccessible in closed-source LLMs. However, their inability to capture
query-response alignment patterns often results in lower detection accuracy.
Additionally, the lack of large-scale benchmark datasets spanning diverse
domains remains a challenge, as most existing datasets are limited in size and
scope. To this end, we propose HalluCounter, a novel reference-free
hallucination detection method that utilizes both response-response and
query-response consistency and alignment patterns. This enables the training of
a classifier that detects hallucinations and provides a confidence score and an
optimal response for user queries. Furthermore, we introduce HalluCounterEval,
a benchmark dataset comprising both synthetically generated and human-curated
samples across multiple domains. Our method outperforms state-of-the-art
approaches by a significant margin, achieving over 90\% average confidence in
hallucination detection across datasets.
| new_dataset | 0.95511 |
2503.04619 | Xin Zhang | Xin Zhang, Qiyu Wei, Yingjie Zhu, Linhai Zhang, Deyu Zhou, Sophia
Ananiadou | SynGraph: A Dynamic Graph-LLM Synthesis Framework for Sparse Streaming
User Sentiment Modeling | 18 pages, 17 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User reviews on e-commerce platforms exhibit dynamic sentiment patterns
driven by temporal and contextual factors. Traditional sentiment analysis
methods focus on static reviews, failing to capture the evolving temporal
relationship between user sentiment rating and textual content. Sentiment
analysis on streaming reviews addresses this limitation by modeling and
predicting the temporal evolution of user sentiments. However, it suffers from
data sparsity, manifesting in temporal, spatial, and combined forms. In this
paper, we introduce SynGraph, a novel framework designed to address data
sparsity in sentiment analysis on streaming reviews. SynGraph alleviates data
sparsity by categorizing users into mid-tail, long-tail, and extreme scenarios
and incorporating LLM-augmented enhancements within a dynamic graph-based
structure. Experiments on real-world datasets demonstrate its effectiveness in
addressing sparsity and improving sentiment modeling in streaming reviews.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:05:33 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhang",
"Xin",
""
],
[
"Wei",
"Qiyu",
""
],
[
"Zhu",
"Yingjie",
""
],
[
"Zhang",
"Linhai",
""
],
[
"Zhou",
"Deyu",
""
],
[
"Ananiadou",
"Sophia",
""
]
]
| TITLE: SynGraph: A Dynamic Graph-LLM Synthesis Framework for Sparse Streaming
User Sentiment Modeling
ABSTRACT: User reviews on e-commerce platforms exhibit dynamic sentiment patterns
driven by temporal and contextual factors. Traditional sentiment analysis
methods focus on static reviews, failing to capture the evolving temporal
relationship between user sentiment rating and textual content. Sentiment
analysis on streaming reviews addresses this limitation by modeling and
predicting the temporal evolution of user sentiments. However, it suffers from
data sparsity, manifesting in temporal, spatial, and combined forms. In this
paper, we introduce SynGraph, a novel framework designed to address data
sparsity in sentiment analysis on streaming reviews. SynGraph alleviates data
sparsity by categorizing users into mid-tail, long-tail, and extreme scenarios
and incorporating LLM-augmented enhancements within a dynamic graph-based
structure. Experiments on real-world datasets demonstrate its effectiveness in
addressing sparsity and improving sentiment modeling in streaming reviews.
| no_new_dataset | 0.944842 |
2503.04634 | Hong Liu | Hong Liu, Haosen Yang, Evi M.C. Huijben, Mark Schuiveling, Ruisheng
Su, Josien P.W. Pluim, Mitko Veta | PathoPainter: Augmenting Histopathology Segmentation via Tumor-aware
Inpainting | 10 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumor segmentation plays a critical role in histopathology, but it requires
costly, fine-grained image-mask pairs annotated by pathologists. Thus,
synthesizing histopathology data to expand the dataset is highly desirable.
Previous works suffer from inaccuracies and limited diversity in image-mask
pairs, both of which hinder segmentation training, particularly given small-scale
datasets and the inherently complex nature of histopathology images. To address
this challenge, we propose PathoPainter, which reformulates image-mask pair
generation as a tumor inpainting task. Specifically, our approach preserves the
background while inpainting the tumor region, ensuring precise alignment
between the generated image and its corresponding mask. To enhance dataset
diversity while maintaining biological plausibility, we incorporate a sampling
mechanism that conditions tumor inpainting on regional embeddings from a
different image. Additionally, we introduce a filtering strategy to exclude
uncertain synthetic regions, further improving the quality of the generated
data. Our comprehensive evaluation spans multiple datasets featuring diverse
tumor types and various training data scales. As a result, segmentation
improved significantly with our synthetic data, surpassing existing
segmentation data synthesis approaches, e.g., 75.69% -> 77.69% on CAMELYON16.
The code is available at https://github.com/HongLiuuuuu/PathoPainter.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:21:12 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Liu",
"Hong",
""
],
[
"Yang",
"Haosen",
""
],
[
"Huijben",
"Evi M. C.",
""
],
[
"Schuiveling",
"Mark",
""
],
[
"Su",
"Ruisheng",
""
],
[
"Pluim",
"Josien P. W.",
""
],
[
"Veta",
"Mitko",
""
]
]
| TITLE: PathoPainter: Augmenting Histopathology Segmentation via Tumor-aware
Inpainting
ABSTRACT: Tumor segmentation plays a critical role in histopathology, but it requires
costly, fine-grained image-mask pairs annotated by pathologists. Thus,
synthesizing histopathology data to expand the dataset is highly desirable.
Previous works suffer from inaccuracies and limited diversity in image-mask
pairs, both of which hinder segmentation training, particularly given small-scale
datasets and the inherently complex nature of histopathology images. To address
this challenge, we propose PathoPainter, which reformulates image-mask pair
generation as a tumor inpainting task. Specifically, our approach preserves the
background while inpainting the tumor region, ensuring precise alignment
between the generated image and its corresponding mask. To enhance dataset
diversity while maintaining biological plausibility, we incorporate a sampling
mechanism that conditions tumor inpainting on regional embeddings from a
different image. Additionally, we introduce a filtering strategy to exclude
uncertain synthetic regions, further improving the quality of the generated
data. Our comprehensive evaluation spans multiple datasets featuring diverse
tumor types and various training data scales. As a result, segmentation
improved significantly with our synthetic data, surpassing existing
segmentation data synthesis approaches, e.g., 75.69% -> 77.69% on CAMELYON16.
The code is available at https://github.com/HongLiuuuuu/PathoPainter.
| no_new_dataset | 0.952706 |
2503.04635 | Artin Saberpour | Artin Saberpour Abadian and Yi-Chi Liao and Ata Otaran and Rishabh
Dabral and Marie Muehlhaus and Christian Theobalt and Martin Schmitz and
J\"urgen Steimle | 3HANDS Dataset: Learning from Humans for Generating Naturalistic
Handovers with Supernumerary Robotic Limbs | CHI '25 | null | 10.1145/3706598.3713306 | null | cs.RO cs.CV cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Supernumerary robotic limbs (SRLs) are robotic structures integrated closely
with the user's body, which augment human physical capabilities and necessitate
seamless, naturalistic human-machine interaction. For effective assistance in
physical tasks, enabling SRLs to hand over objects to humans is crucial. Yet,
designing heuristic-based policies for robots is time-consuming, difficult to
generalize across tasks, and results in less human-like motion. When trained
with proper datasets, generative models are powerful alternatives for creating
naturalistic handover motions. We introduce 3HANDS, a novel dataset of object
handover interactions between a participant performing a daily activity and
another participant enacting a hip-mounted SRL in a naturalistic manner. 3HANDS
captures the unique characteristics of SRL interactions: operating in intimate
personal space with asymmetric object origins, implicit motion synchronization,
and the user's engagement in a primary task during the handover. To demonstrate
the effectiveness of our dataset, we present three models: one that generates
naturalistic handover trajectories, another that determines the appropriate
handover endpoints, and a third that predicts the moment to initiate a
handover. In a user study (N=10), we compare the handover interaction performed
with our method to a baseline. The findings show that our method was
perceived as significantly more natural, less physically demanding, and more
comfortable.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:23:55 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Abadian",
"Artin Saberpour",
""
],
[
"Liao",
"Yi-Chi",
""
],
[
"Otaran",
"Ata",
""
],
[
"Dabral",
"Rishabh",
""
],
[
"Muehlhaus",
"Marie",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Schmitz",
"Martin",
""
],
[
"Steimle",
"Jürgen",
""
]
]
| TITLE: 3HANDS Dataset: Learning from Humans for Generating Naturalistic
Handovers with Supernumerary Robotic Limbs
ABSTRACT: Supernumerary robotic limbs (SRLs) are robotic structures integrated closely
with the user's body, which augment human physical capabilities and necessitate
seamless, naturalistic human-machine interaction. For effective assistance in
physical tasks, enabling SRLs to hand over objects to humans is crucial. Yet,
designing heuristic-based policies for robots is time-consuming, difficult to
generalize across tasks, and results in less human-like motion. When trained
with proper datasets, generative models are powerful alternatives for creating
naturalistic handover motions. We introduce 3HANDS, a novel dataset of object
handover interactions between a participant performing a daily activity and
another participant enacting a hip-mounted SRL in a naturalistic manner. 3HANDS
captures the unique characteristics of SRL interactions: operating in intimate
personal space with asymmetric object origins, implicit motion synchronization,
and the user's engagement in a primary task during the handover. To demonstrate
the effectiveness of our dataset, we present three models: one that generates
naturalistic handover trajectories, another that determines the appropriate
handover endpoints, and a third that predicts the moment to initiate a
handover. In a user study (N=10), we compare the handover interaction performed
with our method to a baseline. The findings show that our method was
perceived as significantly more natural, less physically demanding, and more
comfortable.
| new_dataset | 0.962143 |
2503.04639 | Zhijian Yang | Aishik Konwer, Zhijian Yang, Erhan Bas, Cao Xiao, Prateek Prasanna,
Parminder Bhatia, Taha Kass-Hout | Enhancing SAM with Efficient Prompting and Preference Optimization for
Semi-supervised Medical Image Segmentation | Accepted to CVPR 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Foundational models such as the Segment Anything Model (SAM) are gaining
traction in medical imaging segmentation, supporting multiple downstream tasks.
However, such models are supervised in nature, still relying on large annotated
datasets or prompts supplied by experts. Conventional techniques such as active
learning to alleviate such limitations are limited in scope and still
necessitate continuous human involvement and complex domain knowledge for label
refinement or establishing reward ground truth. To address these challenges, we
propose an enhanced Segment Anything Model (SAM) framework that utilizes
annotation-efficient prompts generated in a fully unsupervised fashion, while
still capturing essential semantic, location, and shape information through
contrastive language-image pretraining and visual question answering. We adopt
the direct preference optimization technique to design an optimal policy that
enables the model to generate high-fidelity segmentations with simple ratings
or rankings provided by a virtual annotator simulating the human annotation
process. State-of-the-art performance of our framework in tasks such as lung
segmentation, breast tumor segmentation, and organ segmentation across various
modalities, including X-ray, ultrasound, and abdominal CT, justifies its
effectiveness in low-annotation data scenarios.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:28:48 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Konwer",
"Aishik",
""
],
[
"Yang",
"Zhijian",
""
],
[
"Bas",
"Erhan",
""
],
[
"Xiao",
"Cao",
""
],
[
"Prasanna",
"Prateek",
""
],
[
"Bhatia",
"Parminder",
""
],
[
"Kass-Hout",
"Taha",
""
]
]
| TITLE: Enhancing SAM with Efficient Prompting and Preference Optimization for
Semi-supervised Medical Image Segmentation
ABSTRACT: Foundational models such as the Segment Anything Model (SAM) are gaining
traction in medical imaging segmentation, supporting multiple downstream tasks.
However, such models are supervised in nature, still relying on large annotated
datasets or prompts supplied by experts. Conventional techniques such as active
learning to alleviate such limitations are limited in scope and still
necessitate continuous human involvement and complex domain knowledge for label
refinement or establishing reward ground truth. To address these challenges, we
propose an enhanced Segment Anything Model (SAM) framework that utilizes
annotation-efficient prompts generated in a fully unsupervised fashion, while
still capturing essential semantic, location, and shape information through
contrastive language-image pretraining and visual question answering. We adopt
the direct preference optimization technique to design an optimal policy that
enables the model to generate high-fidelity segmentations with simple ratings
or rankings provided by a virtual annotator simulating the human annotation
process. State-of-the-art performance of our framework in tasks such as lung
segmentation, breast tumor segmentation, and organ segmentation across various
modalities, including X-ray, ultrasound, and abdominal CT, justifies its
effectiveness in low-annotation data scenarios.
| no_new_dataset | 0.94887 |
2503.04641 | Yuqi Hu | Yuqi Hu, Longguang Wang, Xian Liu, Ling-Hao Chen, Yuwei Guo, Yukai
Shi, Ce Liu, Anyi Rao, Zeyu Wang, Hui Xiong | Simulating the Real World: A Unified Survey of Multimodal Generative
Models | Repository for the related papers at
https://github.com/ALEEEHU/World-Simulator | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding and replicating the real world is a critical challenge in
Artificial General Intelligence (AGI) research. To achieve this, many existing
approaches, such as world models, aim to capture the fundamental principles
governing the physical world, enabling more accurate simulations and meaningful
interactions. However, current methods often treat different modalities,
including 2D (images), videos, 3D, and 4D representations, as independent
domains, overlooking their interdependencies. Additionally, these methods
typically focus on isolated dimensions of reality without systematically
integrating their connections. In this work, we present a unified survey of
multimodal generative models that investigates the progression of data
dimensionality in real-world simulation. Specifically, this survey starts from
2D generation (appearance), then moves to video (appearance+dynamics) and 3D
generation (appearance+geometry), and finally culminates in 4D generation that
integrates all dimensions. To the best of our knowledge, this is the first
attempt to systematically unify the study of 2D, video, 3D and 4D generation
within a single framework. To guide future research, we provide a comprehensive
review of datasets, evaluation metrics, and future directions, fostering
insights for newcomers. This survey serves as a bridge to advance the study of
multimodal generative models and real-world simulation within a unified
framework.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:31:43 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Hu",
"Yuqi",
""
],
[
"Wang",
"Longguang",
""
],
[
"Liu",
"Xian",
""
],
[
"Chen",
"Ling-Hao",
""
],
[
"Guo",
"Yuwei",
""
],
[
"Shi",
"Yukai",
""
],
[
"Liu",
"Ce",
""
],
[
"Rao",
"Anyi",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Xiong",
"Hui",
""
]
]
| TITLE: Simulating the Real World: A Unified Survey of Multimodal Generative
Models
ABSTRACT: Understanding and replicating the real world is a critical challenge in
Artificial General Intelligence (AGI) research. To achieve this, many existing
approaches, such as world models, aim to capture the fundamental principles
governing the physical world, enabling more accurate simulations and meaningful
interactions. However, current methods often treat different modalities,
including 2D (images), videos, 3D, and 4D representations, as independent
domains, overlooking their interdependencies. Additionally, these methods
typically focus on isolated dimensions of reality without systematically
integrating their connections. In this work, we present a unified survey of
multimodal generative models that investigates the progression of data
dimensionality in real-world simulation. Specifically, this survey starts from
2D generation (appearance), then moves to video (appearance+dynamics) and 3D
generation (appearance+geometry), and finally culminates in 4D generation that
integrates all dimensions. To the best of our knowledge, this is the first
attempt to systematically unify the study of 2D, video, 3D and 4D generation
within a single framework. To guide future research, we provide a comprehensive
review of datasets, evaluation metrics, and future directions, fostering
insights for newcomers. This survey serves as a bridge to advance the study of
multimodal generative models and real-world simulation within a unified
framework.
| no_new_dataset | 0.947186 |
2503.04643 | Hong Liu | Hong Liu, Haosen Yang, Federica Eduati, Josien P.W. Pluim, Mitko Veta | Adaptive Prototype Learning for Multimodal Cancer Survival Analysis | 10 pages, 3 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leveraging multimodal data, particularly the integration of whole-slide
histology images (WSIs) and transcriptomic profiles, holds great promise for
improving cancer survival prediction. However, excessive redundancy in
multimodal data can degrade model performance. In this paper, we propose
Adaptive Prototype Learning (APL), a novel and effective approach for
multimodal cancer survival analysis. APL adaptively learns representative
prototypes in a data-driven manner, reducing redundancy while preserving
critical information. Our method employs two sets of learnable query vectors
that serve as a bridge between high-dimensional representations and survival
prediction, capturing task-relevant features. Additionally, we introduce a
multimodal mixed self-attention mechanism to enable cross-modal interactions,
further enhancing information fusion. Extensive experiments on five benchmark
cancer datasets demonstrate the superiority of our approach over existing
methods. The code is available at https://github.com/HongLiuuuuu/APL.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:32:15 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Liu",
"Hong",
""
],
[
"Yang",
"Haosen",
""
],
[
"Eduati",
"Federica",
""
],
[
"Pluim",
"Josien P. W.",
""
],
[
"Veta",
"Mitko",
""
]
]
| TITLE: Adaptive Prototype Learning for Multimodal Cancer Survival Analysis
ABSTRACT: Leveraging multimodal data, particularly the integration of whole-slide
histology images (WSIs) and transcriptomic profiles, holds great promise for
improving cancer survival prediction. However, excessive redundancy in
multimodal data can degrade model performance. In this paper, we propose
Adaptive Prototype Learning (APL), a novel and effective approach for
multimodal cancer survival analysis. APL adaptively learns representative
prototypes in a data-driven manner, reducing redundancy while preserving
critical information. Our method employs two sets of learnable query vectors
that serve as a bridge between high-dimensional representations and survival
prediction, capturing task-relevant features. Additionally, we introduce a
multimodal mixed self-attention mechanism to enable cross-modal interactions,
further enhancing information fusion. Extensive experiments on five benchmark
cancer datasets demonstrate the superiority of our approach over existing
methods. The code is available at https://github.com/HongLiuuuuu/APL.
| no_new_dataset | 0.948251 |
2503.04645 | Qunsong Zeng | Qunsong Zeng, Jianhao Huang, Zhanwei Wang, Kaibin Huang, Kin K. Leung | Ultra-Low-Latency Edge Intelligent Sensing: A Source-Channel Tradeoff
and Its Application to Coding Rate Adaptation | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The forthcoming sixth-generation (6G) mobile network is set to merge edge
artificial intelligence (AI) and integrated sensing and communication (ISAC)
extensively, giving rise to the new paradigm of edge intelligent sensing
(EI-Sense). This paradigm leverages ubiquitous edge devices for environmental
sensing and deploys AI algorithms at edge servers to interpret the observations
via remote inference on wirelessly uploaded features. A significant challenge
arises in designing EI-Sense systems for 6G mission-critical applications,
which demand high performance under stringent latency constraints. To tackle
this challenge, we focus on the end-to-end (E2E) performance of EI-Sense and
characterize a source-channel tradeoff that balances source distortion and
channel reliability. In this work, we establish a theoretical foundation for
the source-channel tradeoff by quantifying the effects of source coding on
feature discriminant gains and channel reliability on packet loss. Building on
this foundation, we design the coding rate control by optimizing the tradeoff
to minimize the E2E sensing error probability, leading to a low-complexity
algorithm for ultra-low-latency EI-Sense. Finally, we validate our theoretical
analysis and proposed coding rate control algorithm through extensive
experiments on both synthetic and real datasets, demonstrating the sensing
performance gain of our approach with respect to traditional
reliability-centric methods.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:32:35 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zeng",
"Qunsong",
""
],
[
"Huang",
"Jianhao",
""
],
[
"Wang",
"Zhanwei",
""
],
[
"Huang",
"Kaibin",
""
],
[
"Leung",
"Kin K.",
""
]
]
| TITLE: Ultra-Low-Latency Edge Intelligent Sensing: A Source-Channel Tradeoff
and Its Application to Coding Rate Adaptation
ABSTRACT: The forthcoming sixth-generation (6G) mobile network is set to merge edge
artificial intelligence (AI) and integrated sensing and communication (ISAC)
extensively, giving rise to the new paradigm of edge intelligent sensing
(EI-Sense). This paradigm leverages ubiquitous edge devices for environmental
sensing and deploys AI algorithms at edge servers to interpret the observations
via remote inference on wirelessly uploaded features. A significant challenge
arises in designing EI-Sense systems for 6G mission-critical applications,
which demand high performance under stringent latency constraints. To tackle
this challenge, we focus on the end-to-end (E2E) performance of EI-Sense and
characterize a source-channel tradeoff that balances source distortion and
channel reliability. In this work, we establish a theoretical foundation for
the source-channel tradeoff by quantifying the effects of source coding on
feature discriminant gains and channel reliability on packet loss. Building on
this foundation, we design the coding rate control by optimizing the tradeoff
to minimize the E2E sensing error probability, leading to a low-complexity
algorithm for ultra-low-latency EI-Sense. Finally, we validate our theoretical
analysis and proposed coding rate control algorithm through extensive
experiments on both synthetic and real datasets, demonstrating the sensing
performance gain of our approach with respect to traditional
reliability-centric methods.
| no_new_dataset | 0.943971 |
2503.04650 | Jiang Li | Jiang Li, Xiaoping Wang | Joint Masked Reconstruction and Contrastive Learning for Mining
Interactions Between Proteins | Submitted | null | null | null | cs.LG cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein-protein interaction (PPI) prediction is an instrumental means in
elucidating the mechanisms underlying cellular operations, holding significant
practical implications for the realms of pharmaceutical development and
clinical treatment. Presently, the majority of research methods primarily
concentrate on the analysis of amino acid sequences, while investigations
predicated on protein structures remain in the nascent stages of exploration.
Despite the emergence of several structure-based algorithms in recent years,
these are still confronted with inherent challenges: (1) the extraction of
intrinsic structural information of proteins typically necessitates the
expenditure of substantial computational resources; (2) these models are overly
reliant on seen protein data, struggling to effectively unearth interaction
cues between unknown proteins. To further propel advancements in this domain,
this paper introduces a novel PPI prediction method combining masked
reconstruction and contrastive learning, termed JmcPPI. This methodology
dissects the PPI prediction task into two distinct phases: during the residue
structure encoding phase, JmcPPI devises two feature reconstruction tasks and
employs a graph attention mechanism to capture structural information between
residues; during the protein interaction inference phase, JmcPPI perturbs the
original PPI graph and employs a multi-graph contrastive learning strategy to
thoroughly mine extrinsic interaction information of novel proteins. Extensive
experiments conducted on three widely utilized PPI datasets demonstrate that
JmcPPI surpasses existing optimal baseline models across various data partition
schemes. The associated code can be accessed via
https://github.com/lijfrank-open/JmcPPI.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:39:12 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Li",
"Jiang",
""
],
[
"Wang",
"Xiaoping",
""
]
]
| TITLE: Joint Masked Reconstruction and Contrastive Learning for Mining
Interactions Between Proteins
ABSTRACT: Protein-protein interaction (PPI) prediction is an instrumental means in
elucidating the mechanisms underlying cellular operations, holding significant
practical implications for the realms of pharmaceutical development and
clinical treatment. Presently, the majority of research methods primarily
concentrate on the analysis of amino acid sequences, while investigations
predicated on protein structures remain in the nascent stages of exploration.
Despite the emergence of several structure-based algorithms in recent years,
these are still confronted with inherent challenges: (1) the extraction of
intrinsic structural information of proteins typically necessitates the
expenditure of substantial computational resources; (2) these models are overly
reliant on seen protein data, struggling to effectively unearth interaction
cues between unknown proteins. To further propel advancements in this domain,
this paper introduces a novel PPI prediction method joining masked
reconstruction and contrastive learning, termed JmcPPI. This methodology
dissects the PPI prediction task into two distinct phases: during the residue
structure encoding phase, JmcPPI devises two feature reconstruction tasks and
employs a graph attention mechanism to capture structural information between
residues; during the protein interaction inference phase, JmcPPI perturbs the
original PPI graph and employs a multi-graph contrastive learning strategy to
thoroughly mine extrinsic interaction information of novel proteins. Extensive
experiments conducted on three widely utilized PPI datasets demonstrate that
JmcPPI surpasses existing optimal baseline models across various data partition
schemes. The associated code can be accessed via
https://github.com/lijfrank-open/JmcPPI.
| no_new_dataset | 0.940953 |
2503.04653 | Tengfei Zhang | Tengfei Zhang, Ziheng Zhao, Chaoyi Wu, Xiao Zhou, Ya Zhang, Yangfeng
Wang, Weidi Xie | RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval
via Radiology Report Mining | null | null | null | null | cs.CV cs.IR eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing advanced medical imaging retrieval systems is challenging due to
the varying definitions of `similar images' across different medical contexts.
This challenge is compounded by the lack of large-scale, high-quality medical
imaging retrieval datasets and benchmarks. In this paper, we propose a novel
methodology that leverages dense radiology reports to define image-wise
similarity ordering at multiple granularities in a scalable and fully automatic
manner. Using this approach, we construct two comprehensive medical imaging
retrieval datasets: MIMIC-IR for Chest X-rays and CTRATE-IR for CT scans,
providing detailed image-image ranking annotations conditioned on diverse
anatomical structures. Furthermore, we develop two retrieval systems, RadIR-CXR
and model-ChestCT, which demonstrate superior performance in traditional
image-image and image-report retrieval tasks. These systems also enable
flexible, effective image retrieval conditioned on specific anatomical
structures described in text, achieving state-of-the-art results on 77 out of
78 metrics.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:43:03 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Zhang",
"Tengfei",
""
],
[
"Zhao",
"Ziheng",
""
],
[
"Wu",
"Chaoyi",
""
],
[
"Zhou",
"Xiao",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Yangfeng",
""
],
[
"Xie",
"Weidi",
""
]
]
| TITLE: RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval
via Radiology Report Mining
ABSTRACT: Developing advanced medical imaging retrieval systems is challenging due to
the varying definitions of `similar images' across different medical contexts.
This challenge is compounded by the lack of large-scale, high-quality medical
imaging retrieval datasets and benchmarks. In this paper, we propose a novel
methodology that leverages dense radiology reports to define image-wise
similarity ordering at multiple granularities in a scalable and fully automatic
manner. Using this approach, we construct two comprehensive medical imaging
retrieval datasets: MIMIC-IR for Chest X-rays and CTRATE-IR for CT scans,
providing detailed image-image ranking annotations conditioned on diverse
anatomical structures. Furthermore, we develop two retrieval systems, RadIR-CXR
and model-ChestCT, which demonstrate superior performance in traditional
image-image and image-report retrieval tasks. These systems also enable
flexible, effective image retrieval conditioned on specific anatomical
structures described in text, achieving state-of-the-art results on 77 out of
78 metrics.
| no_new_dataset | 0.637124 |
2503.04666 | Emanuele Bugliarello | Emanuele Bugliarello, Anurag Arnab, Roni Paiss, Pieter-Jan Kindermans,
Cordelia Schmid | What Are You Doing? A Closer Look at Controllable Human Video Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-quality benchmarks are crucial for driving progress in machine learning
research. However, despite the growing interest in video generation, there is
no comprehensive dataset to evaluate human generation. Humans can perform a
wide variety of actions and interactions, but existing datasets, like TikTok
and TED-Talks, lack the diversity and complexity to fully capture the
capabilities of video generation models. We close this gap by introducing `What
Are You Doing?' (WYD): a new benchmark for fine-grained evaluation of
controllable image-to-video generation of humans. WYD consists of 1,544
captioned videos that have been meticulously collected and annotated with 56
fine-grained categories. These allow us to systematically measure performance
across 9 aspects of human generation, including actions, interactions and
motion. We also propose and validate automatic metrics that leverage our
annotations and better capture human evaluations. Equipped with our dataset and
metrics, we perform in-depth analyses of seven state-of-the-art models in
controllable image-to-video generation, showing how WYD provides novel insights
about the capabilities of these models. We release our data and code to drive
forward progress in human video generation modeling at
https://github.com/google-deepmind/wyd-benchmark.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 17:59:29 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Bugliarello",
"Emanuele",
""
],
[
"Arnab",
"Anurag",
""
],
[
"Paiss",
"Roni",
""
],
[
"Kindermans",
"Pieter-Jan",
""
],
[
"Schmid",
"Cordelia",
""
]
]
| TITLE: What Are You Doing? A Closer Look at Controllable Human Video Generation
ABSTRACT: High-quality benchmarks are crucial for driving progress in machine learning
research. However, despite the growing interest in video generation, there is
no comprehensive dataset to evaluate human generation. Humans can perform a
wide variety of actions and interactions, but existing datasets, like TikTok
and TED-Talks, lack the diversity and complexity to fully capture the
capabilities of video generation models. We close this gap by introducing `What
Are You Doing?' (WYD): a new benchmark for fine-grained evaluation of
controllable image-to-video generation of humans. WYD consists of 1,544
captioned videos that have been meticulously collected and annotated with 56
fine-grained categories. These allow us to systematically measure performance
across 9 aspects of human generation, including actions, interactions and
motion. We also propose and validate automatic metrics that leverage our
annotations and better capture human evaluations. Equipped with our dataset and
metrics, we perform in-depth analyses of seven state-of-the-art models in
controllable image-to-video generation, showing how WYD provides novel insights
about the capabilities of these models. We release our data and code to drive
forward progress in human video generation modeling at
https://github.com/google-deepmind/wyd-benchmark.
| new_dataset | 0.975273 |
2503.04680 | Ryan Barron | Ryan Barron, Maksim E. Eren, Duc P. Truong, Cynthia Matuszek, James
Wendelberger, Mary F. Dorn, Boian Alexandrov | Matrix Factorization for Inferring Associations and Missing Links | 35 pages, 14 figures, 3 tables, 1 algorithm | null | null | null | cs.LG cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Missing link prediction is a method for network analysis, with applications
in recommender systems, biology, social sciences, cybersecurity, information
retrieval, and Artificial Intelligence (AI) reasoning in Knowledge Graphs.
Missing link prediction identifies unseen but potentially existing connections
in a network by analyzing the observed patterns and relationships. In
proliferation detection, this supports efforts to identify and characterize
attempts by state and non-state actors to acquire nuclear weapons or associated
technology - a notoriously challenging but vital mission for global security.
Dimensionality reduction techniques like Non-Negative Matrix Factorization
(NMF) and Logistic Matrix Factorization (LMF) are effective but require
selection of the matrix rank parameter, that is, of the number of hidden
features, k, to avoid over/under-fitting. We introduce novel Weighted (WNMFk),
Boolean (BNMFk), and Recommender (RNMFk) matrix factorization methods, along
with ensemble variants incorporating logistic factorization, for link
prediction. Our methods integrate automatic model determination for rank
estimation by evaluating stability and accuracy using a modified bootstrap
methodology and uncertainty quantification (UQ), assessing prediction
reliability under random perturbations. We incorporate Otsu threshold selection
and k-means clustering for Boolean matrix factorization, comparing them to
coordinate descent-based Boolean thresholding. Our experiments highlight the
impact of rank k selection, evaluate model performance under varying test-set
sizes, and demonstrate the benefits of UQ for reliable predictions using
abstention. We validate our methods on three synthetic datasets (Boolean and
uniformly distributed) and benchmark them against LMF and symmetric LMF
(symLMF) on five real-world protein-protein interaction networks, showcasing an
improved prediction performance.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:22:46 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Barron",
"Ryan",
""
],
[
"Eren",
"Maksim E.",
""
],
[
"Truong",
"Duc P.",
""
],
[
"Matuszek",
"Cynthia",
""
],
[
"Wendelberger",
"James",
""
],
[
"Dorn",
"Mary F.",
""
],
[
"Alexandrov",
"Boian",
""
]
]
| TITLE: Matrix Factorization for Inferring Associations and Missing Links
ABSTRACT: Missing link prediction is a method for network analysis, with applications
in recommender systems, biology, social sciences, cybersecurity, information
retrieval, and Artificial Intelligence (AI) reasoning in Knowledge Graphs.
Missing link prediction identifies unseen but potentially existing connections
in a network by analyzing the observed patterns and relationships. In
proliferation detection, this supports efforts to identify and characterize
attempts by state and non-state actors to acquire nuclear weapons or associated
technology - a notoriously challenging but vital mission for global security.
Dimensionality reduction techniques like Non-Negative Matrix Factorization
(NMF) and Logistic Matrix Factorization (LMF) are effective but require
selection of the matrix rank parameter, that is, of the number of hidden
features, k, to avoid over/under-fitting. We introduce novel Weighted (WNMFk),
Boolean (BNMFk), and Recommender (RNMFk) matrix factorization methods, along
with ensemble variants incorporating logistic factorization, for link
prediction. Our methods integrate automatic model determination for rank
estimation by evaluating stability and accuracy using a modified bootstrap
methodology and uncertainty quantification (UQ), assessing prediction
reliability under random perturbations. We incorporate Otsu threshold selection
and k-means clustering for Boolean matrix factorization, comparing them to
coordinate descent-based Boolean thresholding. Our experiments highlight the
impact of rank k selection, evaluate model performance under varying test-set
sizes, and demonstrate the benefits of UQ for reliable predictions using
abstention. We validate our methods on three synthetic datasets (Boolean and
uniformly distributed) and benchmark them against LMF and symmetric LMF
(symLMF) on five real-world protein-protein interaction networks, showcasing an
improved prediction performance.
| no_new_dataset | 0.947235 |
2503.04688 | Davide Dalle Pezze | Riccardo De Monte, Davide Dalle Pezze, Gian Antonio Susto | Teach YOLO to Remember: A Self-Distillation Approach for Continual
Object Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Real-time object detectors like YOLO achieve exceptional performance when
trained on large datasets for multiple epochs. However, in real-world scenarios
where data arrives incrementally, neural networks suffer from catastrophic
forgetting, leading to a loss of previously learned knowledge. To address this,
prior research has explored strategies for Class Incremental Learning (CIL) in
Continual Learning for Object Detection (CLOD), with most approaches focusing
on two-stage object detectors. However, existing work suggests that Learning
without Forgetting (LwF) may be ineffective for one-stage anchor-free detectors
like YOLO due to noisy regression outputs, which risk transferring corrupted
knowledge. In this work, we introduce YOLO LwF, a self-distillation approach
tailored for YOLO-based continual object detection. We demonstrate that when
coupled with a replay memory, YOLO LwF significantly mitigates forgetting.
Compared to previous approaches, it achieves state-of-the-art performance,
improving mAP by +2.1% and +2.9% on the VOC and COCO benchmarks, respectively.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:31:41 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"De Monte",
"Riccardo",
""
],
[
"Pezze",
"Davide Dalle",
""
],
[
"Susto",
"Gian Antonio",
""
]
]
| TITLE: Teach YOLO to Remember: A Self-Distillation Approach for Continual
Object Detection
ABSTRACT: Real-time object detectors like YOLO achieve exceptional performance when
trained on large datasets for multiple epochs. However, in real-world scenarios
where data arrives incrementally, neural networks suffer from catastrophic
forgetting, leading to a loss of previously learned knowledge. To address this,
prior research has explored strategies for Class Incremental Learning (CIL) in
Continual Learning for Object Detection (CLOD), with most approaches focusing
on two-stage object detectors. However, existing work suggests that Learning
without Forgetting (LwF) may be ineffective for one-stage anchor-free detectors
like YOLO due to noisy regression outputs, which risk transferring corrupted
knowledge. In this work, we introduce YOLO LwF, a self-distillation approach
tailored for YOLO-based continual object detection. We demonstrate that when
coupled with a replay memory, YOLO LwF significantly mitigates forgetting.
Compared to previous approaches, it achieves state-of-the-art performance,
improving mAP by +2.1% and +2.9% on the VOC and COCO benchmarks, respectively.
| no_new_dataset | 0.941975 |
2503.04693 | Wenyu Wang | Wenyu Wang, Mengqi Zhang, Xiaotian Ye, Zhaochun Ren, Zhumin Chen,
Pengjie Ren | UIPE: Enhancing LLM Unlearning by Removing Knowledge Related to
Forgetting Targets | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) inevitably acquire harmful information during
training on massive datasets. LLM unlearning aims to eliminate the influence of
such harmful information while maintaining the model's overall performance.
Existing unlearning methods, represented by gradient ascent-based approaches,
primarily focus on forgetting target data while overlooking the crucial impact
of logically related knowledge on the effectiveness of unlearning. In this
paper, through both theoretical and experimental analyses, we first demonstrate
that a key reason for the suboptimal unlearning performance is that models can
reconstruct the target content through reasoning with logically related
knowledge. To address this issue, we propose Unlearning Improvement via
Parameter Extrapolation (UIPE), a method that removes knowledge highly
correlated with the forgetting targets. Experimental results show that UIPE
significantly enhances the performance of various mainstream LLM unlearning
methods on the TOFU benchmark.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:40:00 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Wang",
"Wenyu",
""
],
[
"Zhang",
"Mengqi",
""
],
[
"Ye",
"Xiaotian",
""
],
[
"Ren",
"Zhaochun",
""
],
[
"Chen",
"Zhumin",
""
],
[
"Ren",
"Pengjie",
""
]
]
| TITLE: UIPE: Enhancing LLM Unlearning by Removing Knowledge Related to
Forgetting Targets
ABSTRACT: Large Language Models (LLMs) inevitably acquire harmful information during
training on massive datasets. LLM unlearning aims to eliminate the influence of
such harmful information while maintaining the model's overall performance.
Existing unlearning methods, represented by gradient ascent-based approaches,
primarily focus on forgetting target data while overlooking the crucial impact
of logically related knowledge on the effectiveness of unlearning. In this
paper, through both theoretical and experimental analyses, we first demonstrate
that a key reason for the suboptimal unlearning performance is that models can
reconstruct the target content through reasoning with logically related
knowledge. To address this issue, we propose Unlearning Improvement via
Parameter Extrapolation (UIPE), a method that removes knowledge highly
correlated with the forgetting targets. Experimental results show that UIPE
significantly enhances the performance of various mainstream LLM unlearning
methods on the TOFU benchmark.
| no_new_dataset | 0.946051 |
2503.04713 | Anuj Diwan | Anuj Diwan, Zhisheng Zheng, David Harwath, Eunsol Choi | Scaling Rich Style-Prompted Text-to-Speech Datasets | null | null | null | null | eess.AS cs.AI cs.CL cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Paralinguistic Speech Captions (ParaSpeechCaps), a large-scale
dataset that annotates speech utterances with rich style captions. While rich
abstract tags (e.g. guttural, nasal, pained) have been explored in small-scale
human-annotated datasets, existing large-scale datasets only cover basic tags
(e.g. low-pitched, slow, loud). We combine off-the-shelf text and speech
embedders, classifiers and an audio language model to automatically scale rich
tag annotations for the first time. ParaSpeechCaps covers a total of 59 style
tags, including both speaker-level intrinsic tags and utterance-level
situational tags. It consists of 342 hours of human-labelled data (PSC-Base)
and 2427 hours of automatically annotated data (PSC-Scaled). We finetune
Parler-TTS, an open-source style-prompted TTS model, on ParaSpeechCaps, and
achieve improved style consistency (+7.9% Consistency MOS) and speech quality
(+15.5% Naturalness MOS) over the best performing baseline that combines
existing rich style tag datasets. We ablate several of our dataset design
choices to lay the foundation for future work in this space. Our dataset,
models and code are released at https://github.com/ajd12342/paraspeechcaps .
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:57:40 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Diwan",
"Anuj",
""
],
[
"Zheng",
"Zhisheng",
""
],
[
"Harwath",
"David",
""
],
[
"Choi",
"Eunsol",
""
]
]
| TITLE: Scaling Rich Style-Prompted Text-to-Speech Datasets
ABSTRACT: We introduce Paralinguistic Speech Captions (ParaSpeechCaps), a large-scale
dataset that annotates speech utterances with rich style captions. While rich
abstract tags (e.g. guttural, nasal, pained) have been explored in small-scale
human-annotated datasets, existing large-scale datasets only cover basic tags
(e.g. low-pitched, slow, loud). We combine off-the-shelf text and speech
embedders, classifiers and an audio language model to automatically scale rich
tag annotations for the first time. ParaSpeechCaps covers a total of 59 style
tags, including both speaker-level intrinsic tags and utterance-level
situational tags. It consists of 342 hours of human-labelled data (PSC-Base)
and 2427 hours of automatically annotated data (PSC-Scaled). We finetune
Parler-TTS, an open-source style-prompted TTS model, on ParaSpeechCaps, and
achieve improved style consistency (+7.9% Consistency MOS) and speech quality
(+15.5% Naturalness MOS) over the best performing baseline that combines
existing rich style tag datasets. We ablate several of our dataset design
choices to lay the foundation for future work in this space. Our dataset,
models and code are released at https://github.com/ajd12342/paraspeechcaps .
| new_dataset | 0.953665 |
2503.04720 | Yue Gao | Yue Gao, Hong-Xing Yu, Bo Zhu and Jiajun Wu | FluidNexus: 3D Fluid Reconstruction and Prediction from a Single Video | CVPR 2025. Project website: https://yuegao.me/FluidNexus | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study reconstructing and predicting 3D fluid appearance and velocity from
a single video. Current methods require multi-view videos for fluid
reconstruction. We present FluidNexus, a novel framework that bridges video
generation and physics simulation to tackle this task. Our key insight is to
synthesize multiple novel-view videos as references for reconstruction.
FluidNexus consists of two key components: (1) a novel-view video synthesizer
that combines frame-wise view synthesis with video diffusion refinement for
generating realistic videos, and (2) a physics-integrated particle
representation coupling differentiable simulation and rendering to
simultaneously facilitate 3D fluid reconstruction and prediction. To evaluate
our approach, we collect two new real-world fluid datasets featuring textured
backgrounds and object interactions. Our method enables dynamic novel view
synthesis, future prediction, and interaction simulation from a single fluid
video. Project website: https://yuegao.me/FluidNexus.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:59:06 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Gao",
"Yue",
""
],
[
"Yu",
"Hong-Xing",
""
],
[
"Zhu",
"Bo",
""
],
[
"Wu",
"Jiajun",
""
]
]
| TITLE: FluidNexus: 3D Fluid Reconstruction and Prediction from a Single Video
ABSTRACT: We study reconstructing and predicting 3D fluid appearance and velocity from
a single video. Current methods require multi-view videos for fluid
reconstruction. We present FluidNexus, a novel framework that bridges video
generation and physics simulation to tackle this task. Our key insight is to
synthesize multiple novel-view videos as references for reconstruction.
FluidNexus consists of two key components: (1) a novel-view video synthesizer
that combines frame-wise view synthesis with video diffusion refinement for
generating realistic videos, and (2) a physics-integrated particle
representation coupling differentiable simulation and rendering to
simultaneously facilitate 3D fluid reconstruction and prediction. To evaluate
our approach, we collect two new real-world fluid datasets featuring textured
backgrounds and object interactions. Our method enables dynamic novel view
synthesis, future prediction, and interaction simulation from a single fluid
video. Project website: https://yuegao.me/FluidNexus.
| new_dataset | 0.944074 |
2503.04724 | Sambal Shikhar | Sambal Shikhar, Mohammed Irfan Kurpath, Sahal Shaji Mullappilly, Jean
Lahoud, Fahad Khan, Rao Muhammad Anwer, Salman Khan, Hisham Cholakkal | LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advancements in speech-to-speech dialogue systems leverage LLMs for
multimodal interactions, yet they remain hindered by fine-tuning requirements,
high computational overhead, and text-speech misalignment. Existing
speech-enabled LLMs often degrade conversational quality by modifying the LLM,
thereby compromising its linguistic capabilities. In contrast, we propose
LLMVoX, a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming TTS
system that generates high-quality speech with low latency, while fully
preserving the capabilities of the base LLM. Our approach achieves a
significantly lower Word Error Rate compared to speech-enabled LLMs, while
operating at comparable latency and UTMOS score. By decoupling speech synthesis
from LLM processing via a multi-queue token streaming system, LLMVoX supports
seamless, infinite-length dialogues. Its plug-and-play design also facilitates
extension to various tasks with different backbones. Furthermore, LLMVoX
generalizes to new languages with only dataset adaptation, attaining a low
Character Error Rate on an Arabic speech task. Additionally, we have integrated
LLMVoX with a Vision-Language Model to create an omni-model with speech, text,
and vision capabilities, without requiring additional multimodal training. Our
code base and project page are available at https://mbzuai-oryx.github.io/LLMVoX .
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 18:59:38 GMT"
}
]
| 2025-03-07T00:00:00 | [
[
"Shikhar",
"Sambal",
""
],
[
"Kurpath",
"Mohammed Irfan",
""
],
[
"Mullappilly",
"Sahal Shaji",
""
],
[
"Lahoud",
"Jean",
""
],
[
"Khan",
"Fahad",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Khan",
"Salman",
""
],
[
"Cholakkal",
"Hisham",
""
]
]
| TITLE: LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM
ABSTRACT: Recent advancements in speech-to-speech dialogue systems leverage LLMs for
multimodal interactions, yet they remain hindered by fine-tuning requirements,
high computational overhead, and text-speech misalignment. Existing
speech-enabled LLMs often degrade conversational quality by modifying the LLM,
thereby compromising its linguistic capabilities. In contrast, we propose
LLMVoX, a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming TTS
system that generates high-quality speech with low latency, while fully
preserving the capabilities of the base LLM. Our approach achieves a
significantly lower Word Error Rate compared to speech-enabled LLMs, while
operating at comparable latency and UTMOS score. By decoupling speech synthesis
from LLM processing via a multi-queue token streaming system, LLMVoX supports
seamless, infinite-length dialogues. Its plug-and-play design also facilitates
extension to various tasks with different backbones. Furthermore, LLMVoX
generalizes to new languages with only dataset adaptation, attaining a low
Character Error Rate on an Arabic speech task. Additionally, we have integrated
LLMVoX with a Vision-Language Model to create an omni-model with speech, text,
and vision capabilities, without requiring additional multimodal training. Our
code base and project page are available at https://mbzuai-oryx.github.io/LLMVoX .
| no_new_dataset | 0.944638 |
2104.03353 | A\'ecio Solano Rodrigues Santos | A\'ecio Santos, Aline Bessa, Fernando Chirigati, Christopher Musco,
Juliana Freire | Correlation Sketches for Approximate Join-Correlation Queries | Proceedings of the 2021 International Conference on Management of
Data (SIGMOD '21) | In Proceedings of the 2021 International Conference on Management
of Data, pp. 1531-1544. 2021 | 10.1145/3448016.3458456 | null | cs.DB cs.DS cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing availability of structured datasets, from Web tables and
open-data portals to enterprise data, opens up opportunities to enrich
analytics and improve machine learning models through relational data
augmentation. In this paper, we introduce a new class of data augmentation
queries: join-correlation queries. Given a column $Q$ and a join column $K_Q$
from a query table $\mathcal{T}_Q$, retrieve tables $\mathcal{T}_X$ in a
dataset collection such that $\mathcal{T}_X$ is joinable with $\mathcal{T}_Q$
on $K_Q$ and there is a column $C \in \mathcal{T}_X$ such that $Q$ is
correlated with $C$. A na\"ive approach to evaluate these queries, which first
finds joinable tables and then explicitly joins and computes correlations
between $Q$ and all columns of the discovered tables, is prohibitively
expensive. To efficiently support correlated column discovery, we 1) propose a
sketching method that enables the construction of an index for a large number
of tables and that provides accurate estimates for join-correlation queries,
and 2) explore different scoring strategies that effectively rank the query
results based on how well the columns are correlated with the query. We carry
out a detailed experimental evaluation, using both synthetic and real data,
which shows that our sketches attain high accuracy and the scoring strategies
lead to high-quality rankings.
| [
{
"version": "v1",
"created": "Wed, 7 Apr 2021 19:08:14 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Santos",
"Aécio",
""
],
[
"Bessa",
"Aline",
""
],
[
"Chirigati",
"Fernando",
""
],
[
"Musco",
"Christopher",
""
],
[
"Freire",
"Juliana",
""
]
]
| TITLE: Correlation Sketches for Approximate Join-Correlation Queries
ABSTRACT: The increasing availability of structured datasets, from Web tables and
open-data portals to enterprise data, opens up opportunities to enrich
analytics and improve machine learning models through relational data
augmentation. In this paper, we introduce a new class of data augmentation
queries: join-correlation queries. Given a column $Q$ and a join column $K_Q$
from a query table $\mathcal{T}_Q$, retrieve tables $\mathcal{T}_X$ in a
dataset collection such that $\mathcal{T}_X$ is joinable with $\mathcal{T}_Q$
on $K_Q$ and there is a column $C \in \mathcal{T}_X$ such that $Q$ is
correlated with $C$. A na\"ive approach to evaluate these queries, which first
finds joinable tables and then explicitly joins and computes correlations
between $Q$ and all columns of the discovered tables, is prohibitively
expensive. To efficiently support correlated column discovery, we 1) propose a
sketching method that enables the construction of an index for a large number
of tables and that provides accurate estimates for join-correlation queries,
and 2) explore different scoring strategies that effectively rank the query
results based on how well the columns are correlated with the query. We carry
out a detailed experimental evaluation, using both synthetic and real data,
which shows that our sketches attain high accuracy and the scoring strategies
lead to high-quality rankings.
| no_new_dataset | 0.937038 |
2210.09126 | Thorsten Eisenhofer | Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh,
Olga Ohrimenko, Nicolas Papernot | Verifiable and Provably Secure Machine Unlearning | Accepted at IEEE SaTML2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine unlearning aims to remove points from the training dataset of a
machine learning model after training: e.g., when a user requests their data to
be deleted. While many unlearning methods have been proposed, none of them
enable users to audit the procedure. Furthermore, recent work shows a user is
unable to verify whether their data was unlearnt from an inspection of the
model parameter alone. Rather than reasoning about parameters, we propose to
view verifiable unlearning as a security problem. To this end, we present the
first cryptographic definition of verifiable unlearning to formally capture the
guarantees of an unlearning system. In this framework, the server first
computes a proof that the model was trained on a dataset D. Given a user's data
point d requested to be deleted, the server updates the model using an
unlearning algorithm. It then provides a proof of the correct execution of
unlearning and that d is not part of D', where D' is the new training dataset
(i.e., d has been removed). Our framework is generally applicable to different
unlearning techniques that we abstract as admissible functions. We instantiate
a protocol in the framework, based on cryptographic assumptions, using SNARKs
and hash chains. Finally, we implement the protocol for three different
unlearning techniques and validate its feasibility for linear regression,
logistic regression, and neural networks.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2022 14:19:52 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 19:22:58 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 09:30:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Eisenhofer",
"Thorsten",
""
],
[
"Riepel",
"Doreen",
""
],
[
"Chandrasekaran",
"Varun",
""
],
[
"Ghosh",
"Esha",
""
],
[
"Ohrimenko",
"Olga",
""
],
[
"Papernot",
"Nicolas",
""
]
]
| TITLE: Verifiable and Provably Secure Machine Unlearning
ABSTRACT: Machine unlearning aims to remove points from the training dataset of a
machine learning model after training: e.g., when a user requests their data to
be deleted. While many unlearning methods have been proposed, none of them
enable users to audit the procedure. Furthermore, recent work shows a user is
unable to verify whether their data was unlearnt from an inspection of the
model parameter alone. Rather than reasoning about parameters, we propose to
view verifiable unlearning as a security problem. To this end, we present the
first cryptographic definition of verifiable unlearning to formally capture the
guarantees of an unlearning system. In this framework, the server first
computes a proof that the model was trained on a dataset D. Given a user's data
point d requested to be deleted, the server updates the model using an
unlearning algorithm. It then provides a proof of the correct execution of
unlearning and that d is not part of D', where D' is the new training dataset
(i.e., d has been removed). Our framework is generally applicable to different
unlearning techniques that we abstract as admissible functions. We instantiate
a protocol in the framework, based on cryptographic assumptions, using SNARKs
and hash chains. Finally, we implement the protocol for three different
unlearning techniques and validate its feasibility for linear regression,
logistic regression, and neural networks.
| no_new_dataset | 0.937211 |
2210.09604 | Xiaoning Liu | Xiaoning Liu | Perceptual Multi-Exposure Fusion | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an ever-increasing demand for high dynamic range (HDR) scene shooting,
multi-exposure image fusion (MEF) technology has abounded. In recent years,
multi-scale exposure fusion approaches based on detail-enhancement have led the
way for improvement in highlight and shadow details. Most of such methods,
however, are too computationally expensive to be deployed on mobile devices.
This paper presents a perceptual multi-exposure fusion method that not only
ensures fine shadow/highlight details but also has lower complexity than
detail-enhanced methods. We analyze the potential defects of three classical
exposure measures in lieu of using detail-enhancement component and improve two
of them, namely adaptive Well-exposedness (AWE) and the gradient of color images
(3-D gradient). AWE designed in YCbCr color space considers the difference
between varying exposure images. 3-D gradient is employed to extract fine
details. We build a large-scale multi-exposure benchmark dataset suitable for
static scenes, which contains 167 image sequences all told. Experiments on the
constructed dataset demonstrate that the proposed method outperforms eight
existing state-of-the-art approaches in terms of visual quality and MEF-SSIM
value. Moreover,
our approach can achieve a better improvement for current image enhancement
techniques, ensuring fine detail in bright light.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2022 05:34:58 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Oct 2022 06:58:48 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 14:43:59 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liu",
"Xiaoning",
""
]
]
| TITLE: Perceptual Multi-Exposure Fusion
ABSTRACT: As an ever-increasing demand for high dynamic range (HDR) scene shooting,
multi-exposure image fusion (MEF) technology has abounded. In recent years,
multi-scale exposure fusion approaches based on detail-enhancement have led the
way for improvement in highlight and shadow details. Most of such methods,
however, are too computationally expensive to be deployed on mobile devices.
This paper presents a perceptual multi-exposure fusion method that not only
ensures fine shadow/highlight details but also has lower complexity than
detail-enhanced methods. We analyze the potential defects of three classical
exposure measures in lieu of using detail-enhancement component and improve two
of them, namely adaptive Well-exposedness (AWE) and the gradient of color images
(3-D gradient). AWE designed in YCbCr color space considers the difference
between varying exposure images. 3-D gradient is employed to extract fine
details. We build a large-scale multi-exposure benchmark dataset suitable for
static scenes, which contains 167 image sequences all told. Experiments on the
constructed dataset demonstrate that the proposed method outperforms eight
existing state-of-the-art approaches in terms of visual quality and MEF-SSIM
value. Moreover,
our approach can achieve a better improvement for current image enhancement
techniques, ensuring fine detail in bright light.
| new_dataset | 0.958847 |
2210.12816 | Salar Fattahi | Geyu Liang, Gavin Zhang, Salar Fattahi, Richard Y. Zhang | Simple Alternating Minimization Provably Solves Complete Dictionary
Learning | null | null | null | null | cs.LG eess.SP math.OC | http://creativecommons.org/licenses/by/4.0/ | This paper focuses on the noiseless complete dictionary learning problem,
where the goal is to represent a set of given signals as linear combinations of
a small number of atoms from a learned dictionary. There are two main
challenges faced by theoretical and practical studies of dictionary learning:
the lack of theoretical guarantees for practically-used heuristic algorithms
and their poor scalability when dealing with huge-scale datasets. Towards
addressing these issues, we propose a simple and efficient algorithm that
provably recovers the ground truth when applied to the nonconvex and discrete
formulation of the problem in the noiseless setting. We also extend our
proposed method to mini-batch and online settings where the data is huge-scale
or arrives continuously over time. At the core of our proposed method lies an
efficient preconditioning technique that transforms the unknown dictionary to a
near-orthonormal one, for which we prove a simple alternating minimization
technique converges linearly to the ground truth under minimal conditions. Our
numerical experiments on synthetic and real datasets showcase the superiority
of our method compared with the existing techniques.
| [
{
"version": "v1",
"created": "Sun, 23 Oct 2022 18:30:45 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 02:01:02 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liang",
"Geyu",
""
],
[
"Zhang",
"Gavin",
""
],
[
"Fattahi",
"Salar",
""
],
[
"Zhang",
"Richard Y.",
""
]
]
| TITLE: Simple Alternating Minimization Provably Solves Complete Dictionary
Learning
ABSTRACT: This paper focuses on the noiseless complete dictionary learning problem,
where the goal is to represent a set of given signals as linear combinations of
a small number of atoms from a learned dictionary. There are two main
challenges faced by theoretical and practical studies of dictionary learning:
the lack of theoretical guarantees for practically-used heuristic algorithms
and their poor scalability when dealing with huge-scale datasets. Towards
addressing these issues, we propose a simple and efficient algorithm that
provably recovers the ground truth when applied to the nonconvex and discrete
formulation of the problem in the noiseless setting. We also extend our
proposed method to mini-batch and online settings where the data is huge-scale
or arrives continuously over time. At the core of our proposed method lies an
efficient preconditioning technique that transforms the unknown dictionary to a
near-orthonormal one, for which we prove a simple alternating minimization
technique converges linearly to the ground truth under minimal conditions. Our
numerical experiments on synthetic and real datasets showcase the superiority
of our method compared with the existing techniques.
| no_new_dataset | 0.941601 |
2301.05811 | Majid Daliri | Aline Bessa, Majid Daliri, Juliana Freire, Cameron Musco, Christopher
Musco, A\'ecio Santos, Haoxiang Zhang | Weighted Minwise Hashing Beats Linear Sketching for Inner Product
Estimation | 23 pages, 6 figures | In Proceedings of the ACM SIGMOD-SIGACT-SIGAI Symposium on
Principles of Database Systems (PODS) 2023 | 10.1145/3584372.3588679 | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new approach for computing compact sketches that can be used to
approximate the inner product between pairs of high-dimensional vectors. Based
on the Weighted MinHash algorithm, our approach admits strong accuracy
guarantees that improve on the guarantees of popular linear sketching
approaches for inner product estimation, such as CountSketch and
Johnson-Lindenstrauss projection. Specifically, while our method admits
guarantees that exactly match linear sketching for dense vectors, it yields
significantly lower error for sparse vectors with limited overlap between
non-zero entries. Such vectors arise in many applications involving sparse
data. They are also important in increasingly popular dataset search
applications, where inner product sketches are used to estimate data
covariance, conditional means, and other quantities involving columns in
unjoined tables. We complement our theoretical results by showing that our
approach empirically outperforms existing linear sketches and unweighted
hashing-based sketches for sparse vectors.
| [
{
"version": "v1",
"created": "Sat, 14 Jan 2023 03:21:36 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2023 17:57:35 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Bessa",
"Aline",
""
],
[
"Daliri",
"Majid",
""
],
[
"Freire",
"Juliana",
""
],
[
"Musco",
"Cameron",
""
],
[
"Musco",
"Christopher",
""
],
[
"Santos",
"Aécio",
""
],
[
"Zhang",
"Haoxiang",
""
]
]
| TITLE: Weighted Minwise Hashing Beats Linear Sketching for Inner Product
Estimation
ABSTRACT: We present a new approach for computing compact sketches that can be used to
approximate the inner product between pairs of high-dimensional vectors. Based
on the Weighted MinHash algorithm, our approach admits strong accuracy
guarantees that improve on the guarantees of popular linear sketching
approaches for inner product estimation, such as CountSketch and
Johnson-Lindenstrauss projection. Specifically, while our method admits
guarantees that exactly match linear sketching for dense vectors, it yields
significantly lower error for sparse vectors with limited overlap between
non-zero entries. Such vectors arise in many applications involving sparse
data. They are also important in increasingly popular dataset search
applications, where inner product sketches are used to estimate data
covariance, conditional means, and other quantities involving columns in
unjoined tables. We complement our theoretical results by showing that our
approach empirically outperforms existing linear sketches and unweighted
hashing-based sketches for sparse vectors.
| no_new_dataset | 0.944177 |
2303.02610 | Ron Ferens | Ron Ferens, Yosi Keller | HyperPose: Hypernetwork-Infused Camera Pose Localization and an Extended
Cambridge Landmarks Dataset | Accepted to The IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR) 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose HyperPose, which utilizes hyper-networks in absolute
camera pose regressors. The inherent appearance variations in natural scenes,
attributable to environmental conditions, perspective, and lighting, induce a
significant domain disparity between the training and test datasets. This
disparity degrades the precision of contemporary localization networks. To
mitigate this, we advocate for incorporating hypernetworks into single-scene
and multi-scene camera pose regression models. During inference, the
hypernetwork dynamically computes adaptive weights for the localization
regression heads based on the particular input image, effectively narrowing the
domain gap. Using indoor and outdoor datasets, we evaluate the HyperPose
methodology across multiple established absolute pose regression architectures.
We also introduce and share the Extended Cambridge Landmarks (ECL), a novel
localization dataset, based on the Cambridge Landmarks dataset, showing it in
multiple seasons with significantly varying appearance conditions. Our
empirical experiments demonstrate that HyperPose yields notable performance
enhancements for single- and multi-scene architectures. We have made our source
code, pre-trained models, and the ECL dataset openly available.
| [
{
"version": "v1",
"created": "Sun, 5 Mar 2023 08:45:50 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 19:46:58 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ferens",
"Ron",
""
],
[
"Keller",
"Yosi",
""
]
]
| TITLE: HyperPose: Hypernetwork-Infused Camera Pose Localization and an Extended
Cambridge Landmarks Dataset
ABSTRACT: In this work, we propose HyperPose, which utilizes hyper-networks in absolute
camera pose regressors. The inherent appearance variations in natural scenes,
attributable to environmental conditions, perspective, and lighting, induce a
significant domain disparity between the training and test datasets. This
disparity degrades the precision of contemporary localization networks. To
mitigate this, we advocate for incorporating hypernetworks into single-scene
and multi-scene camera pose regression models. During inference, the
hypernetwork dynamically computes adaptive weights for the localization
regression heads based on the particular input image, effectively narrowing the
domain gap. Using indoor and outdoor datasets, we evaluate the HyperPose
methodology across multiple established absolute pose regression architectures.
We also introduce and share the Extended Cambridge Landmarks (ECL), a novel
localization dataset, based on the Cambridge Landmarks dataset, showing it in
multiple seasons with significantly varying appearance conditions. Our
empirical experiments demonstrate that HyperPose yields notable performance
enhancements for single- and multi-scene architectures. We have made our source
code, pre-trained models, and the ECL dataset openly available.
| new_dataset | 0.957397 |
2312.07226 | Kai Pan | Kai Pan, Linyang Li, Li Lin, Pujin Cheng, Junyan Lyu, Lei Xi, and
Xiaoyin Tang | Super-Resolution on Rotationally Scanned Photoacoustic Microscopy Images
Incorporating Scanning Prior | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Photoacoustic Microscopy (PAM) images integrating the advantages of optical
contrast and acoustic resolution have been widely used in brain studies.
However, there exists a trade-off between scanning speed and image resolution.
Compared with traditional raster scanning, rotational scanning provides good
opportunities for fast PAM imaging by optimizing the scanning mechanism.
Recently, there has been a trend to incorporate deep learning into the scanning
process to further increase the scanning speed. Yet, most such attempts are
performed for raster scanning while those for rotational scanning are
relatively rare. In this study, we propose a novel and well-performing
super-resolution framework for rotational scanning-based PAM imaging. To
eliminate adjacent rows' displacements due to subject motion or high-frequency
scanning distortion, we introduce a registration module across odd and even rows
in the preprocessing and incorporate displacement degradation in the training.
Besides, gradient-based patch selection is proposed to increase the probability
of blood vessel patches being selected for training. A Transformer-based
network with a global receptive field is applied for better performance.
Experimental results on both synthetic and real datasets demonstrate the
effectiveness and generalizability of our proposed framework for rotationally
scanned PAM images' super-resolution, both quantitatively and qualitatively.
Code is available at https://github.com/11710615/PAMSR.git.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 12:41:35 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 10:24:18 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Pan",
"Kai",
""
],
[
"Li",
"Linyang",
""
],
[
"Lin",
"Li",
""
],
[
"Cheng",
"Pujin",
""
],
[
"Lyu",
"Junyan",
""
],
[
"Xi",
"Lei",
""
],
[
"Tang",
"Xiaoyin",
""
]
]
| TITLE: Super-Resolution on Rotationally Scanned Photoacoustic Microscopy Images
Incorporating Scanning Prior
ABSTRACT: Photoacoustic Microscopy (PAM) images integrating the advantages of optical
contrast and acoustic resolution have been widely used in brain studies.
However, there exists a trade-off between scanning speed and image resolution.
Compared with traditional raster scanning, rotational scanning provides good
opportunities for fast PAM imaging by optimizing the scanning mechanism.
Recently, there has been a trend to incorporate deep learning into the scanning
process to further increase the scanning speed. Yet, most such attempts are
performed for raster scanning while those for rotational scanning are
relatively rare. In this study, we propose a novel and well-performing
super-resolution framework for rotational scanning-based PAM imaging. To
eliminate adjacent rows' displacements due to subject motion or high-frequency
scanning distortion, we introduce a registration module across odd and even rows
in the preprocessing and incorporate displacement degradation in the training.
Besides, gradient-based patch selection is proposed to increase the probability
of blood vessel patches being selected for training. A Transformer-based
network with a global receptive field is applied for better performance.
Experimental results on both synthetic and real datasets demonstrate the
effectiveness and generalizability of our proposed framework for rotationally
scanned PAM images' super-resolution, both quantitatively and qualitatively.
Code is available at https://github.com/11710615/PAMSR.git.
| no_new_dataset | 0.951142 |
2312.10892 | Yanting Yang | Yanting Yang, Yiren Zhang, Zongyu Li, Jeffery Siyuan Tian, Matthieu
Dagommer, Jia Guo | Deep Learning-based MRI Reconstruction with Artificial Fourier Transform
Network (AFTNet) | null | null | null | null | eess.IV cs.CV q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Deep complex-valued neural networks (CVNNs) provide a powerful way to
leverage complex number operations and representations and have succeeded in
several phase-based applications. However, previous networks have not fully
explored the impact of complex-valued networks in the frequency domain. Here,
we introduce a unified complex-valued deep learning framework, Artificial
Fourier Transform Network (AFTNet), which combines domain-manifold learning and
CVNNs. AFTNet can be readily used to solve image inverse problems in domain
transformation, especially for accelerated magnetic resonance imaging (MRI)
reconstruction and other applications. While conventional methods typically
utilize magnitude images or treat the real and imaginary components of k-space
data as separate channels, our approach directly processes raw k-space data in
the frequency domain, utilizing complex-valued operations. This allows for a
mapping between the frequency (k-space) and image domain to be determined
through cross-domain learning. We show that AFTNet achieves superior
accelerated MRI reconstruction compared to existing approaches. Furthermore,
our approach can be applied to various tasks, such as denoised magnetic
resonance spectroscopy (MRS) reconstruction and datasets with various
contrasts. The AFTNet presented here is a valuable preprocessing component for
different preclinical studies and provides an innovative alternative for
solving inverse problems in imaging and spectroscopy. The code is available at:
https://github.com/yanting-yang/AFT-Net.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 02:50:45 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Oct 2024 19:41:06 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 05:27:43 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yang",
"Yanting",
""
],
[
"Zhang",
"Yiren",
""
],
[
"Li",
"Zongyu",
""
],
[
"Tian",
"Jeffery Siyuan",
""
],
[
"Dagommer",
"Matthieu",
""
],
[
"Guo",
"Jia",
""
]
]
| TITLE: Deep Learning-based MRI Reconstruction with Artificial Fourier Transform
Network (AFTNet)
ABSTRACT: Deep complex-valued neural networks (CVNNs) provide a powerful way to
leverage complex number operations and representations and have succeeded in
several phase-based applications. However, previous networks have not fully
explored the impact of complex-valued networks in the frequency domain. Here,
we introduce a unified complex-valued deep learning framework, Artificial
Fourier Transform Network (AFTNet), which combines domain-manifold learning and
CVNNs. AFTNet can be readily used to solve image inverse problems in domain
transformation, especially for accelerated magnetic resonance imaging (MRI)
reconstruction and other applications. While conventional methods typically
utilize magnitude images or treat the real and imaginary components of k-space
data as separate channels, our approach directly processes raw k-space data in
the frequency domain, utilizing complex-valued operations. This allows for a
mapping between the frequency (k-space) and image domain to be determined
through cross-domain learning. We show that AFTNet achieves superior
accelerated MRI reconstruction compared to existing approaches. Furthermore,
our approach can be applied to various tasks, such as denoised magnetic
resonance spectroscopy (MRS) reconstruction and datasets with various
contrasts. The AFTNet presented here is a valuable preprocessing component for
different preclinical studies and provides an innovative alternative for
solving inverse problems in imaging and spectroscopy. The code is available at:
https://github.com/yanting-yang/AFT-Net.
| no_new_dataset | 0.949763 |
2402.10711 | Ruixuan Liu | Ruixuan Liu, Kangle Deng, Ziwei Wang, Changliu Liu | StableLego: Stability Analysis of Block Stacking Assembly | null | IEEE Robotics and Automation Letters, vol. 9, no. 11, pp.
9383-9390, Nov. 2024 | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural stability is a necessary condition for successful construction of
an assembly. However, designing a stable assembly requires a non-trivial effort
since a slight variation in the design could significantly affect the
structural stability. To address the challenge, this paper studies the
stability of assembly structures, in particular, block stacking assembly. The
paper proposes a new optimization formulation, which optimizes over force
balancing equations, for inferring the structural stability of 3D block
stacking structures. The proposed stability analysis is verified on
hand-crafted Lego examples. The experiment results demonstrate that the
proposed method can correctly predict whether the structure is stable. In
addition, it outperforms the existing methods since it can accurately locate
the weakest parts in the design, and more importantly, solve any given assembly
structures. To further validate the proposed method, we provide
\textit{StableLego}: a comprehensive dataset including 50k+ 3D objects with
their Lego layouts. We test the proposed stability analysis and include the
stability inference for each corresponding object in StableLego. Our code and
the dataset are available at
https://github.com/intelligent-control-lab/StableLego.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2024 14:14:23 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 21:46:10 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liu",
"Ruixuan",
""
],
[
"Deng",
"Kangle",
""
],
[
"Wang",
"Ziwei",
""
],
[
"Liu",
"Changliu",
""
]
]
| TITLE: StableLego: Stability Analysis of Block Stacking Assembly
ABSTRACT: Structural stability is a necessary condition for successful construction of
an assembly. However, designing a stable assembly requires a non-trivial effort
since a slight variation in the design could significantly affect the
structural stability. To address the challenge, this paper studies the
stability of assembly structures, in particular, block stacking assembly. The
paper proposes a new optimization formulation, which optimizes over force
balancing equations, for inferring the structural stability of 3D block
stacking structures. The proposed stability analysis is verified on
hand-crafted Lego examples. The experiment results demonstrate that the
proposed method can correctly predict whether the structure is stable. In
addition, it outperforms the existing methods since it can accurately locate
the weakest parts in the design, and more importantly, solve any given assembly
structure. To further validate the proposed method, we provide
\textit{StableLego}: a comprehensive dataset including 50k+ 3D objects with
their Lego layouts. We test the proposed stability analysis and include the
stability inference for each corresponding object in StableLego. Our code and
the dataset are available at
https://github.com/intelligent-control-lab/StableLego.
| new_dataset | 0.965576 |
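The StableLego record above infers stability by optimizing over force-balancing equations. The toy sketch below shows only the general flavor of such a check on a hand-built two-block 2D example (it is not the paper's formulation): unknown non-negative contact forces must satisfy per-block force and torque balance, and the structure is judged stable if the linear system is feasible.

import numpy as np
from scipy.optimize import linprog

# Block B rests on the ground at x = 0.1, 1.9 (forces n1, n2); block A rests
# on B at x = 0.6, 1.4 (forces n3, n4); weights wA = 1, wB = 2; centers at x = 1.
wA, wB = 1.0, 2.0
xA, xB = 1.0, 1.0
A_eq = np.array([
    [0.0, 0.0, 1.0, 1.0],                              # A: n3 + n4 = wA
    [0.0, 0.0, 0.6 - xA, 1.4 - xA],                    # A: torque balance about xA
    [1.0, 1.0, -1.0, -1.0],                            # B: n1 + n2 - n3 - n4 = wB
    [0.1 - xB, 1.9 - xB, -(0.6 - xB), -(1.4 - xB)],    # B: torque balance about xB
])
b_eq = np.array([wA, 0.0, wB, 0.0])
# Feasibility check: any non-negative contact forces satisfying equilibrium?
res = linprog(c=np.zeros(4), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print("stable:", res.success, "contact forces:", res.x)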
2403.01505 | Qingsong Xie | Hongjian Liu, Qingsong Xie, TianXiang Ye, Zhijie Deng, Chen Chen,
Shixiang Tang, Xueyang Fu, Haonan Lu, Zheng-jun Zha | SCott: Accelerating Diffusion Models with Stochastic Consistency
Distillation | 22 pages, 16 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The iterative sampling procedure employed by diffusion models (DMs) often
leads to significant inference latency. To address this, we propose Stochastic
Consistency Distillation (SCott) to enable accelerated text-to-image
generation, where high-quality and diverse generations can be achieved within
just 2-4 sampling steps. In contrast to vanilla consistency distillation (CD)
which distills the ordinary differential equation solvers-based sampling
process of a pre-trained teacher model into a student, SCott explores the
possibility and validates the efficacy of integrating stochastic differential
equation (SDE) solvers into CD to fully unleash the potential of the teacher.
SCott is augmented with elaborate strategies to control the noise strength and
sampling process of the SDE solver. An adversarial loss is further incorporated
to strengthen the consistency constraints in rare sampling steps. Empirically,
on the MSCOCO-2017 5K dataset with a Stable Diffusion-V1.5 teacher, SCott
achieves an FID of 21.9 with 2 sampling steps, surpassing that of the 1-step
InstaFlow (23.4) and the 4-step UFOGen (22.1). Moreover, SCott can yield more
diverse samples than other consistency models for high-resolution image
generation, with up to 16% improvement in a qualified metric.
| [
{
"version": "v1",
"created": "Sun, 3 Mar 2024 13:08:32 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Apr 2024 16:42:50 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Feb 2025 07:04:10 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 11:39:35 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liu",
"Hongjian",
""
],
[
"Xie",
"Qingsong",
""
],
[
"Ye",
"TianXiang",
""
],
[
"Deng",
"Zhijie",
""
],
[
"Chen",
"Chen",
""
],
[
"Tang",
"Shixiang",
""
],
[
"Fu",
"Xueyang",
""
],
[
"Lu",
"Haonan",
""
],
[
"Zha",
"Zheng-jun",
""
]
]
| TITLE: SCott: Accelerating Diffusion Models with Stochastic Consistency
Distillation
ABSTRACT: The iterative sampling procedure employed by diffusion models (DMs) often
leads to significant inference latency. To address this, we propose Stochastic
Consistency Distillation (SCott) to enable accelerated text-to-image
generation, where high-quality and diverse generations can be achieved within
just 2-4 sampling steps. In contrast to vanilla consistency distillation (CD)
which distills the ordinary differential equation solvers-based sampling
process of a pre-trained teacher model into a student, SCott explores the
possibility and validates the efficacy of integrating stochastic differential
equation (SDE) solvers into CD to fully unleash the potential of the teacher.
SCott is augmented with elaborate strategies to control the noise strength and
sampling process of the SDE solver. An adversarial loss is further incorporated
to strengthen the consistency constraints in rare sampling steps. Empirically,
on the MSCOCO-2017 5K dataset with a Stable Diffusion-V1.5 teacher, SCott
achieves an FID of 21.9 with 2 sampling steps, surpassing that of the 1-step
InstaFlow (23.4) and the 4-step UFOGen (22.1). Moreover, SCott can yield more
diverse samples than other consistency models for high-resolution image
generation, with up to 16% improvement in a qualified metric.
| no_new_dataset | 0.947381 |
2403.10860 | Junyang Wu | Junyang Wu, Yun Gu, Guang-Zhong Yang | Sim2Real within 5 Minutes: Efficient Domain Transfer with Stylized
Gaussian Splatting for Endoscopic Images | Accepted by ICRA 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot assisted endoluminal intervention is an emerging technique for both
benign and malignant luminal lesions. With vision-based navigation, when
combined with pre-operative imaging data as priors, it is possible to recover
position and pose of the endoscope without the need of additional sensors. In
practice, however, aligning pre-operative and intra-operative domains is
complicated by significant texture differences. Although methods such as style
transfer can be used to address this issue, they require large datasets from
both source and target domains with prolonged training times. This paper
proposes an efficient domain transfer method based on stylized Gaussian
splatting, only requiring a few real images (10 images) with very fast
training time. Specifically, the transfer process includes two phases. In the
first phase, the 3D models reconstructed from CT scans are represented as
differential Gaussian point clouds. In the second phase, only color appearance
related parameters are optimized to transfer the style and preserve the visual
content. A novel structure consistency loss is applied to latent features and
depth levels to enhance the stability of the transferred images. Detailed
validation was performed to demonstrate the performance advantages of the
proposed method compared to that of the current state-of-the-art, highlighting
the potential for intra-operative surgical navigation.
| [
{
"version": "v1",
"created": "Sat, 16 Mar 2024 08:57:00 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 12:41:05 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wu",
"Junyang",
""
],
[
"Gu",
"Yun",
""
],
[
"Yang",
"Guang-Zhong",
""
]
]
| TITLE: Sim2Real within 5 Minutes: Efficient Domain Transfer with Stylized
Gaussian Splatting for Endoscopic Images
ABSTRACT: Robot assisted endoluminal intervention is an emerging technique for both
benign and malignant luminal lesions. With vision-based navigation, when
combined with pre-operative imaging data as priors, it is possible to recover
position and pose of the endoscope without the need of additional sensors. In
practice, however, aligning pre-operative and intra-operative domains is
complicated by significant texture differences. Although methods such as style
transfer can be used to address this issue, they require large datasets from
both source and target domains with prolonged training times. This paper
proposes an efficient domain transfer method based on stylized Gaussian
splatting, only requiring a few real images (10 images) with very fast
training time. Specifically, the transfer process includes two phases. In the
first phase, the 3D models reconstructed from CT scans are represented as
differential Gaussian point clouds. In the second phase, only color appearance
related parameters are optimized to transfer the style and preserve the visual
content. A novel structure consistency loss is applied to latent features and
depth levels to enhance the stability of the transferred images. Detailed
validation was performed to demonstrate the performance advantages of the
proposed method compared to that of the current state-of-the-art, highlighting
the potential for intra-operative surgical navigation.
| no_new_dataset | 0.950824 |
2403.15029 | Shuai Lu | Shuai Lu, Jiayi Ding, Mingji Chen, Wei Gu, Junpeng Zhu, Yijun Xu,
Zhaoyang Dong, Zezheng Sun | On the Solution Uniqueness of Data-Driven Modeling of Flexible Loads
(with Supplementary Material) | null | IEEE Transactions on Smart Grid, 16 (2025) 1993 - 1996 | 10.1109/TSG.2024.3518094 | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This letter first explores the solution uniqueness of the data-driven
modeling of price-responsive flexible loads (PFL). The PFL on the demand side
is critical in modern power systems. An accurate PFL model is fundamental for
system operations. However, whether the PFL model can be uniquely and correctly
identified from operational data remains unclear. To address this, we analyze
the structural and practical identifiability of the PFL model, deriving the
dataset condition that guarantees the solution uniqueness. Besides, we point
out the practical implications of the results. Numerical tests validate this
work.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 08:21:35 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2024 09:29:43 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Oct 2024 03:16:02 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Lu",
"Shuai",
""
],
[
"Ding",
"Jiayi",
""
],
[
"Chen",
"Mingji",
""
],
[
"Gu",
"Wei",
""
],
[
"Zhu",
"Junpeng",
""
],
[
"Xu",
"Yijun",
""
],
[
"Dong",
"Zhaoyang",
""
],
[
"Sun",
"Zezheng",
""
]
]
| TITLE: On the Solution Uniqueness of Data-Driven Modeling of Flexible Loads
(with Supplementary Material)
ABSTRACT: This letter first explores the solution uniqueness of the data-driven
modeling of price-responsive flexible loads (PFL). The PFL on the demand side
is critical in modern power systems. An accurate PFL model is fundamental for
system operations. However, whether the PFL model can be uniquely and correctly
identified from operational data remains unclear. To address this, we analyze
the structural and practical identifiability of the PFL model, deriving the
dataset condition that guarantees the solution uniqueness. Besides, we point
out the practical implications of the results. Numerical tests validate this
work.
| no_new_dataset | 0.949902 |
2403.15553 | A\'ecio Solano Rodrigues Santos | A\'ecio Santos, Flip Korn, Juliana Freire | Efficiently Estimating Mutual Information Between Attributes Across
Tables | Accepted to IEEE ICDE 2024 | 2024 IEEE 40th International Conference on Data Engineering
(ICDE), 2024, pp. 193-206 | 10.1109/ICDE60146.2024.00022 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relational data augmentation is a powerful technique for enhancing data
analytics and improving machine learning models by incorporating columns from
external datasets. However, it is challenging to efficiently discover relevant
external tables to join with a given input table. Existing approaches rely on
data discovery systems to identify joinable tables from external sources,
typically based on overlap or containment. However, the sheer number of tables
obtained from these systems results in irrelevant joins that need to be
performed; this can be computationally expensive or even infeasible in
practice. We address this limitation by proposing the use of efficient mutual
information (MI) estimation for finding relevant joinable tables. We introduce
a new sketching method that enables efficient evaluation of relationship
discovery queries by estimating MI without materializing the joins and
returning a smaller set of tables that are more likely to be relevant. We also
demonstrate the effectiveness of our approach at approximating MI in extensive
experiments using synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 18:08:10 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Santos",
"Aécio",
""
],
[
"Korn",
"Flip",
""
],
[
"Freire",
"Juliana",
""
]
]
| TITLE: Efficiently Estimating Mutual Information Between Attributes Across
Tables
ABSTRACT: Relational data augmentation is a powerful technique for enhancing data
analytics and improving machine learning models by incorporating columns from
external datasets. However, it is challenging to efficiently discover relevant
external tables to join with a given input table. Existing approaches rely on
data discovery systems to identify joinable tables from external sources,
typically based on overlap or containment. However, the sheer number of tables
obtained from these systems results in irrelevant joins that need to be
performed; this can be computationally expensive or even infeasible in
practice. We address this limitation by proposing the use of efficient mutual
information (MI) estimation for finding relevant joinable tables. We introduce
a new sketching method that enables efficient evaluation of relationship
discovery queries by estimating MI without materializing the joins and
returning a smaller set of tables that are more likely to be relevant. We also
demonstrate the effectiveness of our approach at approximating MI in extensive
experiments using synthetic and real-world datasets.
| no_new_dataset | 0.948155 |
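The record above concerns estimating mutual information between columns across tables without materializing joins. The sketch below shows only the naive baseline that such sketches are meant to approximate: materialize a (sampled) join and compute a plug-in MI estimate; the table contents are made up.

import pandas as pd
from sklearn.metrics import mutual_info_score

# Naive baseline sketch (not the paper's sketching method): join on a sample,
# then compute a plug-in mutual information estimate between a query column
# and a candidate external column.
query = pd.DataFrame({"key": [1, 2, 3, 4, 5], "label": ["a", "a", "b", "b", "b"]})
external = pd.DataFrame({"key": [1, 2, 3, 4, 5], "attr": ["x", "x", "y", "y", "z"]})

joined = query.merge(external, on="key")   # would be a sampled join in practice
mi = mutual_info_score(joined["label"], joined["attr"])
print(f"estimated MI(label; attr) = {mi:.3f} nats")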
2404.04589 | Luc\'ia Coto Elena | Fernando Fern\'andez-Calatayud, Luc\'ia Coto-Elena, David Alejo,
Jos\'e J. Carpio-Jim\'enez, Fernando Caballero, Luis Merino | ARS548_ros. An ARS 548 RDI radar driver for ROS | 20 pages, 6 figures and 23 references | null | null | null | cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | The ARS 548 RDI Radar is a premium model of the fifth generation of 77 GHz
long range radar sensors with new RF antenna arrays, which offer digital beam
forming. This radar measures independently the distance, speed and angle of
objects without any reflectors in one measurement cycle based on Pulse
Compression with New Frequency Modulation. Unfortunately, to the best of our
knowledge, there are no open source drivers available for Linux systems to
enable users to analyze the data acquired by the sensor. In this paper, we
present a driver that can interpret the data from the ARS 548 RDI sensor and
make it available over the Robot Operating System versions 1 and 2 (ROS and
ROS2). Thus, these data can be stored, represented, and analyzed using the
powerful tools offered by ROS. Besides, our driver offers advanced object
features provided by the sensor, such as relative estimated velocity and
acceleration of each object, its orientation and angular velocity. We focus on
the configuration of the sensor and the use of our driver including its
filtering and representation tools. Besides, we offer a video tutorial to help
in its configuration process. Finally, a dataset acquired with this sensor and
an Ouster OS1-32 LiDAR sensor, to have baseline measurements, is available, so
that the user can check the correctness of our driver.
| [
{
"version": "v1",
"created": "Sat, 6 Apr 2024 10:57:57 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jun 2024 12:48:11 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Feb 2025 10:59:49 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 10:53:24 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Fernández-Calatayud",
"Fernando",
""
],
[
"Coto-Elena",
"Lucía",
""
],
[
"Alejo",
"David",
""
],
[
"Carpio-Jiménez",
"José J.",
""
],
[
"Caballero",
"Fernando",
""
],
[
"Merino",
"Luis",
""
]
]
| TITLE: ARS548_ros. An ARS 548 RDI radar driver for ROS
ABSTRACT: The ARS 548 RDI Radar is a premium model of the fifth generation of 77 GHz
long range radar sensors with new RF antenna arrays, which offer digital beam
forming. This radar measures independently the distance, speed and angle of
objects without any reflectors in one measurement cycle based on Pulse
Compression with New Frequency Modulation. Unfortunately, to the best of our
knowledge, there are no open source drivers available for Linux systems to
enable users to analyze the data acquired by the sensor. In this paper, we
present a driver that can interpret the data from the ARS 548 RDI sensor and
make it available over the Robot Operating System versions 1 and 2 (ROS and
ROS2). Thus, these data can be stored, represented, and analyzed using the
powerful tools offered by ROS. Besides, our driver offers advanced object
features provided by the sensor, such as relative estimated velocity and
acceleration of each object, its orientation and angular velocity. We focus on
the configuration of the sensor and the use of our driver including its
filtering and representation tools. Besides, we offer a video tutorial to help
in its configuration process. Finally, a dataset acquired with this sensor and
an Ouster OS1-32 LiDAR sensor, to have baseline measurements, is available, so
that the user can check the correctness of our driver.
| no_new_dataset | 0.932083 |
2404.12020 | Jie Ma | Jie Ma, Min Hu, Pinghui Wang, Wangchun Sun, Lingyun Song, Hongbin Pei,
Jun Liu, Youtian Du | Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question
Answering | Accepted by NeurIPS 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-Visual Question Answering (AVQA) is a complex multi-modal reasoning
task, demanding intelligent systems to accurately respond to natural language
queries based on audio-video input pairs. Nevertheless, prevalent AVQA
approaches are prone to overlearning dataset biases, resulting in poor
robustness. Furthermore, current datasets may not provide a precise diagnostic
for these methods. To tackle these challenges, firstly, we propose a novel
dataset, MUSIC-AVQA-R, crafted in two steps: rephrasing questions within the
test split of a public dataset (MUSIC-AVQA) and subsequently introducing
distribution shifts to split questions. The former leads to a large, diverse
test space, while the latter results in a comprehensive robustness evaluation
on rare, frequent, and overall questions. Secondly, we propose a robust
architecture that utilizes a multifaceted cycle collaborative debiasing
strategy to overcome bias learning. Experimental results show that this
architecture achieves state-of-the-art performance on MUSIC-AVQA-R, notably
obtaining a significant improvement of 9.32%. Extensive ablation experiments
are conducted on the two datasets mentioned to analyze the component
effectiveness within the debiasing strategy. Additionally, we highlight the
limited robustness of existing multi-modal QA methods through the evaluation on
our dataset. We also conduct experiments combining various baselines with our
proposed strategy on two datasets to verify its plug-and-play capability. Our
dataset and code are available at https://github.com/reml-group/MUSIC-AVQA-R.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 09:16:02 GMT"
},
{
"version": "v2",
"created": "Mon, 20 May 2024 00:45:35 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Oct 2024 07:23:37 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 08:09:07 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ma",
"Jie",
""
],
[
"Hu",
"Min",
""
],
[
"Wang",
"Pinghui",
""
],
[
"Sun",
"Wangchun",
""
],
[
"Song",
"Lingyun",
""
],
[
"Pei",
"Hongbin",
""
],
[
"Liu",
"Jun",
""
],
[
"Du",
"Youtian",
""
]
]
| TITLE: Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question
Answering
ABSTRACT: Audio-Visual Question Answering (AVQA) is a complex multi-modal reasoning
task, demanding intelligent systems to accurately respond to natural language
queries based on audio-video input pairs. Nevertheless, prevalent AVQA
approaches are prone to overlearning dataset biases, resulting in poor
robustness. Furthermore, current datasets may not provide a precise diagnostic
for these methods. To tackle these challenges, firstly, we propose a novel
dataset, MUSIC-AVQA-R, crafted in two steps: rephrasing questions within the
test split of a public dataset (MUSIC-AVQA) and subsequently introducing
distribution shifts to split questions. The former leads to a large, diverse
test space, while the latter results in a comprehensive robustness evaluation
on rare, frequent, and overall questions. Secondly, we propose a robust
architecture that utilizes a multifaceted cycle collaborative debiasing
strategy to overcome bias learning. Experimental results show that this
architecture achieves state-of-the-art performance on MUSIC-AVQA-R, notably
obtaining a significant improvement of 9.32%. Extensive ablation experiments
are conducted on the two datasets mentioned to analyze the component
effectiveness within the debiasing strategy. Additionally, we highlight the
limited robustness of existing multi-modal QA methods through the evaluation on
our dataset. We also conduct experiments combining various baselines with our
proposed strategy on two datasets to verify its plug-and-play capability. Our
dataset and code are available at https://github.com/reml-group/MUSIC-AVQA-R.
| new_dataset | 0.593977 |
2404.14395 | Mitodru Niyogi | Mitodru Niyogi, Arnab Bhattacharya | PARAMANU-GANITA: Can Small Math Language Models Rival with Large
Language Models on Mathematical Reasoning? | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we study whether domain specific pretraining of small
generative language models (SLM) from scratch with domain specialized tokenizer
and Chain-of-Thought (CoT) instruction fine-tuning results in competitive
performance on mathematical reasoning compared to LLMs? Secondly, whether this
approach is environmentally sustainable, highly cost efficient? To address
these research questions, we present Paramanu-Ganita, a 208 million-parameter
novel decoder-only Auto Regressive SLM on mathematics. We performed pretraining
from scratch on 31.5 billion tokens for 170 A100 hours using a context size of
4096 on a mixed mathematical corpus consisting of web pages, source code,
textbooks, CoT templatised StackOverflow QA pairs, and mathematical lecture
notes in LaTeX curated by us. We also trained a math and code specialised BPE
tokenizer. We proposed and performed CoT instruction fine-tuning of
Paramanu-Ganita on the MetaMathQA dataset. Our model Paramanu-Ganita, despite
being 34 times smaller than the 7B LLMs, outperforms generalist LLMs by
approximately 30% points, and even math-specialised LLMs by 3-23% points in
GSM8K test accuracy metric. On MATH benchmark, Paramanu-Ganita outperformed the
various models by 6-8% points. On benchmarks like LogiQA, MMLU (high school,
college level), and competitive exams level, AGIEVAL (AQuA-RAT, SAT-Math),
Paramanu-Ganita outperformed others by 1-4%. Our model is available at
https://huggingface.co/gyanai/paramanu-ganita-208M-hf .
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2024 17:55:56 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 18:17:28 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Niyogi",
"Mitodru",
""
],
[
"Bhattacharya",
"Arnab",
""
]
]
| TITLE: PARAMANU-GANITA: Can Small Math Language Models Rival with Large
Language Models on Mathematical Reasoning?
ABSTRACT: In this paper, we study whether domain specific pretraining of small
generative language models (SLM) from scratch with domain specialized tokenizer
and Chain-of-Thought (CoT) instruction fine-tuning results in competitive
performance on mathematical reasoning compared to LLMs? Secondly, whether this
approach is environmentally sustainable, highly cost efficient? To address
these research questions, we present Paramanu-Ganita, a 208 million-parameter
novel decoder-only Auto Regressive SLM on mathematics. We performed pretraining
from scratch on 31.5 billion tokens for 170 A100 hours using a context size of
4096 on a mixed mathematical corpus consisting of web pages, source code,
textbooks, CoT templatised StackOverflow QA pairs, and mathematical lecture
notes in LaTeX curated by us. We also trained a math and code specialised BPE
tokenizer. We proposed and performed CoT instruction fine-tuning of
Paramanu-Ganita on the MetaMathQA dataset. Our model Paramanu-Ganita, despite
being 34 times smaller than the 7B LLMs, outperforms generalist LLMs by
approximately 30% points, and even math-specialised LLMs by 3-23% points in
GSM8K test accuracy metric. On MATH benchmark, Paramanu-Ganita outperformed the
various models by 6-8% points. On benchmarks like LogiQA, MMLU (high school,
college level), and competitive exams level, AGIEVAL (AQuA-RAT, SAT-Math),
Paramanu-Ganita outperformed others by 1-4%. Our model is available at
https://huggingface.co/gyanai/paramanu-ganita-208M-hf .
| no_new_dataset | 0.952794 |
2404.14846 | Lorenzo Cima | Benedetta Tessa, Lorenzo Cima, Amaury Trujillo, Marco Avvenuti,
Stefano Cresci | Beyond Trial-and-Error: Predicting User Abandonment After a Moderation
Intervention | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current content moderation follows a reactive, trial-and-error approach,
where interventions are applied and their effects are only measured post-hoc.
In contrast, we introduce a proactive, predictive approach that enables
moderators to anticipate the impact of their actions before implementation. We
propose and tackle the new task of predicting user abandonment following a
moderation intervention. We study the reactions of 16,540 users to a massive
ban of online communities on Reddit, training a set of binary classifiers to
identify those users who would abandon the platform after the intervention -- a
problem of great practical relevance. We leverage a dataset of 13.8 million
posts to compute a large and diverse set of 142 features, which convey
information about the activity, toxicity, relations, and writing style of the
users. We obtain promising results, with the best-performing model achieving
micro F1-score = 0.914. Our model shows robust generalizability when applied to
users from previously unseen communities. Furthermore, we identify activity
features as the most informative predictors, followed by relational and
toxicity features, while writing style features exhibit limited utility.
Theoretically, our results demonstrate the feasibility of adopting a predictive
machine learning approach to estimate the effects of moderation interventions.
Practically, this work marks a fundamental shift from reactive to predictive
moderation, equipping platform administrators with intelligent tools to
strategically plan interventions, minimize unintended consequences, and
optimize user engagement.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 08:52:41 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Apr 2024 09:16:43 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 14:22:04 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Tessa",
"Benedetta",
""
],
[
"Cima",
"Lorenzo",
""
],
[
"Trujillo",
"Amaury",
""
],
[
"Avvenuti",
"Marco",
""
],
[
"Cresci",
"Stefano",
""
]
]
| TITLE: Beyond Trial-and-Error: Predicting User Abandonment After a Moderation
Intervention
ABSTRACT: Current content moderation follows a reactive, trial-and-error approach,
where interventions are applied and their effects are only measured post-hoc.
In contrast, we introduce a proactive, predictive approach that enables
moderators to anticipate the impact of their actions before implementation. We
propose and tackle the new task of predicting user abandonment following a
moderation intervention. We study the reactions of 16,540 users to a massive
ban of online communities on Reddit, training a set of binary classifiers to
identify those users who would abandon the platform after the intervention -- a
problem of great practical relevance. We leverage a dataset of 13.8 million
posts to compute a large and diverse set of 142 features, which convey
information about the activity, toxicity, relations, and writing style of the
users. We obtain promising results, with the best-performing model achieving
micro F1-score = 0.914. Our model shows robust generalizability when applied to
users from previously unseen communities. Furthermore, we identify activity
features as the most informative predictors, followed by relational and
toxicity features, while writing style features exhibit limited utility.
Theoretically, our results demonstrate the feasibility of adopting a predictive
machine learning approach to estimate the effects of moderation interventions.
Practically, this work marks a fundamental shift from reactive to predictive
moderation, equipping platform administrators with intelligent tools to
strategically plan interventions, minimize unintended consequences, and
optimize user engagement.
| no_new_dataset | 0.94428 |
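The record above trains binary classifiers on 142 user features and reports micro-averaged F1. The sketch below illustrates that evaluation setup on synthetic data only; the feature values, labels, and choice of classifier are placeholders, not the authors' pipeline.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: one 142-dimensional feature vector per user and a binary
# abandonment label; train a classifier and report micro-averaged F1.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 142))           # 142 features per user, as in the paper
y = rng.integers(0, 2, size=1000)          # 1 = abandons the platform (synthetic)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("micro F1:", f1_score(y_te, clf.predict(X_te), average="micro"))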
2405.14241 | Chaokang Jiang | Chaokang Jiang, Dalong Du, Jiuming Liu, Siting Zhu, Zhenqiang Liu,
Zhuang Ma, Zhujin Liang and Jie Zhou | NeuroGauss4D-PCI: 4D Neural Fields and Gaussian Deformation Fields for
Point Cloud Interpolation | Under review | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Point Cloud Interpolation confronts challenges from point sparsity, complex
spatiotemporal dynamics, and the difficulty of deriving complete 3D point
clouds from sparse temporal information. This paper presents NeuroGauss4D-PCI,
which excels at modeling complex non-rigid deformations across varied dynamic
scenes. The method begins with an iterative Gaussian cloud soft clustering
module, offering structured temporal point cloud representations. The proposed
temporal radial basis function Gaussian residual utilizes Gaussian parameter
interpolation over time, enabling smooth parameter transitions and capturing
temporal residuals of Gaussian distributions. Additionally, a 4D Gaussian
deformation field tracks the evolution of these parameters, creating continuous
spatiotemporal deformation fields. A 4D neural field transforms low-dimensional
spatiotemporal coordinates ($x,y,z,t$) into a high-dimensional latent space.
Finally, we adaptively and efficiently fuse the latent features from neural
fields and the geometric features from Gaussian deformation fields.
NeuroGauss4D-PCI outperforms existing methods in point cloud frame
interpolation, delivering leading performance on both object-level (DHB) and
large-scale autonomous driving datasets (NL-Drive), with scalability to
auto-labeling and point cloud densification tasks. The source code is released
at https://github.com/jiangchaokang/NeuroGauss4D-PCI.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 07:21:01 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Jiang",
"Chaokang",
""
],
[
"Du",
"Dalong",
""
],
[
"Liu",
"Jiuming",
""
],
[
"Zhu",
"Siting",
""
],
[
"Liu",
"Zhenqiang",
""
],
[
"Ma",
"Zhuang",
""
],
[
"Liang",
"Zhujin",
""
],
[
"Zhou",
"Jie",
""
]
]
| TITLE: NeuroGauss4D-PCI: 4D Neural Fields and Gaussian Deformation Fields for
Point Cloud Interpolation
ABSTRACT: Point Cloud Interpolation confronts challenges from point sparsity, complex
spatiotemporal dynamics, and the difficulty of deriving complete 3D point
clouds from sparse temporal information. This paper presents NeuroGauss4D-PCI,
which excels at modeling complex non-rigid deformations across varied dynamic
scenes. The method begins with an iterative Gaussian cloud soft clustering
module, offering structured temporal point cloud representations. The proposed
temporal radial basis function Gaussian residual utilizes Gaussian parameter
interpolation over time, enabling smooth parameter transitions and capturing
temporal residuals of Gaussian distributions. Additionally, a 4D Gaussian
deformation field tracks the evolution of these parameters, creating continuous
spatiotemporal deformation fields. A 4D neural field transforms low-dimensional
spatiotemporal coordinates ($x,y,z,t$) into a high-dimensional latent space.
Finally, we adaptively and efficiently fuse the latent features from neural
fields and the geometric features from Gaussian deformation fields.
NeuroGauss4D-PCI outperforms existing methods in point cloud frame
interpolation, delivering leading performance on both object-level (DHB) and
large-scale autonomous driving datasets (NL-Drive), with scalability to
auto-labeling and point cloud densification tasks. The source code is released
at https://github.com/jiangchaokang/NeuroGauss4D-PCI.
| no_new_dataset | 0.952926 |
2405.16226 | Qian Wang | Qian Wang, Chen Li, Yuchen Luo, Hefei Ling, Shijuan Huang, Ruoxi Jia,
Ning Yu | Detecting Adversarial Data using Perturbation Forgery | Accepted as a conference paper at CVPR 2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a defense strategy against adversarial attacks, adversarial detection aims
to identify and filter out adversarial data from the data flow based on
discrepancies in distribution and noise patterns between natural and
adversarial data. Although previous detection methods achieve high performance
in detecting gradient-based adversarial attacks, new attacks based on
generative models with imbalanced and anisotropic noise patterns evade
detection. Even worse, the significant inference time overhead and limited
performance against unseen attacks make existing techniques impractical for
real-world use. In this paper, we explore the proximity relationship among
adversarial noise distributions and demonstrate the existence of an open
covering for these distributions. By training on the open covering of
adversarial noise distributions, a detector with strong generalization
performance against various types of unseen attacks can be developed. Based on
this insight, we heuristically propose Perturbation Forgery, which includes
noise distribution perturbation, sparse mask generation, and pseudo-adversarial
data production, to train an adversarial detector capable of detecting any
unseen gradient-based, generative-based, and physical adversarial attacks.
Comprehensive experiments conducted on multiple general and facial datasets,
with a wide spectrum of attacks, validate the strong generalization of our
method.
| [
{
"version": "v1",
"created": "Sat, 25 May 2024 13:34:16 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Aug 2024 15:00:36 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Sep 2024 00:09:58 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 02:30:54 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Qian",
""
],
[
"Li",
"Chen",
""
],
[
"Luo",
"Yuchen",
""
],
[
"Ling",
"Hefei",
""
],
[
"Huang",
"Shijuan",
""
],
[
"Jia",
"Ruoxi",
""
],
[
"Yu",
"Ning",
""
]
]
| TITLE: Detecting Adversarial Data using Perturbation Forgery
ABSTRACT: As a defense strategy against adversarial attacks, adversarial detection aims
to identify and filter out adversarial data from the data flow based on
discrepancies in distribution and noise patterns between natural and
adversarial data. Although previous detection methods achieve high performance
in detecting gradient-based adversarial attacks, new attacks based on
generative models with imbalanced and anisotropic noise patterns evade
detection. Even worse, the significant inference time overhead and limited
performance against unseen attacks make existing techniques impractical for
real-world use. In this paper, we explore the proximity relationship among
adversarial noise distributions and demonstrate the existence of an open
covering for these distributions. By training on the open covering of
adversarial noise distributions, a detector with strong generalization
performance against various types of unseen attacks can be developed. Based on
this insight, we heuristically propose Perturbation Forgery, which includes
noise distribution perturbation, sparse mask generation, and pseudo-adversarial
data production, to train an adversarial detector capable of detecting any
unseen gradient-based, generative-based, and physical adversarial attacks.
Comprehensive experiments conducted on multiple general and facial datasets,
with a wide spectrum of attacks, validate the strong generalization of our
method.
| no_new_dataset | 0.945096 |
2405.17859 | Yangxiao Lu | Yangxiao Lu, Jishnu Jaykumar P, Yunhui Guo, Nicholas Ruozzi, and Yu
Xiang | Adapting Pre-Trained Vision Models for Novel Instance Detection and
Segmentation | Project Page: https://irvlutd.github.io/NIDSNet/ | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Novel Instance Detection and Segmentation (NIDS) aims at detecting and
segmenting novel object instances given a few examples of each instance. We
propose a unified, simple, yet effective framework (NIDS-Net) comprising object
proposal generation, embedding creation for both instance templates and
proposal regions, and embedding matching for instance label assignment.
Leveraging recent advancements in large vision methods, we utilize Grounding
DINO and Segment Anything Model (SAM) to obtain object proposals with accurate
bounding boxes and masks. Central to our approach is the generation of
high-quality instance embeddings. We utilized foreground feature averages of
patch embeddings from the DINOv2 ViT backbone, followed by refinement through a
weight adapter mechanism that we introduce.
We show experimentally that our weight adapter can adjust the embeddings
locally within their feature space and effectively limit overfitting in the
few-shot setting. Furthermore, the weight adapter optimizes weights to enhance
the distinctiveness of instance embeddings during similarity computation. This
methodology enables a straightforward matching strategy that results in
significant performance gains. Our framework surpasses current state-of-the-art
methods, demonstrating notable improvements in four detection datasets. In the
segmentation tasks on seven core datasets of the BOP challenge, our method
outperforms the leading published RGB methods and remains competitive with the
best RGB-D method. We have also verified our method using real-world images
from a Fetch robot and a RealSense camera. Project Page:
https://irvlutd.github.io/NIDSNet/
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 06:16:57 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 19:51:41 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 01:48:25 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Lu",
"Yangxiao",
""
],
[
"P",
"Jishnu Jaykumar",
""
],
[
"Guo",
"Yunhui",
""
],
[
"Ruozzi",
"Nicholas",
""
],
[
"Xiang",
"Yu",
""
]
]
| TITLE: Adapting Pre-Trained Vision Models for Novel Instance Detection and
Segmentation
ABSTRACT: Novel Instance Detection and Segmentation (NIDS) aims at detecting and
segmenting novel object instances given a few examples of each instance. We
propose a unified, simple, yet effective framework (NIDS-Net) comprising object
proposal generation, embedding creation for both instance templates and
proposal regions, and embedding matching for instance label assignment.
Leveraging recent advancements in large vision methods, we utilize Grounding
DINO and Segment Anything Model (SAM) to obtain object proposals with accurate
bounding boxes and masks. Central to our approach is the generation of
high-quality instance embeddings. We utilized foreground feature averages of
patch embeddings from the DINOv2 ViT backbone, followed by refinement through a
weight adapter mechanism that we introduce.
We show experimentally that our weight adapter can adjust the embeddings
locally within their feature space and effectively limit overfitting in the
few-shot setting. Furthermore, the weight adapter optimizes weights to enhance
the distinctiveness of instance embeddings during similarity computation. This
methodology enables a straightforward matching strategy that results in
significant performance gains. Our framework surpasses current state-of-the-art
methods, demonstrating notable improvements in four detection datasets. In the
segmentation tasks on seven core datasets of the BOP challenge, our method
outperforms the leading published RGB methods and remains competitive with the
best RGB-D method. We have also verified our method using real-world images
from a Fetch robot and a RealSense camera. Project Page:
https://irvlutd.github.io/NIDSNet/
| no_new_dataset | 0.948106 |
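The NIDS-Net record above assigns instance labels by matching proposal embeddings against instance-template embeddings. The snippet below is a generic sketch of that matching step using cosine similarity; the embedding dimension and tensors are arbitrary placeholders.

import torch
import torch.nn.functional as F

# Generic embedding-matching sketch (not the exact NIDS-Net code): each proposal
# receives the label of its most similar template under cosine similarity.
templates = F.normalize(torch.randn(10, 768), dim=1)   # 10 instance templates
proposals = F.normalize(torch.randn(5, 768), dim=1)    # 5 detected proposals
scores = proposals @ templates.T                        # cosine similarities
labels = scores.argmax(dim=1)                           # template id per proposal
print(labels.tolist(), scores.max(dim=1).values.tolist())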
2406.01863 | Jiexin Wang | Jiexin Wang, Adam Jatowt, Yi Cai | Towards Effective Time-Aware Language Representation: Exploring Enhanced
Temporal Understanding in Language Models | This paper has been accepted for publication in ACM Transactions on
the Web. Final publication details (volume, issue, page range) will be
updated once they are finalized | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In the evolving field of Natural Language Processing (NLP), understanding the
temporal context of text is increasingly critical for applications requiring
advanced temporal reasoning. Traditional pre-trained language models like BERT,
which rely on synchronic document collections such as BookCorpus and Wikipedia,
often fall short in effectively capturing and leveraging temporal information.
To address this limitation, we introduce BiTimeBERT 2.0, a novel time-aware
language model pre-trained on a temporal news article collection. BiTimeBERT
2.0 incorporates temporal information through three innovative pre-training
objectives: Extended Time-Aware Masked Language Modeling (ETAMLM), Document
Dating (DD), and Time-Sensitive Entity Replacement (TSER). Each objective is
specifically designed to target a distinct dimension of temporal information:
ETAMLM enhances the model's understanding of temporal contexts and relations,
DD integrates document timestamps as explicit chronological markers, and TSER
focuses on the temporal dynamics of "Person" entities. Moreover, our refined
corpus preprocessing strategy reduces training time by nearly 53\%, making
BiTimeBERT 2.0 significantly more efficient while maintaining high performance.
Experimental results show that BiTimeBERT 2.0 achieves substantial improvements
across a broad range of time-related tasks and excels on datasets spanning
extensive temporal ranges. These findings underscore BiTimeBERT 2.0's potential
as a powerful tool for advancing temporal reasoning in NLP.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2024 00:30:37 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 16:27:57 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Jiexin",
""
],
[
"Jatowt",
"Adam",
""
],
[
"Cai",
"Yi",
""
]
]
| TITLE: Towards Effective Time-Aware Language Representation: Exploring Enhanced
Temporal Understanding in Language Models
ABSTRACT: In the evolving field of Natural Language Processing (NLP), understanding the
temporal context of text is increasingly critical for applications requiring
advanced temporal reasoning. Traditional pre-trained language models like BERT,
which rely on synchronic document collections such as BookCorpus and Wikipedia,
often fall short in effectively capturing and leveraging temporal information.
To address this limitation, we introduce BiTimeBERT 2.0, a novel time-aware
language model pre-trained on a temporal news article collection. BiTimeBERT
2.0 incorporates temporal information through three innovative pre-training
objectives: Extended Time-Aware Masked Language Modeling (ETAMLM), Document
Dating (DD), and Time-Sensitive Entity Replacement (TSER). Each objective is
specifically designed to target a distinct dimension of temporal information:
ETAMLM enhances the model's understanding of temporal contexts and relations,
DD integrates document timestamps as explicit chronological markers, and TSER
focuses on the temporal dynamics of "Person" entities. Moreover, our refined
corpus preprocessing strategy reduces training time by nearly 53\%, making
BiTimeBERT 2.0 significantly more efficient while maintaining high performance.
Experimental results show that BiTimeBERT 2.0 achieves substantial improvements
across a broad range of time-related tasks and excels on datasets spanning
extensive temporal ranges. These findings underscore BiTimeBERT 2.0's potential
as a powerful tool for advancing temporal reasoning in NLP.
| no_new_dataset | 0.947137 |
2406.05364 | Kalyan Nakka | Kalyan Nakka, Jimmy Dani, Nitesh Saxena | Is On-Device AI Broken and Exploitable? Assessing the Trust and Ethics
in Small Language Models | 26 pages, 31 figures and 5 tables | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a very first study to investigate trust and ethical
implications of on-device artificial intelligence (AI), focusing on small
language models (SLMs) amenable to personal devices like smartphones. While
on-device SLMs promise enhanced privacy, reduced latency, and improved user
experience compared to cloud-based services, we posit that they might also
introduce significant risks and vulnerabilities compared to their on-server
counterparts. As part of our trust assessment study, we conduct a systematic
evaluation of the state-of-the-art on-device SLMs, contrasted to their
on-server counterparts, based on a well-established trustworthiness measurement
framework. Our results show on-device SLMs to be significantly less
trustworthy, specifically demonstrating more stereotypical, unfair and
privacy-breaching behavior. Informed by these findings, we then perform our
ethics assessment study using a dataset of unethical questions that depict
harmful scenarios. Our results illustrate the lacking ethical safeguards in
on-device SLMs, emphasizing their capabilities of generating harmful content.
Further, the broken safeguards and exploitable nature of on-device SLMs is
demonstrated using potentially unethical vanilla prompts, to which the
on-device SLMs answer with valid responses without any filters and without the
need for any jailbreaking or prompt engineering. These responses can be abused
for various harmful and unethical scenarios like: societal harm, illegal
activities, hate, self-harm, exploitable phishing content and many others, all
of which indicate the severe vulnerability and exploitability of these
on-device SLMs.
| [
{
"version": "v1",
"created": "Sat, 8 Jun 2024 05:45:42 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 04:18:08 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Nakka",
"Kalyan",
""
],
[
"Dani",
"Jimmy",
""
],
[
"Saxena",
"Nitesh",
""
]
]
| TITLE: Is On-Device AI Broken and Exploitable? Assessing the Trust and Ethics
in Small Language Models
ABSTRACT: In this paper, we present a very first study to investigate trust and ethical
implications of on-device artificial intelligence (AI), focusing on small
language models (SLMs) amenable to personal devices like smartphones. While
on-device SLMs promise enhanced privacy, reduced latency, and improved user
experience compared to cloud-based services, we posit that they might also
introduce significant risks and vulnerabilities compared to their on-server
counterparts. As part of our trust assessment study, we conduct a systematic
evaluation of the state-of-the-art on-device SLMs, contrasted to their
on-server counterparts, based on a well-established trustworthiness measurement
framework. Our results show on-device SLMs to be significantly less
trustworthy, specifically demonstrating more stereotypical, unfair and
privacy-breaching behavior. Informed by these findings, we then perform our
ethics assessment study using a dataset of unethical questions that depict
harmful scenarios. Our results illustrate the lacking ethical safeguards in
on-device SLMs, emphasizing their capabilities of generating harmful content.
Further, the broken safeguards and exploitable nature of on-device SLMs is
demonstrated using potentially unethical vanilla prompts, to which the
on-device SLMs answer with valid responses without any filters and without the
need for any jailbreaking or prompt engineering. These responses can be abused
for various harmful and unethical scenarios like: societal harm, illegal
activities, hate, self-harm, exploitable phishing content and many others, all
of which indicate the severe vulnerability and exploitability of these
on-device SLMs.
| new_dataset | 0.973968 |
2406.09983 | Gergely Odor | Gergely \'Odor, M\'arton Karsai | Epidemic-induced local awareness behavior inferred from surveys and
genetic sequence data | null | null | null | null | physics.soc-ph cs.SI q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Behavior-disease models suggest that pandemics can be contained
cost-effectively if individuals take preventive actions when disease prevalence
rises among their close contacts. However, assessing local awareness behavior
in real-world datasets remains a challenge. Through the analysis of mutation
patterns in clinical genetic sequence data, we propose an efficient approach to
quantify the impact of local awareness by identifying superspreading events and
assigning containment scores to them.
We validate the proposed containment score as a proxy for local awareness in
simulation experiments, and find that it was correlated positively with policy
stringency during the COVID-19 pandemic. Finally, we observe a temporary drop
in the containment score during the Omicron wave in the United Kingdom,
matching a survey experiment we carried out in Hungary during the corresponding
period of the pandemic. Our findings bring important insight into the field of
awareness modeling through the analysis of large-scale genetic sequence data,
one of the most promising data sources in epidemics research.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2024 12:46:35 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 19:14:12 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ódor",
"Gergely",
""
],
[
"Karsai",
"Márton",
""
]
]
| TITLE: Epidemic-induced local awareness behavior inferred from surveys and
genetic sequence data
ABSTRACT: Behavior-disease models suggest that pandemics can be contained
cost-effectively if individuals take preventive actions when disease prevalence
rises among their close contacts. However, assessing local awareness behavior
in real-world datasets remains a challenge. Through the analysis of mutation
patterns in clinical genetic sequence data, we propose an efficient approach to
quantify the impact of local awareness by identifying superspreading events and
assigning containment scores to them.
We validate the proposed containment score as a proxy for local awareness in
simulation experiments, and find that it was correlated positively with policy
stringency during the COVID-19 pandemic. Finally, we observe a temporary drop
in the containment score during the Omicron wave in the United Kingdom,
matching a survey experiment we carried out in Hungary during the corresponding
period of the pandemic. Our findings bring important insight into the field of
awareness modeling through the analysis of large-scale genetic sequence data,
one of the most promising data sources in epidemics research.
| no_new_dataset | 0.948106 |
2406.14794 | Chen Liu | Chen Liu, Ke Xu, Liangbo L. Shen, Guillaume Huguet, Zilong Wang,
Alexander Tong, Danilo Bzdok, Jay Stewart, Jay C. Wang, Lucian V. Del Priore,
Smita Krishnaswamy | ImageFlowNet: Forecasting Multiscale Image-Level Trajectories of Disease
Progression with Irregularly-Sampled Longitudinal Medical Images | Accepted to ICASSP 2025 | null | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in medical imaging technologies have enabled the collection of
longitudinal images, which involve repeated scanning of the same patients over
time, to monitor disease progression. However, predictive modeling of such data
remains challenging due to high dimensionality, irregular sampling, and data
sparsity. To address these issues, we propose ImageFlowNet, a novel model
designed to forecast disease trajectories from initial images while preserving
spatial details. ImageFlowNet first learns multiscale joint representation
spaces across patients and time points, then optimizes deterministic or
stochastic flow fields within these spaces using a position-parameterized
neural ODE/SDE framework. The model leverages a UNet architecture to create
robust multiscale representations and mitigates data scarcity by combining
knowledge from all patients. We provide theoretical insights that support our
formulation of ODEs, and motivate our regularizations involving high-level
visual features, latent space organization, and trajectory smoothness. We
validate ImageFlowNet on three longitudinal medical image datasets depicting
progression in geographic atrophy, multiple sclerosis, and glioblastoma,
demonstrating its ability to effectively forecast disease progression and
outperform existing methods. Our contributions include the development of
ImageFlowNet, its theoretical underpinnings, and empirical validation on
real-world datasets. The official implementation is available at
https://github.com/KrishnaswamyLab/ImageFlowNet.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2024 23:51:32 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Jul 2024 17:53:43 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Jul 2024 07:28:55 GMT"
},
{
"version": "v4",
"created": "Tue, 17 Sep 2024 01:19:19 GMT"
},
{
"version": "v5",
"created": "Tue, 7 Jan 2025 18:49:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liu",
"Chen",
""
],
[
"Xu",
"Ke",
""
],
[
"Shen",
"Liangbo L.",
""
],
[
"Huguet",
"Guillaume",
""
],
[
"Wang",
"Zilong",
""
],
[
"Tong",
"Alexander",
""
],
[
"Bzdok",
"Danilo",
""
],
[
"Stewart",
"Jay",
""
],
[
"Wang",
"Jay C.",
""
],
[
"Del Priore",
"Lucian V.",
""
],
[
"Krishnaswamy",
"Smita",
""
]
]
| TITLE: ImageFlowNet: Forecasting Multiscale Image-Level Trajectories of Disease
Progression with Irregularly-Sampled Longitudinal Medical Images
ABSTRACT: Advances in medical imaging technologies have enabled the collection of
longitudinal images, which involve repeated scanning of the same patients over
time, to monitor disease progression. However, predictive modeling of such data
remains challenging due to high dimensionality, irregular sampling, and data
sparsity. To address these issues, we propose ImageFlowNet, a novel model
designed to forecast disease trajectories from initial images while preserving
spatial details. ImageFlowNet first learns multiscale joint representation
spaces across patients and time points, then optimizes deterministic or
stochastic flow fields within these spaces using a position-parameterized
neural ODE/SDE framework. The model leverages a UNet architecture to create
robust multiscale representations and mitigates data scarcity by combining
knowledge from all patients. We provide theoretical insights that support our
formulation of ODEs, and motivate our regularizations involving high-level
visual features, latent space organization, and trajectory smoothness. We
validate ImageFlowNet on three longitudinal medical image datasets depicting
progression in geographic atrophy, multiple sclerosis, and glioblastoma,
demonstrating its ability to effectively forecast disease progression and
outperform existing methods. Our contributions include the development of
ImageFlowNet, its theoretical underpinnings, and empirical validation on
real-world datasets. The official implementation is available at
https://github.com/KrishnaswamyLab/ImageFlowNet.
| no_new_dataset | 0.94868 |
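The ImageFlowNet record above forecasts disease trajectories by optimizing flow fields in a learned latent space with a neural ODE/SDE formulation. The sketch below conveys only the neural-ODE idea via explicit Euler integration of a small learned vector field; it is not the paper's model, and all names and sizes are illustrative.

import torch

# A small network defines a vector field over latent codes; future
# representations are obtained by Euler integration from the first visit.
class LatentField(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, 64), torch.nn.Tanh(), torch.nn.Linear(64, dim)
        )

    def forward(self, z, t):
        t_col = torch.full_like(z[:, :1], float(t))   # append time as an input
        return self.net(torch.cat([z, t_col], dim=1))

def forecast(field, z0, t0, t1, steps=20):
    z, t = z0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        z = z + dt * field(z, t)   # explicit Euler step along the flow field
        t = t + dt
    return z

z0 = torch.randn(4, 32)            # latent codes of 4 patients at the first visit
z_future = forecast(LatentField(32), z0, t0=0.0, t1=1.0)
print(z_future.shape)              # torch.Size([4, 32])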
2406.17548 | Vasisht Duddu | Vasisht Duddu, Oskari J\"arvinen, Lachlan J Gunn, N Asokan | Laminator: Verifiable ML Property Cards using Hardware-assisted
Attestations | ACM Conference on Data and Application Security and Privacy
(CODASPY), 2025 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regulations increasingly call for various assurances from machine learning
(ML) model providers about their training data, training process, and model
behavior. For better transparency, industry (e.g., Huggingface and Google) has
adopted model cards and datasheets to describe various properties of training
datasets and models. In the same vein, we introduce the notion of inference
cards to describe the properties of a given inference (e.g., binding of the
output to the model and its corresponding input). We coin the term ML property
cards to collectively refer to these various types of cards.
To prevent a malicious model provider from including false information in ML
property cards, they need to be verifiable. We show how to construct verifiable
ML property cards using property attestation, technical mechanisms by which a
prover (e.g., a model provider) can attest to various ML properties to a
verifier (e.g., an auditor). Since prior attestation mechanisms based purely on
cryptography are often narrowly focused (lacking versatility) and inefficient,
we need an efficient mechanism to attest different types of properties across
the entire ML model pipeline.
Emerging widespread support for confidential computing has made it possible
to run and even train models inside hardware-assisted trusted execution
environments (TEEs), which provide highly efficient attestation mechanisms. We
propose Laminator, which uses TEEs to provide the first framework for
verifiable ML property cards via hardware-assisted ML property attestations.
Laminator is efficient in terms of overhead, scalable to large numbers of
verifiers, and versatile with respect to the properties it can prove during
training or inference.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2024 13:36:53 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Dec 2024 22:39:49 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 06:05:14 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Duddu",
"Vasisht",
""
],
[
"Järvinen",
"Oskari",
""
],
[
"Gunn",
"Lachlan J",
""
],
[
"Asokan",
"N",
""
]
]
| TITLE: Laminator: Verifiable ML Property Cards using Hardware-assisted
Attestations
ABSTRACT: Regulations increasingly call for various assurances from machine learning
(ML) model providers about their training data, training process, and model
behavior. For better transparency, industry (e.g., Huggingface and Google) has
adopted model cards and datasheets to describe various properties of training
datasets and models. In the same vein, we introduce the notion of inference
cards to describe the properties of a given inference (e.g., binding of the
output to the model and its corresponding input). We coin the term ML property
cards to collectively refer to these various types of cards.
To prevent a malicious model provider from including false information in ML
property cards, they need to be verifiable. We show how to construct verifiable
ML property cards using property attestation, technical mechanisms by which a
prover (e.g., a model provider) can attest to various ML properties to a
verifier (e.g., an auditor). Since prior attestation mechanisms based purely on
cryptography are often narrowly focused (lacking versatility) and inefficient,
we need an efficient mechanism to attest different types of properties across
the entire ML model pipeline.
Emerging widespread support for confidential computing has made it possible
to run and even train models inside hardware-assisted trusted execution
environments (TEEs), which provide highly efficient attestation mechanisms. We
propose Laminator, which uses TEEs to provide the first framework for
verifiable ML property cards via hardware-assisted ML property attestations.
Laminator is efficient in terms of overhead, scalable to large numbers of
verifiers, and versatile with respect to the properties it can prove during
training or inference.
| no_new_dataset | 0.948106 |
2407.00840 | Zekai Wang | Zekai Wang, Tieming Liu, Bing Yao | MUSE-Net: Missingness-aware mUlti-branching Self-attention Encoder for
Irregular Longitudinal Electronic Health Records | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The era of big data has made vast amounts of clinical data readily available,
particularly in the form of electronic health records (EHRs), which provide
unprecedented opportunities for developing data-driven diagnostic tools to
enhance clinical decision making. However, the application of EHRs in
data-driven modeling faces challenges such as irregularly spaced multi-variate
time series, issues of incompleteness, and data imbalance. Realizing the full
data potential of EHRs hinges on the development of advanced analytical models.
In this paper, we propose a novel Missingness-aware mUlti-branching
Self-Attention Encoder (MUSE-Net) to cope with the challenges in modeling
longitudinal EHRs for data-driven disease prediction. The proposed MUSE-Net is
composed by four novel modules including: (1) a multi-task Gaussian process
(MGP) with missing value masks for data imputation; (2) a multi-branching
architecture to address the data imbalance problem; (3) a time-aware
self-attention encoder to account for the irregularly spaced time interval in
longitudinal EHRs; (4) interpretable multi-head attention mechanism that
provides insights into the importance of different time points in disease
prediction, allowing clinicians to trace model decisions. We evaluate the
proposed MUSE-Net using both synthetic and real-world datasets. Experimental
results show that our MUSE-Net outperforms existing methods that are widely
used to investigate longitudinal signals.
| [
{
"version": "v1",
"created": "Sun, 30 Jun 2024 21:54:41 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 02:39:47 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Zekai",
""
],
[
"Liu",
"Tieming",
""
],
[
"Yao",
"Bing",
""
]
]
| TITLE: MUSE-Net: Missingness-aware mUlti-branching Self-attention Encoder for
Irregular Longitudinal Electronic Health Records
ABSTRACT: The era of big data has made vast amounts of clinical data readily available,
particularly in the form of electronic health records (EHRs), which provide
unprecedented opportunities for developing data-driven diagnostic tools to
enhance clinical decision making. However, the application of EHRs in
data-driven modeling faces challenges such as irregularly spaced multi-variate
time series, issues of incompleteness, and data imbalance. Realizing the full
data potential of EHRs hinges on the development of advanced analytical models.
In this paper, we propose a novel Missingness-aware mUlti-branching
Self-Attention Encoder (MUSE-Net) to cope with the challenges in modeling
longitudinal EHRs for data-driven disease prediction. The proposed MUSE-Net is
composed of four novel modules: (1) a multi-task Gaussian process
(MGP) with missing value masks for data imputation; (2) a multi-branching
architecture to address the data imbalance problem; (3) a time-aware
self-attention encoder to account for the irregularly spaced time intervals in
longitudinal EHRs; (4) an interpretable multi-head attention mechanism that
provides insights into the importance of different time points in disease
prediction, allowing clinicians to trace model decisions. We evaluate the
proposed MUSE-Net using both synthetic and real-world datasets. Experimental
results show that our MUSE-Net outperforms existing methods that are widely
used to investigate longitudinal signals.
| no_new_dataset | 0.950273 |
2407.09141 | Abhinav Dutta | Abhinav Dutta, Sanjeev Krishnan, Nipun Kwatra, Ramachandran Ramjee | Accuracy is Not All You Need | null | https://proceedings.neurips.cc/paper_files/paper/2024/hash/e0e956681b04ac126679e8c7dd706b2e-Abstract-Conference.html | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | When Large Language Models (LLMs) are compressed using techniques such as
quantization, the predominant way to demonstrate the validity of such
techniques is by measuring the model's accuracy on various benchmarks. If the
accuracies of the baseline model and the compressed model are close, it is
assumed that there was negligible degradation in quality. However, even when the
accuracies of the baseline and compressed models are similar, we observe the
phenomenon of flips, wherein answers change from correct to incorrect and vice
versa in proportion. We conduct a detailed study of metrics across multiple
compression techniques, models and datasets, demonstrating that the behavior of
compressed models as visible to end-users is often significantly different from
the baseline model, even when accuracy is similar. We further evaluate
compressed models qualitatively and quantitatively using MT-Bench and show that
compressed models are significantly worse than baseline models in this
free-form generative task. Thus, we argue that compression techniques should
also be evaluated using distance metrics. We propose two such metrics,
KL-Divergence and flips, and show that they are well correlated.
| [
{
"version": "v1",
"created": "Fri, 12 Jul 2024 10:19:02 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Dutta",
"Abhinav",
""
],
[
"Krishnan",
"Sanjeev",
""
],
[
"Kwatra",
"Nipun",
""
],
[
"Ramjee",
"Ramachandran",
""
]
]
| TITLE: Accuracy is Not All You Need
ABSTRACT: When Large Language Models (LLMs) are compressed using techniques such as
quantization, the predominant way to demonstrate the validity of such
techniques is by measuring the model's accuracy on various benchmarks. If the
accuracies of the baseline model and the compressed model are close, it is
assumed that there was negligible degradation in quality. However, even when the
accuracies of the baseline and compressed models are similar, we observe the
phenomenon of flips, wherein answers change from correct to incorrect and vice
versa in proportion. We conduct a detailed study of metrics across multiple
compression techniques, models and datasets, demonstrating that the behavior of
compressed models as visible to end-users is often significantly different from
the baseline model, even when accuracy is similar. We further evaluate
compressed models qualitatively and quantitatively using MT-Bench and show that
compressed models are significantly worse than baseline models in this
free-form generative task. Thus, we argue that compression techniques should
also be evaluated using distance metrics. We propose two such metrics,
KL-Divergence and flips, and show that they are well correlated.
| no_new_dataset | 0.940626 |
2407.09510 | Milena T Bagdasarian | Milena T. Bagdasarian, Paul Knoll, Yi-Hsin Li, Florian Barthel, Anna
Hilsmann, Peter Eisert, Wieland Morgenstern | 3DGS.zip: A survey on 3D Gaussian Splatting Compression Methods | 3D Gaussian Splatting compression survey; 3DGS compression; updated
discussion; new approaches added; new illustrations | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Gaussian Splatting (3DGS) has emerged as a cutting-edge technique for
real-time radiance field rendering, offering state-of-the-art performance in
terms of both quality and speed. 3DGS models a scene as a collection of
three-dimensional Gaussians, with additional attributes optimized to conform to
the scene's geometric and visual properties. Despite its advantages in
rendering speed and image fidelity, 3DGS is limited by its significant storage
and memory demands. These high demands make 3DGS impractical for mobile devices
or headsets, reducing its applicability in important areas of computer
graphics. To address these challenges and advance the practicality of 3DGS,
this survey provides a comprehensive and detailed examination of compression
and compaction techniques developed to make 3DGS more efficient. We classify
existing methods into two categories: compression, which focuses on reducing
file size, and compaction, which aims to minimize the number of Gaussians. Both
methods aim to maintain or improve quality, each by minimizing its respective
attribute: file size for compression and Gaussian count for compaction. We
introduce the basic mathematical concepts underlying the analyzed methods, as
well as key implementation details and design choices. Our report thoroughly
discusses similarities and differences among the methods, as well as their
respective advantages and disadvantages. We establish a consistent framework
for comparing the surveyed methods based on key performance metrics and
datasets. Specifically, since these methods have been developed in parallel and
over a short period of time, currently, no comprehensive comparison exists.
This survey, for the first time, presents a unified framework to evaluate 3DGS
compression techniques. We maintain a website that will be regularly updated
with emerging methods: https://w-m.github.io/3dgs-compression-survey/ .
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 11:43:38 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Jul 2024 12:47:46 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Sep 2024 11:54:52 GMT"
},
{
"version": "v4",
"created": "Tue, 5 Nov 2024 11:41:40 GMT"
},
{
"version": "v5",
"created": "Wed, 5 Mar 2025 09:44:52 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Bagdasarian",
"Milena T.",
""
],
[
"Knoll",
"Paul",
""
],
[
"Li",
"Yi-Hsin",
""
],
[
"Barthel",
"Florian",
""
],
[
"Hilsmann",
"Anna",
""
],
[
"Eisert",
"Peter",
""
],
[
"Morgenstern",
"Wieland",
""
]
]
| TITLE: 3DGS.zip: A survey on 3D Gaussian Splatting Compression Methods
ABSTRACT: 3D Gaussian Splatting (3DGS) has emerged as a cutting-edge technique for
real-time radiance field rendering, offering state-of-the-art performance in
terms of both quality and speed. 3DGS models a scene as a collection of
three-dimensional Gaussians, with additional attributes optimized to conform to
the scene's geometric and visual properties. Despite its advantages in
rendering speed and image fidelity, 3DGS is limited by its significant storage
and memory demands. These high demands make 3DGS impractical for mobile devices
or headsets, reducing its applicability in important areas of computer
graphics. To address these challenges and advance the practicality of 3DGS,
this survey provides a comprehensive and detailed examination of compression
and compaction techniques developed to make 3DGS more efficient. We classify
existing methods into two categories: compression, which focuses on reducing
file size, and compaction, which aims to minimize the number of Gaussians. Both
methods aim to maintain or improve quality, each by minimizing its respective
attribute: file size for compression and Gaussian count for compaction. We
introduce the basic mathematical concepts underlying the analyzed methods, as
well as key implementation details and design choices. Our report thoroughly
discusses similarities and differences among the methods, as well as their
respective advantages and disadvantages. We establish a consistent framework
for comparing the surveyed methods based on key performance metrics and
datasets. Specifically, since these methods have been developed in parallel and
over a short period of time, currently, no comprehensive comparison exists.
This survey, for the first time, presents a unified framework to evaluate 3DGS
compression techniques. We maintain a website that will be regularly updated
with emerging methods: https://w-m.github.io/3dgs-compression-survey/ .
| no_new_dataset | 0.938913 |
2407.17457 | Jing Liang | Jing Liang, Zhuo Deng, Zheming Zhou, Min Sun, Omid Ghasemalizadeh,
Cheng-Hao Kuo, Arnie Sen, Dinesh Manocha | CSCPR: Cross-Source-Context Indoor RGB-D Place Recognition | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend our previous work, PoCo, and present a new algorithm,
Cross-Source-Context Place Recognition (CSCPR), for RGB-D indoor place
recognition that integrates global retrieval and reranking into an end-to-end
model and keeps the consistency of using Context-of-Clusters (CoCs) for feature
processing. Unlike prior approaches that primarily focus on the RGB domain for
place recognition reranking, CSCPR is designed to handle the RGB-D data. We
apply the CoCs to handle cross-sourced and cross-scaled RGB-D point clouds and
introduce two novel modules for reranking: the Self-Context Cluster (SCC) and
the Cross Source Context Cluster (CSCC), which enhance feature representation
and match query-database pairs based on local features, respectively. We also
release two new datasets, ScanNetIPR and ARKitIPR. Our experiments demonstrate
that CSCPR significantly outperforms state-of-the-art models on these datasets
by at least 29.27% in Recall@1 on the ScanNet-PR dataset and 43.24% in the new
datasets. Code and datasets will be released.
| [
{
"version": "v1",
"created": "Wed, 24 Jul 2024 17:50:00 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Dec 2024 07:48:57 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 00:32:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liang",
"Jing",
""
],
[
"Deng",
"Zhuo",
""
],
[
"Zhou",
"Zheming",
""
],
[
"Sun",
"Min",
""
],
[
"Ghasemalizadeh",
"Omid",
""
],
[
"Kuo",
"Cheng-Hao",
""
],
[
"Sen",
"Arnie",
""
],
[
"Manocha",
"Dinesh",
""
]
]
| TITLE: CSCPR: Cross-Source-Context Indoor RGB-D Place Recognition
ABSTRACT: We extend our previous work, PoCo, and present a new algorithm,
Cross-Source-Context Place Recognition (CSCPR), for RGB-D indoor place
recognition that integrates global retrieval and reranking into an end-to-end
model and keeps the consistency of using Context-of-Clusters (CoCs) for feature
processing. Unlike prior approaches that primarily focus on the RGB domain for
place recognition reranking, CSCPR is designed to handle the RGB-D data. We
apply the CoCs to handle cross-sourced and cross-scaled RGB-D point clouds and
introduce two novel modules for reranking: the Self-Context Cluster (SCC) and
the Cross Source Context Cluster (CSCC), which enhance feature representation
and match query-database pairs based on local features, respectively. We also
release two new datasets, ScanNetIPR and ARKitIPR. Our experiments demonstrate
that CSCPR significantly outperforms state-of-the-art models on these datasets
by at least 29.27% in Recall@1 on the ScanNet-PR dataset and 43.24% in the new
datasets. Code and datasets will be released.
| new_dataset | 0.764012 |
2408.06927 | Xin Zhang | Xin Zhang, Jiawei Du, Ping Liu, Joey Tianyi Zhou | Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class
Feature Compensator | Accepted to ICLR 2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dataset distillation has emerged as a technique aiming to condense
informative features from large, natural datasets into a compact and synthetic
form. While recent advancements have refined this technique, its performance is
bottlenecked by the prevailing class-specific synthesis paradigm. Under this
paradigm, synthetic data is optimized exclusively for a pre-assigned one-hot
label, creating an implicit class barrier in feature condensation. This leads
to inefficient utilization of the distillation budget and oversight of
inter-class feature distributions, which ultimately limits the effectiveness
and efficiency, as demonstrated in our analysis. To overcome these constraints,
this paper presents the Inter-class Feature Compensator (INFER), an innovative
distillation approach that transcends the class-specific data-label framework
widely utilized in current dataset distillation methods. Specifically, INFER
leverages a Universal Feature Compensator (UFC) to enhance feature integration
across classes, enabling the generation of multiple additional synthetic
instances from a single UFC input. This significantly improves the efficiency
of the distillation budget. Moreover, INFER enriches inter-class interactions
during the distillation, thereby enhancing the effectiveness and
generalizability of the distilled data. By allowing for the linear
interpolation of labels similar to those in the original dataset, INFER
meticulously optimizes the synthetic data and dramatically reduces the size of
soft labels in the synthetic dataset to almost zero, establishing a new
benchmark for efficiency and effectiveness in dataset distillation. In
practice, INFER demonstrates state-of-the-art performance across benchmark
datasets. For instance, in the ipc = 50 setting on ImageNet-1k with the same
compression level, it outperforms SRe2L by 34.5% using ResNet18.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 14:29:00 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Oct 2024 14:01:27 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 08:35:41 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhang",
"Xin",
""
],
[
"Du",
"Jiawei",
""
],
[
"Liu",
"Ping",
""
],
[
"Zhou",
"Joey Tianyi",
""
]
]
| TITLE: Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class
Feature Compensator
ABSTRACT: Dataset distillation has emerged as a technique aiming to condense
informative features from large, natural datasets into a compact and synthetic
form. While recent advancements have refined this technique, its performance is
bottlenecked by the prevailing class-specific synthesis paradigm. Under this
paradigm, synthetic data is optimized exclusively for a pre-assigned one-hot
label, creating an implicit class barrier in feature condensation. This leads
to inefficient utilization of the distillation budget and oversight of
inter-class feature distributions, which ultimately limits the effectiveness
and efficiency, as demonstrated in our analysis. To overcome these constraints,
this paper presents the Inter-class Feature Compensator (INFER), an innovative
distillation approach that transcends the class-specific data-label framework
widely utilized in current dataset distillation methods. Specifically, INFER
leverages a Universal Feature Compensator (UFC) to enhance feature integration
across classes, enabling the generation of multiple additional synthetic
instances from a single UFC input. This significantly improves the efficiency
of the distillation budget. Moreover, INFER enriches inter-class interactions
during the distillation, thereby enhancing the effectiveness and
generalizability of the distilled data. By allowing for the linear
interpolation of labels similar to those in the original dataset, INFER
meticulously optimizes the synthetic data and dramatically reduces the size of
soft labels in the synthetic dataset to almost zero, establishing a new
benchmark for efficiency and effectiveness in dataset distillation. In
practice, INFER demonstrates state-of-the-art performance across benchmark
datasets. For instance, in the ipc = 50 setting on ImageNet-1k with the same
compression level, it outperforms SRe2L by 34.5% using ResNet18.
| no_new_dataset | 0.949995 |
2408.14687 | Flavio Giobergia | Flavio Giobergia, Eliana Pastor, Luca de Alfaro, Elena Baralis | A Synthetic Benchmark to Explore Limitations of Localized Drift
Detections | Paper accepted at DELTA Workshop @ KDD 2024 | null | 10.1007/978-3-031-82346-6_7 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Concept drift is a common phenomenon in data streams where the statistical
properties of the target variable change over time. Traditionally, drift is
assumed to occur globally, affecting the entire dataset uniformly. However,
this assumption does not always hold true in real-world scenarios where only
specific subpopulations within the data may experience drift. This paper
explores the concept of localized drift and evaluates the performance of
several drift detection techniques in identifying such localized changes. We
introduce a synthetic dataset based on the Agrawal generator, where drift is
induced in a randomly chosen subgroup. Our experiments demonstrate that
commonly adopted drift detection methods may fail to detect drift when it is
confined to a small subpopulation. We propose and test various drift detection
approaches to quantify their effectiveness in this localized drift scenario. We
make the source code for the generation of the synthetic benchmark available at
https://github.com/fgiobergia/subgroup-agrawal-drift.
| [
{
"version": "v1",
"created": "Mon, 26 Aug 2024 23:24:31 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Giobergia",
"Flavio",
""
],
[
"Pastor",
"Eliana",
""
],
[
"de Alfaro",
"Luca",
""
],
[
"Baralis",
"Elena",
""
]
]
| TITLE: A Synthetic Benchmark to Explore Limitations of Localized Drift
Detections
ABSTRACT: Concept drift is a common phenomenon in data streams where the statistical
properties of the target variable change over time. Traditionally, drift is
assumed to occur globally, affecting the entire dataset uniformly. However,
this assumption does not always hold true in real-world scenarios where only
specific subpopulations within the data may experience drift. This paper
explores the concept of localized drift and evaluates the performance of
several drift detection techniques in identifying such localized changes. We
introduce a synthetic dataset based on the Agrawal generator, where drift is
induced in a randomly chosen subgroup. Our experiments demonstrate that
commonly adopted drift detection methods may fail to detect drift when it is
confined to a small subpopulation. We propose and test various drift detection
approaches to quantify their effectiveness in this localized drift scenario. We
make the source code for the generation of the synthetic benchmark available at
https://github.com/fgiobergia/subgroup-agrawal-drift.
| new_dataset | 0.964187 |
2408.15503 | Haisheng Su | Haisheng Su, Feixiang Song, Cong Ma, Wei Wu, Junchi Yan | RoboSense: Large-scale Dataset and Benchmark for Egocentric Robot
Perception and Navigation in Crowded and Unstructured Environments | Accepted to CVPR2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable embodied perception from an egocentric perspective is challenging
yet essential for autonomous navigation technology of intelligent mobile
agents. With the growing demand of social robotics, near-field scene
understanding becomes an important research topic in the areas of egocentric
perceptual tasks related to navigation in both crowded and unstructured
environments. Due to the complexity of environmental conditions and difficulty
of surrounding obstacles owing to truncation and occlusion, the perception
capability under this circumstance is still inferior. To further enhance the
intelligence of mobile robots, in this paper, we set up an egocentric
multi-sensor data collection platform based on 3 main types of sensors (Camera,
LiDAR and Fisheye), which supports flexible sensor configurations to enable
dynamic sight of view from ego-perspective, capturing either near or farther
areas. Meanwhile, a large-scale multimodal dataset is constructed, named
RoboSense, to facilitate egocentric robot perception. Specifically, RoboSense
contains more than 133K synchronized data with 1.4M 3D bounding box and IDs
annotated in the full $360^{\circ}$ view, forming 216K trajectories across 7.6K
temporal sequences. It has $270\times$ and $18\times$ as many annotations of
surrounding obstacles within near ranges as the previous datasets collected for
autonomous driving scenarios such as KITTI and nuScenes. Moreover, we define a
novel matching criterion for near-field 3D perception and prediction metrics.
Based on RoboSense, we formulate 6 popular tasks to facilitate the future
research development, where the detailed analysis as well as benchmarks are
also provided accordingly. Data desensitization measures have been conducted
for privacy protection.
| [
{
"version": "v1",
"created": "Wed, 28 Aug 2024 03:17:40 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Sep 2024 15:51:44 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Sep 2024 11:29:27 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Nov 2024 06:24:48 GMT"
},
{
"version": "v5",
"created": "Wed, 5 Mar 2025 05:14:34 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Su",
"Haisheng",
""
],
[
"Song",
"Feixiang",
""
],
[
"Ma",
"Cong",
""
],
[
"Wu",
"Wei",
""
],
[
"Yan",
"Junchi",
""
]
]
| TITLE: RoboSense: Large-scale Dataset and Benchmark for Egocentric Robot
Perception and Navigation in Crowded and Unstructured Environments
ABSTRACT: Reliable embodied perception from an egocentric perspective is challenging
yet essential for autonomous navigation technology of intelligent mobile
agents. With the growing demand of social robotics, near-field scene
understanding becomes an important research topic in the areas of egocentric
perceptual tasks related to navigation in both crowded and unstructured
environments. Due to the complexity of environmental conditions and difficulty
of surrounding obstacles owing to truncation and occlusion, the perception
capability under this circumstance is still inferior. To further enhance the
intelligence of mobile robots, in this paper, we set up an egocentric
multi-sensor data collection platform based on 3 main types of sensors (Camera,
LiDAR and Fisheye), which supports flexible sensor configurations to enable
dynamic sight of view from ego-perspective, capturing either near or farther
areas. Meanwhile, a large-scale multimodal dataset is constructed, named
RoboSense, to facilitate egocentric robot perception. Specifically, RoboSense
contains more than 133K synchronized data with 1.4M 3D bounding box and IDs
annotated in the full $360^{\circ}$ view, forming 216K trajectories across 7.6K
temporal sequences. It has $270\times$ and $18\times$ as many annotations of
surrounding obstacles within near ranges as the previous datasets collected for
autonomous driving scenarios such as KITTI and nuScenes. Moreover, we define a
novel matching criterion for near-field 3D perception and prediction metrics.
Based on RoboSense, we formulate 6 popular tasks to facilitate the future
research development, where the detailed analysis as well as benchmarks are
also provided accordingly. Data desensitization measures have been conducted
for privacy protection.
| new_dataset | 0.962285 |
2409.07003 | Xiaomin Lin | Xiaomin Lin, Vivek Mange, Arjun Suresh, Bernhard Neuberger, Aadi
Palnitkar, Brendan Campbell, Alan Williams, Kleio Baxevani, Jeremy Mallette,
Alhim Vera, Markus Vincze, Ioannis Rekleitis, Herbert G. Tanner, Yiannis
Aloimonos | ODYSSEE: Oyster Detection Yielded by Sensor Systems on Edge Electronics | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oysters are a vital keystone species in coastal ecosystems, providing
significant economic, environmental, and cultural benefits. As the importance
of oysters grows, so does the relevance of autonomous systems for their
detection and monitoring. However, current monitoring strategies often rely on
destructive methods. While manual identification of oysters from video footage
is non-destructive, it is time-consuming, requires expert input, and is further
complicated by the challenges of the underwater environment.
To address these challenges, we propose a novel pipeline using stable
diffusion to augment a collected real dataset with realistic synthetic data.
This method enhances the dataset used to train a YOLOv10-based vision model.
The model is then deployed and tested on an edge platform in underwater
robotics, achieving a state-of-the-art 0.657 mAP@50 for oyster detection on the
Aqua2 platform.
| [
{
"version": "v1",
"created": "Wed, 11 Sep 2024 04:31:09 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Sep 2024 14:17:17 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 19:36:45 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Lin",
"Xiaomin",
""
],
[
"Mange",
"Vivek",
""
],
[
"Suresh",
"Arjun",
""
],
[
"Neuberger",
"Bernhard",
""
],
[
"Palnitkar",
"Aadi",
""
],
[
"Campbell",
"Brendan",
""
],
[
"Williams",
"Alan",
""
],
[
"Baxevani",
"Kleio",
""
],
[
"Mallette",
"Jeremy",
""
],
[
"Vera",
"Alhim",
""
],
[
"Vincze",
"Markus",
""
],
[
"Rekleitis",
"Ioannis",
""
],
[
"Tanner",
"Herbert G.",
""
],
[
"Aloimonos",
"Yiannis",
""
]
]
| TITLE: ODYSSEE: Oyster Detection Yielded by Sensor Systems on Edge Electronics
ABSTRACT: Oysters are a vital keystone species in coastal ecosystems, providing
significant economic, environmental, and cultural benefits. As the importance
of oysters grows, so does the relevance of autonomous systems for their
detection and monitoring. However, current monitoring strategies often rely on
destructive methods. While manual identification of oysters from video footage
is non-destructive, it is time-consuming, requires expert input, and is further
complicated by the challenges of the underwater environment.
To address these challenges, we propose a novel pipeline using stable
diffusion to augment a collected real dataset with realistic synthetic data.
This method enhances the dataset used to train a YOLOv10-based vision model.
The model is then deployed and tested on an edge platform in underwater
robotics, achieving a state-of-the-art 0.657 mAP@50 for oyster detection on the
Aqua2 platform.
| no_new_dataset | 0.944074 |
2409.11985 | Viacheslav Barkov | Viacheslav Barkov, Jonas Schmidinger, Robin Gebbers, Martin Atzmueller | An Efficient Model-Agnostic Approach for Uncertainty Estimation in
Data-Restricted Pedometric Applications | To be published in the proceedings of ICMLA 2024: 23rd International
Conference on Machine Learning and Applications | null | 10.1109/ICMLA61862.2024.00033 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a model-agnostic approach designed to enhance
uncertainty estimation in the predictive modeling of soil properties, a crucial
factor for advancing pedometrics and the practice of digital soil mapping. For
addressing the typical challenge of data scarcity in soil studies, we present
an improved technique for uncertainty estimation. This method is based on the
transformation of regression tasks into classification problems, which not only
allows for the production of reliable uncertainty estimates but also enables
the application of established machine learning algorithms with competitive
performance that have not yet been utilized in pedometrics. Empirical results
from datasets collected from two German agricultural fields showcase the
practical application of the proposed methodology. Our results and findings
suggest that the proposed approach has the potential to provide better
uncertainty estimation than the models commonly used in pedometrics.
| [
{
"version": "v1",
"created": "Wed, 18 Sep 2024 13:43:39 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Barkov",
"Viacheslav",
""
],
[
"Schmidinger",
"Jonas",
""
],
[
"Gebbers",
"Robin",
""
],
[
"Atzmueller",
"Martin",
""
]
]
| TITLE: An Efficient Model-Agnostic Approach for Uncertainty Estimation in
Data-Restricted Pedometric Applications
ABSTRACT: This paper introduces a model-agnostic approach designed to enhance
uncertainty estimation in the predictive modeling of soil properties, a crucial
factor for advancing pedometrics and the practice of digital soil mapping. For
addressing the typical challenge of data scarcity in soil studies, we present
an improved technique for uncertainty estimation. This method is based on the
transformation of regression tasks into classification problems, which not only
allows for the production of reliable uncertainty estimates but also enables
the application of established machine learning algorithms with competitive
performance that have not yet been utilized in pedometrics. Empirical results
from datasets collected from two German agricultural fields showcase the
practical application of the proposed methodology. Our results and findings
suggest that the proposed approach has the potential to provide better
uncertainty estimation than the models commonly used in pedometrics.
| no_new_dataset | 0.947088 |
2409.14262 | Jing Liang | Jing Liang, Dibyendu Das, Daeun Song, Md Nahid Hasan Shuvo, Mohammad
Durrani, Karthik Taranath, Ivan Penskiy, Dinesh Manocha, Xuesu Xiao | GND: Global Navigation Dataset with Multi-Modal Perception and
Multi-Category Traversability in Outdoor Campus Environments | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Navigating large-scale outdoor environments requires complex reasoning in
terms of geometric structures, environmental semantics, and terrain
characteristics, which are typically captured by onboard sensors such as LiDAR
and cameras. While current mobile robots can navigate such environments using
pre-defined, high-precision maps based on hand-crafted rules catered for the
specific environment, they lack commonsense reasoning capabilities that most
humans possess when navigating unknown outdoor spaces. To address this gap, we
introduce the Global Navigation Dataset (GND), a large-scale dataset that
integrates multi-modal sensory data, including 3D LiDAR point clouds and RGB
and 360-degree images, as well as multi-category traversability maps
(pedestrian walkways, vehicle roadways, stairs, off-road terrain, and
obstacles) from ten university campuses. These environments encompass a variety
of parks, urban settings, elevation changes, and campus layouts of different
scales. The dataset covers approximately 2.7 km$^2$ and includes at least 350
buildings in total. We also present a set of novel applications of GND to
showcase its utility to enable global robot navigation, such as map-based
global navigation, mapless navigation, and global place recognition.
| [
{
"version": "v1",
"created": "Sat, 21 Sep 2024 23:06:14 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Sep 2024 19:08:40 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 00:50:23 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liang",
"Jing",
""
],
[
"Das",
"Dibyendu",
""
],
[
"Song",
"Daeun",
""
],
[
"Shuvo",
"Md Nahid Hasan",
""
],
[
"Durrani",
"Mohammad",
""
],
[
"Taranath",
"Karthik",
""
],
[
"Penskiy",
"Ivan",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Xiao",
"Xuesu",
""
]
]
| TITLE: GND: Global Navigation Dataset with Multi-Modal Perception and
Multi-Category Traversability in Outdoor Campus Environments
ABSTRACT: Navigating large-scale outdoor environments requires complex reasoning in
terms of geometric structures, environmental semantics, and terrain
characteristics, which are typically captured by onboard sensors such as LiDAR
and cameras. While current mobile robots can navigate such environments using
pre-defined, high-precision maps based on hand-crafted rules catered for the
specific environment, they lack commonsense reasoning capabilities that most
humans possess when navigating unknown outdoor spaces. To address this gap, we
introduce the Global Navigation Dataset (GND), a large-scale dataset that
integrates multi-modal sensory data, including 3D LiDAR point clouds and RGB
and 360-degree images, as well as multi-category traversability maps
(pedestrian walkways, vehicle roadways, stairs, off-road terrain, and
obstacles) from ten university campuses. These environments encompass a variety
of parks, urban settings, elevation changes, and campus layouts of different
scales. The dataset covers approximately 2.7 km$^2$ and includes at least 350
buildings in total. We also present a set of novel applications of GND to
showcase its utility to enable global robot navigation, such as map-based
global navigation, mapless navigation, and global place recognition.
| new_dataset | 0.962356 |
2409.16215 | Francesco Pasti | Francesco Pasti, Riccardo De Monte, Davide Dalle Pezze, Gian Antonio
Susto, Nicola Bellotto | Tiny Robotics Dataset and Benchmark for Continual Object Detection | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Detecting objects in mobile robotics is crucial for numerous applications,
from autonomous navigation to inspection. However, robots often need to operate
in different domains from those they were trained in, requiring them to adjust
to these changes. Tiny mobile robots, subject to size, power, and computational
constraints, encounter even more difficulties in running and adapting these
algorithms. Such adaptability, though, is crucial for real-world deployment,
where robots must operate effectively in dynamic and unpredictable settings. In
this work, we introduce a novel benchmark to evaluate the continual learning
capabilities of object detection systems in tiny robotic platforms. Our
contributions include: (i) Tiny Robotics Object Detection~(TiROD), a
comprehensive dataset collected using the onboard camera of a small mobile
robot, designed to test object detectors across various domains and classes;
(ii) a benchmark of different continual learning strategies on this dataset
using NanoDet, a lightweight object detector. Our results highlight key
challenges in developing robust and efficient continual learning strategies for
object detectors in tiny robotics.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2024 16:21:27 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 14:49:21 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Pasti",
"Francesco",
""
],
[
"De Monte",
"Riccardo",
""
],
[
"Pezze",
"Davide Dalle",
""
],
[
"Susto",
"Gian Antonio",
""
],
[
"Bellotto",
"Nicola",
""
]
]
| TITLE: Tiny Robotics Dataset and Benchmark for Continual Object Detection
ABSTRACT: Detecting objects in mobile robotics is crucial for numerous applications,
from autonomous navigation to inspection. However, robots often need to operate
in different domains from those they were trained in, requiring them to adjust
to these changes. Tiny mobile robots, subject to size, power, and computational
constraints, encounter even more difficulties in running and adapting these
algorithms. Such adaptability, though, is crucial for real-world deployment,
where robots must operate effectively in dynamic and unpredictable settings. In
this work, we introduce a novel benchmark to evaluate the continual learning
capabilities of object detection systems in tiny robotic platforms. Our
contributions include: (i) Tiny Robotics Object Detection~(TiROD), a
comprehensive dataset collected using the onboard camera of a small mobile
robot, designed to test object detectors across various domains and classes;
(ii) a benchmark of different continual learning strategies on this dataset
using NanoDet, a lightweight object detector. Our results highlight key
challenges in developing robust and efficient continual learning strategies for
object detectors in tiny robotics.
| new_dataset | 0.969843 |
2410.01962 | Mohammad Mahdavian | Mohammad Mahdavian, Mohammad Loni, Ted Samuelsson, Mo Chen | LS-HAR: Language Supervised Human Action Recognition with Salient
Fusion, Construction Sites as a Use-Case | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting human actions is a crucial task for autonomous robots and vehicles,
often requiring the integration of various data modalities for improved
accuracy. In this study, we introduce a novel approach to Human Action
Recognition (HAR) using language supervision named LS-HAR based on skeleton and
visual cues. Our method leverages a language model to guide the feature
extraction process in the skeleton encoder. Specifically, we employ learnable
prompts for the language model conditioned on the skeleton modality to optimize
feature representation. Furthermore, we propose a fusion mechanism that
combines dual-modality features using a salient fusion module, incorporating
attention and transformer mechanisms to address the modalities' high
dimensionality. This fusion process prioritizes informative video frames and
body joints, enhancing the recognition accuracy of human actions. Additionally,
we introduce a new dataset tailored for real-world robotic applications in
construction sites, featuring visual, skeleton, and depth data modalities,
named VolvoConstAct. This dataset serves to facilitate the training and
evaluation of machine learning models to instruct autonomous construction
machines for performing necessary tasks in real-world construction sites. To
evaluate our approach, we conduct experiments on our dataset as well as three
widely used public datasets: NTU-RGB+D, NTU-RGB+D 120, and NW-UCLA. Results
reveal that our proposed method achieves promising performance across all
datasets, demonstrating its robustness and potential for various applications.
The code, dataset, and demonstration of real-machine experiments are available
at: https://mmahdavian.github.io/ls_har/
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 19:10:23 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 00:41:20 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Mahdavian",
"Mohammad",
""
],
[
"Loni",
"Mohammad",
""
],
[
"Samuelsson",
"Ted",
""
],
[
"Chen",
"Mo",
""
]
]
| TITLE: LS-HAR: Language Supervised Human Action Recognition with Salient
Fusion, Construction Sites as a Use-Case
ABSTRACT: Detecting human actions is a crucial task for autonomous robots and vehicles,
often requiring the integration of various data modalities for improved
accuracy. In this study, we introduce a novel approach to Human Action
Recognition (HAR) using language supervision named LS-HAR based on skeleton and
visual cues. Our method leverages a language model to guide the feature
extraction process in the skeleton encoder. Specifically, we employ learnable
prompts for the language model conditioned on the skeleton modality to optimize
feature representation. Furthermore, we propose a fusion mechanism that
combines dual-modality features using a salient fusion module, incorporating
attention and transformer mechanisms to address the modalities' high
dimensionality. This fusion process prioritizes informative video frames and
body joints, enhancing the recognition accuracy of human actions. Additionally,
we introduce a new dataset tailored for real-world robotic applications in
construction sites, featuring visual, skeleton, and depth data modalities,
named VolvoConstAct. This dataset serves to facilitate the training and
evaluation of machine learning models to instruct autonomous construction
machines for performing necessary tasks in real-world construction sites. To
evaluate our approach, we conduct experiments on our dataset as well as three
widely used public datasets: NTU-RGB+D, NTU-RGB+D 120, and NW-UCLA. Results
reveal that our proposed method achieves promising performance across all
datasets, demonstrating its robustness and potential for various applications.
The code, dataset, and demonstration of real-machine experiments are available
at: https://mmahdavian.github.io/ls_har/
| new_dataset | 0.963369 |
2410.05096 | Mehdi Azarafza | Mehdi Azarafza, Fatima Idrees, Ali Ehteshami Bejnordi, Charles
Steinmetz, Stefan Henkler, Achim Rettberg | Human-in-the-loop Reasoning For Traffic Sign Detection: Collaborative
Approach Yolo With Video-llava | 10 pages, 6 figures | null | 10.1007/978-3-031-84457-7_9 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Traffic Sign Recognition (TSR) detection is a crucial component of autonomous
vehicles. While You Only Look Once (YOLO) is a popular real-time object
detection algorithm, factors like training data quality and adverse weather
conditions (e.g., heavy rain) can lead to detection failures. These failures
can be particularly dangerous when visual similarities between objects exist,
such as mistaking a 30 km/h sign for a higher speed limit sign. This paper
proposes a method that combines video analysis and reasoning, prompting with a
human-in-the-loop guide large vision model to improve YOLO's accuracy in
detecting road speed limit signs, especially in semi-real-world conditions. It
is hypothesized that the guided prompting and reasoning abilities of
Video-LLava can enhance YOLO's traffic sign detection capabilities. This
hypothesis is supported by an evaluation based on human-annotated accuracy
metrics within a dataset of recorded videos from the CARLA car simulator. The
results demonstrate that a collaborative approach combining YOLO with
Video-LLava and reasoning can effectively address challenging situations such
as heavy rain and overcast conditions that hinder YOLO's detection capabilities.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 14:50:56 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 15:26:13 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Azarafza",
"Mehdi",
""
],
[
"Idrees",
"Fatima",
""
],
[
"Bejnordi",
"Ali Ehteshami",
""
],
[
"Steinmetz",
"Charles",
""
],
[
"Henkler",
"Stefan",
""
],
[
"Rettberg",
"Achim",
""
]
]
| TITLE: Human-in-the-loop Reasoning For Traffic Sign Detection: Collaborative
Approach Yolo With Video-llava
ABSTRACT: Traffic Sign Recognition (TSR) detection is a crucial component of autonomous
vehicles. While You Only Look Once (YOLO) is a popular real-time object
detection algorithm, factors like training data quality and adverse weather
conditions (e.g., heavy rain) can lead to detection failures. These failures
can be particularly dangerous when visual similarities between objects exist,
such as mistaking a 30 km/h sign for a higher speed limit sign. This paper
proposes a method that combines video analysis and reasoning, prompting with a
human-in-the-loop guide large vision model to improve YOLO's accuracy in
detecting road speed limit signs, especially in semi-real-world conditions. It
is hypothesized that the guided prompting and reasoning abilities of
Video-LLava can enhance YOLO's traffic sign detection capabilities. This
hypothesis is supported by an evaluation based on human-annotated accuracy
metrics within a dataset of recorded videos from the CARLA car simulator. The
results demonstrate that a collaborative approach combining YOLO with
Video-LLava and reasoning can effectively address challenging situations such
as heavy rain and overcast conditions that hinder YOLO's detection capabilities.
| no_new_dataset | 0.945551 |
2410.05274 | Amrita Singh | Amrita Singh, and Snehasis Mukherjee | Scale-Invariant Object Detection by Adaptive Convolution with Unified
Global-Local Context | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dense features are important for detecting minute objects in images.
Unfortunately, despite the remarkable efficacy of the CNN models in multi-scale
object detection, CNN models often fail to detect smaller objects in images due
to the loss of dense features during the pooling process. Atrous convolution
addresses this issue by applying sparse kernels. However, sparse kernels often
can lose the multi-scale detection efficacy of the CNN model. In this paper, we
propose an object detection model using a Switchable (adaptive) Atrous
Convolutional Network (SAC-Net) based on the efficientDet model. A fixed atrous
rate limits the performance of the CNN models in the convolutional layers. To
overcome this limitation, we introduce a switchable mechanism that allows for
dynamically adjusting the atrous rate during the forward pass. The proposed
SAC-Net encapsulates the benefits of both low-level and high-level features to
achieve improved performance on multi-scale object detection tasks, without
losing the dense features. Further, we apply a depth-wise switchable atrous
rate to the proposed network, to improve the scale-invariant features. Finally,
we apply global context on the proposed model. Our extensive experiments on
benchmark datasets demonstrate that the proposed SAC-Net outperforms the
state-of-the-art models by a significant margin in terms of accuracy.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 10:08:37 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 08:36:27 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Singh",
"Amrita",
""
],
[
"Mukherjee",
"Snehasis",
""
]
]
| TITLE: Scale-Invariant Object Detection by Adaptive Convolution with Unified
Global-Local Context
ABSTRACT: Dense features are important for detecting minute objects in images.
Unfortunately, despite the remarkable efficacy of the CNN models in multi-scale
object detection, CNN models often fail to detect smaller objects in images due
to the loss of dense features during the pooling process. Atrous convolution
addresses this issue by applying sparse kernels. However, sparse kernels often
can lose the multi-scale detection efficacy of the CNN model. In this paper, we
propose an object detection model using a Switchable (adaptive) Atrous
Convolutional Network (SAC-Net) based on the efficientDet model. A fixed atrous
rate limits the performance of the CNN models in the convolutional layers. To
overcome this limitation, we introduce a switchable mechanism that allows for
dynamically adjusting the atrous rate during the forward pass. The proposed
SAC-Net encapsulates the benefits of both low-level and high-level features to
achieve improved performance on multi-scale object detection tasks, without
losing the dense features. Further, we apply a depth-wise switchable atrous
rate to the proposed network, to improve the scale-invariant features. Finally,
we apply global context on the proposed model. Our extensive experiments on
benchmark datasets demonstrate that the proposed SAC-Net outperforms the
state-of-the-art models by a significant margin in terms of accuracy.
| no_new_dataset | 0.949995 |
2410.06437 | Kojiro Takeyama | Kojiro Takeyama, Yimeng Liu, Misha Sra | LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality | This paper has been accepted to ICLR2025 | null | null | null | cs.RO cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | Understanding human locomotion is crucial for AI agents such as robots,
particularly in complex indoor home environments. Modeling human trajectories
in these spaces requires insight into how individuals maneuver around physical
obstacles and manage social navigation dynamics. These dynamics include subtle
behaviors influenced by proxemics - the social use of space, such as stepping
aside to allow others to pass or choosing longer routes to avoid collisions.
Previous research has developed datasets of human motion in indoor scenes, but
these are often limited in scale and lack the nuanced social navigation
dynamics common in home environments. To address this, we present LocoVR, a
dataset of 7000+ two-person trajectories captured in virtual reality from over
130 different indoor home environments. LocoVR provides accurate trajectory
data and precise spatial information, along with rich examples of
socially-motivated movement behaviors. For example, the dataset captures
instances of individuals navigating around each other in narrow spaces,
adjusting paths to respect personal boundaries in living areas, and
coordinating movements in high-traffic zones like entryways and kitchens. Our
evaluation shows that LocoVR significantly enhances model performance in three
practical indoor tasks utilizing human trajectories, and demonstrates
predicting socially-aware navigation patterns in home environments.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 00:45:02 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 23:49:01 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Takeyama",
"Kojiro",
""
],
[
"Liu",
"Yimeng",
""
],
[
"Sra",
"Misha",
""
]
]
| TITLE: LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality
ABSTRACT: Understanding human locomotion is crucial for AI agents such as robots,
particularly in complex indoor home environments. Modeling human trajectories
in these spaces requires insight into how individuals maneuver around physical
obstacles and manage social navigation dynamics. These dynamics include subtle
behaviors influenced by proxemics - the social use of space, such as stepping
aside to allow others to pass or choosing longer routes to avoid collisions.
Previous research has developed datasets of human motion in indoor scenes, but
these are often limited in scale and lack the nuanced social navigation
dynamics common in home environments. To address this, we present LocoVR, a
dataset of 7000+ two-person trajectories captured in virtual reality from over
130 different indoor home environments. LocoVR provides accurate trajectory
data and precise spatial information, along with rich examples of
socially-motivated movement behaviors. For example, the dataset captures
instances of individuals navigating around each other in narrow spaces,
adjusting paths to respect personal boundaries in living areas, and
coordinating movements in high-traffic zones like entryways and kitchens. Our
evaluation shows that LocoVR significantly enhances model performance in three
practical indoor tasks utilizing human trajectories, and demonstrates
predicting socially-aware navigation patterns in home environments.
| new_dataset | 0.958187 |
2410.08143 | Yutong Wang | Yutong Wang, Jiali Zeng, Xuebo Liu, Derek F. Wong, Fandong Meng, Jie
Zhou, Min Zhang | DelTA: An Online Document-Level Translation Agent Based on Multi-Level
Memory | Accepted as a conference paper at ICLR 2025 | Published as a conference paper at ICLR 2025 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have achieved reasonable quality improvements in
machine translation (MT). However, most current research on MT-LLMs still faces
significant challenges in maintaining translation consistency and accuracy when
processing entire documents. In this paper, we introduce DelTA, a
Document-levEL Translation Agent designed to overcome these limitations. DelTA
features a multi-level memory structure that stores information across various
granularities and spans, including Proper Noun Records, Bilingual Summary,
Long-Term Memory, and Short-Term Memory, which are continuously retrieved and
updated by auxiliary LLM-based components. Experimental results indicate that
DelTA significantly outperforms strong baselines in terms of translation
consistency and quality across four open/closed-source LLMs and two
representative document translation datasets, achieving an increase in
consistency scores by up to 4.58 percentage points and in COMET scores by up to
3.16 points on average. DelTA employs a sentence-by-sentence translation
strategy, ensuring no sentence omissions and offering a memory-efficient
solution compared to the mainstream method. Furthermore, DelTA improves pronoun
and context-dependent translation accuracy, and the summary component of the
agent also shows promise as a tool for query-based summarization tasks. The
code and data of our approach are released at
https://github.com/YutongWang1216/DocMTAgent.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 17:30:09 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 17:50:44 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Yutong",
""
],
[
"Zeng",
"Jiali",
""
],
[
"Liu",
"Xuebo",
""
],
[
"Wong",
"Derek F.",
""
],
[
"Meng",
"Fandong",
""
],
[
"Zhou",
"Jie",
""
],
[
"Zhang",
"Min",
""
]
]
| TITLE: DelTA: An Online Document-Level Translation Agent Based on Multi-Level
Memory
ABSTRACT: Large language models (LLMs) have achieved reasonable quality improvements in
machine translation (MT). However, most current research on MT-LLMs still faces
significant challenges in maintaining translation consistency and accuracy when
processing entire documents. In this paper, we introduce DelTA, a
Document-levEL Translation Agent designed to overcome these limitations. DelTA
features a multi-level memory structure that stores information across various
granularities and spans, including Proper Noun Records, Bilingual Summary,
Long-Term Memory, and Short-Term Memory, which are continuously retrieved and
updated by auxiliary LLM-based components. Experimental results indicate that
DelTA significantly outperforms strong baselines in terms of translation
consistency and quality across four open/closed-source LLMs and two
representative document translation datasets, achieving an increase in
consistency scores by up to 4.58 percentage points and in COMET scores by up to
3.16 points on average. DelTA employs a sentence-by-sentence translation
strategy, ensuring no sentence omissions and offering a memory-efficient
solution compared to the mainstream method. Furthermore, DelTA improves pronoun
and context-dependent translation accuracy, and the summary component of the
agent also shows promise as a tool for query-based summarization tasks. The
code and data of our approach are released at
https://github.com/YutongWang1216/DocMTAgent.
| no_new_dataset | 0.94868 |
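As a reading aid for the DelTA record above: the four memory levels its abstract names can be pictured as a small container object that is refreshed after every translated sentence. The sketch below is purely illustrative and is not the authors' implementation; all class, field, and function names (MultiLevelMemory, build_prompt_context, the placeholder translator call) are assumptions.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class MultiLevelMemory:
    """Illustrative container for the four memory levels named in the abstract."""
    proper_nouns: dict = field(default_factory=dict)       # source term -> chosen translation
    bilingual_summary: str = ""                             # running summary of the document pair
    long_term: list = field(default_factory=list)           # all sentence pairs translated so far
    short_term: deque = field(default_factory=lambda: deque(maxlen=5))  # recent sentence pairs

    def update(self, src: str, tgt: str, nouns: dict) -> None:
        """After translating one sentence, refresh every memory level."""
        self.proper_nouns.update(nouns)                     # keep proper-noun translations consistent
        self.long_term.append((src, tgt))
        self.short_term.append((src, tgt))

    def build_prompt_context(self) -> str:
        """Assemble the context handed to the translator LLM for the next sentence."""
        glossary = "; ".join(f"{k} -> {v}" for k, v in self.proper_nouns.items())
        recent = " ".join(t for _, t in self.short_term)
        return f"Glossary: {glossary}\nSummary: {self.bilingual_summary}\nRecent: {recent}"

# Example: sentence-by-sentence translation loop (the translator call is a placeholder).
memory = MultiLevelMemory()
for sentence in ["Alice met Bob in Paris.", "Alice smiled."]:
    context = memory.build_prompt_context()
    translation = f"[translated] {sentence}"   # stand-in for translator_llm(context, sentence)
    memory.update(sentence, translation, nouns={"Alice": "Alice"})
```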
2410.08642 | Elisabeth Steffen | Elisabeth Steffen | More than Memes: A Multimodal Topic Modeling Approach to Conspiracy
Theories on Telegram | 12 pages, 10 figures | null | null | null | cs.SI cs.CL cs.CV cs.MM | http://creativecommons.org/licenses/by-sa/4.0/ | To address the increasing prevalence of (audio-)visual data on social media,
and to capture the evolving and dynamic nature of this communication,
researchers have begun to explore the potential of unsupervised approaches for
analyzing multimodal online content. However, existing research often neglects
visual content beyond memes, and in addition lacks methods to compare topic
models across modalities. Our study addresses these gaps by applying multimodal
topic modeling for analyzing conspiracy theories in German-language Telegram
channels. We use BERTopic with CLIP for the analysis of textual and visual data
in a corpus of ~40,000 Telegram messages posted in October 2023 in 571
German-language Telegram channels known for disseminating conspiracy theories.
Through this dataset, we provide insights into unimodal and multimodal topic
models by analyzing symmetry and intersections of topics across modalities. We
demonstrate the variety of textual and visual content shared in the channels
discovered through the topic modeling, and propose a conceptual framework for
the analysis of textual and visual discursive strategies in the communication
of conspiracy theories. We apply the framework in a case study of the topic
group Israel Gaza.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 09:10:26 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 15:55:52 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Steffen",
"Elisabeth",
""
]
]
| TITLE: More than Memes: A Multimodal Topic Modeling Approach to Conspiracy
Theories on Telegram
ABSTRACT: To address the increasing prevalence of (audio-)visual data on social media,
and to capture the evolving and dynamic nature of this communication,
researchers have begun to explore the potential of unsupervised approaches for
analyzing multimodal online content. However, existing research often neglects
visual content beyond memes, and in addition lacks methods to compare topic
models across modalities. Our study addresses these gaps by applying multimodal
topic modeling for analyzing conspiracy theories in German-language Telegram
channels. We use BERTopic with CLIP for the analysis of textual and visual data
in a corpus of ~40,000 Telegram messages posted in October 2023 in 571
German-language Telegram channels known for disseminating conspiracy theories.
Through this dataset, we provide insights into unimodal and multimodal topic
models by analyzing symmetry and intersections of topics across modalities. We
demonstrate the variety of textual and visual content shared in the channels
discovered through the topic modeling, and propose a conceptual framework for
the analysis of textual and visual discursive strategies in the communication
of conspiracy theories. We apply the framework in a case study of the topic
group Israel Gaza.
| new_dataset | 0.962603 |
2410.09156 | Bokun Wang | Bokun Wang and Yunwen Lei and Yiming Ying and Tianbao Yang | On Discriminative Probabilistic Modeling for Self-Supervised
Representation Learning | To appear in ICLR 2025 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We study the discriminative probabilistic modeling on a continuous domain for
the data prediction task of (multimodal) self-supervised representation
learning. To address the challenge of computing the integral in the partition
function for each anchor data, we leverage the multiple importance sampling
(MIS) technique for robust Monte Carlo integration, which can recover
InfoNCE-based contrastive loss as a special case. Within this probabilistic
modeling framework, we conduct generalization error analysis to reveal the
limitation of current InfoNCE-based contrastive loss for self-supervised
representation learning and derive insights for developing better approaches by
reducing the error of Monte Carlo integration. To this end, we propose a novel
non-parametric method for approximating the sum of conditional probability
densities required by MIS through convex optimization, yielding a new
contrastive objective for self-supervised representation learning. Moreover, we
design an efficient algorithm for solving the proposed objective. We
empirically compare our algorithm to representative baselines on the
contrastive image-language pretraining task. Experimental results on the CC3M
and CC12M datasets demonstrate the superior overall performance of our
algorithm. Our code is available at https://github.com/bokun-wang/NUCLR.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 18:02:46 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Feb 2025 21:05:15 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 18:36:02 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Bokun",
""
],
[
"Lei",
"Yunwen",
""
],
[
"Ying",
"Yiming",
""
],
[
"Yang",
"Tianbao",
""
]
]
| TITLE: On Discriminative Probabilistic Modeling for Self-Supervised
Representation Learning
ABSTRACT: We study the discriminative probabilistic modeling on a continuous domain for
the data prediction task of (multimodal) self-supervised representation
learning. To address the challenge of computing the integral in the partition
function for each anchor data, we leverage the multiple importance sampling
(MIS) technique for robust Monte Carlo integration, which can recover
InfoNCE-based contrastive loss as a special case. Within this probabilistic
modeling framework, we conduct generalization error analysis to reveal the
limitation of current InfoNCE-based contrastive loss for self-supervised
representation learning and derive insights for developing better approaches by
reducing the error of Monte Carlo integration. To this end, we propose a novel
non-parametric method for approximating the sum of conditional probability
densities required by MIS through convex optimization, yielding a new
contrastive objective for self-supervised representation learning. Moreover, we
design an efficient algorithm for solving the proposed objective. We
empirically compare our algorithm to representative baselines on the
contrastive image-language pretraining task. Experimental results on the CC3M
and CC12M datasets demonstrate the superior overall performance of our
algorithm. Our code is available at https://github.com/bokun-wang/NUCLR.
| no_new_dataset | 0.943243 |
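For the record above: the abstract states that the InfoNCE-based contrastive loss is recovered as a special case of its multiple-importance-sampling treatment. As a reference point, here is a minimal NumPy sketch of the standard symmetric InfoNCE objective; it is the generic baseline loss, not the paper's NUCLR objective, and the temperature value is an arbitrary assumption.

```python
import numpy as np

def info_nce(image_emb: np.ndarray, text_emb: np.ndarray, temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: arrays of shape (batch, dim); row i of each is a positive pair.
    """
    # L2-normalize so that dot products become cosine similarities.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature                  # (batch, batch) similarity matrix

    def xent(mat: np.ndarray) -> float:
        """Cross-entropy with the diagonal as the correct 'class' for each row."""
        mat = mat - mat.max(axis=1, keepdims=True)      # numerical stability
        log_prob = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        diag = np.arange(len(mat))
        return float(-log_prob[diag, diag].mean())

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Tiny usage example with random embeddings.
rng = np.random.default_rng(0)
loss = info_nce(rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(f"InfoNCE on random pairs: {loss:.3f}")
```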
2410.14092 | Dekun Zhou | Alberto Del Pia, Dekun Zhou, Yinglun Zhu | Efficient Sparse PCA via Block-Diagonalization | 29 pages, 1 figure | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse Principal Component Analysis (Sparse PCA) is a pivotal tool in data
analysis and dimensionality reduction. However, Sparse PCA is a challenging
problem in both theory and practice: it is known to be NP-hard and current
exact methods generally require exponential runtime. In this paper, we propose
a novel framework to efficiently approximate Sparse PCA by (i) approximating
the general input covariance matrix with a re-sorted block-diagonal matrix,
(ii) solving the Sparse PCA sub-problem in each block, and (iii) reconstructing
the solution to the original problem. Our framework is simple and powerful: it
can leverage any off-the-shelf Sparse PCA algorithm and achieve significant
computational speedups, with a minor additive error that is linear in the
approximation error of the block-diagonal matrix. Suppose $g(k, d)$ is the
runtime of an algorithm (approximately) solving Sparse PCA in dimension $d$ and
with sparsity constant $k$. Our framework, when integrated with this algorithm,
reduces the runtime to $\mathcal{O}\left(\frac{d}{d^\star} \cdot g(k, d^\star)
+ d^2\right)$, where $d^\star \leq d$ is the largest block size of the
block-diagonal matrix. For instance, integrating our framework with the
Branch-and-Bound algorithm reduces the complexity from $g(k, d) =
\mathcal{O}(k^3\cdot d^k)$ to $\mathcal{O}(k^3\cdot d \cdot (d^\star)^{k-1})$,
demonstrating exponential speedups if $d^\star$ is small. We perform
large-scale evaluations on many real-world datasets: for exact Sparse PCA
algorithm, our method achieves an average speedup factor of 100.50, while
maintaining an average approximation error of 0.61%; for approximate Sparse PCA
algorithm, our method achieves an average speedup factor of 6.00 and an average
approximation error of -0.91%, meaning that our method oftentimes finds better
solutions.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 00:16:10 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 00:31:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Del Pia",
"Alberto",
""
],
[
"Zhou",
"Dekun",
""
],
[
"Zhu",
"Yinglun",
""
]
]
| TITLE: Efficient Sparse PCA via Block-Diagonalization
ABSTRACT: Sparse Principal Component Analysis (Sparse PCA) is a pivotal tool in data
analysis and dimensionality reduction. However, Sparse PCA is a challenging
problem in both theory and practice: it is known to be NP-hard and current
exact methods generally require exponential runtime. In this paper, we propose
a novel framework to efficiently approximate Sparse PCA by (i) approximating
the general input covariance matrix with a re-sorted block-diagonal matrix,
(ii) solving the Sparse PCA sub-problem in each block, and (iii) reconstructing
the solution to the original problem. Our framework is simple and powerful: it
can leverage any off-the-shelf Sparse PCA algorithm and achieve significant
computational speedups, with a minor additive error that is linear in the
approximation error of the block-diagonal matrix. Suppose $g(k, d)$ is the
runtime of an algorithm (approximately) solving Sparse PCA in dimension $d$ and
with sparsity constant $k$. Our framework, when integrated with this algorithm,
reduces the runtime to $\mathcal{O}\left(\frac{d}{d^\star} \cdot g(k, d^\star)
+ d^2\right)$, where $d^\star \leq d$ is the largest block size of the
block-diagonal matrix. For instance, integrating our framework with the
Branch-and-Bound algorithm reduces the complexity from $g(k, d) =
\mathcal{O}(k^3\cdot d^k)$ to $\mathcal{O}(k^3\cdot d \cdot (d^\star)^{k-1})$,
demonstrating exponential speedups if $d^\star$ is small. We perform
large-scale evaluations on many real-world datasets: for exact Sparse PCA
algorithm, our method achieves an average speedup factor of 100.50, while
maintaining an average approximation error of 0.61%; for approximate Sparse PCA
algorithm, our method achieves an average speedup factor of 6.00 and an average
approximation error of -0.91%, meaning that our method oftentimes finds better
solutions.
| no_new_dataset | 0.942823 |
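For the Sparse PCA record above, the three-step framework it describes (approximate the covariance with a block-diagonal matrix, solve Sparse PCA inside each block, lift the best solution back to the full dimension) can be sketched as follows. The thresholding rule, the brute-force per-block solver, and the toy data are assumptions for illustration only, not the paper's algorithm or code.

```python
from itertools import combinations
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def sparse_pca_block_diag(cov: np.ndarray, k: int, threshold: float):
    """Approximate k-sparse PCA: threshold the covariance into blocks, solve each block."""
    d = cov.shape[0]
    mask = np.abs(cov) >= threshold              # keep only "large" covariances
    np.fill_diagonal(mask, True)
    n_blocks, labels = connected_components(csr_matrix(mask), directed=False)

    best_val, best_vec = -np.inf, np.zeros(d)
    for b in range(n_blocks):
        idx = np.flatnonzero(labels == b)
        sub = cov[np.ix_(idx, idx)]
        # Brute-force the k-sparse leading eigenvector inside the block (fine for small blocks).
        for support in combinations(range(len(idx)), min(k, len(idx))):
            s = list(support)
            vals, vecs = np.linalg.eigh(sub[np.ix_(s, s)])
            if vals[-1] > best_val:
                best_val = vals[-1]
                best_vec = np.zeros(d)
                best_vec[idx[s]] = vecs[:, -1]   # lift back to the original coordinates
    return best_val, best_vec

# Toy usage: two groups of 3 variables, each driven by its own latent factor.
rng = np.random.default_rng(1)
z = rng.normal(size=(500, 2))
X = np.hstack([z[:, [0]] + 0.1 * rng.normal(size=(500, 3)),
               z[:, [1]] + 0.1 * rng.normal(size=(500, 3))])
val, vec = sparse_pca_block_diag(np.cov(X.T), k=2, threshold=0.3)
print(round(val, 3), np.flatnonzero(vec))
```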
2410.23841 | Jianqun Zhou | Jianqun Zhou, Yuanlei Zheng, Wei Chen, Qianqian Zheng, Hui Su, Wei
Zhang, Rui Meng and Xiaoyu Shen | Beyond Content Relevance: Evaluating Instruction Following in Retrieval
Models | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction-following capabilities in LLMs have progressed significantly,
enabling more complex user interactions through detailed prompts. However,
retrieval systems have not matched these advances; most of them still rely on
traditional lexical and semantic matching techniques that fail to fully capture
user intent. Recent efforts have introduced instruction-aware retrieval models,
but these primarily focus on intrinsic content relevance, which neglects the
importance of customized preferences for broader document-level attributes.
This study evaluates the instruction-following capabilities of various
retrieval models beyond content relevance, including LLM-based dense retrieval
and reranking models. We develop InfoSearch, a novel retrieval evaluation
benchmark spanning six document-level attributes: Audience, Keyword, Format,
Language, Length, and Source, and introduce novel metrics -- Strict Instruction
Compliance Ratio (SICR) and Weighted Instruction Sensitivity Evaluation (WISE)
to accurately assess the models' responsiveness to instructions. Our findings
indicate that although fine-tuning models on instruction-aware retrieval
datasets and increasing model size enhance performance, most models still fall
short of instruction compliance.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 11:47:21 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 12:10:57 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhou",
"Jianqun",
""
],
[
"Zheng",
"Yuanlei",
""
],
[
"Chen",
"Wei",
""
],
[
"Zheng",
"Qianqian",
""
],
[
"Su",
"Hui",
""
],
[
"Zhang",
"Wei",
""
],
[
"Meng",
"Rui",
""
],
[
"Shen",
"Xiaoyu",
""
]
]
| TITLE: Beyond Content Relevance: Evaluating Instruction Following in Retrieval
Models
ABSTRACT: Instruction-following capabilities in LLMs have progressed significantly,
enabling more complex user interactions through detailed prompts. However,
retrieval systems have not matched these advances; most of them still rely on
traditional lexical and semantic matching techniques that fail to fully capture
user intent. Recent efforts have introduced instruction-aware retrieval models,
but these primarily focus on intrinsic content relevance, which neglects the
importance of customized preferences for broader document-level attributes.
This study evaluates the instruction-following capabilities of various
retrieval models beyond content relevance, including LLM-based dense retrieval
and reranking models. We develop InfoSearch, a novel retrieval evaluation
benchmark spanning six document-level attributes: Audience, Keyword, Format,
Language, Length, and Source, and introduce novel metrics -- Strict Instruction
Compliance Ratio (SICR) and Weighted Instruction Sensitivity Evaluation (WISE)
to accurately assess the models' responsiveness to instructions. Our findings
indicate that although fine-tuning models on instruction-aware retrieval
datasets and increasing model size enhance performance, most models still fall
short of instruction compliance.
| no_new_dataset | 0.925061 |
2411.00508 | Gi-Cheon Kang | Gi-Cheon Kang, Junghyun Kim, Kyuhwan Shim, Jun Ki Lee, Byoung-Tak
Zhang | CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural
Language Supervision | 27 pages | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Teaching robots desired skills in real-world environments remains
challenging, especially for non-experts. A key bottleneck is that collecting
robotic data often requires expertise or specialized hardware, limiting
accessibility and scalability. We posit that natural language offers an
intuitive and accessible interface for robot learning. To this end, we study
two aspects: (1) enabling non-experts to collect robotic data through natural
language supervision (e.g., "move the arm to the right") and (2) learning
robotic policies directly from this supervision. Specifically, we introduce a
data collection framework that collects robot demonstrations based on natural
language supervision and further augments these demonstrations. We then present
CLIP-RT, a vision-language-action (VLA) model that learns language-conditioned
visuomotor policies from this supervision. CLIP-RT adapts the pretrained CLIP
models and learns to predict language-based motion primitives via contrastive
imitation learning. We train CLIP-RT on the Open X-Embodiment dataset and
finetune it on in-domain data collected by our framework to learn diverse
skills. CLIP-RT demonstrates strong capabilities in learning novel manipulation
skills, outperforming the state-of-the-art model, OpenVLA (7B parameters), by
24% in average success rates, while using 7x fewer parameters (1B). We further
observe that CLIP-RT shows significant improvements in few-shot generalization.
Finally, through collaboration with humans or large pretrained models, we
demonstrate that CLIP-RT can further improve its generalization on challenging
robotic tasks.
| [
{
"version": "v1",
"created": "Fri, 1 Nov 2024 10:48:03 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 03:07:38 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 13:41:46 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Kang",
"Gi-Cheon",
""
],
[
"Kim",
"Junghyun",
""
],
[
"Shim",
"Kyuhwan",
""
],
[
"Lee",
"Jun Ki",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
]
| TITLE: CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural
Language Supervision
ABSTRACT: Teaching robots desired skills in real-world environments remains
challenging, especially for non-experts. A key bottleneck is that collecting
robotic data often requires expertise or specialized hardware, limiting
accessibility and scalability. We posit that natural language offers an
intuitive and accessible interface for robot learning. To this end, we study
two aspects: (1) enabling non-experts to collect robotic data through natural
language supervision (e.g., "move the arm to the right") and (2) learning
robotic policies directly from this supervision. Specifically, we introduce a
data collection framework that collects robot demonstrations based on natural
language supervision and further augments these demonstrations. We then present
CLIP-RT, a vision-language-action (VLA) model that learns language-conditioned
visuomotor policies from this supervision. CLIP-RT adapts the pretrained CLIP
models and learns to predict language-based motion primitives via contrastive
imitation learning. We train CLIP-RT on the Open X-Embodiment dataset and
finetune it on in-domain data collected by our framework to learn diverse
skills. CLIP-RT demonstrates strong capabilities in learning novel manipulation
skills, outperforming the state-of-the-art model, OpenVLA (7B parameters), by
24% in average success rates, while using 7x fewer parameters (1B). We further
observe that CLIP-RT shows significant improvements in few-shot generalization.
Finally, through collaboration with humans or large pretrained models, we
demonstrate that CLIP-RT can further improve its generalization on challenging
robotic tasks.
| no_new_dataset | 0.947769 |
2411.02951 | Jingwei Guan | Xingjian Tang, Jingwei Guan, Linge Li, Ran Shi, Youmei Zhang, Mengye
Lyu and Li Yan | LDPM: Towards undersampled MRI reconstruction with MR-VAE and Latent
Diffusion Prior | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models, as powerful generative models, have found a wide range of
applications and shown great potential in solving image reconstruction
problems. Some works attempted to solve MRI reconstruction with diffusion
models, but these methods operate directly in pixel space, leading to higher
computational costs for optimization and inference. Latent diffusion models,
pre-trained on natural images with rich visual priors, are expected to solve
the high computational cost problem in MRI reconstruction by operating in a
lower-dimensional latent space. However, direct application to MRI
reconstruction faces three key challenges: (1) absence of explicit control
mechanisms for medical fidelity, (2) domain gap between natural images and MR
physics, and (3) undefined data consistency in latent space. To address these
challenges, a novel Latent Diffusion Prior-based undersampled MRI
reconstruction (LDPM) method is proposed. Our LDPM framework addresses these
challenges by: (1) a sketch-guided pipeline with a two-step reconstruction
strategy, which balances perceptual quality and anatomical fidelity, (2) an
MRI-optimized VAE (MR-VAE), which achieves an improvement of approximately 3.92
dB in PSNR for undersampled MRI reconstruction compared to that with SD-VAE,
and (3) Dual-Stage Sampler, a modified version of the spaced DDPM
sampler, which enforces high-fidelity reconstruction in the latent space.
Experiments on the fastMRI dataset demonstrate the
state-of-the-art performance of the proposed method and its robustness across
various scenarios. The effectiveness of each module is also verified through
ablation experiments.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 09:51:59 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 14:16:27 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Tang",
"Xingjian",
""
],
[
"Guan",
"Jingwei",
""
],
[
"Li",
"Linge",
""
],
[
"Shi",
"Ran",
""
],
[
"Zhang",
"Youmei",
""
],
[
"Lyu",
"Mengye",
""
],
[
"Yan",
"Li",
""
]
]
| TITLE: LDPM: Towards undersampled MRI reconstruction with MR-VAE and Latent
Diffusion Prior
ABSTRACT: Diffusion models, as powerful generative models, have found a wide range of
applications and shown great potential in solving image reconstruction
problems. Some works attempted to solve MRI reconstruction with diffusion
models, but these methods operate directly in pixel space, leading to higher
computational costs for optimization and inference. Latent diffusion models,
pre-trained on natural images with rich visual priors, are expected to solve
the high computational cost problem in MRI reconstruction by operating in a
lower-dimensional latent space. However, direct application to MRI
reconstruction faces three key challenges: (1) absence of explicit control
mechanisms for medical fidelity, (2) domain gap between natural images and MR
physics, and (3) undefined data consistency in latent space. To address these
challenges, a novel Latent Diffusion Prior-based undersampled MRI
reconstruction (LDPM) method is proposed. Our LDPM framework addresses these
challenges by: (1) a sketch-guided pipeline with a two-step reconstruction
strategy, which balances perceptual quality and anatomical fidelity, (2) an
MRI-optimized VAE (MR-VAE), which achieves an improvement of approximately 3.92
dB in PSNR for undersampled MRI reconstruction compared to that with SD-VAE,
and (3) Dual-Stage Sampler, a modified version of the spaced DDPM
sampler, which enforces high-fidelity reconstruction in the latent space.
Experiments on the fastMRI dataset demonstrate the
state-of-the-art performance of the proposed method and its robustness across
various scenarios. The effectiveness of each module is also verified through
ablation experiments.
| no_new_dataset | 0.953144 |
2411.03418 | Mikhail Razumovskiy Mr. | Mikhail Razumovskiy, Boris Fomin, Denis Astanin | MARFA: an Effective Line-by-line Tool For Calculating Molecular
Absorption in Planetary Atmospheres | null | null | null | null | astro-ph.EP astro-ph.IM physics.ao-ph | http://creativecommons.org/licenses/by/4.0/ | We present MARFA (Molecular atmospheric Absorption with Rapid and Flexible
Analysis) -- an open-source line-by-line tool for calculating absorption
coefficients and cross-sections in planetary atmospheres, particularly under
conditions of uncertain spectroscopic data and missing continuum functions.
With an incorporated eleven-grid interpolation technique, MARFA shows good
performance in computing far-wing contributions for large line cut-offs.
The tool supports flexible parameterization, including line shape functions,
wing corrections, and user-defined atmospheric profiles, thus facilitating rapid
sensitivity studies for sparse datasets. Spectra are calculated at a high
resolution of about 5*10^-4 cm^-1, optimized for infrared and visible
spectral regions where HITRAN-formatted line data is available, yet adaptable
to other datasets with available line parameters. Output is provided either
as binary lookup-table files, directly compatible with radiative
transfer codes, or in a human-readable format for data analysis and
distribution. The MARFA tool is provided in two ways: through a web application
accessible at marfa.app for onboarding and educational usage, and as an
open-source code available in a public repository for advanced utilization,
development and contributions.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 18:58:27 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jan 2025 15:00:45 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Feb 2025 09:28:36 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Mar 2025 20:52:46 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Razumovskiy",
"Mikhail",
""
],
[
"Fomin",
"Boris",
""
],
[
"Astanin",
"Denis",
""
]
]
| TITLE: MARFA: an Effective Line-by-line Tool For Calculating Molecular
Absorption in Planetary Atmospheres
ABSTRACT: We present MARFA (Molecular atmospheric Absorption with Rapid and Flexible
Analysis) -- an open-source line-by-line tool for calculating absorption
coefficients and cross-sections in planetary atmospheres, particularly under
conditions of uncertain spectroscopic data and missing continuum functions.
With an incorporated eleven-grid interpolation technique, MARFA shows good
performance in computing far-wing contributions for large line cut-offs.
The tool supports flexible parameterization, including line shape functions,
wing corrections, and user-defined atmospheric profiles, thus facilitating rapid
sensitivity studies for sparse datasets. Spectra are calculated at a high
resolution of about 5*10^-4 cm^-1, optimized for infrared and visible
spectral regions where HITRAN-formatted line data is available, yet adaptable
to other datasets with available line parameters. Output is provided either
as binary lookup-table files, directly compatible with radiative
transfer codes, or in a human-readable format for data analysis and
distribution. The MARFA tool is provided in two ways: through a web application
accessible at marfa.app for onboarding and educational usage, and as an
open-source code available in a public repository for advanced utilization,
development and contributions.
| no_new_dataset | 0.946001 |
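To make the MARFA record above more concrete, here is a toy NumPy illustration of what a line-by-line calculation does: sum a line-shape profile over all spectral lines on a fine wavenumber grid. It is unrelated to MARFA's actual code and ignores pressure/temperature dependence, continuum terms, and the far-wing/eleven-grid treatment the abstract mentions; the line list below is made up.

```python
import numpy as np

def lorentz_absorption(grid_cm1, line_centers, line_strengths, gamma=0.08):
    """Absorption coefficient on a wavenumber grid as a sum of Lorentzian lines.

    grid_cm1       : wavenumber grid [cm^-1]
    line_centers   : line positions  [cm^-1]
    line_strengths : integrated line intensities (arbitrary units here)
    gamma          : Lorentz half-width at half-maximum [cm^-1]
    """
    k = np.zeros_like(grid_cm1)
    for nu0, s in zip(line_centers, line_strengths):
        # Area-normalized Lorentz profile centered at nu0, scaled by the line strength.
        k += s * (gamma / np.pi) / ((grid_cm1 - nu0) ** 2 + gamma ** 2)
    return k

# Fine grid (5e-4 cm^-1 step, comparable to the resolution quoted in the abstract)
# over a 1 cm^-1 window with three made-up lines.
grid = np.arange(1000.0, 1001.0, 5e-4)
k = lorentz_absorption(grid, line_centers=[1000.2, 1000.5, 1000.8],
                       line_strengths=[1.0, 0.5, 0.2])
print(grid[np.argmax(k)], k.max())
```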
2411.04426 | Yang Ding | Yang Ding, Yi Bu | Political Hegemony, Imitation Isomorphism, and Project Familiarity:
Instrumental Variables to Understand Funding Impact on Scholar Performance | This manuscript has been accepted by the Quantitative Science
Studies(QSS) | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contributes a new idea for exploring research funding effects on
scholar performance. By collecting details of 9,501 research grants received by
principal investigators from universities in the U.S. social sciences from 2000
to 2019 and data on their publications and citations in the Microsoft Academic
Graph and Web of Science bibliographic collections, we build a novel dataset of
grants and article counts, citations, and journal CiteScore. Based on this
dataset, we first introduce three instrumental variables (IVs) suitable for
isolating endogeneity issues in the study of competing grant effects, namely
scholars political hegemony in academia, imitation isomorphic behavior among
scholars, and project familiarity. Then, this study explains the research
funding effects by combining the three IVs with a two-stage least square (2SLS)
model. Also, we provide validity and robustness tests of these three IVs and
research funding effects. We find that our IVs serve the function of
exogenizing and isolating endogeneity in capturing the research funding effect.
Empirical findings show that receiving research funding increases a scholar's
research output and impact. While research funding doesn't significantly
increase high CiteScore publications, it reduces submissions to low-prestige
journals, reshaping journal selection strategies and raising the floor of
academic performance.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 04:38:45 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 18:37:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ding",
"Yang",
""
],
[
"Bu",
"Yi",
""
]
]
| TITLE: Political Hegemony, Imitation Isomorphism, and Project Familiarity:
Instrumental Variables to Understand Funding Impact on Scholar Performance
ABSTRACT: This paper contributes a new idea for exploring research funding effects on
scholar performance. By collecting details of 9,501 research grants received by
principal investigators from universities in the U.S. social sciences from 2000
to 2019 and data on their publications and citations in the Microsoft Academic
Graph and Web of Science bibliographic collections, we build a novel dataset of
grants and article counts, citations, and journal CiteScore. Based on this
dataset, we first introduce three instrumental variables (IVs) suitable for
isolating endogeneity issues in the study of competing grant effects, namely
scholars' political hegemony in academia, imitation isomorphic behavior among
scholars, and project familiarity. Then, this study explains the research
funding effects by combining the three IVs with a two-stage least square (2SLS)
model. Also, we provide validity and robustness tests of these three IVs and
research funding effects. We find that our IVs serve the function of
exogenizing and isolating endogeneity in capturing the research funding effect.
Empirical findings show that receiving research funding increases a scholar's
research output and impact. While research funding doesn't significantly
increase high CiteScore publications, it reduces submissions to low-prestige
journals, reshaping journal selection strategies and raising the floor of
academic performance.
| new_dataset | 0.958615 |
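For the record above: the two-stage least squares (2SLS) estimator it relies on has a compact form, sketched generically below with NumPy. This is not the authors' specification; the simulated variables and the single-instrument setup are assumptions used only to show why 2SLS removes the confounding that biases plain OLS.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, Z_instruments):
    """Plain 2SLS with an intercept: y ~ X_endog, instrumented by Z_instruments."""
    n = len(y)
    ones = np.ones((n, 1))
    Z = np.hstack([ones, Z_instruments])                 # first-stage design
    # Stage 1: project the endogenous regressor(s) onto the instrument space.
    X_hat = Z @ np.linalg.lstsq(Z, X_endog, rcond=None)[0]
    # Stage 2: OLS of the outcome on the fitted (exogenous part of the) regressor(s).
    X2 = np.hstack([ones, X_hat])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta                                          # [intercept, slope(s)]

# Toy check: one endogenous regressor, one valid instrument, true slope = 2.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 1))                              # instrument
u = rng.normal(size=(n, 1))                              # unobserved confounder
x = z + u + 0.1 * rng.normal(size=(n, 1))                # endogenous regressor
y = 2.0 * x + u + 0.1 * rng.normal(size=(n, 1))          # outcome
ols = np.linalg.lstsq(np.hstack([np.ones((n, 1)), x]), y, rcond=None)[0]
tsls = two_stage_least_squares(y, x, z)
print("OLS slope :", ols[1, 0], " 2SLS slope:", tsls[1, 0])
```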
2411.07527 | Junxi Liu | Junxi Liu, Yanyan Feng, Jiehai Chen, Yun Xue, Fenghuan Li | Prompt-enhanced Network for Hateful Meme Classification | Published in Proceedings of the Thirty-Third International Joint
Conference on Artificial Intelligence Main Track. Pages 6397-6405 | null | 10.24963/ijcai.2024/707 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamic expansion of social media has led to an inundation of hateful
memes on media platforms, accentuating the growing need for efficient
identification and removal. Acknowledging the constraints of conventional
multimodal hateful meme classification, which heavily depends on external
knowledge and poses the risk of including irrelevant or redundant content, we
developed Pen -- a prompt-enhanced network framework based on the prompt
learning approach. Specifically, after constructing the sequence through the
prompt method and encoding it with a language model, we performed region
information global extraction on the encoded sequence for multi-view
perception. By capturing global information about inference instances and
demonstrations, Pen facilitates category selection by fully leveraging sequence
information. This approach significantly improves model classification
accuracy. Additionally, to bolster the model's reasoning capabilities in the
feature space, we introduced prompt-aware contrastive learning into the
framework to improve the quality of sample feature distributions. Through
extensive ablation experiments on two public datasets, we evaluate the
effectiveness of the Pen framework, concurrently comparing it with
state-of-the-art model baselines. Our research findings highlight that Pen
surpasses manual prompt methods, showcasing superior generalization and
classification accuracy in hateful meme classification tasks. Our code is
available at https://github.com/juszzi/Pen.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 03:55:27 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 15:52:25 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liu",
"Junxi",
""
],
[
"Feng",
"Yanyan",
""
],
[
"Chen",
"Jiehai",
""
],
[
"Xue",
"Yun",
""
],
[
"Li",
"Fenghuan",
""
]
]
| TITLE: Prompt-enhanced Network for Hateful Meme Classification
ABSTRACT: The dynamic expansion of social media has led to an inundation of hateful
memes on media platforms, accentuating the growing need for efficient
identification and removal. Acknowledging the constraints of conventional
multimodal hateful meme classification, which heavily depends on external
knowledge and poses the risk of including irrelevant or redundant content, we
developed Pen -- a prompt-enhanced network framework based on the prompt
learning approach. Specifically, after constructing the sequence through the
prompt method and encoding it with a language model, we performed region
information global extraction on the encoded sequence for multi-view
perception. By capturing global information about inference instances and
demonstrations, Pen facilitates category selection by fully leveraging sequence
information. This approach significantly improves model classification
accuracy. Additionally, to bolster the model's reasoning capabilities in the
feature space, we introduced prompt-aware contrastive learning into the
framework to improve the quality of sample feature distributions. Through
extensive ablation experiments on two public datasets, we evaluate the
effectiveness of the Pen framework, concurrently comparing it with
state-of-the-art model baselines. Our research findings highlight that Pen
surpasses manual prompt methods, showcasing superior generalization and
classification accuracy in hateful meme classification tasks. Our code is
available at https://github.com/juszzi/Pen.
| no_new_dataset | 0.949201 |
2411.07621 | Youngseok Yoon | Youngseok Yoon, Sangwoo Hong, Hyungjun Joo, Yao Qin, Haewon Jeong,
Jungwoo Lee | Mix from Failure: Confusion-Pairing Mixup for Long-Tailed Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Long-tailed image recognition is a computer vision problem considering a
real-world class distribution rather than an artificially uniform one. Existing
methods typically detour the problem by i) adjusting a loss function, ii)
decoupling classifier learning, or iii) proposing a new multi-head architecture
called experts. In this paper, we tackle the problem from a different
perspective to augment a training dataset to enhance the sample diversity of
minority classes. Specifically, our method, namely Confusion-Pairing Mixup
(CP-Mix), estimates the confusion distribution of the model and handles the
data deficiency problem by augmenting samples from confusion pairs in
real-time. In this way, CP-Mix trains the model to mitigate its weakness and
distinguish a pair of classes it frequently misclassifies. In addition, CP-Mix
utilizes a novel mixup formulation to handle the bias in decision boundaries
that originated from the imbalanced dataset. Extensive experiments demonstrate
that CP-Mix outperforms existing methods for long-tailed image recognition and
successfully relieves the confusion of the classifier.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 08:08:31 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 21:23:34 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yoon",
"Youngseok",
""
],
[
"Hong",
"Sangwoo",
""
],
[
"Joo",
"Hyungjun",
""
],
[
"Qin",
"Yao",
""
],
[
"Jeong",
"Haewon",
""
],
[
"Lee",
"Jungwoo",
""
]
]
| TITLE: Mix from Failure: Confusion-Pairing Mixup for Long-Tailed Recognition
ABSTRACT: Long-tailed image recognition is a computer vision problem considering a
real-world class distribution rather than an artificially uniform one. Existing
methods typically detour the problem by i) adjusting a loss function, ii)
decoupling classifier learning, or iii) proposing a new multi-head architecture
called experts. In this paper, we tackle the problem from a different
perspective to augment a training dataset to enhance the sample diversity of
minority classes. Specifically, our method, namely Confusion-Pairing Mixup
(CP-Mix), estimates the confusion distribution of the model and handles the
data deficiency problem by augmenting samples from confusion pairs in
real-time. In this way, CP-Mix trains the model to mitigate its weakness and
distinguish a pair of classes it frequently misclassifies. In addition, CP-Mix
utilizes a novel mixup formulation to handle the bias in decision boundaries
that originated from the imbalanced dataset. Extensive experiments demonstrate
that CP-Mix outperforms existing methods for long-tailed image recognition and
successfully relieves the confusion of the classifier.
| no_new_dataset | 0.950365 |
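For the CP-Mix record above, the core idea (estimate which class pairs the model confuses, then mix samples drawn from exactly those pairs) can be illustrated in a few NumPy lines. This is a simplified reading, not the paper's mixup formulation or code; the Beta(alpha, alpha) mixing and the pair-sampling rule are assumptions.

```python
import numpy as np

def confusion_pair_probs(y_true, y_pred, n_classes):
    """Turn off-diagonal confusion counts into a sampling distribution over class pairs."""
    conf = np.zeros((n_classes, n_classes))
    np.add.at(conf, (y_true, y_pred), 1)
    np.fill_diagonal(conf, 0)                            # keep only mistakes
    pairs = [(i, j) for i in range(n_classes) for j in range(n_classes) if i != j]
    weights = np.array([conf[i, j] for i, j in pairs], dtype=float)
    weights = weights / weights.sum() if weights.sum() > 0 else np.full(len(pairs), 1 / len(pairs))
    return pairs, weights

def sample_confusion_mixup(X, y, pairs, weights, alpha=1.0, rng=np.random.default_rng(0)):
    """Draw one mixed sample from a confused class pair (inputs and one-hot labels are mixed)."""
    i, j = pairs[rng.choice(len(pairs), p=weights)]
    xi = X[rng.choice(np.flatnonzero(y == i))]
    xj = X[rng.choice(np.flatnonzero(y == j))]
    lam = rng.beta(alpha, alpha)
    x_mix = lam * xi + (1 - lam) * xj
    y_mix = np.zeros(len(np.unique(y)))
    y_mix[i], y_mix[j] = lam, 1 - lam
    return x_mix, y_mix

# Toy usage: a 3-class problem where the (pretend) model confuses classes 0 and 2.
X = np.random.default_rng(1).normal(size=(60, 5))
y = np.repeat([0, 1, 2], 20)
y_pred = y.copy(); y_pred[:5] = 2                        # class 0 misread as class 2
pairs, w = confusion_pair_probs(y, y_pred, n_classes=3)
x_mix, y_mix = sample_confusion_mixup(X, y, pairs, w)
print(y_mix)
```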
2411.12126 | Xiaomin Ouyang Dr. | Xiaomin Ouyang, Jason Wu, Tomoyoshi Kimura, Yihan Lin, Gunjan Verma,
Tarek Abdelzaher, Mani Srivastava | MMBind: Unleashing the Potential of Distributed and Heterogeneous Data
for Multimodal Learning in IoT | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multimodal sensing systems are increasingly prevalent in various real-world
applications. Most existing multimodal learning approaches heavily rely on
training with a large amount of synchronized, complete multimodal data.
However, such a setting is impractical in real-world IoT sensing applications
where data is typically collected by distributed nodes with heterogeneous data
modalities, and is also rarely labeled. In this paper, we propose MMBind, a new
data binding approach for multimodal learning on distributed and heterogeneous
IoT data. The key idea of MMBind is to construct a pseudo-paired multimodal
dataset for model training by binding data from disparate sources and
incomplete modalities through a sufficiently descriptive shared modality. We
also propose a weighted contrastive learning approach to handle domain shifts
among disparate data, coupled with an adaptive multimodal learning architecture
capable of training models with heterogeneous modality combinations.
Evaluations on ten real-world multimodal datasets highlight that MMBind
outperforms state-of-the-art baselines under varying degrees of data
incompleteness and domain shift, and holds promise for advancing multimodal
foundation model training in IoT applications (the source code is
available via https://github.com/nesl/multimodal-bind).
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 23:34:07 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 16:08:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ouyang",
"Xiaomin",
""
],
[
"Wu",
"Jason",
""
],
[
"Kimura",
"Tomoyoshi",
""
],
[
"Lin",
"Yihan",
""
],
[
"Verma",
"Gunjan",
""
],
[
"Abdelzaher",
"Tarek",
""
],
[
"Srivastava",
"Mani",
""
]
]
| TITLE: MMBind: Unleashing the Potential of Distributed and Heterogeneous Data
for Multimodal Learning in IoT
ABSTRACT: Multimodal sensing systems are increasingly prevalent in various real-world
applications. Most existing multimodal learning approaches heavily rely on
training with a large amount of synchronized, complete multimodal data.
However, such a setting is impractical in real-world IoT sensing applications
where data is typically collected by distributed nodes with heterogeneous data
modalities, and is also rarely labeled. In this paper, we propose MMBind, a new
data binding approach for multimodal learning on distributed and heterogeneous
IoT data. The key idea of MMBind is to construct a pseudo-paired multimodal
dataset for model training by binding data from disparate sources and
incomplete modalities through a sufficiently descriptive shared modality. We
also propose a weighted contrastive learning approach to handle domain shifts
among disparate data, coupled with an adaptive multimodal learning architecture
capable of training models with heterogeneous modality combinations.
Evaluations on ten real-world multimodal datasets highlight that MMBind
outperforms state-of-the-art baselines under varying degrees of data
incompleteness and domain shift, and holds promise for advancing multimodal
foundation model training in IoT applications (the source code is
available via https://github.com/nesl/multimodal-bind).
| no_new_dataset | 0.947088 |
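For the MMBind record above, the binding step (pairing examples from disjoint sources through a modality they share) can be illustrated by nearest-neighbour matching in the shared modality. This is only a schematic reading with made-up arrays, not the MMBind code; the cosine-similarity matching rule is an assumption.

```python
import numpy as np

def bind_via_shared_modality(shared_a, extra_a, shared_b, extra_b):
    """Build pseudo-pairs (extra_a[i], extra_b[j]) by matching rows of the shared modality.

    shared_a, shared_b : shared-modality features from two disjoint datasets (n_a x d, n_b x d)
    extra_a, extra_b   : the modalities that never co-occur in the raw data
    """
    a = shared_a / np.linalg.norm(shared_a, axis=1, keepdims=True)
    b = shared_b / np.linalg.norm(shared_b, axis=1, keepdims=True)
    sim = a @ b.T                                        # cosine similarity across sources
    match = sim.argmax(axis=1)                           # for every A-sample, its closest B-sample
    pseudo_pairs = [(extra_a[i], extra_b[j], sim[i, j]) for i, j in enumerate(match)]
    return pseudo_pairs                                  # (modality-1 sample, modality-2 sample, score)

# Toy usage: dataset A has (IMU, audio), dataset B has (IMU, depth); IMU is the shared modality.
rng = np.random.default_rng(0)
imu_a, audio_a = rng.normal(size=(10, 32)), rng.normal(size=(10, 128))
imu_b, depth_b = rng.normal(size=(15, 32)), rng.normal(size=(15, 64))
pairs = bind_via_shared_modality(imu_a, audio_a, imu_b, depth_b)
print(len(pairs), pairs[0][2])
```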
2411.13982 | Jordan Vice | Jordan Vice, Naveed Akhtar, Mubarak Shah, Richard Hartley, Ajmal Mian | Safety Without Semantic Disruptions: Editing-free Safe Image Generation
via Context-preserving Dual Latent Reconstruction | This research is supported by the NISDRG project #20100007, funded by
the Australian Government | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training multimodal generative models on large, uncurated datasets can result
in users being exposed to harmful, unsafe and controversial or
culturally-inappropriate outputs. While model editing has been proposed to
remove or filter undesirable concepts in embedding and latent spaces, it can
inadvertently damage learned manifolds, distorting concepts in close semantic
proximity. We identify limitations in current model editing techniques, showing
that even benign, proximal concepts may become misaligned. To address the need
for safe content generation, we leverage safe embeddings and a modified
diffusion process with tunable weighted summation in the latent space to
generate safer images. Our method preserves global context without compromising
the structural integrity of the learned manifolds. We achieve state-of-the-art
results on safe image generation benchmarks and offer intuitive control over
the level of model safety. We identify trade-offs between safety and
censorship, which presents a necessary perspective in the development of
ethical AI models. We will release our code.
Keywords: Text-to-Image Models, Generative AI, Safety, Reliability, Model
Editing
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 09:47:13 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 14:45:55 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Vice",
"Jordan",
""
],
[
"Akhtar",
"Naveed",
""
],
[
"Shah",
"Mubarak",
""
],
[
"Hartley",
"Richard",
""
],
[
"Mian",
"Ajmal",
""
]
]
| TITLE: Safety Without Semantic Disruptions: Editing-free Safe Image Generation
via Context-preserving Dual Latent Reconstruction
ABSTRACT: Training multimodal generative models on large, uncurated datasets can result
in users being exposed to harmful, unsafe and controversial or
culturally-inappropriate outputs. While model editing has been proposed to
remove or filter undesirable concepts in embedding and latent spaces, it can
inadvertently damage learned manifolds, distorting concepts in close semantic
proximity. We identify limitations in current model editing techniques, showing
that even benign, proximal concepts may become misaligned. To address the need
for safe content generation, we leverage safe embeddings and a modified
diffusion process with tunable weighted summation in the latent space to
generate safer images. Our method preserves global context without compromising
the structural integrity of the learned manifolds. We achieve state-of-the-art
results on safe image generation benchmarks and offer intuitive control over
the level of model safety. We identify trade-offs between safety and
censorship, which presents a necessary perspective in the development of
ethical AI models. We will release our code.
Keywords: Text-to-Image Models, Generative AI, Safety, Reliability, Model
Editing
| no_new_dataset | 0.936343 |
2412.04814 | Yibin Wang | Yibin Wang, Zhiyu Tan, Junyan Wang, Xiaomeng Yang, Cheng Jin, Hao Li | LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment | Project page: https://codegoat24.github.io/LiFT | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in text-to-video (T2V) generative models have shown
impressive capabilities. However, these models are still inadequate in aligning
synthesized videos with human preferences (e.g., accurately reflecting text
descriptions), which is particularly difficult to address, as human preferences
are subjective and challenging to formalize as objective functions. Existing
studies train video quality assessment models that rely on human-annotated
ratings for video evaluation but overlook the reasoning behind evaluations,
limiting their ability to capture nuanced human criteria. Moreover, aligning
T2V models using video-based human feedback remains unexplored. Therefore, this
paper proposes LiFT, the first method designed to leverage human feedback for
T2V model alignment. Specifically, we first construct a Human Rating Annotation
dataset, LiFT-HRA, consisting of approximately 10k human annotations, each
including a score and its corresponding rationale. Based on this, we train a
reward model, LiFT-Critic, to learn the reward function effectively, which serves as
a proxy for human judgment, measuring the alignment between given videos and
human expectations. Lastly, we leverage the learned reward function to align
the T2V model by maximizing the reward-weighted likelihood. As a case study, we
apply our pipeline to CogVideoX-2B, showing that the fine-tuned model
outperforms the CogVideoX-5B across all 16 metrics, highlighting the potential
of human feedback in improving the alignment and quality of synthesized videos.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2024 07:16:14 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Dec 2024 11:57:46 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 02:43:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Yibin",
""
],
[
"Tan",
"Zhiyu",
""
],
[
"Wang",
"Junyan",
""
],
[
"Yang",
"Xiaomeng",
""
],
[
"Jin",
"Cheng",
""
],
[
"Li",
"Hao",
""
]
]
| TITLE: LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment
ABSTRACT: Recent advances in text-to-video (T2V) generative models have shown
impressive capabilities. However, these models are still inadequate in aligning
synthesized videos with human preferences (e.g., accurately reflecting text
descriptions), which is particularly difficult to address, as human preferences
are subjective and challenging to formalize as objective functions. Existing
studies train video quality assessment models that rely on human-annotated
ratings for video evaluation but overlook the reasoning behind evaluations,
limiting their ability to capture nuanced human criteria. Moreover, aligning
T2V models using video-based human feedback remains unexplored. Therefore, this
paper proposes LiFT, the first method designed to leverage human feedback for
T2V model alignment. Specifically, we first construct a Human Rating Annotation
dataset, LiFT-HRA, consisting of approximately 10k human annotations, each
including a score and its corresponding rationale. Based on this, we train a
reward model, LiFT-Critic, to learn the reward function effectively, which serves as
a proxy for human judgment, measuring the alignment between given videos and
human expectations. Lastly, we leverage the learned reward function to align
the T2V model by maximizing the reward-weighted likelihood. As a case study, we
apply our pipeline to CogVideoX-2B, showing that the fine-tuned model
outperforms the CogVideoX-5B across all 16 metrics, highlighting the potential
of human feedback in improving the alignment and quality of synthesized videos.
| new_dataset | 0.961786 |
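For the LiFT record above: maximizing a reward-weighted likelihood amounts to weighting each sample's log-likelihood by a (normalized) function of its reward. The snippet below sketches that objective generically on per-sample log-probabilities; it is not LiFT's training code, and the exponential reward normalization is an assumption.

```python
import numpy as np

def reward_weighted_nll(log_probs: np.ndarray, rewards: np.ndarray, beta: float = 1.0) -> float:
    """Negative of a reward-weighted log-likelihood.

    log_probs : per-sample log-likelihood of the model's own outputs under the model
    rewards   : scalar reward assigned to each sample (e.g. by a learned critic)
    beta      : temperature controlling how sharply high-reward samples are up-weighted
    """
    w = np.exp(beta * (rewards - rewards.max()))     # numerically stable exponential weighting
    w = w / w.sum()                                  # normalize to a distribution over samples
    return float(-(w * log_probs).sum())             # minimize this to favour high-reward samples

# Toy usage: three generated samples, the second one preferred by the reward model.
log_probs = np.array([-12.0, -15.0, -11.0])
rewards = np.array([0.2, 0.9, 0.1])
print(reward_weighted_nll(log_probs, rewards))
```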
2412.07260 | Peipeng Yu | Peipeng Yu, Hui Gao, Jianwei Fei, Zhitao Huang, Zhihua Xia, Chip-Hong
Chang | DFREC: DeepFake Identity Recovery Based on Identity-aware Masked
Autoencoder | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in deepfake forensics have primarily focused on improving the
classification accuracy and generalization performance. Despite enormous
progress in detection accuracy across a wide variety of forgery algorithms,
existing algorithms lack intuitive interpretability and identity traceability
to help with forensic investigation. In this paper, we introduce a novel
DeepFake Identity Recovery scheme (DFREC) to fill this gap. DFREC aims to
recover the pair of source and target faces from a deepfake image to facilitate
deepfake identity tracing and reduce the risk of deepfake attack. It comprises
three key components: an Identity Segmentation Module (ISM), a Source Identity
Reconstruction Module (SIRM), and a Target Identity Reconstruction Module
(TIRM). The ISM segments the input face into distinct source and target face
information, and the SIRM reconstructs the source face and extracts latent
target identity features with the segmented source information. The background
context and latent target identity features are synergetically fused by a
Masked Autoencoder in the TIRM to reconstruct the target face. We evaluate
DFREC on six different high-fidelity face-swapping attacks on FaceForensics++,
CelebaMegaFS and FFHQ-E4S datasets, which demonstrate its superior recovery
performance over state-of-the-art deepfake recovery algorithms. In addition,
DFREC is the only scheme that can recover both pristine source and target faces
directly from the forgery image with high fidelity.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 07:42:02 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 14:40:41 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yu",
"Peipeng",
""
],
[
"Gao",
"Hui",
""
],
[
"Fei",
"Jianwei",
""
],
[
"Huang",
"Zhitao",
""
],
[
"Xia",
"Zhihua",
""
],
[
"Chang",
"Chip-Hong",
""
]
]
| TITLE: DFREC: DeepFake Identity Recovery Based on Identity-aware Masked
Autoencoder
ABSTRACT: Recent advances in deepfake forensics have primarily focused on improving the
classification accuracy and generalization performance. Despite enormous
progress in detection accuracy across a wide variety of forgery algorithms,
existing algorithms lack intuitive interpretability and identity traceability
to help with forensic investigation. In this paper, we introduce a novel
DeepFake Identity Recovery scheme (DFREC) to fill this gap. DFREC aims to
recover the pair of source and target faces from a deepfake image to facilitate
deepfake identity tracing and reduce the risk of deepfake attack. It comprises
three key components: an Identity Segmentation Module (ISM), a Source Identity
Reconstruction Module (SIRM), and a Target Identity Reconstruction Module
(TIRM). The ISM segments the input face into distinct source and target face
information, and the SIRM reconstructs the source face and extracts latent
target identity features with the segmented source information. The background
context and latent target identity features are synergetically fused by a
Masked Autoencoder in the TIRM to reconstruct the target face. We evaluate
DFREC on six different high-fidelity face-swapping attacks on FaceForensics++,
CelebaMegaFS and FFHQ-E4S datasets, which demonstrate its superior recovery
performance over state-of-the-art deepfake recovery algorithms. In addition,
DFREC is the only scheme that can recover both pristine source and target faces
directly from the forgery image with high fidelity.
| no_new_dataset | 0.947137 |
2412.07804 | Yifei Chen | Shenghao Zhu, Yifei Chen, Shuo Jiang, Weihong Chen, Chang Liu, Yuanhan
Wang, Xu Chen, Yifan Ke, Feiwei Qin, Changmiao Wang, Zhu Zhu | XLSTM-HVED: Cross-Modal Brain Tumor Segmentation and MRI Reconstruction
Method Using Vision XLSTM and Heteromodal Variational Encoder-Decoder | 5 pages, 2 figures | ISBI 2025 | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurogliomas are among the most aggressive forms of cancer, presenting
considerable challenges in both treatment and monitoring due to their
unpredictable biological behavior. Magnetic resonance imaging (MRI) is
currently the preferred method for diagnosing and monitoring gliomas. However,
the lack of specific imaging techniques often compromises the accuracy of tumor
segmentation during the imaging process. To address this issue, we introduce
the XLSTM-HVED model. This model integrates a hetero-modal encoder-decoder
framework with the Vision XLSTM module to reconstruct missing MRI modalities.
By deeply fusing spatial and temporal features, it enhances tumor segmentation
performance. The key innovation of our approach is the Self-Attention
Variational Encoder (SAVE) module, which improves the integration of modal
features. Additionally, it optimizes the interaction of features between
segmentation and reconstruction tasks through the Squeeze-Fusion-Excitation
Cross Awareness (SFECA) module. Our experiments using the BraTS 2024 dataset
demonstrate that our model significantly outperforms existing advanced methods
in handling cases where modalities are missing. Our source code is available at
https://github.com/Quanato607/XLSTM-HVED.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 09:04:02 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 05:22:41 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 10:09:25 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhu",
"Shenghao",
""
],
[
"Chen",
"Yifei",
""
],
[
"Jiang",
"Shuo",
""
],
[
"Chen",
"Weihong",
""
],
[
"Liu",
"Chang",
""
],
[
"Wang",
"Yuanhan",
""
],
[
"Chen",
"Xu",
""
],
[
"Ke",
"Yifan",
""
],
[
"Qin",
"Feiwei",
""
],
[
"Wang",
"Changmiao",
""
],
[
"Zhu",
"Zhu",
""
]
]
| TITLE: XLSTM-HVED: Cross-Modal Brain Tumor Segmentation and MRI Reconstruction
Method Using Vision XLSTM and Heteromodal Variational Encoder-Decoder
ABSTRACT: Neurogliomas are among the most aggressive forms of cancer, presenting
considerable challenges in both treatment and monitoring due to their
unpredictable biological behavior. Magnetic resonance imaging (MRI) is
currently the preferred method for diagnosing and monitoring gliomas. However,
the lack of specific imaging techniques often compromises the accuracy of tumor
segmentation during the imaging process. To address this issue, we introduce
the XLSTM-HVED model. This model integrates a hetero-modal encoder-decoder
framework with the Vision XLSTM module to reconstruct missing MRI modalities.
By deeply fusing spatial and temporal features, it enhances tumor segmentation
performance. The key innovation of our approach is the Self-Attention
Variational Encoder (SAVE) module, which improves the integration of modal
features. Additionally, it optimizes the interaction of features between
segmentation and reconstruction tasks through the Squeeze-Fusion-Excitation
Cross Awareness (SFECA) module. Our experiments using the BraTS 2024 dataset
demonstrate that our model significantly outperforms existing advanced methods
in handling cases where modalities are missing. Our source code is available at
https://github.com/Quanato607/XLSTM-HVED.
| no_new_dataset | 0.94474 |
2412.09412 | Chiara Lionello | Chiara Lionello, Matteo Becchi, Simone Martino, Giovanni M. Pavan | Relevant, hidden, and frustrated information in high-dimensional
analyses of complex dynamical systems with internal noise | null | null | null | null | physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Extracting meaningful information from trajectory data to understand complex
molecular systems might be non-trivial. High-dimensional analyses are typically
assumed to be desirable, if not required, to prevent losing important
information. But to what extent such high-dimensionality is really
needed/beneficial often remains unclear. Here we challenge such a fundamental
general problem. As a representative case of a system with internal dynamical
complexity, we study atomistic molecular dynamics trajectories of liquid water
and ice coexisting in dynamical equilibrium at the solid/liquid transition
temperature. To attain an intrinsically high-dimensional analysis, we use as an
example the Smooth Overlap of Atomic Positions (SOAP) descriptor, obtaining a
large dataset containing 2.56e6 576-dimensional SOAP vectors that we analyze in
various ways. Our results demonstrate how the time-series data contained in one
single SOAP dimension accounting for only <0.001% of the total dataset's variance
(neglected and discarded in typical variance-based dimensionality-reduction
approaches) allows resolving a remarkable amount of information,
classifying/discriminating the bulk of water and ice phases, as well as two
solid-interface and liquid-interface layers as four statistically distinct
dynamical molecular environments. Adding more dimensions to this one is found
not only ineffective but even detrimental to the analysis due to recurrent
negligible-information/non-negligible-noise additions and "frustrated
information" phenomena leading to information loss. Such effects are proven
general and are observed also in completely different systems and descriptors'
combinations. This shows how high-dimensional analyses are not necessarily
better than low-dimensional ones to elucidate the internal complexity of
physical/chemical systems, especially when these are characterized by
non-negligible internal noise.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 16:19:48 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Dec 2024 17:23:57 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Dec 2024 16:46:24 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Mar 2025 15:56:23 GMT"
},
{
"version": "v5",
"created": "Wed, 5 Mar 2025 09:06:41 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Lionello",
"Chiara",
""
],
[
"Becchi",
"Matteo",
""
],
[
"Martino",
"Simone",
""
],
[
"Pavan",
"Giovanni M.",
""
]
]
| TITLE: Relevant, hidden, and frustrated information in high-dimensional
analyses of complex dynamical systems with internal noise
ABSTRACT: Extracting meaningful information from trajectory data to understand complex
molecular systems might be non-trivial. High-dimensional analyses are typically
assumed to be desirable, if not required, to prevent losing important
information. But to what extent such high-dimensionality is really
needed/beneficial often remains unclear. Here we tackle this fundamental,
general problem. As a representative case of a system with internal dynamical
complexity, we study atomistic molecular dynamics trajectories of liquid water
and ice coexisting in dynamical equilibrium at the solid/liquid transition
temperature. To attain an intrinsically high-dimensional analysis, we use as an
example the Smooth Overlap of Atomic Positions (SOAP) descriptor, obtaining a
large dataset containing 2.56e6 576-dimensional SOAP vectors that we analyze in
various ways. Our results demonstrate how the time-series data contained in one
single SOAP dimension accounting for only <0.001% of the total dataset's variance
(neglected and discarded in typical variance-based dimensionality-reduction
approaches) allows resolving a remarkable amount of information,
classifying/discriminating the bulk of water and ice phases, as well as two
solid-interface and liquid-interface layers as four statistically distinct
dynamical molecular environments. Adding more dimensions to this one is found
not only ineffective but even detrimental to the analysis due to recurrent
negligible-information/non-negligible-noise additions and "frustrated
information" phenomena leading to information loss. Such effects are proven
general and are observed also in completely different systems and descriptors'
combinations. This shows how high-dimensional analyses are not necessarily
better than low-dimensional ones to elucidate the internal complexity of
physical/chemical systems, especially when these are characterized by
non-negligible internal noise.
| no_new_dataset | 0.946498 |
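As an illustrative aside to the record above (arXiv 2412.09412): the sketch below shows, in plain NumPy, the kind of per-dimension variance bookkeeping the abstract describes, i.e. ranking descriptor components by the fraction of total variance they carry and extracting one low-variance component as a time series. This is not the authors' code; the array shapes and the synthetic placeholder data are assumptions.

```python
import numpy as np

# Minimal sketch (synthetic data, not the paper's SOAP vectors): rank the
# descriptor dimensions by the fraction of total variance they carry, then
# extract the time series of a single low-variance component.
rng = np.random.default_rng(0)
n_frames, n_dims = 10_000, 576                    # assumed shapes
soap = rng.normal(size=(n_frames, n_dims)) * rng.lognormal(sigma=3.0, size=n_dims)

per_dim_var = soap.var(axis=0)
frac_var = per_dim_var / per_dim_var.sum()

# The dimension a purely variance-based reduction would discard first:
low_dim = int(np.argmin(frac_var))
signal = soap[:, low_dim]                         # single-dimension time series
print(f"dim {low_dim} carries {frac_var[low_dim]:.2e} of the total variance")
```

Any downstream clustering or classification of `signal` (e.g. over time windows) would then test how much information that single component actually resolves.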
2412.09601 | Xizi Wang | Xizi Wang, Feng Cheng, Ziyang Wang, Huiyu Wang, Md Mohaiminul Islam,
Lorenzo Torresani, Mohit Bansal, Gedas Bertasius, David Crandall | TimeRefine: Temporal Grounding with Time Refining Video LLM | null | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Video temporal grounding aims to localize relevant temporal boundaries in a
video given a textual prompt. Recent work has focused on enabling Video LLMs to
perform video temporal grounding via next-token prediction of temporal
timestamps. However, accurately localizing timestamps in videos remains
challenging for Video LLMs when relying solely on temporal token prediction.
Our proposed TimeRefine addresses this challenge in two ways. First, instead of
directly predicting the start and end timestamps, we reformulate the temporal
grounding task as a temporal refining task: the model first makes rough
predictions and then refines them by predicting offsets to the target segment.
This refining process is repeated multiple times, through which the model
progressively self-improves its temporal localization accuracy. Second, to
enhance the model's temporal perception capabilities, we incorporate an
auxiliary prediction head that penalizes the model more if a predicted segment
deviates further from the ground truth, thus encouraging the model to make
closer and more accurate predictions. Our plug-and-play method can be
integrated into most LLM-based temporal grounding approaches. The experimental
results demonstrate that TimeRefine achieves 3.6% and 5.0% mIoU improvements on
the ActivityNet and Charades-STA datasets, respectively. Code and pretrained
models will be released.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 18:59:11 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 07:06:15 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Xizi",
""
],
[
"Cheng",
"Feng",
""
],
[
"Wang",
"Ziyang",
""
],
[
"Wang",
"Huiyu",
""
],
[
"Islam",
"Md Mohaiminul",
""
],
[
"Torresani",
"Lorenzo",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Bertasius",
"Gedas",
""
],
[
"Crandall",
"David",
""
]
]
| TITLE: TimeRefine: Temporal Grounding with Time Refining Video LLM
ABSTRACT: Video temporal grounding aims to localize relevant temporal boundaries in a
video given a textual prompt. Recent work has focused on enabling Video LLMs to
perform video temporal grounding via next-token prediction of temporal
timestamps. However, accurately localizing timestamps in videos remains
challenging for Video LLMs when relying solely on temporal token prediction.
Our proposed TimeRefine addresses this challenge in two ways. First, instead of
directly predicting the start and end timestamps, we reformulate the temporal
grounding task as a temporal refining task: the model first makes rough
predictions and then refines them by predicting offsets to the target segment.
This refining process is repeated multiple times, through which the model
progressively self-improves its temporal localization accuracy. Second, to
enhance the model's temporal perception capabilities, we incorporate an
auxiliary prediction head that penalizes the model more if a predicted segment
deviates further from the ground truth, thus encouraging the model to make
closer and more accurate predictions. Our plug-and-play method can be
integrated into most LLM-based temporal grounding approaches. The experimental
results demonstrate that TimeRefine achieves 3.6% and 5.0% mIoU improvements on
the ActivityNet and Charades-STA datasets, respectively. Code and pretrained
models will be released.
| no_new_dataset | 0.948728 |
2412.12843 | Xianlei Long | Xiaxin Zhu, Fangming Guo, Xianlei Long, Qingyi Gu, Chao Chen, Fuqiang
Gu | SLTNet: Efficient Event-based Semantic Segmentation with Spike-driven
Lightweight Transformer-based Networks | Submitted to 2025 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2025) | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Event-based semantic segmentation has great potential in autonomous driving
and robotics due to the advantages of event cameras, such as high dynamic
range, low latency, and low power cost. Unfortunately, current artificial
neural network (ANN)-based segmentation methods suffer from high computational
demands, the requirements for image frames, and massive energy consumption,
limiting their efficiency and application on resource-constrained edge/mobile
platforms. To address these problems, we introduce SLTNet, a spike-driven
lightweight transformer-based network designed for event-based semantic
segmentation. Specifically, SLTNet is built on efficient spike-driven
convolution blocks (SCBs) to extract rich semantic features while reducing the
model's parameters. Then, to enhance the long-range contextual feature
interaction, we propose novel spike-driven transformer blocks (STBs) with
binary mask operations. Based on these basic blocks, SLTNet employs a
high-efficiency single-branch architecture while maintaining the low energy
consumption of the Spiking Neural Network (SNN). Finally, extensive experiments
on DDD17 and DSEC-Semantic datasets demonstrate that SLTNet outperforms
state-of-the-art (SOTA) SNN-based methods by up to 9.06% and 9.39% mIoU,
respectively, with 4.58x lower energy consumption and 114 FPS
inference speed. Our code is open-sourced and available at
https://github.com/longxianlei/SLTNet-v1.0.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 12:11:04 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 09:03:18 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhu",
"Xiaxin",
""
],
[
"Guo",
"Fangming",
""
],
[
"Long",
"Xianlei",
""
],
[
"Gu",
"Qingyi",
""
],
[
"Chen",
"Chao",
""
],
[
"Gu",
"Fuqiang",
""
]
]
| TITLE: SLTNet: Efficient Event-based Semantic Segmentation with Spike-driven
Lightweight Transformer-based Networks
ABSTRACT: Event-based semantic segmentation has great potential in autonomous driving
and robotics due to the advantages of event cameras, such as high dynamic
range, low latency, and low power cost. Unfortunately, current artificial
neural network (ANN)-based segmentation methods suffer from high computational
demands, the requirements for image frames, and massive energy consumption,
limiting their efficiency and application on resource-constrained edge/mobile
platforms. To address these problems, we introduce SLTNet, a spike-driven
lightweight transformer-based network designed for event-based semantic
segmentation. Specifically, SLTNet is built on efficient spike-driven
convolution blocks (SCBs) to extract rich semantic features while reducing the
model's parameters. Then, to enhance the long-range contextual feature
interaction, we propose novel spike-driven transformer blocks (STBs) with
binary mask operations. Based on these basic blocks, SLTNet employs a
high-efficiency single-branch architecture while maintaining the low energy
consumption of the Spiking Neural Network (SNN). Finally, extensive experiments
on DDD17 and DSEC-Semantic datasets demonstrate that SLTNet outperforms
state-of-the-art (SOTA) SNN-based methods by up to 9.06% and 9.39% mIoU,
respectively, with 4.58x lower energy consumption and 114 FPS
inference speed. Our code is open-sourced and available at
https://github.com/longxianlei/SLTNet-v1.0.
| no_new_dataset | 0.948585 |
2412.15050 | ZhiFei Chen | Zhifei Chen, Tianshuo Xu, Wenhang Ge, Leyi Wu, Dongyu Yan, Jing He,
Luozhou Wang, Lu Zeng, Shunsi Zhang, Yingcong Chen | Uni-Renderer: Unifying Rendering and Inverse Rendering Via Dual Stream
Diffusion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Rendering and inverse rendering are pivotal tasks in both computer vision and
graphics. The rendering equation is the core of the two tasks, as an ideal
conditional distribution transfer function from intrinsic properties to RGB
images. Despite the promising results of existing rendering methods, they
merely approximate the ideal estimation for a specific scene and come with a
high computational cost. Additionally, the inverse conditional distribution
transfer is intractable due to the inherent ambiguity. To address these
challenges, we propose a data-driven method that jointly models rendering and
inverse rendering as two conditional generation tasks within a single diffusion
framework. Inspired by UniDiffuser, we utilize two distinct time schedules to
model both tasks, and with a tailored dual streaming module, we achieve
cross-conditioning of two pre-trained diffusion models. This unified approach,
named Uni-Renderer, allows the two processes to facilitate each other through a
cycle-consistent constrain, mitigating ambiguity by enforcing consistency
between intrinsic properties and rendered images. Combined with a meticulously
prepared dataset, our method effectively decomposition of intrinsic properties
and demonstrates a strong capability to recognize changes during rendering. We
will open-source our training and inference code to the public, fostering
further research and development in this area.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 16:57:45 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Dec 2024 03:57:52 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jan 2025 14:33:42 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 02:09:23 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chen",
"Zhifei",
""
],
[
"Xu",
"Tianshuo",
""
],
[
"Ge",
"Wenhang",
""
],
[
"Wu",
"Leyi",
""
],
[
"Yan",
"Dongyu",
""
],
[
"He",
"Jing",
""
],
[
"Wang",
"Luozhou",
""
],
[
"Zeng",
"Lu",
""
],
[
"Zhang",
"Shunsi",
""
],
[
"Chen",
"Yingcong",
""
]
]
| TITLE: Uni-Renderer: Unifying Rendering and Inverse Rendering Via Dual Stream
Diffusion
ABSTRACT: Rendering and inverse rendering are pivotal tasks in both computer vision and
graphics. The rendering equation is the core of the two tasks, as an ideal
conditional distribution transfer function from intrinsic properties to RGB
images. Despite the promising results of existing rendering methods, they
merely approximate the ideal estimation for a specific scene and come with a
high computational cost. Additionally, the inverse conditional distribution
transfer is intractable due to the inherent ambiguity. To address these
challenges, we propose a data-driven method that jointly models rendering and
inverse rendering as two conditional generation tasks within a single diffusion
framework. Inspired by UniDiffuser, we utilize two distinct time schedules to
model both tasks, and with a tailored dual streaming module, we achieve
cross-conditioning of two pre-trained diffusion models. This unified approach,
named Uni-Renderer, allows the two processes to facilitate each other through a
cycle-consistent constrain, mitigating ambiguity by enforcing consistency
between intrinsic properties and rendered images. Combined with a meticulously
prepared dataset, our method effectively decomposition of intrinsic properties
and demonstrates a strong capability to recognize changes during rendering. We
will open-source our training and inference code to the public, fostering
further research and development in this area.
| no_new_dataset | 0.936576 |
2412.18377 | Guy Kushilevitz | Shani Goren, Oren Kalinsky, Tomer Stav, Yuri Rapoport, Yaron
Fairstein, Ram Yazdi, Nachshon Cohen, Alexander Libov, Guy Kushilevitz | ChaI-TeA: A Benchmark for Evaluating Autocompletion of Interactions with
LLM-based Chatbots | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The rise of LLMs has deflected a growing portion of human-computer
interactions towards LLM-based chatbots. The remarkable abilities of these
models allow users to interact using long, diverse natural language text
covering a wide range of topics and styles. Phrasing these messages is a time-
and effort-consuming task, calling for an autocomplete solution to assist
users. We introduce the task of chatbot interaction autocomplete. We present
ChaI-TeA: CHat InTEraction Autocomplete, an autocomplete evaluation framework
for LLM-based chatbot interactions. The framework includes a formal definition
of the task, coupled with suitable datasets and metrics. Using this framework,
we test 9 models on the defined autocompletion task, finding that
while current off-the-shelf models perform fairly, there is still much room for
improvement, mainly in ranking of the generated suggestions. We provide
insights for practitioners working on this task and open new research
directions for researchers in the field. We release our framework to serve as a
foundation for future research.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2024 12:03:36 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Dec 2024 09:26:52 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 11:49:36 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Goren",
"Shani",
""
],
[
"Kalinsky",
"Oren",
""
],
[
"Stav",
"Tomer",
""
],
[
"Rapoport",
"Yuri",
""
],
[
"Fairstein",
"Yaron",
""
],
[
"Yazdi",
"Ram",
""
],
[
"Cohen",
"Nachshon",
""
],
[
"Libov",
"Alexander",
""
],
[
"Kushilevitz",
"Guy",
""
]
]
| TITLE: ChaI-TeA: A Benchmark for Evaluating Autocompletion of Interactions with
LLM-based Chatbots
ABSTRACT: The rise of LLMs has deflected a growing portion of human-computer
interactions towards LLM-based chatbots. The remarkable abilities of these
models allow users to interact using long, diverse natural language text
covering a wide range of topics and styles. Phrasing these messages is a time-
and effort-consuming task, calling for an autocomplete solution to assist
users. We introduce the task of chatbot interaction autocomplete. We present
ChaI-TeA: CHat InTEraction Autocomplete, an autocomplete evaluation framework
for LLM-based chatbot interactions. The framework includes a formal definition
of the task, coupled with suitable datasets and metrics. Using this framework,
we test 9 models on the defined autocompletion task, finding that
while current off-the-shelf models perform fairly, there is still much room for
improvement, mainly in ranking of the generated suggestions. We provide
insights for practitioners working on this task and open new research
directions for researchers in the field. We release our framework to serve as a
foundation for future research.
| no_new_dataset | 0.92912 |
2501.01999 | Sharvaree Vadgama P | Sharvaree Vadgama and Mohammad Mohaiminul Islam and Domas Buracus and
Christian Shewmake and Erik Bekkers | On the Utility of Equivariance and Symmetry Breaking in Deep Learning
Architectures on Point Clouds | 19 pages, 4 figures | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper explores the key factors that influence the performance of models
working with point clouds, across different tasks of varying geometric
complexity. In this work, we explore the trade-offs between flexibility and
weight-sharing introduced by equivariant layers, assessing when equivariance
boosts or detracts from performance. It is often argued that providing more
information as input improves a model's performance. However, if this
additional information breaks certain properties, such as $SE(3)$
equivariance, does it remain beneficial? We identify the key aspects of
equivariant and non-equivariant architectures that drive success in different
tasks by benchmarking them on segmentation, regression, and generation tasks
across multiple datasets with increasing complexity. We observe a positive
impact of equivariance, which becomes more pronounced with increasing task
complexity, even when strict equivariance is not required.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 2025 07:00:41 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 15:26:17 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Vadgama",
"Sharvaree",
""
],
[
"Islam",
"Mohammad Mohaiminul",
""
],
[
"Buracus",
"Domas",
""
],
[
"Shewmake",
"Christian",
""
],
[
"Bekkers",
"Erik",
""
]
]
| TITLE: On the Utility of Equivariance and Symmetry Breaking in Deep Learning
Architectures on Point Clouds
ABSTRACT: This paper explores the key factors that influence the performance of models
working with point clouds, across different tasks of varying geometric
complexity. In this work, we explore the trade-offs between flexibility and
weight-sharing introduced by equivariant layers, assessing when equivariance
boosts or detracts from performance. It is often argued that providing more
information as input improves a model's performance. However, if this
additional information breaks certain properties, such as $SE(3)$
equivariance, does it remain beneficial? We identify the key aspects of
equivariant and non-equivariant architectures that drive success in different
tasks by benchmarking them on segmentation, regression, and generation tasks
across multiple datasets with increasing complexity. We observe a positive
impact of equivariance, which becomes more pronounced with increasing task
complexity, even when strict equivariance is not required.
| no_new_dataset | 0.946745 |
2501.05272 | Xinzi Cao | Xinzi Cao, Xiawu Zheng, Guanhong Wang, Weijiang Yu, Yunhang Shen, Ke
Li, Yutong Lu, Yonghong Tian | Solving the Catastrophic Forgetting Problem in Generalized Category
Discovery | Accepted by CVPR 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generalized Category Discovery (GCD) aims to identify a mix of known and
novel categories within unlabeled data sets, providing a more realistic setting
for image recognition. Essentially, GCD needs to remember existing patterns
thoroughly to recognize novel categories. Recent state-of-the-art method SimGCD
transfers the knowledge from known-class data to the learning of novel classes
through debiased learning. However, some patterns are catastrophically forgotten
during adaptation, which leads to poor performance in novel category
classification. To address this issue, we propose a novel learning approach,
LegoGCD, which is seamlessly integrated into previous methods to enhance the
discrimination of novel classes while maintaining performance on previously
encountered known classes. Specifically, we design two types of techniques
termed as Local Entropy Regularization (LER) and Dual-views Kullback Leibler
divergence constraint (DKL). The LER optimizes the distribution of potential
known class samples in unlabeled data, thus ensuring the preservation of
knowledge related to known categories while learning novel classes. Meanwhile,
DKL introduces Kullback Leibler divergence to encourage the model to produce a
similar prediction distribution of two view samples from the same image. In
this way, it successfully avoids mismatched prediction and generates more
reliable potential known class samples simultaneously. Extensive experiments
validate that the proposed LegoGCD effectively addresses the known category
forgetting issue across all datasets, e.g., delivering a 7.74% and 2.51% accuracy
boost on known and novel classes in CUB, respectively. Our code is available
at: https://github.com/Cliffia123/LegoGCD.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 14:31:54 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 03:26:07 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cao",
"Xinzi",
""
],
[
"Zheng",
"Xiawu",
""
],
[
"Wang",
"Guanhong",
""
],
[
"Yu",
"Weijiang",
""
],
[
"Shen",
"Yunhang",
""
],
[
"Li",
"Ke",
""
],
[
"Lu",
"Yutong",
""
],
[
"Tian",
"Yonghong",
""
]
]
| TITLE: Solving the Catastrophic Forgetting Problem in Generalized Category
Discovery
ABSTRACT: Generalized Category Discovery (GCD) aims to identify a mix of known and
novel categories within unlabeled data sets, providing a more realistic setting
for image recognition. Essentially, GCD needs to remember existing patterns
thoroughly to recognize novel categories. Recent state-of-the-art method SimGCD
transfers the knowledge from known-class data to the learning of novel classes
through debiased learning. However, some patterns are catastrophically forgotten
during adaptation, which leads to poor performance in novel category
classification. To address this issue, we propose a novel learning approach,
LegoGCD, which is seamlessly integrated into previous methods to enhance the
discrimination of novel classes while maintaining performance on previously
encountered known classes. Specifically, we design two types of techniques
termed as Local Entropy Regularization (LER) and Dual-views Kullback Leibler
divergence constraint (DKL). The LER optimizes the distribution of potential
known class samples in unlabeled data, thus ensuring the preservation of
knowledge related to known categories while learning novel classes. Meanwhile,
DKL introduces Kullback Leibler divergence to encourage the model to produce a
similar prediction distribution of two view samples from the same image. In
this way, it successfully avoids mismatched prediction and generates more
reliable potential known class samples simultaneously. Extensive experiments
validate that the proposed LegoGCD effectively addresses the known category
forgetting issue across all datasets, e.g., delivering a 7.74% and 2.51% accuracy
boost on known and novel classes in CUB, respectively. Our code is available
at: https://github.com/Cliffia123/LegoGCD.
| no_new_dataset | 0.94743 |
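As an illustrative aside to the record above (arXiv 2501.05272): the DKL idea of encouraging similar prediction distributions for two views of the same image can be sketched as a symmetrized KL-divergence consistency term. The snippet below is a plain-NumPy illustration under assumed shapes and a symmetrized formulation; it is not the authors' implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl(p, q, eps=1e-12):
    # Batch-averaged KL(p || q) between categorical distributions.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

# Two augmented views of the same batch of images -> two prediction distributions.
rng = np.random.default_rng(0)
logits_v1 = rng.normal(size=(8, 100))             # (batch, num_classes), assumed
logits_v2 = logits_v1 + 0.1 * rng.normal(size=(8, 100))
p1, p2 = softmax(logits_v1), softmax(logits_v2)

# Symmetrized consistency loss: penalizes mismatched predictions across views.
consistency_loss = 0.5 * (kl(p1, p2) + kl(p2, p1))
print(f"dual-view consistency loss: {consistency_loss:.4f}")
```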
2501.05891 | Bianca Raimondi | Bianca Raimondi, Saverio Giallorenzo and Maurizio Gabbrielli | Affordably Fine-tuned LLMs Provide Better Answers to Course-specific
MCQs | The 40th ACM/SIGAPP Symposium On Applied Computing | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In education, the capability of Large Language Models (LLMs) to generate
human-like text has inspired work on how they can increase the efficiency of learning
and teaching. We study the affordability of these models for educators and
students by investigating how LLMs answer multiple-choice questions (MCQs) with
respect to hardware constraints and refinement techniques. We explore this
space by using generic pre-trained LLMs (the 7B, 13B, and 70B variants of
LLaMA-2) to answer 162 undergraduate-level MCQs from a course on Programming
Languages (PL) -- the MCQ dataset is a contribution of this work, which we make
publicly available. Specifically, we dissect how different factors, such as
using readily-available material -- (parts of) the course's textbook -- for
fine-tuning and quantisation (to decrease resource usage) can change the
accuracy of the responses. The main takeaway is that smaller textbook-based
fine-tuned models outperform generic larger ones (whose pre-training requires
conspicuous resources), making the usage of LLMs for answering MCQs resource-
and material-wise affordable.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2025 11:44:35 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 09:18:31 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Raimondi",
"Bianca",
""
],
[
"Giallorenzo",
"Saverio",
""
],
[
"Gabbrielli",
"Maurizio",
""
]
]
| TITLE: Affordably Fine-tuned LLMs Provide Better Answers to Course-specific
MCQs
ABSTRACT: In education, the capability of Large Language Models (LLMs) to generate
human-like text has inspired work on how they can increase the efficiency of learning
and teaching. We study the affordability of these models for educators and
students by investigating how LLMs answer multiple-choice questions (MCQs) with
respect to hardware constraints and refinement techniques. We explore this
space by using generic pre-trained LLMs (the 7B, 13B, and 70B variants of
LLaMA-2) to answer 162 undergraduate-level MCQs from a course on Programming
Languages (PL) -- the MCQ dataset is a contribution of this work, which we make
publicly available. Specifically, we dissect how different factors, such as
using readily-available material -- (parts of) the course's textbook -- for
fine-tuning and quantisation (to decrease resource usage) can change the
accuracy of the responses. The main takeaway is that smaller textbook-based
fine-tuned models outperform generic larger ones (whose pre-training requires
conspicuous resources), making the usage of LLMs for answering MCQs resource-
and material-wise affordable.
| no_new_dataset | 0.719925 |
2501.13335 | Xianrui Luo | Xianrui Luo, Juewen Peng, Zhongang Cai, Lei Yang, Fan Yang, Zhiguo
Cao, Guosheng Lin | Deblur-Avatar: Animatable Avatars from Motion-Blurred Monocular Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel framework for modeling high-fidelity, animatable 3D
human avatars from motion-blurred monocular video inputs. Motion blur is
prevalent in real-world dynamic video capture, especially due to human
movements in 3D human avatar modeling. Existing methods either (1) assume sharp
image inputs, failing to address the detail loss introduced by motion blur, or
(2) mainly consider blur by camera movements, neglecting the human motion blur
which is more common in animatable avatars. Our proposed approach integrates a
human movement-based motion blur model into 3D Gaussian Splatting (3DGS). By
explicitly modeling human motion trajectories during exposure time, we jointly
optimize the trajectories and 3D Gaussians to reconstruct sharp, high-quality
human avatars. We employ a pose-dependent fusion mechanism to distinguish
moving body regions, optimizing both blurred and sharp areas effectively.
Extensive experiments on synthetic and real-world datasets demonstrate that our
method significantly outperforms existing methods in rendering quality and
quantitative metrics, producing sharp avatar reconstructions and enabling
real-time rendering under challenging motion blur conditions.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 02:31:57 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 14:32:31 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Luo",
"Xianrui",
""
],
[
"Peng",
"Juewen",
""
],
[
"Cai",
"Zhongang",
""
],
[
"Yang",
"Lei",
""
],
[
"Yang",
"Fan",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Lin",
"Guosheng",
""
]
]
| TITLE: Deblur-Avatar: Animatable Avatars from Motion-Blurred Monocular Videos
ABSTRACT: We introduce a novel framework for modeling high-fidelity, animatable 3D
human avatars from motion-blurred monocular video inputs. Motion blur is
prevalent in real-world dynamic video capture, especially due to human
movements in 3D human avatar modeling. Existing methods either (1) assume sharp
image inputs, failing to address the detail loss introduced by motion blur, or
(2) mainly consider blur by camera movements, neglecting the human motion blur
which is more common in animatable avatars. Our proposed approach integrates a
human movement-based motion blur model into 3D Gaussian Splatting (3DGS). By
explicitly modeling human motion trajectories during exposure time, we jointly
optimize the trajectories and 3D Gaussians to reconstruct sharp, high-quality
human avatars. We employ a pose-dependent fusion mechanism to distinguish
moving body regions, optimizing both blurred and sharp areas effectively.
Extensive experiments on synthetic and real-world datasets demonstrate that our
method significantly outperforms existing methods in rendering quality and
quantitative metrics, producing sharp avatar reconstructions and enabling
real-time rendering under challenging motion blur conditions.
| no_new_dataset | 0.948251 |
2501.15282 | Zhikai Chen | Zhikai Chen, Han Xie, Jian Zhang, Xiang song, Jiliang Tang, Huzefa
Rangwala, George Karypis | AutoG: Towards automatic graph construction from tabular data | camera ready version, update meta info | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have witnessed significant advancements in graph machine
learning (GML), with its applications spanning numerous domains. However, the
focus of GML has predominantly been on developing powerful models, often
overlooking a crucial initial step: constructing suitable graphs from common
data formats, such as tabular data. This construction process is fundamental to
applying graph-based models, yet it remains largely understudied and lacks
formalization. Our research aims to address this gap by formalizing the graph
construction problem and proposing an effective solution. We identify two
critical challenges to achieve this goal: 1. The absence of dedicated datasets
to formalize and evaluate the effectiveness of graph construction methods, and
2. Existing automatic construction methods can only be applied to some specific
cases, while tedious human engineering is required to generate high-quality
graphs. To tackle these challenges, we present a two-fold contribution. First,
we introduce a set of datasets to formalize and evaluate graph construction
methods. Second, we propose an LLM-based solution, AutoG, automatically
generating high-quality graph schemas without human intervention. The
experimental results demonstrate that the quality of constructed graphs is
critical to downstream task performance, and AutoG can generate high-quality
graphs that rival those produced by human experts. Our code is accessible
at https://github.com/amazon-science/Automatic-Table-to-Graph-Generation.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 17:31:56 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 15:11:44 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 03:38:57 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chen",
"Zhikai",
""
],
[
"Xie",
"Han",
""
],
[
"Zhang",
"Jian",
""
],
[
"song",
"Xiang",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Rangwala",
"Huzefa",
""
],
[
"Karypis",
"George",
""
]
]
| TITLE: AutoG: Towards automatic graph construction from tabular data
ABSTRACT: Recent years have witnessed significant advancements in graph machine
learning (GML), with its applications spanning numerous domains. However, the
focus of GML has predominantly been on developing powerful models, often
overlooking a crucial initial step: constructing suitable graphs from common
data formats, such as tabular data. This construction process is fundamental to
applying graph-based models, yet it remains largely understudied and lacks
formalization. Our research aims to address this gap by formalizing the graph
construction problem and proposing an effective solution. We identify two
critical challenges to achieve this goal: 1. The absence of dedicated datasets
to formalize and evaluate the effectiveness of graph construction methods, and
2. Existing automatic construction methods can only be applied to some specific
cases, while tedious human engineering is required to generate high-quality
graphs. To tackle these challenges, we present a two-fold contribution. First,
we introduce a set of datasets to formalize and evaluate graph construction
methods. Second, we propose an LLM-based solution, AutoG, automatically
generating high-quality graph schemas without human intervention. The
experimental results demonstrate that the quality of constructed graphs is
critical to downstream task performance, and AutoG can generate high-quality
graphs that rival those produced by human experts. Our code is accessible
at https://github.com/amazon-science/Automatic-Table-to-Graph-Generation.
| no_new_dataset | 0.596507 |
2501.18516 | Guanqun Cao | Guanqun Cao, Ryan Mckenna, Erich Graf and John Oyekan | Learn from the Past: Language-conditioned Object Rearrangement with
Large Language Models | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object manipulation for rearrangement into a specific goal state is a
significant task for collaborative robots. Accurately determining object
placement is a key challenge, as misalignment can increase task complexity and
the risk of collisions, affecting the efficiency of the rearrangement process.
Most current methods heavily rely on pre-collected datasets to train the model
for predicting the goal position. As a result, these methods are restricted to
specific instructions, which limits their broader applicability and
generalisation. In this paper, we propose a framework of flexible
language-conditioned object rearrangement based on the Large Language Model
(LLM). Our approach mimics human reasoning by making use of successful past
experiences as a reference to infer the best strategies to achieve a current
desired goal position. Based on LLM's strong natural language comprehension and
inference ability, our method generalises to handle various everyday objects
and free-form language instructions in a zero-shot manner. Experimental results
demonstrate that our method can effectively execute robotic rearrangement
tasks, even those involving long sequences of orders.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 17:28:11 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 13:54:04 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cao",
"Guanqun",
""
],
[
"Mckenna",
"Ryan",
""
],
[
"Graf",
"Erich",
""
],
[
"Oyekan",
"John",
""
]
]
| TITLE: Learn from the Past: Language-conditioned Object Rearrangement with
Large Language Models
ABSTRACT: Object manipulation for rearrangement into a specific goal state is a
significant task for collaborative robots. Accurately determining object
placement is a key challenge, as misalignment can increase task complexity and
the risk of collisions, affecting the efficiency of the rearrangement process.
Most current methods heavily rely on pre-collected datasets to train the model
for predicting the goal position. As a result, these methods are restricted to
specific instructions, which limits their broader applicability and
generalisation. In this paper, we propose a framework of flexible
language-conditioned object rearrangement based on the Large Language Model
(LLM). Our approach mimics human reasoning by making use of successful past
experiences as a reference to infer the best strategies to achieve a current
desired goal position. Based on LLM's strong natural language comprehension and
inference ability, our method generalises to handle various everyday objects
and free-form language instructions in a zero-shot manner. Experimental results
demonstrate that our method can effectively execute robotic rearrangement
tasks, even those involving long sequences of orders.
| no_new_dataset | 0.947672 |
2501.18821 | Danial Sadrian Zadeh | Mohammad Fatahi, Danial Sadrian Zadeh, Benyamin Ghojogh, Behzad
Moshiri, Otman Basir | An Optimal Cascade Feature-Level Spatiotemporal Fusion Strategy for
Anomaly Detection in CAN Bus | v2: updated the text and graphs | null | null | null | cs.LG cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous vehicles represent a revolutionary advancement driven by the
integration of artificial intelligence within intelligent transportation
systems. However, they remain vulnerable due to the absence of robust security
mechanisms in the Controller Area Network (CAN) bus. In order to mitigate the
security issue, many machine learning models and strategies have been proposed,
which primarily focus on a subset of dominant patterns of anomalies and lack
rigorous evaluation in terms of reliability and robustness. Therefore, to
address the limitations of previous works and mitigate the security
vulnerability in CAN bus, the current study develops a model based on the
intrinsic nature of the problem to cover all dominant patterns of anomalies. To
achieve this, a cascade feature-level fusion strategy optimized by a
two-parameter genetic algorithm is proposed to combine temporal and spatial
information. Subsequently, the model is evaluated using a paired t-test to
ensure reliability and robustness. Finally, a comprehensive comparative
analysis conducted on two widely used datasets shows that the proposed
model outperforms other models and achieves superior accuracy and F1-score,
demonstrating the best performance among all models presented to date.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 00:36:08 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 04:45:03 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Fatahi",
"Mohammad",
""
],
[
"Zadeh",
"Danial Sadrian",
""
],
[
"Ghojogh",
"Benyamin",
""
],
[
"Moshiri",
"Behzad",
""
],
[
"Basir",
"Otman",
""
]
]
| TITLE: An Optimal Cascade Feature-Level Spatiotemporal Fusion Strategy for
Anomaly Detection in CAN Bus
ABSTRACT: Autonomous vehicles represent a revolutionary advancement driven by the
integration of artificial intelligence within intelligent transportation
systems. However, they remain vulnerable due to the absence of robust security
mechanisms in the Controller Area Network (CAN) bus. In order to mitigate the
security issue, many machine learning models and strategies have been proposed,
which primarily focus on a subset of dominant patterns of anomalies and lack
rigorous evaluation in terms of reliability and robustness. Therefore, to
address the limitations of previous works and mitigate the security
vulnerability in CAN bus, the current study develops a model based on the
intrinsic nature of the problem to cover all dominant patterns of anomalies. To
achieve this, a cascade feature-level fusion strategy optimized by a
two-parameter genetic algorithm is proposed to combine temporal and spatial
information. Subsequently, the model is evaluated using a paired t-test to
ensure reliability and robustness. Finally, a comprehensive comparative
analysis conducted on two widely used datasets shows that the proposed
model outperforms other models and achieves superior accuracy and F1-score,
demonstrating the best performance among all models presented to date.
| no_new_dataset | 0.949389 |
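As an illustrative aside to the record above (arXiv 2501.18821): a paired t-test over per-fold scores is a standard way to check, as the abstract describes, that an improvement is statistically reliable rather than noise. The sketch below uses SciPy's `ttest_rel` with invented F1-scores; the numbers are placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-fold F1-scores for the proposed model and a baseline
# (placeholder values, not taken from the paper).
proposed = np.array([0.981, 0.978, 0.983, 0.979, 0.982])
baseline = np.array([0.962, 0.958, 0.965, 0.960, 0.963])

statistic, p_value = ttest_rel(proposed, baseline)
print(f"t = {statistic:.3f}, p = {p_value:.4f}")
# A small p-value indicates the paired per-fold improvement is unlikely to be
# explained by chance alone.
```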
2502.01565 | Jeffri Murrugarra-Llerena | Jeffri Murrugarra-LLerena, Jose Henrique Lima Marques, Claudio R. Jung | GauCho: Gaussian Distributions with Cholesky Decomposition for Oriented
Object Detection | null | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oriented Object Detection (OOD) has received increased attention in the past
years, being a suitable solution for detecting elongated objects in remote
sensing analysis. In particular, using regression loss functions based on
Gaussian distributions has become attractive since they yield simple and
differentiable terms. However, existing solutions are still based on regression
heads that produce Oriented Bounding Boxes (OBBs), and the known problem of
angular boundary discontinuity persists. In this work, we propose a regression
head for OOD that directly produces Gaussian distributions based on the
Cholesky matrix decomposition. The proposed head, named GauCho, theoretically
mitigates the boundary discontinuity problem and is fully compatible with
recent Gaussian-based regression loss functions. Furthermore, we advocate using
Oriented Ellipses (OEs) to represent oriented objects, which relates to GauCho
through a bijective function and alleviates the encoding ambiguity problem for
circular objects. Our experimental results show that GauCho can be a viable
alternative to the traditional OBB head, achieving results comparable to or
better than state-of-the-art detectors on the challenging DOTA dataset.
| [
{
"version": "v1",
"created": "Mon, 3 Feb 2025 17:47:26 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Murrugarra-LLerena",
"Jeffri",
""
],
[
"Marques",
"Jose Henrique Lima",
""
],
[
"Jung",
"Claudio R.",
""
]
]
| TITLE: GauCho: Gaussian Distributions with Cholesky Decomposition for Oriented
Object Detection
ABSTRACT: Oriented Object Detection (OOD) has received increased attention in the past
years, being a suitable solution for detecting elongated objects in remote
sensing analysis. In particular, using regression loss functions based on
Gaussian distributions has become attractive since they yield simple and
differentiable terms. However, existing solutions are still based on regression
heads that produce Oriented Bounding Boxes (OBBs), and the known problem of
angular boundary discontinuity persists. In this work, we propose a regression
head for OOD that directly produces Gaussian distributions based on the
Cholesky matrix decomposition. The proposed head, named GauCho, theoretically
mitigates the boundary discontinuity problem and is fully compatible with
recent Gaussian-based regression loss functions. Furthermore, we advocate using
Oriented Ellipses (OEs) to represent oriented objects, which relates to GauCho
through a bijective function and alleviates the encoding ambiguity problem for
circular objects. Our experimental results show that GauCho can be a viable
alternative to the traditional OBB head, achieving results comparable to or
better than state-of-the-art detectors on the challenging DOTA dataset.
| no_new_dataset | 0.948251 |
2502.05503 | Yongfan Chen | Yongfan Chen, Xiuwen Zhu, Tianyu Li | A Physical Coherence Benchmark for Evaluating Video Generation Models
via Optical Flow-guided Frame Prediction | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in video generation models demonstrate their potential as
world simulators, but they often struggle with videos deviating from physical
laws, a key concern overlooked by most text-to-video benchmarks. We introduce a
benchmark designed specifically to assess the Physical Coherence of generated
videos, PhyCoBench. Our benchmark includes 120 prompts covering 7 categories of
physical principles, capturing key physical laws observable in video content.
We evaluated four state-of-the-art (SoTA) T2V models on PhyCoBench and
conducted manual assessments. Additionally, we propose an automated evaluation
model: PhyCoPredictor, a diffusion model that generates optical flow and video
frames in a cascade manner. Through a consistency evaluation comparing
automated and manual sorting, the experimental results show that PhyCoPredictor
currently aligns most closely with human evaluation. Therefore, it can
effectively evaluate the physical coherence of videos, providing insights for
future model optimization. Our benchmark, including physical coherence prompts,
the automatic evaluation tool PhyCoPredictor, and the generated video dataset,
has been released on GitHub at https://github.com/Jeckinchen/PhyCoBench.
| [
{
"version": "v1",
"created": "Sat, 8 Feb 2025 09:31:26 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2025 09:07:09 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 12:27:57 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chen",
"Yongfan",
""
],
[
"Zhu",
"Xiuwen",
""
],
[
"Li",
"Tianyu",
""
]
]
| TITLE: A Physical Coherence Benchmark for Evaluating Video Generation Models
via Optical Flow-guided Frame Prediction
ABSTRACT: Recent advances in video generation models demonstrate their potential as
world simulators, but they often struggle with videos deviating from physical
laws, a key concern overlooked by most text-to-video benchmarks. We introduce a
benchmark designed specifically to assess the Physical Coherence of generated
videos, PhyCoBench. Our benchmark includes 120 prompts covering 7 categories of
physical principles, capturing key physical laws observable in video content.
We evaluated four state-of-the-art (SoTA) T2V models on PhyCoBench and
conducted manual assessments. Additionally, we propose an automated evaluation
model: PhyCoPredictor, a diffusion model that generates optical flow and video
frames in a cascade manner. Through a consistency evaluation comparing
automated and manual sorting, the experimental results show that PhyCoPredictor
currently aligns most closely with human evaluation. Therefore, it can
effectively evaluate the physical coherence of videos, providing insights for
future model optimization. Our benchmark, including physical coherence prompts,
the automatic evaluation tool PhyCoPredictor, and the generated video dataset,
has been released on GitHub at https://github.com/Jeckinchen/PhyCoBench.
| new_dataset | 0.951953 |
2502.07115 | Zijie Zhou | Patrick Jaillet, Jiashuo Jiang, Chara Podimata, Zijie Zhou | Online Scheduling for LLM Inference with KV Cache Constraints | Will add a lemma in the proof of Theorem 5.3 to make the statement
and proof more rigorous | null | null | null | cs.LG cs.AI math.OC | http://creativecommons.org/licenses/by/4.0/ | Large Language Model (LLM) inference, where a trained model generates text
one word at a time in response to user prompts, is a computationally intensive
process requiring efficient scheduling to optimize latency and resource
utilization. A key challenge in LLM inference is the management of the
Key-Value (KV) cache, which reduces redundant computations but introduces
memory constraints. In this work, we model LLM inference with KV cache
constraints theoretically and propose novel batching and scheduling algorithms
that minimize inference latency while effectively managing the KV cache's
memory.
We analyze both semi-online and fully online scheduling models, and our
results are threefold. First, we provide a polynomial-time algorithm that
achieves exact optimality in terms of average latency in the semi-online prompt
arrival model. Second, in the fully online case with a stochastic prompt
arrival, we introduce an efficient online scheduling algorithm with constant
regret. Third, we prove that no algorithm (deterministic or randomized) can
achieve a constant competitive ratio in fully online adversarial settings. Our
empirical evaluations on a public LLM inference dataset, using the Llama-70B
model on A100 GPUs, show that our approach significantly outperforms benchmark
algorithms used currently in practice, achieving lower latency while reducing
energy consumption. Overall, our results offer a path toward more sustainable
and cost-effective LLM deployment.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 23:11:44 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 12:54:36 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 14:43:01 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Jaillet",
"Patrick",
""
],
[
"Jiang",
"Jiashuo",
""
],
[
"Podimata",
"Chara",
""
],
[
"Zhou",
"Zijie",
""
]
]
| TITLE: Online Scheduling for LLM Inference with KV Cache Constraints
ABSTRACT: Large Language Model (LLM) inference, where a trained model generates text
one word at a time in response to user prompts, is a computationally intensive
process requiring efficient scheduling to optimize latency and resource
utilization. A key challenge in LLM inference is the management of the
Key-Value (KV) cache, which reduces redundant computations but introduces
memory constraints. In this work, we model LLM inference with KV cache
constraints theoretically and propose novel batching and scheduling algorithms
that minimize inference latency while effectively managing the KV cache's
memory.
We analyze both semi-online and fully online scheduling models, and our
results are threefold. First, we provide a polynomial-time algorithm that
achieves exact optimality in terms of average latency in the semi-online prompt
arrival model. Second, in the fully online case with a stochastic prompt
arrival, we introduce an efficient online scheduling algorithm with constant
regret. Third, we prove that no algorithm (deterministic or randomized) can
achieve a constant competitive ratio in fully online adversarial settings. Our
empirical evaluations on a public LLM inference dataset, using the Llama-70B
model on A100 GPUs, show that our approach significantly outperforms benchmark
algorithms used currently in practice, achieving lower latency while reducing
energy consumption. Overall, our results offer a path toward more sustainable
and cost-effective LLM deployment.
| no_new_dataset | 0.943919 |
2502.07132 | A\'ecio Solano Rodrigues Santos | A\'ecio Santos, Eduardo H. M. Pena, Roque Lopez, Juliana Freire | Interactive Data Harmonization with LLM Agents | null | null | null | null | cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data harmonization is an essential task that entails integrating datasets
from diverse sources. Despite years of research in this area, it remains a
time-consuming and challenging task due to schema mismatches, varying
terminologies, and differences in data collection methodologies. This paper
presents the case for agentic data harmonization as a means to both empower
experts to harmonize their data and to streamline the process. We introduce
Harmonia, a system that combines LLM-based reasoning, an interactive user
interface, and a library of data harmonization primitives to automate the
synthesis of data harmonization pipelines. We demonstrate Harmonia in a
clinical data harmonization scenario, where it helps to interactively create
reusable pipelines that map datasets to a standard format. Finally, we discuss
challenges and open problems, and suggest research directions for advancing our
vision.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 23:50:09 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 18:33:41 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Santos",
"Aécio",
""
],
[
"Pena",
"Eduardo H. M.",
""
],
[
"Lopez",
"Roque",
""
],
[
"Freire",
"Juliana",
""
]
]
| TITLE: Interactive Data Harmonization with LLM Agents
ABSTRACT: Data harmonization is an essential task that entails integrating datasets
from diverse sources. Despite years of research in this area, it remains a
time-consuming and challenging task due to schema mismatches, varying
terminologies, and differences in data collection methodologies. This paper
presents the case for agentic data harmonization as a means to both empower
experts to harmonize their data and to streamline the process. We introduce
Harmonia, a system that combines LLM-based reasoning, an interactive user
interface, and a library of data harmonization primitives to automate the
synthesis of data harmonization pipelines. We demonstrate Harmonia in a
clinical data harmonization scenario, where it helps to interactively create
reusable pipelines that map datasets to a standard format. Finally, we discuss
challenges and open problems, and suggest research directions for advancing our
vision.
| no_new_dataset | 0.949248 |
2502.09977 | Kuan Li | Kuan Li, Liwen Zhang, Yong Jiang, Pengjun Xie, Fei Huang, Shuai Wang,
Minhao Cheng | LaRA: Benchmarking Retrieval-Augmented Generation and Long-Context LLMs
-- No Silver Bullet for LC or RAG Routing | 22 pages | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Effectively incorporating external knowledge into Large Language Models
(LLMs) is crucial for enhancing their capabilities and addressing real-world
needs. Retrieval-Augmented Generation (RAG) offers an effective method for
achieving this by retrieving the most relevant fragments into LLMs. However,
the advancements in context window size for LLMs offer an alternative approach,
raising the question of whether RAG remains necessary for effectively handling
external knowledge. Several existing studies provide inconclusive comparisons
between RAG and long-context (LC) LLMs, largely due to limitations in the
benchmark designs. In this paper, we present LaRA, a novel benchmark
specifically designed to rigorously compare RAG and LC LLMs. LaRA encompasses
2326 test cases across four practical QA task categories and three types of
naturally occurring long texts. Through systematic evaluation of seven
open-source and four proprietary LLMs, we find that the optimal choice between
RAG and LC depends on a complex interplay of factors, including the model's
parameter size, long-text capabilities, context length, task type, and the
characteristics of the retrieved chunks. Our findings provide actionable
guidelines for practitioners to effectively leverage both RAG and LC approaches
in developing and deploying LLM applications. Our code and dataset are provided
at:
\href{https://github.com/Alibaba-NLP/LaRA}{\textbf{https://github.com/Alibaba-NLP/LaRA}}.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 08:04:22 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 08:48:25 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Li",
"Kuan",
""
],
[
"Zhang",
"Liwen",
""
],
[
"Jiang",
"Yong",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Huang",
"Fei",
""
],
[
"Wang",
"Shuai",
""
],
[
"Cheng",
"Minhao",
""
]
]
| TITLE: LaRA: Benchmarking Retrieval-Augmented Generation and Long-Context LLMs
-- No Silver Bullet for LC or RAG Routing
ABSTRACT: Effectively incorporating external knowledge into Large Language Models
(LLMs) is crucial for enhancing their capabilities and addressing real-world
needs. Retrieval-Augmented Generation (RAG) offers an effective method for
achieving this by retrieving the most relevant fragments into LLMs. However,
the advancements in context window size for LLMs offer an alternative approach,
raising the question of whether RAG remains necessary for effectively handling
external knowledge. Several existing studies provide inconclusive comparisons
between RAG and long-context (LC) LLMs, largely due to limitations in the
benchmark designs. In this paper, we present LaRA, a novel benchmark
specifically designed to rigorously compare RAG and LC LLMs. LaRA encompasses
2326 test cases across four practical QA task categories and three types of
naturally occurring long texts. Through systematic evaluation of seven
open-source and four proprietary LLMs, we find that the optimal choice between
RAG and LC depends on a complex interplay of factors, including the model's
parameter size, long-text capabilities, context length, task type, and the
characteristics of the retrieved chunks. Our findings provide actionable
guidelines for practitioners to effectively leverage both RAG and LC approaches
in developing and deploying LLM applications. Our code and dataset are provided
at:
\href{https://github.com/Alibaba-NLP/LaRA}{\textbf{https://github.com/Alibaba-NLP/LaRA}}.
| new_dataset | 0.91267 |
2502.11681 | Yuncheng Hua | Yuncheng Hua, Lizhen Qu, Zhuang Li, Hao Xue, Flora D. Salim,
Gholamreza Haffari | RIDE: Enhancing Large Language Model Alignment through Restyled
In-Context Learning Demonstration Exemplars | 38 pages, 2 figures, 20 tables; The paper is under review in ARR | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Alignment tuning is crucial for ensuring large language models (LLMs) behave
ethically and helpfully. Current alignment approaches require high-quality
annotations and significant training resources. This paper proposes a low-cost,
tuning-free method using in-context learning (ICL) to enhance LLM alignment.
Through an analysis of high-quality ICL demos, we identified style as a key
factor influencing LLM alignment capabilities and explicitly restyled ICL
exemplars based on this stylistic framework. Additionally, we combined the
restyled demos to achieve a balance between the two conflicting aspects of LLM
alignment--factuality and safety. We packaged the restyled examples as prompts
to trigger few-shot learning, improving LLM alignment. Compared to the best
baseline approach, with an average score of 5.00 as the maximum, our method
achieves a maximum 0.10 increase on the Alpaca task (from 4.50 to 4.60), a 0.22
enhancement on the Just-eval benchmark (from 4.34 to 4.56), and a maximum
improvement of 0.32 (from 3.53 to 3.85) on the MT-Bench dataset. We release the
code and data at https://github.com/AnonymousCode-ComputerScience/RIDE.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 11:16:19 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2025 08:41:10 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Feb 2025 06:14:33 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 14:38:19 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Hua",
"Yuncheng",
""
],
[
"Qu",
"Lizhen",
""
],
[
"Li",
"Zhuang",
""
],
[
"Xue",
"Hao",
""
],
[
"Salim",
"Flora D.",
""
],
[
"Haffari",
"Gholamreza",
""
]
]
| TITLE: RIDE: Enhancing Large Language Model Alignment through Restyled
In-Context Learning Demonstration Exemplars
ABSTRACT: Alignment tuning is crucial for ensuring large language models (LLMs) behave
ethically and helpfully. Current alignment approaches require high-quality
annotations and significant training resources. This paper proposes a low-cost,
tuning-free method using in-context learning (ICL) to enhance LLM alignment.
Through an analysis of high-quality ICL demos, we identified style as a key
factor influencing LLM alignment capabilities and explicitly restyled ICL
exemplars based on this stylistic framework. Additionally, we combined the
restyled demos to achieve a balance between the two conflicting aspects of LLM
alignment--factuality and safety. We packaged the restyled examples as prompts
to trigger few-shot learning, improving LLM alignment. Compared to the best
baseline approach, with an average score of 5.00 as the maximum, our method
achieves a maximum 0.10 increase on the Alpaca task (from 4.50 to 4.60), a 0.22
enhancement on the Just-eval benchmark (from 4.34 to 4.56), and a maximum
improvement of 0.32 (from 3.53 to 3.85) on the MT-Bench dataset. We release the
code and data at https://github.com/AnonymousCode-ComputerScience/RIDE.
| no_new_dataset | 0.946695 |
2502.13921 | Hao Mark Chen | Jiahao Gai, Hao Mark Chen, Zhican Wang, Hongyu Zhou, Wanru Zhao,
Nicholas Lane, Hongxiang Fan | Exploring Code Language Models for Automated HLS-based Hardware
Generation: Benchmark, Infrastructure and Analysis | Paper accepted by ASP-DAC'25 | null | null | null | cs.LG cs.AR cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in code generation have illuminated the potential of
employing large language models (LLMs) for general-purpose programming
languages such as Python and C++, opening new opportunities for automating
software development and enhancing programmer productivity. The potential of
LLMs in software programming has sparked significant interest in exploring
automated hardware generation and automation. Although preliminary endeavors
have been made to adopt LLMs in generating hardware description languages
(HDLs), several challenges persist in this direction. First, the volume of
available HDL training data is substantially smaller compared to that for
software programming languages. Second, the pre-trained LLMs, mainly tailored
for software code, tend to produce HDL designs that are more error-prone.
Third, the generation of HDL requires a significantly higher number of tokens
compared to software programming, leading to inefficiencies in cost and energy
consumption. To tackle these challenges, this paper explores leveraging LLMs to
generate High-Level Synthesis (HLS)-based hardware design. Although code
generation for domain-specific programming languages is not new in the
literature, we aim to provide experimental results, insights, benchmarks, and
evaluation infrastructure to investigate the suitability of HLS over low-level
HDLs for LLM-assisted hardware design generation. To achieve this, we first
finetune pre-trained models for HLS-based hardware generation, using a
collected dataset with text prompts and corresponding reference HLS designs. An
LLM-assisted framework is then proposed to automate end-to-end hardware code
generation, which also investigates the impact of chain-of-thought and feedback
loop prompting techniques on HLS-design generation. Limited by the timeframe
of this research, we plan to evaluate more advanced reasoning models in the
future.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 17:53:59 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 16:07:23 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Gai",
"Jiahao",
""
],
[
"Chen",
"Hao Mark",
""
],
[
"Wang",
"Zhican",
""
],
[
"Zhou",
"Hongyu",
""
],
[
"Zhao",
"Wanru",
""
],
[
"Lane",
"Nicholas",
""
],
[
"Fan",
"Hongxiang",
""
]
]
| TITLE: Exploring Code Language Models for Automated HLS-based Hardware
Generation: Benchmark, Infrastructure and Analysis
ABSTRACT: Recent advances in code generation have illuminated the potential of
employing large language models (LLMs) for general-purpose programming
languages such as Python and C++, opening new opportunities for automating
software development and enhancing programmer productivity. The potential of
LLMs in software programming has sparked significant interest in exploring
automated hardware generation and automation. Although preliminary endeavors
have been made to adopt LLMs in generating hardware description languages
(HDLs), several challenges persist in this direction. First, the volume of
available HDL training data is substantially smaller compared to that for
software programming languages. Second, the pre-trained LLMs, mainly tailored
for software code, tend to produce HDL designs that are more error-prone.
Third, the generation of HDL requires a significantly higher number of tokens
compared to software programming, leading to inefficiencies in cost and energy
consumption. To tackle these challenges, this paper explores leveraging LLMs to
generate High-Level Synthesis (HLS)-based hardware design. Although code
generation for domain-specific programming languages is not new in the
literature, we aim to provide experimental results, insights, benchmarks, and
evaluation infrastructure to investigate the suitability of HLS over low-level
HDLs for LLM-assisted hardware design generation. To achieve this, we first
finetune pre-trained models for HLS-based hardware generation, using a
collected dataset with text prompts and corresponding reference HLS designs. An
LLM-assisted framework is then proposed to automate end-to-end hardware code
generation, which also investigates the impact of chain-of-thought and feedback
loop prompting techniques on HLS-design generation. Limited by the timeframe
of this research, we plan to evaluate more advanced reasoning models in the
future.
| no_new_dataset | 0.954095 |