id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
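The schema above lists the fifteen columns of each record. As a minimal sketch only, assuming the records are exported as one JSON object per line (the file name arxiv_metadata.jsonl is a placeholder, not part of this dataset's documentation), the following shows one way to load and summarize them:

```python
import json

# Columns documented in the schema above.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref", "doi",
    "report-no", "categories", "license", "abstract", "versions",
    "update_date", "authors_parsed", "prompt",
]

def iter_records(path):
    """Yield one dict per paper from a JSON Lines export (one object per line)."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            # Keep only the documented columns; absent ones default to None.
            yield {field: record.get(field) for field in FIELDS}

if __name__ == "__main__":
    for rec in iter_records("arxiv_metadata.jsonl"):  # placeholder path
        versions = rec["versions"] or []
        print(f"{rec['id']}: {rec['title']} ({len(versions)} version(s))")
```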
2408.06816 | Yongjin Yang | Yongjin Yang, Haneul Yoo, Hwaran Lee | MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data
Uncertainty | Findings of NAACL 2025 | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Despite the massive advancements in large language models (LLMs), they still
suffer from producing plausible but incorrect responses. To improve the
reliability of LLMs, recent research has focused on uncertainty quantification
to predict whether a response is correct or not. However, most uncertainty
quantification methods have been evaluated on single-labeled questions, which
removes data uncertainty: the irreducible randomness often present in user
queries, which can arise from factors like multiple possible answers. This
limitation may cause uncertainty quantification results to be unreliable in
practical settings. In this paper, we investigate previous uncertainty
quantification methods under the presence of data uncertainty. Our
contributions are two-fold: 1) proposing a new Multi-Answer Question Answering
dataset, MAQA, consisting of world knowledge, mathematical reasoning, and
commonsense reasoning tasks to evaluate uncertainty quantification regarding
data uncertainty, and 2) assessing 5 uncertainty quantification methods of
diverse white- and black-box LLMs. Our findings show that previous methods
struggle relative to single-answer settings, though this varies
depending on the task. Moreover, we observe that entropy- and consistency-based
methods effectively estimate model uncertainty, even in the presence of data
uncertainty. We believe these observations will guide future work on
uncertainty quantification in more realistic settings.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 11:17:31 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 13:03:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yang",
"Yongjin",
""
],
[
"Yoo",
"Haneul",
""
],
[
"Lee",
"Hwaran",
""
]
] | TITLE: MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data
Uncertainty
ABSTRACT: Despite the massive advancements in large language models (LLMs), they still
suffer from producing plausible but incorrect responses. To improve the
reliability of LLMs, recent research has focused on uncertainty quantification
to predict whether a response is correct or not. However, most uncertainty
quantification methods have been evaluated on single-labeled questions, which
removes data uncertainty: the irreducible randomness often present in user
queries, which can arise from factors like multiple possible answers. This
limitation may cause uncertainty quantification results to be unreliable in
practical settings. In this paper, we investigate previous uncertainty
quantification methods under the presence of data uncertainty. Our
contributions are two-fold: 1) proposing a new Multi-Answer Question Answering
dataset, MAQA, consisting of world knowledge, mathematical reasoning, and
commonsense reasoning tasks to evaluate uncertainty quantification regarding
data uncertainty, and 2) assessing 5 uncertainty quantification methods of
diverse white- and black-box LLMs. Our findings show that previous methods
struggle relative to single-answer settings, though this varies
depending on the task. Moreover, we observe that entropy- and consistency-based
methods effectively estimate model uncertainty, even in the presence of data
uncertainty. We believe these observations will guide future work on
uncertainty quantification in more realistic settings.
|
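The MAQA record above reports that entropy- and consistency-based methods remain effective under data uncertainty. As a generic illustration of the entropy-based idea only (not the paper's evaluation code), the sketch below computes the entropy of the empirical answer distribution obtained by sampling a model several times on the same question:

```python
import math
from collections import Counter

def predictive_entropy(sampled_answers):
    """Entropy of the empirical distribution over sampled answers.

    Higher entropy suggests higher uncertainty; on multi-answer questions,
    part of that entropy reflects data uncertainty rather than model error.
    """
    counts = Counter(sampled_answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Five hypothetical samples from an LLM on a multi-answer question.
print(predictive_entropy(["Paris", "Paris", "Lyon", "Paris", "Marseille"]))
```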
2408.07790 | Seung Hyun Lee | Seung Hyun Lee, Jijun Jiang, Yiran Xu, Zhuofang Li, Junjie Ke, Yinxiao
Li, Junfeng He, Steven Hickson, Katie Datsenko, Sangpil Kim, Ming-Hsuan Yang,
Irfan Essa, Feng Yang | Cropper: Vision-Language Model for Image Cropping through In-Context
Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | The goal of image cropping is to identify visually appealing crops in an
image. Conventional methods are trained on specific datasets and fail to adapt
to new requirements. Recent breakthroughs in large vision-language models
(VLMs) enable visual in-context learning without explicit training. However,
downstream tasks with VLMs remain underexplored. In this paper, we propose an
effective approach to leverage VLMs for image cropping. First, we propose an
efficient prompt retrieval mechanism for image cropping to automate the
selection of in-context examples. Second, we introduce an iterative refinement
strategy to iteratively enhance the predicted crops. The proposed framework, which we
refer to as Cropper, is applicable to a wide range of cropping tasks, including
free-form cropping, subject-aware cropping, and aspect ratio-aware cropping.
Extensive experiments demonstrate that Cropper significantly outperforms
state-of-the-art methods across several benchmarks.
| [
{
"version": "v1",
"created": "Wed, 14 Aug 2024 20:03:03 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 11:42:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lee",
"Seung Hyun",
""
],
[
"Jiang",
"Jijun",
""
],
[
"Xu",
"Yiran",
""
],
[
"Li",
"Zhuofang",
""
],
[
"Ke",
"Junjie",
""
],
[
"Li",
"Yinxiao",
""
],
[
"He",
"Junfeng",
""
],
[
"Hickson",
"Steven",
""
],
[
"Datsenko",
"Katie",
""
],
[
"Kim",
"Sangpil",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Essa",
"Irfan",
""
],
[
"Yang",
"Feng",
""
]
] | TITLE: Cropper: Vision-Language Model for Image Cropping through In-Context
Learning
ABSTRACT: The goal of image cropping is to identify visually appealing crops in an
image. Conventional methods are trained on specific datasets and fail to adapt
to new requirements. Recent breakthroughs in large vision-language models
(VLMs) enable visual in-context learning without explicit training. However,
downstream tasks with VLMs remain underexplored. In this paper, we propose an
effective approach to leverage VLMs for image cropping. First, we propose an
efficient prompt retrieval mechanism for image cropping to automate the
selection of in-context examples. Second, we introduce an iterative refinement
strategy to iteratively enhance the predicted crops. The proposed framework, which we
refer to as Cropper, is applicable to a wide range of cropping tasks, including
free-form cropping, subject-aware cropping, and aspect ratio-aware cropping.
Extensive experiments demonstrate that Cropper significantly outperforms
state-of-the-art methods across several benchmarks.
|
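The Cropper record above mentions a prompt retrieval mechanism that automatically selects in-context examples. The snippet below is only a generic nearest-neighbor retrieval sketch over precomputed image embeddings (random vectors stand in for real features); it is not the retrieval method described in the paper:

```python
import numpy as np

def retrieve_top_k(query_embedding, example_embeddings, k=3):
    """Return indices of the k most similar in-context examples by cosine similarity."""
    q = query_embedding / np.linalg.norm(query_embedding)
    e = example_embeddings / np.linalg.norm(example_embeddings, axis=1, keepdims=True)
    return np.argsort(-(e @ q))[:k]

# Toy usage with random vectors standing in for image features.
rng = np.random.default_rng(0)
print(retrieve_top_k(rng.normal(size=64), rng.normal(size=(100, 64)), k=3))
```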
2408.08650 | Peiming Guo | Peiming Guo, Sinuo Liu, Yanzhao Zhang, Dingkun Long, Pengjun Xie,
Meishan Zhang, Min Zhang | An End-to-End Model for Photo-Sharing Multi-modal Dialogue Generation | Accepted by ICME2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Photo-Sharing Multi-modal dialogue generation requires a dialogue agent not
only to generate text responses but also to share photos at the proper moment.
Using image text caption as the bridge, a pipeline model integrates an image
caption model, a text generation model, and an image generation model to handle
this complex multi-modal task. However, representing the images with text
captions may lose important visual details and information and cause error
propagation in the complex dialogue system. Besides, the pipeline model
isolates the three models separately because discrete image text captions
hinder end-to-end gradient propagation. We propose the first end-to-end model
for photo-sharing multi-modal dialogue generation, which integrates an image
perceptron and an image generator with a large language model. The large
language model employs the Q-Former to perceive visual images in the input end.
For image generation in the output end, we propose a dynamic vocabulary
transformation matrix and use straight-through and gumbel-softmax techniques to
align the large language model and stable diffusion model and achieve
end-to-end gradient propagation. We perform experiments on PhotoChat and
DialogCC datasets to evaluate our end-to-end model. Compared with pipeline
models, the end-to-end model gains state-of-the-art performances on various
metrics of text and image generation. More analysis experiments also verify the
effectiveness of the end-to-end model for photo-sharing multi-modal dialogue
generation.
| [
{
"version": "v1",
"created": "Fri, 16 Aug 2024 10:33:19 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 10:42:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Guo",
"Peiming",
""
],
[
"Liu",
"Sinuo",
""
],
[
"Zhang",
"Yanzhao",
""
],
[
"Long",
"Dingkun",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Zhang",
"Meishan",
""
],
[
"Zhang",
"Min",
""
]
] | TITLE: An End-to-End Model for Photo-Sharing Multi-modal Dialogue Generation
ABSTRACT: Photo-Sharing Multi-modal dialogue generation requires a dialogue agent not
only to generate text responses but also to share photos at the proper moment.
Using image text caption as the bridge, a pipeline model integrates an image
caption model, a text generation model, and an image generation model to handle
this complex multi-modal task. However, representing the images with text
captions may lose important visual details and information and cause error
propagation in the complex dialogue system. Besides, the pipeline model
isolates the three models separately because discrete image text captions
hinder end-to-end gradient propagation. We propose the first end-to-end model
for photo-sharing multi-modal dialogue generation, which integrates an image
perceptron and an image generator with a large language model. The large
language model employs the Q-Former to perceive visual images in the input end.
For image generation in the output end, we propose a dynamic vocabulary
transformation matrix and use straight-through and gumbel-softmax techniques to
align the large language model and stable diffusion model and achieve
end-to-end gradient propagation. We perform experiments on PhotoChat and
DialogCC datasets to evaluate our end-to-end model. Compared with pipeline
models, the end-to-end model gains state-of-the-art performances on various
metrics of text and image generation. More analysis experiments also verify the
effectiveness of the end-to-end model for photo-sharing multi-modal dialogue
generation.
|
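The record above relies on straight-through and Gumbel-softmax techniques to keep discrete image-token selection differentiable. The following is a minimal generic PyTorch sketch of a straight-through Gumbel-softmax lookup over a small embedding table; it does not reproduce the paper's dynamic vocabulary transformation matrix:

```python
import torch
import torch.nn.functional as F

vocab_size, embed_dim = 16, 8
token_embeddings = torch.randn(vocab_size, embed_dim)    # fixed embedding table
logits = torch.randn(1, vocab_size, requires_grad=True)  # model output scores

# hard=True returns a one-hot sample in the forward pass while gradients flow
# through the soft relaxation (the straight-through estimator).
one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
selected = one_hot @ token_embeddings                     # differentiable lookup

selected.sum().backward()
print(logits.grad.shape)  # gradients reach the logits despite the hard choice
```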
2408.10397 | Brian Moser | Vijul Shah, Brian B. Moser, Ko Watanabe, and Andreas Dengel | Webcam-based Pupil Diameter Prediction Benefits from Upscaling | null | null | 10.5220/0013162800003890 | null | cs.CV cs.AI cs.MM | http://creativecommons.org/licenses/by/4.0/ | Capturing pupil diameter is essential for assessing psychological and
physiological states such as stress levels and cognitive load. However, the low
resolution of images in eye datasets often hampers precise measurement. This
study evaluates the impact of various upscaling methods, ranging from bicubic
interpolation to advanced super-resolution, on pupil diameter predictions. We
compare several pre-trained methods, including CodeFormer, GFPGAN, Real-ESRGAN,
HAT, and SRResNet. Our findings suggest that pupil diameter prediction models
trained on upscaled datasets are highly sensitive to the selected upscaling
method and scale. Our results demonstrate that upscaling methods consistently
enhance the accuracy of pupil diameter prediction models, highlighting the
importance of upscaling in pupillometry. Overall, our work provides valuable
insights for selecting upscaling techniques, paving the way for more accurate
assessments in psychological and physiological research.
| [
{
"version": "v1",
"created": "Mon, 19 Aug 2024 20:28:39 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Dec 2024 19:35:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shah",
"Vijul",
""
],
[
"Moser",
"Brian B.",
""
],
[
"Watanabe",
"Ko",
""
],
[
"Dengel",
"Andreas",
""
]
] | TITLE: Webcam-based Pupil Diameter Prediction Benefits from Upscaling
ABSTRACT: Capturing pupil diameter is essential for assessing psychological and
physiological states such as stress levels and cognitive load. However, the low
resolution of images in eye datasets often hampers precise measurement. This
study evaluates the impact of various upscaling methods, ranging from bicubic
interpolation to advanced super-resolution, on pupil diameter predictions. We
compare several pre-trained methods, including CodeFormer, GFPGAN, Real-ESRGAN,
HAT, and SRResNet. Our findings suggest that pupil diameter prediction models
trained on upscaled datasets are highly sensitive to the selected upscaling
method and scale. Our results demonstrate that upscaling methods consistently
enhance the accuracy of pupil diameter prediction models, highlighting the
importance of upscaling in pupillometry. Overall, our work provides valuable
insights for selecting upscaling techniques, paving the way for more accurate
assessments in psychological and physiological research.
|
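The record above compares learned super-resolution models against plain interpolation for webcam eye images. As a hedged baseline sketch only, the snippet below applies bicubic upscaling to a dummy image tensor; the studied super-resolution models (e.g., Real-ESRGAN, HAT) would replace this single call:

```python
import torch
import torch.nn.functional as F

# A 1x3x32x64 tensor standing in for a low-resolution webcam eye crop.
low_res = torch.rand(1, 3, 32, 64)

# Bicubic interpolation, the simplest upscaling baseline mentioned above.
high_res = F.interpolate(low_res, scale_factor=4, mode="bicubic", align_corners=False)
print(high_res.shape)  # torch.Size([1, 3, 128, 256])
```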
2408.11836 | Alexandre Matov | Alexandre Matov | Analysis of Unstructured High-Density Crowded Scenes for Crowd
Monitoring | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We are interested in developing an automated system for detection of
organized movements in human crowds. Computer vision algorithms can extract
information from videos of crowded scenes and automatically detect and track
groups of individuals undergoing organized motion that represents an anomalous
behavior in the context of conflict aversion. Our system can detect organized
cohorts against the background of randomly moving objects and we can estimate
the number of participants in an organized cohort, the speed and direction of
motion in real time, within three to four video frames, which is less than one
second from the onset of motion captured on a CCTV. We have performed
preliminary analysis in this context in biological cell data containing up to
four thousand objects per frame and will extend this numerically to a
hundred-fold for public safety applications.
We envisage using the existing infrastructure of video cameras for acquiring
image datasets on-the-fly and deploying an easy-to-use data-driven software
system for parsing of significant events by analyzing image sequences taken
inside and outside of sports stadiums or other public venues. Other prospective
users are organizers of political rallies, civic and wildlife organizations,
security firms, and the military. We will optimize the performance of the
software by implementing a classification method able to distinguish between
activities posing a threat and those not posing a threat.
| [
{
"version": "v1",
"created": "Tue, 6 Aug 2024 22:09:50 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Aug 2024 02:38:07 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Aug 2024 20:31:08 GMT"
},
{
"version": "v4",
"created": "Tue, 10 Sep 2024 15:00:14 GMT"
},
{
"version": "v5",
"created": "Sat, 2 Nov 2024 23:45:39 GMT"
},
{
"version": "v6",
"created": "Tue, 10 Dec 2024 21:05:37 GMT"
},
{
"version": "v7",
"created": "Wed, 12 Feb 2025 02:12:47 GMT"
},
{
"version": "v8",
"created": "Sat, 22 Feb 2025 04:16:41 GMT"
},
{
"version": "v9",
"created": "Sun, 30 Mar 2025 01:21:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Matov",
"Alexandre",
""
]
] | TITLE: Analysis of Unstructured High-Density Crowded Scenes for Crowd
Monitoring
ABSTRACT: We are interested in developing an automated system for detection of
organized movements in human crowds. Computer vision algorithms can extract
information from videos of crowded scenes and automatically detect and track
groups of individuals undergoing organized motion that represents an anomalous
behavior in the context of conflict aversion. Our system can detect organized
cohorts against the background of randomly moving objects and we can estimate
the number of participants in an organized cohort, the speed and direction of
motion in real time, within three to four video frames, which is less than one
second from the onset of motion captured on a CCTV. We have performed
preliminary analysis in this context in biological cell data containing up to
four thousand objects per frame and will extend this numerically to a
hundred-fold for public safety applications.
We envisage using the existing infrastructure of video cameras for acquiring
image datasets on-the-fly and deploying an easy-to-use data-driven software
system for parsing of significant events by analyzing image sequences taken
inside and outside of sports stadiums or other public venues. Other prospective
users are organizers of political rallies, civic and wildlife organizations,
security firms, and the military. We will optimize the performance of the
software by implementing a classification method able to distinguish between
activities posing a threat and those not posing a threat.
|
2408.13006 | Hui Wei | Hui Wei, Shenghua He, Tian Xia, Fei Liu, Andy Wong, Jingyang Lin, Mei
Han | Systematic Evaluation of LLM-as-a-Judge in LLM Alignment Tasks:
Explainable Metrics and Diverse Prompt Templates | Accepted by Building Trust in LLMs and LLM Applications workshop at
ICLR 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | LLM-as-a-Judge has been widely applied to evaluate and compare different LLM
alignment approaches (e.g., RLHF and DPO). However, concerns regarding its
reliability have emerged, due to LLM judges' biases and inconsistent
decision-making. Previous research has developed evaluation frameworks to
assess reliability of LLM judges and their alignment with human preferences.
However, the employed evaluation metrics often lack adequate explainability and
fail to address LLM internal inconsistency. Additionally, existing studies
inadequately explore the impact of various prompt templates when applying
LLM-as-a-Judge methods, leading to potentially inconsistent comparisons between
different alignment algorithms. In this work, we systematically evaluate
LLM-as-a-Judge on alignment tasks by defining more theoretically interpretable
evaluation metrics and explicitly mitigating LLM internal inconsistency from
reliability metrics. We develop an open-source framework to evaluate, compare,
and visualize the reliability and alignment of LLM judges, which facilitates
practitioners to choose LLM judges for alignment tasks. In the experiments, we
examine effects of diverse prompt templates on LLM-judge reliability and also
demonstrate our developed framework by comparing various LLM judges on two
common alignment datasets (i.e., TL;DR Summarization and HH-RLHF-Helpfulness).
Our results indicate a significant impact of prompt templates on LLM judge
performance, as well as a mediocre alignment level between the tested LLM
judges and human evaluators.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2024 11:49:01 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 17:59:47 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wei",
"Hui",
""
],
[
"He",
"Shenghua",
""
],
[
"Xia",
"Tian",
""
],
[
"Liu",
"Fei",
""
],
[
"Wong",
"Andy",
""
],
[
"Lin",
"Jingyang",
""
],
[
"Han",
"Mei",
""
]
] | TITLE: Systematic Evaluation of LLM-as-a-Judge in LLM Alignment Tasks:
Explainable Metrics and Diverse Prompt Templates
ABSTRACT: LLM-as-a-Judge has been widely applied to evaluate and compare different LLM
alignment approaches (e.g., RLHF and DPO). However, concerns regarding its
reliability have emerged, due to LLM judges' biases and inconsistent
decision-making. Previous research has developed evaluation frameworks to
assess reliability of LLM judges and their alignment with human preferences.
However, the employed evaluation metrics often lack adequate explainability and
fail to address LLM internal inconsistency. Additionally, existing studies
inadequately explore the impact of various prompt templates when applying
LLM-as-a-Judge methods, leading to potentially inconsistent comparisons between
different alignment algorithms. In this work, we systematically evaluate
LLM-as-a-Judge on alignment tasks by defining more theoretically interpretable
evaluation metrics and explicitly mitigating LLM internal inconsistency from
reliability metrics. We develop an open-source framework to evaluate, compare,
and visualize the reliability and alignment of LLM judges, which facilitates
practitioners to choose LLM judges for alignment tasks. In the experiments, we
examine effects of diverse prompt templates on LLM-judge reliability and also
demonstrate our developed framework by comparing various LLM judges on two
common alignment datasets (i.e., TL;DR Summarization and HH-RLHF-Helpfulness).
Our results indicate a significant impact of prompt templates on LLM judge
performance, as well as a mediocre alignment level between the tested LLM
judges and human evaluators.
|
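The record above highlights LLM-judge internal inconsistency, for example giving a different winner when the response order is swapped. The helper below is a simple order-consistency rate over paired verdicts; it is an illustrative proxy, not the explainable metrics defined in the paper:

```python
def order_consistency(verdicts_original, verdicts_swapped):
    """Fraction of comparisons where the judge picks the same winner
    regardless of the order in which the two responses are shown."""
    assert len(verdicts_original) == len(verdicts_swapped)
    agree = sum(a == b for a, b in zip(verdicts_original, verdicts_swapped))
    return agree / len(verdicts_original)

# Toy verdicts ("A" or "B") from the same judge with original vs. swapped order.
print(order_consistency(["A", "B", "A", "A"], ["A", "A", "A", "B"]))  # 0.5
```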
2408.13065 | Rotem Benisty | Rotem Benisty, Yevgenia Shteynman, Moshe Porat, Anat Ilivitzki, Moti
Freiman | SIMPLE: Simultaneous Multi-Plane Self-Supervised Learning for Isotropic
MRI Restoration from Anisotropic Data | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Magnetic resonance imaging (MRI) is crucial in diagnosing various abdominal
conditions and anomalies. Traditional MRI scans often yield anisotropic data
due to technical constraints, resulting in varying resolutions across spatial
dimensions, which limits diagnostic accuracy and volumetric analysis.
Super-resolution (SR) techniques aim to address these limitations by
reconstructing isotropic high-resolution images from anisotropic data. However,
current SR methods often depend on indirect mappings and scarce 3D isotropic
data for training, primarily focusing on two-dimensional enhancements rather
than achieving genuine three-dimensional isotropy. We introduce ``SIMPLE,'' a
Simultaneous Multi-Plane Self-Supervised Learning approach for isotropic MRI
restoration from anisotropic data. Our method leverages existing anisotropic
clinical data acquired in different planes, bypassing the need for simulated
downsampling processes. By considering the inherent three-dimensional nature of
MRI data, SIMPLE ensures realistic isotropic data generation rather than solely
improving through-plane slices. This approach's flexibility allows it to be
extended to multiple contrast types and acquisition methods commonly used in
clinical settings. Our experiments on two distinct datasets (brain and abdomen)
show that SIMPLE outperforms state-of-the-art methods quantitatively using
the Kernel Inception Distance (KID), semi-quantitatively through radiologist
evaluations, and qualitatively through Fourier domain analysis. The generated
isotropic volume facilitates more accurate volumetric analysis and 3D
reconstructions, promising significant improvements in clinical diagnostic
capabilities.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2024 13:48:11 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 16:21:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Benisty",
"Rotem",
""
],
[
"Shteynman",
"Yevgenia",
""
],
[
"Porat",
"Moshe",
""
],
[
"Ilivitzki",
"Anat",
""
],
[
"Freiman",
"Moti",
""
]
] | TITLE: SIMPLE: Simultaneous Multi-Plane Self-Supervised Learning for Isotropic
MRI Restoration from Anisotropic Data
ABSTRACT: Magnetic resonance imaging (MRI) is crucial in diagnosing various abdominal
conditions and anomalies. Traditional MRI scans often yield anisotropic data
due to technical constraints, resulting in varying resolutions across spatial
dimensions, which limits diagnostic accuracy and volumetric analysis.
Super-resolution (SR) techniques aim to address these limitations by
reconstructing isotropic high-resolution images from anisotropic data. However,
current SR methods often depend on indirect mappings and scarce 3D isotropic
data for training, primarily focusing on two-dimensional enhancements rather
than achieving genuine three-dimensional isotropy. We introduce ``SIMPLE,'' a
Simultaneous Multi-Plane Self-Supervised Learning approach for isotropic MRI
restoration from anisotropic data. Our method leverages existing anisotropic
clinical data acquired in different planes, bypassing the need for simulated
downsampling processes. By considering the inherent three-dimensional nature of
MRI data, SIMPLE ensures realistic isotropic data generation rather than solely
improving through-plane slices. This approach's flexibility allows it to be
extended to multiple contrast types and acquisition methods commonly used in
clinical settings. Our experiments on two distinct datasets (brain and abdomen)
show that SIMPLE outperforms state-of-the-art methods quantitatively using
the Kernel Inception Distance (KID), semi-quantitatively through radiologist
evaluations, and qualitatively through Fourier domain analysis. The generated
isotropic volume facilitates more accurate volumetric analysis and 3D
reconstructions, promising significant improvements in clinical diagnostic
capabilities.
|
2409.00317 | Ziwei Sun | Zi-Wei Sun, Ze-Xi Hua, Heng-Chao Li, Zhi-Peng Qi, Xiang Li, Yan Li,
and Jin-Chi Zhang | FBD-SV-2024: Flying Bird Object Detection Dataset in Surveillance Video | null | [J]. Scientific Data, 2025, 12(1): 530 | 10.1038/s41597-025-04872-6 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Flying Bird Dataset for Surveillance Videos (FBD-SV-2024) is introduced and
tailored for the development and performance evaluation of flying bird
detection algorithms in surveillance videos. This dataset comprises 483 video
clips, amounting to 28,694 frames in total. Among them, 23,833 frames contain
28,366 instances of flying birds. The proposed dataset of flying birds in
surveillance videos is collected from realistic surveillance scenarios, where
the birds exhibit characteristics such as inconspicuous features in single
frames (in some instances), generally small sizes, and shape variability during
flight. These attributes pose challenges that need to be addressed when
developing flying bird detection methods for surveillance videos. Finally,
advanced (video) object detection algorithms were selected for experimentation
on the proposed dataset, and the results demonstrated that this dataset remains
challenging for the algorithms above. The FBD-SV-2024 is now publicly
available: Please visit https://github.com/Ziwei89/FBD-SV-2024_github for the
dataset download link and related processing scripts.
| [
{
"version": "v1",
"created": "Sat, 31 Aug 2024 01:11:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sun",
"Zi-Wei",
""
],
[
"Hua",
"Ze-Xi",
""
],
[
"Li",
"Heng-Chao",
""
],
[
"Qi",
"Zhi-Peng",
""
],
[
"Li",
"Xiang",
""
],
[
"Li",
"Yan",
""
],
[
"Zhang",
"Jin-Chi",
""
]
] | TITLE: FBD-SV-2024: Flying Bird Object Detection Dataset in Surveillance Video
ABSTRACT: A Flying Bird Dataset for Surveillance Videos (FBD-SV-2024) is introduced and
tailored for the development and performance evaluation of flying bird
detection algorithms in surveillance videos. This dataset comprises 483 video
clips, amounting to 28,694 frames in total. Among them, 23,833 frames contain
28,366 instances of flying birds. The proposed dataset of flying birds in
surveillance videos is collected from realistic surveillance scenarios, where
the birds exhibit characteristics such as inconspicuous features in single
frames (in some instances), generally small sizes, and shape variability during
flight. These attributes pose challenges that need to be addressed when
developing flying bird detection methods for surveillance videos. Finally,
advanced (video) object detection algorithms were selected for experimentation
on the proposed dataset, and the results demonstrated that this dataset remains
challenging for the algorithms above. The FBD-SV-2024 is now publicly
available: Please visit https://github.com/Ziwei89/FBD-SV-2024_github for the
dataset download link and related processing scripts.
|
2409.02729 | Umaima Rahman | Umaima Rahman, Raza Imam, Mohammad Yaqub, Boulbaba Ben Amor,
Dwarikanath Mahapatra | Can language-guided unsupervised adaptation improve medical image
classification using unpaired images and texts? | Conference paper at International Symposium on Biomedical Imaging
(ISBI) 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In medical image classification, supervised learning is challenging due to
the scarcity of labeled medical images. To address this, we leverage the
visual-textual alignment within Vision-Language Models (VLMs) to enable
unsupervised learning of a medical image classifier. In this work, we propose
\underline{Med}ical \underline{Un}supervised \underline{A}daptation
(\texttt{MedUnA}) of VLMs, where the LLM-generated descriptions for each class
are encoded into text embeddings and matched with class labels via a
cross-modal adapter. This adapter attaches to a visual encoder of
\texttt{MedCLIP} and aligns the visual embeddings through unsupervised
learning, driven by a contrastive entropy-based loss and prompt tuning.
This improves performance in scenarios where textual information is more
abundant than labeled images, particularly in the healthcare domain. Unlike
traditional VLMs, \texttt{MedUnA} uses \textbf{unpaired images and text} for
learning representations and enhances the potential of VLMs beyond traditional
constraints. We evaluate the performance on three chest X-ray datasets and two
multi-class datasets (diabetic retinopathy and skin lesions), showing
significant accuracy gains over the zero-shot baseline. Our code is available
at https://github.com/rumaima/meduna.
| [
{
"version": "v1",
"created": "Tue, 3 Sep 2024 09:25:51 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 19:44:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Rahman",
"Umaima",
""
],
[
"Imam",
"Raza",
""
],
[
"Yaqub",
"Mohammad",
""
],
[
"Amor",
"Boulbaba Ben",
""
],
[
"Mahapatra",
"Dwarikanath",
""
]
] | TITLE: Can language-guided unsupervised adaptation improve medical image
classification using unpaired images and texts?
ABSTRACT: In medical image classification, supervised learning is challenging due to
the scarcity of labeled medical images. To address this, we leverage the
visual-textual alignment within Vision-Language Models (VLMs) to enable
unsupervised learning of a medical image classifier. In this work, we propose
\underline{Med}ical \underline{Un}supervised \underline{A}daptation
(\texttt{MedUnA}) of VLMs, where the LLM-generated descriptions for each class
are encoded into text embeddings and matched with class labels via a
cross-modal adapter. This adapter attaches to a visual encoder of
\texttt{MedCLIP} and aligns the visual embeddings through unsupervised
learning, driven by a contrastive entropy-based loss and prompt tuning.
This improves performance in scenarios where textual information is more
abundant than labeled images, particularly in the healthcare domain. Unlike
traditional VLMs, \texttt{MedUnA} uses \textbf{unpaired images and text} for
learning representations and enhances the potential of VLMs beyond traditional
constraints. We evaluate the performance on three chest X-ray datasets and two
multi-class datasets (diabetic retinopathy and skin lesions), showing
significant accuracy gains over the zero-shot baseline. Our code is available
at https://github.com/rumaima/meduna.
|
2409.05206 | Dimitris G. Sotiropoulos PhD | E. V. Aretos and D. G. Sotiropoulos | SEF: A Method for Computing Prediction Intervals by Shifting the Error
Function in Neural Networks | The paper has been accepted at the 2024 International Conference on
Computer and Applications (ICCA24), Cairo, Egypt, December 17-19, 2024.
https://icca-conf.info/icca-2024 | 2024 International Conference on Computer and Applications (ICCA),
pp. 1-8, 2024 | 10.1109/ICCA62237.2024.10927749 | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In today's era, Neural Networks (NN) are applied in various scientific fields
such as robotics, medicine, engineering, etc. However, the predictions of
neural networks themselves contain a degree of uncertainty that must always be
taken into account before any decision is made. This is why many researchers
have focused on developing different ways to quantify the uncertainty of neural
network predictions. Some of these methods are based on generating prediction
intervals (PI) via neural networks for the requested target values. The SEF
(Shifting the Error Function) method presented in this paper is a new method
that belongs to this category of methods. The proposed approach involves
training a single neural network three times, thus generating an estimate along
with the corresponding upper and lower bounds for a given problem. A pivotal
aspect of the method is the calculation of a parameter from the initial
network's estimates, which is then integrated into the loss functions of the
other two networks. This innovative process effectively produces PIs, resulting
in a robust and efficient technique for uncertainty quantification. To evaluate
the effectiveness of our method, a comparison in terms of successful PI
generation between the SEF, PI3NN and PIVEN methods was made using two
synthetic datasets.
| [
{
"version": "v1",
"created": "Sun, 8 Sep 2024 19:46:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Aretos",
"E. V.",
""
],
[
"Sotiropoulos",
"D. G.",
""
]
] | TITLE: SEF: A Method for Computing Prediction Intervals by Shifting the Error
Function in Neural Networks
ABSTRACT: In today's era, Neural Networks (NN) are applied in various scientific fields
such as robotics, medicine, engineering, etc. However, the predictions of
neural networks themselves contain a degree of uncertainty that must always be
taken into account before any decision is made. This is why many researchers
have focused on developing different ways to quantify the uncertainty of neural
network predictions. Some of these methods are based on generating prediction
intervals (PI) via neural networks for the requested target values. The SEF
(Shifting the Error Function) method presented in this paper is a new method
that belongs to this category of methods. The proposed approach involves
training a single neural network three times, thus generating an estimate along
with the corresponding upper and lower bounds for a given problem. A pivotal
aspect of the method is the calculation of a parameter from the initial
network's estimates, which is then integrated into the loss functions of the
other two networks. This innovative process effectively produces PIs, resulting
in a robust and efficient technique for uncertainty quantification. To evaluate
the effectiveness of our method, a comparison in terms of successful PI
generation between the SEF, PI3NN and PIVEN methods was made using two
synthetic datasets.
|
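The SEF record above compares prediction-interval methods (SEF, PI3NN, PIVEN) by how successfully they generate intervals. One standard way to score intervals is coverage; the sketch below computes the Prediction Interval Coverage Probability (PICP) and is a generic metric, not the SEF training procedure:

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction Interval Coverage Probability: fraction of targets that
    fall inside their predicted [lower, upper] interval."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return float(np.mean((lower <= y_true) & (y_true <= upper)))

# Toy check: three of the four targets are covered by their intervals.
print(picp([1.0, 2.0, 3.0, 4.0], [0.5, 1.8, 3.2, 3.5], [1.5, 2.5, 3.8, 4.5]))
```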
2409.06615 | Prithwish Dan | Kushal Kedia, Prithwish Dan, Angela Chao, Maximus Adrian Pace,
Sanjiban Choudhury | One-Shot Imitation under Mismatched Execution | null | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Human demonstrations as prompts are a powerful way to program robots to do
long-horizon manipulation tasks. However, translating these demonstrations into
robot-executable actions presents significant challenges due to execution
mismatches in movement styles and physical capabilities. Existing methods for
human-robot translation either depend on paired data, which is infeasible to
scale, or rely heavily on frame-level visual similarities that often break down
in practice. To address these challenges, we propose RHyME, a novel framework
that automatically pairs human and robot trajectories using sequence-level
optimal transport cost functions. Given long-horizon robot demonstrations,
RHyME synthesizes semantically equivalent human videos by retrieving and
composing short-horizon human clips. This approach facilitates effective policy
training without the need for paired data. RHyME successfully imitates a range
of cross-embodiment demonstrators, both in simulation and with a real human
hand, achieving over 50% increase in task success compared to previous methods.
We release our code and datasets at https://portal-cornell.github.io/rhyme/.
| [
{
"version": "v1",
"created": "Tue, 10 Sep 2024 16:11:57 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Sep 2024 18:33:45 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Oct 2024 18:27:19 GMT"
},
{
"version": "v4",
"created": "Wed, 16 Oct 2024 02:19:05 GMT"
},
{
"version": "v5",
"created": "Wed, 5 Mar 2025 16:07:20 GMT"
},
{
"version": "v6",
"created": "Fri, 28 Mar 2025 22:16:37 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kedia",
"Kushal",
""
],
[
"Dan",
"Prithwish",
""
],
[
"Chao",
"Angela",
""
],
[
"Pace",
"Maximus Adrian",
""
],
[
"Choudhury",
"Sanjiban",
""
]
] | TITLE: One-Shot Imitation under Mismatched Execution
ABSTRACT: Human demonstrations as prompts are a powerful way to program robots to do
long-horizon manipulation tasks. However, translating these demonstrations into
robot-executable actions presents significant challenges due to execution
mismatches in movement styles and physical capabilities. Existing methods for
human-robot translation either depend on paired data, which is infeasible to
scale, or rely heavily on frame-level visual similarities that often break down
in practice. To address these challenges, we propose RHyME, a novel framework
that automatically pairs human and robot trajectories using sequence-level
optimal transport cost functions. Given long-horizon robot demonstrations,
RHyME synthesizes semantically equivalent human videos by retrieving and
composing short-horizon human clips. This approach facilitates effective policy
training without the need for paired data. RHyME successfully imitates a range
of cross-embodiment demonstrators, both in simulation and with a real human
hand, achieving over 50% increase in task success compared to previous methods.
We release our code and datasets at https://portal-cornell.github.io/rhyme/.
|
2409.14583 | Vishal Mirza | Vishal Mirza, Rahul Kulkarni, Aakanksha Jadhav | Evaluating Gender, Racial, and Age Biases in Large Language Models: A
Comparative Analysis of Occupational and Crime Scenarios | 11 pages, 17 figures, Accepted at IEEE Conference on Artificial
Intelligence (IEEE CAI) 2025. Full Paper acceptance in the Vertical
HUMAN-CENTERED AI category | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advancements in Large Language Models (LLMs) have been notable, yet
widespread enterprise adoption remains limited due to various constraints. This
paper examines bias in LLMs, a crucial issue affecting their usability,
reliability, and fairness. Researchers are developing strategies to mitigate
bias, including debiasing layers, specialized reference datasets like
Winogender and Winobias, and reinforcement learning with human feedback (RLHF).
These techniques have been integrated into the latest LLMs. Our study evaluates
gender bias in occupational scenarios and gender, age, and racial bias in crime
scenarios across four leading LLMs released in 2024: Gemini 1.5 Pro, Llama 3
70B, Claude 3 Opus, and GPT-4o. Findings reveal that LLMs often depict female
characters more frequently than male ones in various occupations, showing a 37%
deviation from US BLS data. In crime scenarios, deviations from US FBI data are
54% for gender, 28% for race, and 17% for age. We observe that efforts to
reduce gender and racial bias often lead to outcomes that may over-index one
sub-class, potentially exacerbating the issue. These results highlight the
limitations of current bias mitigation techniques and underscore the need for
more effective approaches.
| [
{
"version": "v1",
"created": "Sun, 22 Sep 2024 20:21:20 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Oct 2024 05:41:03 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 01:41:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mirza",
"Vishal",
""
],
[
"Kulkarni",
"Rahul",
""
],
[
"Jadhav",
"Aakanksha",
""
]
] | TITLE: Evaluating Gender, Racial, and Age Biases in Large Language Models: A
Comparative Analysis of Occupational and Crime Scenarios
ABSTRACT: Recent advancements in Large Language Models (LLMs) have been notable, yet
widespread enterprise adoption remains limited due to various constraints. This
paper examines bias in LLMs, a crucial issue affecting their usability,
reliability, and fairness. Researchers are developing strategies to mitigate
bias, including debiasing layers, specialized reference datasets like
Winogender and Winobias, and reinforcement learning with human feedback (RLHF).
These techniques have been integrated into the latest LLMs. Our study evaluates
gender bias in occupational scenarios and gender, age, and racial bias in crime
scenarios across four leading LLMs released in 2024: Gemini 1.5 Pro, Llama 3
70B, Claude 3 Opus, and GPT-4o. Findings reveal that LLMs often depict female
characters more frequently than male ones in various occupations, showing a 37%
deviation from US BLS data. In crime scenarios, deviations from US FBI data are
54% for gender, 28% for race, and 17% for age. We observe that efforts to
reduce gender and racial bias often lead to outcomes that may over-index one
sub-class, potentially exacerbating the issue. These results highlight the
limitations of current bias mitigation techniques and underscore the need for
more effective approaches.
|
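The record above reports percentage deviations of LLM-generated demographics from US BLS and FBI reference data. As an illustration of that kind of comparison only (the numbers below are made up, not BLS figures, and this is not the paper's exact metric), one can average the absolute gap per category:

```python
def mean_abs_deviation(model_shares, reference_shares):
    """Average absolute gap between model-generated demographic shares and a
    reference distribution, taken over the listed categories."""
    keys = reference_shares.keys()
    return sum(abs(model_shares[k] - reference_shares[k]) for k in keys) / len(keys)

# Illustrative shares of female characters per occupation vs. a made-up reference.
model = {"engineer": 0.55, "nurse": 0.92, "teacher": 0.80}
reference = {"engineer": 0.16, "nurse": 0.87, "teacher": 0.74}
print(mean_abs_deviation(model, reference))
```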
2409.19581 | Gibong Hong | Gibong Hong, Veronica Hindle, Nadine M. Veasley, Hannah D. Holscher,
Halil Kilicoglu | DiMB-RE: Mining the Scientific Literature for Diet-Microbiome
Associations | Accepted for publication in Journal of the American Medical
Informatics Association. Please refer to the supplementary data if needed:
https://doi.org/10.1093/jamia/ocaf054 | null | 10.1093/jamia/ocaf054 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Objective: To develop a corpus annotated for diet-microbiome associations
from the biomedical literature and train natural language processing (NLP)
models to identify these associations, thereby improving the understanding of
their role in health and disease, and supporting personalized nutrition
strategies. Materials and Methods: We constructed DiMB-RE, a comprehensive
corpus annotated with 15 entity types (e.g., Nutrient, Microorganism) and 13
relation types (e.g., INCREASES, IMPROVES) capturing diet-microbiome
associations. We fine-tuned and evaluated state-of-the-art NLP models for named
entity, trigger, and relation extraction as well as factuality detection using
DiMB-RE. In addition, we benchmarked two generative large language models
(GPT-4o-mini and GPT-4o) on a subset of the dataset in zero- and one-shot
settings. Results: DiMB-RE consists of 14,450 entities and 4,206 relationships
from 165 publications (including 30 full-text Results sections). Fine-tuned NLP
models performed reasonably well for named entity recognition (0.800 F1 score),
while end-to-end relation extraction performance was modest (0.445 F1). The use
of Results section annotations improved relation extraction. The impact of
trigger detection was mixed. Generative models showed lower accuracy compared
to fine-tuned models. Discussion: To our knowledge, DiMB-RE is the largest and
most diverse corpus focusing on diet-microbiome interactions. NLP models
fine-tuned on DiMB-RE exhibit lower performance compared to similar corpora,
highlighting the complexity of information extraction in this domain.
Misclassified entities, missed triggers, and cross-sentence relations are the
major sources of relation extraction errors. Conclusions: DiMB-RE can serve as
a benchmark corpus for biomedical literature mining. DiMB-RE and the NLP models
are available at https://github.com/ScienceNLP-Lab/DiMB-RE.
| [
{
"version": "v1",
"created": "Sun, 29 Sep 2024 06:58:26 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 20:48:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hong",
"Gibong",
""
],
[
"Hindle",
"Veronica",
""
],
[
"Veasley",
"Nadine M.",
""
],
[
"Holscher",
"Hannah D.",
""
],
[
"Kilicoglu",
"Halil",
""
]
] | TITLE: DiMB-RE: Mining the Scientific Literature for Diet-Microbiome
Associations
ABSTRACT: Objective: To develop a corpus annotated for diet-microbiome associations
from the biomedical literature and train natural language processing (NLP)
models to identify these associations, thereby improving the understanding of
their role in health and disease, and supporting personalized nutrition
strategies. Materials and Methods: We constructed DiMB-RE, a comprehensive
corpus annotated with 15 entity types (e.g., Nutrient, Microorganism) and 13
relation types (e.g., INCREASES, IMPROVES) capturing diet-microbiome
associations. We fine-tuned and evaluated state-of-the-art NLP models for named
entity, trigger, and relation extraction as well as factuality detection using
DiMB-RE. In addition, we benchmarked two generative large language models
(GPT-4o-mini and GPT-4o) on a subset of the dataset in zero- and one-shot
settings. Results: DiMB-RE consists of 14,450 entities and 4,206 relationships
from 165 publications (including 30 full-text Results sections). Fine-tuned NLP
models performed reasonably well for named entity recognition (0.800 F1 score),
while end-to-end relation extraction performance was modest (0.445 F1). The use
of Results section annotations improved relation extraction. The impact of
trigger detection was mixed. Generative models showed lower accuracy compared
to fine-tuned models. Discussion: To our knowledge, DiMB-RE is the largest and
most diverse corpus focusing on diet-microbiome interactions. NLP models
fine-tuned on DiMB-RE exhibit lower performance compared to similar corpora,
highlighting the complexity of information extraction in this domain.
Misclassified entities, missed triggers, and cross-sentence relations are the
major sources of relation extraction errors. Conclusions: DiMB-RE can serve as
a benchmark corpus for biomedical literature mining. DiMB-RE and the NLP models
are available at https://github.com/ScienceNLP-Lab/DiMB-RE.
|
2410.00462 | Zhang Yuanwen | Yuanwen Zhang, Jingfeng Xiong, Haolan Xian, Chuheng Chen, Xinxing
Chen, Chenglong Fu, and Yuquan Leng | Joint Moment Estimation for Hip Exoskeleton Control: A Generalized
Moment Feature Generation Method | 13 pages, 10 figures, Submitted to Biomimetic Intelligence and
Robotics | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hip joint moments during walking are the key foundation for hip exoskeleton
assistance control. Most recent studies have shown that estimating hip joint moments
instantaneously offers many advantages compared to generating assistive
torque profiles based on gait estimation, such as simple sensor requirements
and adaptability to variable walking speeds. However, existing joint moment
estimation methods still suffer from a lack of personalization, leading to
estimation accuracy degradation for new users. To address the challenges, this
paper proposes a hip joint moment estimation method based on generalized moment
features (GMF). A GMF generator is constructed to learn GMF of the joint moment
which is invariant to individual variations while remaining decodable into
joint moments through a dedicated decoder. Utilizing this well-featured
representation, a GRU-based neural network is used to predict GMF with joint
kinematics data, which can easily be acquired by hip exoskeleton encoders. The
proposed estimation method achieves a root mean square error of 0.1180 Nm/kg
under 28 walking speed conditions on a treadmill dataset, improved by 6.5%
compared to the model without body parameter fusion, and by 8.3% for the
conventional fusion model with body parameter. Furthermore, the proposed method
was employed on a hip exoskeleton with only encoder sensors and achieved an
average 20.5% metabolic reduction (p<0.01) for users compared to assist-off
condition in level-ground walking.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 07:38:49 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 01:29:16 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Yuanwen",
""
],
[
"Xiong",
"Jingfeng",
""
],
[
"Xian",
"Haolan",
""
],
[
"Chen",
"Chuheng",
""
],
[
"Chen",
"Xinxing",
""
],
[
"Fu",
"Chenglong",
""
],
[
"Leng",
"Yuquan",
""
]
] | TITLE: Joint Moment Estimation for Hip Exoskeleton Control: A Generalized
Moment Feature Generation Method
ABSTRACT: Hip joint moments during walking are the key foundation for hip exoskeleton
assistance control. Most recent studies have shown that estimating hip joint moments
instantaneously offers many advantages compared to generating assistive
torque profiles based on gait estimation, such as simple sensor requirements
and adaptability to variable walking speeds. However, existing joint moment
estimation methods still suffer from a lack of personalization, leading to
estimation accuracy degradation for new users. To address the challenges, this
paper proposes a hip joint moment estimation method based on generalized moment
features (GMF). A GMF generator is constructed to learn GMF of the joint moment
which is invariant to individual variations while remaining decodable into
joint moments through a dedicated decoder. Utilizing this well-featured
representation, a GRU-based neural network is used to predict GMF with joint
kinematics data, which can easily be acquired by hip exoskeleton encoders. The
proposed estimation method achieves a root mean square error of 0.1180 Nm/kg
under 28 walking speed conditions on a treadmill dataset, improved by 6.5%
compared to the model without body parameter fusion, and by 8.3% for the
conventional fusion model with body parameter. Furthermore, the proposed method
was employed on a hip exoskeleton with only encoder sensors and achieved an
average 20.5% metabolic reduction (p<0.01) for users compared to assist-off
condition in level-ground walking.
|
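The record above reports estimation error as a root mean square error of 0.1180 Nm/kg, i.e., joint moment normalized by body mass. A short sketch of that metric (toy numbers, not data from the paper):

```python
import numpy as np

def rmse_per_kg(predicted_moment_nm, true_moment_nm, body_mass_kg):
    """RMSE of hip joint moment normalized by body mass, in Nm/kg."""
    diff = (np.asarray(predicted_moment_nm) - np.asarray(true_moment_nm)) / body_mass_kg
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy moments in Nm for a 70 kg subject.
print(rmse_per_kg([60.0, 45.0, -20.0], [58.0, 50.0, -18.0], body_mass_kg=70.0))
```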
2410.01532 | Angela Lopez | Angela Lopez-Cardona and Carlos Segura and Alexandros Karatzoglou and
Sergi Abadal and Ioannis Arapakis | Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for
Large Language Models | This paper has been accepted to ICLR 2025 | null | null | null | cs.CL cs.AI cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | Advancements in Natural Language Processing (NLP) have led to the emergence
of Large Language Models (LLMs) such as GPT, Llama, Claude, and Gemini, which
excel across a range of tasks but require extensive fine-tuning to align their
outputs with human expectations. A widely used method for achieving this
alignment is Reinforcement Learning from Human Feedback (RLHF), which, despite
its success, faces challenges in accurately modelling human preferences. In
this paper, we introduce GazeReward, a novel framework that integrates implicit
feedback -- and specifically eye-tracking (ET) data -- into the Reward Model
(RM). In addition, we explore how ET-based features can provide insights into
user preferences. Through ablation studies we test our framework with different
integration methods, LLMs, and ET generator models, demonstrating that our
approach significantly improves the accuracy of the RM on established human
preference datasets. This work advances the ongoing discussion on optimizing AI
alignment with human values, exploring the potential of cognitive data for
shaping future NLP research.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 13:24:56 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 22:37:13 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 11:32:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lopez-Cardona",
"Angela",
""
],
[
"Segura",
"Carlos",
""
],
[
"Karatzoglou",
"Alexandros",
""
],
[
"Abadal",
"Sergi",
""
],
[
"Arapakis",
"Ioannis",
""
]
] | TITLE: Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for
Large Language Models
ABSTRACT: Advancements in Natural Language Processing (NLP) have led to the emergence
of Large Language Models (LLMs) such as GPT, Llama, Claude, and Gemini, which
excel across a range of tasks but require extensive fine-tuning to align their
outputs with human expectations. A widely used method for achieving this
alignment is Reinforcement Learning from Human Feedback (RLHF), which, despite
its success, faces challenges in accurately modelling human preferences. In
this paper, we introduce GazeReward, a novel framework that integrates implicit
feedback -- and specifically eye-tracking (ET) data -- into the Reward Model
(RM). In addition, we explore how ET-based features can provide insights into
user preferences. Through ablation studies we test our framework with different
integration methods, LLMs, and ET generator models, demonstrating that our
approach significantly improves the accuracy of the RM on established human
preference datasets. This work advances the ongoing discussion on optimizing AI
alignment with human values, exploring the potential of cognitive data for
shaping future NLP research.
|
2410.02247 | Xinhao Yao | Xinhao Yao, Hongjin Qian, Xiaolin Hu, Gengze Xu, Yong Liu, Wei Liu,
Jian Luan, Bin Wang | Theoretical Insights into Fine-Tuning Attention Mechanism:
Generalization and Optimization | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs), built on Transformer architectures, exhibit
remarkable generalization across a wide range of tasks. However, fine-tuning
these models for specific tasks remains resource-intensive due to their
extensive parameterization. In this paper, we investigate two remarkable
phenomena related to the attention mechanism during the fine-tuning of LLMs.
The first phenomenon, termed "Unequal Importance of Attention Matrices,"
highlights the impact of fine-tuning different weight matrices. It shows that
optimizing the $\mathbf{W}_v$ matrix yields significantly better performance
than optimizing the $\mathbf{W}_k$ matrix. Fine-tuning only the $\mathbf{W}_q$
and $\mathbf{W}_v$ matrices is computationally efficient while delivering
results comparable to, or even better than fine-tuning all three matrices
($\mathbf{W}_q$, $\mathbf{W}_k$, and $\mathbf{W}_v$). The second phenomenon,
"Attention Matrices with Customized Learning Rate Leads to Better Convergence,"
emphasizes the importance of assigning distinct learning rates to these
matrices. Specifically, a higher learning rate for the $\mathbf{W}_v$ matrix
compared to $\mathbf{W}_q$ and $\mathbf{W}_k$ accelerates convergence and
improves performance. Building on these insights, we propose a new strategy
that improves fine-tuning efficiency in terms of both storage and time.
Experimental results on benchmark datasets validate the effectiveness of this
approach, supporting our theoretical findings. Our analysis lays the
theoretical groundwork for configuring and improving lightweight algorithms in
LLMs fine-tuning.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 06:37:37 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 16:16:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yao",
"Xinhao",
""
],
[
"Qian",
"Hongjin",
""
],
[
"Hu",
"Xiaolin",
""
],
[
"Xu",
"Gengze",
""
],
[
"Liu",
"Yong",
""
],
[
"Liu",
"Wei",
""
],
[
"Luan",
"Jian",
""
],
[
"Wang",
"Bin",
""
]
] | TITLE: Theoretical Insights into Fine-Tuning Attention Mechanism:
Generalization and Optimization
ABSTRACT: Large Language Models (LLMs), built on Transformer architectures, exhibit
remarkable generalization across a wide range of tasks. However, fine-tuning
these models for specific tasks remains resource-intensive due to their
extensive parameterization. In this paper, we investigate two remarkable
phenomena related to the attention mechanism during the fine-tuning of LLMs.
The first phenomenon, termed "Unequal Importance of Attention Matrices,"
highlights the impact of fine-tuning different weight matrices. It shows that
optimizing the $\mathbf{W}_v$ matrix yields significantly better performance
than optimizing the $\mathbf{W}_k$ matrix. Fine-tuning only the $\mathbf{W}_q$
and $\mathbf{W}_v$ matrices is computationally efficient while delivering
results comparable to, or even better than fine-tuning all three matrices
($\mathbf{W}_q$, $\mathbf{W}_k$, and $\mathbf{W}_v$). The second phenomenon,
"Attention Matrices with Customized Learning Rate Leads to Better Convergence,"
emphasizes the importance of assigning distinct learning rates to these
matrices. Specifically, a higher learning rate for the $\mathbf{W}_v$ matrix
compared to $\mathbf{W}_q$ and $\mathbf{W}_k$ accelerates convergence and
improves performance. Building on these insights, we propose a new strategy
that improves fine-tuning efficiency in terms of both storage and time.
Experimental results on benchmark datasets validate the effectiveness of this
approach, supporting our theoretical findings. Our analysis lays the
theoretical groundwork for configuring and improving lightweight algorithms in
LLM fine-tuning.
|
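The fine-tuning recipe summarized in the record above (arXiv:2410.02247) can be pictured with a short PyTorch sketch: freeze W_k, tune only W_q and W_v, and give W_v a higher learning rate than W_q. This is an illustrative assumption of one possible setup, not the authors' code; the module, dimensions, and the 5x learning-rate ratio are placeholders.

import torch
import torch.nn as nn

class SimpleSelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)
        self.W_k = nn.Linear(dim, dim, bias=False)
        self.W_v = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                      # x: (batch, seq, dim)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v

attn = SimpleSelfAttention(dim=64)

# Freeze W_k; the abstract reports that tuning it contributes least.
for p in attn.W_k.parameters():
    p.requires_grad = False

# Assign W_v a larger learning rate than W_q (the exact ratio is a guess).
optimizer = torch.optim.AdamW([
    {"params": attn.W_q.parameters(), "lr": 1e-4},
    {"params": attn.W_v.parameters(), "lr": 5e-4},
])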
2410.02344 | Felix Zimmer | Felix Zimmer | RelChaNet: Neural Network Feature Selection using Relative Change Scores | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is an ongoing effort to develop feature selection algorithms to improve
interpretability, reduce computational resources, and minimize overfitting in
predictive models. Neural networks stand out as architectures on which to build
feature selection methods, and recently, neuron pruning and regrowth have
emerged from the sparse neural network literature as promising new tools. We
introduce RelChaNet, a novel and lightweight supervised feature selection
algorithm that uses neuron pruning and regrowth in the input layer of a dense
neural network. For neuron pruning, a gradient sum metric measures the relative
change induced in a network after a feature enters, while neurons are randomly
regrown. We also propose an extension that adapts the size of the input layer
at runtime. Extensive experiments on 13 different datasets show that our
approach generally outperforms the current state-of-the-art methods, and in
particular improves the average accuracy by 2% on the MNIST dataset. Our code
is available at https://github.com/flxzimmer/relchanet.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 09:56:39 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 10:43:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zimmer",
"Felix",
""
]
] | TITLE: RelChaNet: Neural Network Feature Selection using Relative Change Scores
ABSTRACT: There is an ongoing effort to develop feature selection algorithms to improve
interpretability, reduce computational resources, and minimize overfitting in
predictive models. Neural networks stand out as architectures on which to build
feature selection methods, and recently, neuron pruning and regrowth have
emerged from the sparse neural network literature as promising new tools. We
introduce RelChaNet, a novel and lightweight supervised feature selection
algorithm that uses neuron pruning and regrowth in the input layer of a dense
neural network. For neuron pruning, a gradient sum metric measures the relative
change induced in a network after a feature enters, while neurons are randomly
regrown. We also propose an extension that adapts the size of the input layer
at runtime. Extensive experiments on 13 different datasets show that our
approach generally outperforms the current state-of-the-art methods, and in
particular improves the average accuracy by 2% on the MNIST dataset. Our code
is available at https://github.com/flxzimmer/relchanet.
|
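As a rough illustration of the input-layer pruning-and-regrowth loop described in the RelChaNet record above (arXiv:2410.02344), the toy script below scores each input feature with a gradient-sum proxy on the first layer of a dense network, keeps the top-k features, and randomly selects others to regrow. The data, network size, and exact scoring rule are assumptions, not the paper's implementation.

import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))   # toy data, 20 candidate features
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

scores = torch.zeros(20)                                    # per-input-feature score
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    # Accumulate gradient magnitudes on the input layer, summed over hidden units.
    scores += model[0].weight.grad.abs().sum(dim=0)
    opt.step()

k = 8
keep = torch.topk(scores, k).indices                        # prune the rest ...
regrow = torch.randperm(20)[:2]                             # ... and randomly regrow a few
print("kept:", sorted(keep.tolist()), "regrown:", regrow.tolist())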
2410.02646 | Jinsu Yoo | Jinsu Yoo, Zhenyang Feng, Tai-Yu Pan, Yihong Sun, Cheng Perng Phoo,
Xiangyu Chen, Mark Campbell, Kilian Q. Weinberger, Bharath Hariharan, Wei-Lun
Chao | Learning 3D Perception from Others' Predictions | Accepted to ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate 3D object detection in real-world environments requires a huge
amount of annotated data with high quality. Acquiring such data is tedious and
expensive, and often needs repeated effort when a new sensor is adopted or when
the detector is deployed in a new environment. We investigate a new scenario to
construct 3D object detectors: learning from the predictions of a nearby unit
that is equipped with an accurate detector. For example, when a self-driving
car enters a new area, it may learn from other traffic participants whose
detectors have been optimized for that area. This setting is label-efficient,
sensor-agnostic, and communication-efficient: nearby units only need to share
the predictions with the ego agent (e.g., car). Naively using the received
predictions as ground-truths to train the detector for the ego car, however,
leads to inferior performance. We systematically study the problem and identify
viewpoint mismatches and mislocalization (due to synchronization and GPS
errors) as the main causes, which unavoidably result in false positives, false
negatives, and inaccurate pseudo labels. We propose a distance-based
curriculum, first learning from closer units with similar viewpoints and
subsequently improving the quality of other units' predictions via
self-training. We further demonstrate that an effective pseudo label refinement
module can be trained with a handful of annotated data, largely reducing the
data quantity necessary to train an object detector. We validate our approach
on the recently released real-world collaborative driving dataset, using
reference cars' predictions as pseudo labels for the ego car. Extensive
experiments including several scenarios (e.g., different sensors, detectors,
and domains) demonstrate the effectiveness of our approach toward
label-efficient learning of 3D perception from other units' predictions.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 16:31:28 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Oct 2024 16:35:32 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 21:01:54 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yoo",
"Jinsu",
""
],
[
"Feng",
"Zhenyang",
""
],
[
"Pan",
"Tai-Yu",
""
],
[
"Sun",
"Yihong",
""
],
[
"Phoo",
"Cheng Perng",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Campbell",
"Mark",
""
],
[
"Weinberger",
"Kilian Q.",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Chao",
"Wei-Lun",
""
]
] | TITLE: Learning 3D Perception from Others' Predictions
ABSTRACT: Accurate 3D object detection in real-world environments requires a huge
amount of annotated data with high quality. Acquiring such data is tedious and
expensive, and often needs repeated effort when a new sensor is adopted or when
the detector is deployed in a new environment. We investigate a new scenario to
construct 3D object detectors: learning from the predictions of a nearby unit
that is equipped with an accurate detector. For example, when a self-driving
car enters a new area, it may learn from other traffic participants whose
detectors have been optimized for that area. This setting is label-efficient,
sensor-agnostic, and communication-efficient: nearby units only need to share
the predictions with the ego agent (e.g., car). Naively using the received
predictions as ground-truths to train the detector for the ego car, however,
leads to inferior performance. We systematically study the problem and identify
viewpoint mismatches and mislocalization (due to synchronization and GPS
errors) as the main causes, which unavoidably result in false positives, false
negatives, and inaccurate pseudo labels. We propose a distance-based
curriculum, first learning from closer units with similar viewpoints and
subsequently improving the quality of other units' predictions via
self-training. We further demonstrate that an effective pseudo label refinement
module can be trained with a handful of annotated data, largely reducing the
data quantity necessary to train an object detector. We validate our approach
on the recently released real-world collaborative driving dataset, using
reference cars' predictions as pseudo labels for the ego car. Extensive
experiments including several scenarios (e.g., different sensors, detectors,
and domains) demonstrate the effectiveness of our approach toward
label-efficient learning of 3D perception from other units' predictions.
|
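The distance-based curriculum mentioned in the record above (arXiv:2410.02646) can be pictured as a filter that widens the admissible distance to the reference unit as training progresses. The sketch below is a hypothetical illustration only; the field names and thresholds are invented.

def curriculum_subset(pseudo_labels, epoch, schedule=(10.0, 25.0, 50.0)):
    """pseudo_labels: list of dicts with a 'distance_m' key, the distance in
    meters to the reference car that produced the pseudo label."""
    stage = min(epoch // 10, len(schedule) - 1)     # widen the curriculum every 10 epochs
    max_dist = schedule[stage]
    return [p for p in pseudo_labels if p["distance_m"] <= max_dist]

labels = [{"distance_m": d, "box": None} for d in (5, 18, 33, 60)]
for epoch in (0, 10, 20):
    kept = curriculum_subset(labels, epoch)
    print(epoch, [p["distance_m"] for p in kept])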
2410.05804 | Mingyi Guo | Mingyi Guo, Yuyang Liu, Zhiyuan Yan, Zongying Lin, Peixi Peng and
Yonghong Tian | CASA: Class-Agnostic Shared Attributes in Vision-Language Models for
Efficient Incremental Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incremental object detection is fundamentally challenged by catastrophic
forgetting. A major factor contributing to this issue is background shift,
where background categories in sequential tasks may overlap with either
previously learned or future unseen classes. To address this, we propose a
novel method called Class-Agnostic Shared Attribute Base (CASA) that encourages
the model to learn category-agnostic attributes shared across incremental
classes. Our approach leverages an LLM to generate candidate textual
attributes, selects the most relevant ones based on the current training data,
and records their importance in an assignment matrix. For subsequent tasks, the
retained attributes are frozen, and new attributes are selected from the
remaining candidates, ensuring both knowledge retention and adaptability.
Extensive experiments on the COCO dataset demonstrate the state-of-the-art
performance of our method.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 08:36:12 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Oct 2024 08:54:41 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 15:30:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Guo",
"Mingyi",
""
],
[
"Liu",
"Yuyang",
""
],
[
"Yan",
"Zhiyuan",
""
],
[
"Lin",
"Zongying",
""
],
[
"Peng",
"Peixi",
""
],
[
"Tian",
"Yonghong",
""
]
] | TITLE: CASA: Class-Agnostic Shared Attributes in Vision-Language Models for
Efficient Incremental Object Detection
ABSTRACT: Incremental object detection is fundamentally challenged by catastrophic
forgetting. A major factor contributing to this issue is background shift,
where background categories in sequential tasks may overlap with either
previously learned or future unseen classes. To address this, we propose a
novel method called Class-Agnostic Shared Attribute Base (CASA) that encourages
the model to learn category-agnostic attributes shared across incremental
classes. Our approach leverages an LLM to generate candidate textual
attributes, selects the most relevant ones based on the current training data,
and records their importance in an assignment matrix. For subsequent tasks, the
retained attributes are frozen, and new attributes are selected from the
remaining candidates, ensuring both knowledge retention and adaptability.
Extensive experiments on the COCO dataset demonstrate the state-of-the-art
performance of our method.
|
2410.08063 | Mingjia Li | Hao Zhao, Mingjia Li, Qiming Hu, Xiaojie Guo | Reversible Decoupling Network for Single Image Reflection Removal | To appear at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent deep-learning-based approaches to single-image reflection removal have
shown promising advances, primarily for two reasons: 1) the utilization of
recognition-pretrained features as inputs, and 2) the design of dual-stream
interaction networks. However, according to the Information Bottleneck
principle, high-level semantic clues tend to be compressed or discarded during
layer-by-layer propagation. Additionally, interactions in dual-stream networks
follow a fixed pattern across different layers, limiting overall performance.
To address these limitations, we propose a novel architecture called Reversible
Decoupling Network (RDNet), which employs a reversible encoder to secure
valuable information while flexibly decoupling transmission- and
reflection-relevant features during the forward pass. Furthermore, we customize
a transmission-rate-aware prompt generator to dynamically calibrate features,
further boosting performance. Extensive experiments demonstrate the superiority
of RDNet over existing SOTA methods on five widely-adopted benchmark datasets.
RDNet achieves the best performance in the NTIRE 2025 Single Image Reflection
Removal in the Wild Challenge in both fidelity and perceptual comparison. Our
code is available at https://github.com/lime-j/RDNet
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 15:58:27 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 16:19:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhao",
"Hao",
""
],
[
"Li",
"Mingjia",
""
],
[
"Hu",
"Qiming",
""
],
[
"Guo",
"Xiaojie",
""
]
] | TITLE: Reversible Decoupling Network for Single Image Reflection Removal
ABSTRACT: Recent deep-learning-based approaches to single-image reflection removal have
shown promising advances, primarily for two reasons: 1) the utilization of
recognition-pretrained features as inputs, and 2) the design of dual-stream
interaction networks. However, according to the Information Bottleneck
principle, high-level semantic clues tend to be compressed or discarded during
layer-by-layer propagation. Additionally, interactions in dual-stream networks
follow a fixed pattern across different layers, limiting overall performance.
To address these limitations, we propose a novel architecture called Reversible
Decoupling Network (RDNet), which employs a reversible encoder to secure
valuable information while flexibly decoupling transmission- and
reflection-relevant features during the forward pass. Furthermore, we customize
a transmission-rate-aware prompt generator to dynamically calibrate features,
further boosting performance. Extensive experiments demonstrate the superiority
of RDNet over existing SOTA methods on five widely-adopted benchmark datasets.
RDNet achieves the best performance in the NTIRE 2025 Single Image Reflection
Removal in the Wild Challenge in both fidelity and perceptual comparison. Our
code is available at https://github.com/lime-j/RDNet
|
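A reversible encoder of the kind the RDNet record above refers to can be built from additive coupling blocks, which reconstruct their inputs exactly and therefore do not discard information as depth grows. The block below is a generic illustration under that assumption, not RDNet's actual architecture.

import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(channels, channels, 3, padding=1))
        self.g = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x1, x2):          # two decoupled streams, e.g. transmission / reflection
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):          # exact reconstruction, no information loss
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

block = ReversibleBlock(channels=8)
a, b = torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)
y1, y2 = block(a, b)
ra, rb = block.inverse(y1, y2)
print(torch.allclose(a, ra, atol=1e-5), torch.allclose(b, rb, atol=1e-5))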
2410.10209 | Huang Dong | Dong Huang, Guangtao Zeng, Jianbo Dai, Meng Luo, Han Weng, Yuhao Qing,
Heming Cui, Zhijiang Guo, Jie M. Zhang | SwiftCoder: Enhancing Code Generation in Large Language Models through
Efficiency-Aware Fine-tuning | Under Review | null | null | null | cs.CL cs.SE | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) play an increasingly important role in code
generation, enhancing both correctness and efficiency has become crucial.
Current methods primarily focus on correctness, often overlooking efficiency.
To address this gap, we introduce \dataset to improve both aspects by
fine-tuning LLMs on a high-quality dataset comprising correct and efficient
code samples. Our methodology involves leveraging multiple LLMs to generate
diverse candidate code solutions for various tasks across different programming
languages. We then evaluate these solutions by directly measuring their
execution time and memory usage through local execution. The code solution with
the lowest execution time and memory consumption is selected as the final
output for each task. Experimental results demonstrate significant improvements
when fine-tuning with \dataset. For instance, Qwen2.5-Coder-7B-Instruct's
pass@1 score increases from 44.8\% to 57.7\%, while the average execution time
for correct tasks decreases by 48.4\%. \dataset offers a scalable and effective
solution for advancing AI-driven code generation, benefiting both software
development and computational problem-solving. The source code of Effi-Code was
released in https://github.com/huangd1999/Effi-Code.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 07:05:51 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Oct 2024 12:39:11 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 07:00:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Huang",
"Dong",
""
],
[
"Zeng",
"Guangtao",
""
],
[
"Dai",
"Jianbo",
""
],
[
"Luo",
"Meng",
""
],
[
"Weng",
"Han",
""
],
[
"Qing",
"Yuhao",
""
],
[
"Cui",
"Heming",
""
],
[
"Guo",
"Zhijiang",
""
],
[
"Zhang",
"Jie M.",
""
]
] | TITLE: SwiftCoder: Enhancing Code Generation in Large Language Models through
Efficiency-Aware Fine-tuning
ABSTRACT: As large language models (LLMs) play an increasingly important role in code
generation, enhancing both correctness and efficiency has become crucial.
Current methods primarily focus on correctness, often overlooking efficiency.
To address this gap, we introduce \dataset to improve both aspects by
fine-tuning LLMs on a high-quality dataset comprising correct and efficient
code samples. Our methodology involves leveraging multiple LLMs to generate
diverse candidate code solutions for various tasks across different programming
languages. We then evaluate these solutions by directly measuring their
execution time and memory usage through local execution. The code solution with
the lowest execution time and memory consumption is selected as the final
output for each task. Experimental results demonstrate significant improvements
when fine-tuning with \dataset. For instance, Qwen2.5-Coder-7B-Instruct's
pass@1 score increases from 44.8\% to 57.7\%, while the average execution time
for correct tasks decreases by 48.4\%. \dataset offers a scalable and effective
solution for advancing AI-driven code generation, benefiting both software
development and computational problem-solving. The source code of Effi-Code was
released in https://github.com/huangd1999/Effi-Code.
|
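The selection step described in the record above (arXiv:2410.10209) -- execute candidate solutions locally, measure runtime and memory, and keep the cheapest correct one -- can be sketched with the standard library alone. The candidate sources and the correctness check below are toy placeholders, not the paper's pipeline.

import time
import tracemalloc

candidates = {
    "loop":    "def solve(n):\n    s = 0\n    for i in range(n + 1):\n        s += i\n    return s",
    "formula": "def solve(n):\n    return n * (n + 1) // 2",
}

def profile(src: str, n: int = 100_000):
    ns: dict = {}
    exec(src, ns)                       # define solve() from the candidate source
    tracemalloc.start()
    t0 = time.perf_counter()
    result = ns["solve"](n)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

expected = 100_000 * 100_001 // 2
stats = {name: profile(src) for name, src in candidates.items()}
correct = {k: v for k, v in stats.items() if v[0] == expected}
best = min(correct, key=lambda k: (correct[k][1], correct[k][2]))
print("selected:", best, "time:", correct[best][1], "peak bytes:", correct[best][2])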
2410.10741 | Pengrui Quan | Pengrui Quan, Xiaomin Ouyang, Jeya Vikranth Jeyakumar, Ziqi Wang, Yang
Xing, Mani Srivastava | SensorBench: Benchmarking LLMs in Coding-Based Sensor Processing | null | null | null | null | cs.AI cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective processing, interpretation, and management of sensor data have
emerged as a critical component of cyber-physical systems. Traditionally,
processing sensor data requires profound theoretical knowledge and proficiency
in signal-processing tools. However, recent works show that Large Language
Models (LLMs) have promising capabilities in processing sensory data,
suggesting their potential as copilots for developing sensing systems.
To explore this potential, we construct a comprehensive benchmark,
SensorBench, to establish a quantifiable objective. The benchmark incorporates
diverse real-world sensor datasets for various tasks. The results show that
while LLMs exhibit considerable proficiency in simpler tasks, they face
inherent challenges in processing compositional tasks with parameter selections
compared to engineering experts. Additionally, we investigate four prompting
strategies for sensor processing and show that self-verification can outperform
all other baselines in 48% of tasks. Our study provides a comprehensive
benchmark and prompting analysis for future developments, paving the way toward
an LLM-based sensor processing copilot.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 17:21:39 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Oct 2024 23:29:49 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 18:42:25 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Quan",
"Pengrui",
""
],
[
"Ouyang",
"Xiaomin",
""
],
[
"Jeyakumar",
"Jeya Vikranth",
""
],
[
"Wang",
"Ziqi",
""
],
[
"Xing",
"Yang",
""
],
[
"Srivastava",
"Mani",
""
]
] | TITLE: SensorBench: Benchmarking LLMs in Coding-Based Sensor Processing
ABSTRACT: Effective processing, interpretation, and management of sensor data have
emerged as a critical component of cyber-physical systems. Traditionally,
processing sensor data requires profound theoretical knowledge and proficiency
in signal-processing tools. However, recent works show that Large Language
Models (LLMs) have promising capabilities in processing sensory data,
suggesting their potential as copilots for developing sensing systems.
To explore this potential, we construct a comprehensive benchmark,
SensorBench, to establish a quantifiable objective. The benchmark incorporates
diverse real-world sensor datasets for various tasks. The results show that
while LLMs exhibit considerable proficiency in simpler tasks, they face
inherent challenges in processing compositional tasks with parameter selections
compared to engineering experts. Additionally, we investigate four prompting
strategies for sensor processing and show that self-verification can outperform
all other baselines in 48% of tasks. Our study provides a comprehensive
benchmark and prompting analysis for future developments, paving the way toward
an LLM-based sensor processing copilot.
|
2410.10870 | Rana Muhammad Shahroz Khan | Rana Muhammad Shahroz Khan, Pingzhi Li, Sukwon Yun, Zhenyu Wang,
Shahriar Nirjon, Chau-Wai Wong, Tianlong Chen | PortLLM: Personalizing Evolving Large Language Models with Training-Free
and Portable Model Patches | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) increasingly shape the AI landscape,
fine-tuning pretrained models has become more popular than in the pre-LLM era
for achieving optimal performance in domain-specific tasks. However, pretrained
LLMs such as ChatGPT are periodically evolved, i.e., model parameters are
frequently updated), making it challenging for downstream users with limited
resources to keep up with fine-tuning the newest LLMs for their domain
application. Even though fine-tuning costs have nowadays been reduced thanks to
the innovations of parameter-efficient fine-tuning such as LoRA, not all
downstream users have adequate computing for frequent personalization.
Moreover, access to fine-tuning datasets, particularly in sensitive domains
such as healthcare, could be time-restrictive, making it crucial to retain the
knowledge encoded in earlier fine-tuned rounds for future adaptation. In this
paper, we present PortLLM, a training-free framework that (i) creates an
initial lightweight model update patch to capture domain-specific knowledge,
and (ii) allows a subsequent seamless plugging for the continual
personalization of evolved LLM at minimal cost. Our extensive experiments cover
seven representative datasets, from easier question-answering tasks {BoolQ,
SST2} to harder reasoning tasks {WinoGrande, GSM8K}, and models including
{Mistral-7B, Llama2, Llama3.1, and Gemma2}, validating the portability of our
designed model patches and showcasing the effectiveness of our proposed
framework. For instance, PortLLM achieves comparable performance to LoRA
fine-tuning with reductions of up to 12.2x in GPU memory usage. Finally, we
provide theoretical justifications to understand the portability of our model
update patches, which offers new insights into the theoretical dimension of
LLMs' personalization.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 13:41:08 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 17:58:52 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 03:32:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Khan",
"Rana Muhammad Shahroz",
""
],
[
"Li",
"Pingzhi",
""
],
[
"Yun",
"Sukwon",
""
],
[
"Wang",
"Zhenyu",
""
],
[
"Nirjon",
"Shahriar",
""
],
[
"Wong",
"Chau-Wai",
""
],
[
"Chen",
"Tianlong",
""
]
] | TITLE: PortLLM: Personalizing Evolving Large Language Models with Training-Free
and Portable Model Patches
ABSTRACT: As large language models (LLMs) increasingly shape the AI landscape,
fine-tuning pretrained models has become more popular than in the pre-LLM era
for achieving optimal performance in domain-specific tasks. However, pretrained
LLMs such as ChatGPT are periodically evolved (i.e., model parameters are
frequently updated), making it challenging for downstream users with limited
resources to keep up with fine-tuning the newest LLMs for their domain
application. Even though fine-tuning costs have nowadays been reduced thanks to
the innovations of parameter-efficient fine-tuning such as LoRA, not all
downstream users have adequate computing for frequent personalization.
Moreover, access to fine-tuning datasets, particularly in sensitive domains
such as healthcare, could be time-restrictive, making it crucial to retain the
knowledge encoded in earlier fine-tuned rounds for future adaptation. In this
paper, we present PortLLM, a training-free framework that (i) creates an
initial lightweight model update patch to capture domain-specific knowledge,
and (ii) allows a subsequent seamless plugging for the continual
personalization of evolved LLM at minimal cost. Our extensive experiments cover
seven representative datasets, from easier question-answering tasks {BoolQ,
SST2} to harder reasoning tasks {WinoGrande, GSM8K}, and models including
{Mistral-7B, Llama2, Llama3.1, and Gemma2}, validating the portability of our
designed model patches and showcasing the effectiveness of our proposed
framework. For instance, PortLLM achieves comparable performance to LoRA
fine-tuning with reductions of up to 12.2x in GPU memory usage. Finally, we
provide theoretical justifications to understand the portability of our model
update patches, which offers new insights into the theoretical dimension of
LLMs' personalization.
|
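The portable-patch idea in the PortLLM record above can be reduced, in its simplest form, to storing a parameter delta from one fine-tuning run and adding it to an updated base model without retraining. The toy sketch below shows only that arithmetic skeleton on a single linear layer; the paper works with LoRA-style updates on full LLMs, and the "fine-tuning" here is simulated noise.

import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
base_v1 = nn.Linear(16, 16)                      # "old" pretrained model
finetuned_v1 = copy.deepcopy(base_v1)
with torch.no_grad():                            # stand-in for an actual fine-tuning run
    finetuned_v1.weight += 0.01 * torch.randn_like(finetuned_v1.weight)

# Patch = difference between fine-tuned and original parameters.
patch = {n: finetuned_v1.state_dict()[n] - base_v1.state_dict()[n]
         for n in base_v1.state_dict()}

base_v2 = nn.Linear(16, 16)                      # "evolved" pretrained model
patched_v2 = copy.deepcopy(base_v2)
with torch.no_grad():
    for name, param in patched_v2.named_parameters():
        param += patch[name]                     # training-free personalization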
2410.12237 | Yufei Zhu | Yufei Zhu, Andrey Rudenko, Luigi Palmieri, Lukas Heuer, Achim J.
Lilienthal, Martin Magnusson | Fast Online Learning of CLiFF-maps in Changing Environments | Accepted to the 2025 IEEE International Conference on Robotics and
Automation (ICRA) | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Maps of dynamics are effective representations of motion patterns learned
from prior observations, with recent research demonstrating their ability to
enhance various downstream tasks such as human-aware robot navigation,
long-term human motion prediction, and robot localization. Current advancements
have primarily concentrated on methods for learning maps of human flow in
environments where the flow is static, i.e., not assumed to change over time.
In this paper we propose an online update method of the CLiFF-map (an advanced
map of dynamics type that models motion patterns as velocity and orientation
mixtures) to actively detect and adapt to human flow changes. As new
observations are collected, our goal is to update a CLiFF-map to effectively
and accurately integrate them, while retaining relevant historic motion
patterns. The proposed online update method maintains a probabilistic
representation in each observed location, updating parameters by continuously
tracking sufficient statistics. In experiments using both synthetic and
real-world datasets, we show that our method is able to maintain accurate
representations of human motion dynamics, contributing to high performance in
downstream flow-compliant planning tasks, while being orders of magnitude
faster than the comparable baselines.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 04:54:49 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 09:49:33 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhu",
"Yufei",
""
],
[
"Rudenko",
"Andrey",
""
],
[
"Palmieri",
"Luigi",
""
],
[
"Heuer",
"Lukas",
""
],
[
"Lilienthal",
"Achim J.",
""
],
[
"Magnusson",
"Martin",
""
]
] | TITLE: Fast Online Learning of CLiFF-maps in Changing Environments
ABSTRACT: Maps of dynamics are effective representations of motion patterns learned
from prior observations, with recent research demonstrating their ability to
enhance various downstream tasks such as human-aware robot navigation,
long-term human motion prediction, and robot localization. Current advancements
have primarily concentrated on methods for learning maps of human flow in
environments where the flow is static, i.e., not assumed to change over time.
In this paper we propose an online update method of the CLiFF-map (an advanced
map of dynamics type that models motion patterns as velocity and orientation
mixtures) to actively detect and adapt to human flow changes. As new
observations are collected, our goal is to update a CLiFF-map to effectively
and accurately integrate them, while retaining relevant historic motion
patterns. The proposed online update method maintains a probabilistic
representation in each observed location, updating parameters by continuously
tracking sufficient statistics. In experiments using both synthetic and
real-world datasets, we show that our method is able to maintain accurate
representations of human motion dynamics, contributing to high performance in
downstream flow-compliant planning tasks, while being orders of magnitude
faster than the comparable baselines.
|
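The online update in the CLiFF-map record above relies on tracking sufficient statistics per observed location. The snippet below shows that bookkeeping for a plain running mean and covariance of 2-D velocity observations in one grid cell -- a deliberate simplification of the paper's velocity/orientation mixture model, included only to make the sufficient-statistics idea concrete.

import numpy as np

class CellStats:
    def __init__(self, dim: int = 2):
        self.n = 0
        self.s1 = np.zeros(dim)             # sum of observations
        self.s2 = np.zeros((dim, dim))      # sum of outer products

    def update(self, v: np.ndarray):        # one new velocity observation
        self.n += 1
        self.s1 += v
        self.s2 += np.outer(v, v)

    def mean(self):
        return self.s1 / self.n

    def cov(self):
        m = self.mean()
        return self.s2 / self.n - np.outer(m, m)

cell = CellStats()
for v in np.random.default_rng(0).normal([1.0, 0.2], 0.1, size=(100, 2)):
    cell.update(v)
print(cell.mean(), cell.cov())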
2410.13146 | Kuleen Sasse | Kuleen Sasse, Shan Chen, Jackson Pond, Danielle Bitterman, John
Osborne | debiaSAE: Benchmarking and Mitigating Vision-Language Model Bias | Under Review at COLM 2025 | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | As Vision Language Models (VLMs) gain widespread use, their fairness remains
under-explored. In this paper, we analyze demographic biases across five models
and six datasets. We find that portrait datasets like UTKFace and CelebA are
the best tools for bias detection, finding gaps in performance and fairness for
both LLaVa and CLIP models. Scene-based datasets like PATA and VLStereoSet fail
to be useful benchmarks for bias due to their text prompts allowing the model
to guess the answer without a picture. As for pronoun-based datasets like
VisoGender, we receive mixed signals as only some subsets of the data are
useful in providing insights. To alleviate these two problems, we introduce a
more rigorous evaluation dataset and a debiasing method based on Sparse
Autoencoders to help reduce bias in models. We find that our data set generates
more meaningful errors than the previous data sets. Furthermore, our debiasing
method improves fairness, gaining 5-15 points in performance over the baseline.
This study displays the problems with the current benchmarks for measuring
demographic bias in Vision Language Models and introduces both a more effective
dataset for measuring bias and a novel and interpretable debiasing method based
on Sparse Autoencoders.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 02:03:27 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 01:59:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sasse",
"Kuleen",
""
],
[
"Chen",
"Shan",
""
],
[
"Pond",
"Jackson",
""
],
[
"Bitterman",
"Danielle",
""
],
[
"Osborne",
"John",
""
]
] | TITLE: debiaSAE: Benchmarking and Mitigating Vision-Language Model Bias
ABSTRACT: As Vision Language Models (VLMs) gain widespread use, their fairness remains
under-explored. In this paper, we analyze demographic biases across five models
and six datasets. We find that portrait datasets like UTKFace and CelebA are
the best tools for bias detection, finding gaps in performance and fairness for
both LLaVa and CLIP models. Scene-based datasets like PATA and VLStereoSet fail
to be useful benchmarks for bias due to their text prompts allowing the model
to guess the answer without a picture. As for pronoun-based datasets like
VisoGender, we receive mixed signals as only some subsets of the data are
useful in providing insights. To alleviate these two problems, we introduce a
more rigorous evaluation dataset and a debiasing method based on Sparse
Autoencoders to help reduce bias in models. We find that our data set generates
more meaningful errors than the previous data sets. Furthermore, our debiasing
method improves fairness, gaining 5-15 points in performance over the baseline.
This study displays the problems with the current benchmarks for measuring
demographic bias in Vision Language Models and introduces both a more effective
dataset for measuring bias and a novel and interpretable debiasing method based
on Sparse Autoencoders.
|
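The Sparse Autoencoder debiasing direction in the record above can be pictured as: encode an activation into sparse features, suppress the features identified as carrying demographic information, and decode back. The sketch below uses an untrained SAE and placeholder feature indices purely to show the mechanics; it is not the paper's procedure.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_latent)
        self.dec = nn.Linear(d_latent, d_model)

    def forward(self, h):
        z = torch.relu(self.enc(h))         # sparse, non-negative features
        return self.dec(z), z

sae = SparseAutoencoder(d_model=512, d_latent=2048)
h = torch.randn(4, 512)                     # some VLM activation (toy)
recon, z = sae(h)

bias_features = [7, 42, 1000]               # hypothetical indices found by probing
z_debiased = z.clone()
z_debiased[:, bias_features] = 0.0          # ablate the flagged features
h_debiased = sae.dec(z_debiased)            # would be fed back into the model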
2410.13567 | Yujian Zhao | Yujian Zhao, Chengru Wu, Yinong Xu, Xuanzheng Du, Ruiyu Li, Guanglin
Niu | CCUP: A Controllable Synthetic Data Generation Pipeline for Pretraining
Cloth-Changing Person Re-Identification Models | Accepted by ICME 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloth-changing person re-identification (CC-ReID), also known as Long-Term
Person Re-Identification (LT-ReID), is a critical and challenging research topic
in computer vision that has recently garnered significant attention. However,
due to the high cost of constructing CC-ReID data, the existing data-driven
models are hard to train efficiently on limited data, causing overfitting
issues. To address this challenge, we propose a low-cost and efficient pipeline
for generating controllable and high-quality synthetic data simulating the
surveillance of real scenarios specific to the CC-ReID task. Particularly, we
construct a new self-annotated CC-ReID dataset named Cloth-Changing Unreal
Person (CCUP), containing 6,000 IDs, 1,179,976 images, 100 cameras, and 26.5
outfits per individual. Based on this large-scale dataset, we introduce an
effective and scalable pretrain-finetune framework for enhancing the
generalization capabilities of the traditional CC-ReID models. The extensive
experiments demonstrate that two typical models, namely TransReID and FIRe^2,
when integrated into our framework, outperform other state-of-the-art models
after pretraining on CCUP and finetuning on the benchmarks such as PRCC,
VC-Clothes and NKUP. The CCUP is available at:
https://github.com/yjzhao1019/CCUP.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 14:04:02 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 08:17:18 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 14:17:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhao",
"Yujian",
""
],
[
"Wu",
"Chengru",
""
],
[
"Xu",
"Yinong",
""
],
[
"Du",
"Xuanzheng",
""
],
[
"Li",
"Ruiyu",
""
],
[
"Niu",
"Guanglin",
""
]
] | TITLE: CCUP: A Controllable Synthetic Data Generation Pipeline for Pretraining
Cloth-Changing Person Re-Identification Models
ABSTRACT: Cloth-changing person re-identification (CC-ReID), also known as Long-Term
Person Re-Identification (LT-ReID), is a critical and challenging research topic
in computer vision that has recently garnered significant attention. However,
due to the high cost of constructing CC-ReID data, the existing data-driven
models are hard to train efficiently on limited data, causing overfitting
issues. To address this challenge, we propose a low-cost and efficient pipeline
for generating controllable and high-quality synthetic data simulating the
surveillance of real scenarios specific to the CC-ReID task. Particularly, we
construct a new self-annotated CC-ReID dataset named Cloth-Changing Unreal
Person (CCUP), containing 6,000 IDs, 1,179,976 images, 100 cameras, and 26.5
outfits per individual. Based on this large-scale dataset, we introduce an
effective and scalable pretrain-finetune framework for enhancing the
generalization capabilities of the traditional CC-ReID models. The extensive
experiments demonstrate that two typical models, namely TransReID and FIRe^2,
when integrated into our framework, outperform other state-of-the-art models
after pretraining on CCUP and finetuning on the benchmarks such as PRCC,
VC-Clothes and NKUP. The CCUP is available at:
https://github.com/yjzhao1019/CCUP.
|
2410.13716 | Nandan Thakur | Nandan Thakur, Suleman Kazi, Ge Luo, Jimmy Lin, Amin Ahmad | MIRAGE-Bench: Automatic Multilingual Benchmark Arena for
Retrieval-Augmented Generation Systems | Accepted at NAACL 2025 (Main Conference) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Traditional retrieval-augmented generation (RAG) benchmarks evaluate systems
using heuristic-based metrics, but these require human preferences as the
ground truth for reference. In contrast, arena-based benchmarks, where systems
compete against each other, require an expensive large language model (LLM) as
a judge for a reliable evaluation. We present a simple efficient technique to
combine the best of both worlds. The idea is to train a surrogate judge using
heuristic metrics as input, to output the LLM as a judge prediction. In our
work, we develop MIRAGE-Bench, a synthetic arena-based RAG benchmark for 18
diverse languages on Wikipedia focused on multilingual answer generation
evaluation. It extensively couples both heuristic features and LLM as a judge
for evaluation. We benchmark 19 multilingual LLMs, and observe a high
correlation (Kendall Tau ($\tau$) = 0.909) between our surrogate judge and
GPT-4o as a teacher using the Bradley-Terry framework. Our results show
proprietary and large open-source LLMs currently dominate on MIRAGE-Bench. Our
code and datasets are made publicly available here:
https://github.com/vectara/mirage-bench.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 16:18:49 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 01:11:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Thakur",
"Nandan",
""
],
[
"Kazi",
"Suleman",
""
],
[
"Luo",
"Ge",
""
],
[
"Lin",
"Jimmy",
""
],
[
"Ahmad",
"Amin",
""
]
] | TITLE: MIRAGE-Bench: Automatic Multilingual Benchmark Arena for
Retrieval-Augmented Generation Systems
ABSTRACT: Traditional retrieval-augmented generation (RAG) benchmarks evaluate systems
using heuristic-based metrics, but these require human preferences as the
ground truth for reference. In contrast, arena-based benchmarks, where systems
compete against each other, require an expensive large language model (LLM) as
a judge for a reliable evaluation. We present a simple efficient technique to
combine the best of both worlds. The idea is to train a surrogate judge using
heuristic metrics as input, to output the LLM as a judge prediction. In our
work, we develop MIRAGE-Bench, a synthetic arena-based RAG benchmark for 18
diverse languages on Wikipedia focused on multilingual answer generation
evaluation. It extensively couples both heuristic features and LLM as a judge
for evaluation. We benchmark 19 multilingual LLMs, and observe a high
correlation (Kendall Tau ($\tau$) = 0.909) between our surrogate judge and
GPT-4o as a teacher using the Bradley-Terry framework. Our results show
proprietary and large open-source LLMs currently dominate on MIRAGE-Bench. Our
code and datasets are made publicly available here:
https://github.com/vectara/mirage-bench.
|
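The surrogate-judge idea in the MIRAGE-Bench record above amounts to fitting a cheap model that maps heuristic metrics to the expensive LLM-judge signal, then checking rank agreement with Kendall's tau. The sketch below uses synthetic features and scores, with off-the-shelf scikit-learn and SciPy components as stand-ins for the paper's actual judge and feature set.

import numpy as np
from scipy.stats import kendalltau
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
heuristics = rng.uniform(size=(200, 4))            # e.g. fluency, citation quality, ...
llm_judge = heuristics @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(0, 0.05, 200)

surrogate = GradientBoostingRegressor().fit(heuristics[:150], llm_judge[:150])
pred = surrogate.predict(heuristics[150:])
tau, _ = kendalltau(pred, llm_judge[150:])
print(f"Kendall tau between surrogate and LLM judge: {tau:.3f}")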
2410.15849 | Shikhar Vashistha | Shikhar Vashistha, Neetesh Kumar | Focus Where It Matters: Graph Selective State Focused Attention Networks | null | ccgrid 2025 | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Traditional graph neural networks (GNNs) lack scalability and lose individual
node characteristics due to over-smoothing, especially in the case of deeper
networks. This results in sub-optimal feature representation, affecting the
model's performance on tasks involving dynamically changing graphs. To address
this issue, we present Graph Selective States Focused Attention Networks
(GSANs), a neural network architecture for graph-structured data. The GSAN
is enabled by multi-head masked self-attention (MHMSA) and selective state
space modeling (S3M) layers to overcome the limitations of GNNs. In GSAN, the
MHMSA allows GSAN to dynamically emphasize crucial node connections,
particularly in evolving graph environments. The S3M layer enables the network
to adjust dynamically to changing node states and improves predictions of node
behavior in varying contexts without needing prior knowledge of the graph
structure. Furthermore, the S3M layer enhances generalization to unseen
structures and interprets how node states influence link importance. With this,
GSAN performs effectively on both inductive and transductive tasks and overcomes
the issues that traditional GNNs experience. To analyze the performance behavior
of GSAN, a set of comparative experiments against state-of-the-art methods is
conducted on graph benchmark datasets, including the $Cora$, $Citeseer$, and
$Pubmed$ citation networks and the $protein-protein-interaction$ dataset; as an
outcome, GSAN improves classification accuracy ($F1$-score) by $1.56\%$,
$8.94\%$, $0.37\%$, and $1.54\%$, respectively.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 10:25:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Vashistha",
"Shikhar",
""
],
[
"Kumar",
"Neetesh",
""
]
] | TITLE: Focus Where It Matters: Graph Selective State Focused Attention Networks
ABSTRACT: Traditional graph neural networks (GNNs) lack scalability and lose individual
node characteristics due to over-smoothing, especially in the case of deeper
networks. This results in sub-optimal feature representation, affecting the
model's performance on tasks involving dynamically changing graphs. To address
this issue, we present Graph Selective States Focused Attention Networks
(GSANs), a neural network architecture for graph-structured data. The GSAN
is enabled by multi-head masked self-attention (MHMSA) and selective state
space modeling (S3M) layers to overcome the limitations of GNNs. In GSAN, the
MHMSA allows GSAN to dynamically emphasize crucial node connections,
particularly in evolving graph environments. The S3M layer enables the network
to adjust dynamically to changing node states and improves predictions of node
behavior in varying contexts without needing prior knowledge of the graph
structure. Furthermore, the S3M layer enhances generalization to unseen
structures and interprets how node states influence link importance. With this,
GSAN performs effectively on both inductive and transductive tasks and overcomes
the issues that traditional GNNs experience. To analyze the performance behavior
of GSAN, a set of comparative experiments against state-of-the-art methods is
conducted on graph benchmark datasets, including the $Cora$, $Citeseer$, and
$Pubmed$ citation networks and the $protein-protein-interaction$ dataset; as an
outcome, GSAN improves classification accuracy ($F1$-score) by $1.56\%$,
$8.94\%$, $0.37\%$, and $1.54\%$, respectively.
|
2410.17193 | ZeKai Li | Kai Wang, Zekai Li, Zhi-Qi Cheng, Samir Khaki, Ahmad Sajedi,
Ramakrishna Vedantam, Konstantinos N Plataniotis, Alexander Hauptmann, Yang
You | Emphasizing Discriminative Features for Dataset Distillation in Complex
Scenarios | 24 pages, 13 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Dataset distillation has demonstrated strong performance on simple datasets
like CIFAR, MNIST, and TinyImageNet but struggles to achieve similar results in
more complex scenarios. In this paper, we propose EDF (emphasizes the
discriminative features), a dataset distillation method that enhances key
discriminative regions in synthetic images using Grad-CAM activation maps. Our
approach is inspired by a key observation: in simple datasets, high-activation
areas typically occupy most of the image, whereas in complex scenarios, the
size of these areas is much smaller. Unlike previous methods that treat all
pixels equally when synthesizing images, EDF uses Grad-CAM activation maps to
enhance high-activation areas. From a supervision perspective, we downplay
supervision signals that have lower losses, as they contain common patterns.
Additionally, to help the DD community better explore complex scenarios, we
build the Complex Dataset Distillation (Comp-DD) benchmark by meticulously
selecting sixteen subsets, eight easy and eight hard, from ImageNet-1K. In
particular, EDF consistently outperforms SOTA results in complex scenarios,
such as ImageNet-1K subsets. Hopefully, more researchers will be inspired and
encouraged to improve the practicality and efficacy of DD. Our code and
benchmark will be made public at https://github.com/NUS-HPC-AI-Lab/EDF.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 17:13:19 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 04:10:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Kai",
""
],
[
"Li",
"Zekai",
""
],
[
"Cheng",
"Zhi-Qi",
""
],
[
"Khaki",
"Samir",
""
],
[
"Sajedi",
"Ahmad",
""
],
[
"Vedantam",
"Ramakrishna",
""
],
[
"Plataniotis",
"Konstantinos N",
""
],
[
"Hauptmann",
"Alexander",
""
],
[
"You",
"Yang",
""
]
] | TITLE: Emphasizing Discriminative Features for Dataset Distillation in Complex
Scenarios
ABSTRACT: Dataset distillation has demonstrated strong performance on simple datasets
like CIFAR, MNIST, and TinyImageNet but struggles to achieve similar results in
more complex scenarios. In this paper, we propose EDF (emphasizes the
discriminative features), a dataset distillation method that enhances key
discriminative regions in synthetic images using Grad-CAM activation maps. Our
approach is inspired by a key observation: in simple datasets, high-activation
areas typically occupy most of the image, whereas in complex scenarios, the
size of these areas is much smaller. Unlike previous methods that treat all
pixels equally when synthesizing images, EDF uses Grad-CAM activation maps to
enhance high-activation areas. From a supervision perspective, we downplay
supervision signals that have lower losses, as they contain common patterns.
Additionally, to help the DD community better explore complex scenarios, we
build the Complex Dataset Distillation (Comp-DD) benchmark by meticulously
selecting sixteen subsets, eight easy and eight hard, from ImageNet-1K. In
particular, EDF consistently outperforms SOTA results in complex scenarios,
such as ImageNet-1K subsets. Hopefully, more researchers will be inspired and
encouraged to improve the practicality and efficacy of DD. Our code and
benchmark will be made public at https://github.com/NUS-HPC-AI-Lab/EDF.
|
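The core of EDF as summarized in the record above is to weight the distillation objective with Grad-CAM activation maps so that discriminative regions dominate. The fragment below shows that weighting on random tensors; in practice the CAM would come from Grad-CAM on a trained observer model and the per-pixel loss from the chosen distillation objective, so treat this as a schematic sketch only.

import torch
import torch.nn.functional as F

synthetic = torch.randn(8, 3, 32, 32, requires_grad=True)    # images being distilled
target = torch.randn(8, 3, 32, 32)                           # e.g. matched real statistics

cam = torch.rand(8, 1, 32, 32)                                # Grad-CAM map in [0, 1] (toy)
weights = cam / cam.sum(dim=(2, 3), keepdim=True)             # normalize per image

pixel_loss = F.mse_loss(synthetic, target, reduction="none")  # (8, 3, 32, 32)
weighted_loss = (pixel_loss * weights).sum() / synthetic.shape[0]
weighted_loss.backward()                                      # gradients emphasize CAM regions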
2410.18390 | Xinyu Wang | Xinyu Wang, Wenbo Zhang, Sarah Rajtmajer | Monolingual and Multilingual Misinformation Detection for Low-Resource
Languages: A Comprehensive Survey | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In today's global digital landscape, misinformation transcends linguistic
boundaries, posing a significant challenge for moderation systems. Most
approaches to misinformation detection are monolingual, focused on
high-resource languages, i.e., a handful of world languages that have benefited
from substantial research investment. This survey provides a comprehensive
overview of the current research on misinformation detection in low-resource
languages, both in monolingual and multilingual settings. We review existing
datasets, methodologies, and tools used in these domains, identifying key
challenges related to: data resources, model development, cultural and
linguistic context, and real-world applications. We examine emerging
approaches, such as language-generalizable models and multi-modal techniques,
and emphasize the need for improved data collection practices,
interdisciplinary collaboration, and stronger incentives for socially
responsible AI research. Our findings underscore the importance of systems
capable of addressing misinformation across diverse linguistic and cultural
contexts.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 03:02:03 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 21:19:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Xinyu",
""
],
[
"Zhang",
"Wenbo",
""
],
[
"Rajtmajer",
"Sarah",
""
]
] | TITLE: Monolingual and Multilingual Misinformation Detection for Low-Resource
Languages: A Comprehensive Survey
ABSTRACT: In today's global digital landscape, misinformation transcends linguistic
boundaries, posing a significant challenge for moderation systems. Most
approaches to misinformation detection are monolingual, focused on
high-resource languages, i.e., a handful of world languages that have benefited
from substantial research investment. This survey provides a comprehensive
overview of the current research on misinformation detection in low-resource
languages, both in monolingual and multilingual settings. We review existing
datasets, methodologies, and tools used in these domains, identifying key
challenges related to: data resources, model development, cultural and
linguistic context, and real-world applications. We examine emerging
approaches, such as language-generalizable models and multi-modal techniques,
and emphasize the need for improved data collection practices,
interdisciplinary collaboration, and stronger incentives for socially
responsible AI research. Our findings underscore the importance of systems
capable of addressing misinformation across diverse linguistic and cultural
contexts.
|
2410.22233 | Ashutosh Chaubey | Ashutosh Chaubey, Anoubhav Agarwaal, Sartaki Sinha Roy, Aayush
Agrawal, Susmita Ghose | ContextIQ: A Multimodal Expert-Based Video Retrieval System for
Contextual Advertising | Published at WACV 2025 | null | null | null | cs.CV cs.AI cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Contextual advertising serves ads that are aligned to the content that the
user is viewing. The rapid growth of video content on social platforms and
streaming services, along with privacy concerns, has increased the need for
contextual advertising. Placing the right ad in the right context creates a
seamless and pleasant ad viewing experience, resulting in higher audience
engagement and, ultimately, better ad monetization. From a technology
standpoint, effective contextual advertising requires a video retrieval system
capable of understanding complex video content at a very granular level.
Current text-to-video retrieval models based on joint multimodal training
demand large datasets and computational resources, limiting their practicality
and lacking the key functionalities required for ad ecosystem integration. We
introduce ContextIQ, a multimodal expert-based video retrieval system designed
specifically for contextual advertising. ContextIQ utilizes modality-specific
experts -- video, audio, transcript (captions), and metadata such as objects,
actions, emotion, etc. -- to create semantically rich video representations. We
show that our system, without joint training, achieves better or comparable
results to state-of-the-art models and commercial solutions on multiple
text-to-video retrieval benchmarks. Our ablation studies highlight the benefits
of leveraging multiple modalities for enhanced video retrieval accuracy instead
of using a vision-language model alone. Furthermore, we show how video
retrieval systems such as ContextIQ can be used for contextual advertising in
an ad ecosystem while also addressing concerns related to brand safety and
filtering inappropriate content.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 17:01:05 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Nov 2024 19:52:58 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 17:42:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chaubey",
"Ashutosh",
""
],
[
"Agarwaal",
"Anoubhav",
""
],
[
"Roy",
"Sartaki Sinha",
""
],
[
"Agrawal",
"Aayush",
""
],
[
"Ghose",
"Susmita",
""
]
] | TITLE: ContextIQ: A Multimodal Expert-Based Video Retrieval System for
Contextual Advertising
ABSTRACT: Contextual advertising serves ads that are aligned to the content that the
user is viewing. The rapid growth of video content on social platforms and
streaming services, along with privacy concerns, has increased the need for
contextual advertising. Placing the right ad in the right context creates a
seamless and pleasant ad viewing experience, resulting in higher audience
engagement and, ultimately, better ad monetization. From a technology
standpoint, effective contextual advertising requires a video retrieval system
capable of understanding complex video content at a very granular level.
Current text-to-video retrieval models based on joint multimodal training
demand large datasets and computational resources, limiting their practicality
and lacking the key functionalities required for ad ecosystem integration. We
introduce ContextIQ, a multimodal expert-based video retrieval system designed
specifically for contextual advertising. ContextIQ utilizes modality-specific
experts -- video, audio, transcript (captions), and metadata such as objects,
actions, emotion, etc. -- to create semantically rich video representations. We
show that our system, without joint training, achieves better or comparable
results to state-of-the-art models and commercial solutions on multiple
text-to-video retrieval benchmarks. Our ablation studies highlight the benefits
of leveraging multiple modalities for enhanced video retrieval accuracy instead
of using a vision-language model alone. Furthermore, we show how video
retrieval systems such as ContextIQ can be used for contextual advertising in
an ad ecosystem while also addressing concerns related to brand safety and
filtering inappropriate content.
|
2410.22770 | Hao Li | Hao Li, Xiaogeng Liu | InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection
Guardrail Models | null | null | null | null | cs.CL cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prompt injection attacks pose a critical threat to large language models
(LLMs), enabling goal hijacking and data leakage. Prompt guard models, though
effective in defense, suffer from over-defense -- falsely flagging benign
inputs as malicious due to trigger word bias. To address this issue, we
introduce NotInject, an evaluation dataset that systematically measures
over-defense across various prompt guard models. NotInject contains 339 benign
samples enriched with trigger words common in prompt injection attacks,
enabling fine-grained evaluation. Our results show that state-of-the-art models
suffer from over-defense issues, with accuracy dropping close to random
guessing levels (60%). To mitigate this, we propose InjecGuard, a novel prompt
guard model that incorporates a new training strategy, Mitigating Over-defense
for Free (MOF), which significantly reduces the bias on trigger words.
InjecGuard demonstrates state-of-the-art performance on diverse benchmarks
including NotInject, surpassing the existing best model by 30.8%, offering a
robust and open-source solution for detecting prompt injection attacks. The
code and datasets are released at https://github.com/leolee99/InjecGuard.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 07:39:42 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Nov 2024 05:31:53 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 16:39:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Hao",
""
],
[
"Liu",
"Xiaogeng",
""
]
] | TITLE: InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection
Guardrail Models
ABSTRACT: Prompt injection attacks pose a critical threat to large language models
(LLMs), enabling goal hijacking and data leakage. Prompt guard models, though
effective in defense, suffer from over-defense -- falsely flagging benign
inputs as malicious due to trigger word bias. To address this issue, we
introduce NotInject, an evaluation dataset that systematically measures
over-defense across various prompt guard models. NotInject contains 339 benign
samples enriched with trigger words common in prompt injection attacks,
enabling fine-grained evaluation. Our results show that state-of-the-art models
suffer from over-defense issues, with accuracy dropping close to random
guessing levels (60%). To mitigate this, we propose InjecGuard, a novel prompt
guard model that incorporates a new training strategy, Mitigating Over-defense
for Free (MOF), which significantly reduces the bias on trigger words.
InjecGuard demonstrates state-of-the-art performance on diverse benchmarks
including NotInject, surpassing the existing best model by 30.8%, offering a
robust and open-source solution for detecting prompt injection attacks. The
code and datasets are released at https://github.com/leolee99/InjecGuard.
|
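Over-defense, as measured in the record above, is simply the fraction of benign prompts containing injection-style trigger words that a guard model wrongly flags as malicious. The snippet below computes that rate for a deliberately naive keyword guard, just to make the metric concrete; it has no connection to the NotInject data or InjecGuard itself.

TRIGGERS = {"ignore", "system", "override"}

def naive_guard(prompt: str) -> bool:          # True = flagged as prompt injection
    return any(t in prompt.lower() for t in TRIGGERS)

benign_with_triggers = [
    "Please ignore the typos in my previous message.",
    "How do I override a method in Python?",
    "What does a system of equations mean?",
]
flags = [naive_guard(p) for p in benign_with_triggers]
over_defense_rate = sum(flags) / len(flags)
print(f"over-defense rate: {over_defense_rate:.0%}")   # 100% for this keyword guard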
2410.23749 | Dizhen Liang | Dizhen Liang | LSEAttention is All You Need for Time Series Forecasting | 8 pages with referencing, 1 figure, 5 tables | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer-based architectures have achieved remarkable success in natural
language processing and computer vision. However, their performance in
multivariate long-term forecasting often falls short compared to simpler linear
baselines. Previous research has identified the traditional attention mechanism
as a key factor limiting their effectiveness in this domain. To bridge this
gap, we introduce LATST, a novel approach designed to mitigate entropy collapse
and training instability, two common challenges in Transformer-based time series
forecasting. We rigorously evaluate LATST across multiple real-world
multivariate time series datasets, demonstrating its ability to outperform
existing state-of-the-art Transformer models. Notably, LATST manages to achieve
competitive performance with fewer parameters than some linear models on
certain datasets, highlighting its efficiency and effectiveness.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 09:09:39 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Nov 2024 02:47:29 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Jan 2025 13:50:52 GMT"
},
{
"version": "v4",
"created": "Thu, 27 Mar 2025 02:00:07 GMT"
},
{
"version": "v5",
"created": "Mon, 31 Mar 2025 12:04:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liang",
"Dizhen",
""
]
] | TITLE: LSEAttention is All You Need for Time Series Forecasting
ABSTRACT: Transformer-based architectures have achieved remarkable success in natural
language processing and computer vision. However, their performance in
multivariate long-term forecasting often falls short compared to simpler linear
baselines. Previous research has identified the traditional attention mechanism
as a key factor limiting their effectiveness in this domain. To bridge this
gap, we introduce LATST, a novel approach designed to mitigate entropy collapse
and training instability, common challenges in Transformer-based time series
forecasting. We rigorously evaluate LATST across multiple real-world
multivariate time series datasets, demonstrating its ability to outperform
existing state-of-the-art Transformer models. Notably, LATST manages to achieve
competitive performance with fewer parameters than some linear models on
certain datasets, highlighting its efficiency and effectiveness.
|
2411.01705 | Yuefeng Peng | Yuefeng Peng, Junda Wang, Hong Yu, Amir Houmansadr | Data Extraction Attacks in Retrieval-Augmented Generation via Backdoors | null | null | null | null | cs.CR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite significant advancements, large language models (LLMs) still struggle
with providing accurate answers when lacking domain-specific or up-to-date
knowledge. Retrieval-Augmented Generation (RAG) addresses this limitation by
incorporating external knowledge bases, but it also introduces new attack
surfaces. In this paper, we investigate data extraction attacks targeting RAG's
knowledge databases. We show that previous prompt injection-based extraction
attacks largely rely on the instruction-following capabilities of LLMs. As a
result, they fail on models that are less responsive to such malicious prompts
-- for example, our experiments show that state-of-the-art attacks achieve
near-zero success on Gemma-2B-IT. Moreover, even for models that can follow
these instructions, we found fine-tuning may significantly reduce attack
performance. To further reveal the vulnerability, we propose to backdoor RAG,
where a small portion of poisoned data is injected during the fine-tuning phase
to create a backdoor within the LLM. When this compromised LLM is integrated
into a RAG system, attackers can exploit specific triggers in prompts to
manipulate the LLM to leak documents from the retrieval database. By carefully
designing the poisoned data, we achieve both verbatim and paraphrased document
extraction. For example, on Gemma-2B-IT, we show that with only 5\% poisoned
data, our method achieves an average success rate of 94.1\% for verbatim
extraction (ROUGE-L score: 82.1) and 63.6\% for paraphrased extraction (average
ROUGE score: 66.4) across four datasets. These results underscore the privacy
risks associated with the supply chain when deploying RAG systems.
| [
{
"version": "v1",
"created": "Sun, 3 Nov 2024 22:27:40 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 01:49:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Peng",
"Yuefeng",
""
],
[
"Wang",
"Junda",
""
],
[
"Yu",
"Hong",
""
],
[
"Houmansadr",
"Amir",
""
]
] | TITLE: Data Extraction Attacks in Retrieval-Augmented Generation via Backdoors
ABSTRACT: Despite significant advancements, large language models (LLMs) still struggle
with providing accurate answers when lacking domain-specific or up-to-date
knowledge. Retrieval-Augmented Generation (RAG) addresses this limitation by
incorporating external knowledge bases, but it also introduces new attack
surfaces. In this paper, we investigate data extraction attacks targeting RAG's
knowledge databases. We show that previous prompt injection-based extraction
attacks largely rely on the instruction-following capabilities of LLMs. As a
result, they fail on models that are less responsive to such malicious prompts
-- for example, our experiments show that state-of-the-art attacks achieve
near-zero success on Gemma-2B-IT. Moreover, even for models that can follow
these instructions, we found fine-tuning may significantly reduce attack
performance. To further reveal the vulnerability, we propose to backdoor RAG,
where a small portion of poisoned data is injected during the fine-tuning phase
to create a backdoor within the LLM. When this compromised LLM is integrated
into a RAG system, attackers can exploit specific triggers in prompts to
manipulate the LLM to leak documents from the retrieval database. By carefully
designing the poisoned data, we achieve both verbatim and paraphrased document
extraction. For example, on Gemma-2B-IT, we show that with only 5\% poisoned
data, our method achieves an average success rate of 94.1\% for verbatim
extraction (ROUGE-L score: 82.1) and 63.6\% for paraphrased extraction (average
ROUGE score: 66.4) across four datasets. These results underscore the privacy
risks associated with the supply chain when deploying RAG systems.
|
2411.02442 | Jiaqi Zhang | Yuxiang Guo, Lu Yin, Bo Jiang and Jiaqi Zhang | TODO: Enhancing LLM Alignment with Ternary Preferences | Accepted to ICLR 2025 | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Aligning large language models (LLMs) with human intent is critical for
enhancing their performance across a variety of tasks. Standard alignment
techniques, such as Direct Preference Optimization (DPO), often rely on the
binary Bradley-Terry (BT) model, which can struggle to capture the complexities
of human preferences -- particularly in the presence of noisy or inconsistent
labels and frequent ties. To address these limitations, we introduce the
Tie-rank Oriented Bradley-Terry model (TOBT), an extension of the BT model that
explicitly incorporates ties, enabling more nuanced preference representation.
Building on this, we propose Tie-rank Oriented Direct Preference Optimization
(TODO), a novel alignment algorithm that leverages TOBT's ternary ranking
system to improve preference alignment. In evaluations on Mistral-7B and Llama
3-8B models, TODO consistently outperforms DPO in modeling preferences across
both in-distribution and out-of-distribution datasets. Additional assessments
using MT Bench and benchmarks such as Piqa, ARC-c, and MMLU further demonstrate
TODO's superior alignment performance. Notably, TODO also shows strong results
in binary preference alignment, highlighting its versatility and potential for
broader integration into LLM alignment. The implementation details can be found
in https://github.com/XXares/TODO.
| [
{
"version": "v1",
"created": "Sat, 2 Nov 2024 14:36:03 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 02:56:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Guo",
"Yuxiang",
""
],
[
"Yin",
"Lu",
""
],
[
"Jiang",
"Bo",
""
],
[
"Zhang",
"Jiaqi",
""
]
] | TITLE: TODO: Enhancing LLM Alignment with Ternary Preferences
ABSTRACT: Aligning large language models (LLMs) with human intent is critical for
enhancing their performance across a variety of tasks. Standard alignment
techniques, such as Direct Preference Optimization (DPO), often rely on the
binary Bradley-Terry (BT) model, which can struggle to capture the complexities
of human preferences -- particularly in the presence of noisy or inconsistent
labels and frequent ties. To address these limitations, we introduce the
Tie-rank Oriented Bradley-Terry model (TOBT), an extension of the BT model that
explicitly incorporates ties, enabling more nuanced preference representation.
Building on this, we propose Tie-rank Oriented Direct Preference Optimization
(TODO), a novel alignment algorithm that leverages TOBT's ternary ranking
system to improve preference alignment. In evaluations on Mistral-7B and Llama
3-8B models, TODO consistently outperforms DPO in modeling preferences across
both in-distribution and out-of-distribution datasets. Additional assessments
using MT Bench and benchmarks such as Piqa, ARC-c, and MMLU further demonstrate
TODO's superior alignment performance. Notably, TODO also shows strong results
in binary preference alignment, highlighting its versatility and potential for
broader integration into LLM alignment. The implementation details can be found
in https://github.com/XXares/TODO.
|
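The TOBT model in the record above extends the binary Bradley-Terry model with an explicit tie outcome. As a rough illustration only, the sketch below implements a generic Rao-Kupper-style tie-aware Bradley-Terry probability in Python; the paper's exact TOBT parameterization may differ, and `theta` is an assumed tie parameter.

```python
def tie_aware_bt_probs(r_a: float, r_b: float, theta: float = 1.5):
    """Generic Rao-Kupper-style Bradley-Terry model with ties (illustrative).

    r_a, r_b : positive strength scores for responses A and B
    theta    : tie parameter (>= 1); larger values make ties more likely
    Returns (P[A preferred], P[tie], P[B preferred]); the three sum to 1.
    """
    p_a = r_a / (r_a + theta * r_b)
    p_b = r_b / (r_b + theta * r_a)
    return p_a, 1.0 - p_a - p_b, p_b

# Example: response A twice as strong as response B
print(tie_aware_bt_probs(2.0, 1.0))  # approx (0.571, 0.179, 0.250)
```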
2411.07496 | Ganzhao Yuan | Ganzhao Yuan | ADMM for Structured Fractional Minimization | null | null | null | null | math.OC cs.LG cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | This paper considers a class of structured fractional minimization problems.
The numerator consists of a differentiable function, a simple nonconvex
nonsmooth function, a concave nonsmooth function, and a convex nonsmooth
function composed with a linear operator. The denominator is a continuous
function that is either weakly convex or has a weakly convex square root. These
problems are prevalent in various important applications in machine learning
and data science. Existing methods, primarily based on subgradient methods and
smoothing proximal gradient methods, often suffer from slow convergence and
numerical stability issues. In this paper, we introduce {\sf FADMM}, the first
Alternating Direction Method of Multipliers tailored for this class of
problems. {\sf FADMM} decouples the original problem into linearized proximal
subproblems, featuring two variants: one using Dinkelbach's parametric method
({\sf FADMM-D}) and the other using the quadratic transform method ({\sf
FADMM-Q}). By introducing a novel Lyapunov function, we establish that {\sf
FADMM} converges to $\epsilon$-approximate critical points of the problem
within an oracle complexity of $\mathcal{O}(1/\epsilon^{3})$. Extensive
experiments on synthetic and real-world datasets, including sparse Fisher
discriminant analysis, robust Sharpe ratio minimization, and robust sparse
recovery, demonstrate the effectiveness of our approach.
Keywords: Fractional Minimization, Nonconvex Optimization, Proximal
Linearized ADMM, Nonsmooth Optimization, Convergence Analysis
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 02:50:12 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 02:26:37 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yuan",
"Ganzhao",
""
]
] | TITLE: ADMM for Structured Fractional Minimization
ABSTRACT: This paper considers a class of structured fractional minimization problems.
The numerator consists of a differentiable function, a simple nonconvex
nonsmooth function, a concave nonsmooth function, and a convex nonsmooth
function composed with a linear operator. The denominator is a continuous
function that is either weakly convex or has a weakly convex square root. These
problems are prevalent in various important applications in machine learning
and data science. Existing methods, primarily based on subgradient methods and
smoothing proximal gradient methods, often suffer from slow convergence and
numerical stability issues. In this paper, we introduce {\sf FADMM}, the first
Alternating Direction Method of Multipliers tailored for this class of
problems. {\sf FADMM} decouples the original problem into linearized proximal
subproblems, featuring two variants: one using Dinkelbach's parametric method
({\sf FADMM-D}) and the other using the quadratic transform method ({\sf
FADMM-Q}). By introducing a novel Lyapunov function, we establish that {\sf
FADMM} converges to $\epsilon$-approximate critical points of the problem
within an oracle complexity of $\mathcal{O}(1/\epsilon^{3})$. Extensive
experiments on synthetic and real-world datasets, including sparse Fisher
discriminant analysis, robust Sharpe ratio minimization, and robust sparse
recovery, demonstrate the effectiveness of our approach.
Keywords: Fractional Minimization, Nonconvex Optimization, Proximal
Linearized ADMM, Nonsmooth Optimization, Convergence Analysis
|
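Written out, the problem class described in the ADMM record above takes the following fractional form; the symbols below paraphrase the abstract's description and are not necessarily the paper's notation.

```latex
% f: differentiable, g: simple nonconvex nonsmooth, h: concave nonsmooth,
% p: convex nonsmooth composed with a linear operator A,
% d: continuous, weakly convex or with a weakly convex square root, d(x) > 0.
\min_{x \in \mathbb{R}^n} \; \frac{f(x) + g(x) + h(x) + p(Ax)}{d(x)}
```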
2411.08002 | Behzad Ghanbarian | Shaluka Senevirathna, Anna Zemlyanova, Shaina A. Kelly, Qinhong Hu,
Yong Zhang and Behzad Ghanbarian | Modeling and scaling spontaneous imbibition with generalized fractional
flow theory and non-Boltzmann transformation | 7 figures and 1 table | SPE Journal, 2025 | 10.2118/226176-PA | SPE-226176-PA | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Spontaneous imbibition (SI) is a process by which liquid is drawn into
partially saturated porous media by capillary forces, relevant for subsurface
processes like underground fluid storage and withdrawal. Accurate modeling and
scaling of counter-current SI have long been challenging. In this study, we
proposed a generalized fractional flow theory (GFFT) using the Hausdorff
fractal derivative, combined with non-Boltzmann scaling. The model links
imbibition distance to time through the power law exponent alpha/2, where alpha
is the fractal index (0 < alpha < 2 in this study). We applied the GFFT to
various experimental and simulated datasets of both porous and fractured
media, finding that alpha varied with factors such as contact angle (of the
imbibing fluid), dynamic viscosity, pore structure, and fracture properties. By
analyzing SI data from sandstones, diatomite, carbonate, and synthetic porous
media, we demonstrated that the non-Boltzmann scaling provided a better
collapse of the SI data than the traditional Boltzmann approach (alpha = 1),
with alpha values ranging from 0.88 to 1.54. These deviations illustrate the
model's adaptability to different porous materials. Using the GFFT, we expect
to better predict fluid imbibition rates when properties like porosity,
permeability, initial and maximum saturations, viscosity, and wettability are
known, offering a more accurate alternative to traditional models.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 18:28:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Senevirathna",
"Shaluka",
""
],
[
"Zemlyanova",
"Anna",
""
],
[
"Kelly",
"Shaina A.",
""
],
[
"Hu",
"Qinhong",
""
],
[
"Zhang",
"Yong",
""
],
[
"Ghanbarian",
"Behzad",
""
]
] | TITLE: Modeling and scaling spontaneous imbibition with generalized fractional
flow theory and non-Boltzmann transformation
ABSTRACT: Spontaneous imbibition (SI) is a process by which liquid is drawn into
partially saturated porous media by capillary forces, relevant for subsurface
processes like underground fluid storage and withdrawal. Accurate modeling and
scaling of counter-current SI have long been challenging. In this study, we
proposed a generalized fractional flow theory (GFFT) using the Hausdorff
fractal derivative, combined with non-Boltzmann scaling. The model links
imbibition distance to time through the power law exponent alpha/2, where alpha
is the fractal index (0 < alpha < 2 in this study). We applied the GFFT to
various experimental and simulated datasets of both porous and fractured
media, finding that alpha varied with factors such as contact angle (of the
imbibing fluid), dynamic viscosity, pore structure, and fracture properties. By
analyzing SI data from sandstones, diatomite, carbonate, and synthetic porous
media, we demonstrated that the non-Boltzmann scaling provided a better
collapse of the SI data than the traditional Boltzmann approach (alpha = 1),
with alpha values ranging from 0.88 to 1.54. These deviations illustrate the
model's adaptability to different porous materials. Using the GFFT, we expect
to better predict fluid imbibition rates when properties like porosity,
permeability, initial and maximum saturations, viscosity, and wettability are
known, offering a more accurate alternative to traditional models.
|
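The GFFT in the record above links imbibition distance to time through the exponent alpha/2, with alpha = 1 recovering Boltzmann scaling. Below is a minimal sketch of estimating alpha from distance-time data via a log-log least-squares fit; the data and constants are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic imbibition data: x(t) = C * t**(alpha/2) with alpha = 1.2 (illustrative)
rng = np.random.default_rng(0)
t = np.linspace(1.0, 100.0, 50)                                        # time
x = 0.5 * t ** (1.2 / 2) * (1 + 0.02 * rng.standard_normal(t.size))    # imbibition distance

# Fit log x = log C + (alpha/2) * log t by ordinary least squares
slope, intercept = np.polyfit(np.log(t), np.log(x), 1)
alpha_hat = 2.0 * slope
print(f"estimated alpha = {alpha_hat:.3f}")  # ~1.2; alpha = 1 is classical Boltzmann scaling
```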
2411.08028 | Juanhui Li | Juanhui Li, Sreyashi Nag, Hui Liu, Xianfeng Tang, Sheikh Sarwar,
Limeng Cui, Hansu Gu, Suhang Wang, Qi He, Jiliang Tang | Learning with Less: Knowledge Distillation from Large Language Models
via Unlabeled Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In real-world NLP applications, Large Language Models (LLMs) offer promising
solutions due to their extensive training on vast datasets. However, the large
size and high computation demands of LLMs limit their practicality in many
applications, especially when further fine-tuning is required. To address these
limitations, smaller models are typically preferred for deployment. However,
their training is hindered by the scarcity of labeled data. In contrast,
unlabeled data is often readily available, which can be leveraged by using LLMs
to generate pseudo-labels for training smaller models. This enables the smaller
models (student) to acquire knowledge from LLMs (teacher) while reducing
computational costs. This process introduces challenges, such as potential
noisy pseudo-labels. Selecting high-quality and informative data is therefore
critical to enhance model performance while improving the efficiency of data
utilization. To address this, we propose LLKD that enables Learning with Less
computational resources and less data for Knowledge Distillation from LLMs.
LLKD is an adaptive sample selection method that incorporates signals from both
the teacher and student. Specifically, it prioritizes samples where the teacher
demonstrates high confidence in its labeling, indicating reliable labels, and
where the student exhibits a high information need, identifying challenging
samples that require further learning. Our comprehensive experiments show that
LLKD achieves superior performance across various datasets with higher data
efficiency.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 18:57:59 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Feb 2025 07:01:34 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 06:21:19 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Juanhui",
""
],
[
"Nag",
"Sreyashi",
""
],
[
"Liu",
"Hui",
""
],
[
"Tang",
"Xianfeng",
""
],
[
"Sarwar",
"Sheikh",
""
],
[
"Cui",
"Limeng",
""
],
[
"Gu",
"Hansu",
""
],
[
"Wang",
"Suhang",
""
],
[
"He",
"Qi",
""
],
[
"Tang",
"Jiliang",
""
]
] | TITLE: Learning with Less: Knowledge Distillation from Large Language Models
via Unlabeled Data
ABSTRACT: In real-world NLP applications, Large Language Models (LLMs) offer promising
solutions due to their extensive training on vast datasets. However, the large
size and high computation demands of LLMs limit their practicality in many
applications, especially when further fine-tuning is required. To address these
limitations, smaller models are typically preferred for deployment. However,
their training is hindered by the scarcity of labeled data. In contrast,
unlabeled data is often readily available, which can be leveraged by using LLMs
to generate pseudo-labels for training smaller models. This enables the smaller
models (student) to acquire knowledge from LLMs (teacher) while reducing
computational costs. This process introduces challenges, such as potential
noisy pseudo-labels. Selecting high-quality and informative data is therefore
critical to enhance model performance while improving the efficiency of data
utilization. To address this, we propose LLKD that enables Learning with Less
computational resources and less data for Knowledge Distillation from LLMs.
LLKD is an adaptive sample selection method that incorporates signals from both
the teacher and student. Specifically, it prioritizes samples where the teacher
demonstrates high confidence in its labeling, indicating reliable labels, and
where the student exhibits a high information need, identifying challenging
samples that require further learning. Our comprehensive experiments show that
LLKD achieves superior performance across various datasets with higher data
efficiency.
|
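The LLKD selection rule described in the record above favors unlabeled samples where the teacher labels confidently and the student still shows a high information need. The sketch below combines the two signals multiplicatively; the exact scores and combination used by LLKD are not given here, so this is an assumed illustration only.

```python
import numpy as np

def select_samples(teacher_probs, student_probs, k):
    """Pick k samples scoring high on teacher confidence and student uncertainty.

    teacher_probs, student_probs: arrays of shape (n_samples, n_classes) holding
    per-class probabilities from the pseudo-labeling teacher and the student.
    """
    teacher_conf = teacher_probs.max(axis=1)                                   # label reliability
    student_entropy = -(student_probs * np.log(student_probs + 1e-12)).sum(axis=1)
    student_need = student_entropy / np.log(student_probs.shape[1])            # normalize to [0, 1]
    score = teacher_conf * student_need                                        # illustrative combination
    return np.argsort(score)[::-1][:k]

teacher = np.array([[0.9, 0.05, 0.05], [0.4, 0.3, 0.3], [0.8, 0.1, 0.1]])
student = np.array([[0.34, 0.33, 0.33], [0.9, 0.05, 0.05], [0.6, 0.2, 0.2]])
print(select_samples(teacher, student, k=1))  # sample 0: confident teacher, uncertain student
```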
2411.11903 | Jinnan Chen | Jinnan Chen, Chen Li, Gim Hee Lee | DiHuR: Diffusion-Guided Generalizable Human Reconstruction | Accepted to WACV 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce DiHuR, a novel Diffusion-guided model for generalizable Human 3D
Reconstruction and view synthesis from sparse, minimally overlapping images.
While existing generalizable human radiance fields excel at novel view
synthesis, they often struggle with comprehensive 3D reconstruction. Similarly,
directly optimizing implicit Signed Distance Function (SDF) fields from
sparse-view images typically yields poor results due to limited overlap. To
enhance 3D reconstruction quality, we propose using learnable tokens associated
with SMPL vertices to aggregate sparse view features and then to guide SDF
prediction. These tokens learn a generalizable prior across different
identities in training datasets, leveraging the consistent projection of SMPL
vertices onto similar semantic areas across various human identities. This
consistency enables effective knowledge transfer to unseen identities during
inference. Recognizing SMPL's limitations in capturing clothing details, we
incorporate a diffusion model as an additional prior to fill in missing
information, particularly for complex clothing geometries. Our method
integrates two key priors in a coherent manner: the prior from generalizable
feed-forward models and the 2D diffusion prior, and it requires only multi-view
image training, without 3D supervision. DiHuR demonstrates superior performance
in both within-dataset and cross-dataset generalization settings, as validated
on THuman, ZJU-MoCap, and HuMMan datasets compared to existing methods.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2024 03:52:23 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 19:55:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Jinnan",
""
],
[
"Li",
"Chen",
""
],
[
"Lee",
"Gim Hee",
""
]
] | TITLE: DiHuR: Diffusion-Guided Generalizable Human Reconstruction
ABSTRACT: We introduce DiHuR, a novel Diffusion-guided model for generalizable Human 3D
Reconstruction and view synthesis from sparse, minimally overlapping images.
While existing generalizable human radiance fields excel at novel view
synthesis, they often struggle with comprehensive 3D reconstruction. Similarly,
directly optimizing implicit Signed Distance Function (SDF) fields from
sparse-view images typically yields poor results due to limited overlap. To
enhance 3D reconstruction quality, we propose using learnable tokens associated
with SMPL vertices to aggregate sparse view features and then to guide SDF
prediction. These tokens learn a generalizable prior across different
identities in training datasets, leveraging the consistent projection of SMPL
vertices onto similar semantic areas across various human identities. This
consistency enables effective knowledge transfer to unseen identities during
inference. Recognizing SMPL's limitations in capturing clothing details, we
incorporate a diffusion model as an additional prior to fill in missing
information, particularly for complex clothing geometries. Our method
integrates two key priors in a coherent manner: the prior from generalizable
feed-forward models and the 2D diffusion prior, and it requires only multi-view
image training, without 3D supervision. DiHuR demonstrates superior performance
in both within-dataset and cross-dataset generalization settings, as validated
on THuman, ZJU-MoCap, and HuMMan datasets compared to existing methods.
|
2411.11912 | Pramit Saha | Pramit Saha, Felix Wagner, Divyanshu Mishra, Can Peng, Anshul Thakur,
David Clifton, Konstantinos Kamnitsas, J. Alison Noble | F$^3$OCUS -- Federated Finetuning of Vision-Language Foundation Models
with Optimal Client Layer Updating Strategy via Multi-objective
Meta-Heuristics | Accepted in CVPR 2025 | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Effective training of large Vision-Language Models (VLMs) on
resource-constrained client devices in Federated Learning (FL) requires the
usage of parameter-efficient fine-tuning (PEFT) strategies. To this end, we
demonstrate the impact of two factors \textit{viz.}, client-specific layer
importance score that selects the most important VLM layers for fine-tuning and
inter-client layer diversity score that encourages diverse layer selection
across clients for optimal VLM layer selection. We first theoretically motivate
and leverage the principal eigenvalue magnitude of layerwise Neural Tangent
Kernels and show its effectiveness as a client-specific layer importance score.
Next, we propose a novel layer updating strategy dubbed F$^3$OCUS that jointly
optimizes the layer importance and diversity factors by employing a data-free,
multi-objective, meta-heuristic optimization on the server. We explore 5
different meta-heuristic algorithms and compare their effectiveness for
selecting model layers and adapter layers towards PEFT-FL. Furthermore, we
release a new MedVQA-FL dataset involving overall 707,962 VQA triplets and 9
modality-specific clients and utilize it to train and evaluate our method.
Overall, we conduct more than 10,000 client-level experiments on 6
Vision-Language FL task settings involving 58 medical image datasets and 4
different VLM architectures of varying sizes to demonstrate the effectiveness
of the proposed method.
| [
{
"version": "v1",
"created": "Sun, 17 Nov 2024 21:54:57 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 10:30:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Saha",
"Pramit",
""
],
[
"Wagner",
"Felix",
""
],
[
"Mishra",
"Divyanshu",
""
],
[
"Peng",
"Can",
""
],
[
"Thakur",
"Anshul",
""
],
[
"Clifton",
"David",
""
],
[
"Kamnitsas",
"Konstantinos",
""
],
[
"Noble",
"J. Alison",
""
]
] | TITLE: F$^3$OCUS -- Federated Finetuning of Vision-Language Foundation Models
with Optimal Client Layer Updating Strategy via Multi-objective
Meta-Heuristics
ABSTRACT: Effective training of large Vision-Language Models (VLMs) on
resource-constrained client devices in Federated Learning (FL) requires the
usage of parameter-efficient fine-tuning (PEFT) strategies. To this end, we
demonstrate the impact of two factors \textit{viz.}, client-specific layer
importance score that selects the most important VLM layers for fine-tuning and
inter-client layer diversity score that encourages diverse layer selection
across clients for optimal VLM layer selection. We first theoretically motivate
and leverage the principal eigenvalue magnitude of layerwise Neural Tangent
Kernels and show its effectiveness as a client-specific layer importance score.
Next, we propose a novel layer updating strategy dubbed F$^3$OCUS that jointly
optimizes the layer importance and diversity factors by employing a data-free,
multi-objective, meta-heuristic optimization on the server. We explore 5
different meta-heuristic algorithms and compare their effectiveness for
selecting model layers and adapter layers towards PEFT-FL. Furthermore, we
release a new MedVQA-FL dataset involving overall 707,962 VQA triplets and 9
modality-specific clients and utilize it to train and evaluate our method.
Overall, we conduct more than 10,000 client-level experiments on 6
Vision-Language FL task settings involving 58 medical image datasets and 4
different VLM architectures of varying sizes to demonstrate the effectiveness
of the proposed method.
|
2411.12914 | Xihe Gu | Xihe Gu, Greg Fields, Yaman Jandali, Tara Javidi, Farinaz Koushanfar | Trojan Cleansing with Neural Collapse | null | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trojan attacks are sophisticated training-time attacks on neural networks
that embed backdoor triggers which force the network to produce a specific
output on any input which includes the trigger. With the increasing relevance
of deep networks which are too large to train with personal resources and which
are trained on data too large to thoroughly audit, these training-time attacks
pose a significant risk. In this work, we connect trojan attacks to Neural
Collapse, a phenomenon wherein the final feature representations of
over-parameterized neural networks converge to a simple geometric structure. We
provide experimental evidence that trojan attacks disrupt this convergence for
a variety of datasets and architectures. We then use this disruption to design
a lightweight, broadly generalizable mechanism for cleansing trojan attacks
from a wide variety of different network architectures and experimentally
demonstrate its efficacy.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2024 22:57:40 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 18:04:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gu",
"Xihe",
""
],
[
"Fields",
"Greg",
""
],
[
"Jandali",
"Yaman",
""
],
[
"Javidi",
"Tara",
""
],
[
"Koushanfar",
"Farinaz",
""
]
] | TITLE: Trojan Cleansing with Neural Collapse
ABSTRACT: Trojan attacks are sophisticated training-time attacks on neural networks
that embed backdoor triggers which force the network to produce a specific
output on any input which includes the trigger. With the increasing relevance
of deep networks which are too large to train with personal resources and which
are trained on data too large to thoroughly audit, these training-time attacks
pose a significant risk. In this work, we connect trojan attacks to Neural
Collapse, a phenomenon wherein the final feature representations of
over-parameterized neural networks converge to a simple geometric structure. We
provide experimental evidence that trojan attacks disrupt this convergence for
a variety of datasets and architectures. We then use this disruption to design
a lightweight, broadly generalizable mechanism for cleansing trojan attacks
from a wide variety of different network architectures and experimentally
demonstrate its efficacy.
|
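Neural Collapse, referenced in the record above, is commonly quantified by comparing within-class to between-class variability of last-layer features. Below is a minimal sketch of one simple trace-ratio diagnostic; the paper's exact metric and its cleansing mechanism are not reproduced here.

```python
import numpy as np

def nc_trace_ratio(features: np.ndarray, labels: np.ndarray) -> float:
    """tr(S_W) / tr(S_B) for penultimate-layer features; smaller => stronger collapse."""
    global_mean = features.mean(axis=0)
    s_within, s_between = 0.0, 0.0
    for c in np.unique(labels):
        class_feats = features[labels == c]
        class_mean = class_feats.mean(axis=0)
        s_within += ((class_feats - class_mean) ** 2).sum()
        s_between += len(class_feats) * ((class_mean - global_mean) ** 2).sum()
    return s_within / s_between

# Random features show no collapse (large ratio); collapsed features would score near 0
feats = np.random.default_rng(0).normal(size=(100, 16))
labels = np.repeat(np.arange(4), 25)
print(nc_trace_ratio(feats, labels))
```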
2411.13323 | Daniel Ramos | Daniel Ramos, Claudia Mamede, Kush Jain, Paulo Canelas, Catarina
Gamboa, Claire Le Goues | Are Large Language Models Memorizing Bug Benchmarks? | null | null | null | null | cs.SE cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have become integral to various software
engineering tasks, including code generation, bug detection, and repair. To
evaluate model performance in these domains, numerous bug benchmarks containing
real-world bugs from software projects have been developed. However, a growing
concern within the software engineering community is that these benchmarks may
not reliably reflect true LLM performance due to the risk of data leakage.
Despite this concern, limited research has been conducted to quantify the
impact of potential leakage. In this paper, we systematically evaluate popular
LLMs to assess their susceptibility to data leakage from widely used bug
benchmarks. To identify potential leakage, we use multiple metrics, including a
study of benchmark membership within commonly used training datasets, as well
as analyses of negative log-likelihood and n-gram accuracy. Our findings show
that certain models, in particular codegen-multi, exhibit significant evidence
of memorization in widely used benchmarks like Defects4J, while newer models
trained on larger datasets like LLaMa 3.1 exhibit limited signs of leakage.
These results highlight the need for careful benchmark selection and the
adoption of robust metrics to adequately assess models' capabilities.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 13:46:04 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Nov 2024 23:44:43 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 13:02:51 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ramos",
"Daniel",
""
],
[
"Mamede",
"Claudia",
""
],
[
"Jain",
"Kush",
""
],
[
"Canelas",
"Paulo",
""
],
[
"Gamboa",
"Catarina",
""
],
[
"Goues",
"Claire Le",
""
]
] | TITLE: Are Large Language Models Memorizing Bug Benchmarks?
ABSTRACT: Large Language Models (LLMs) have become integral to various software
engineering tasks, including code generation, bug detection, and repair. To
evaluate model performance in these domains, numerous bug benchmarks containing
real-world bugs from software projects have been developed. However, a growing
concern within the software engineering community is that these benchmarks may
not reliably reflect true LLM performance due to the risk of data leakage.
Despite this concern, limited research has been conducted to quantify the
impact of potential leakage. In this paper, we systematically evaluate popular
LLMs to assess their susceptibility to data leakage from widely used bug
benchmarks. To identify potential leakage, we use multiple metrics, including a
study of benchmark membership within commonly used training datasets, as well
as analyses of negative log-likelihood and n-gram accuracy. Our findings show
that certain models, in particular codegen-multi, exhibit significant evidence
of memorization in widely used benchmarks like Defects4J, while newer models
trained on larger datasets like LLaMa 3.1 exhibit limited signs of leakage.
These results highlight the need for careful benchmark selection and the
adoption of robust metrics to adequately assess models' capabilities.
|
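One of the leakage signals used in the record above is the negative log-likelihood a model assigns to benchmark code. Below is a minimal sketch with the Hugging Face transformers library; "gpt2" is only a placeholder checkpoint, and the assumed usage is to compare NLL on benchmark snippets against held-out code of similar style.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_nll(text: str, model, tokenizer) -> float:
    """Average per-token negative log-likelihood of `text` under a causal LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()  # cross-entropy loss equals mean NLL per token

if __name__ == "__main__":
    name = "gpt2"  # placeholder; swap in the model under study
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    snippet = "public int add(int a, int b) { return a + b; }"
    print(f"avg NLL: {avg_nll(snippet, model, tokenizer):.3f}")
```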
2411.15262 | Weijia Wu | Weijia Wu and Mingyu Liu and Zeyu Zhu and Xi Xia and Haoen Feng and
Wen Wang and Kevin Qinghong Lin and Chunhua Shen and Mike Zheng Shou | MovieBench: A Hierarchical Movie Level Dataset for Long Video Generation | The project website is at: https://weijiawu.github.io/MovieBench/.
Code: https://github.com/showlab/MovieBecnh | CVPR 2025 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in video generation models, like Stable Video Diffusion,
show promising results, but primarily focus on short, single-scene videos.
These models struggle with generating long videos that involve multiple scenes,
coherent narratives, and consistent characters. Furthermore, there is no
publicly available dataset tailored for the analysis, evaluation, and training
of long video generation models. In this paper, we present MovieBench: A
Hierarchical Movie-Level Dataset for Long Video Generation, which addresses
these challenges by providing unique contributions: (1) movie-length videos
featuring rich, coherent storylines and multi-scene narratives, (2) consistency
of character appearance and audio across scenes, and (3) a hierarchical data
structure that contains high-level movie information and detailed shot-level
descriptions. Experiments demonstrate that MovieBench brings some new insights
and challenges, such as maintaining character ID consistency across multiple
scenes for various characters. The dataset will be public and continuously
maintained, aiming to advance the field of long video generation. Data can be
found at: https://weijiawu.github.io/MovieBench/.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 10:25:08 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 02:52:56 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wu",
"Weijia",
""
],
[
"Liu",
"Mingyu",
""
],
[
"Zhu",
"Zeyu",
""
],
[
"Xia",
"Xi",
""
],
[
"Feng",
"Haoen",
""
],
[
"Wang",
"Wen",
""
],
[
"Lin",
"Kevin Qinghong",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Shou",
"Mike Zheng",
""
]
] | TITLE: MovieBench: A Hierarchical Movie Level Dataset for Long Video Generation
ABSTRACT: Recent advancements in video generation models, like Stable Video Diffusion,
show promising results, but primarily focus on short, single-scene videos.
These models struggle with generating long videos that involve multiple scenes,
coherent narratives, and consistent characters. Furthermore, there is no
publicly available dataset tailored for the analysis, evaluation, and training
of long video generation models. In this paper, we present MovieBench: A
Hierarchical Movie-Level Dataset for Long Video Generation, which addresses
these challenges by providing unique contributions: (1) movie-length videos
featuring rich, coherent storylines and multi-scene narratives, (2) consistency
of character appearance and audio across scenes, and (3) a hierarchical data
structure that contains high-level movie information and detailed shot-level
descriptions. Experiments demonstrate that MovieBench brings some new insights
and challenges, such as maintaining character ID consistency across multiple
scenes for various characters. The dataset will be public and continuously
maintained, aiming to advance the field of long video generation. Data can be
found at: https://weijiawu.github.io/MovieBench/.
|
2411.15382 | Elita Lobo | Elita Lobo, Chirag Agarwal, Himabindu Lakkaraju | On the Impact of Fine-Tuning on Chain-of-Thought Reasoning | This paper is a work in progress with findings based on limited
evidence. Please exercise discretion when interpreting the findings | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models have emerged as powerful tools for general
intelligence, showcasing advanced natural language processing capabilities that
find applications across diverse domains. Despite their impressive performance,
recent studies have highlighted the potential for significant enhancements in
LLMs' task-specific performance through fine-tuning strategies like
Reinforcement Learning with Human Feedback (RLHF), supervised fine-tuning
(SFT), and Quantized Low-Rank Adapters (Q-LoRA). However, previous works
have shown that while fine-tuning offers significant performance gains, it also
leads to challenges such as catastrophic forgetting and privacy and safety
risks. To this end, there has been little to no work in \textit{understanding
the impact of fine-tuning on the reasoning capabilities of LLMs}. Our research
investigates the effect of fine-tuning on the reasoning abilities of LLMs,
addressing critical questions regarding the impact of task-specific fine-tuning
on overall reasoning capabilities, the influence of fine-tuning on
Chain-of-Thought (CoT) reasoning performance, and the implications for the
faithfulness of CoT reasonings. By exploring these dimensions, our study shows
the impact of fine-tuning on LLM reasoning capabilities, where the faithfulness
of CoT reasoning, on average across four datasets, decreases, highlighting
potential shifts in internal mechanisms of the LLMs resulting from fine-tuning
processes.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 23:54:37 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 23:56:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lobo",
"Elita",
""
],
[
"Agarwal",
"Chirag",
""
],
[
"Lakkaraju",
"Himabindu",
""
]
] | TITLE: On the Impact of Fine-Tuning on Chain-of-Thought Reasoning
ABSTRACT: Large language models have emerged as powerful tools for general
intelligence, showcasing advanced natural language processing capabilities that
find applications across diverse domains. Despite their impressive performance,
recent studies have highlighted the potential for significant enhancements in
LLMs' task-specific performance through fine-tuning strategies like
Reinforcement Learning with Human Feedback (RLHF), supervised fine-tuning
(SFT), and Quantized Low-Rank Adapters (Q-LoRA). However, previous works
have shown that while fine-tuning offers significant performance gains, it also
leads to challenges such as catastrophic forgetting and privacy and safety
risks. To this end, there has been little to no work in \textit{understanding
the impact of fine-tuning on the reasoning capabilities of LLMs}. Our research
investigates the effect of fine-tuning on the reasoning abilities of LLMs,
addressing critical questions regarding the impact of task-specific fine-tuning
on overall reasoning capabilities, the influence of fine-tuning on
Chain-of-Thought (CoT) reasoning performance, and the implications for the
faithfulness of CoT reasonings. By exploring these dimensions, our study shows
the impact of fine-tuning on LLM reasoning capabilities, where the faithfulness
of CoT reasoning, on average across four datasets, decreases, highlighting
potential shifts in internal mechanisms of the LLMs resulting from fine-tuning
processes.
|
2411.15738 | Qifan Yu | Qifan Yu, Wei Chow, Zhongqi Yue, Kaihang Pan, Yang Wu, Xiaoyang Wan,
Juncheng Li, Siliang Tang, Hanwang Zhang, Yueting Zhuang | AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction-based image editing aims to modify specific image elements with
natural language instructions. However, current models in this domain often
struggle to accurately execute complex user instructions, as they are trained
on low-quality data with limited editing types. We present AnyEdit, a
comprehensive multi-modal instruction editing dataset, comprising 2.5 million
high-quality editing pairs spanning over 20 editing types and five domains. We
ensure the diversity and quality of the AnyEdit collection through three
aspects: initial data diversity, adaptive editing process, and automated
selection of editing results. Using the dataset, we further train a novel
AnyEdit Stable Diffusion with task-aware routing and learnable task embedding
for unified image editing. Comprehensive experiments on three benchmark
datasets show that AnyEdit consistently boosts the performance of
diffusion-based editing models. This presents prospects for developing
instruction-driven image editing models that support human creativity.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 07:02:56 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Nov 2024 03:34:34 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 04:08:47 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yu",
"Qifan",
""
],
[
"Chow",
"Wei",
""
],
[
"Yue",
"Zhongqi",
""
],
[
"Pan",
"Kaihang",
""
],
[
"Wu",
"Yang",
""
],
[
"Wan",
"Xiaoyang",
""
],
[
"Li",
"Juncheng",
""
],
[
"Tang",
"Siliang",
""
],
[
"Zhang",
"Hanwang",
""
],
[
"Zhuang",
"Yueting",
""
]
] | TITLE: AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea
ABSTRACT: Instruction-based image editing aims to modify specific image elements with
natural language instructions. However, current models in this domain often
struggle to accurately execute complex user instructions, as they are trained
on low-quality data with limited editing types. We present AnyEdit, a
comprehensive multi-modal instruction editing dataset, comprising 2.5 million
high-quality editing pairs spanning over 20 editing types and five domains. We
ensure the diversity and quality of the AnyEdit collection through three
aspects: initial data diversity, adaptive editing process, and automated
selection of editing results. Using the dataset, we further train a novel
AnyEdit Stable Diffusion with task-aware routing and learnable task embedding
for unified image editing. Comprehensive experiments on three benchmark
datasets show that AnyEdit consistently boosts the performance of
diffusion-based editing models. This presents prospects for developing
instruction-driven image editing models that support human creativity.
|
2411.15821 | Aryan Sajith | Aryan Sajith, Krishna Chaitanya Rao Kathala | Is Training Data Quality or Quantity More Impactful to Small Language
Model Performance? | 10 pages, 4 figures | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study investigates the relative impact of training data quality versus
quantity on the performance of small language models (SLMs), utilizing the
TinyStories dataset for empirical analysis. Analysis of dataset variations with
respect to size (25% and 50% of the original size) and duplication (controlled
rates of 25%, 50%, 75%, and 100%) were performed. Model performance was
evaluated based on the validation loss, accuracy, and perplexity metrics.
Results indicate training data quality plays a more significant role in the
overall performance of SLMs, especially given the scale of this experiment. Minimal
duplication positively impacted model accuracy (+0.87% increase in accuracy at
25% duplication) without significantly increasing perplexity (+0.52% increase
going from 0% to 25% duplication), but excessive duplication led to pronounced
performance degradation (-40% drop in accuracy at 100% duplication). The
implications of this exploration extend beyond just model performance; training
large-scale models imposes significant financial and computational burdens,
which can be prohibitive for organizations, individuals, and the public at
large, especially in developing countries. Additionally, the energy consumption
associated with large-scale training raises environmental concerns.
Understanding the relative importance of data quality versus quantity could
democratize AI technology, making advanced models more accessible and
sustainable for all.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 12:51:50 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 22:38:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sajith",
"Aryan",
""
],
[
"Kathala",
"Krishna Chaitanya Rao",
""
]
] | TITLE: Is Training Data Quality or Quantity More Impactful to Small Language
Model Performance?
ABSTRACT: This study investigates the relative impact of training data quality versus
quantity on the performance of small language models (SLMs), utilizing the
TinyStories dataset for empirical analysis. Analysis of dataset variations with
respect to size (25% and 50% of the original size) and duplication (controlled
rates of 25%, 50%, 75%, and 100%) were performed. Model performance was
evaluated based on the validation loss, accuracy, and perplexity metrics.
Results indicate training data quality plays a more significant role in the
overall performance of SLMs, especially given the scale of this experiment. Minimal
duplication positively impacted model accuracy (+0.87% increase in accuracy at
25% duplication) without significantly increasing perplexity (+0.52% increase
going from 0% to 25% duplication), but excessive duplication led to pronounced
performance degradation (-40% drop in accuracy at 100% duplication). The
implications of this exploration extend beyond just model performance; training
large-scale models imposes significant financial and computational burdens,
which can be prohibitive for organizations, individuals, and the public at
large, especially in developing countries. Additionally, the energy consumption
associated with large-scale training raises environmental concerns.
Understanding the relative importance of data quality versus quantity could
democratize AI technology, making advanced models more accessible and
sustainable for all.
|
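The study in the record above varies the duplication rate of the training corpus at controlled levels. One simple way to build such variants, assuming a duplication rate means the fraction of samples that appear twice (an interpretation; the exact protocol is not given here):

```python
import random

def with_duplication(samples, dup_rate: float, seed: int = 0):
    """Return samples plus one extra copy of a random `dup_rate` fraction of them."""
    rng = random.Random(seed)
    n_dup = int(len(samples) * dup_rate)
    duplicates = rng.sample(samples, n_dup)
    corpus = samples + duplicates
    rng.shuffle(corpus)
    return corpus

stories = [f"story_{i}" for i in range(8)]
print(len(with_duplication(stories, dup_rate=0.25)))  # 8 originals + 2 duplicates = 10
```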
2411.16761 | Buru Chang | Ji Hyeok Jung, Eun Tae Kim, Seoyeon Kim, Joo Ho Lee, Bumsoo Kim, Buru
Chang | Is 'Right' Right? Enhancing Object Orientation Understanding in
Multimodal Large Language Models through Egocentric Instruction Tuning | CVPR2025 Camera-ready | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models (MLLMs) act as essential interfaces,
connecting humans with AI technologies in multimodal applications. However,
current MLLMs face challenges in accurately interpreting object orientation in
images due to inconsistent orientation annotations in training data, hindering
the development of a coherent orientation understanding. To overcome this, we
propose egocentric instruction tuning, which aligns MLLMs' orientation
understanding with the user's perspective, based on a consistent annotation
standard derived from the user's egocentric viewpoint. We first generate
egocentric instruction data that leverages MLLMs' ability to recognize object
details and applies prior knowledge for orientation understanding. Using this
data, we perform instruction tuning to enhance the model's capability for
accurate orientation interpretation. In addition, we introduce EgoOrientBench,
a benchmark that evaluates MLLMs' orientation understanding across three tasks
using images collected from diverse domains. Experimental results on this
benchmark show that egocentric instruction tuning significantly improves
orientation understanding without compromising overall MLLM performance. The
instruction data and benchmark dataset are available on our project page at
https://github.com/jhCOR/EgoOrientBench.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 15:07:47 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 09:24:00 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jung",
"Ji Hyeok",
""
],
[
"Kim",
"Eun Tae",
""
],
[
"Kim",
"Seoyeon",
""
],
[
"Lee",
"Joo Ho",
""
],
[
"Kim",
"Bumsoo",
""
],
[
"Chang",
"Buru",
""
]
] | TITLE: Is 'Right' Right? Enhancing Object Orientation Understanding in
Multimodal Large Language Models through Egocentric Instruction Tuning
ABSTRACT: Multimodal large language models (MLLMs) act as essential interfaces,
connecting humans with AI technologies in multimodal applications. However,
current MLLMs face challenges in accurately interpreting object orientation in
images due to inconsistent orientation annotations in training data, hindering
the development of a coherent orientation understanding. To overcome this, we
propose egocentric instruction tuning, which aligns MLLMs' orientation
understanding with the user's perspective, based on a consistent annotation
standard derived from the user's egocentric viewpoint. We first generate
egocentric instruction data that leverages MLLMs' ability to recognize object
details and applies prior knowledge for orientation understanding. Using this
data, we perform instruction tuning to enhance the model's capability for
accurate orientation interpretation. In addition, we introduce EgoOrientBench,
a benchmark that evaluates MLLMs' orientation understanding across three tasks
using images collected from diverse domains. Experimental results on this
benchmark show that egocentric instruction tuning significantly improves
orientation understanding without compromising overall MLLM performance. The
instruction data and benchmark dataset are available on our project page at
https://github.com/jhCOR/EgoOrientBench.
|
2411.17776 | Shuyu Yang | Shuyu Yang, Yaxiong Wang, Li Zhu, Zhedong Zheng | Beyond Walking: A Large-Scale Image-Text Benchmark for Text-based Person
Anomaly Search | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-based person search aims to retrieve specific individuals across camera
networks using natural language descriptions. However, current benchmarks often
exhibit biases towards common actions like walking or standing, neglecting the
critical need for identifying abnormal behaviors in real-world scenarios. To
meet such demands, we propose a new task, text-based person anomaly search,
locating pedestrians engaged in both routine and anomalous activities via text.
To enable the training and evaluation of this new task, we construct a
large-scale image-text Pedestrian Anomaly Behavior (PAB) benchmark, featuring a
broad spectrum of actions, e.g., running, performing, playing soccer, and the
corresponding anomalies, e.g., lying, being hit, and falling of the same
identity. The training set of PAB comprises 1,013,605 synthesized image-text
pairs of both normalities and anomalies, while the test set includes 1,978
real-world image-text pairs. To validate the potential of PAB, we introduce a
cross-modal pose-aware framework, which integrates human pose patterns with
identity-based hard negative pair sampling. Extensive experiments on the
proposed benchmark show that synthetic training data facilitates the
fine-grained behavior retrieval, and the proposed pose-aware method arrives at
84.93% recall@1 accuracy, surpassing other competitive methods. The dataset,
model, and code are available at https://github.com/Shuyu-XJTU/CMP.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 09:50:15 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 10:47:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yang",
"Shuyu",
""
],
[
"Wang",
"Yaxiong",
""
],
[
"Zhu",
"Li",
""
],
[
"Zheng",
"Zhedong",
""
]
] | TITLE: Beyond Walking: A Large-Scale Image-Text Benchmark for Text-based Person
Anomaly Search
ABSTRACT: Text-based person search aims to retrieve specific individuals across camera
networks using natural language descriptions. However, current benchmarks often
exhibit biases towards common actions like walking or standing, neglecting the
critical need for identifying abnormal behaviors in real-world scenarios. To
meet such demands, we propose a new task, text-based person anomaly search,
locating pedestrians engaged in both routine and anomalous activities via text.
To enable the training and evaluation of this new task, we construct a
large-scale image-text Pedestrian Anomaly Behavior (PAB) benchmark, featuring a
broad spectrum of actions, e.g., running, performing, playing soccer, and the
corresponding anomalies, e.g., lying, being hit, and falling of the same
identity. The training set of PAB comprises 1,013,605 synthesized image-text
pairs of both normalities and anomalies, while the test set includes 1,978
real-world image-text pairs. To validate the potential of PAB, we introduce a
cross-modal pose-aware framework, which integrates human pose patterns with
identity-based hard negative pair sampling. Extensive experiments on the
proposed benchmark show that synthetic training data facilitates the
fine-grained behavior retrieval, and the proposed pose-aware method arrives at
84.93% recall@1 accuracy, surpassing other competitive methods. The dataset,
model, and code are available at https://github.com/Shuyu-XJTU/CMP.
|
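Recall@1, reported in the record above, is computed from cross-modal similarities between paired text and image embeddings. A minimal sketch with random stand-in embeddings (row i of the text matrix is paired with row i of the image matrix):

```python
import numpy as np

def recall_at_1(text_emb: np.ndarray, image_emb: np.ndarray) -> float:
    """Fraction of text queries whose top-ranked image is their paired one."""
    sims = text_emb @ image_emb.T          # cosine similarities for L2-normalized rows
    top1 = sims.argmax(axis=1)
    return float((top1 == np.arange(len(text_emb))).mean())

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))
text /= np.linalg.norm(text, axis=1, keepdims=True)
image = text + 0.05 * rng.normal(size=text.shape)   # paired images: small perturbation
image /= np.linalg.norm(image, axis=1, keepdims=True)
print(recall_at_1(text, image))  # close to 1.0 when pairs are well aligned
```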
2411.18042 | Trong-Thuan Nguyen | Trong-Thuan Nguyen, Pha Nguyen, Jackson Cothren, Alper Yilmaz, Khoa
Luu | HyperGLM: HyperGraph for Video Scene Graph Generation and Anticipation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multimodal LLMs have advanced vision-language tasks but still struggle with
understanding video scenes. To bridge this gap, Video Scene Graph Generation
(VidSGG) has emerged to capture multi-object relationships across video frames.
However, prior methods rely on pairwise connections, limiting their ability to
handle complex multi-object interactions and reasoning. To this end, we propose
Multimodal LLMs on a Scene HyperGraph (HyperGLM), promoting reasoning about
multi-way interactions and higher-order relationships. Our approach uniquely
integrates entity scene graphs, which capture spatial relationships between
objects, with a procedural graph that models their causal transitions, forming
a unified HyperGraph. Significantly, HyperGLM enables reasoning by injecting
this unified HyperGraph into LLMs. Additionally, we introduce a new Video Scene
Graph Reasoning (VSGR) dataset featuring 1.9M frames from third-person,
egocentric, and drone views, and supporting five tasks: Scene Graph Generation,
Scene Graph Anticipation, Video Question Answering, Video Captioning, and
Relation Reasoning. Empirically, HyperGLM consistently outperforms
state-of-the-art methods across five tasks, effectively modeling and reasoning
about complex relationships in diverse video scenes.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 04:24:39 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 08:16:49 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Nguyen",
"Trong-Thuan",
""
],
[
"Nguyen",
"Pha",
""
],
[
"Cothren",
"Jackson",
""
],
[
"Yilmaz",
"Alper",
""
],
[
"Luu",
"Khoa",
""
]
] | TITLE: HyperGLM: HyperGraph for Video Scene Graph Generation and Anticipation
ABSTRACT: Multimodal LLMs have advanced vision-language tasks but still struggle with
understanding video scenes. To bridge this gap, Video Scene Graph Generation
(VidSGG) has emerged to capture multi-object relationships across video frames.
However, prior methods rely on pairwise connections, limiting their ability to
handle complex multi-object interactions and reasoning. To this end, we propose
Multimodal LLMs on a Scene HyperGraph (HyperGLM), promoting reasoning about
multi-way interactions and higher-order relationships. Our approach uniquely
integrates entity scene graphs, which capture spatial relationships between
objects, with a procedural graph that models their causal transitions, forming
a unified HyperGraph. Significantly, HyperGLM enables reasoning by injecting
this unified HyperGraph into LLMs. Additionally, we introduce a new Video Scene
Graph Reasoning (VSGR) dataset featuring 1.9M frames from third-person,
egocentric, and drone views, and supporting five tasks: Scene Graph Generation,
Scene Graph Anticipation, Video Question Answering, Video Captioning, and
Relation Reasoning. Empirically, HyperGLM consistently outperforms
state-of-the-art methods across five tasks, effectively modeling and reasoning
about complex relationships in diverse video scenes.
|
2411.18343 | Zechen Liu | Zechen Liu, Feiyang Zhang, Wei Song, Xiang Li, Wei Wei | FreqX: Analyze the Attribution Methods in Another Domain | 16 pages, 9 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalized Federated Learning (PFL) allows clients to cooperatively train a
personalized model without disclosing their private dataset. However, PFL
suffers from non-IID data, heterogeneous devices, lack of fairness, and unclear
contribution, challenges that urgently call for interpretability of the deep
learning model. These challenges impose new demands on interpretability: low
cost, privacy, and detailed information. No current interpretability method
satisfies all of them. In this paper, we propose a
novel interpretability method \emph{FreqX} by introducing Signal Processing and
Information Theory. Our experiments show that the explanation results of FreqX
contain both attribution information and concept information. FreqX runs at
least 10 times faster than the baselines which contain concept information.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 13:41:24 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 06:28:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Zechen",
""
],
[
"Zhang",
"Feiyang",
""
],
[
"Song",
"Wei",
""
],
[
"Li",
"Xiang",
""
],
[
"Wei",
"Wei",
""
]
] | TITLE: FreqX: Analyze the Attribution Methods in Another Domain
ABSTRACT: Personalized Federated Learning (PFL) allows clients to cooperatively train a
personalized model without disclosing their private datasets. However, PFL
suffers from non-IID data, heterogeneous devices, a lack of fairness, and
unclear client contributions, challenges that urgently call for the
interpretability of deep learning models. These challenges impose new demands
on interpretability methods: low cost, privacy preservation, and detailed
information. No current interpretability method satisfies all three. In this
paper, we propose a novel interpretability method, \emph{FreqX}, by introducing
Signal Processing and Information Theory. Our experiments show that the
explanation results of FreqX contain both attribution information and concept
information. FreqX runs at least 10 times faster than the baselines that
provide concept information.
|
2411.19626 | Yawen Shao | Yawen Shao, Wei Zhai, Yuhang Yang, Hongchen Luo, Yang Cao, Zheng-Jun
Zha | GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D
Object Affordance Grounding | CVPR 2025. Project page: https://yawen-shao.github.io/GREAT/ Code:
https://github.com/yawen-shao/GREAT_code | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-Vocabulary 3D object affordance grounding aims to anticipate ``action
possibilities'' regions on 3D objects with arbitrary instructions, which is
crucial for robots to generically perceive real scenarios and respond to
operational changes. Existing methods focus on combining images or languages
that depict interactions with 3D geometries to introduce external interaction
priors. However, they are still vulnerable to a limited semantic space by
failing to leverage implied invariant geometries and potential interaction
intentions. Normally, humans address complex tasks through multi-step reasoning
and respond to diverse situations by leveraging associative and analogical
thinking. In light of this, we propose GREAT (GeometRy-intEntion collAboraTive
inference) for Open-Vocabulary 3D Object Affordance Grounding, a novel
framework that mines the objects' invariant geometry attributes and performs
analogical reasoning in potential interaction scenarios to form affordance
knowledge, fully combining this knowledge with both geometries and visual
contents to ground 3D object affordance. Besides, we introduce the Point Image
Affordance Dataset v2 (PIADv2), the largest 3D object affordance dataset at
present to support the task. Extensive experiments demonstrate the
effectiveness and superiority of GREAT. The code and dataset are available at
https://yawen-shao.github.io/GREAT/.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 11:23:15 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 03:46:58 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shao",
"Yawen",
""
],
[
"Zhai",
"Wei",
""
],
[
"Yang",
"Yuhang",
""
],
[
"Luo",
"Hongchen",
""
],
[
"Cao",
"Yang",
""
],
[
"Zha",
"Zheng-Jun",
""
]
] | TITLE: GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D
Object Affordance Grounding
ABSTRACT: Open-Vocabulary 3D object affordance grounding aims to anticipate ``action
possibilities'' regions on 3D objects with arbitrary instructions, which is
crucial for robots to generically perceive real scenarios and respond to
operational changes. Existing methods focus on combining images or languages
that depict interactions with 3D geometries to introduce external interaction
priors. However, they are still vulnerable to a limited semantic space by
failing to leverage implied invariant geometries and potential interaction
intentions. Normally, humans address complex tasks through multi-step reasoning
and respond to diverse situations by leveraging associative and analogical
thinking. In light of this, we propose GREAT (GeometRy-intEntion collAboraTive
inference) for Open-Vocabulary 3D Object Affordance Grounding, a novel
framework that mines the objects' invariant geometry attributes and performs
analogical reasoning in potential interaction scenarios to form affordance
knowledge, fully combining this knowledge with both geometries and visual
contents to ground 3D object affordance. Besides, we introduce the Point Image
Affordance Dataset v2 (PIADv2), the largest 3D object affordance dataset at
present to support the task. Extensive experiments demonstrate the
effectiveness and superiority of GREAT. The code and dataset are available at
https://yawen-shao.github.io/GREAT/.
|
2411.19655 | Alessandro Scir\`e | Alessandro Scir\`e, Andrei Stefan Bejgu, Simone Tedeschi, Karim
Ghonim, Federico Martelli, Roberto Navigli | Truth or Mirage? Towards End-to-End Factuality Evaluation with LLM-Oasis | 15 pages. To be submitted to CL journal | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | After the introduction of Large Language Models (LLMs), there have been
substantial improvements in the performance of Natural Language Generation
(NLG) tasks, including Text Summarization and Machine Translation. However,
LLMs still produce outputs containing hallucinations, that is, content not
grounded in factual information. Therefore, developing methods to assess the
factuality of LLMs has become urgent.
Indeed, resources for factuality evaluation have recently emerged. Although
challenging, these resources face one or more of the following limitations: (i)
they are tailored to a specific task or domain; (ii) they are limited in size,
thereby preventing the training of new factuality evaluators; (iii) they are
designed for simpler verification tasks, such as claim verification.
To address these issues, we introduce LLM-Oasis, to the best of our knowledge
the largest resource for training end-to-end factuality evaluators. LLM-Oasis
is constructed by extracting claims from Wikipedia, falsifying a subset of
these claims, and generating pairs of factual and unfactual texts. We then rely
on human annotators to both validate the quality of our dataset and to create a
gold standard test set for benchmarking factuality evaluation systems.
Our experiments demonstrate that LLM-Oasis presents a significant challenge
for state-of-the-art LLMs, with GPT-4o achieving up to 60% accuracy in our
proposed end-to-end factuality evaluation task, highlighting its potential to
drive future research in the field.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 12:21:15 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 14:28:07 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 13:55:07 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Scirè",
"Alessandro",
""
],
[
"Bejgu",
"Andrei Stefan",
""
],
[
"Tedeschi",
"Simone",
""
],
[
"Ghonim",
"Karim",
""
],
[
"Martelli",
"Federico",
""
],
[
"Navigli",
"Roberto",
""
]
] | TITLE: Truth or Mirage? Towards End-to-End Factuality Evaluation with LLM-Oasis
ABSTRACT: After the introduction of Large Language Models (LLMs), there have been
substantial improvements in the performance of Natural Language Generation
(NLG) tasks, including Text Summarization and Machine Translation. However,
LLMs still produce outputs containing hallucinations, that is, content not
grounded in factual information. Therefore, developing methods to assess the
factuality of LLMs has become urgent.
Indeed, resources for factuality evaluation have recently emerged. Although
challenging, these resources face one or more of the following limitations: (i)
they are tailored to a specific task or domain; (ii) they are limited in size,
thereby preventing the training of new factuality evaluators; (iii) they are
designed for simpler verification tasks, such as claim verification.
To address these issues, we introduce LLM-Oasis, to the best of our knowledge
the largest resource for training end-to-end factuality evaluators. LLM-Oasis
is constructed by extracting claims from Wikipedia, falsifying a subset of
these claims, and generating pairs of factual and unfactual texts. We then rely
on human annotators to both validate the quality of our dataset and to create a
gold standard test set for benchmarking factuality evaluation systems.
Our experiments demonstrate that LLM-Oasis presents a significant challenge
for state-of-the-art LLMs, with GPT-4o achieving up to 60% accuracy in our
proposed end-to-end factuality evaluation task, highlighting its potential to
drive future research in the field.
|
2412.00624 | Yogesh Kulkarni | Yogesh Kulkarni, Pooyan Fazli | VideoSAVi: Self-Aligned Video Language Models without Human Supervision | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in video-large language models (Video-LLMs) have led to
significant progress in video understanding. Current preference optimization
methods often rely on proprietary APIs or ground-truth captions to generate
preference data (i.e., pairs of model outputs ranked based on their quality or
alignment with human judgment), which is then used to train models for
video-language alignment. This approach is both costly and labor-intensive. To
address this limitation, we introduce VideoSAVi (Self-Aligned Video Language
Model), a self-training pipeline that enables Video-LLMs to reason over video
content without external supervision. Our approach includes a self-critiquing
mechanism that identifies reasoning errors in the model's initial responses and
generates improved alternatives, creating preference pairs directly from video
content. VideoSAVi then applies Direct Preference Optimization (DPO), which
uses the preference data to iteratively train the model, enhancing temporal and
spatial reasoning in video understanding. Experiments show that VideoSAVi
achieves state-of-the-art performance on MVBench (74.0%) and delivers
significant improvements across other benchmarks, including a 3.9% gain on
PerceptionTest and a substantial 6.8% improvement on the challenging EgoSchema
dataset compared to baseline models. Our model-agnostic approach is
computationally efficient, requiring only 32 frames, offering a promising
direction for self-aligned video understanding without reliance on external
models or annotations.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 00:33:05 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 01:19:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kulkarni",
"Yogesh",
""
],
[
"Fazli",
"Pooyan",
""
]
] | TITLE: VideoSAVi: Self-Aligned Video Language Models without Human Supervision
ABSTRACT: Recent advances in video-large language models (Video-LLMs) have led to
significant progress in video understanding. Current preference optimization
methods often rely on proprietary APIs or ground-truth captions to generate
preference data (i.e., pairs of model outputs ranked based on their quality or
alignment with human judgment), which is then used to train models for
video-language alignment. This approach is both costly and labor-intensive. To
address this limitation, we introduce VideoSAVi (Self-Aligned Video Language
Model), a self-training pipeline that enables Video-LLMs to reason over video
content without external supervision. Our approach includes a self-critiquing
mechanism that identifies reasoning errors in the model's initial responses and
generates improved alternatives, creating preference pairs directly from video
content. VideoSAVi then applies Direct Preference Optimization (DPO), which
uses the preference data to iteratively train the model, enhancing temporal and
spatial reasoning in video understanding. Experiments show that VideoSAVi
achieves state-of-the-art performance on MVBench (74.0%) and delivers
significant improvements across other benchmarks, including a 3.9% gain on
PerceptionTest and a substantial 6.8% improvement on the challenging EgoSchema
dataset compared to baseline models. Our model-agnostic approach is
computationally efficient, requiring only 32 frames, offering a promising
direction for self-aligned video understanding without reliance on external
models or annotations.
|
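A brief note on the VideoSAVi record above: the pipeline's training step is Direct Preference Optimization over self-generated preference pairs. For reference, the standard DPO loss on per-response log-probabilities looks like the following sketch (generic formulation, not the paper's code; tensor shapes and the beta value are placeholders).

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective on summed per-response log-probabilities
    (shape (batch,)): push the policy to prefer the chosen response
    relative to a frozen reference model."""
    chosen = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected = beta * (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen - rejected).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
b = 4
print(float(dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))))
```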
2412.00947 | Ryo Kamoi | Ryo Kamoi, Yusen Zhang, Sarkar Snigdha Sarathi Das, Ranran Haoran
Zhang, Rui Zhang | VisOnlyQA: Large Vision Language Models Still Struggle with Visual
Perception of Geometric Information | VisOnlyQA dataset, code, and model responses are provided at
https://github.com/psunlpgroup/VisOnlyQA. Please also refer to our project
website at https://visonlyqa.github.io/ | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Large Vision Language Models (LVLMs) have achieved remarkable performance in
various vision-language tasks. However, it is still unclear how accurately
LVLMs can perceive visual information in images. In particular, the capability
of LVLMs to perceive geometric information, such as shape, angle, and size,
remains insufficiently analyzed, although the perception of these properties is
crucial for tasks that require a detailed visual understanding. In this work,
we introduce VisOnlyQA, a dataset for evaluating the geometric perception of
LVLMs, and reveal that LVLMs often cannot accurately perceive basic geometric
information in images, while human performance is nearly perfect. VisOnlyQA
consists of 12 tasks that directly ask about geometric information in geometric
shapes, charts, chemical structures, and 3D shapes. Our experiments highlight
the following findings: (i) State-of-the-art LVLMs struggle with basic
geometric perception -- 20 LVLMs we evaluate, including GPT-4o and Gemini 1.5
Pro, work poorly on VisOnlyQA. (ii) Additional training data does not resolve
this issue -- fine-tuning on the training set of VisOnlyQA is not always
effective, even for in-distribution tasks. (iii) Bottleneck in the architecture
-- LVLMs using stronger LLMs exhibit better geometric perception on VisOnlyQA,
even though VisOnlyQA does not require complex reasoning, suggesting that the way LVLMs
process information from visual encoders is a bottleneck. The datasets, code,
and model responses are provided at https://github.com/psunlpgroup/VisOnlyQA.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 19:46:22 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 15:30:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kamoi",
"Ryo",
""
],
[
"Zhang",
"Yusen",
""
],
[
"Das",
"Sarkar Snigdha Sarathi",
""
],
[
"Zhang",
"Ranran Haoran",
""
],
[
"Zhang",
"Rui",
""
]
] | TITLE: VisOnlyQA: Large Vision Language Models Still Struggle with Visual
Perception of Geometric Information
ABSTRACT: Large Vision Language Models (LVLMs) have achieved remarkable performance in
various vision-language tasks. However, it is still unclear how accurately
LVLMs can perceive visual information in images. In particular, the capability
of LVLMs to perceive geometric information, such as shape, angle, and size,
remains insufficiently analyzed, although the perception of these properties is
crucial for tasks that require a detailed visual understanding. In this work,
we introduce VisOnlyQA, a dataset for evaluating the geometric perception of
LVLMs, and reveal that LVLMs often cannot accurately perceive basic geometric
information in images, while human performance is nearly perfect. VisOnlyQA
consists of 12 tasks that directly ask about geometric information in geometric
shapes, charts, chemical structures, and 3D shapes. Our experiments highlight
the following findings: (i) State-of-the-art LVLMs struggle with basic
geometric perception -- 20 LVLMs we evaluate, including GPT-4o and Gemini 1.5
Pro, work poorly on VisOnlyQA. (ii) Additional training data does not resolve
this issue -- fine-tuning on the training set of VisOnlyQA is not always
effective, even for in-distribution tasks. (iii) Bottleneck in the architecture
-- LVLMs using stronger LLMs exhibit better geometric perception on VisOnlyQA,
even though VisOnlyQA does not require complex reasoning, suggesting that the way LVLMs
process information from visual encoders is a bottleneck. The datasets, code,
and model responses are provided at https://github.com/psunlpgroup/VisOnlyQA.
|
2412.01316 | Xin Yan | Xin Yan, Yuxuan Cai, Qiuyue Wang, Yuan Zhou, Wenhao Huang, Huan Yang | Long Video Diffusion Generation with Segmented Cross-Attention and
Content-Rich Video Data Curation | This paper is accepted by CVPR 2025 | null | null | null | cs.CV cs.AI cs.MM | http://creativecommons.org/licenses/by/4.0/ | We introduce Presto, a novel video diffusion model designed to generate
15-second videos with long-range coherence and rich content. Extending video
generation methods to maintain scenario diversity over long durations presents
significant challenges. To address this, we propose a Segmented Cross-Attention
(SCA) strategy, which splits hidden states into segments along the temporal
dimension, allowing each segment to cross-attend to a corresponding
sub-caption. SCA requires no additional parameters, enabling seamless
incorporation into current DiT-based architectures. To facilitate high-quality
long video generation, we build the LongTake-HD dataset, consisting of 261k
content-rich videos with scenario coherence, annotated with an overall video
caption and five progressive sub-captions. Experiments show that our Presto
achieves 78.5% on the VBench Semantic Score and 100% on the Dynamic Degree,
outperforming existing state-of-the-art video generation methods. This
demonstrates that our proposed Presto significantly enhances content richness,
maintains long-range coherence, and captures intricate textual details. More
details are displayed on our project page: https://presto-video.github.io/.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 09:32:36 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 08:56:56 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yan",
"Xin",
""
],
[
"Cai",
"Yuxuan",
""
],
[
"Wang",
"Qiuyue",
""
],
[
"Zhou",
"Yuan",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Yang",
"Huan",
""
]
] | TITLE: Long Video Diffusion Generation with Segmented Cross-Attention and
Content-Rich Video Data Curation
ABSTRACT: We introduce Presto, a novel video diffusion model designed to generate
15-second videos with long-range coherence and rich content. Extending video
generation methods to maintain scenario diversity over long durations presents
significant challenges. To address this, we propose a Segmented Cross-Attention
(SCA) strategy, which splits hidden states into segments along the temporal
dimension, allowing each segment to cross-attend to a corresponding
sub-caption. SCA requires no additional parameters, enabling seamless
incorporation into current DiT-based architectures. To facilitate high-quality
long video generation, we build the LongTake-HD dataset, consisting of 261k
content-rich videos with scenario coherence, annotated with an overall video
caption and five progressive sub-captions. Experiments show that our Presto
achieves 78.5% on the VBench Semantic Score and 100% on the Dynamic Degree,
outperforming existing state-of-the-art video generation methods. This
demonstrates that our proposed Presto significantly enhances content richness,
maintains long-range coherence, and captures intricate textual details. More
details are displayed on our project page: https://presto-video.github.io/.
|
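To make the Segmented Cross-Attention idea from the Presto record above concrete: hidden states are split into temporal segments and each segment cross-attends only to its own sub-caption. The snippet below is a minimal sketch assuming torch.nn.MultiheadAttention and five sub-captions; the segmentation scheme and shapes are assumptions, not the released model.

```python
import torch
import torch.nn as nn

class SegmentedCrossAttention(nn.Module):
    """Split video tokens into `num_segments` temporal chunks; chunk k attends
    only to the embedding of sub-caption k. A single attention module is reused,
    so no extra parameters are added versus ordinary cross-attention."""
    def __init__(self, dim, num_heads, num_segments):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.num_segments = num_segments

    def forward(self, video_tokens, subcaption_tokens):
        # video_tokens: (B, T, D); subcaption_tokens: list of num_segments tensors (B, L_k, D)
        chunks = torch.chunk(video_tokens, self.num_segments, dim=1)
        outs = []
        for chunk, caption in zip(chunks, subcaption_tokens):
            out, _ = self.attn(query=chunk, key=caption, value=caption)
            outs.append(out)
        return torch.cat(outs, dim=1)

sca = SegmentedCrossAttention(dim=64, num_heads=4, num_segments=5)
video = torch.randn(2, 100, 64)                        # 100 latent frames
captions = [torch.randn(2, 12, 64) for _ in range(5)]  # 5 progressive sub-captions
print(sca(video, captions).shape)                      # torch.Size([2, 100, 64])
```

Reusing one attention module across segments is what keeps the mechanism parameter-free relative to plain cross-attention.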
2412.02506 | Fabian Schmidt | Fabian Schmidt, Julian Daubermann, Marcel Mitschke, Constantin
Blessing, Stefan Meyer, Markus Enzweiler, Abhinav Valada | ROVER: A Multi-Season Dataset for Visual SLAM | 19 pages, 9 figures, 12 tables | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust SLAM is a crucial enabler for autonomous navigation in natural,
semi-structured environments such as parks and gardens. However, these
environments present unique challenges for SLAM due to frequent seasonal
changes, varying light conditions, and dense vegetation. These factors often
degrade the performance of visual SLAM algorithms originally developed for
structured urban environments. To address this gap, we present ROVER, a
comprehensive benchmark dataset tailored for evaluating visual SLAM algorithms
under diverse environmental conditions and spatial configurations. We captured
the dataset with a robotic platform equipped with monocular, stereo, and RGBD
cameras, as well as inertial sensors. It covers 39 recordings across five
outdoor locations, collected through all seasons and various lighting
scenarios, i.e., day, dusk, and night with and without external lighting. With
this novel dataset, we evaluate several traditional and deep learning-based
SLAM methods and study their performance in diverse challenging conditions. The
results demonstrate that while stereo-inertial and RGBD configurations
generally perform better under favorable lighting and moderate vegetation, most
SLAM systems perform poorly in low-light and high-vegetation scenarios,
particularly during summer and autumn. Our analysis highlights the need for
improved adaptability in visual SLAM algorithms for outdoor applications, as
current systems struggle with dynamic environmental factors affecting scale,
feature extraction, and trajectory consistency. This dataset provides a solid
foundation for advancing visual SLAM research in real-world, semi-structured
environments, fostering the development of more resilient SLAM systems for
long-term outdoor localization and mapping. The dataset and the code of the
benchmark are available under https://iis-esslingen.github.io/rover.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 15:34:00 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 17:53:06 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Schmidt",
"Fabian",
""
],
[
"Daubermann",
"Julian",
""
],
[
"Mitschke",
"Marcel",
""
],
[
"Blessing",
"Constantin",
""
],
[
"Meyer",
"Stefan",
""
],
[
"Enzweiler",
"Markus",
""
],
[
"Valada",
"Abhinav",
""
]
] | TITLE: ROVER: A Multi-Season Dataset for Visual SLAM
ABSTRACT: Robust SLAM is a crucial enabler for autonomous navigation in natural,
semi-structured environments such as parks and gardens. However, these
environments present unique challenges for SLAM due to frequent seasonal
changes, varying light conditions, and dense vegetation. These factors often
degrade the performance of visual SLAM algorithms originally developed for
structured urban environments. To address this gap, we present ROVER, a
comprehensive benchmark dataset tailored for evaluating visual SLAM algorithms
under diverse environmental conditions and spatial configurations. We captured
the dataset with a robotic platform equipped with monocular, stereo, and RGBD
cameras, as well as inertial sensors. It covers 39 recordings across five
outdoor locations, collected through all seasons and various lighting
scenarios, i.e., day, dusk, and night with and without external lighting. With
this novel dataset, we evaluate several traditional and deep learning-based
SLAM methods and study their performance in diverse challenging conditions. The
results demonstrate that while stereo-inertial and RGBD configurations
generally perform better under favorable lighting and moderate vegetation, most
SLAM systems perform poorly in low-light and high-vegetation scenarios,
particularly during summer and autumn. Our analysis highlights the need for
improved adaptability in visual SLAM algorithms for outdoor applications, as
current systems struggle with dynamic environmental factors affecting scale,
feature extraction, and trajectory consistency. This dataset provides a solid
foundation for advancing visual SLAM research in real-world, semi-structured
environments, fostering the development of more resilient SLAM systems for
long-term outdoor localization and mapping. The dataset and the code of the
benchmark are available under https://iis-esslingen.github.io/rover.
|
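For readers who want to reproduce trajectory evaluation on a benchmark like ROVER above, a common metric is the absolute trajectory error (ATE) after Umeyama similarity alignment. The snippet below is a generic evaluation aid under that assumption; it is not part of the ROVER benchmark code.

```python
import numpy as np

def umeyama_align(est, gt):
    """Similarity transform (s, R, t) minimizing ||gt - (s R est + t)||.
    est, gt: (N, 3) arrays of corresponding positions."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    ec, gc = est - mu_e, gt - mu_g
    cov = gc.T @ ec / est.shape[0]
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1
    R = U @ S @ Vt
    var_e = (ec ** 2).sum() / est.shape[0]
    s = np.trace(np.diag(D) @ S) / var_e
    t = mu_g - s * R @ mu_e
    return s, R, t

def ate_rmse(est, gt):
    s, R, t = umeyama_align(est, gt)
    aligned = (s * (R @ est.T)).T + t
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))

# Toy check: a rotated, scaled, translated copy should align to ~0 error.
rng = np.random.default_rng(0)
gt = rng.normal(size=(200, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
est = (0.5 * (Rz @ gt.T)).T + np.array([1.0, -2.0, 0.3])
print(round(ate_rmse(est, gt), 6))   # ~0.0
```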
2412.04471 | Yunze Man | Vinayak Gupta, Yunze Man, Yu-Xiong Wang | PaintScene4D: Consistent 4D Scene Generation from Text Prompts | Preprint. Project page: https://paintscene4d.github.io/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in diffusion models have revolutionized 2D and 3D content
creation, yet generating photorealistic dynamic 4D scenes remains a significant
challenge. Existing dynamic 4D generation methods typically rely on distilling
knowledge from pre-trained 3D generative models, often fine-tuned on synthetic
object datasets. Consequently, the resulting scenes tend to be object-centric
and lack photorealism. While text-to-video models can generate more realistic
scenes with motion, they often struggle with spatial understanding and provide
limited control over camera viewpoints during rendering. To address these
limitations, we present PaintScene4D, a novel text-to-4D scene generation
framework that departs from conventional multi-view generative models in favor
of a streamlined architecture that harnesses video generative models trained on
diverse real-world datasets. Our method first generates a reference video using
a video generation model, and then employs a strategic camera array selection
for rendering. We apply a progressive warping and inpainting technique to
ensure both spatial and temporal consistency across multiple viewpoints.
Finally, we optimize multi-view images using a dynamic renderer, enabling
flexible camera control based on user preferences. Adopting a training-free
architecture, our PaintScene4D efficiently produces realistic 4D scenes that
can be viewed from arbitrary trajectories. The code will be made publicly
available. Our project page is at https://paintscene4d.github.io/
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 18:59:57 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 00:26:04 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gupta",
"Vinayak",
""
],
[
"Man",
"Yunze",
""
],
[
"Wang",
"Yu-Xiong",
""
]
] | TITLE: PaintScene4D: Consistent 4D Scene Generation from Text Prompts
ABSTRACT: Recent advances in diffusion models have revolutionized 2D and 3D content
creation, yet generating photorealistic dynamic 4D scenes remains a significant
challenge. Existing dynamic 4D generation methods typically rely on distilling
knowledge from pre-trained 3D generative models, often fine-tuned on synthetic
object datasets. Consequently, the resulting scenes tend to be object-centric
and lack photorealism. While text-to-video models can generate more realistic
scenes with motion, they often struggle with spatial understanding and provide
limited control over camera viewpoints during rendering. To address these
limitations, we present PaintScene4D, a novel text-to-4D scene generation
framework that departs from conventional multi-view generative models in favor
of a streamlined architecture that harnesses video generative models trained on
diverse real-world datasets. Our method first generates a reference video using
a video generation model, and then employs a strategic camera array selection
for rendering. We apply a progressive warping and inpainting technique to
ensure both spatial and temporal consistency across multiple viewpoints.
Finally, we optimize multi-view images using a dynamic renderer, enabling
flexible camera control based on user preferences. Adopting a training-free
architecture, our PaintScene4D efficiently produces realistic 4D scenes that
can be viewed from arbitrary trajectories. The code will be made publicly
available. Our project page is at https://paintscene4d.github.io/
|
2412.05580 | Hao-Chun Yang | Hao-Chun Yang, Sicheng Dai, Saige Rutherford, Christian Gaser, Andre F
Marquand, Christian F Beckmann, Thomas Wolfers | Self-Supervised Masked Mesh Learning for Unsupervised Anomaly Detection
on 3D Cortical Surfaces | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Unsupervised anomaly detection in brain imaging is challenging. In this
paper, we propose self-supervised masked mesh learning for unsupervised anomaly
detection on 3D cortical surfaces. Our framework leverages the intrinsic
geometry of the cortical surface to learn a self-supervised representation that
captures the underlying structure of the brain. We introduce a masked mesh
convolutional neural network (MMN) that learns to predict masked regions of the
cortical surface. By training the MMN on a large dataset of healthy subjects,
we learn a representation that captures the normal variation in the cortical
surface. We then use this representation to detect anomalies in unseen
individuals by calculating anomaly scores based on the reconstruction error of
the MMN. We evaluated our framework by training on the population-scale UKB and
HCP-Aging datasets and testing on two datasets of Alzheimer's disease patients,
ADNI and OASIS3. Our results show that our framework can detect anomalies in
cortical thickness, cortical volume, and cortical sulcus characteristics, which
are known to be biomarkers of Alzheimer's disease. Our proposed framework
provides a promising approach for unsupervised anomaly detection based on
normative variation of cortical features.
| [
{
"version": "v1",
"created": "Sat, 7 Dec 2024 08:08:24 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jan 2025 17:06:36 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 16:19:40 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yang",
"Hao-Chun",
""
],
[
"Dai",
"Sicheng",
""
],
[
"Rutherford",
"Saige",
""
],
[
"Gaser",
"Christian",
""
],
[
"Marquand",
"Andre F",
""
],
[
"Beckmann",
"Christian F",
""
],
[
"Wolfers",
"Thomas",
""
]
] | TITLE: Self-Supervised Masked Mesh Learning for Unsupervised Anomaly Detection
on 3D Cortical Surfaces
ABSTRACT: Unsupervised anomaly detection in brain imaging is challenging. In this
paper, we propose self-supervised masked mesh learning for unsupervised anomaly
detection on 3D cortical surfaces. Our framework leverages the intrinsic
geometry of the cortical surface to learn a self-supervised representation that
captures the underlying structure of the brain. We introduce a masked mesh
convolutional neural network (MMN) that learns to predict masked regions of the
cortical surface. By training the MMN on a large dataset of healthy subjects,
we learn a representation that captures the normal variation in the cortical
surface. We then use this representation to detect anomalies in unseen
individuals by calculating anomaly scores based on the reconstruction error of
the MMN. We evaluated our framework by training on the population-scale UKB and
HCP-Aging datasets and testing on two datasets of Alzheimer's disease patients,
ADNI and OASIS3. Our results show that our framework can detect anomalies in
cortical thickness, cortical volume, and cortical sulcus characteristics, which
are known to be biomarkers of Alzheimer's disease. Our proposed framework
provides a promising approach for unsupervised anomaly detection based on
normative variation of cortical features.
|
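The masked-mesh record above scores anomalies by the reconstruction error of a network trained only on healthy subjects. The sketch below shows just that generic scoring step with a stand-in per-vertex autoencoder and random features; the real MMN uses mesh convolutions on cortical surfaces, which are omitted here.

```python
import torch
import torch.nn as nn

class TinyVertexAutoencoder(nn.Module):
    """Stand-in for a masked mesh network: maps per-vertex features through a
    small bottleneck back to the input dimension (no mesh convolutions)."""
    def __init__(self, in_dim=3, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        return self.net(x)

def anomaly_score(model, vertex_feats, mask_ratio=0.3):
    """Mask a random subset of vertices, reconstruct, and score the subject by
    the mean squared reconstruction error on the masked vertices."""
    n = vertex_feats.shape[0]
    masked = torch.rand(n) < mask_ratio
    corrupted = vertex_feats.clone()
    corrupted[masked] = 0.0                      # simple zero-masking
    with torch.no_grad():
        recon = model(corrupted)
    return float(((recon[masked] - vertex_feats[masked]) ** 2).mean())

model = TinyVertexAutoencoder()
feats = torch.randn(1000, 3)   # e.g. thickness, curvature, area per vertex
print(anomaly_score(model, feats))
```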
2412.06227 | Marsha Mariya Kappan | Marsha Mariya Kappan, Eduardo Benitez Sandoval, Erik Meijering and
Francisco Cruz | Attention-Enhanced Lightweight Hourglass Network for Human Pose
Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pose estimation is a critical task in computer vision with a wide range of
applications from activity monitoring to human-robot interaction. However, most
of the existing methods are computationally expensive or have complex
architectures. Here we propose a lightweight attention-based pose estimation
network that utilizes depthwise separable convolutions and the Convolutional
Block Attention Module (CBAM) on an hourglass backbone. The network
significantly reduces the computational complexity (floating-point operations)
and the model size (number of parameters), containing only about 10% of the
parameters of the original eight-stack Hourglass network. Experiments were
conducted on the COCO and MPII datasets using a two-stack hourglass backbone.
The results showed that our
model performs well in comparison to six other lightweight pose estimation
models with an average precision of 72.07. The model achieves this performance
with only 2.3M parameters and 3.7G FLOPs.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 06:02:07 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 00:49:46 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kappan",
"Marsha Mariya",
""
],
[
"Sandoval",
"Eduardo Benitez",
""
],
[
"Meijering",
"Erik",
""
],
[
"Cruz",
"Francisco",
""
]
] | TITLE: Attention-Enhanced Lightweight Hourglass Network for Human Pose
Estimation
ABSTRACT: Pose estimation is a critical task in computer vision with a wide range of
applications from activity monitoring to human-robot interaction. However, most
of the existing methods are computationally expensive or have complex
architectures. Here we propose a lightweight attention-based pose estimation
network that utilizes depthwise separable convolutions and the Convolutional
Block Attention Module (CBAM) on an hourglass backbone. The network
significantly reduces the computational complexity (floating-point operations)
and the model size (number of parameters), containing only about 10% of the
parameters of the original eight-stack Hourglass network. Experiments were
conducted on the COCO and MPII datasets using a two-stack hourglass backbone.
The results showed that our
model performs well in comparison to six other lightweight pose estimation
models with an average precision of 72.07. The model achieves this performance
with only 2.3M parameters and 3.7G FLOPs.
|
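The pose-estimation record above combines depthwise separable convolutions with the Convolutional Block Attention Module on an hourglass backbone. The following is a small, self-contained sketch of those two building blocks in PyTorch (standard formulations, not the authors' exact layer configuration).

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction),
                                 nn.ReLU(), nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)             # channel attention
        sp = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sp))                    # spatial attention

block = nn.Sequential(DepthwiseSeparableConv(64, 64), CBAM(64))
print(block(torch.randn(1, 64, 64, 64)).shape)   # torch.Size([1, 64, 64, 64])
```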
2412.08049 | Ao Li | Ao Li, Longwei Xu, Chen Ling, Jinghui Zhang, Pengwei Wang | EmoVerse: Exploring Multimodal Large Language Models for Sentiment and
Emotion Understanding | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentiment and emotion understanding are essential to applications such as
human-computer interaction and depression detection. While Multimodal Large
Language Models (MLLMs) demonstrate robust general capabilities, they face
considerable challenges in the field of affective computing, particularly in
detecting subtle facial expressions and handling complex emotion-related tasks,
such as emotion reason inference and understanding emotions in long-context
scenarios. Furthermore, there is a lack of a unified MLLM that can effectively
handle both sentiment and emotion-related tasks. To address these challenges,
we explore multi-task training strategies for MLLMs in affective computing and
introduce Emotion Universe (EmoVerse), an MLLM designed to handle a broad
spectrum of sentiment and emotion-related tasks. In addition, EmoVerse is
capable of deeply analyzing the underlying causes of emotional states. We also
introduce the Affective Multitask (AMT) Dataset, which supports multimodal
sentiment analysis, multimodal emotion recognition, facial expression
recognition, emotion reason inference, and emotion cause-pair extraction tasks.
Extensive experiments demonstrate that EmoVerse outperforms existing methods,
achieving state-of-the-art results in sentiment and emotion-related tasks. The
code is available at https://github.com/liaolea/EmoVerse.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 02:55:00 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Dec 2024 10:31:03 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 07:15:17 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Ao",
""
],
[
"Xu",
"Longwei",
""
],
[
"Ling",
"Chen",
""
],
[
"Zhang",
"Jinghui",
""
],
[
"Wang",
"Pengwei",
""
]
] | TITLE: EmoVerse: Exploring Multimodal Large Language Models for Sentiment and
Emotion Understanding
ABSTRACT: Sentiment and emotion understanding are essential to applications such as
human-computer interaction and depression detection. While Multimodal Large
Language Models (MLLMs) demonstrate robust general capabilities, they face
considerable challenges in the field of affective computing, particularly in
detecting subtle facial expressions and handling complex emotion-related tasks,
such as emotion reason inference and understanding emotions in long-context
scenarios. Furthermore, there is a lack of a unified MLLM that can effectively
handle both sentiment and emotion-related tasks. To address these challenges,
we explore multi-task training strategies for MLLMs in affective computing and
introduce Emotion Universe (EmoVerse), an MLLM designed to handle a broad
spectrum of sentiment and emotion-related tasks. In addition, EmoVerse is
capable of deeply analyzing the underlying causes of emotional states. We also
introduce the Affective Multitask (AMT) Dataset, which supports multimodal
sentiment analysis, multimodal emotion recognition, facial expression
recognition, emotion reason inference, and emotion cause-pair extraction tasks.
Extensive experiments demonstrate that EmoVerse outperforms existing methods,
achieving state-of-the-art results in sentiment and emotion-related tasks. The
code is available at https://github.com/liaolea/EmoVerse.
|
2412.08646 | Jihao Liu | Jihao Liu, Zhiding Yu, Shiyi Lan, Shihao Wang, Rongyao Fang, Jan
Kautz, Hongsheng Li, Jose M. Alvare | StreamChat: Chatting with Streaming Video | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper presents StreamChat, a novel approach that enhances the
interaction capabilities of Large Multimodal Models (LMMs) with streaming video
content. In streaming interaction scenarios, existing methods rely solely on
visual information available at the moment a question is posed, resulting in
significant delays as the model remains unaware of subsequent changes in the
streaming video. StreamChat addresses this limitation by innovatively updating
the visual context at each decoding step, ensuring that the model utilizes
up-to-date video content throughout the decoding process. Additionally, we
introduce a flexible and efficient cross-attention-based architecture to process
dynamic streaming inputs while maintaining inference efficiency for streaming
interactions. Furthermore, we construct a new dense instruction dataset to
facilitate the training of streaming interaction models, complemented by a
parallel 3D-RoPE mechanism that encodes the relative temporal information of
visual and text tokens. Experimental results demonstrate that StreamChat
achieves competitive performance on established image and video benchmarks and
exhibits superior capabilities in streaming interaction scenarios compared to
state-of-the-art video LMMs.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 18:59:54 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 05:25:58 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Jihao",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Lan",
"Shiyi",
""
],
[
"Wang",
"Shihao",
""
],
[
"Fang",
"Rongyao",
""
],
[
"Kautz",
"Jan",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Alvare",
"Jose M.",
""
]
] | TITLE: StreamChat: Chatting with Streaming Video
ABSTRACT: This paper presents StreamChat, a novel approach that enhances the
interaction capabilities of Large Multimodal Models (LMMs) with streaming video
content. In streaming interaction scenarios, existing methods rely solely on
visual information available at the moment a question is posed, resulting in
significant delays as the model remains unaware of subsequent changes in the
streaming video. StreamChat addresses this limitation by innovatively updating
the visual context at each decoding step, ensuring that the model utilizes
up-to-date video content throughout the decoding process. Additionally, we
introduce a flexible and efficient cross-attention-based architecture to process
dynamic streaming inputs while maintaining inference efficiency for streaming
interactions. Furthermore, we construct a new dense instruction dataset to
facilitate the training of streaming interaction models, complemented by a
parallel 3D-RoPE mechanism that encodes the relative temporal information of
visual and text tokens. Experimental results demonstrate that StreamChat
achieves competitive performance on established image and video benchmarks and
exhibits superior capabilities in streaming interaction scenarios compared to
state-of-the-art video LMMs.
|
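The StreamChat record above centers on refreshing the visual context at every decoding step so the model attends to the newest frames. The toy loop below illustrates only that idea with a single cross-attention layer; it is a hypothetical sketch, not the StreamChat architecture or its parallel 3D-RoPE mechanism.

```python
import torch
import torch.nn as nn

class StreamingCrossAttnDecoderStep(nn.Module):
    """One toy decoding step that cross-attends the text state to the *latest*
    visual features, mimicking the idea of updating visual context while
    decoding a streaming video."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, text_state, visual_feats):
        ctx, _ = self.cross(text_state, visual_feats, visual_feats)
        return self.out(text_state + ctx)

dim = 32
step = StreamingCrossAttnDecoderStep(dim)
text_state = torch.randn(1, 1, dim)                   # current decoder query
for t in range(3):                                    # three decoding steps
    visual_feats = torch.randn(1, 8 * (t + 1), dim)   # frames seen so far keep growing
    text_state = step(text_state, visual_feats)
print(text_state.shape)                               # torch.Size([1, 1, 32])
```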
2412.10031 | Jizhihui Liu | Jizhihui Liu, Qixun Teng, Qing Ma, Junjun Jiang | FM2S: Towards Spatially-Correlated Noise Modeling in Zero-Shot
Fluorescence Microscopy Image Denoising | 14 pages, 10 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Fluorescence microscopy image (FMI) denoising faces critical challenges due
to the compound mixed Poisson-Gaussian noise with strong spatial correlation
and the impracticality of acquiring paired noisy/clean data in dynamic
biomedical scenarios. While supervised methods trained on synthetic noise
(e.g., Gaussian/Poisson) suffer from out-of-distribution generalization issues,
existing self-supervised approaches degrade under real FMI noise due to
oversimplified noise assumptions and computationally intensive deep
architectures. In this paper, we propose Fluorescence Micrograph to Self
(FM2S), a zero-shot denoiser that achieves efficient FMI denoising through
three key innovations: 1) A noise injection module that ensures training data
sufficiency through adaptive Poisson-Gaussian synthesis while preserving
spatial correlation and global statistics of FMI noise for robust model
generalization; 2) A two-stage progressive learning strategy that first
recovers structural priors via pre-denoised targets then refines high-frequency
details through noise distribution alignment; 3) An ultra-lightweight network
(3.5k parameters) enabling rapid convergence with 270$\times$ faster training
and inference than SOTAs. Extensive experiments across FMI datasets demonstrate
FM2S's superiority: It outperforms CVF-SID by 1.4dB PSNR on average while
requiring only 0.1% of the parameters of AP-BSN. Notably, FM2S maintains stable performance
across varying noise levels, proving its practicality for microscopy platforms
with diverse sensor characteristics. Code and datasets will be released.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 10:45:25 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 10:44:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Jizhihui",
""
],
[
"Teng",
"Qixun",
""
],
[
"Ma",
"Qing",
""
],
[
"Jiang",
"Junjun",
""
]
] | TITLE: FM2S: Towards Spatially-Correlated Noise Modeling in Zero-Shot
Fluorescence Microscopy Image Denoising
ABSTRACT: Fluorescence microscopy image (FMI) denoising faces critical challenges due
to the compound mixed Poisson-Gaussian noise with strong spatial correlation
and the impracticality of acquiring paired noisy/clean data in dynamic
biomedical scenarios. While supervised methods trained on synthetic noise
(e.g., Gaussian/Poisson) suffer from out-of-distribution generalization issues,
existing self-supervised approaches degrade under real FMI noise due to
oversimplified noise assumptions and computationally intensive deep
architectures. In this paper, we propose Fluorescence Micrograph to Self
(FM2S), a zero-shot denoiser that achieves efficient FMI denoising through
three key innovations: 1) A noise injection module that ensures training data
sufficiency through adaptive Poisson-Gaussian synthesis while preserving
spatial correlation and global statistics of FMI noise for robust model
generalization; 2) A two-stage progressive learning strategy that first
recovers structural priors via pre-denoised targets then refines high-frequency
details through noise distribution alignment; 3) An ultra-lightweight network
(3.5k parameters) enabling rapid convergence with 270$\times$ faster training
and inference than SOTAs. Extensive experiments across FMI datasets demonstrate
FM2S's superiority: It outperforms CVF-SID by 1.4dB PSNR on average while
requiring only 0.1% of the parameters of AP-BSN. Notably, FM2S maintains stable performance
across varying noise levels, proving its practicality for microscopy platforms
with diverse sensor characteristics. Code and datasets will be released.
|
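The FM2S record above builds its training signal by injecting mixed Poisson-Gaussian noise into the input micrograph. The snippet below is a minimal version of such signal-dependent noise synthesis; it deliberately omits the spatial-correlation modeling and adaptive parameter selection that the paper emphasizes.

```python
import numpy as np

def poisson_gaussian_noise(img, photon_level=30.0, read_noise_sigma=0.02, rng=None):
    """Inject signal-dependent Poisson shot noise plus Gaussian read noise into a
    clean image in [0, 1]; `photon_level` controls the Poisson strength."""
    rng = rng or np.random.default_rng()
    shot = rng.poisson(img * photon_level) / photon_level
    read = rng.normal(0.0, read_noise_sigma, size=img.shape)
    return np.clip(shot + read, 0.0, 1.0)

clean = np.random.default_rng(0).random((128, 128))   # stand-in clean micrograph
noisy = poisson_gaussian_noise(clean)
print(noisy.shape, float(np.abs(noisy - clean).mean()))
```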
2412.13440 | John Hastings | Suvineetha Herath, Haywood Gelman, John Hastings, Yong Wang | Safeguarding Virtual Healthcare: A Novel Attacker-Centric Model for Data
Security and Privacy | 6 pages, 3 figures, 3 tables | 2024 IEEE International Conference on Computer and Applications
(ICCA-24) | 10.1109/ICCA62237.2024.10927870 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of remote healthcare delivery has introduced significant
security and privacy risks to protected health information (PHI). Analysis of a
comprehensive healthcare security breach dataset covering 2009-2023 reveals
their significant prevalence and impact. This study investigates the root
causes of such security incidents and introduces the Attacker-Centric Approach
(ACA), a novel threat model tailored to protect PHI. ACA addresses limitations
in existing threat models and regulatory frameworks by adopting a holistic
attacker-focused perspective, examining threats from the viewpoint of cyber
adversaries, their motivations, tactics, and potential attack vectors.
Leveraging established risk management frameworks, ACA provides a multi-layered
approach to threat identification, risk assessment, and proactive mitigation
strategies. A comprehensive threat library classifies physical, third-party,
external, and internal threats. ACA's iterative nature and feedback mechanisms
enable continuous adaptation to emerging threats, ensuring sustained
effectiveness. ACA allows healthcare providers to proactively identify and
mitigate vulnerabilities, fostering trust and supporting the secure adoption of
virtual care technologies.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 02:21:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Herath",
"Suvineetha",
""
],
[
"Gelman",
"Haywood",
""
],
[
"Hastings",
"John",
""
],
[
"Wang",
"Yong",
""
]
] | TITLE: Safeguarding Virtual Healthcare: A Novel Attacker-Centric Model for Data
Security and Privacy
ABSTRACT: The rapid growth of remote healthcare delivery has introduced significant
security and privacy risks to protected health information (PHI). Analysis of a
comprehensive healthcare security breach dataset covering 2009-2023 reveals
their significant prevalence and impact. This study investigates the root
causes of such security incidents and introduces the Attacker-Centric Approach
(ACA), a novel threat model tailored to protect PHI. ACA addresses limitations
in existing threat models and regulatory frameworks by adopting a holistic
attacker-focused perspective, examining threats from the viewpoint of cyber
adversaries, their motivations, tactics, and potential attack vectors.
Leveraging established risk management frameworks, ACA provides a multi-layered
approach to threat identification, risk assessment, and proactive mitigation
strategies. A comprehensive threat library classifies physical, third-party,
external, and internal threats. ACA's iterative nature and feedback mechanisms
enable continuous adaptation to emerging threats, ensuring sustained
effectiveness. ACA allows healthcare providers to proactively identify and
mitigate vulnerabilities, fostering trust and supporting the secure adoption of
virtual care technologies.
|
2412.16897 | Shuai Lyu | Shuai Lyu, Rongchen Zhang, Zeqi Ma, Fangjian Liao, Dongmei Mo,
Waikeung Wong | MVREC: A General Few-shot Defect Classification Model Using Multi-View
Region-Context | Accepted by AAAI 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-shot defect multi-classification (FSDMC) is an emerging trend in quality
control within industrial manufacturing. However, current FSDMC research often
lacks generalizability due to its focus on specific datasets. Additionally,
defect classification heavily relies on contextual information within images,
and existing methods fall short of effectively extracting this information. To
address these challenges, we propose a general FSDMC framework called MVREC,
which offers two primary advantages: (1) MVREC extracts general features for
defect instances by incorporating the pre-trained AlphaCLIP model. (2) It
utilizes a region-context framework to enhance defect features by leveraging
mask region input and multi-view context augmentation. Furthermore, Few-shot
Zip-Adapter(-F) classifiers within the model are introduced to cache the visual
features of the support set and perform few-shot classification. We also
introduce MVTec-FS, a new FSDMC benchmark based on MVTec AD, which includes
1228 defect images with instance-level mask annotations and 46 defect types.
Extensive experiments conducted on MVTec-FS and four additional datasets
demonstrate its effectiveness in general defect classification and its ability
to incorporate contextual information to improve classification performance.
Code: https://github.com/ShuaiLYU/MVREC
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 07:14:45 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 09:19:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lyu",
"Shuai",
""
],
[
"Zhang",
"Rongchen",
""
],
[
"Ma",
"Zeqi",
""
],
[
"Liao",
"Fangjian",
""
],
[
"Mo",
"Dongmei",
""
],
[
"Wong",
"Waikeung",
""
]
] | TITLE: MVREC: A General Few-shot Defect Classification Model Using Multi-View
Region-Context
ABSTRACT: Few-shot defect multi-classification (FSDMC) is an emerging trend in quality
control within industrial manufacturing. However, current FSDMC research often
lacks generalizability due to its focus on specific datasets. Additionally,
defect classification heavily relies on contextual information within images,
and existing methods fall short of effectively extracting this information. To
address these challenges, we propose a general FSDMC framework called MVREC,
which offers two primary advantages: (1) MVREC extracts general features for
defect instances by incorporating the pre-trained AlphaCLIP model. (2) It
utilizes a region-context framework to enhance defect features by leveraging
mask region input and multi-view context augmentation. Furthermore, Few-shot
Zip-Adapter(-F) classifiers within the model are introduced to cache the visual
features of the support set and perform few-shot classification. We also
introduce MVTec-FS, a new FSDMC benchmark based on MVTec AD, which includes
1228 defect images with instance-level mask annotations and 46 defect types.
Extensive experiments conducted on MVTec-FS and four additional datasets
demonstrate its effectiveness in general defect classification and its ability
to incorporate contextual information to improve classification performance.
Code: https://github.com/ShuaiLYU/MVREC
|
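The MVREC record above mentions Zip-Adapter(-F) classifiers that cache support-set visual features for few-shot classification. Without the paper's exact formulation, a widely used cache-based classifier in the same spirit (in the style of Tip-Adapter) looks like the sketch below; the affinity function and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def cache_classifier_logits(query_feats, support_feats, support_labels,
                            num_classes, alpha=1.0, beta=5.0):
    """Cache-based few-shot classifier: keys are L2-normalized support features,
    values are one-hot labels; logits come from an exponential affinity."""
    q = F.normalize(query_feats, dim=-1)
    k = F.normalize(support_feats, dim=-1)
    values = F.one_hot(support_labels, num_classes).float()
    affinity = torch.exp(-beta * (1.0 - q @ k.t()))    # (n_query, n_support)
    return alpha * affinity @ values                    # (n_query, num_classes)

support = torch.randn(5 * 46, 512)                  # 5-shot cache, 46 defect types
labels = torch.arange(46).repeat_interleave(5)
queries = torch.randn(8, 512)
print(cache_classifier_logits(queries, support, labels, 46).shape)  # (8, 46)
```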
2412.17632 | Renyang Liu | Renyang Liu, Ziyu Lyu, Wei Zhou, See-Kiong Ng | D-Judge: How Far Are We? Evaluating the Discrepancies Between
AI-synthesized Images and Natural Images through Multimodal Guidance | null | null | null | null | cs.AI cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Artificial Intelligence Generated Content (AIGC), distinguishing
AI-synthesized images from natural ones remains a key challenge. Despite
advancements in generative models, significant discrepancies persist. To
systematically investigate and quantify these discrepancies, we introduce an
AI-Natural Image Discrepancy assessment benchmark (\textit{D-Judge}) aimed at
addressing the critical question: \textit{how far are AI-generated images
(AIGIs) from truly realistic images?} We construct \textit{D-ANI}, a dataset
with 5,000 natural images and over 440,000 AIGIs generated by nine models using
Text-to-Image (T2I), Image-to-Image (I2I), and Text and Image-to-Image (TI2I)
prompts. Our framework evaluates the discrepancy across five dimensions: naive
image quality, semantic alignment, aesthetic appeal, downstream applicability,
and human validation. Results reveal notable gaps, emphasizing the importance
of aligning metrics with human judgment. Source code and datasets are available
at https://shorturl.at/l83W2.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 15:08:08 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 03:52:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Renyang",
""
],
[
"Lyu",
"Ziyu",
""
],
[
"Zhou",
"Wei",
""
],
[
"Ng",
"See-Kiong",
""
]
] | TITLE: D-Judge: How Far Are We? Evaluating the Discrepancies Between
AI-synthesized Images and Natural Images through Multimodal Guidance
ABSTRACT: In Artificial Intelligence Generated Content (AIGC), distinguishing
AI-synthesized images from natural ones remains a key challenge. Despite
advancements in generative models, significant discrepancies persist. To
systematically investigate and quantify these discrepancies, we introduce an
AI-Natural Image Discrepancy assessment benchmark (\textit{D-Judge}) aimed at
addressing the critical question: \textit{how far are AI-generated images
(AIGIs) from truly realistic images?} We construct \textit{D-ANI}, a dataset
with 5,000 natural images and over 440,000 AIGIs generated by nine models using
Text-to-Image (T2I), Image-to-Image (I2I), and Text and Image-to-Image (TI2I)
prompts. Our framework evaluates the discrepancy across five dimensions: naive
image quality, semantic alignment, aesthetic appeal, downstream applicability,
and human validation. Results reveal notable gaps, emphasizing the importance
of aligning metrics with human judgment. Source code and datasets are available
at https://shorturl.at/l83W2.
|
2412.17684 | Arnav Das | Arnav M. Das, Gantavya Bhatt, Lilly Kumari, Sahil Verma, Jeff Bilmes | COBRA: COmBinatorial Retrieval Augmentation for Few-Shot Adaptation | Accepted at CVPR 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Retrieval augmentation, the practice of retrieving additional data from large
auxiliary pools, has emerged as an effective technique for enhancing model
performance in the low-data regime. Prior approaches have employed only
nearest-neighbor based strategies for data selection, which retrieve auxiliary
samples with high similarity to instances in the target task. However, these
approaches are prone to selecting highly redundant samples, since they fail to
incorporate any notion of diversity. In our work, we first demonstrate that
data selection strategies used in prior retrieval-augmented few-shot adaptation
settings can be generalized using a class of functions known as Combinatorial
Mutual Information (CMI) measures. We then propose COBRA (COmBinatorial
Retrieval Augmentation), which employs an alternative CMI measure that
considers both diversity and similarity to a target dataset. COBRA consistently
outperforms previous retrieval approaches across image classification tasks and
few-shot learning techniques when used to retrieve samples from LAION-2B. COBRA
introduces negligible computational overhead to the cost of retrieval while
providing significant gains in downstream model performance.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 16:10:07 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 22:53:36 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Das",
"Arnav M.",
""
],
[
"Bhatt",
"Gantavya",
""
],
[
"Kumari",
"Lilly",
""
],
[
"Verma",
"Sahil",
""
],
[
"Bilmes",
"Jeff",
""
]
] | TITLE: COBRA: COmBinatorial Retrieval Augmentation for Few-Shot Adaptation
ABSTRACT: Retrieval augmentation, the practice of retrieving additional data from large
auxiliary pools, has emerged as an effective technique for enhancing model
performance in the low-data regime. Prior approaches have employed only
nearest-neighbor based strategies for data selection, which retrieve auxiliary
samples with high similarity to instances in the target task. However, these
approaches are prone to selecting highly redundant samples, since they fail to
incorporate any notion of diversity. In our work, we first demonstrate that
data selection strategies used in prior retrieval-augmented few-shot adaptation
settings can be generalized using a class of functions known as Combinatorial
Mutual Information (CMI) measures. We then propose COBRA (COmBinatorial
Retrieval Augmentation), which employs an alternative CMI measure that
considers both diversity and similarity to a target dataset. COBRA consistently
outperforms previous retrieval approaches across image classification tasks and
few-shot learning techniques when used to retrieve samples from LAION-2B. COBRA
introduces negligible computational overhead to the cost of retrieval while
providing significant gains in downstream model performance.
|
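The COBRA record above frames retrieval augmentation as optimizing a combinatorial mutual information measure that balances similarity to the target task against diversity among retrieved samples. The greedy sketch below uses a simple facility-location-style surrogate over cosine similarities to illustrate that trade-off; it is not the exact CMI measure used in the paper.

```python
import numpy as np

def greedy_diverse_retrieval(pool, target, k, lam=0.5):
    """Greedily pick k items from `pool` (n, d) that are similar to `target`
    (m, d) while penalizing redundancy with already-selected items."""
    pool = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    sim_to_target = (pool @ target.T).mean(axis=1)       # relevance term
    selected = []
    for _ in range(k):
        gain = sim_to_target.copy()
        if selected:
            redundancy = (pool @ pool[selected].T).max(axis=1)
            gain = gain - lam * redundancy               # diversity penalty
            gain[selected] = -np.inf                     # never re-pick an item
        selected.append(int(np.argmax(gain)))
    return selected

rng = np.random.default_rng(1)
pool, target = rng.normal(size=(1000, 64)), rng.normal(size=(16, 64))
print(greedy_diverse_retrieval(pool, target, k=10))
```

Setting lam to 0 reduces this to plain nearest-neighbor retrieval, which is exactly the redundancy-prone baseline the abstract argues against.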
2412.18947 | Kaiwen Zuo | Kaiwen Zuo, Yirui Jiang | MedHallBench: A New Benchmark for Assessing Hallucination in Medical
Large Language Models | Published to AAAI-25 Bridge Program | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical Large Language Models (MLLMs) have demonstrated potential in
healthcare applications, yet their propensity for hallucinations -- generating
medically implausible or inaccurate information -- presents substantial risks
to patient care. This paper introduces MedHallBench, a comprehensive benchmark
framework for evaluating and mitigating hallucinations in MLLMs. Our
methodology integrates expert-validated medical case scenarios with established
medical databases to create a robust evaluation dataset. The framework employs
a sophisticated measurement system that combines automated ACHMI (Automatic
Caption Hallucination Measurement in Medical Imaging) scoring with rigorous
clinical expert evaluations and utilizes reinforcement learning methods to
achieve automatic annotation. Through an optimized reinforcement learning from
human feedback (RLHF) training pipeline specifically designed for medical
applications, MedHallBench enables thorough evaluation of MLLMs across diverse
clinical contexts while maintaining stringent accuracy standards. We conducted
comparative experiments involving various models, utilizing the benchmark to
establish a baseline for widely adopted large language models (LLMs). Our
findings indicate that ACHMI provides a more nuanced understanding of the
effects of hallucinations compared to traditional metrics, thereby highlighting
its advantages in hallucination assessment. This research establishes a
foundational framework for enhancing MLLMs' reliability in healthcare settings
and presents actionable strategies for addressing the critical challenge of AI
hallucinations in medical applications.
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2024 16:51:29 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 00:16:52 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 02:29:47 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Mar 2025 23:37:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zuo",
"Kaiwen",
""
],
[
"Jiang",
"Yirui",
""
]
] | TITLE: MedHallBench: A New Benchmark for Assessing Hallucination in Medical
Large Language Models
ABSTRACT: Medical Large Language Models (MLLMs) have demonstrated potential in
healthcare applications, yet their propensity for hallucinations -- generating
medically implausible or inaccurate information -- presents substantial risks
to patient care. This paper introduces MedHallBench, a comprehensive benchmark
framework for evaluating and mitigating hallucinations in MLLMs. Our
methodology integrates expert-validated medical case scenarios with established
medical databases to create a robust evaluation dataset. The framework employs
a sophisticated measurement system that combines automated ACHMI (Automatic
Caption Hallucination Measurement in Medical Imaging) scoring with rigorous
clinical expert evaluations and utilizes reinforcement learning methods to
achieve automatic annotation. Through an optimized reinforcement learning from
human feedback (RLHF) training pipeline specifically designed for medical
applications, MedHallBench enables thorough evaluation of MLLMs across diverse
clinical contexts while maintaining stringent accuracy standards. We conducted
comparative experiments involving various models, utilizing the benchmark to
establish a baseline for widely adopted large language models (LLMs). Our
findings indicate that ACHMI provides a more nuanced understanding of the
effects of hallucinations compared to traditional metrics, thereby highlighting
its advantages in hallucination assessment. This research establishes a
foundational framework for enhancing MLLMs' reliability in healthcare settings
and presents actionable strategies for addressing the critical challenge of AI
hallucinations in medical applications.
|
2412.19412 | Xingyu Jiang | Jiangwei Ren, Xingyu Jiang, Zizhuo Li, Dingkang Liang, Xin Zhou, Xiang
Bai | MINIMA: Modality Invariant Image Matching | Accepted to CVPR 2025. The dataset and code are available at
https://github.com/LSXI7/MINIMA | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image matching for both cross-view and cross-modality plays a critical role
in multimodal perception. In practice, the modality gap caused by different
imaging systems/styles poses great challenges to the matching task. Existing
works try to extract invariant features for specific modalities and train on
limited datasets, showing poor generalization. In this paper, we present
MINIMA, a unified image matching framework for multiple cross-modal cases.
Without pursuing fancy modules, our MINIMA aims to enhance universal
performance from the perspective of data scaling up. For such purpose, we
propose a simple yet effective data engine that can freely produce a large
dataset containing multiple modalities, rich scenarios, and accurate matching
labels. Specifically, we scale up the modalities from cheap but rich RGB-only
matching data, by means of generative models. Under this setting, the matching
labels and rich diversity of the RGB dataset are well inherited by the
generated multimodal data. Benefiting from this, we construct MD-syn, a new
comprehensive dataset that fills the data gap for general multimodal image
matching. With MD-syn, we can directly train any advanced matching pipeline on
randomly selected modality pairs to obtain cross-modal ability. Extensive
experiments on in-domain and zero-shot matching tasks, including $19$
cross-modal cases, demonstrate that our MINIMA can significantly outperform the
baselines and even surpass modality-specific methods. The dataset and code are
available at https://github.com/LSXI7/MINIMA.
| [
{
"version": "v1",
"created": "Fri, 27 Dec 2024 02:39:50 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 09:04:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ren",
"Jiangwei",
""
],
[
"Jiang",
"Xingyu",
""
],
[
"Li",
"Zizhuo",
""
],
[
"Liang",
"Dingkang",
""
],
[
"Zhou",
"Xin",
""
],
[
"Bai",
"Xiang",
""
]
] | TITLE: MINIMA: Modality Invariant Image Matching
ABSTRACT: Image matching for both cross-view and cross-modality plays a critical role
in multimodal perception. In practice, the modality gap caused by different
imaging systems/styles poses great challenges to the matching task. Existing
works try to extract invariant features for specific modalities and train on
limited datasets, showing poor generalization. In this paper, we present
MINIMA, a unified image matching framework for multiple cross-modal cases.
Without pursuing fancy modules, our MINIMA aims to enhance universal
performance from the perspective of data scaling up. For such purpose, we
propose a simple yet effective data engine that can freely produce a large
dataset containing multiple modalities, rich scenarios, and accurate matching
labels. Specifically, we scale up the modalities from cheap but rich RGB-only
matching data, by means of generative models. Under this setting, the matching
labels and rich diversity of the RGB dataset are well inherited by the
generated multimodal data. Benefiting from this, we construct MD-syn, a new
comprehensive dataset that fills the data gap for general multimodal image
matching. With MD-syn, we can directly train any advanced matching pipeline on
randomly selected modality pairs to obtain cross-modal ability. Extensive
experiments on in-domain and zero-shot matching tasks, including $19$
cross-modal cases, demonstrate that our MINIMA can significantly outperform the
baselines and even surpass modality-specific methods. The dataset and code are
available at https://github.com/LSXI7/MINIMA.
|
2501.00363 | Xiaoning Dong | Xiaoning Dong, Peilin Xin, Jia Li and Wei Xu | SPDZCoder: Combining Expert Knowledge with LLMs for Generating
Privacy-Computing Code | null | null | null | null | cs.CR cs.AI cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy computing receives increasing attention but writing privacy computing
code remains challenging for developers due to limited library functions,
which necessitate implementing functions from scratch, and the data-oblivious
requirement, which contradicts programmers' intuitive thinking and usual
practices. Automating the generation of privacy computing code with Large
Language Models can streamline development effort and lower the barrier to
using privacy computing frameworks. However, existing LLMs still encounter
challenges in code translation for privacy-preserving computation, such as
translating Python to MP-SPDZ, due to the scarcity of MP-SPDZ data required for
effective pre-training or fine-tuning. Moreover, the lack of a benchmark
further complicates the evaluation of translation quality. To address the
limitations, this work proposes SPDZCoder, a rule-based framework that combines
LLMs with expert knowledge for generating privacy-computing code without
requiring additional training data. Specifically, SPDZCoder employs a rigorous
procedure for collecting high-quality expert knowledge to represent the
semantic-expressing differences between Python and MP-SPDZ, and to derive
transformation rules for translating Python to MP-SPDZ based on this
knowledge. Then, SPDZCoder progressively converts Python code into MP-SPDZ code
using transformation rules in a three-stage pipeline. To evaluate SPDZCoder, we
manually constructed a benchmark dataset, SPDZEval, which comprises six data
splits, each representing a distinct class of challenging tasks in MP-SPDZ
implementation. Extensive experiments show that SPDZCoder achieves superior
performance, significantly surpassing baselines in pass@1 and pass@2.
Specifically, SPDZCoder attains an overall correctness of 85.94% and 92.01% in
pass@1 and pass@2, respectively, whereas the best-performing baseline achieves
63.58% and 76.36%, respectively.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2024 09:29:38 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 12:52:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Dong",
"Xiaoning",
""
],
[
"Xin",
"Peilin",
""
],
[
"Li",
"Jia",
""
],
[
"Xu",
"Wei",
""
]
] | TITLE: SPDZCoder: Combining Expert Knowledge with LLMs for Generating
Privacy-Computing Code
ABSTRACT: Privacy computing receives increasing attention but writing privacy computing
code remains challenging for developers due to limited library functions,
which necessitate implementing functions from scratch, and the data-oblivious
requirement, which contradicts programmers' intuitive thinking and usual
practices. Automating the generation of privacy computing code with Large
Language Models can streamline development effort and lower the barrier to
using privacy computing frameworks. However, existing LLMs still encounter
challenges in code translation for privacy-preserving computation, such as
translating Python to MP-SPDZ, due to the scarcity of MP-SPDZ data required for
effective pre-training or fine-tuning. Moreover, the lack of a benchmark
further complicates the evaluation of translation quality. To address the
limitations, this work proposes SPDZCoder, a rule-based framework that combines
LLMs with expert knowledge for generating privacy-computing code without
requiring additional training data. Specifically, SPDZCoder employs a rigorous
procedure for collecting high-quality expert knowledge to represent the
semantic-expressing differences between Python and MP-SPDZ, and to derive
transformation rules for translating Python to MP-SPDZ based on this
knowledge. Then, SPDZCoder progressively converts Python code into MP-SPDZ code
using transformation rules in a three-stage pipeline. To evaluate SPDZCoder, we
manually constructed a benchmark dataset, SPDZEval, which comprises six data
splits, each representing a distinct class of challenging tasks in MP-SPDZ
implementation. Extensive experiments show that SPDZCoder achieves superior
performance, significantly surpassing baselines in pass@1 and pass@2.
Specifically, SPDZCoder attains an overall correctness of 85.94% and 92.01% in
pass@1 and pass@2, respectively, whereas the best-performing baseline achieves
63.58% and 76.36%, respectively.
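A staged, rule-based rewriting pipeline of the kind described can be sketched as successive pattern substitutions. The rules below are hypothetical placeholders rather than the paper's expert-derived MP-SPDZ transformation rules, and `STAGES`/`translate` are names invented for the sketch.

```python
import re

# Each stage is a list of (pattern, replacement) rewrite rules; the concrete
# rules are illustrative examples only, not the expert knowledge from the paper.
STAGES = [
    [(r"\bint\((.*?)\)", r"sint(\1)")],              # stage 1: type mapping (illustrative)
    [(r"\bprint\((.*?)\)", r"print_ln('%s', \1)")],  # stage 2: I/O mapping (illustrative)
    [(r"(\w+)\s*\*\*\s*2", r"\1 * \1")],             # stage 3: expression rewriting (illustrative)
]

def translate(python_src: str) -> str:
    code = python_src
    for stage in STAGES:                             # progressively apply each stage
        for pattern, repl in stage:
            code = re.sub(pattern, repl, code)
    return code

print(translate("x = int(3)\nprint(x ** 2)"))
```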
|
2501.02068 | Roseval Malaquias Junior | Roseval Malaquias Junior, Ramon Pires, Thales Sales Almeida, Kenzo
Sakiyama, Roseli A. F. Romero, Rodrigo Nogueira | The interplay between domain specialization and model size | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Scaling laws for language models have often focused on finding the optimal
model size and token count for training from scratch. However, achieving this
optimal balance requires significant compute resources due to the extensive
data demands when training models from randomly-initialized weights. Continued
pretraining offers a cost-effective alternative, leveraging the compute
investment from pretrained models to incorporate new knowledge without
requiring extensive new data. Recent findings suggest that data quality
influences constants in scaling laws, thereby altering the optimal
parameter-token allocation ratio. Building on this insight, we investigate the
interplay between domain specialization and model size during continued
pretraining under compute-constrained scenarios. Our goal is to identify an
optimal training regime for this scenario and detect patterns in this interplay
that can be generalized across different model sizes and domains. To compare
general and specialized training, we filtered a web-based dataset to extract
data from three domains: legal, medical, and accounting. We pretrained models
with 1.5B, 3B, 7B, and 14B parameters on both the unfiltered and filtered
datasets, then evaluated their performance on domain-specific exams. Results
show that as model size increases, specialized models outperform general models
while requiring less training compute. Additionally, their growing compute
efficiency leads to reduced forgetting of previously learned knowledge.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2025 19:28:53 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 16:48:14 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Mar 2025 17:18:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Junior",
"Roseval Malaquias",
""
],
[
"Pires",
"Ramon",
""
],
[
"Almeida",
"Thales Sales",
""
],
[
"Sakiyama",
"Kenzo",
""
],
[
"Romero",
"Roseli A. F.",
""
],
[
"Nogueira",
"Rodrigo",
""
]
] | TITLE: The interplay between domain specialization and model size
ABSTRACT: Scaling laws for language models have often focused on finding the optimal
model size and token count for training from scratch. However, achieving this
optimal balance requires significant compute resources due to the extensive
data demands when training models from randomly-initialized weights. Continued
pretraining offers a cost-effective alternative, leveraging the compute
investment from pretrained models to incorporate new knowledge without
requiring extensive new data. Recent findings suggest that data quality
influences constants in scaling laws, thereby altering the optimal
parameter-token allocation ratio. Building on this insight, we investigate the
interplay between domain specialization and model size during continued
pretraining under compute-constrained scenarios. Our goal is to identify an
optimal training regime for this scenario and detect patterns in this interplay
that can be generalized across different model sizes and domains. To compare
general and specialized training, we filtered a web-based dataset to extract
data from three domains: legal, medical, and accounting. We pretrained models
with 1.5B, 3B, 7B, and 14B parameters on both the unfiltered and filtered
datasets, then evaluated their performance on domain-specific exams. Results
show that as model size increases, specialized models outperform general models
while requiring less training compute. Additionally, their growing compute
efficiency leads to reduced forgetting of previously learned knowledge.
|
2501.06903 | Wojciech Zielonka | Wojciech Zielonka, Stephan J. Garbin, Alexandros Lattas, George
Kopanas, Paulo Gotardo, Thabo Beeler, Justus Thies, Timo Bolkart | Synthetic Prior for Few-Shot Drivable Head Avatar Inversion | Accepted to CVPR25 Website: https://zielon.github.io/synshot/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present SynShot, a novel method for the few-shot inversion of a drivable
head avatar based on a synthetic prior. We tackle three major challenges.
First, training a controllable 3D generative network requires a large number of
diverse sequences, for which pairs of images and high-quality tracked meshes
are not always available. Second, the use of real data is strictly regulated
(e.g., under the General Data Protection Regulation, which mandates frequent
deletion of models and data to accommodate a situation when a participant's
consent is withdrawn). Synthetic data, free from these constraints, is an
appealing alternative. Third, state-of-the-art monocular avatar models struggle
to generalize to new views and expressions, lacking a strong prior and often
overfitting to a specific viewpoint distribution. Inspired by machine learning
models trained solely on synthetic data, we propose a method that learns a
prior model from a large dataset of synthetic heads with diverse identities,
expressions, and viewpoints. With few input images, SynShot fine-tunes the
pretrained synthetic prior to bridge the domain gap, modeling a photorealistic
head avatar that generalizes to novel expressions and viewpoints. We model the
head avatar using 3D Gaussian splatting and a convolutional encoder-decoder
that outputs Gaussian parameters in UV texture space. To account for the
different modeling complexities over parts of the head (e.g., skin vs hair), we
embed the prior with explicit control for upsampling the number of per-part
primitives. Compared to SOTA monocular and GAN-based methods, SynShot
significantly improves novel view and expression synthesis.
| [
{
"version": "v1",
"created": "Sun, 12 Jan 2025 19:01:05 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 10:18:44 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 09:30:17 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zielonka",
"Wojciech",
""
],
[
"Garbin",
"Stephan J.",
""
],
[
"Lattas",
"Alexandros",
""
],
[
"Kopanas",
"George",
""
],
[
"Gotardo",
"Paulo",
""
],
[
"Beeler",
"Thabo",
""
],
[
"Thies",
"Justus",
""
],
[
"Bolkart",
"Timo",
""
]
] | TITLE: Synthetic Prior for Few-Shot Drivable Head Avatar Inversion
ABSTRACT: We present SynShot, a novel method for the few-shot inversion of a drivable
head avatar based on a synthetic prior. We tackle three major challenges.
First, training a controllable 3D generative network requires a large number of
diverse sequences, for which pairs of images and high-quality tracked meshes
are not always available. Second, the use of real data is strictly regulated
(e.g., under the General Data Protection Regulation, which mandates frequent
deletion of models and data to accommodate a situation when a participant's
consent is withdrawn). Synthetic data, free from these constraints, is an
appealing alternative. Third, state-of-the-art monocular avatar models struggle
to generalize to new views and expressions, lacking a strong prior and often
overfitting to a specific viewpoint distribution. Inspired by machine learning
models trained solely on synthetic data, we propose a method that learns a
prior model from a large dataset of synthetic heads with diverse identities,
expressions, and viewpoints. With few input images, SynShot fine-tunes the
pretrained synthetic prior to bridge the domain gap, modeling a photorealistic
head avatar that generalizes to novel expressions and viewpoints. We model the
head avatar using 3D Gaussian splatting and a convolutional encoder-decoder
that outputs Gaussian parameters in UV texture space. To account for the
different modeling complexities over parts of the head (e.g., skin vs hair), we
embed the prior with explicit control for upsampling the number of per-part
primitives. Compared to SOTA monocular and GAN-based methods, SynShot
significantly improves novel view and expression synthesis.
|
2501.08163 | Yucong Meng | Yucong Meng, Zhiwei Yang, Zhijian Song, Yonghong Shi | DH-Mamba: Exploring Dual-domain Hierarchical State Space Models for MRI
Reconstruction | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accelerated MRI reconstruction poses a challenging ill-posed inverse
problem due to the significant undersampling in k-space. Deep neural networks,
such as CNNs and ViTs, have shown substantial performance improvements for this
task while encountering the dilemma between global receptive fields and
efficient computation. To this end, this paper explores selective state space
models (Mamba), a new paradigm for long-range dependency modeling with linear
complexity, for efficient and effective MRI reconstruction. However, directly
applying Mamba to MRI reconstruction faces three significant issues: (1) Mamba
typically flattens 2D images into distinct 1D sequences along rows and columns,
disrupting k-space's unique spectrum and leaving its potential in k-space
learning unexplored. (2) Existing approaches adopt multi-directional lengthy
scanning to unfold images at the pixel level, leading to long-range forgetting
and high computational burden. (3) Mamba struggles with spatially-varying
contents, resulting in limited diversity of local representations. To address
these, we propose a dual-domain hierarchical Mamba for MRI reconstruction from
the following perspectives: (1) We pioneer vision Mamba in k-space learning. A
circular scanning is customized for spectrum unfolding, benefiting the global
modeling of k-space. (2) We propose a hierarchical Mamba with an efficient
scanning strategy in both image and k-space domains. It mitigates long-range
forgetting and achieves a better trade-off between efficiency and performance.
(3) We develop a local diversity enhancement module to improve the
spatially-varying representation of Mamba. Extensive experiments are conducted
on three public datasets for MRI reconstruction under various undersampling
patterns. Comprehensive results demonstrate that our method significantly
outperforms state-of-the-art methods with lower computational cost.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 14:41:51 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 06:08:21 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 13:41:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Meng",
"Yucong",
""
],
[
"Yang",
"Zhiwei",
""
],
[
"Song",
"Zhijian",
""
],
[
"Shi",
"Yonghong",
""
]
] | TITLE: DH-Mamba: Exploring Dual-domain Hierarchical State Space Models for MRI
Reconstruction
ABSTRACT: The accelerated MRI reconstruction poses a challenging ill-posed inverse
problem due to the significant undersampling in k-space. Deep neural networks,
such as CNNs and ViTs, have shown substantial performance improvements for this
task while encountering the dilemma between global receptive fields and
efficient computation. To this end, this paper explores selective state space
models (Mamba), a new paradigm for long-range dependency modeling with linear
complexity, for efficient and effective MRI reconstruction. However, directly
applying Mamba to MRI reconstruction faces three significant issues: (1) Mamba
typically flattens 2D images into distinct 1D sequences along rows and columns,
disrupting k-space's unique spectrum and leaving its potential in k-space
learning unexplored. (2) Existing approaches adopt multi-directional lengthy
scanning to unfold images at the pixel level, leading to long-range forgetting
and high computational burden. (3) Mamba struggles with spatially-varying
contents, resulting in limited diversity of local representations. To address
these, we propose a dual-domain hierarchical Mamba for MRI reconstruction from
the following perspectives: (1) We pioneer vision Mamba in k-space learning. A
circular scanning is customized for spectrum unfolding, benefiting the global
modeling of k-space. (2) We propose a hierarchical Mamba with an efficient
scanning strategy in both image and k-space domains. It mitigates long-range
forgetting and achieves a better trade-off between efficiency and performance.
(3) We develop a local diversity enhancement module to improve the
spatially-varying representation of Mamba. Extensive experiments are conducted
on three public datasets for MRI reconstruction under various undersampling
patterns. Comprehensive results demonstrate that our method significantly
outperforms state-of-the-art methods with lower computational cost.
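One plausible reading of circular k-space scanning is to serialize frequencies from the low-frequency center outwards, ordered by radius and then angle. The sketch below encodes only that assumed unfolding order (`circular_scan_order` is an invented name), not the paper's actual scan strategy.

```python
import numpy as np

def circular_scan_order(h, w):
    # Unfold an (h, w) k-space grid from the DC center outwards,
    # sorting positions by radius first and angle second.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = np.hypot(ys - cy, xs - cx)
    angle = np.arctan2(ys - cy, xs - cx)
    order = np.lexsort((angle.ravel(), radius.ravel()))   # primary key: radius
    return order                                          # flat indices into k-space

kspace = np.fft.fftshift(np.fft.fft2(np.random.rand(8, 8)))
sequence = kspace.ravel()[circular_scan_order(8, 8)]      # 1D sequence for a state space model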
|
2501.08279 | Longtao Jiang | Longtao Jiang, Zhendong Wang, Jianmin Bao, Wengang Zhou, Dongdong
Chen, Lei Shi, Dong Chen, Houqiang Li | SmartEraser: Remove Anything from Images using Masked-Region Guidance | Project at: https://longtaojiang.github.io/smarteraser.github.io/ | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object removal has so far been dominated by the mask-and-inpaint paradigm,
where the masked region is excluded from the input, leaving models relying on
unmasked areas to inpaint the missing region. However, this approach lacks
contextual information for the masked area, often resulting in unstable
performance. In this work, we introduce SmartEraser, built with a new removing
paradigm called Masked-Region Guidance. This paradigm retains the masked region
in the input, using it as guidance for the removal process. It offers several
distinct advantages: (a) it guides the model to accurately identify the object
to be removed, preventing its regeneration in the output; (b) since the user
mask often extends beyond the object itself, it aids in preserving the
surrounding context in the final result. Leveraging this new paradigm, we
present Syn4Removal, a large-scale object removal dataset, where instance
segmentation data is used to copy and paste objects onto images as removal
targets, with the original images serving as ground truths. Experimental
results demonstrate that SmartEraser significantly outperforms existing
methods, achieving superior performance in object removal, especially in
complex scenes with intricate compositions.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 17:55:12 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 09:36:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jiang",
"Longtao",
""
],
[
"Wang",
"Zhendong",
""
],
[
"Bao",
"Jianmin",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Shi",
"Lei",
""
],
[
"Chen",
"Dong",
""
],
[
"Li",
"Houqiang",
""
]
] | TITLE: SmartEraser: Remove Anything from Images using Masked-Region Guidance
ABSTRACT: Object removal has so far been dominated by the mask-and-inpaint paradigm,
where the masked region is excluded from the input, leaving models relying on
unmasked areas to inpaint the missing region. However, this approach lacks
contextual information for the masked area, often resulting in unstable
performance. In this work, we introduce SmartEraser, built with a new removing
paradigm called Masked-Region Guidance. This paradigm retains the masked region
in the input, using it as guidance for the removal process. It offers several
distinct advantages: (a) it guides the model to accurately identify the object
to be removed, preventing its regeneration in the output; (b) since the user
mask often extends beyond the object itself, it aids in preserving the
surrounding context in the final result. Leveraging this new paradigm, we
present Syn4Removal, a large-scale object removal dataset, where instance
segmentation data is used to copy and paste objects onto images as removal
targets, with the original images serving as ground truths. Experimental
results demonstrate that SmartEraser significantly outperforms existing
methods, achieving superior performance in object removal, especially in
complex scenes with intricate compositions.
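The copy-and-paste synthesis behind a dataset like Syn4Removal can be pictured as compositing a segmented object onto a clean image while keeping the original image as the removal ground truth. Function and argument names below are assumptions; dataset-specific blending, scaling, and placement heuristics are omitted.

```python
import numpy as np

def synthesize_removal_pair(background, obj_crop, obj_mask, top, left):
    # background: (H, W, 3) clean image (serves as the ground-truth target)
    # obj_crop:   (h, w, 3) object pixels; obj_mask: (h, w) binary segmentation
    h, w = obj_mask.shape
    composite = background.copy()
    region = composite[top:top + h, left:left + w]   # assumes the paste fits in bounds
    m = obj_mask.astype(bool)
    region[m] = obj_crop[m]                          # paste the object onto the scene
    full_mask = np.zeros(background.shape[:2], dtype=np.uint8)
    full_mask[top:top + h, left:left + w] = obj_mask
    return composite, full_mask, background          # model input, removal mask, target
```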
|
2501.09754 | Youngjoon Jang | Youngjoon Jang, Haran Raajesh, Liliane Momeni, G\"ul Varol, Andrew
Zisserman | Lost in Translation, Found in Context: Sign Language Translation with
Contextual Cues | CVPR 2025 Camera Ready, Project page:
https://www.robots.ox.ac.uk/~vgg/research/litfic/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our objective is to translate continuous sign language into spoken language
text. Inspired by the way human interpreters rely on context for accurate
translation, we incorporate additional contextual cues together with the
signing video, into a new translation framework. Specifically, besides visual
sign recognition features that encode the input video, we integrate
complementary textual information from (i) captions describing the background
show, (ii) translation of previous sentences, as well as (iii) pseudo-glosses
transcribing the signing. These are automatically extracted and inputted along
with the visual features to a pre-trained large language model (LLM), which we
fine-tune to generate spoken language translations in text form. Through
extensive ablation studies, we show the positive contribution of each input cue
to the translation performance. We train and evaluate our approach on BOBSL --
the largest British Sign Language dataset currently available. We show that our
contextual approach significantly enhances the quality of the translations
compared to previously reported results on BOBSL, and also to state-of-the-art
methods that we implement as baselines. Furthermore, we demonstrate the
generality of our approach by applying it also to How2Sign, an American Sign
Language dataset, and achieve competitive results.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 18:59:03 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 09:02:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jang",
"Youngjoon",
""
],
[
"Raajesh",
"Haran",
""
],
[
"Momeni",
"Liliane",
""
],
[
"Varol",
"Gül",
""
],
[
"Zisserman",
"Andrew",
""
]
] | TITLE: Lost in Translation, Found in Context: Sign Language Translation with
Contextual Cues
ABSTRACT: Our objective is to translate continuous sign language into spoken language
text. Inspired by the way human interpreters rely on context for accurate
translation, we incorporate additional contextual cues together with the
signing video, into a new translation framework. Specifically, besides visual
sign recognition features that encode the input video, we integrate
complementary textual information from (i) captions describing the background
show, (ii) translation of previous sentences, as well as (iii) pseudo-glosses
transcribing the signing. These are automatically extracted and inputted along
with the visual features to a pre-trained large language model (LLM), which we
fine-tune to generate spoken language translations in text form. Through
extensive ablation studies, we show the positive contribution of each input cue
to the translation performance. We train and evaluate our approach on BOBSL --
the largest British Sign Language dataset currently available. We show that our
contextual approach significantly enhances the quality of the translations
compared to previously reported results on BOBSL, and also to state-of-the-art
methods that we implement as baselines. Furthermore, we demonstrate the
generality of our approach by applying it also to How2Sign, an American Sign
Language dataset, and achieve competitive results.
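A minimal way to picture how the contextual cues reach the LLM is to assemble them into a single conditioning string alongside the visual features. The field names, placeholder token, and formatting below are assumptions for illustration, not the paper's actual input template.

```python
def build_translation_input(background_caption, prev_translation, pseudo_glosses):
    # Combine the three textual cues from the abstract with a placeholder for
    # the visual sign-recognition features consumed by the fine-tuned LLM.
    return (
        f"Background: {background_caption}\n"
        f"Previous translation: {prev_translation}\n"
        f"Pseudo-glosses: {' '.join(pseudo_glosses)}\n"
        "<sign_video_features>\n"
        "Spoken-language translation:"
    )

print(build_translation_input(
    "A wildlife documentary about coastal birds.",
    "The tide brings in small fish every morning.",
    ["BIRD", "WAIT", "WATER", "FISH"],
))
```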
|
2501.11901 | Hangyu Liu | Hangyu Liu, Bo Peng, Can Cui, Pengxiang Ding, Donglin Wang | Enhancing Adversarial Transferability via Component-Wise Transformation | 15 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples,
which pose significant challenges in security-sensitive applications. Among
various adversarial attack strategies, input transformation-based attacks have
demonstrated remarkable effectiveness in enhancing adversarial transferability.
However, existing methods still perform poorly across different architectures,
even though they have achieved promising results within the same architecture.
This limitation arises because, while models of the same architecture may focus
on different regions of the object, the variation is even more pronounced
across different architectures. Unfortunately, current approaches fail to
effectively guide models to attend to these diverse regions. To address this
issue, this paper proposes a novel input transformation-based attack method,
termed Component-Wise Transformation (CWT). CWT applies interpolation and
selective rotation to individual image blocks, ensuring that each transformed
image highlights different target regions, thereby improving the
transferability of adversarial examples. Extensive experiments on the standard
ImageNet dataset show that CWT consistently outperforms state-of-the-art
methods in both attack success rates and stability across CNN- and
Transformer-based models.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 05:41:09 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 01:07:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Hangyu",
""
],
[
"Peng",
"Bo",
""
],
[
"Cui",
"Can",
""
],
[
"Ding",
"Pengxiang",
""
],
[
"Wang",
"Donglin",
""
]
] | TITLE: Enhancing Adversarial Transferability via Component-Wise Transformation
ABSTRACT: Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples,
which pose significant challenges in security-sensitive applications. Among
various adversarial attack strategies, input transformation-based attacks have
demonstrated remarkable effectiveness in enhancing adversarial transferability.
However, existing methods still perform poorly across different architectures,
even though they have achieved promising results within the same architecture.
This limitation arises because, while models of the same architecture may focus
on different regions of the object, the variation is even more pronounced
across different architectures. Unfortunately, current approaches fail to
effectively guide models to attend to these diverse regions. To address this
issue, this paper proposes a novel input transformation-based attack method,
termed Component-Wise Transformation (CWT). CWT applies interpolation and
selective rotation to individual image blocks, ensuring that each transformed
image highlights different target regions, thereby improving the
transferability of adversarial examples. Extensive experiments on the standard
ImageNet dataset show that CWT consistently outperforms state-of-the-art
methods in both attack success rates and stability across CNN- and
Transformer-based models.
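The component-wise idea of transforming individual image blocks can be illustrated with a tiny block-wise augmentation. The grid size, selection probability, and 180-degree rotation below are assumptions made so the sketch stays shape-preserving; the paper's method additionally uses interpolation and more general rotations.

```python
import numpy as np

def component_wise_transform(img, grid=4, p=0.5, rng=None):
    # img: (H, W, C) array; each grid cell is independently (and selectively)
    # transformed so different copies highlight different target regions.
    rng = rng or np.random.default_rng()
    h, w, _ = img.shape
    bh, bw = h // grid, w // grid
    out = img.copy()
    for i in range(grid):
        for j in range(grid):
            if rng.random() < p:
                block = out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                # A 180-degree rotation keeps the block shape; interpolation-based
                # resizing and arbitrary angles from the paper are omitted here.
                out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = np.rot90(block, k=2)
    return out
```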
|
2501.15140 | Hulingxiao He | Hulingxiao He, Geng Li, Zijun Geng, Jinglin Xu, Yuxin Peng | Analyzing and Boosting the Power of Fine-Grained Visual Recognition for
Multi-modal Large Language Models | Published as a conference paper at ICLR 2025. The model is available
at https://huggingface.co/StevenHH2000/Finedefics | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal large language models (MLLMs) have shown remarkable abilities in
various visual understanding tasks. However, MLLMs still struggle with
fine-grained visual recognition (FGVR), which aims to identify
subordinate-level categories from images. This can negatively impact more
advanced capabilities of MLLMs, such as object-centric visual question
answering and reasoning. In our study, we revisit three quintessential
capabilities of MLLMs for FGVR, including object information extraction,
category knowledge reserve, and object-category alignment, and we position the
root cause as a misalignment problem. To address this issue, we present Finedefics,
an MLLM that enhances the model's FGVR capability by incorporating informative
attribute descriptions of objects into the training phase. We employ
contrastive learning on object-attribute pairs and attribute-category pairs
simultaneously and use examples from similar but incorrect categories as hard
negatives, naturally bringing representations of visual objects and category
names closer. Extensive evaluations across multiple popular FGVR datasets
demonstrate that Finedefics outperforms existing MLLMs of comparable parameter
sizes, showcasing its remarkable efficacy. The code is available at
https://github.com/PKU-ICST-MIPL/Finedefics_ICLR2025.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 08:52:43 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Feb 2025 02:57:30 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 13:12:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"He",
"Hulingxiao",
""
],
[
"Li",
"Geng",
""
],
[
"Geng",
"Zijun",
""
],
[
"Xu",
"Jinglin",
""
],
[
"Peng",
"Yuxin",
""
]
] | TITLE: Analyzing and Boosting the Power of Fine-Grained Visual Recognition for
Multi-modal Large Language Models
ABSTRACT: Multi-modal large language models (MLLMs) have shown remarkable abilities in
various visual understanding tasks. However, MLLMs still struggle with
fine-grained visual recognition (FGVR), which aims to identify
subordinate-level categories from images. This can negatively impact more
advanced capabilities of MLLMs, such as object-centric visual question
answering and reasoning. In our study, we revisit three quintessential
capabilities of MLLMs for FGVR, including object information extraction,
category knowledge reserve, and object-category alignment, and we position the
root cause as a misalignment problem. To address this issue, we present Finedefics,
an MLLM that enhances the model's FGVR capability by incorporating informative
attribute descriptions of objects into the training phase. We employ
contrastive learning on object-attribute pairs and attribute-category pairs
simultaneously and use examples from similar but incorrect categories as hard
negatives, naturally bringing representations of visual objects and category
names closer. Extensive evaluations across multiple popular FGVR datasets
demonstrate that Finedefics outperforms existing MLLMs of comparable parameter
sizes, showcasing its remarkable efficacy. The code is available at
https://github.com/PKU-ICST-MIPL/Finedefics_ICLR2025.
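The contrastive objective over object-attribute and object-category pairs with hard negatives can be sketched as a standard InfoNCE-style loss. This is a generic formulation under assumed tensor shapes, not the authors' exact loss or implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(obj_emb, pos_text_emb, hard_neg_text_emb, temperature=0.07):
    # obj_emb:           (B, D) visual object representations
    # pos_text_emb:      (B, D) matching attribute/category descriptions
    # hard_neg_text_emb: (B, K, D) similar-but-incorrect category descriptions
    obj = F.normalize(obj_emb, dim=-1)
    pos = F.normalize(pos_text_emb, dim=-1)
    neg = F.normalize(hard_neg_text_emb, dim=-1)
    pos_logits = (obj * pos).sum(dim=-1, keepdim=True) / temperature     # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", obj, neg) / temperature      # (B, K)
    logits = torch.cat([pos_logits, neg_logits], dim=1)
    labels = torch.zeros(obj.size(0), dtype=torch.long, device=obj.device)
    return F.cross_entropy(logits, labels)            # the positive sits at index 0
```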
|
2501.15167 | Jianhui Wang | Yangfan He, Jianhui Wang, Yijin Wang, Kun Li, Yan Zhong, Xinyuan Song,
Li Sun, Jingyuan Lu, Sida Li, Haoyuan Li, Jiayi Su, Jinhua Song, Miao Zhang,
Tianyu Shi | Enhancing Intent Understanding for Ambiguous prompt: A Human-Machine
Co-Adaption Strategy | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's image generation systems are capable of producing realistic and
high-quality images. However, user prompts often contain ambiguities, making it
difficult for these systems to interpret users' actual intentions.
Consequently, many users must modify their prompts several times to ensure the
generated images meet their expectations. While some methods focus on enhancing
prompts to make the generated images fit user needs, the model still struggles
to understand users' real needs, especially those of non-expert users. In this
research, we aim to enhance the visual parameter-tuning process, making the
model user-friendly for individuals without specialized knowledge and better
able to understand user needs. We propose a human-machine co-adaption strategy using
mutual information between the user's prompts and the pictures under
modification as the optimizing target to make the system better adapt to user
needs. We find that an improved model can reduce the necessity for multiple
rounds of adjustments. We also collect multi-round dialogue datasets with
prompt and image pairs and user intent. Various experiments demonstrate the
effectiveness of the proposed method in our proposed dataset. Our annotation
tools and several examples of our dataset are available at
https://zenodo.org/records/14876029 for easier review. We will open-source
our full dataset and code.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 10:32:00 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Feb 2025 18:02:47 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 19:51:26 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Mar 2025 06:06:33 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"He",
"Yangfan",
""
],
[
"Wang",
"Jianhui",
""
],
[
"Wang",
"Yijin",
""
],
[
"Li",
"Kun",
""
],
[
"Zhong",
"Yan",
""
],
[
"Song",
"Xinyuan",
""
],
[
"Sun",
"Li",
""
],
[
"Lu",
"Jingyuan",
""
],
[
"Li",
"Sida",
""
],
[
"Li",
"Haoyuan",
""
],
[
"Su",
"Jiayi",
""
],
[
"Song",
"Jinhua",
""
],
[
"Zhang",
"Miao",
""
],
[
"Shi",
"Tianyu",
""
]
] | TITLE: Enhancing Intent Understanding for Ambiguous prompt: A Human-Machine
Co-Adaption Strategy
ABSTRACT: Today's image generation systems are capable of producing realistic and
high-quality images. However, user prompts often contain ambiguities, making it
difficult for these systems to interpret users' actual intentions.
Consequently, many users must modify their prompts several times to ensure the
generated images meet their expectations. While some methods focus on enhancing
prompts to make the generated images fit user needs, the model still struggles
to understand users' real needs, especially those of non-expert users. In this
research, we aim to enhance the visual parameter-tuning process, making the
model user-friendly for individuals without specialized knowledge and better
able to understand user needs. We propose a human-machine co-adaption strategy using
mutual information between the user's prompts and the pictures under
modification as the optimizing target to make the system better adapt to user
needs. We find that an improved model can reduce the necessity for multiple
rounds of adjustments. We also collect multi-round dialogue datasets with
prompt and image pairs and user intent. Various experiments demonstrate the
effectiveness of the proposed method in our proposed dataset. Our annotation
tools and several examples of our dataset are available at
https://zenodo.org/records/14876029 for easier review. We will open-source
our full dataset and code.
|
2501.17112 | Claas Beger | Carl-Leander Henneking, Claas Beger | Decoding Human Preferences in Alignment: An Improved Approach to Inverse
Constitutional AI | 9 Pages, 3 Figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Traditional methods for aligning Large Language Models (LLMs), such as
Reinforcement Learning from Human Feedback (RLHF) and Direct Preference
Optimization (DPO), rely on implicit principles, limiting interpretability.
Constitutional AI (CAI) offers an explicit, rule-based framework for guiding
LLM alignment. Building on this, we refine the Inverse Constitutional AI (ICAI)
algorithm, which extracts constitutions from preference datasets. By improving
principle generation, clustering, and embedding processes, our approach
enhances the accuracy and generalizability of extracted principles across
synthetic and real-world datasets. Our results highlight the potential of these
principles to foster more transparent and adaptable alignment methods, offering
a promising direction for future advancements beyond traditional fine-tuning.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 17:59:56 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 17:39:07 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Henneking",
"Carl-Leander",
""
],
[
"Beger",
"Claas",
""
]
] | TITLE: Decoding Human Preferences in Alignment: An Improved Approach to Inverse
Constitutional AI
ABSTRACT: Traditional methods for aligning Large Language Models (LLMs), such as
Reinforcement Learning from Human Feedback (RLHF) and Direct Preference
Optimization (DPO), rely on implicit principles, limiting interpretability.
Constitutional AI (CAI) offers an explicit, rule-based framework for guiding
LLM alignment. Building on this, we refine the Inverse Constitutional AI (ICAI)
algorithm, which extracts constitutions from preference datasets. By improving
principle generation, clustering, and embedding processes, our approach
enhances the accuracy and generalizability of extracted principles across
synthetic and real-world datasets. Our results highlight the potential of these
principles to foster more transparent and adaptable alignment methods, offering
a promising direction for future advancements beyond traditional fine-tuning.
|
2501.17703 | Yubo Wang | Yubo Wang, Xiang Yue, Wenhu Chen | Critique Fine-Tuning: Learning to Critique is More Effective than
Learning to Imitate | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Supervised Fine-Tuning (SFT) is commonly used to train language models to
imitate annotated responses for given instructions. In this paper, we propose
Critique Fine-Tuning (CFT), a method more effective than SFT for reasoning
tasks. Instead of simply imitating correct responses, CFT trains models to
critique noisy responses, inspired by human learning processes that emphasize
critical thinking, deeper analysis, and nuanced understanding - traits often
overlooked by standard SFT. To validate the effectiveness of CFT, we construct
multiple critique datasets (e.g., WebInstruct, MetaMath, NuminaMath), where
GPT-4o serves as the teacher to generate critiques in the form of ([query;
noisy response], critique). Experiments on these datasets demonstrate that CFT
consistently outperforms SFT by 4-10% across six mathematical reasoning
benchmarks, and is effective across different base models including Qwen2.5,
Qwen2.5-Math, and DeepSeek-Math. Notably, our model Qwen2.5-Math-CFT only
requires 1 hour of training on 8 x H100 over the 50K examples, yet matches or
outperforms strong competitors like Qwen2.5-Math-Instruct on most benchmarks,
which use over 2M samples. Moreover, it matches the performance of SimpleRL,
which is a DeepSeek-r1 replication trained with 140 x more compute. Experiments
on IF_Eval and MT-Bench further demonstrate that CFT can significantly enhance
the model's general generation and instruction-following capabilities,
outperforming the Qwen2.5-Math-Instruct by a large margin. Ablation studies
show that CFT is robust to noisy response sources and teacher critique models.
These findings highlight that CFT offers a more effective alternative to
advance the reasoning of language models.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2025 15:20:30 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jan 2025 17:58:54 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Feb 2025 11:53:10 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Mar 2025 15:21:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Yubo",
""
],
[
"Yue",
"Xiang",
""
],
[
"Chen",
"Wenhu",
""
]
] | TITLE: Critique Fine-Tuning: Learning to Critique is More Effective than
Learning to Imitate
ABSTRACT: Supervised Fine-Tuning (SFT) is commonly used to train language models to
imitate annotated responses for given instructions. In this paper, we propose
Critique Fine-Tuning (CFT), a method more effective than SFT for reasoning
tasks. Instead of simply imitating correct responses, CFT trains models to
critique noisy responses, inspired by human learning processes that emphasize
critical thinking, deeper analysis, and nuanced understanding - traits often
overlooked by standard SFT. To validate the effectiveness of CFT, we construct
multiple critique datasets (e.g., WebInstruct, MetaMath, NuminaMath), where
GPT-4o serves as the teacher to generate critiques in the form of ([query;
noisy response], critique). Experiments on these datasets demonstrate that CFT
consistently outperforms SFT by 4-10% across six mathematical reasoning
benchmarks, and is effective across different base models including Qwen2.5,
Qwen2.5-Math, and DeepSeek-Math. Notably, our model Qwen2.5-Math-CFT only
requires 1 hour of training on 8 x H100 over the 50K examples, yet matches or
outperforms strong competitors like Qwen2.5-Math-Instruct on most benchmarks,
which use over 2M samples. Moreover, it matches the performance of SimpleRL,
which is a DeepSeek-r1 replication trained with 140 x more compute. Experiments
on IF_Eval and MT-Bench further demonstrate that CFT can significantly enhance
the model's general generation and instruction-following capabilities,
outperforming the Qwen2.5-Math-Instruct by a large margin. Ablation studies
show that CFT is robust to noisy response sources and teacher critique models.
These findings highlight that CFT offers a more effective alternative to
advance the reasoning of language models.
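The ([query; noisy response], critique) training format can be pictured with a small example builder. The prompt wording and field names are assumptions for illustration, not the authors' exact data schema; the arithmetic in the usage example is checked by hand.

```python
def build_cft_example(query: str, noisy_response: str, critique: str) -> dict:
    # The model is trained to generate the critique of a candidate solution,
    # rather than to imitate a reference answer as in standard SFT.
    prompt = (
        f"Question:\n{query}\n\n"
        f"Candidate solution:\n{noisy_response}\n\n"
        "Critique the candidate solution and point out any errors:"
    )
    return {"input": prompt, "target": critique}

example = build_cft_example(
    "What is 17 * 24?",
    "17 * 24 = 398",
    "The multiplication is incorrect: 17 * 24 = 408, not 398.",
)
print(example["input"])
```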
|
2501.19061 | Heqian Qiu | Heqian Qiu, Zhaofeng Shi, Lanxiao Wang, Huiyu Xiong, Xiang Li,
Hongliang Li | EgoMe: A New Dataset and Challenge for Following Me via Egocentric View
in Real World | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In human imitation learning, the imitator typically takes the egocentric view
as a benchmark, naturally transferring behaviors observed from an exocentric
view to their own, which provides inspiration for researching how robots can
more effectively imitate human behavior. However, current research primarily
focuses on the basic alignment issues of ego-exo data from different cameras,
rather than collecting data from the imitator's perspective, which is
inconsistent with the high-level cognitive process. To advance this research,
we introduce a novel large-scale egocentric dataset, called EgoMe, which
follows the process of human imitation learning via the imitator's
egocentric view in the real world. Our dataset includes 7902 paired exo-ego
videos (totaling 15804 videos) spanning diverse daily behaviors in various
real-world scenarios. For each video pair, one video captures an exocentric
view of the imitator observing the demonstrator's actions, while the other
captures an egocentric view of the imitator subsequently following those
actions. Notably, EgoMe uniquely incorporates exo-ego eye gaze, other
multi-modal sensor IMU data and different-level annotations for assisting in
establishing correlations between observing and imitating process. We further
provide a suit of challenging benchmarks for fully leveraging this data
resource and promoting the robot imitation learning research. Extensive
analysis demonstrates significant advantages over existing datasets. Our EgoMe
dataset and benchmarks are available at
https://huggingface.co/datasets/HeqianQiu/EgoMe.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 11:48:22 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 02:44:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Qiu",
"Heqian",
""
],
[
"Shi",
"Zhaofeng",
""
],
[
"Wang",
"Lanxiao",
""
],
[
"Xiong",
"Huiyu",
""
],
[
"Li",
"Xiang",
""
],
[
"Li",
"Hongliang",
""
]
] | TITLE: EgoMe: A New Dataset and Challenge for Following Me via Egocentric View
in Real World
ABSTRACT: In human imitation learning, the imitator typically takes the egocentric view
as a benchmark, naturally transferring behaviors observed from an exocentric
view to their own, which provides inspiration for researching how robots can
more effectively imitate human behavior. However, current research primarily
focuses on the basic alignment issues of ego-exo data from different cameras,
rather than collecting data from the imitator's perspective, which is
inconsistent with the high-level cognitive process. To advance this research,
we introduce a novel large-scale egocentric dataset, called EgoMe, which
follows the process of human imitation learning via the imitator's
egocentric view in the real world. Our dataset includes 7902 paired exo-ego
videos (totaling 15804 videos) spanning diverse daily behaviors in various
real-world scenarios. For each video pair, one video captures an exocentric
view of the imitator observing the demonstrator's actions, while the other
captures an egocentric view of the imitator subsequently following those
actions. Notably, EgoMe uniquely incorporates exo-ego eye gaze, other
multi-modal sensor IMU data and different-level annotations for assisting in
establishing correlations between observing and imitating process. We further
provide a suit of challenging benchmarks for fully leveraging this data
resource and promoting the robot imitation learning research. Extensive
analysis demonstrates significant advantages over existing datasets. Our EgoMe
dataset and benchmarks are available at
https://huggingface.co/datasets/HeqianQiu/EgoMe.
|
2502.01692 | Kim Yong Tan | Kim Yong Tan, Yueming Lyu, Ivor Tsang, Yew-Soon Ong | Fast Direct: Query-Efficient Online Black-box Guidance for
Diffusion-model Target Generation | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Guided diffusion-model generation is a promising direction for customizing
the generation process of a pre-trained diffusion model to address specific
downstream tasks. Existing guided diffusion models either rely on training the
guidance model with pre-collected datasets or require the objective functions
to be differentiable. However, for most real-world tasks, offline datasets are
often unavailable, and their objective functions are often not differentiable,
such as image generation with human preferences, molecular generation for drug
discovery, and material design. Thus, we need an $\textbf{online}$ algorithm
capable of collecting data during runtime and supporting a $\textbf{black-box}$
objective function. Moreover, the $\textbf{query efficiency}$ of the algorithm
is also critical because the objective evaluation of the query is often
expensive in real-world scenarios. In this work, we propose a novel and simple
algorithm, $\textbf{Fast Direct}$, for query-efficient online black-box target
generation. Our Fast Direct builds a pseudo-target on the data manifold to
update the noise sequence of the diffusion model with a universal direction,
which is promising for query-efficient guided generation. Extensive
experiments on twelve high-resolution ($\small {1024 \times 1024}$) image
target generation tasks and six 3D-molecule target generation tasks show
$\textbf{6}\times$ up to $\textbf{10}\times$ query efficiency improvement and
$\textbf{11}\times$ up to $\textbf{44}\times$ query efficiency improvement,
respectively. Our implementation is publicly available at:
https://github.com/kimyong95/guide-stable-diffusion/tree/fast-direct
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2025 17:21:10 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Feb 2025 13:49:21 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Feb 2025 05:25:25 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Mar 2025 06:39:47 GMT"
},
{
"version": "v5",
"created": "Sat, 29 Mar 2025 05:45:56 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tan",
"Kim Yong",
""
],
[
"Lyu",
"Yueming",
""
],
[
"Tsang",
"Ivor",
""
],
[
"Ong",
"Yew-Soon",
""
]
] | TITLE: Fast Direct: Query-Efficient Online Black-box Guidance for
Diffusion-model Target Generation
ABSTRACT: Guided diffusion-model generation is a promising direction for customizing
the generation process of a pre-trained diffusion model to address specific
downstream tasks. Existing guided diffusion models either rely on training the
guidance model with pre-collected datasets or require the objective functions
to be differentiable. However, for most real-world tasks, offline datasets are
often unavailable, and their objective functions are often not differentiable,
such as image generation with human preferences, molecular generation for drug
discovery, and material design. Thus, we need an $\textbf{online}$ algorithm
capable of collecting data during runtime and supporting a $\textbf{black-box}$
objective function. Moreover, the $\textbf{query efficiency}$ of the algorithm
is also critical because the objective evaluation of the query is often
expensive in real-world scenarios. In this work, we propose a novel and simple
algorithm, $\textbf{Fast Direct}$, for query-efficient online black-box target
generation. Our Fast Direct builds a pseudo-target on the data manifold to
update the noise sequence of the diffusion model with a universal direction,
which is promising for query-efficient guided generation. Extensive
experiments on twelve high-resolution ($\small {1024 \times 1024}$) image
target generation tasks and six 3D-molecule target generation tasks show
$\textbf{6}\times$ up to $\textbf{10}\times$ query efficiency improvement and
$\textbf{11}\times$ up to $\textbf{44}\times$ query efficiency improvement,
respectively. Our implementation is publicly available at:
https://github.com/kimyong95/guide-stable-diffusion/tree/fast-direct
|
2502.02234 | Zhenglai Li | Zhenglai Li, Yuqi Shi, Xiao He, Chang Tang | Mask-informed Deep Contrastive Incomplete Multi-view Clustering | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view clustering (MvC) utilizes information from multiple views to
uncover the underlying structures of data. Despite significant advancements in
MvC, mitigating the impact of missing samples in specific views on the
integration of knowledge from different views remains a critical challenge.
This paper proposes a novel Mask-informed Deep Contrastive Incomplete
Multi-view Clustering (Mask-IMvC) method, which elegantly identifies a
view-common representation for clustering. Specifically, we introduce a
mask-informed fusion network that aggregates incomplete multi-view information
while considering the observation status of samples across various views as a
mask, thereby reducing the adverse effects of missing values. Additionally, we
design a prior knowledge-assisted contrastive learning loss that boosts the
representation capability of the aggregated view-common representation by
injecting neighborhood information of samples from different views. Finally,
extensive experiments are conducted to demonstrate the superiority of the
proposed Mask-IMvC method over state-of-the-art approaches across multiple MvC
datasets, both in complete and incomplete scenarios.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 11:23:48 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 11:05:43 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Zhenglai",
""
],
[
"Shi",
"Yuqi",
""
],
[
"He",
"Xiao",
""
],
[
"Tang",
"Chang",
""
]
] | TITLE: Mask-informed Deep Contrastive Incomplete Multi-view Clustering
ABSTRACT: Multi-view clustering (MvC) utilizes information from multiple views to
uncover the underlying structures of data. Despite significant advancements in
MvC, mitigating the impact of missing samples in specific views on the
integration of knowledge from different views remains a critical challenge.
This paper proposes a novel Mask-informed Deep Contrastive Incomplete
Multi-view Clustering (Mask-IMvC) method, which elegantly identifies a
view-common representation for clustering. Specifically, we introduce a
mask-informed fusion network that aggregates incomplete multi-view information
while considering the observation status of samples across various views as a
mask, thereby reducing the adverse effects of missing values. Additionally, we
design a prior knowledge-assisted contrastive learning loss that boosts the
representation capability of the aggregated view-common representation by
injecting neighborhood information of samples from different views. Finally,
extensive experiments are conducted to demonstrate the superiority of the
proposed Mask-IMvC method over state-of-the-art approaches across multiple MvC
datasets, both in complete and incomplete scenarios.
|
2502.11546 | Yingli Shen | Yingli Shen, Wen Lai, Shuo Wang, Xueren Zhang, Kangyang Luo, Alexander
Fraser, Maosong Sun | DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data
Cleaning as Anomaly Detection | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The rapid development of multilingual large language models (LLMs) highlights
the need for high-quality, diverse, and clean multilingual datasets. In this
paper, we introduce DCAD-2000 (Data Cleaning as Anomaly Detection), a
large-scale multilingual corpus built using newly extracted Common Crawl data
and existing multilingual datasets. DCAD-2000 includes over 2,282 languages,
46.72TB of data, and 8.63 billion documents, spanning 155 high- and
medium-resource languages and 159 writing scripts. To overcome the limitations
of current data cleaning methods, which rely on manual heuristic thresholds, we
propose reframing data cleaning as an anomaly detection task. This dynamic
filtering approach significantly enhances data quality by identifying and
removing noisy or anomalous content. We evaluate the quality of DCAD-2000 on
the FineTask benchmark, demonstrating substantial improvements in multilingual
dataset quality and task performance.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 08:28:29 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 05:25:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shen",
"Yingli",
""
],
[
"Lai",
"Wen",
""
],
[
"Wang",
"Shuo",
""
],
[
"Zhang",
"Xueren",
""
],
[
"Luo",
"Kangyang",
""
],
[
"Fraser",
"Alexander",
""
],
[
"Sun",
"Maosong",
""
]
] | TITLE: DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data
Cleaning as Anomaly Detection
ABSTRACT: The rapid development of multilingual large language models (LLMs) highlights
the need for high-quality, diverse, and clean multilingual datasets. In this
paper, we introduce DCAD-2000 (Data Cleaning as Anomaly Detection), a
large-scale multilingual corpus built using newly extracted Common Crawl data
and existing multilingual datasets. DCAD-2000 includes over 2,282 languages,
46.72TB of data, and 8.63 billion documents, spanning 155 high- and
medium-resource languages and 159 writing scripts. To overcome the limitations
of current data cleaning methods, which rely on manual heuristic thresholds, we
propose reframing data cleaning as an anomaly detection task. This dynamic
filtering approach significantly enhances data quality by identifying and
removing noisy or anomalous content. We evaluate the quality of DCAD-2000 on
the FineTask benchmark, demonstrating substantial improvements in multilingual
dataset quality and task performance.
|
2502.11971 | Jixiang Chen | Jixiang Chen, Jing Chen, Kai Liu, Haochen Chang, Shanfeng Fu, Jian
Yang | Robust 6DoF Pose Tracking Considering Contour and Interior
Correspondence Uncertainty for AR Assembly Guidance | Submitted to IEEE Transactions on Instrumentation and Measurement | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Augmented reality assembly guidance is essential for intelligent
manufacturing and medical applications, requiring continuous measurement of the
6DoF poses of manipulated objects. Although current tracking methods have made
significant advancements in accuracy and efficiency, they still face challenges
in robustness when dealing with cluttered backgrounds, rotationally symmetric
objects, and noisy sequences. In this paper, we first propose a robust
contour-based pose tracking method that addresses error-prone contour
correspondences and improves noise tolerance. It utilizes a fan-shaped search
strategy to refine correspondences and models local contour shape and noise
uncertainty as a mixed probability distribution, resulting in a highly robust
contour energy function. Secondly, we introduce a CPU-only strategy to better
track rotationally symmetric objects and assist the contour-based method in
overcoming local minima by exploring sparse interior correspondences. This is
achieved by pre-sampling interior points from sparse viewpoint templates
offline and using the DIS optical flow algorithm to compute their
correspondences during tracking. Finally, we formulate a unified energy
function to fuse contour and interior information, which is solvable using a
re-weighted least squares algorithm. Experiments on public datasets and real
scenarios demonstrate that our method significantly outperforms
state-of-the-art monocular tracking methods and can achieve more than 100 FPS
using only a CPU.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 16:18:57 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 04:15:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Jixiang",
""
],
[
"Chen",
"Jing",
""
],
[
"Liu",
"Kai",
""
],
[
"Chang",
"Haochen",
""
],
[
"Fu",
"Shanfeng",
""
],
[
"Yang",
"Jian",
""
]
] | TITLE: Robust 6DoF Pose Tracking Considering Contour and Interior
Correspondence Uncertainty for AR Assembly Guidance
ABSTRACT: Augmented reality assembly guidance is essential for intelligent
manufacturing and medical applications, requiring continuous measurement of the
6DoF poses of manipulated objects. Although current tracking methods have made
significant advancements in accuracy and efficiency, they still face challenges
in robustness when dealing with cluttered backgrounds, rotationally symmetric
objects, and noisy sequences. In this paper, we first propose a robust
contour-based pose tracking method that addresses error-prone contour
correspondences and improves noise tolerance. It utilizes a fan-shaped search
strategy to refine correspondences and models local contour shape and noise
uncertainty as a mixed probability distribution, resulting in a highly robust
contour energy function. Secondly, we introduce a CPU-only strategy to better
track rotationally symmetric objects and assist the contour-based method in
overcoming local minima by exploring sparse interior correspondences. This is
achieved by pre-sampling interior points from sparse viewpoint templates
offline and using the DIS optical flow algorithm to compute their
correspondences during tracking. Finally, we formulate a unified energy
function to fuse contour and interior information, which is solvable using a
re-weighted least squares algorithm. Experiments on public datasets and real
scenarios demonstrate that our method significantly outperforms
state-of-the-art monocular tracking methods and can achieve more than 100 FPS
using only a CPU.
|
2502.14630 | Rebecca Perriment | Rebecca Perriment, Vasco Mergulhao, Volkan Kumtepeli, Priti Parikh,
Malcolm McCulloch, David Howey | Understanding long-term energy use in off-grid solar home systems in
sub-Saharan Africa | Draft updates, including text and figure changes | null | null | null | eess.SY cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solar home systems provide low-cost electricity access for rural off-grid
communities. As access to them increases, more long-term data becomes available
on how these systems are used throughout their lifetime. This work analyses a
dataset of 1,000 systems across sub-Saharan Africa. Dynamic time warping
clustering was applied to the load demand data from the systems, identifying
five distinct archetypal daily load profiles and their occurrence across the
dataset. Temporal analysis reveals a general decline in daily energy
consumption over time, with 77% of households reducing their usage compared to
the start of ownership. On average, there is a 33% decrease in daily
consumption by the end of the second year compared to the peak demand, which
occurs on the 96th day. Combining the load demand analysis with payment data
shows that this decrease in energy consumption is observed even in households
that are not experiencing economic hardship, indicating there are reasons
beyond financial constraints for decreasing energy use once energy access is
obtained.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 15:09:31 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 09:39:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Perriment",
"Rebecca",
""
],
[
"Mergulhao",
"Vasco",
""
],
[
"Kumtepeli",
"Volkan",
""
],
[
"Parikh",
"Priti",
""
],
[
"McCulloch",
"Malcolm",
""
],
[
"Howey",
"David",
""
]
] | TITLE: Understanding long-term energy use in off-grid solar home systems in
sub-Saharan Africa
ABSTRACT: Solar home systems provide low-cost electricity access for rural off-grid
communities. As access to them increases, more long-term data becomes available
on how these systems are used throughout their lifetime. This work analyses a
dataset of 1,000 systems across sub-Saharan Africa. Dynamic time warping
clustering was applied to the load demand data from the systems, identifying
five distinct archetypal daily load profiles and their occurrence across the
dataset. Temporal analysis reveals a general decline in daily energy
consumption over time, with 77% of households reducing their usage compared to
the start of ownership. On average, there is a 33% decrease in daily
consumption by the end of the second year compared to the peak demand, which
occurs on the 96th day. Combining the load demand analysis with payment data
shows that this decrease in energy consumption is observed even in households
that are not experiencing economic hardship, indicating there are reasons
beyond financial constraints for decreasing energy use once energy access is
obtained.
|
2502.19590 | Rebecca M. M. Hicke | Sil Hamilton, Rebecca M. M. Hicke, David Mimno, Matthew Wilkens | A City of Millions: Mapping Literary Social Networks At Scale | null | null | null | null | cs.CL cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | We release 70,509 high-quality social networks extracted from multilingual
fiction and nonfiction narratives. We additionally provide metadata for
$\sim$30,000 of these texts (73\% nonfiction and 27\% fiction) written between
1800 and 1999 in 58 languages. This dataset provides information on historical
social worlds at an unprecedented scale, including data for 2,510,021
individuals in 2,805,482 pair-wise relationships annotated for affinity and
relationship type. We achieve this scale by automating previously manual
methods of extracting social networks; specifically, we adapt an existing
annotation task as a language model prompt, ensuring consistency at scale with
the use of structured output. This dataset serves as a unique resource for
humanities and social science research by providing data on cognitive models of
social realities.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 22:11:47 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 21:51:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hamilton",
"Sil",
""
],
[
"Hicke",
"Rebecca M. M.",
""
],
[
"Mimno",
"David",
""
],
[
"Wilkens",
"Matthew",
""
]
] | TITLE: A City of Millions: Mapping Literary Social Networks At Scale
ABSTRACT: We release 70,509 high-quality social networks extracted from multilingual
fiction and nonfiction narratives. We additionally provide metadata for
$\sim$30,000 of these texts (73\% nonfiction and 27\% fiction) written between
1800 and 1999 in 58 languages. This dataset provides information on historical
social worlds at an unprecedented scale, including data for 2,510,021
individuals in 2,805,482 pair-wise relationships annotated for affinity and
relationship type. We achieve this scale by automating previously manual
methods of extracting social networks; specifically, we adapt an existing
annotation task as a language model prompt, ensuring consistency at scale with
the use of structured output. This dataset serves as a unique resource for
humanities and social science research by providing data on cognitive models of
social realities.
|
2502.19777 | Shuchang Zhou | Shuchang Zhou, Jiwei Wei, Shiyuan He, Yuyang Zhou, Chaoning Zhang, Jie
Zou, Ning Xie, Yang Yang | InPK: Infusing Prior Knowledge into Prompt for Vision-Language Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prompt tuning has become a popular strategy for adapting Vision-Language
Models (VLMs) to zero/few-shot visual recognition tasks. Some prompting
techniques introduce prior knowledge due to its richness, but when learnable
tokens are randomly initialized and disconnected from prior knowledge, they
tend to overfit on seen classes and struggle with domain shifts for unseen
ones. To address this issue, we propose the InPK model, which infuses
class-specific prior knowledge into the learnable tokens during initialization,
thus enabling the model to explicitly focus on class-relevant information.
Furthermore, to mitigate the weakening of class information by multi-layer
encoders, we continuously reinforce the interaction between learnable tokens
and prior knowledge across multiple feature levels. This progressive
interaction allows the learnable tokens to better capture the fine-grained
differences and universal visual concepts within prior knowledge, enabling the
model to extract more discriminative and generalized text features. Even for
unseen classes, the learned interaction allows the model to capture their
common representations and infer their appropriate positions within the
existing semantic structure. Moreover, we introduce a learnable text-to-vision
projection layer to accommodate the text adjustments, ensuring better alignment
of visual-text semantics. Extensive experiments on 11 recognition datasets show
that InPK significantly outperforms state-of-the-art methods in multiple
zero/few-shot image classification tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 05:33:18 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 11:44:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhou",
"Shuchang",
""
],
[
"Wei",
"Jiwei",
""
],
[
"He",
"Shiyuan",
""
],
[
"Zhou",
"Yuyang",
""
],
[
"Zhang",
"Chaoning",
""
],
[
"Zou",
"Jie",
""
],
[
"Xie",
"Ning",
""
],
[
"Yang",
"Yang",
""
]
] | TITLE: InPK: Infusing Prior Knowledge into Prompt for Vision-Language Models
ABSTRACT: Prompt tuning has become a popular strategy for adapting Vision-Language
Models (VLMs) to zero/few-shot visual recognition tasks. Some prompting
techniques introduce prior knowledge due to its richness, but when learnable
tokens are randomly initialized and disconnected from prior knowledge, they
tend to overfit on seen classes and struggle with domain shifts for unseen
ones. To address this issue, we propose the InPK model, which infuses
class-specific prior knowledge into the learnable tokens during initialization,
thus enabling the model to explicitly focus on class-relevant information.
Furthermore, to mitigate the weakening of class information by multi-layer
encoders, we continuously reinforce the interaction between learnable tokens
and prior knowledge across multiple feature levels. This progressive
interaction allows the learnable tokens to better capture the fine-grained
differences and universal visual concepts within prior knowledge, enabling the
model to extract more discriminative and generalized text features. Even for
unseen classes, the learned interaction allows the model to capture their
common representations and infer their appropriate positions within the
existing semantic structure. Moreover, we introduce a learnable text-to-vision
projection layer to accommodate the text adjustments, ensuring better alignment
of visual-text semantics. Extensive experiments on 11 recognition datasets show
that InPK significantly outperforms state-of-the-art methods in multiple
zero/few-shot image classification tasks.
|
2502.20225 | Dat Tran Tan | Lam Pham, Dat Tran, Phat Lam, Florian Skopik, Alexander Schindler,
Silvia Poletti, David Fischinger, Martin Boyer | DIN-CTS: Low-Complexity Depthwise-Inception Neural Network with
Contrastive Training Strategy for Deepfake Speech Detection | null | null | null | null | cs.SD cs.CR eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a deep neural network approach for deepfake speech
detection (DSD) based on a low-complexity Depthwise-Inception Network (DIN)
trained with a contrastive training strategy (CTS). In this framework, input
audio recordings are first transformed into spectrograms using Short-Time
Fourier Transform (STFT) and Linear Filter (LF), which are then used to train
the DIN. Once trained, the DIN processes bonafide utterances to extract audio
embeddings, which are used to construct a Gaussian distribution representing
genuine speech. Deepfake detection is then performed by computing the distance
between a test utterance and this distribution to determine whether the
utterance is fake or bonafide. To evaluate our proposed systems, we conducted
extensive experiments on the benchmark dataset of ASVspoof 2019 LA. The
experimental results demonstrate the effectiveness of combining the
Depthwise-Inception Network with the contrastive learning strategy in
distinguishing between fake and bonafide utterances. We achieved Equal Error
Rate (EER), Accuracy (Acc.), F1, AUC scores of 4.6%, 95.4%, 97.3%, and 98.9%
respectively using a single, low-complexity DIN with just 1.77 M parameters and
985 M FLOPS on short audio segments (4 seconds). Furthermore, our proposed
system outperforms the single-system submissions in the ASVspoof 2019 LA
challenge, showcasing its potential for real-time applications.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 16:09:04 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 09:32:56 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Pham",
"Lam",
""
],
[
"Tran",
"Dat",
""
],
[
"Lam",
"Phat",
""
],
[
"Skopik",
"Florian",
""
],
[
"Schindler",
"Alexander",
""
],
[
"Poletti",
"Silvia",
""
],
[
"Fischinger",
"David",
""
],
[
"Boyer",
"Martin",
""
]
] | TITLE: DIN-CTS: Low-Complexity Depthwise-Inception Neural Network with
Contrastive Training Strategy for Deepfake Speech Detection
ABSTRACT: In this paper, we propose a deep neural network approach for deepfake speech
detection (DSD) based on a low-complexity Depthwise-Inception Network (DIN)
trained with a contrastive training strategy (CTS). In this framework, input
audio recordings are first transformed into spectrograms using Short-Time
Fourier Transform (STFT) and Linear Filter (LF), which are then used to train
the DIN. Once trained, the DIN processes bonafide utterances to extract audio
embeddings, which are used to construct a Gaussian distribution representing
genuine speech. Deepfake detection is then performed by computing the distance
between a test utterance and this distribution to determine whether the
utterance is fake or bonafide. To evaluate our proposed systems, we conducted
extensive experiments on the benchmark dataset of ASVspoof 2019 LA. The
experimental results demonstrate the effectiveness of combining the
Depthwise-Inception Network with the contrastive learning strategy in
distinguishing between fake and bonafide utterances. We achieved Equal Error
Rate (EER), Accuracy (Acc.), F1, AUC scores of 4.6%, 95.4%, 97.3%, and 98.9%
respectively using a single, low-complexity DIN with just 1.77 M parameters and
985 M FLOPS on short audio segments (4 seconds). Furthermore, our proposed
system outperforms the single-system submissions in the ASVspoof 2019 LA
challenge, showcasing its potential for real-time applications.
|
2503.00065 | Jing Xu | Jing Xu, Franziska Boenisch, Adam Dziedzic | ADAGE: Active Defenses Against GNN Extraction | Not all authors have given their explicit consent | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) achieve high performance in various real-world
applications, such as drug discovery, traffic states prediction, and
recommendation systems. The fact that building powerful GNNs requires a large
amount of training data, powerful computing resources, and human expertise
turns the models into lucrative targets for model stealing attacks. Prior work
has revealed that the threat vector of stealing attacks against GNNs is large
and diverse, as an attacker can leverage various heterogeneous signals ranging
from node labels to high-dimensional node embeddings to create a local copy of
the target GNN at a fraction of the original training costs. This diversity in
the threat vector renders the design of effective and general defenses
challenging and existing defenses usually focus on one particular stealing
setup. Additionally, they solely provide means to identify stolen model copies
rather than preventing the attack. To close this gap, we propose the first and
general Active Defense Against GNN Extraction (ADAGE). By analyzing the queries
to the GNN, tracking their diversity in terms of proximity to different
communities identified in the underlying graph, and increasing the defense
strength with the growing fraction of communities that have been queried, ADAGE
can prevent stealing in all common attack setups. Our extensive experimental
evaluation using six benchmark datasets, four GNN models, and three types of
adaptive attackers shows that ADAGE penalizes attackers to the degree of
rendering stealing impossible, whilst not harming predictive performance for
legitimate users. ADAGE, thereby, contributes towards securely sharing valuable
GNNs in the future.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 10:56:11 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 11:32:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xu",
"Jing",
""
],
[
"Boenisch",
"Franziska",
""
],
[
"Dziedzic",
"Adam",
""
]
] | TITLE: ADAGE: Active Defenses Against GNN Extraction
ABSTRACT: Graph Neural Networks (GNNs) achieve high performance in various real-world
applications, such as drug discovery, traffic states prediction, and
recommendation systems. The fact that building powerful GNNs requires a large
amount of training data, powerful computing resources, and human expertise
turns the models into lucrative targets for model stealing attacks. Prior work
has revealed that the threat vector of stealing attacks against GNNs is large
and diverse, as an attacker can leverage various heterogeneous signals ranging
from node labels to high-dimensional node embeddings to create a local copy of
the target GNN at a fraction of the original training costs. This diversity in
the threat vector renders the design of effective and general defenses
challenging and existing defenses usually focus on one particular stealing
setup. Additionally, they solely provide means to identify stolen model copies
rather than preventing the attack. To close this gap, we propose the first and
general Active Defense Against GNN Extraction (ADAGE). By analyzing the queries
to the GNN, tracking their diversity in terms of proximity to different
communities identified in the underlying graph, and increasing the defense
strength with the growing fraction of communities that have been queried, ADAGE
can prevent stealing in all common attack setups. Our extensive experimental
evaluation using six benchmark datasets, four GNN models, and three types of
adaptive attackers shows that ADAGE penalizes attackers to the degree of
rendering stealing impossible, whilst not harming predictive performance for
legitimate users. ADAGE, thereby, contributes towards securely sharing valuable
GNNs in the future.
|
2503.00223 | Pengcheng Jiang | Pengcheng Jiang, Jiacheng Lin, Lang Cao, Runchu Tian, SeongKu Kang,
Zifeng Wang, Jimeng Sun, Jiawei Han | DeepRetrieval: Hacking Real Search Engines and Retrievers with Large
Language Models via Reinforcement Learning | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Information retrieval systems are crucial for enabling effective access to
large document collections. Recent approaches have leveraged Large Language
Models (LLMs) to enhance retrieval performance through query augmentation, but
often rely on expensive supervised learning or distillation techniques that
require significant computational resources and hand-labeled data. We introduce
DeepRetrieval, a reinforcement learning (RL) approach that trains LLMs for
query generation through trial and error without supervised data (reference
query). Using retrieval metrics as rewards, our system generates queries that
maximize retrieval performance. DeepRetrieval outperforms leading methods on
literature search with 65.07% (vs. previous SOTA 24.68%) recall for publication
search and 63.18% (vs. previous SOTA 32.11%) recall for trial search using
real-world search engines. DeepRetrieval also dominates in evidence-seeking
retrieval, classic information retrieval and SQL database search. With only 3B
parameters, it outperforms industry-leading models like GPT-4o and
Claude-3.5-Sonnet on 11/13 datasets. These results demonstrate that our RL
approach offers a more efficient and effective paradigm for information
retrieval. Our data and code are available at:
https://github.com/pat-jj/DeepRetrieval.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 22:16:42 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 18:01:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jiang",
"Pengcheng",
""
],
[
"Lin",
"Jiacheng",
""
],
[
"Cao",
"Lang",
""
],
[
"Tian",
"Runchu",
""
],
[
"Kang",
"SeongKu",
""
],
[
"Wang",
"Zifeng",
""
],
[
"Sun",
"Jimeng",
""
],
[
"Han",
"Jiawei",
""
]
] | TITLE: DeepRetrieval: Hacking Real Search Engines and Retrievers with Large
Language Models via Reinforcement Learning
ABSTRACT: Information retrieval systems are crucial for enabling effective access to
large document collections. Recent approaches have leveraged Large Language
Models (LLMs) to enhance retrieval performance through query augmentation, but
often rely on expensive supervised learning or distillation techniques that
require significant computational resources and hand-labeled data. We introduce
DeepRetrieval, a reinforcement learning (RL) approach that trains LLMs for
query generation through trial and error without supervised data (reference
query). Using retrieval metrics as rewards, our system generates queries that
maximize retrieval performance. DeepRetrieval outperforms leading methods on
literature search with 65.07% (vs. previous SOTA 24.68%) recall for publication
search and 63.18% (vs. previous SOTA 32.11%) recall for trial search using
real-world search engines. DeepRetrieval also dominates in evidence-seeking
retrieval, classic information retrieval and SQL database search. With only 3B
parameters, it outperforms industry-leading models like GPT-4o and
Claude-3.5-Sonnet on 11/13 datasets. These results demonstrate that our RL
approach offers a more efficient and effective paradigm for information
retrieval. Our data and code are available at:
https://github.com/pat-jj/DeepRetrieval.
|
2503.02112 | Philip Harris | Elizabeth G. Campolongo, Yuan-Tang Chou, Ekaterina Govorkova, Wahid
Bhimji, Wei-Lun Chao, Chris Harris, Shih-Chieh Hsu, Hilmar Lapp, Mark S.
Neubauer, Josephine Namayanja, Aneesh Subramanian, Philip Harris, Advaith
Anand, David E. Carlyn, Subhankar Ghosh, Christopher Lawrence, Eric Moreno,
Ryan Raikman, Jiaman Wu, Ziheng Zhang, Bayu Adhi, Mohammad Ahmadi
Gharehtoragh, Sa\'ul Alonso Monsalve, Marta Babicz, Furqan Baig, Namrata
Banerji, William Bardon, Tyler Barna, Tanya Berger-Wolf, Adji Bousso Dieng,
Micah Brachman, Quentin Buat, David C.Y. Hui, Phuong Cao, Franco Cerino,
Yi-Chun Chang, Shivaji Chaulagain, An-Kai Chen, Deming Chen, Eric Chen,
Chia-Jui Chou, Zih-Chen Ciou, Miles Cochran-Branson, Artur Cordeiro Oudot
Choi, Michael Coughlin, Matteo Cremonesi, Maria Dadarlat, Peter Darch, Malina
Desai, Daniel Diaz, Steven Dillmann, Javier Duarte, Isla Duporge, Urbas Ekka,
Saba Entezari Heravi, Hao Fang, Rian Flynn, Geoffrey Fox, Emily Freed, Hang
Gao, Jing Gao, Julia Gonski, Matthew Graham, Abolfazl Hashemi, Scott Hauck,
James Hazelden, Joshua Henry Peterson, Duc Hoang, Wei Hu, Mirco Huennefeld,
David Hyde, Vandana Janeja, Nattapon Jaroenchai, Haoyi Jia, Yunfan Kang,
Maksim Kholiavchenko, Elham E. Khoda, Sangin Kim, Aditya Kumar, Bo-Cheng Lai,
Trung Le, Chi-Wei Lee, JangHyeon Lee, Shaocheng Lee, Suzan van der Lee,
Charles Lewis, Haitong Li, Haoyang Li, Henry Liao, Mia Liu, Xiaolin Liu,
Xiulong Liu, Vladimir Loncar, Fangzheng Lyu, Ilya Makarov, Abhishikth
Mallampalli Chen-Yu Mao, Alexander Michels, Alexander Migala, Farouk Mokhtar,
Mathieu Morlighem, Min Namgung, Andrzej Novak, Andrew Novick, Amy Orsborn,
Anand Padmanabhan, Jia-Cheng Pan, Sneh Pandya, Zhiyuan Pei, Ana Peixoto,
George Percivall, Alex Po Leung, Sanjay Purushotham, Zhiqiang Que, Melissa
Quinnan, Arghya Ranjan, Dylan Rankin, Christina Reissel, Benedikt Riedel, Dan
Rubenstein, Argyro Sasli, Eli Shlizerman, Arushi Singh, Kim Singh, Eric R.
Sokol, Arturo Sorensen, Yu Su, Mitra Taheri, Vaibhav Thakkar, Ann Mariam
Thomas, Eric Toberer, Chenghan Tsai, Rebecca Vandewalle, Arjun Verma, Ricco
C. Venterea, He Wang, Jianwu Wang, Sam Wang, Shaowen Wang, Gordon Watts,
Jason Weitz, Andrew Wildridge, Rebecca Williams, Scott Wolf, Yue Xu, Jianqi
Yan, Jai Yu, Yulei Zhang, Haoran Zhao, Ying Zhao, Yibo Zhong | Building Machine Learning Challenges for Anomaly Detection in Science | 17 pages 6 figures to be submitted to Nature Communications | null | null | null | cs.LG astro-ph.IM | http://creativecommons.org/licenses/by/4.0/ | Scientific discoveries are often made by finding a pattern or object that was
not predicted by the known rules of science. Oftentimes, these anomalous events
or objects that do not conform to the norms are an indication that the rules of
science governing the data are incomplete, and something new needs to be
present to explain these unexpected outliers. The challenge of finding
anomalies can be confounding since it requires codifying a complete knowledge
of the known scientific behaviors and then projecting these known behaviors on
the data to look for deviations. When utilizing machine learning, this presents
a particular challenge since we require that the model not only understands
scientific data perfectly but also recognizes when the data is inconsistent and
out of the scope of its trained behavior. In this paper, we present three
datasets aimed at developing machine learning-based anomaly detection for
disparate scientific domains covering astrophysics, genomics, and polar
science. We present the different datasets along with a scheme to make machine
learning challenges around the three datasets findable, accessible,
interoperable, and reusable (FAIR). Furthermore, we present an approach that
generalizes to future machine learning challenges, enabling the possibility of
large, more compute-intensive challenges that can ultimately lead to scientific
discovery.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 22:54:07 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 01:05:46 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Campolongo",
"Elizabeth G.",
""
],
[
"Chou",
"Yuan-Tang",
""
],
[
"Govorkova",
"Ekaterina",
""
],
[
"Bhimji",
"Wahid",
""
],
[
"Chao",
"Wei-Lun",
""
],
[
"Harris",
"Chris",
""
],
[
"Hsu",
"Shih-Chieh",
""
],
[
"Lapp",
"Hilmar",
""
],
[
"Neubauer",
"Mark S.",
""
],
[
"Namayanja",
"Josephine",
""
],
[
"Subramanian",
"Aneesh",
""
],
[
"Harris",
"Philip",
""
],
[
"Anand",
"Advaith",
""
],
[
"Carlyn",
"David E.",
""
],
[
"Ghosh",
"Subhankar",
""
],
[
"Lawrence",
"Christopher",
""
],
[
"Moreno",
"Eric",
""
],
[
"Raikman",
"Ryan",
""
],
[
"Wu",
"Jiaman",
""
],
[
"Zhang",
"Ziheng",
""
],
[
"Adhi",
"Bayu",
""
],
[
"Gharehtoragh",
"Mohammad Ahmadi",
""
],
[
"Monsalve",
"Saúl Alonso",
""
],
[
"Babicz",
"Marta",
""
],
[
"Baig",
"Furqan",
""
],
[
"Banerji",
"Namrata",
""
],
[
"Bardon",
"William",
""
],
[
"Barna",
"Tyler",
""
],
[
"Berger-Wolf",
"Tanya",
""
],
[
"Dieng",
"Adji Bousso",
""
],
[
"Brachman",
"Micah",
""
],
[
"Buat",
"Quentin",
""
],
[
"Hui",
"David C. Y.",
""
],
[
"Cao",
"Phuong",
""
],
[
"Cerino",
"Franco",
""
],
[
"Chang",
"Yi-Chun",
""
],
[
"Chaulagain",
"Shivaji",
""
],
[
"Chen",
"An-Kai",
""
],
[
"Chen",
"Deming",
""
],
[
"Chen",
"Eric",
""
],
[
"Chou",
"Chia-Jui",
""
],
[
"Ciou",
"Zih-Chen",
""
],
[
"Cochran-Branson",
"Miles",
""
],
[
"Choi",
"Artur Cordeiro Oudot",
""
],
[
"Coughlin",
"Michael",
""
],
[
"Cremonesi",
"Matteo",
""
],
[
"Dadarlat",
"Maria",
""
],
[
"Darch",
"Peter",
""
],
[
"Desai",
"Malina",
""
],
[
"Diaz",
"Daniel",
""
],
[
"Dillmann",
"Steven",
""
],
[
"Duarte",
"Javier",
""
],
[
"Duporge",
"Isla",
""
],
[
"Ekka",
"Urbas",
""
],
[
"Heravi",
"Saba Entezari",
""
],
[
"Fang",
"Hao",
""
],
[
"Flynn",
"Rian",
""
],
[
"Fox",
"Geoffrey",
""
],
[
"Freed",
"Emily",
""
],
[
"Gao",
"Hang",
""
],
[
"Gao",
"Jing",
""
],
[
"Gonski",
"Julia",
""
],
[
"Graham",
"Matthew",
""
],
[
"Hashemi",
"Abolfazl",
""
],
[
"Hauck",
"Scott",
""
],
[
"Hazelden",
"James",
""
],
[
"Peterson",
"Joshua Henry",
""
],
[
"Hoang",
"Duc",
""
],
[
"Hu",
"Wei",
""
],
[
"Huennefeld",
"Mirco",
""
],
[
"Hyde",
"David",
""
],
[
"Janeja",
"Vandana",
""
],
[
"Jaroenchai",
"Nattapon",
""
],
[
"Jia",
"Haoyi",
""
],
[
"Kang",
"Yunfan",
""
],
[
"Kholiavchenko",
"Maksim",
""
],
[
"Khoda",
"Elham E.",
""
],
[
"Kim",
"Sangin",
""
],
[
"Kumar",
"Aditya",
""
],
[
"Lai",
"Bo-Cheng",
""
],
[
"Le",
"Trung",
""
],
[
"Lee",
"Chi-Wei",
""
],
[
"Lee",
"JangHyeon",
""
],
[
"Lee",
"Shaocheng",
""
],
[
"van der Lee",
"Suzan",
""
],
[
"Lewis",
"Charles",
""
],
[
"Li",
"Haitong",
""
],
[
"Li",
"Haoyang",
""
],
[
"Liao",
"Henry",
""
],
[
"Liu",
"Mia",
""
],
[
"Liu",
"Xiaolin",
""
],
[
"Liu",
"Xiulong",
""
],
[
"Loncar",
"Vladimir",
""
],
[
"Lyu",
"Fangzheng",
""
],
[
"Makarov",
"Ilya",
""
],
[
"Mao",
"Abhishikth Mallampalli Chen-Yu",
""
],
[
"Michels",
"Alexander",
""
],
[
"Migala",
"Alexander",
""
],
[
"Mokhtar",
"Farouk",
""
],
[
"Morlighem",
"Mathieu",
""
],
[
"Namgung",
"Min",
""
],
[
"Novak",
"Andrzej",
""
],
[
"Novick",
"Andrew",
""
],
[
"Orsborn",
"Amy",
""
],
[
"Padmanabhan",
"Anand",
""
],
[
"Pan",
"Jia-Cheng",
""
],
[
"Pandya",
"Sneh",
""
],
[
"Pei",
"Zhiyuan",
""
],
[
"Peixoto",
"Ana",
""
],
[
"Percivall",
"George",
""
],
[
"Leung",
"Alex Po",
""
],
[
"Purushotham",
"Sanjay",
""
],
[
"Que",
"Zhiqiang",
""
],
[
"Quinnan",
"Melissa",
""
],
[
"Ranjan",
"Arghya",
""
],
[
"Rankin",
"Dylan",
""
],
[
"Reissel",
"Christina",
""
],
[
"Riedel",
"Benedikt",
""
],
[
"Rubenstein",
"Dan",
""
],
[
"Sasli",
"Argyro",
""
],
[
"Shlizerman",
"Eli",
""
],
[
"Singh",
"Arushi",
""
],
[
"Singh",
"Kim",
""
],
[
"Sokol",
"Eric R.",
""
],
[
"Sorensen",
"Arturo",
""
],
[
"Su",
"Yu",
""
],
[
"Taheri",
"Mitra",
""
],
[
"Thakkar",
"Vaibhav",
""
],
[
"Thomas",
"Ann Mariam",
""
],
[
"Toberer",
"Eric",
""
],
[
"Tsai",
"Chenghan",
""
],
[
"Vandewalle",
"Rebecca",
""
],
[
"Verma",
"Arjun",
""
],
[
"Venterea",
"Ricco C.",
""
],
[
"Wang",
"He",
""
],
[
"Wang",
"Jianwu",
""
],
[
"Wang",
"Sam",
""
],
[
"Wang",
"Shaowen",
""
],
[
"Watts",
"Gordon",
""
],
[
"Weitz",
"Jason",
""
],
[
"Wildridge",
"Andrew",
""
],
[
"Williams",
"Rebecca",
""
],
[
"Wolf",
"Scott",
""
],
[
"Xu",
"Yue",
""
],
[
"Yan",
"Jianqi",
""
],
[
"Yu",
"Jai",
""
],
[
"Zhang",
"Yulei",
""
],
[
"Zhao",
"Haoran",
""
],
[
"Zhao",
"Ying",
""
],
[
"Zhong",
"Yibo",
""
]
] | TITLE: Building Machine Learning Challenges for Anomaly Detection in Science
ABSTRACT: Scientific discoveries are often made by finding a pattern or object that was
not predicted by the known rules of science. Oftentimes, these anomalous events
or objects that do not conform to the norms are an indication that the rules of
science governing the data are incomplete, and something new needs to be
present to explain these unexpected outliers. The challenge of finding
anomalies can be confounding since it requires codifying a complete knowledge
of the known scientific behaviors and then projecting these known behaviors on
the data to look for deviations. When utilizing machine learning, this presents
a particular challenge since we require that the model not only understands
scientific data perfectly but also recognizes when the data is inconsistent and
out of the scope of its trained behavior. In this paper, we present three
datasets aimed at developing machine learning-based anomaly detection for
disparate scientific domains covering astrophysics, genomics, and polar
science. We present the different datasets along with a scheme to make machine
learning challenges around the three datasets findable, accessible,
interoperable, and reusable (FAIR). Furthermore, we present an approach that
generalizes to future machine learning challenges, enabling the possibility of
large, more compute-intensive challenges that can ultimately lead to scientific
discovery.
|
2503.08427 | Yuan Gao | Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich | Accelerated Distributed Optimization with Compression and Error Feedback | null | null | null | null | math.OC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Modern machine learning tasks often involve massive datasets and models,
necessitating distributed optimization algorithms with reduced communication
overhead. Communication compression, where clients transmit compressed updates
to a central server, has emerged as a key technique to mitigate communication
bottlenecks. However, the theoretical understanding of stochastic distributed
optimization with contractive compression remains limited, particularly in
conjunction with Nesterov acceleration -- a cornerstone for achieving faster
convergence in optimization.
In this paper, we propose a novel algorithm, ADEF (Accelerated Distributed
Error Feedback), which integrates Nesterov acceleration, contractive
compression, error feedback, and gradient difference compression. We prove that
ADEF achieves the first accelerated convergence rate for stochastic distributed
optimization with contractive compression in the general convex regime.
Numerical experiments validate our theoretical findings and demonstrate the
practical efficacy of ADEF in reducing communication costs while maintaining
fast convergence.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 13:40:34 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 20:52:06 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gao",
"Yuan",
""
],
[
"Rodomanov",
"Anton",
""
],
[
"Rack",
"Jeremy",
""
],
[
"Stich",
"Sebastian U.",
""
]
] | TITLE: Accelerated Distributed Optimization with Compression and Error Feedback
ABSTRACT: Modern machine learning tasks often involve massive datasets and models,
necessitating distributed optimization algorithms with reduced communication
overhead. Communication compression, where clients transmit compressed updates
to a central server, has emerged as a key technique to mitigate communication
bottlenecks. However, the theoretical understanding of stochastic distributed
optimization with contractive compression remains limited, particularly in
conjunction with Nesterov acceleration -- a cornerstone for achieving faster
convergence in optimization.
In this paper, we propose a novel algorithm, ADEF (Accelerated Distributed
Error Feedback), which integrates Nesterov acceleration, contractive
compression, error feedback, and gradient difference compression. We prove that
ADEF achieves the first accelerated convergence rate for stochastic distributed
optimization with contractive compression in the general convex regime.
Numerical experiments validate our theoretical findings and demonstrate the
practical efficacy of ADEF in reducing communication costs while maintaining
fast convergence.
|
2503.09433 | Richard Dubniczky | Richard A. Dubniczky, Krisztofer Zolt\'an Horv\'at, Tam\'as Bisztray,
Mohamed Amine Ferrag, Lucas C. Cordeiro, Norbert Tihanyi | CASTLE: Benchmarking Dataset for Static Code Analyzers and LLMs towards
CWE Detection | null | null | null | null | cs.CR cs.AI cs.SE | http://creativecommons.org/licenses/by/4.0/ | Identifying vulnerabilities in source code is crucial, especially in critical
software components. Existing methods such as static analysis, dynamic
analysis, formal verification, and recently Large Language Models are widely
used to detect security flaws. This paper introduces CASTLE (CWE Automated
Security Testing and Low-Level Evaluation), a benchmarking framework for
evaluating the vulnerability detection capabilities of different methods. We
assess 13 static analysis tools, 10 LLMs, and 2 formal verification tools using
a hand-crafted dataset of 250 micro-benchmark programs covering 25 common CWEs.
We propose the CASTLE Score, a novel evaluation metric to ensure fair
comparison. Our results reveal key differences: ESBMC (a formal verification
tool) minimizes false positives but struggles with vulnerabilities beyond model
checking, such as weak cryptography or SQL injection. Static analyzers suffer
from high false positives, increasing manual validation efforts for developers.
LLMs perform exceptionally well in the CASTLE dataset when identifying
vulnerabilities in small code snippets. However, their accuracy declines, and
hallucinations increase as the code size grows. These results suggest that LLMs
could play a pivotal role in future security solutions, particularly within
code completion frameworks, where they can provide real-time guidance to
prevent vulnerabilities. The dataset is accessible at
https://github.com/CASTLE-Benchmark.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 14:30:05 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 16:07:10 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Dubniczky",
"Richard A.",
""
],
[
"Horvát",
"Krisztofer Zoltán",
""
],
[
"Bisztray",
"Tamás",
""
],
[
"Ferrag",
"Mohamed Amine",
""
],
[
"Cordeiro",
"Lucas C.",
""
],
[
"Tihanyi",
"Norbert",
""
]
] | TITLE: CASTLE: Benchmarking Dataset for Static Code Analyzers and LLMs towards
CWE Detection
ABSTRACT: Identifying vulnerabilities in source code is crucial, especially in critical
software components. Existing methods such as static analysis, dynamic
analysis, formal verification, and recently Large Language Models are widely
used to detect security flaws. This paper introduces CASTLE (CWE Automated
Security Testing and Low-Level Evaluation), a benchmarking framework for
evaluating the vulnerability detection capabilities of different methods. We
assess 13 static analysis tools, 10 LLMs, and 2 formal verification tools using
a hand-crafted dataset of 250 micro-benchmark programs covering 25 common CWEs.
We propose the CASTLE Score, a novel evaluation metric to ensure fair
comparison. Our results reveal key differences: ESBMC (a formal verification
tool) minimizes false positives but struggles with vulnerabilities beyond model
checking, such as weak cryptography or SQL injection. Static analyzers suffer
from high false positives, increasing manual validation efforts for developers.
LLMs perform exceptionally well in the CASTLE dataset when identifying
vulnerabilities in small code snippets. However, their accuracy declines, and
hallucinations increase as the code size grows. These results suggest that LLMs
could play a pivotal role in future security solutions, particularly within
code completion frameworks, where they can provide real-time guidance to
prevent vulnerabilities. The dataset is accessible at
https://github.com/CASTLE-Benchmark.
|
2503.10616 | Jinyang Li | Jinyang Li, En Yu, Sijia Chen, Wenbing Tao | OVTR: End-to-End Open-Vocabulary Multiple Object Tracking with
Transformer | Accepted by ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary multiple object tracking aims to generalize trackers to
unseen categories during training, enabling their application across a variety
of real-world scenarios. However, the existing open-vocabulary tracker is
constrained by its framework structure, isolated frame-level perception, and
insufficient modal interactions, which hinder its performance in
open-vocabulary classification and tracking. In this paper, we propose OVTR
(End-to-End Open-Vocabulary Multiple Object Tracking with TRansformer), the
first end-to-end open-vocabulary tracker that models motion, appearance, and
category simultaneously. To achieve stable classification and continuous
tracking, we design the CIP (Category Information Propagation) strategy, which
establishes multiple high-level category information priors for subsequent
frames. Additionally, we introduce a dual-branch structure for generalization
capability and deep multimodal interaction, and incorporate protective
strategies in the decoder to enhance performance. Experimental results show
that our method surpasses previous trackers on the open-vocabulary MOT
benchmark while also achieving faster inference speeds and significantly
reducing preprocessing requirements. Moreover, the experiment transferring the
model to another dataset demonstrates its strong adaptability. Models and code
are released at https://github.com/jinyanglii/OVTR.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:56:10 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 16:12:19 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2025 17:15:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Jinyang",
""
],
[
"Yu",
"En",
""
],
[
"Chen",
"Sijia",
""
],
[
"Tao",
"Wenbing",
""
]
] | TITLE: OVTR: End-to-End Open-Vocabulary Multiple Object Tracking with
Transformer
ABSTRACT: Open-vocabulary multiple object tracking aims to generalize trackers to
unseen categories during training, enabling their application across a variety
of real-world scenarios. However, the existing open-vocabulary tracker is
constrained by its framework structure, isolated frame-level perception, and
insufficient modal interactions, which hinder its performance in
open-vocabulary classification and tracking. In this paper, we propose OVTR
(End-to-End Open-Vocabulary Multiple Object Tracking with TRansformer), the
first end-to-end open-vocabulary tracker that models motion, appearance, and
category simultaneously. To achieve stable classification and continuous
tracking, we design the CIP (Category Information Propagation) strategy, which
establishes multiple high-level category information priors for subsequent
frames. Additionally, we introduce a dual-branch structure for generalization
capability and deep multimodal interaction, and incorporate protective
strategies in the decoder to enhance performance. Experimental results show
that our method surpasses previous trackers on the open-vocabulary MOT
benchmark while also achieving faster inference speeds and significantly
reducing preprocessing requirements. Moreover, the experiment transferring the
model to another dataset demonstrates its strong adaptability. Models and code
are released at https://github.com/jinyanglii/OVTR.
|
2503.11720 | Hanyang Zhao | Hanyang Zhao, Haoxian Chen, Yucheng Guo, Genta Indra Winata, Tingting
Ou, Ziyu Huang, David D. Yao, Wenpin Tang | Fine-Tuning Diffusion Generative Models via Rich Preference Optimization | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce Rich Preference Optimization (RPO), a novel pipeline that
leverages rich feedback signals to improve the curation of preference pairs for
fine-tuning text-to-image diffusion models. Traditional methods, like
Diffusion-DPO, often rely solely on reward model labeling, which can be opaque,
offer limited insights into the rationale behind preferences, and are prone to
issues such as reward hacking or overfitting. In contrast, our approach begins
with generating detailed critiques of synthesized images to extract reliable
and actionable image editing instructions. By implementing these instructions,
we create refined images, resulting in synthetic, informative preference pairs
that serve as enhanced tuning datasets. We demonstrate the effectiveness of our
pipeline and the resulting datasets in fine-tuning state-of-the-art diffusion
models.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 21:10:29 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 19:11:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhao",
"Hanyang",
""
],
[
"Chen",
"Haoxian",
""
],
[
"Guo",
"Yucheng",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Ou",
"Tingting",
""
],
[
"Huang",
"Ziyu",
""
],
[
"Yao",
"David D.",
""
],
[
"Tang",
"Wenpin",
""
]
] | TITLE: Fine-Tuning Diffusion Generative Models via Rich Preference Optimization
ABSTRACT: We introduce Rich Preference Optimization (RPO), a novel pipeline that
leverages rich feedback signals to improve the curation of preference pairs for
fine-tuning text-to-image diffusion models. Traditional methods, like
Diffusion-DPO, often rely solely on reward model labeling, which can be opaque,
offer limited insights into the rationale behind preferences, and are prone to
issues such as reward hacking or overfitting. In contrast, our approach begins
with generating detailed critiques of synthesized images to extract reliable
and actionable image editing instructions. By implementing these instructions,
we create refined images, resulting in synthetic, informative preference pairs
that serve as enhanced tuning datasets. We demonstrate the effectiveness of our
pipeline and the resulting datasets in fine-tuning state-of-the-art diffusion
models.
|
2503.11849 | Yi Wang | Yi Wang, Zhitong Xiong, Chenying Liu, Adam J. Stewart, Thomas
Dujardin, Nikolaos Ioannis Bountos, Angelos Zavras, Franziska Gerken, Ioannis
Papoutsis, Laura Leal-Taix\'e, Xiao Xiang Zhu | Towards a Unified Copernicus Foundation Model for Earth Vision | 31 pages, 32 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in Earth observation (EO) foundation models have unlocked the
potential of big satellite data to learn generic representations from space,
benefiting a wide range of downstream applications crucial to our planet.
However, most existing efforts remain limited to fixed spectral sensors, focus
solely on the Earth's surface, and overlook valuable metadata beyond imagery.
In this work, we take a step towards next-generation EO foundation models with
three key components: 1) Copernicus-Pretrain, a massive-scale pretraining
dataset that integrates 18.7M aligned images from all major Copernicus Sentinel
missions, spanning from the Earth's surface to its atmosphere; 2)
Copernicus-FM, a unified foundation model capable of processing any spectral or
non-spectral sensor modality using extended dynamic hypernetworks and flexible
metadata encoding; and 3) Copernicus-Bench, a systematic evaluation benchmark
with 15 hierarchical downstream tasks ranging from preprocessing to specialized
applications for each Sentinel mission. Our dataset, model, and benchmark
greatly improve the scalability, versatility, and multimodal adaptability of EO
foundation models, while also creating new opportunities to connect EO,
weather, and climate research. Codes, datasets and models are available at
https://github.com/zhu-xlab/Copernicus-FM.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 20:16:48 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 20:01:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Yi",
""
],
[
"Xiong",
"Zhitong",
""
],
[
"Liu",
"Chenying",
""
],
[
"Stewart",
"Adam J.",
""
],
[
"Dujardin",
"Thomas",
""
],
[
"Bountos",
"Nikolaos Ioannis",
""
],
[
"Zavras",
"Angelos",
""
],
[
"Gerken",
"Franziska",
""
],
[
"Papoutsis",
"Ioannis",
""
],
[
"Leal-Taixé",
"Laura",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] | TITLE: Towards a Unified Copernicus Foundation Model for Earth Vision
ABSTRACT: Advances in Earth observation (EO) foundation models have unlocked the
potential of big satellite data to learn generic representations from space,
benefiting a wide range of downstream applications crucial to our planet.
However, most existing efforts remain limited to fixed spectral sensors, focus
solely on the Earth's surface, and overlook valuable metadata beyond imagery.
In this work, we take a step towards next-generation EO foundation models with
three key components: 1) Copernicus-Pretrain, a massive-scale pretraining
dataset that integrates 18.7M aligned images from all major Copernicus Sentinel
missions, spanning from the Earth's surface to its atmosphere; 2)
Copernicus-FM, a unified foundation model capable of processing any spectral or
non-spectral sensor modality using extended dynamic hypernetworks and flexible
metadata encoding; and 3) Copernicus-Bench, a systematic evaluation benchmark
with 15 hierarchical downstream tasks ranging from preprocessing to specialized
applications for each Sentinel mission. Our dataset, model, and benchmark
greatly improve the scalability, versatility, and multimodal adaptability of EO
foundation models, while also creating new opportunities to connect EO,
weather, and climate research. Codes, datasets and models are available at
https://github.com/zhu-xlab/Copernicus-FM.
|
2503.13262 | Deyin Yi | Deyin Yi, Yihao Liu, Lang Cao, Mengyu Zhou, Haoyu Dong, Shi Han,
Dongmei Zhang | TablePilot: Recommending Human-Preferred Tabular Data Analysis with
Large Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Tabular data analysis is crucial in many scenarios, yet efficiently
identifying the most relevant data analysis queries and results for a new table
remains a significant challenge. The complexity of tabular data, diverse
analytical operations, and the demand for high-quality analysis make the
process tedious. To address these challenges, we aim to recommend
query-code-result triplets tailored for new tables in tabular data analysis
workflows. In this paper, we present TablePilot, a pioneering tabular data
analysis framework leveraging large language models to autonomously generate
comprehensive and superior analytical results without relying on user profiles
or prior interactions. The framework incorporates key designs in analysis
preparation and analysis optimization to enhance accuracy. Additionally, we
propose Rec-Align, a novel method to further improve recommendation quality and
better align with human preferences. Experiments on DART, a dataset
specifically designed for comprehensive tabular data analysis recommendation,
demonstrate the effectiveness of our framework. Based on GPT-4o, the tuned
TablePilot achieves 77.0% top-5 recommendation recall. Human evaluations
further highlight its effectiveness in optimizing tabular data analysis
workflows.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 15:16:59 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:41:59 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 10:42:08 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Mar 2025 07:02:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yi",
"Deyin",
""
],
[
"Liu",
"Yihao",
""
],
[
"Cao",
"Lang",
""
],
[
"Zhou",
"Mengyu",
""
],
[
"Dong",
"Haoyu",
""
],
[
"Han",
"Shi",
""
],
[
"Zhang",
"Dongmei",
""
]
] | TITLE: TablePilot: Recommending Human-Preferred Tabular Data Analysis with
Large Language Models
ABSTRACT: Tabular data analysis is crucial in many scenarios, yet efficiently
identifying the most relevant data analysis queries and results for a new table
remains a significant challenge. The complexity of tabular data, diverse
analytical operations, and the demand for high-quality analysis make the
process tedious. To address these challenges, we aim to recommend
query-code-result triplets tailored for new tables in tabular data analysis
workflows. In this paper, we present TablePilot, a pioneering tabular data
analysis framework leveraging large language models to autonomously generate
comprehensive and superior analytical results without relying on user profiles
or prior interactions. The framework incorporates key designs in analysis
preparation and analysis optimization to enhance accuracy. Additionally, we
propose Rec-Align, a novel method to further improve recommendation quality and
better align with human preferences. Experiments on DART, a dataset
specifically designed for comprehensive tabular data analysis recommendation,
demonstrate the effectiveness of our framework. Based on GPT-4o, the tuned
TablePilot achieves 77.0% top-5 recommendation recall. Human evaluations
further highlight its effectiveness in optimizing tabular data analysis
workflows.
|
2503.13883 | Ziyu Lin | Ziyu Lin, Yunfan Wu, Yuhang Ma, Junzhou Chen, Ronghui Zhang, Jiaming
Wu, Guodong Yin, and Liang Lin | YOLO-LLTS: Real-Time Low-Light Traffic Sign Detection via Prior-Guided
Enhancement and Multi-Branch Feature Interaction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting traffic signs effectively under low-light conditions remains a
significant challenge. To address this issue, we propose YOLO-LLTS, an
end-to-end real-time traffic sign detection algorithm specifically designed for
low-light environments. Firstly, we introduce the High-Resolution Feature Map
for Small Object Detection (HRFM-TOD) module to address indistinct small-object
features in low-light scenarios. By leveraging high-resolution feature maps,
HRFM-TOD effectively mitigates the feature dilution problem encountered in
conventional PANet frameworks, thereby enhancing both detection accuracy and
inference speed. Secondly, we develop the Multi-branch Feature Interaction
Attention (MFIA) module, which facilitates deep feature interaction across
multiple receptive fields in both channel and spatial dimensions, significantly
improving the model's information extraction capabilities. Finally, we propose
the Prior-Guided Enhancement Module (PGFE) to tackle common image quality
challenges in low-light environments, such as noise, low contrast, and
blurriness. This module employs prior knowledge to enrich image details and
enhance visibility, substantially boosting detection performance. To support
this research, we construct a novel dataset, the Chinese Nighttime Traffic Sign
Sample Set (CNTSSS), covering diverse nighttime scenarios, including urban,
highway, and rural environments under varying weather conditions. Experimental
evaluations demonstrate that YOLO-LLTS achieves state-of-the-art performance,
outperforming the previous best methods by 2.7% mAP50 and 1.6% mAP50:95 on
TT100K-night, 1.3% mAP50 and 1.9% mAP50:95 on CNTSSS, and achieving superior
results on the CCTSDB2021 dataset. Moreover, deployment experiments on edge
devices confirm the real-time applicability and effectiveness of our proposed
approach.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 04:28:05 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 11:16:14 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lin",
"Ziyu",
""
],
[
"Wu",
"Yunfan",
""
],
[
"Ma",
"Yuhang",
""
],
[
"Chen",
"Junzhou",
""
],
[
"Zhang",
"Ronghui",
""
],
[
"Wu",
"Jiaming",
""
],
[
"Yin",
"Guodong",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: YOLO-LLTS: Real-Time Low-Light Traffic Sign Detection via Prior-Guided
Enhancement and Multi-Branch Feature Interaction
ABSTRACT: Detecting traffic signs effectively under low-light conditions remains a
significant challenge. To address this issue, we propose YOLO-LLTS, an
end-to-end real-time traffic sign detection algorithm specifically designed for
low-light environments. Firstly, we introduce the High-Resolution Feature Map
for Small Object Detection (HRFM-TOD) module to address indistinct small-object
features in low-light scenarios. By leveraging high-resolution feature maps,
HRFM-TOD effectively mitigates the feature dilution problem encountered in
conventional PANet frameworks, thereby enhancing both detection accuracy and
inference speed. Secondly, we develop the Multi-branch Feature Interaction
Attention (MFIA) module, which facilitates deep feature interaction across
multiple receptive fields in both channel and spatial dimensions, significantly
improving the model's information extraction capabilities. Finally, we propose
the Prior-Guided Enhancement Module (PGFE) to tackle common image quality
challenges in low-light environments, such as noise, low contrast, and
blurriness. This module employs prior knowledge to enrich image details and
enhance visibility, substantially boosting detection performance. To support
this research, we construct a novel dataset, the Chinese Nighttime Traffic Sign
Sample Set (CNTSSS), covering diverse nighttime scenarios, including urban,
highway, and rural environments under varying weather conditions. Experimental
evaluations demonstrate that YOLO-LLTS achieves state-of-the-art performance,
outperforming the previous best methods by 2.7% mAP50 and 1.6% mAP50:95 on
TT100K-night, 1.3% mAP50 and 1.9% mAP50:95 on CNTSSS, and achieving superior
results on the CCTSDB2021 dataset. Moreover, deployment experiments on edge
devices confirm the real-time applicability and effectiveness of our proposed
approach.
|
2503.14001 | Wenbo Xiao | Wenbo Xiao, Qiannan Han, Gang Shu, Guiping Liang, Hongyan Zhang, Song
Wang, Zhihao Xu, Weican Wan, Chuang Li, Guitao Jiang, Yi Xiao | Multimodal Feature-Driven Deep Learning for the Prediction of Duck Body
Dimensions and Weight | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate body dimension and weight measurements are critical for optimizing
poultry management, health assessment, and economic efficiency. This study
introduces an innovative deep learning-based model leveraging multimodal
data (2D RGB images from different views, depth images, and 3D point clouds) for
the non-invasive estimation of duck body dimensions and weight. A dataset of
1,023 Linwu ducks, comprising over 5,000 samples with diverse postures and
conditions, was collected to support model training. The proposed method
innovatively employs PointNet++ to extract key feature points from point
clouds, extracts and computes corresponding 3D geometric features, and fuses
them with multi-view convolutional 2D features. A Transformer encoder is then
utilized to capture long-range dependencies and refine feature interactions,
thereby enhancing prediction robustness. The model achieved a mean absolute
percentage error (MAPE) of 6.33% and an R2 of 0.953 across eight morphometric
parameters, demonstrating strong predictive capability. Unlike conventional
manual measurements, the proposed model enables high-precision estimation while
eliminating the necessity for physical handling, thereby reducing animal stress
and broadening its application scope. This study marks the first application of
deep learning techniques to poultry body dimension and weight estimation,
providing a valuable reference for the intelligent and precise management of
the livestock industry with far-reaching practical significance.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:09:19 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 06:16:56 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Mar 2025 11:37:28 GMT"
},
{
"version": "v4",
"created": "Sun, 30 Mar 2025 14:10:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xiao",
"Wenbo",
""
],
[
"Han",
"Qiannan",
""
],
[
"Shu",
"Gang",
""
],
[
"Liang",
"Guiping",
""
],
[
"Zhang",
"Hongyan",
""
],
[
"Wang",
"Song",
""
],
[
"Xu",
"Zhihao",
""
],
[
"Wan",
"Weican",
""
],
[
"Li",
"Chuang",
""
],
[
"Jiang",
"Guitao",
""
],
[
"Xiao",
"Yi",
""
]
] | TITLE: Multimodal Feature-Driven Deep Learning for the Prediction of Duck Body
Dimensions and Weight
ABSTRACT: Accurate body dimension and weight measurements are critical for optimizing
poultry management, health assessment, and economic efficiency. This study
introduces an innovative deep learning-based model leveraging multimodal
data (2D RGB images from different views, depth images, and 3D point clouds) for
the non-invasive estimation of duck body dimensions and weight. A dataset of
1,023 Linwu ducks, comprising over 5,000 samples with diverse postures and
conditions, was collected to support model training. The proposed method
innovatively employs PointNet++ to extract key feature points from point
clouds, extracts and computes corresponding 3D geometric features, and fuses
them with multi-view convolutional 2D features. A Transformer encoder is then
utilized to capture long-range dependencies and refine feature interactions,
thereby enhancing prediction robustness. The model achieved a mean absolute
percentage error (MAPE) of 6.33% and an R2 of 0.953 across eight morphometric
parameters, demonstrating strong predictive capability. Unlike conventional
manual measurements, the proposed model enables high-precision estimation while
eliminating the necessity for physical handling, thereby reducing animal stress
and broadening its application scope. This study marks the first application of
deep learning techniques to poultry body dimension and weight estimation,
providing a valuable reference for the intelligent and precise management of
the livestock industry with far-reaching practical significance.
|
2503.14456 | Daniel Goldstein | Bo Peng, Ruichong Zhang, Daniel Goldstein, Eric Alcaide, Xingjian Du,
Haowen Hou, Jiaju Lin, Jiaxing Liu, Janna Lu, William Merrill, Guangyu Song,
Kaifeng Tan, Saiteja Utpala, Nathan Wilce, Johan S. Wind, Tianyi Wu, Daniel
Wuttke, Christian Zhou-Zheng | RWKV-7 "Goose" with Expressive Dynamic State Evolution | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present RWKV-7 "Goose", a new sequence modeling architecture with constant
memory usage and constant inference time per token. Despite being trained on
dramatically fewer tokens than other top models, our 2.9 billion parameter
language model achieves a new 3B SoTA on multilingual tasks and matches the
current 3B SoTA on English language downstream performance. RWKV-7 introduces a
newly generalized formulation of the delta rule with vector-valued gating and
in-context learning rates, as well as a relaxed value replacement rule. We show
that RWKV-7 can perform state tracking and recognize all regular languages,
while retaining parallelizability of training. This exceeds the capabilities of
Transformers under standard complexity conjectures, which are limited to
$\mathsf{TC}^0$. To demonstrate RWKV-7's language modeling capability, we also
present an extended open source 3.1 trillion token multilingual corpus, and
train four RWKV-7 models ranging from 0.19 billion to 2.9 billion parameters on
this dataset.
To foster openness, reproduction, and adoption, we release our models and
dataset component listing at https://huggingface.co/RWKV, and our training and
inference code at https://github.com/RWKV/RWKV-LM all under the Apache 2.0
License.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:31:05 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 13:46:44 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Peng",
"Bo",
""
],
[
"Zhang",
"Ruichong",
""
],
[
"Goldstein",
"Daniel",
""
],
[
"Alcaide",
"Eric",
""
],
[
"Du",
"Xingjian",
""
],
[
"Hou",
"Haowen",
""
],
[
"Lin",
"Jiaju",
""
],
[
"Liu",
"Jiaxing",
""
],
[
"Lu",
"Janna",
""
],
[
"Merrill",
"William",
""
],
[
"Song",
"Guangyu",
""
],
[
"Tan",
"Kaifeng",
""
],
[
"Utpala",
"Saiteja",
""
],
[
"Wilce",
"Nathan",
""
],
[
"Wind",
"Johan S.",
""
],
[
"Wu",
"Tianyi",
""
],
[
"Wuttke",
"Daniel",
""
],
[
"Zhou-Zheng",
"Christian",
""
]
] | TITLE: RWKV-7 "Goose" with Expressive Dynamic State Evolution
ABSTRACT: We present RWKV-7 "Goose", a new sequence modeling architecture with constant
memory usage and constant inference time per token. Despite being trained on
dramatically fewer tokens than other top models, our 2.9 billion parameter
language model achieves a new 3B SoTA on multilingual tasks and matches the
current 3B SoTA on English language downstream performance. RWKV-7 introduces a
newly generalized formulation of the delta rule with vector-valued gating and
in-context learning rates, as well as a relaxed value replacement rule. We show
that RWKV-7 can perform state tracking and recognize all regular languages,
while retaining parallelizability of training. This exceeds the capabilities of
Transformers under standard complexity conjectures, which are limited to
$\mathsf{TC}^0$. To demonstrate RWKV-7's language modeling capability, we also
present an extended open source 3.1 trillion token multilingual corpus, and
train four RWKV-7 models ranging from 0.19 billion to 2.9 billion parameters on
this dataset.
To foster openness, reproduction, and adoption, we release our models and
dataset component listing at https://huggingface.co/RWKV, and our training and
inference code at https://github.com/RWKV/RWKV-LM all under the Apache 2.0
License.
|