Column schema:

| Column | Type | Details |
|---|---|---|
| id | string | 9-16 chars |
| submitter | string | 3-64 chars, may be null (⌀) |
| authors | string | 5-6.63k chars |
| title | string | 7-245 chars |
| comments | string | 1-482 chars, may be null (⌀) |
| journal-ref | string | 4-382 chars, may be null (⌀) |
| doi | string | 9-151 chars, may be null (⌀) |
| report-no | string | 984 distinct values |
| categories | string | 5-108 chars |
| license | string | 9 distinct values |
| abstract | string | 83-3.41k chars |
| versions | list | length 1-20 |
| update_date | timestamp[s] | 2007-05-23 to 2025-04-11 |
| authors_parsed | list | length 1-427 |
| prompt | string | 166-3.49k chars |
| label | string | 2 distinct values |
| prob | float64 | 0.5-0.98 |
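For reference, the snippet below is a minimal sketch of how a split with this schema could be loaded and inspected with the Hugging Face `datasets` library. The repository id and split name are placeholders (the actual dataset identifier is not given on this page); the column names follow the schema above.

```python
# Minimal sketch: load the dataset and inspect the fields used for the
# new-dataset classification task. The repository id and split name are
# placeholders, not the real identifiers of this dataset.
from datasets import load_dataset

ds = load_dataset("your-namespace/arxiv-new-dataset-labels", split="train")  # hypothetical repo id

row = ds[0]
# `prompt` holds the TITLE + ABSTRACT text, `label` is either "new_dataset"
# or "no_new_dataset", and `prob` is the associated confidence (0.5-0.98).
print(row["id"], row["categories"], row["label"], row["prob"])
print(row["prompt"][:300])
```

Nested columns such as `versions` (a list of version/created records) and `authors_parsed` (a list of name triples) come back as ordinary Python lists when accessed the same way.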
id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.02084 | Jiaxiang Cheng | Jiaxiang Cheng, Pan Xie, Xin Xia, Jiashi Li, Jie Wu, Yuxi Ren, Huixia
Li, Xuefeng Xiao, Min Zheng, Lean Fu | ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models | Accepted by AAAI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancement in text-to-image models (e.g., Stable Diffusion) and
corresponding personalized technologies (e.g., DreamBooth and LoRA) enables
individuals to generate high-quality and imaginative images. However, they
often suffer from limitations when generating images with resolutions outside
of their trained domain. To overcome this limitation, we present the Resolution
Adapter (ResAdapter), a domain-consistent adapter designed for diffusion models
to generate images with unrestricted resolutions and aspect ratios. Unlike
other multi-resolution generation methods that process images of static
resolution with complex post-process operations, ResAdapter directly generates
images with the dynamical resolution. Especially, after learning a deep
understanding of pure resolution priors, ResAdapter trained on the general
dataset, generates resolution-free images with personalized diffusion models
while preserving their original style domain. Comprehensive experiments
demonstrate that ResAdapter with only 0.5M can process images with flexible
resolutions for arbitrary diffusion models. More extended experiments
demonstrate that ResAdapter is compatible with other modules (e.g., ControlNet,
IP-Adapter and LCM-LoRA) for image generation across a broad range of
resolutions, and can be integrated into other multi-resolution model (e.g.,
ElasticDiffusion) for efficiently generating higher-resolution images. Project
link is https://res-adapter.github.io
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2024 14:36:56 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 09:36:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Cheng",
"Jiaxiang",
""
],
[
"Xie",
"Pan",
""
],
[
"Xia",
"Xin",
""
],
[
"Li",
"Jiashi",
""
],
[
"Wu",
"Jie",
""
],
[
"Ren",
"Yuxi",
""
],
[
"Li",
"Huixia",
""
],
[
"Xiao",
"Xuefeng",
""
],
[
"Zheng",
"Min",
""
],
[
"Fu",
"Lean",
""
]
]
| TITLE: ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models
ABSTRACT: Recent advancement in text-to-image models (e.g., Stable Diffusion) and
corresponding personalized technologies (e.g., DreamBooth and LoRA) enables
individuals to generate high-quality and imaginative images. However, they
often suffer from limitations when generating images with resolutions outside
of their trained domain. To overcome this limitation, we present the Resolution
Adapter (ResAdapter), a domain-consistent adapter designed for diffusion models
to generate images with unrestricted resolutions and aspect ratios. Unlike
other multi-resolution generation methods that process images of static
resolution with complex post-process operations, ResAdapter directly generates
images with the dynamical resolution. Especially, after learning a deep
understanding of pure resolution priors, ResAdapter trained on the general
dataset, generates resolution-free images with personalized diffusion models
while preserving their original style domain. Comprehensive experiments
demonstrate that ResAdapter with only 0.5M can process images with flexible
resolutions for arbitrary diffusion models. More extended experiments
demonstrate that ResAdapter is compatible with other modules (e.g., ControlNet,
IP-Adapter and LCM-LoRA) for image generation across a broad range of
resolutions, and can be integrated into other multi-resolution model (e.g.,
ElasticDiffusion) for efficiently generating higher-resolution images. Project
link is https://res-adapter.github.io
| no_new_dataset | 0.956675 |
2403.04125 | Evelyn Mannix | Evelyn J. Mannix, Liam Hodgkinson and Howard Bondell | ComFe: An Interpretable Head for Vision Transformers | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Interpretable computer vision models explain their classifications through
comparing the distances between the local embeddings of an image and a set of
prototypes that represent the training data. However, these approaches
introduce additional hyper-parameters that need to be tuned to apply to new
datasets, scale poorly, and are more computationally intensive to train in
comparison to black-box approaches. In this work, we introduce Component
Features (ComFe), a highly scalable interpretable-by-design image
classification head for pretrained Vision Transformers (ViTs) that can obtain
competitive performance in comparison to comparable non-interpretable methods.
ComFe is the first interpretable head, that we know of, and unlike other
interpretable approaches, can be readily applied to large scale datasets such
as ImageNet-1K. Additionally, ComFe provides improved robustness and
outperforms previous interpretable approaches on key benchmark
datasets$\unicode{x2013}$using a consistent set of hyper-parameters and without
finetuning the pretrained ViT backbone. With only global image labels and no
segmentation or part annotations, ComFe can identify consistent component
features within an image and determine which of these features are informative
in making a prediction. Code is available at
https://github.com/emannix/comfe-component-features.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 00:44:21 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Mar 2024 03:53:14 GMT"
},
{
"version": "v3",
"created": "Fri, 24 May 2024 06:10:35 GMT"
},
{
"version": "v4",
"created": "Fri, 22 Nov 2024 01:41:20 GMT"
},
{
"version": "v5",
"created": "Sat, 8 Mar 2025 02:18:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mannix",
"Evelyn J.",
""
],
[
"Hodgkinson",
"Liam",
""
],
[
"Bondell",
"Howard",
""
]
]
| TITLE: ComFe: An Interpretable Head for Vision Transformers
ABSTRACT: Interpretable computer vision models explain their classifications through
comparing the distances between the local embeddings of an image and a set of
prototypes that represent the training data. However, these approaches
introduce additional hyper-parameters that need to be tuned to apply to new
datasets, scale poorly, and are more computationally intensive to train in
comparison to black-box approaches. In this work, we introduce Component
Features (ComFe), a highly scalable interpretable-by-design image
classification head for pretrained Vision Transformers (ViTs) that can obtain
competitive performance in comparison to comparable non-interpretable methods.
ComFe is the first interpretable head, that we know of, and unlike other
interpretable approaches, can be readily applied to large scale datasets such
as ImageNet-1K. Additionally, ComFe provides improved robustness and
outperforms previous interpretable approaches on key benchmark
datasets$\unicode{x2013}$using a consistent set of hyper-parameters and without
finetuning the pretrained ViT backbone. With only global image labels and no
segmentation or part annotations, ComFe can identify consistent component
features within an image and determine which of these features are informative
in making a prediction. Code is available at
https://github.com/emannix/comfe-component-features.
| no_new_dataset | 0.942981 |
2403.08291 | Danrui Qi | Danrui Qi, Zhengjie Miao, Jiannan Wang | CleanAgent: Automating Data Standardization with LLM-based Agents | null | null | null | null | cs.LG cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data standardization is a crucial part of the data science life cycle. While
tools like Pandas offer robust functionalities, their complexity and the manual
effort required for customizing code to diverse column types pose significant
challenges. Although large language models (LLMs) like ChatGPT have shown
promise in automating this process through natural language understanding and
code generation, it still demands expert-level programming knowledge and
continuous interaction for prompt refinement. To solve these challenges, our
key idea is to propose a Python library with declarative, unified APIs for
standardizing different column types, simplifying the LLM's code generation
with concise API calls. We first propose Dataprep.Clean, a component of the
Dataprep Python Library, significantly reduces the coding complexity by
enabling the standardization of specific column types with a single line of
code. Then, we introduce the CleanAgent framework integrating Dataprep.Clean
and LLM-based agents to automate the data standardization process. With
CleanAgent, data scientists only need to provide their requirements once,
allowing for a hands-free process. To demonstrate the practical utility of
CleanAgent, we developed a user-friendly web application, allowing attendees to
interact with it using real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 06:54:15 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Apr 2024 03:47:13 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 19:01:29 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qi",
"Danrui",
""
],
[
"Miao",
"Zhengjie",
""
],
[
"Wang",
"Jiannan",
""
]
]
| TITLE: CleanAgent: Automating Data Standardization with LLM-based Agents
ABSTRACT: Data standardization is a crucial part of the data science life cycle. While
tools like Pandas offer robust functionalities, their complexity and the manual
effort required for customizing code to diverse column types pose significant
challenges. Although large language models (LLMs) like ChatGPT have shown
promise in automating this process through natural language understanding and
code generation, it still demands expert-level programming knowledge and
continuous interaction for prompt refinement. To solve these challenges, our
key idea is to propose a Python library with declarative, unified APIs for
standardizing different column types, simplifying the LLM's code generation
with concise API calls. We first propose Dataprep.Clean, a component of the
Dataprep Python Library, significantly reduces the coding complexity by
enabling the standardization of specific column types with a single line of
code. Then, we introduce the CleanAgent framework integrating Dataprep.Clean
and LLM-based agents to automate the data standardization process. With
CleanAgent, data scientists only need to provide their requirements once,
allowing for a hands-free process. To demonstrate the practical utility of
CleanAgent, we developed a user-friendly web application, allowing attendees to
interact with it using real-world datasets.
| no_new_dataset | 0.939471 |
2403.09616 | Chaoyang Wang | Chaoyang Wang, Xiangtai Li, Henghui Ding, Lu Qi, Jiangning Zhang,
Yunhai Tong, Chen Change Loy, Shuicheng Yan | Explore In-Context Segmentation via Latent Diffusion Models | AAAI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In-context segmentation has drawn increasing attention with the advent of
vision foundation models. Its goal is to segment objects using given reference
images. Most existing approaches adopt metric learning or masked image modeling
to build the correlation between visual prompts and input image queries. This
work approaches the problem from a fresh perspective - unlocking the capability
of the latent diffusion model (LDM) for in-context segmentation and
investigating different design choices. Specifically, we examine the problem
from three angles: instruction extraction, output alignment, and
meta-architectures. We design a two-stage masking strategy to prevent
interfering information from leaking into the instructions. In addition, we
propose an augmented pseudo-masking target to ensure the model predicts without
forgetting the original images. Moreover, we build a new and fair in-context
segmentation benchmark that covers both image and video datasets. Experiments
validate the effectiveness of our approach, demonstrating comparable or even
stronger results than previous specialist or visual foundation models. We hope
our work inspires others to rethink the unification of segmentation and
generation.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 17:52:31 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 11:58:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Chaoyang",
""
],
[
"Li",
"Xiangtai",
""
],
[
"Ding",
"Henghui",
""
],
[
"Qi",
"Lu",
""
],
[
"Zhang",
"Jiangning",
""
],
[
"Tong",
"Yunhai",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Yan",
"Shuicheng",
""
]
]
| TITLE: Explore In-Context Segmentation via Latent Diffusion Models
ABSTRACT: In-context segmentation has drawn increasing attention with the advent of
vision foundation models. Its goal is to segment objects using given reference
images. Most existing approaches adopt metric learning or masked image modeling
to build the correlation between visual prompts and input image queries. This
work approaches the problem from a fresh perspective - unlocking the capability
of the latent diffusion model (LDM) for in-context segmentation and
investigating different design choices. Specifically, we examine the problem
from three angles: instruction extraction, output alignment, and
meta-architectures. We design a two-stage masking strategy to prevent
interfering information from leaking into the instructions. In addition, we
propose an augmented pseudo-masking target to ensure the model predicts without
forgetting the original images. Moreover, we build a new and fair in-context
segmentation benchmark that covers both image and video datasets. Experiments
validate the effectiveness of our approach, demonstrating comparable or even
stronger results than previous specialist or visual foundation models. We hope
our work inspires others to rethink the unification of segmentation and
generation.
| no_new_dataset | 0.935405 |
2403.10390 | Alexander Hepburn | Alexander Hepburn and Raul Santos-Rodriguez and Javier Portilla | Evaluating Perceptual Distance Models by Fitting Binomial Distributions
to Two-Alternative Forced Choice Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two-alternative forced choice (2AFC) experiments are popular in the visual
perception literature to understand how human observers perceive distances
within triplets made of a reference image and two distorted versions.
Previously, this had been conducted in controlled environments, with triplets
sharing images, making it possible to rank the perceived quality and evaluate
perceptual distance models against the ranking. Recently, crowd-sourced
perceptual datasets have emerged, with no images shared between triplets,
making ranking infeasible. Evaluations using this data reduces the judgements
on a triplet to a binary decision, namely, whether the distance model agrees
with the human decision - which is suboptimal and prone to misleading
conclusions. Instead, we statistically model the underlying decision-making
process during 2AFC experiments using a binomial distribution. We estimate a
smooth and consistent distribution of the judgements on the reference-distorted
distance plane, according to each distance model. We estimate the parameter of
the local binomial distribution using maximum likelihood, and a global
measurement of the expected log-likelihood of the judgements. We calculate
meaningful and well-founded metrics, beyond the mere prediction accuracy as
percentage agreement and compare to a neural network counterpart, also
optimised to maximise likelihood according to a binomial model.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 15:21:04 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2024 17:10:22 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 12:42:55 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hepburn",
"Alexander",
""
],
[
"Santos-Rodriguez",
"Raul",
""
],
[
"Portilla",
"Javier",
""
]
]
| TITLE: Evaluating Perceptual Distance Models by Fitting Binomial Distributions
to Two-Alternative Forced Choice Data
ABSTRACT: Two-alternative forced choice (2AFC) experiments are popular in the visual
perception literature to understand how human observers perceive distances
within triplets made of a reference image and two distorted versions.
Previously, this had been conducted in controlled environments, with triplets
sharing images, making it possible to rank the perceived quality and evaluate
perceptual distance models against the ranking. Recently, crowd-sourced
perceptual datasets have emerged, with no images shared between triplets,
making ranking infeasible. Evaluations using this data reduces the judgements
on a triplet to a binary decision, namely, whether the distance model agrees
with the human decision - which is suboptimal and prone to misleading
conclusions. Instead, we statistically model the underlying decision-making
process during 2AFC experiments using a binomial distribution. We estimate a
smooth and consistent distribution of the judgements on the reference-distorted
distance plane, according to each distance model. We estimate the parameter of
the local binomial distribution using maximum likelihood, and a global
measurement of the expected log-likelihood of the judgements. We calculate
meaningful and well-founded metrics, beyond the mere prediction accuracy as
percentage agreement and compare to a neural network counterpart, also
optimised to maximise likelihood according to a binomial model.
| no_new_dataset | 0.94474 |
2403.11176 | Lorenzo Agnolucci | Lorenzo Agnolucci, Leonardo Galteri, Marco Bertini | Quality-Aware Image-Text Alignment for Opinion-Unaware Image Quality
Assessment | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | No-Reference Image Quality Assessment (NR-IQA) focuses on designing methods
to measure image quality in alignment with human perception when a high-quality
reference image is unavailable. Most state-of-the-art NR-IQA approaches are
opinion-aware, i.e. they require human annotations for training. This
dependency limits their scalability and broad applicability. To overcome this
limitation, we propose QualiCLIP (Quality-aware CLIP), a CLIP-based
self-supervised opinion-unaware approach that does not require human opinions.
In particular, we introduce a quality-aware image-text alignment strategy to
make CLIP generate quality-aware image representations. Starting from pristine
images, we synthetically degrade them with increasing levels of intensity.
Then, we train CLIP to rank these degraded images based on their similarity to
quality-related antonym text prompts. At the same time, we force CLIP to
generate consistent representations for images with similar content and the
same level of degradation. Our experiments show that the proposed method
improves over existing opinion-unaware approaches across multiple datasets with
diverse distortion types. Moreover, despite not requiring human annotations,
QualiCLIP achieves excellent performance against supervised opinion-aware
methods in cross-dataset experiments, thus demonstrating remarkable
generalization capabilities. The code and the model are publicly available at
https://github.com/miccunifi/QualiCLIP.
| [
{
"version": "v1",
"created": "Sun, 17 Mar 2024 11:32:18 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Dec 2024 12:00:50 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 15:31:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Agnolucci",
"Lorenzo",
""
],
[
"Galteri",
"Leonardo",
""
],
[
"Bertini",
"Marco",
""
]
]
| TITLE: Quality-Aware Image-Text Alignment for Opinion-Unaware Image Quality
Assessment
ABSTRACT: No-Reference Image Quality Assessment (NR-IQA) focuses on designing methods
to measure image quality in alignment with human perception when a high-quality
reference image is unavailable. Most state-of-the-art NR-IQA approaches are
opinion-aware, i.e. they require human annotations for training. This
dependency limits their scalability and broad applicability. To overcome this
limitation, we propose QualiCLIP (Quality-aware CLIP), a CLIP-based
self-supervised opinion-unaware approach that does not require human opinions.
In particular, we introduce a quality-aware image-text alignment strategy to
make CLIP generate quality-aware image representations. Starting from pristine
images, we synthetically degrade them with increasing levels of intensity.
Then, we train CLIP to rank these degraded images based on their similarity to
quality-related antonym text prompts. At the same time, we force CLIP to
generate consistent representations for images with similar content and the
same level of degradation. Our experiments show that the proposed method
improves over existing opinion-unaware approaches across multiple datasets with
diverse distortion types. Moreover, despite not requiring human annotations,
QualiCLIP achieves excellent performance against supervised opinion-aware
methods in cross-dataset experiments, thus demonstrating remarkable
generalization capabilities. The code and the model are publicly available at
https://github.com/miccunifi/QualiCLIP.
| no_new_dataset | 0.952175 |
2403.12960 | Kartik Narayan | Kartik Narayan, Vibashan VS, Rama Chellappa, Vishal M. Patel | FaceXFormer: A Unified Transformer for Facial Analysis | Project page: https://kartik-3004.github.io/facexformer/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce FaceXFormer, an end-to-end unified transformer
model capable of performing ten facial analysis tasks within a single
framework. These tasks include face parsing, landmark detection, head pose
estimation, attribute prediction, age, gender, and race estimation, facial
expression recognition, face recognition, and face visibility. Traditional face
analysis approaches rely on task-specific architectures and pre-processing
techniques, limiting scalability and integration. In contrast, FaceXFormer
employs a transformer-based encoder-decoder architecture, where each task is
represented as a learnable token, enabling seamless multi-task processing
within a unified model. To enhance efficiency, we introduce FaceX, a
lightweight decoder with a novel bi-directional cross-attention mechanism,
which jointly processes face and task tokens to learn robust and generalized
facial representations. We train FaceXFormer on ten diverse face perception
datasets and evaluate it against both specialized and multi-task models across
multiple benchmarks, demonstrating state-of-the-art or competitive performance.
Additionally, we analyze the impact of various components of FaceXFormer on
performance, assess real-world robustness in "in-the-wild" settings, and
conduct a computational performance evaluation. To the best of our knowledge,
FaceXFormer is the first model capable of handling ten facial analysis tasks
while maintaining real-time performance at 33.21 FPS. Code:
https://github.com/Kartik-3004/facexformer
| [
{
"version": "v1",
"created": "Tue, 19 Mar 2024 17:58:04 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Dec 2024 22:48:46 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 17:08:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Narayan",
"Kartik",
""
],
[
"VS",
"Vibashan",
""
],
[
"Chellappa",
"Rama",
""
],
[
"Patel",
"Vishal M.",
""
]
]
| TITLE: FaceXFormer: A Unified Transformer for Facial Analysis
ABSTRACT: In this work, we introduce FaceXFormer, an end-to-end unified transformer
model capable of performing ten facial analysis tasks within a single
framework. These tasks include face parsing, landmark detection, head pose
estimation, attribute prediction, age, gender, and race estimation, facial
expression recognition, face recognition, and face visibility. Traditional face
analysis approaches rely on task-specific architectures and pre-processing
techniques, limiting scalability and integration. In contrast, FaceXFormer
employs a transformer-based encoder-decoder architecture, where each task is
represented as a learnable token, enabling seamless multi-task processing
within a unified model. To enhance efficiency, we introduce FaceX, a
lightweight decoder with a novel bi-directional cross-attention mechanism,
which jointly processes face and task tokens to learn robust and generalized
facial representations. We train FaceXFormer on ten diverse face perception
datasets and evaluate it against both specialized and multi-task models across
multiple benchmarks, demonstrating state-of-the-art or competitive performance.
Additionally, we analyze the impact of various components of FaceXFormer on
performance, assess real-world robustness in "in-the-wild" settings, and
conduct a computational performance evaluation. To the best of our knowledge,
FaceXFormer is the first model capable of handling ten facial analysis tasks
while maintaining real-time performance at 33.21 FPS. Code:
https://github.com/Kartik-3004/facexformer
| no_new_dataset | 0.940626 |
2403.14362 | Jiaqi Yue | Jiaqi Yue, Chunhui Zhao, Jiancheng Zhao, Biao Huang | Enabling Generalized Zero-shot Learning Towards Unseen Domains by
Intrinsic Learning from Redundant LLM Semantics | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generalized zero-shot learning (GZSL) focuses on recognizing seen and unseen
classes against domain shift problem where data of unseen classes may be
misclassified as seen classes. However, existing GZSL is still limited to seen
domains. In the current work, we study cross-domain GZSL (CDGZSL) which
addresses GZSL towards unseen domains. Different from existing GZSL methods,
CDGZSL constructs a common feature space across domains and acquires the
corresponding intrinsic semantics shared among domains to transfer from seen to
unseen domains. Considering the information asymmetry problem caused by
redundant class semantics annotated with large language models (LLMs), we
present Meta Domain Alignment Semantic Refinement (MDASR). Technically, MDASR
consists of two parts: Inter-class similarity alignment, which eliminates the
non-intrinsic semantics not shared across all domains under the guidance of
inter-class feature relationships, and unseen-class meta generation, which
preserves intrinsic semantics to maintain connectivity between seen and unseen
classes by simulating feature generation. MDASR effectively aligns the
redundant semantic space with the common feature space, mitigating the
information asymmetry in CDGZSL. The effectiveness of MDASR is demonstrated on
two datasets, Office-Home and Mini-DomainNet, and we have shared the LLM-based
semantics for these datasets as a benchmark.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2024 12:45:01 GMT"
},
{
"version": "v2",
"created": "Thu, 23 May 2024 07:50:31 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Aug 2024 07:32:46 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Aug 2024 12:28:55 GMT"
},
{
"version": "v5",
"created": "Mon, 10 Mar 2025 09:35:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yue",
"Jiaqi",
""
],
[
"Zhao",
"Chunhui",
""
],
[
"Zhao",
"Jiancheng",
""
],
[
"Huang",
"Biao",
""
]
]
| TITLE: Enabling Generalized Zero-shot Learning Towards Unseen Domains by
Intrinsic Learning from Redundant LLM Semantics
ABSTRACT: Generalized zero-shot learning (GZSL) focuses on recognizing seen and unseen
classes against domain shift problem where data of unseen classes may be
misclassified as seen classes. However, existing GZSL is still limited to seen
domains. In the current work, we study cross-domain GZSL (CDGZSL) which
addresses GZSL towards unseen domains. Different from existing GZSL methods,
CDGZSL constructs a common feature space across domains and acquires the
corresponding intrinsic semantics shared among domains to transfer from seen to
unseen domains. Considering the information asymmetry problem caused by
redundant class semantics annotated with large language models (LLMs), we
present Meta Domain Alignment Semantic Refinement (MDASR). Technically, MDASR
consists of two parts: Inter-class similarity alignment, which eliminates the
non-intrinsic semantics not shared across all domains under the guidance of
inter-class feature relationships, and unseen-class meta generation, which
preserves intrinsic semantics to maintain connectivity between seen and unseen
classes by simulating feature generation. MDASR effectively aligns the
redundant semantic space with the common feature space, mitigating the
information asymmetry in CDGZSL. The effectiveness of MDASR is demonstrated on
two datasets, Office-Home and Mini-DomainNet, and we have shared the LLM-based
semantics for these datasets as a benchmark.
| no_new_dataset | 0.945651 |
2403.15038 | Jean-Baptiste Fermanian | Gilles Blanchard (LMO, DATASHAPE), Jean-Baptiste Fermanian (LMO),
Hannah Marienwald (TUB) | Estimation of multiple mean vectors in high dimension | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We endeavour to estimate numerous multi-dimensional means of various
probability distributions on a common space based on independent samples. Our
approach involves forming estimators through convex combinations of empirical
means derived from these samples. We introduce two strategies to find
appropriate data-dependent convex combination weights: a first one employing a
testing procedure to identify neighbouring means with low variance, which
results in a closed-form plug-in formula for the weights, and a second one
determining weights via minimization of an upper confidence bound on the
quadratic risk. Through theoretical analysis, we evaluate the improvement in
quadratic risk offered by our methods compared to the empirical means. Our
analysis focuses on a dimensional asymptotics perspective, showing that our
methods asymptotically approach an oracle (minimax) improvement as the
effective dimension of the data increases. We demonstrate the efficacy of our
methods in estimating multiple kernel mean embeddings through experiments on
both simulated and real-world datasets.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 08:42:41 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 09:32:52 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Blanchard",
"Gilles",
"",
"LMO, DATASHAPE"
],
[
"Fermanian",
"Jean-Baptiste",
"",
"LMO"
],
[
"Marienwald",
"Hannah",
"",
"TUB"
]
]
| TITLE: Estimation of multiple mean vectors in high dimension
ABSTRACT: We endeavour to estimate numerous multi-dimensional means of various
probability distributions on a common space based on independent samples. Our
approach involves forming estimators through convex combinations of empirical
means derived from these samples. We introduce two strategies to find
appropriate data-dependent convex combination weights: a first one employing a
testing procedure to identify neighbouring means with low variance, which
results in a closed-form plug-in formula for the weights, and a second one
determining weights via minimization of an upper confidence bound on the
quadratic risk. Through theoretical analysis, we evaluate the improvement in
quadratic risk offered by our methods compared to the empirical means. Our
analysis focuses on a dimensional asymptotics perspective, showing that our
methods asymptotically approach an oracle (minimax) improvement as the
effective dimension of the data increases. We demonstrate the efficacy of our
methods in estimating multiple kernel mean embeddings through experiments on
both simulated and real-world datasets.
| no_new_dataset | 0.945197 |
2403.18334 | Shuai Xiang | Shuai Xiang, Pieter M. Blok, James Burridge, Haozhou Wang, Wei Guo | DODA: Adapting Object Detectors to Dynamic Agricultural Environments in
Real-Time with Diffusion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Object detection has wide applications in agriculture, but domain shifts of
diverse environments limit the broader use of the trained models. Existing
domain adaptation methods usually require retraining the model for new domains,
which is impractical for agricultural applications due to constantly changing
environments. In this paper, we propose DODA ($D$iffusion for
$O$bject-detection $D$omain Adaptation in $A$griculture), a diffusion-based
framework that can adapt the detector to a new domain in just 2 minutes. DODA
incorporates external domain embeddings and an improved layout-to-image
approach, allowing it to generate high-quality detection data for new domains
without additional training. We demonstrate DODA's effectiveness on the Global
Wheat Head Detection dataset, where fine-tuning detectors on DODA-generated
data yields significant improvements across multiple domains. DODA provides a
simple yet powerful solution for agricultural domain adaptation, reducing the
barriers for growers to use detection in personalised environments. The code is
available at https://github.com/UTokyo-FieldPhenomics-Lab/DODA.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 08:16:33 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 06:04:11 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xiang",
"Shuai",
""
],
[
"Blok",
"Pieter M.",
""
],
[
"Burridge",
"James",
""
],
[
"Wang",
"Haozhou",
""
],
[
"Guo",
"Wei",
""
]
]
| TITLE: DODA: Adapting Object Detectors to Dynamic Agricultural Environments in
Real-Time with Diffusion
ABSTRACT: Object detection has wide applications in agriculture, but domain shifts of
diverse environments limit the broader use of the trained models. Existing
domain adaptation methods usually require retraining the model for new domains,
which is impractical for agricultural applications due to constantly changing
environments. In this paper, we propose DODA ($D$iffusion for
$O$bject-detection $D$omain Adaptation in $A$griculture), a diffusion-based
framework that can adapt the detector to a new domain in just 2 minutes. DODA
incorporates external domain embeddings and an improved layout-to-image
approach, allowing it to generate high-quality detection data for new domains
without additional training. We demonstrate DODA's effectiveness on the Global
Wheat Head Detection dataset, where fine-tuning detectors on DODA-generated
data yields significant improvements across multiple domains. DODA provides a
simple yet powerful solution for agricultural domain adaptation, reducing the
barriers for growers to use detection in personalised environments. The code is
available at https://github.com/UTokyo-FieldPhenomics-Lab/DODA.
| no_new_dataset | 0.950041 |
2404.03906 | Nimrod Shabtay | Nimrod Shabtay, Eli Schwartz, and Raja Giryes | Deep Phase Coded Image Prior | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phase-coded imaging is a computational imaging method designed to tackle
tasks such as passive depth estimation and extended depth of field (EDOF) using
depth cues inserted during image capture. Most of the current deep
learning-based methods for depth estimation or all-in-focus imaging require a
training dataset with high-quality depth maps and an optimal focus point at
infinity for all-in-focus images. Such datasets are difficult to create,
usually synthetic, and require external graphic programs. We propose a new
method named "Deep Phase Coded Image Prior" (DPCIP) for jointly recovering the
depth map and all-in-focus image from a coded-phase image using solely the
captured image and the optical information of the imaging system. Our approach
does not depend on any specific dataset and surpasses prior supervised
techniques utilizing the same imaging system. This improvement is achieved
through the utilization of a problem formulation based on implicit neural
representation (INR) and deep image prior (DIP). Due to our zero-shot method,
we overcome the barrier of acquiring accurate ground-truth data of depth maps
and all-in-focus images for each new phase-coded system introduced. This allows
focusing mainly on developing the imaging system, and not on ground-truth data
collection.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2024 05:58:40 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 09:34:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shabtay",
"Nimrod",
""
],
[
"Schwartz",
"Eli",
""
],
[
"Giryes",
"Raja",
""
]
]
| TITLE: Deep Phase Coded Image Prior
ABSTRACT: Phase-coded imaging is a computational imaging method designed to tackle
tasks such as passive depth estimation and extended depth of field (EDOF) using
depth cues inserted during image capture. Most of the current deep
learning-based methods for depth estimation or all-in-focus imaging require a
training dataset with high-quality depth maps and an optimal focus point at
infinity for all-in-focus images. Such datasets are difficult to create,
usually synthetic, and require external graphic programs. We propose a new
method named "Deep Phase Coded Image Prior" (DPCIP) for jointly recovering the
depth map and all-in-focus image from a coded-phase image using solely the
captured image and the optical information of the imaging system. Our approach
does not depend on any specific dataset and surpasses prior supervised
techniques utilizing the same imaging system. This improvement is achieved
through the utilization of a problem formulation based on implicit neural
representation (INR) and deep image prior (DIP). Due to our zero-shot method,
we overcome the barrier of acquiring accurate ground-truth data of depth maps
and all-in-focus images for each new phase-coded system introduced. This allows
focusing mainly on developing the imaging system, and not on ground-truth data
collection.
| no_new_dataset | 0.946941 |
2404.06564 | Haoyang He | Haoyang He, Yuhu Bai, Jiangning Zhang, Qingdong He, Hongxu Chen,
Zhenye Gan, Chengjie Wang, Xiangtai Li, Guanzhong Tian, Lei Xie | MambaAD: Exploring State Space Models for Multi-class Unsupervised
Anomaly Detection | NeurIPS'24 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in anomaly detection have seen the efficacy of CNN- and
transformer-based approaches. However, CNNs struggle with long-range
dependencies, while transformers are burdened by quadratic computational
complexity. Mamba-based models, with their superior long-range modeling and
linear efficiency, have garnered substantial attention. This study pioneers the
application of Mamba to multi-class unsupervised anomaly detection, presenting
MambaAD, which consists of a pre-trained encoder and a Mamba decoder featuring
(Locality-Enhanced State Space) LSS modules at multi-scales. The proposed LSS
module, integrating parallel cascaded (Hybrid State Space) HSS blocks and
multi-kernel convolutions operations, effectively captures both long-range and
local information. The HSS block, utilizing (Hybrid Scanning) HS encoders,
encodes feature maps into five scanning methods and eight directions, thereby
strengthening global connections through the (State Space Model) SSM. The use
of Hilbert scanning and eight directions significantly improves feature
sequence modeling. Comprehensive experiments on six diverse anomaly detection
datasets and seven metrics demonstrate state-of-the-art performance,
substantiating the method's effectiveness. The code and models are available at
https://lewandofskee.github.io/projects/MambaAD.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2024 18:28:55 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Apr 2024 16:06:39 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Apr 2024 09:14:23 GMT"
},
{
"version": "v4",
"created": "Sun, 9 Mar 2025 15:56:38 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"He",
"Haoyang",
""
],
[
"Bai",
"Yuhu",
""
],
[
"Zhang",
"Jiangning",
""
],
[
"He",
"Qingdong",
""
],
[
"Chen",
"Hongxu",
""
],
[
"Gan",
"Zhenye",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Li",
"Xiangtai",
""
],
[
"Tian",
"Guanzhong",
""
],
[
"Xie",
"Lei",
""
]
]
| TITLE: MambaAD: Exploring State Space Models for Multi-class Unsupervised
Anomaly Detection
ABSTRACT: Recent advancements in anomaly detection have seen the efficacy of CNN- and
transformer-based approaches. However, CNNs struggle with long-range
dependencies, while transformers are burdened by quadratic computational
complexity. Mamba-based models, with their superior long-range modeling and
linear efficiency, have garnered substantial attention. This study pioneers the
application of Mamba to multi-class unsupervised anomaly detection, presenting
MambaAD, which consists of a pre-trained encoder and a Mamba decoder featuring
(Locality-Enhanced State Space) LSS modules at multi-scales. The proposed LSS
module, integrating parallel cascaded (Hybrid State Space) HSS blocks and
multi-kernel convolutions operations, effectively captures both long-range and
local information. The HSS block, utilizing (Hybrid Scanning) HS encoders,
encodes feature maps into five scanning methods and eight directions, thereby
strengthening global connections through the (State Space Model) SSM. The use
of Hilbert scanning and eight directions significantly improves feature
sequence modeling. Comprehensive experiments on six diverse anomaly detection
datasets and seven metrics demonstrate state-of-the-art performance,
substantiating the method's effectiveness. The code and models are available at
https://lewandofskee.github.io/projects/MambaAD.
| no_new_dataset | 0.94887 |
2404.08514 | Rongjian Xu | Rongjian Xu, Zhilu Zhang, Renlong Wu, Wangmeng Zuo | NIR-Assisted Image Denoising: A Selective Fusion Approach and A
Real-World Benchmark Dataset | Accepted by IEEE Transactions on Multimedia (TMM) | null | 10.1109/TMM.2024.3521833 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the significant progress in image denoising, it is still challenging
to restore fine-scale details while removing noise, especially in extremely
low-light environments. Leveraging near-infrared (NIR) images to assist visible
RGB image denoising shows the potential to address this issue, becoming a
promising technology. Nonetheless, existing works still struggle with taking
advantage of NIR information effectively for real-world image denoising, due to
the content inconsistency between NIR-RGB images and the scarcity of real-world
paired datasets. To alleviate the problem, we propose an efficient Selective
Fusion Module (SFM), which can be plug-and-played into the advanced denoising
networks to merge the deep NIR-RGB features. Specifically, we sequentially
perform the global and local modulation for NIR and RGB features, and then
integrate the two modulated features. Furthermore, we present a Real-world
NIR-Assisted Image Denoising (Real-NAID) dataset, which covers diverse
scenarios as well as various noise levels. Extensive experiments on both
synthetic and our real-world datasets demonstrate that the proposed method
achieves better results than state-of-the-art ones. The dataset, codes, and
pre-trained models will be publicly available at
https://github.com/ronjonxu/NAID.
| [
{
"version": "v1",
"created": "Fri, 12 Apr 2024 14:54:26 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Apr 2024 07:56:01 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Apr 2024 19:30:49 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Mar 2025 04:02:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xu",
"Rongjian",
""
],
[
"Zhang",
"Zhilu",
""
],
[
"Wu",
"Renlong",
""
],
[
"Zuo",
"Wangmeng",
""
]
]
| TITLE: NIR-Assisted Image Denoising: A Selective Fusion Approach and A
Real-World Benchmark Dataset
ABSTRACT: Despite the significant progress in image denoising, it is still challenging
to restore fine-scale details while removing noise, especially in extremely
low-light environments. Leveraging near-infrared (NIR) images to assist visible
RGB image denoising shows the potential to address this issue, becoming a
promising technology. Nonetheless, existing works still struggle with taking
advantage of NIR information effectively for real-world image denoising, due to
the content inconsistency between NIR-RGB images and the scarcity of real-world
paired datasets. To alleviate the problem, we propose an efficient Selective
Fusion Module (SFM), which can be plug-and-played into the advanced denoising
networks to merge the deep NIR-RGB features. Specifically, we sequentially
perform the global and local modulation for NIR and RGB features, and then
integrate the two modulated features. Furthermore, we present a Real-world
NIR-Assisted Image Denoising (Real-NAID) dataset, which covers diverse
scenarios as well as various noise levels. Extensive experiments on both
synthetic and our real-world datasets demonstrate that the proposed method
achieves better results than state-of-the-art ones. The dataset, codes, and
pre-trained models will be publicly available at
https://github.com/ronjonxu/NAID.
| new_dataset | 0.969671 |
2404.12008 | Siyi Lin | Siyi Lin, Chongming Gao, Jiawei Chen, Sheng Zhou, Binbin Hu, Yan Feng,
Chun Chen, Can Wang | How Do Recommendation Models Amplify Popularity Bias? An Analysis from
the Spectral Perspective | 14 pages, 7 figures | null | 10.1145/3701551.3703579 | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommendation Systems (RS) are often plagued by popularity bias. When
training a recommendation model on a typically long-tailed dataset, the model
tends to not only inherit this bias but often exacerbate it, resulting in
over-representation of popular items in the recommendation lists. This study
conducts comprehensive empirical and theoretical analyses to expose the root
causes of this phenomenon, yielding two core insights: 1) Item popularity is
memorized in the principal spectrum of the score matrix predicted by the
recommendation model; 2) The dimension collapse phenomenon amplifies the
relative prominence of the principal spectrum, thereby intensifying the
popularity bias. Building on these insights, we propose a novel debiasing
strategy that leverages a spectral norm regularizer to penalize the magnitude
of the principal singular value. We have developed an efficient algorithm to
expedite the calculation of the spectral norm by exploiting the spectral
property of the score matrix. Extensive experiments across seven real-world
datasets and three testing paradigms have been conducted to validate the
superiority of the proposed method.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 08:59:32 GMT"
},
{
"version": "v2",
"created": "Mon, 27 May 2024 05:28:57 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jun 2024 07:31:09 GMT"
},
{
"version": "v4",
"created": "Tue, 26 Nov 2024 10:57:40 GMT"
},
{
"version": "v5",
"created": "Sat, 8 Mar 2025 07:20:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lin",
"Siyi",
""
],
[
"Gao",
"Chongming",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Zhou",
"Sheng",
""
],
[
"Hu",
"Binbin",
""
],
[
"Feng",
"Yan",
""
],
[
"Chen",
"Chun",
""
],
[
"Wang",
"Can",
""
]
]
| TITLE: How Do Recommendation Models Amplify Popularity Bias? An Analysis from
the Spectral Perspective
ABSTRACT: Recommendation Systems (RS) are often plagued by popularity bias. When
training a recommendation model on a typically long-tailed dataset, the model
tends to not only inherit this bias but often exacerbate it, resulting in
over-representation of popular items in the recommendation lists. This study
conducts comprehensive empirical and theoretical analyses to expose the root
causes of this phenomenon, yielding two core insights: 1) Item popularity is
memorized in the principal spectrum of the score matrix predicted by the
recommendation model; 2) The dimension collapse phenomenon amplifies the
relative prominence of the principal spectrum, thereby intensifying the
popularity bias. Building on these insights, we propose a novel debiasing
strategy that leverages a spectral norm regularizer to penalize the magnitude
of the principal singular value. We have developed an efficient algorithm to
expedite the calculation of the spectral norm by exploiting the spectral
property of the score matrix. Extensive experiments across seven real-world
datasets and three testing paradigms have been conducted to validate the
superiority of the proposed method.
| no_new_dataset | 0.945851 |
2404.12827 | Anthony Yazdani | Anthony Yazdani, Alban Bornet, Philipp Khlebnikov, Boya Zhang, Hossein
Rouhizadeh, Poorya Amini and Douglas Teodoro | An Evaluation Benchmark for Adverse Drug Event Prediction from Clinical
Trial Results | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Adverse drug events (ADEs) are a major safety issue in clinical trials. Thus,
predicting ADEs is key to developing safer medications and enhancing patient
outcomes. To support this effort, we introduce CT-ADE, a dataset for multilabel
ADE prediction in monopharmacy treatments. CT-ADE encompasses 2,497 drugs and
168,984 drug-ADE pairs from clinical trial results, annotated using the MedDRA
ontology. Unlike existing resources, CT-ADE integrates treatment and target
population data, enabling comparative analyses under varying conditions, such
as dosage, administration route, and demographics. In addition, CT-ADE
systematically collects all ADEs in the study population, including positive
and negative cases. To provide a baseline for ADE prediction performance using
the CT-ADE dataset, we conducted analyses using large language models (LLMs).
The best LLM achieved an F1-score of 56%, with models incorporating treatment
and patient information outperforming by 21%-38% those relying solely on the
chemical structure. These findings underscore the importance of contextual
information in ADE prediction and establish CT-ADE as a robust resource for
safety risk assessment in pharmaceutical research and development.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2024 12:04:32 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jul 2024 08:38:50 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 09:51:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yazdani",
"Anthony",
""
],
[
"Bornet",
"Alban",
""
],
[
"Khlebnikov",
"Philipp",
""
],
[
"Zhang",
"Boya",
""
],
[
"Rouhizadeh",
"Hossein",
""
],
[
"Amini",
"Poorya",
""
],
[
"Teodoro",
"Douglas",
""
]
]
| TITLE: An Evaluation Benchmark for Adverse Drug Event Prediction from Clinical
Trial Results
ABSTRACT: Adverse drug events (ADEs) are a major safety issue in clinical trials. Thus,
predicting ADEs is key to developing safer medications and enhancing patient
outcomes. To support this effort, we introduce CT-ADE, a dataset for multilabel
ADE prediction in monopharmacy treatments. CT-ADE encompasses 2,497 drugs and
168,984 drug-ADE pairs from clinical trial results, annotated using the MedDRA
ontology. Unlike existing resources, CT-ADE integrates treatment and target
population data, enabling comparative analyses under varying conditions, such
as dosage, administration route, and demographics. In addition, CT-ADE
systematically collects all ADEs in the study population, including positive
and negative cases. To provide a baseline for ADE prediction performance using
the CT-ADE dataset, we conducted analyses using large language models (LLMs).
The best LLM achieved an F1-score of 56%, with models incorporating treatment
and patient information outperforming by 21%-38% those relying solely on the
chemical structure. These findings underscore the importance of contextual
information in ADE prediction and establish CT-ADE as a robust resource for
safety risk assessment in pharmaceutical research and development.
| new_dataset | 0.961389 |
2405.01855 | Sairamvinay Vijayaraghavan | Sairamvinay Vijayaraghavan, Prasant Mohapatra | Robust Explainable Recommendation | Not in the final state | null | null | null | cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Explainable Recommender Systems is an important field of study which provides
reasons behind the suggested recommendations. Explanations with recommender
systems are useful for developers while debugging anomalies within the system
and for consumers while interpreting the model's effectiveness in capturing
their true preferences towards items. However, most of the existing
state-of-the-art (SOTA) explainable recommenders could not retain their
explanation capability under noisy circumstances and moreover are not
generalizable across different datasets. The robustness of the explanations
must be ensured so that certain malicious attackers do not manipulate any
high-stake decision scenarios to their advantage, which could cause severe
consequences affecting large groups of interest. In this work, we present a
general framework for feature-aware explainable recommenders that can withstand
external attacks and provide robust and generalized explanations. This paper
presents a novel framework which could be utilized as an additional defense
tool, preserving the global explainability when subject to model-based white
box attacks. Our framework is simple to implement and supports different
methods regardless of the internal model structure and intrinsic utility within
any model. We experimented our framework on two architecturally different
feature-based SOTA explainable algorithms by training them on three popular
e-commerce datasets of increasing scales. We noticed that both the algorithms
displayed an overall improvement in the quality and robustness of the global
explainability under normal as well as noisy environments across all the
datasets, indicating the flexibility and mutability of our framework.
| [
{
"version": "v1",
"created": "Fri, 3 May 2024 05:03:07 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 23:30:17 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Vijayaraghavan",
"Sairamvinay",
""
],
[
"Mohapatra",
"Prasant",
""
]
]
| TITLE: Robust Explainable Recommendation
ABSTRACT: Explainable Recommender Systems is an important field of study which provides
reasons behind the suggested recommendations. Explanations with recommender
systems are useful for developers while debugging anomalies within the system
and for consumers while interpreting the model's effectiveness in capturing
their true preferences towards items. However, most of the existing
state-of-the-art (SOTA) explainable recommenders could not retain their
explanation capability under noisy circumstances and moreover are not
generalizable across different datasets. The robustness of the explanations
must be ensured so that certain malicious attackers do not manipulate any
high-stake decision scenarios to their advantage, which could cause severe
consequences affecting large groups of interest. In this work, we present a
general framework for feature-aware explainable recommenders that can withstand
external attacks and provide robust and generalized explanations. This paper
presents a novel framework which could be utilized as an additional defense
tool, preserving the global explainability when subject to model-based white
box attacks. Our framework is simple to implement and supports different
methods regardless of the internal model structure and intrinsic utility within
any model. We experimented our framework on two architecturally different
feature-based SOTA explainable algorithms by training them on three popular
e-commerce datasets of increasing scales. We noticed that both the algorithms
displayed an overall improvement in the quality and robustness of the global
explainability under normal as well as noisy environments across all the
datasets, indicating the flexibility and mutability of our framework.
| no_new_dataset | 0.941115 |
2405.03969 | Huan Yin | Zhijian Qiao, Haoming Huang, Chuhao Liu, Zehuan Yu, Shaojie Shen,
Fumin Zhang and Huan Yin | Speak the Same Language: Global LiDAR Registration on BIM Using Pose
Hough Transform | Accepted for publication in IEEE Transactions on Automation Science
and Engineering (T-ASE). Video is available at https://youtu.be/SWbnsaRyL-M | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Light detection and ranging (LiDAR) point clouds and building information
modeling (BIM) represent two distinct data modalities in the fields of robot
perception and construction. These modalities originate from different sources
and are associated with unique reference frames. The primary goal of this study
is to align these modalities within a shared reference frame using a global
registration approach, effectively enabling them to ``speak the same
language''. To achieve this, we propose a cross-modality registration method,
spanning from the front end to the back end. At the front end, we extract
triangle descriptors by identifying walls and intersected corners, enabling the
matching of corner triplets with a complexity independent of the BIM's size.
For the back-end transformation estimation, we utilize the Hough transform to
map the matched triplets to the transformation space and introduce a
hierarchical voting mechanism to hypothesize multiple pose candidates. The
final transformation is then verified using our designed occupancy-aware
scoring method. To assess the effectiveness of our approach, we conducted
real-world multi-session experiments in a large-scale university building,
employing two different types of LiDAR sensors. We make the collected datasets
and codes publicly available to benefit the community.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 02:58:29 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 12:53:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qiao",
"Zhijian",
""
],
[
"Huang",
"Haoming",
""
],
[
"Liu",
"Chuhao",
""
],
[
"Yu",
"Zehuan",
""
],
[
"Shen",
"Shaojie",
""
],
[
"Zhang",
"Fumin",
""
],
[
"Yin",
"Huan",
""
]
]
| TITLE: Speak the Same Language: Global LiDAR Registration on BIM Using Pose
Hough Transform
ABSTRACT: Light detection and ranging (LiDAR) point clouds and building information
modeling (BIM) represent two distinct data modalities in the fields of robot
perception and construction. These modalities originate from different sources
and are associated with unique reference frames. The primary goal of this study
is to align these modalities within a shared reference frame using a global
registration approach, effectively enabling them to ``speak the same
language''. To achieve this, we propose a cross-modality registration method,
spanning from the front end to the back end. At the front end, we extract
triangle descriptors by identifying walls and intersected corners, enabling the
matching of corner triplets with a complexity independent of the BIM's size.
For the back-end transformation estimation, we utilize the Hough transform to
map the matched triplets to the transformation space and introduce a
hierarchical voting mechanism to hypothesize multiple pose candidates. The
final transformation is then verified using our designed occupancy-aware
scoring method. To assess the effectiveness of our approach, we conducted
real-world multi-session experiments in a large-scale university building,
employing two different types of LiDAR sensors. We make the collected datasets
and codes publicly available to benefit the community.
| no_new_dataset | 0.951863 |
2405.04812 | Jianhao Jiao | Peng Yin, Jianhao Jiao, Shiqi Zhao, Lingyun Xu, Guoquan Huang, Howie
Choset, Sebastian Scherer, Jianda Han | General Place Recognition Survey: Towards Real-World Autonomy | 20 pages, 12 figures, accepted by IEEE Transactions on Robotics as
Survey Paper | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the realm of robotics, the quest for achieving real-world autonomy,
capable of executing large-scale and long-term operations, has positioned place
recognition (PR) as a cornerstone technology. Despite the PR community's
remarkable strides over the past two decades, garnering attention from fields
like computer vision and robotics, the development of PR methods that
sufficiently support real-world robotic systems remains a challenge. This paper
aims to bridge this gap by highlighting the crucial role of PR within the
framework of Simultaneous Localization and Mapping (SLAM) 2.0. This new phase
in robotic navigation calls for scalable, adaptable, and efficient PR solutions
by integrating advanced artificial intelligence (AI) technologies. For this
goal, we provide a comprehensive review of the current state-of-the-art (SOTA)
advancements in PR, alongside the remaining challenges, and underscore its
broad applications in robotics. This paper begins with an exploration of PR's
formulation and key research challenges. We extensively review the literature,
focusing on related methods for place representation and solutions to various PR
challenges. Applications showcasing PR's potential in robotics, key PR
datasets, and open-source libraries are discussed. We conclude with a
discussion on PR's future directions and provide a summary of the literature
covered at: https://github.com/MetaSLAM/GPRS.
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 04:54:48 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 14:14:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yin",
"Peng",
""
],
[
"Jiao",
"Jianhao",
""
],
[
"Zhao",
"Shiqi",
""
],
[
"Xu",
"Lingyun",
""
],
[
"Huang",
"Guoquan",
""
],
[
"Choset",
"Howie",
""
],
[
"Scherer",
"Sebastian",
""
],
[
"Han",
"Jianda",
""
]
]
| TITLE: General Place Recognition Survey: Towards Real-World Autonomy
ABSTRACT: In the realm of robotics, the quest for achieving real-world autonomy,
capable of executing large-scale and long-term operations, has positioned place
recognition (PR) as a cornerstone technology. Despite the PR community's
remarkable strides over the past two decades, garnering attention from fields
like computer vision and robotics, the development of PR methods that
sufficiently support real-world robotic systems remains a challenge. This paper
aims to bridge this gap by highlighting the crucial role of PR within the
framework of Simultaneous Localization and Mapping (SLAM) 2.0. This new phase
in robotic navigation calls for scalable, adaptable, and efficient PR solutions
by integrating advanced artificial intelligence (AI) technologies. For this
goal, we provide a comprehensive review of the current state-of-the-art (SOTA)
advancements in PR, alongside the remaining challenges, and underscore its
broad applications in robotics. This paper begins with an exploration of PR's
formulation and key research challenges. We extensively review the literature,
focusing on related methods for place representation and solutions to various PR
challenges. Applications showcasing PR's potential in robotics, key PR
datasets, and open-source libraries are discussed. We conclude with a
discussion on PR's future directions and provide a summary of the literature
covered at: https://github.com/MetaSLAM/GPRS.
| no_new_dataset | 0.947381 |
2405.04944 | Tugba Torun | Tugba Torun, Ameer Taweel, and Didem Unat | A Sparse Tensor Generator with Efficient Feature Extraction | 20 pages, 4 figures, 6 tables | null | null | null | cs.MS cs.LG | http://creativecommons.org/licenses/by/4.0/ | Sparse tensor operations are increasingly important in diverse applications
such as social networks, deep learning, diagnosis, crime, and review analysis.
However, a major obstacle in sparse tensor research is the lack of large-scale
sparse tensor datasets. Another challenge lies in analyzing sparse tensor
features, which are essential not only for understanding the nonzero pattern
but also for selecting the most suitable storage format, decomposition
algorithm, and reordering methods. However, due to the large size of real-world
tensors, even extracting these features can be computationally expensive
without careful optimization. To address these limitations, we have developed a
smart sparse tensor generator that replicates key characteristics of real
sparse tensors. Additionally, we propose efficient methods for extracting a
comprehensive set of sparse tensor features. The effectiveness of our generator
is validated through the quality of extracted features and the performance of
decomposition on the generated tensors. Both the sparse tensor feature
extractor and the tensor generator are open source with all the artifacts
available at https://github.com/sparcityeu/FeaTensor and
https://github.com/sparcityeu/GenTensor, respectively.
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 10:28:20 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 05:06:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Torun",
"Tugba",
""
],
[
"Taweel",
"Ameer",
""
],
[
"Unat",
"Didem",
""
]
]
| TITLE: A Sparse Tensor Generator with Efficient Feature Extraction
ABSTRACT: Sparse tensor operations are increasingly important in diverse applications
such as social networks, deep learning, diagnosis, crime, and review analysis.
However, a major obstacle in sparse tensor research is the lack of large-scale
sparse tensor datasets. Another challenge lies in analyzing sparse tensor
features, which are essential not only for understanding the nonzero pattern
but also for selecting the most suitable storage format, decomposition
algorithm, and reordering methods. However, due to the large size of real-world
tensors, even extracting these features can be computationally expensive
without careful optimization. To address these limitations, we have developed a
smart sparse tensor generator that replicates key characteristics of real
sparse tensors. Additionally, we propose efficient methods for extracting a
comprehensive set of sparse tensor features. The effectiveness of our generator
is validated through the quality of extracted features and the performance of
decomposition on the generated tensors. Both the sparse tensor feature
extractor and the tensor generator are open source with all the artifacts
available at https://github.com/sparcityeu/FeaTensor and
https://github.com/sparcityeu/GenTensor, respectively.
| no_new_dataset | 0.821832 |
2405.06705 | Zhuoxuan Jiang | Zhuoxuan Jiang and Haoyuan Peng and Shanshan Feng and Fan Li and
Dongsheng Li | LLMs can Find Mathematical Reasoning Mistakes by Pedagogical
Chain-of-Thought | Accepted by IJCAI 2024 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Self-correction is emerging as a promising approach to mitigate the issue of
hallucination in Large Language Models (LLMs). To facilitate effective
self-correction, recent research has proposed mistake detection as its initial
step. However, current literature suggests that LLMs often struggle with
reliably identifying reasoning mistakes when using simplistic prompting
strategies. To address this challenge, we introduce a unique prompting
strategy, termed the Pedagogical Chain-of-Thought (PedCoT), which is
specifically designed to guide the identification of reasoning mistakes,
particularly mathematical reasoning mistakes. PedCoT consists of pedagogical
principles for prompts (PPP) design, two-stage interaction process (TIP) and
grounded PedCoT prompts, all inspired by the educational theory of the Bloom
Cognitive Model (BCM). We evaluate our approach on two public datasets
featuring math problems of varying difficulty levels. The experiments
demonstrate that our zero-shot prompting strategy significantly outperforms
strong baselines. The proposed method can achieve the goal of reliable
mathematical mistake identification and provide a foundation for automatic math
answer grading. The results underscore the significance of educational theory,
serving as domain knowledge, in guiding prompting strategy design for
addressing challenging tasks with LLMs effectively.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 07:37:34 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 15:20:34 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jiang",
"Zhuoxuan",
""
],
[
"Peng",
"Haoyuan",
""
],
[
"Feng",
"Shanshan",
""
],
[
"Li",
"Fan",
""
],
[
"Li",
"Dongsheng",
""
]
]
| TITLE: LLMs can Find Mathematical Reasoning Mistakes by Pedagogical
Chain-of-Thought
ABSTRACT: Self-correction is emerging as a promising approach to mitigate the issue of
hallucination in Large Language Models (LLMs). To facilitate effective
self-correction, recent research has proposed mistake detection as its initial
step. However, current literature suggests that LLMs often struggle with
reliably identifying reasoning mistakes when using simplistic prompting
strategies. To address this challenge, we introduce a unique prompting
strategy, termed the Pedagogical Chain-of-Thought (PedCoT), which is
specifically designed to guide the identification of reasoning mistakes,
particularly mathematical reasoning mistakes. PedCoT consists of pedagogical
principles for prompts (PPP) design, two-stage interaction process (TIP) and
grounded PedCoT prompts, all inspired by the educational theory of the Bloom
Cognitive Model (BCM). We evaluate our approach on two public datasets
featuring math problems of varying difficulty levels. The experiments
demonstrate that our zero-shot prompting strategy significantly outperforms
strong baselines. The proposed method can achieve the goal of reliable
mathematical mistake identification and provide a foundation for automatic math
answer grading. The results underscore the significance of educational theory,
serving as domain knowledge, in guiding prompting strategy design for
addressing challenging tasks with LLMs effectively.
| no_new_dataset | 0.9434 |
2405.09996 | Junkai Fan | Junkai Fan, Jiangwei Weng, Kun Wang, Yijun Yang, Jianjun Qian, Jun Li,
and Jian Yang | Driving-Video Dehazing with Non-Aligned Regularization for Safety
Assistance | Accepted by CVPR 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real driving-video dehazing poses a significant challenge due to the inherent
difficulty in acquiring precisely aligned hazy/clear video pairs for effective
model training, especially in dynamic driving scenarios with unpredictable
weather conditions. In this paper, we propose a pioneering approach that
addresses this challenge through a non-aligned regularization strategy. Our core
concept involves identifying clear frames that closely match hazy frames,
serving as references to supervise a video dehazing network. Our approach
comprises two key components: reference matching and video dehazing. Firstly,
we introduce a non-aligned reference frame matching module, leveraging an
adaptive sliding window to match high-quality reference frames from clear
videos. Video dehazing incorporates flow-guided cosine attention sampler and
deformable cosine attention fusion modules to enhance spatial multiframe
alignment and fuse their improved information. To validate our approach, we
collect a GoProHazy dataset captured effortlessly with GoPro cameras in diverse
rural and urban road environments. Extensive experiments demonstrate the
superiority of the proposed method over current state-of-the-art methods in the
challenging task of real driving-video dehazing. Project page.
| [
{
"version": "v1",
"created": "Thu, 16 May 2024 11:28:01 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 09:19:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fan",
"Junkai",
""
],
[
"Weng",
"Jiangwei",
""
],
[
"Wang",
"Kun",
""
],
[
"Yang",
"Yijun",
""
],
[
"Qian",
"Jianjun",
""
],
[
"Li",
"Jun",
""
],
[
"Yang",
"Jian",
""
]
]
| TITLE: Driving-Video Dehazing with Non-Aligned Regularization for Safety
Assistance
ABSTRACT: Real driving-video dehazing poses a significant challenge due to the inherent
difficulty in acquiring precisely aligned hazy/clear video pairs for effective
model training, especially in dynamic driving scenarios with unpredictable
weather conditions. In this paper, we propose a pioneering approach that
addresses this challenge through a non-aligned regularization strategy. Our core
concept involves identifying clear frames that closely match hazy frames,
serving as references to supervise a video dehazing network. Our approach
comprises two key components: reference matching and video dehazing. Firstly,
we introduce a non-aligned reference frame matching module, leveraging an
adaptive sliding window to match high-quality reference frames from clear
videos. Video dehazing incorporates flow-guided cosine attention sampler and
deformable cosine attention fusion modules to enhance spatial multiframe
alignment and fuse their improved information. To validate our approach, we
collect a GoProHazy dataset captured effortlessly with GoPro cameras in diverse
rural and urban road environments. Extensive experiments demonstrate the
superiority of the proposed method over current state-of-the-art methods in the
challenging task of real driving-video dehazing. Project page.
| no_new_dataset | 0.935582 |
2405.10311 | Sahel Sharifymoghaddam | Sahel Sharifymoghaddam, Shivani Upadhyay, Wenhu Chen, Jimmy Lin | UniRAG: Universal Retrieval Augmentation for Large Vision Language
Models | 14 pages, 6 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Large Vision Language Models (LVLMs) have unlocked many complex use
cases that require Multi-Modal (MM) understanding (e.g., image captioning or
visual question answering) and MM generation (e.g., text-guided image
generation or editing) capabilities. To further improve the output fidelityof
LVLMs we introduce UniRAG, a plug-and-play technique that adds relevant
retrieved information to prompts as few-shot examples during inference. Unlike
the common belief that Retrieval Augmentation (RA) mainly improves generation
or understanding of uncommon entities, our evaluation results on the MSCOCO
dataset with common entities show that both proprietary models like GPT-4o and
Gemini-Pro and smaller open-source models like LLaVA, LaVIT, and Emu2
significantly enhance their generation quality when their input prompts are
augmented with relevant information retrieved by Vision-Language (VL)
retrievers like UniIR models. All the necessary code to reproduce our results
is available at https://github.com/castorini/UniRAG
| [
{
"version": "v1",
"created": "Thu, 16 May 2024 17:58:45 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Oct 2024 05:49:18 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 19:13:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Sharifymoghaddam",
"Sahel",
""
],
[
"Upadhyay",
"Shivani",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Lin",
"Jimmy",
""
]
]
| TITLE: UniRAG: Universal Retrieval Augmentation for Large Vision Language
Models
ABSTRACT: Recently, Large Vision Language Models (LVLMs) have unlocked many complex use
cases that require Multi-Modal (MM) understanding (e.g., image captioning or
visual question answering) and MM generation (e.g., text-guided image
generation or editing) capabilities. To further improve the output fidelity of
LVLMs, we introduce UniRAG, a plug-and-play technique that adds relevant
retrieved information to prompts as few-shot examples during inference. Unlike
the common belief that Retrieval Augmentation (RA) mainly improves generation
or understanding of uncommon entities, our evaluation results on the MSCOCO
dataset with common entities show that both proprietary models like GPT-4o and
Gemini-Pro and smaller open-source models like LLaVA, LaVIT, and Emu2
significantly enhance their generation quality when their input prompts are
augmented with relevant information retrieved by Vision-Language (VL)
retrievers like UniIR models. All the necessary code to reproduce our results
is available at https://github.com/castorini/UniRAG
| no_new_dataset | 0.951233 |
2405.11247 | Udi Aharon | Udi Aharon, Ran Dubin, Amit Dvir and Chen Hajaj | A Classification-by-Retrieval Framework for Few-Shot Anomaly Detection
to Detect API Injection Attacks | 15 pages, 10 figures, 5 tables | null | 10.1016/j.cose.2024.104249 | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Application Programming Interface (API) Injection attacks refer to the
unauthorized or malicious use of APIs, which are often exploited to gain access
to sensitive data or manipulate online systems for illicit purposes.
Identifying actors that deceitfully utilize an API poses a demanding problem.
Although there have been notable advancements and contributions in the field of
API security, there remains a significant challenge when dealing with attackers
who use novel approaches that don't match the well-known payloads commonly seen
in attacks. Also, attackers may exploit standard functionalities
unconventionally and with objectives surpassing their intended boundaries.
Thus, API security needs to be more sophisticated and dynamic than ever, with
advanced computational intelligence methods, such as machine learning models
that can quickly identify and respond to abnormal behavior. In response to
these challenges, we propose a novel unsupervised few-shot anomaly detection
framework composed of two main parts: First, we train a dedicated generic
language model for API based on FastText embedding. Next, we use Approximate
Nearest Neighbor search in a classification-by-retrieval approach. Our
framework allows for training a fast, lightweight classification model using
only a few examples of normal API requests. We evaluated the performance of our
framework using the CSIC 2010 and ATRDF 2023 datasets. The results demonstrate
that our framework improves API attack detection accuracy compared to the
state-of-the-art (SOTA) unsupervised anomaly detection baselines.
| [
{
"version": "v1",
"created": "Sat, 18 May 2024 10:15:31 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Sep 2024 15:31:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Aharon",
"Udi",
""
],
[
"Dubin",
"Ran",
""
],
[
"Dvir",
"Amit",
""
],
[
"Hajaj",
"Chen",
""
]
]
| TITLE: A Classification-by-Retrieval Framework for Few-Shot Anomaly Detection
to Detect API Injection Attacks
ABSTRACT: Application Programming Interface (API) Injection attacks refer to the
unauthorized or malicious use of APIs, which are often exploited to gain access
to sensitive data or manipulate online systems for illicit purposes.
Identifying actors that deceitfully utilize an API poses a demanding problem.
Although there have been notable advancements and contributions in the field of
API security, there remains a significant challenge when dealing with attackers
who use novel approaches that don't match the well-known payloads commonly seen
in attacks. Also, attackers may exploit standard functionalities
unconventionally and with objectives surpassing their intended boundaries.
Thus, API security needs to be more sophisticated and dynamic than ever, with
advanced computational intelligence methods, such as machine learning models
that can quickly identify and respond to abnormal behavior. In response to
these challenges, we propose a novel unsupervised few-shot anomaly detection
framework composed of two main parts: First, we train a dedicated generic
language model for API based on FastText embedding. Next, we use Approximate
Nearest Neighbor search in a classification-by-retrieval approach. Our
framework allows for training a fast, lightweight classification model using
only a few examples of normal API requests. We evaluated the performance of our
framework using the CSIC 2010 and ATRDF 2023 datasets. The results demonstrate
that our framework improves API attack detection accuracy compared to the
state-of-the-art (SOTA) unsupervised anomaly detection baselines.
| no_new_dataset | 0.942718 |
2405.15240 | Peng Kuang | Peng Kuang, Zhibo Wang, Zhixuan Chu, Jingyi Wang, Kui Ren | Rethinking Debiasing: Real-World Bias Analysis and Mitigation | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spurious correlations in training data significantly hinder the
generalization capability of machine learning models when faced with
distribution shifts in real-world scenarios. To tackle the problem, numerous
debiasing approaches have been proposed and benchmarked on datasets
intentionally designed with severe biases. However, it remains to be asked:
\textit{1. Do existing benchmarks really capture biases in the real world? 2.
Can existing debiasing methods handle biases in the real world?} To answer the
questions, we revisit biased distributions in existing benchmarks and
real-world datasets, and propose a fine-grained framework for analyzing dataset
bias by disentangling it into the magnitude and prevalence of bias. We
empirically and theoretically identify key characteristics of real-world biases
poorly represented by existing benchmarks. We further introduce two novel
biased distributions to bridge this gap, forming a systematic evaluation
framework for real-world debiasing. With the evaluation framework, we focus on
the practical setting of debiasing without bias supervision and find existing
methods incapable of handling real-world biases. Through in-depth analysis, we
propose a simple yet effective approach that can be easily applied to existing
debiasing methods, named Debias in Destruction (DiD). Empirical results on
real-world datasets in both image and language modalities demonstrate the
superiority of DiD, improving the performance of existing methods on all types
of biases.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 06:06:41 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 12:14:05 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 03:47:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kuang",
"Peng",
""
],
[
"Wang",
"Zhibo",
""
],
[
"Chu",
"Zhixuan",
""
],
[
"Wang",
"Jingyi",
""
],
[
"Ren",
"Kui",
""
]
]
| TITLE: Rethinking Debiasing: Real-World Bias Analysis and Mitigation
ABSTRACT: Spurious correlations in training data significantly hinder the
generalization capability of machine learning models when faced with
distribution shifts in real-world scenarios. To tackle the problem, numerous
debiasing approaches have been proposed and benchmarked on datasets
intentionally designed with severe biases. However, it remains to be asked:
\textit{1. Do existing benchmarks really capture biases in the real world? 2.
Can existing debiasing methods handle biases in the real world?} To answer the
questions, we revisit biased distributions in existing benchmarks and
real-world datasets, and propose a fine-grained framework for analyzing dataset
bias by disentangling it into the magnitude and prevalence of bias. We
empirically and theoretically identify key characteristics of real-world biases
poorly represented by existing benchmarks. We further introduce two novel
biased distributions to bridge this gap, forming a systematic evaluation
framework for real-world debiasing. With the evaluation framework, we focus on
the practical setting of debiasing without bias supervision and find existing
methods incapable of handling real-world biases. Through in-depth analysis, we
propose a simple yet effective approach that can be easily applied to existing
debiasing methods, named Debias in Destruction (DiD). Empirical results on
real-world datasets in both image and language modalities demonstrate the
superiority of DiD, improving the performance of existing methods on all types
of biases.
| no_new_dataset | 0.94474 |
2405.15779 | Van-Truong Pham | Ngoc-Du Tran, Thi-Thao Tran, Quang-Huy Nguyen, Manh-Hung Vu,
Van-Truong Pham | LiteNeXt: A Novel Lightweight ConvMixer-based Model with Self-embedding
Representation Parallel for Medical Image Segmentation | This manuscript has been accepted by Biomedical Signal Processing and
Control | Biomedical Signal Processing and Control, 2025 | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | The emergence of deep learning techniques has advanced the image segmentation
task, especially for medical images. Many neural network models have been
introduced in the last decade bringing the automated segmentation accuracy
close to manual segmentation. However, cutting-edge models like
Transformer-based architectures rely on large scale annotated training data,
and are generally designed with densely consecutive layers in the encoder,
decoder, and skip connections, resulting in a large number of parameters.
Additionally, for better performance, they are often pretrained on larger
data, thus requiring a large memory size and increasing resource expenses. In
this study, we propose a new lightweight but efficient model, namely LiteNeXt,
based on convolutions and mixing modules with a simplified decoder, for medical
image segmentation. The model is trained from scratch with a small number of
parameters (0.71M) and Giga Floating Point Operations Per Second (0.42). To
handle fuzzy boundaries as well as occlusion or clutter in objects, especially in
medical image regions, we propose the Marginal Weight Loss that can help
effectively determine the marginal boundary between object and background.
Additionally, the Self-embedding Representation Parallel technique is proposed
as an innovative data augmentation strategy that utilizes the network
architecture itself for self-learning augmentation, enhancing feature
extraction robustness without external data. Experiments on public datasets
including Data Science Bowls, GlaS, ISIC2018, PH2, Sunnybrook, and Lung X-ray
data show promising results compared to other state-of-the-art CNN-based and
Transformer-based architectures. Our code is released at:
https://github.com/tranngocduvnvp/LiteNeXt.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2024 01:59:19 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 08:54:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Tran",
"Ngoc-Du",
""
],
[
"Tran",
"Thi-Thao",
""
],
[
"Nguyen",
"Quang-Huy",
""
],
[
"Vu",
"Manh-Hung",
""
],
[
"Pham",
"Van-Truong",
""
]
]
| TITLE: LiteNeXt: A Novel Lightweight ConvMixer-based Model with Self-embedding
Representation Parallel for Medical Image Segmentation
ABSTRACT: The emergence of deep learning techniques has advanced the image segmentation
task, especially for medical images. Many neural network models have been
introduced in the last decade bringing the automated segmentation accuracy
close to manual segmentation. However, cutting-edge models like
Transformer-based architectures rely on large scale annotated training data,
and are generally designed with densely consecutive layers in the encoder,
decoder, and skip connections, resulting in a large number of parameters.
Additionally, for better performance, they are often pretrained on larger
data, thus requiring a large memory size and increasing resource expenses. In
this study, we propose a new lightweight but efficient model, namely LiteNeXt,
based on convolutions and mixing modules with a simplified decoder, for medical
image segmentation. The model is trained from scratch with a small number of
parameters (0.71M) and Giga Floating Point Operations Per Second (0.42). To
handle fuzzy boundaries as well as occlusion or clutter in objects, especially in
medical image regions, we propose the Marginal Weight Loss that can help
effectively determine the marginal boundary between object and background.
Additionally, the Self-embedding Representation Parallel technique is proposed
as an innovative data augmentation strategy that utilizes the network
architecture itself for self-learning augmentation, enhancing feature
extraction robustness without external data. Experiments on public datasets
including Data Science Bowls, GlaS, ISIC2018, PH2, Sunnybrook, and Lung X-ray
data show promising results compared to other state-of-the-art CNN-based and
Transformer-based architectures. Our code is released at:
https://github.com/tranngocduvnvp/LiteNeXt.
| no_new_dataset | 0.950365 |
2405.16498 | Menghao Waiyan William Zhu | Menghao Waiyan William Zhu and Ercan Engin Kuruo\u{g}lu | On Sequential Maximum a Posteriori Inference for Continual Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We formulate sequential maximum a posteriori inference as a recursion of loss
functions and reduce the problem of continual learning to approximating the
previous loss function. We then propose two coreset-free methods: autodiff
quadratic consolidation, which uses an accurate and full quadratic
approximation, and neural consolidation, which uses a neural network
approximation. These methods are not scalable with respect to the neural
network size, and we study them for classification tasks in combination with a
fixed pre-trained feature extractor. We also introduce simple but challenging
classical task sequences based on Iris and Wine datasets. We find that neural
consolidation performs well in the classical task sequences, where the input
dimension is small, while autodiff quadratic consolidation performs
consistently well in image task sequences with a fixed pre-trained feature
extractor, achieving comparable performance to joint maximum a posteriori
training in many cases.
| [
{
"version": "v1",
"created": "Sun, 26 May 2024 09:20:47 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Nov 2024 05:18:42 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Feb 2025 07:14:47 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Mar 2025 09:20:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhu",
"Menghao Waiyan William",
""
],
[
"Kuruoğlu",
"Ercan Engin",
""
]
]
| TITLE: On Sequential Maximum a Posteriori Inference for Continual Learning
ABSTRACT: We formulate sequential maximum a posteriori inference as a recursion of loss
functions and reduce the problem of continual learning to approximating the
previous loss function. We then propose two coreset-free methods: autodiff
quadratic consolidation, which uses an accurate and full quadratic
approximation, and neural consolidation, which uses a neural network
approximation. These methods are not scalable with respect to the neural
network size, and we study them for classification tasks in combination with a
fixed pre-trained feature extractor. We also introduce simple but challenging
classical task sequences based on Iris and Wine datasets. We find that neural
consolidation performs well in the classical task sequences, where the input
dimension is small, while autodiff quadratic consolidation performs
consistently well in image task sequences with a fixed pre-trained feature
extractor, achieving comparable performance to joint maximum a posteriori
training in many cases.
| no_new_dataset | 0.951863 |
2405.16918 | Nils Philipp Walter | Nils Philipp Walter, Linara Adilova, Jilles Vreeken, Michael Kamp | The Uncanny Valley: Exploring Adversarial Robustness from a Flatness
Perspective | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Flatness of the loss surface not only correlates positively with
generalization, but is also related to adversarial robustness since
perturbations of inputs relate non-linearly to perturbations of weights. In
this paper, we empirically analyze the relation between adversarial examples
and relative flatness with respect to the parameters of one layer. We observe a
peculiar property of adversarial examples in the context of relative flatness:
during an iterative first-order white-box attack, the flatness of the loss
surface measured around the adversarial example first becomes sharper until the
label is flipped, but if we keep the attack running, it runs into a flat
uncanny valley where the label remains flipped. In extensive experiments, we
observe this phenomenon across various model architectures and datasets, even
for adversarially trained models. Our results also extend to large language
models (LLMs), but due to the discrete nature of the input space and
comparatively weak attacks, adversarial examples rarely reach truly flat
regions. Most importantly, this phenomenon shows that flatness alone cannot
explain adversarial robustness unless we can also guarantee the behavior of the
function around the examples. We therefore theoretically connect relative
flatness to adversarial robustness by bounding the third derivative of the loss
surface, underlining the need for flatness in combination with a low global
Lipschitz constant for a robust model.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 08:10:46 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 14:47:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Walter",
"Nils Philipp",
""
],
[
"Adilova",
"Linara",
""
],
[
"Vreeken",
"Jilles",
""
],
[
"Kamp",
"Michael",
""
]
]
| TITLE: The Uncanny Valley: Exploring Adversarial Robustness from a Flatness
Perspective
ABSTRACT: Flatness of the loss surface not only correlates positively with
generalization, but is also related to adversarial robustness since
perturbations of inputs relate non-linearly to perturbations of weights. In
this paper, we empirically analyze the relation between adversarial examples
and relative flatness with respect to the parameters of one layer. We observe a
peculiar property of adversarial examples in the context of relative flatness:
during an iterative first-order white-box attack, the flatness of the loss
surface measured around the adversarial example first becomes sharper until the
label is flipped, but if we keep the attack running, it runs into a flat
uncanny valley where the label remains flipped. In extensive experiments, we
observe this phenomenon across various model architectures and datasets, even
for adversarially trained models. Our results also extend to large language
models (LLMs), but due to the discrete nature of the input space and
comparatively weak attacks, adversarial examples rarely reach truly flat
regions. Most importantly, this phenomenon shows that flatness alone cannot
explain adversarial robustness unless we can also guarantee the behavior of the
function around the examples. We therefore theoretically connect relative
flatness to adversarial robustness by bounding the third derivative of the loss
surface, underlining the need for flatness in combination with a low global
Lipschitz constant for a robust model.
| no_new_dataset | 0.947039 |
2405.16919 | Ruipu Luo | Zejun Li, Ruipu Luo, Jiwen Zhang, Minghui Qiu, Xuanjing Huang, Zhongyu
Wei | VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large
Multi-Modal Models | Accepted by NAACL 2025 main conference | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While large multi-modal models (LMMs) have exhibited impressive capabilities
across diverse tasks, their effectiveness in handling complex tasks has been
limited by the prevailing single-step reasoning paradigm. To this end, this
paper proposes VoCoT, a multi-step Visually grounded object-centric
Chain-of-Thought reasoning framework tailored for inference with LMMs. VoCoT is
characterized by two key features: (1) object-centric reasoning paths that
revolve around cross-modal shared object-level information, and (2) visually
grounded representation of object concepts in a multi-modal interleaved and
aligned manner, which effectively bridges the modality gap within LMMs during
long-term generation. To adapt LMMs in reasoning with VoCoT, we further
construct an instruction-tuning dataset. By combining VoCoT with the prevalent
open-source LMM architectures, we develop a VoCoT-based model, VolCano. With
only 7B parameters and limited input image resolution, VolCano demonstrates
excellent performance across various scenarios. In benchmarks like CLEVR and
EmbSpatial, which highly require complex reasoning capabilities, VolCano
outperforms SOTA models, including powerful GPT-4V. Related code, data and
models are released in https://github.com/RupertLuo/VoCoT.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 08:12:00 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 06:12:45 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 17:16:09 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Zejun",
""
],
[
"Luo",
"Ruipu",
""
],
[
"Zhang",
"Jiwen",
""
],
[
"Qiu",
"Minghui",
""
],
[
"Huang",
"Xuanjing",
""
],
[
"Wei",
"Zhongyu",
""
]
]
| TITLE: VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large
Multi-Modal Models
ABSTRACT: While large multi-modal models (LMMs) have exhibited impressive capabilities
across diverse tasks, their effectiveness in handling complex tasks has been
limited by the prevailing single-step reasoning paradigm. To this end, this
paper proposes VoCoT, a multi-step Visually grounded object-centric
Chain-of-Thought reasoning framework tailored for inference with LMMs. VoCoT is
characterized by two key features: (1) object-centric reasoning paths that
revolve around cross-modal shared object-level information, and (2) visually
grounded representation of object concepts in a multi-modal interleaved and
aligned manner, which effectively bridges the modality gap within LMMs during
long-term generation. To adapt LMMs in reasoning with VoCoT, we further
construct an instruction-tuning dataset. By combining VoCoT with the prevalent
open-source LMM architectures, we develop a VoCoT-based model, VolCano. With
only 7B parameters and limited input image resolution, VolCano demonstrates
excellent performance across various scenarios. In benchmarks like CLEVR and
EmbSpatial, which highly require complex reasoning capabilities, VolCano
outperforms SOTA models, including powerful GPT-4V. Related code, data and
models are released in https://github.com/RupertLuo/VoCoT.
| new_dataset | 0.968321 |
2405.17631 | Yusuf Roohani | Yusuf Roohani, Andrew Lee, Qian Huang, Jian Vora, Zachary Steinhart,
Kexin Huang, Alexander Marson, Percy Liang, Jure Leskovec | BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation
Experiments | null | null | null | null | cs.AI cs.CE cs.MA | http://creativecommons.org/licenses/by/4.0/ | Agents based on large language models have shown great potential in
accelerating scientific discovery by leveraging their rich background knowledge
and reasoning capabilities. In this paper, we introduce BioDiscoveryAgent, an
agent that designs new experiments, reasons about their outcomes, and
efficiently navigates the hypothesis space to reach desired solutions. We
demonstrate our agent on the problem of designing genetic perturbation
experiments, where the aim is to find a small subset out of many possible genes
that, when perturbed, result in a specific phenotype (e.g., cell growth).
Utilizing its biological knowledge, BioDiscoveryAgent can uniquely design new
experiments without the need to train a machine learning model or explicitly
design an acquisition function as in Bayesian optimization. Moreover,
BioDiscoveryAgent, using Claude 3.5 Sonnet, achieves an average of 21%
improvement in predicting relevant genetic perturbations across six datasets,
and a 46% improvement in the harder task of non-essential gene perturbation,
compared to existing Bayesian optimization baselines specifically trained for
this task. Our evaluation includes one dataset that is unpublished, ensuring it
is not part of the language model's training data. Additionally,
BioDiscoveryAgent predicts gene combinations to perturb more than twice as
accurately as a random baseline, a task so far not explored in the context of
closed-loop experiment design. The agent also has access to tools for searching
the biomedical literature, executing code to analyze biological datasets, and
prompting another agent to critically evaluate its predictions. Overall,
BioDiscoveryAgent is interpretable at every stage, representing an accessible
new paradigm in the computational design of biological experiments with the
potential to augment scientists' efficacy.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 19:57:17 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Oct 2024 04:55:16 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 21:57:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Roohani",
"Yusuf",
""
],
[
"Lee",
"Andrew",
""
],
[
"Huang",
"Qian",
""
],
[
"Vora",
"Jian",
""
],
[
"Steinhart",
"Zachary",
""
],
[
"Huang",
"Kexin",
""
],
[
"Marson",
"Alexander",
""
],
[
"Liang",
"Percy",
""
],
[
"Leskovec",
"Jure",
""
]
]
| TITLE: BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation
Experiments
ABSTRACT: Agents based on large language models have shown great potential in
accelerating scientific discovery by leveraging their rich background knowledge
and reasoning capabilities. In this paper, we introduce BioDiscoveryAgent, an
agent that designs new experiments, reasons about their outcomes, and
efficiently navigates the hypothesis space to reach desired solutions. We
demonstrate our agent on the problem of designing genetic perturbation
experiments, where the aim is to find a small subset out of many possible genes
that, when perturbed, result in a specific phenotype (e.g., cell growth).
Utilizing its biological knowledge, BioDiscoveryAgent can uniquely design new
experiments without the need to train a machine learning model or explicitly
design an acquisition function as in Bayesian optimization. Moreover,
BioDiscoveryAgent, using Claude 3.5 Sonnet, achieves an average of 21%
improvement in predicting relevant genetic perturbations across six datasets,
and a 46% improvement in the harder task of non-essential gene perturbation,
compared to existing Bayesian optimization baselines specifically trained for
this task. Our evaluation includes one dataset that is unpublished, ensuring it
is not part of the language model's training data. Additionally,
BioDiscoveryAgent predicts gene combinations to perturb more than twice as
accurately as a random baseline, a task so far not explored in the context of
closed-loop experiment design. The agent also has access to tools for searching
the biomedical literature, executing code to analyze biological datasets, and
prompting another agent to critically evaluate its predictions. Overall,
BioDiscoveryAgent is interpretable at every stage, representing an accessible
new paradigm in the computational design of biological experiments with the
potential to augment scientists' efficacy.
| no_new_dataset | 0.9462 |
2406.00958 | Jueqing Lu | Jueqing Lu, Wray Buntine, Yuanyuan Qi, Joanna Dipnall, Belinda Gabbe,
Lan Du | Navigating Conflicting Views: Harnessing Trust for Learning | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Resolving conflicts is essential to make the decisions of multi-view
classification more reliable. Much research has been conducted on learning
consistent informative representations among different views, assuming that all
views are identically important and strictly aligned. However, real-world
multi-view data may not always conform to these assumptions, as some views may
express distinct information. To address this issue, we develop a computational
trust-based discounting method to enhance the existing trustworthy framework in
scenarios where conflicts between different views may arise. Its belief fusion
process considers the trustworthiness of predictions made by individual views
via an instance-wise probability-sensitive trust discounting mechanism. We
evaluate our method on six real-world datasets, using Top-1 Accuracy, AUC-ROC
for Uncertainty-Aware Prediction, Fleiss' Kappa, and a new metric called
Multi-View Agreement with Ground Truth that takes into consideration the ground
truth labels. The experimental results show that computational trust can
effectively resolve conflicts, paving the way for more reliable multi-view
classification models in real-world applications.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 03:22:18 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 12:32:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lu",
"Jueqing",
""
],
[
"Buntine",
"Wray",
""
],
[
"Qi",
"Yuanyuan",
""
],
[
"Dipnall",
"Joanna",
""
],
[
"Gabbe",
"Belinda",
""
],
[
"Du",
"Lan",
""
]
]
| TITLE: Navigating Conflicting Views: Harnessing Trust for Learning
ABSTRACT: Resolving conflicts is essential to make the decisions of multi-view
classification more reliable. Much research has been conducted on learning
consistent informative representations among different views, assuming that all
views are identically important and strictly aligned. However, real-world
multi-view data may not always conform to these assumptions, as some views may
express distinct information. To address this issue, we develop a computational
trust-based discounting method to enhance the existing trustworthy framework in
scenarios where conflicts between different views may arise. Its belief fusion
process considers the trustworthiness of predictions made by individual views
via an instance-wise probability-sensitive trust discounting mechanism. We
evaluate our method on six real-world datasets, using Top-1 Accuracy, AUC-ROC
for Uncertainty-Aware Prediction, Fleiss' Kappa, and a new metric called
Multi-View Agreement with Ground Truth that takes into consideration the ground
truth labels. The experimental results show that computational trust can
effectively resolve conflicts, paving the way for more reliable multi-view
classification models in real-world applications.
| no_new_dataset | 0.942348 |
2406.05612 | Pranav Jeevan P | Pranav Jeevan and Amit Sethi | Which Backbone to Use: A Resource-efficient Domain Specific Comparison
for Computer Vision | 12 pages, 2 figures, accepted in TMLR | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In contemporary computer vision applications, particularly image
classification, architectural backbones pre-trained on large datasets like
ImageNet are commonly employed as feature extractors. Despite the widespread
use of these pre-trained convolutional neural networks (CNNs), there remains a
gap in understanding the performance of various resource-efficient backbones
across diverse domains and dataset sizes. Our study systematically evaluates
multiple lightweight, pre-trained CNN backbones under consistent training
settings across a variety of datasets, including natural images, medical
images, galaxy images, and remote sensing images. This comprehensive analysis
aims to aid machine learning practitioners in selecting the most suitable
backbone for their specific problem, especially in scenarios involving small
datasets where fine-tuning a pre-trained network is crucial. Even though
attention-based architectures are gaining popularity, we observed that they
tend to perform poorly on low-data fine-tuning tasks compared to CNNs. We
also observed that some CNN architectures, such as ConvNeXt, RegNet, and
EfficientNet, consistently perform well compared to others across a diverse set
of domains. Our findings provide actionable insights into the performance
trade-offs and effectiveness of different backbones, facilitating informed
decision-making in model selection for a broad spectrum of computer vision
domains. Our code is available here: https://github.com/pranavphoenix/Backbones
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2024 02:01:25 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jun 2024 12:26:42 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 21:00:14 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jeevan",
"Pranav",
""
],
[
"Sethi",
"Amit",
""
]
]
| TITLE: Which Backbone to Use: A Resource-efficient Domain Specific Comparison
for Computer Vision
ABSTRACT: In contemporary computer vision applications, particularly image
classification, architectural backbones pre-trained on large datasets like
ImageNet are commonly employed as feature extractors. Despite the widespread
use of these pre-trained convolutional neural networks (CNNs), there remains a
gap in understanding the performance of various resource-efficient backbones
across diverse domains and dataset sizes. Our study systematically evaluates
multiple lightweight, pre-trained CNN backbones under consistent training
settings across a variety of datasets, including natural images, medical
images, galaxy images, and remote sensing images. This comprehensive analysis
aims to aid machine learning practitioners in selecting the most suitable
backbone for their specific problem, especially in scenarios involving small
datasets where fine-tuning a pre-trained network is crucial. Even though
attention-based architectures are gaining popularity, we observed that they
tend to perform poorly on low-data fine-tuning tasks compared to CNNs. We
also observed that some CNN architectures, such as ConvNeXt, RegNet, and
EfficientNet, consistently perform well compared to others across a diverse set
of domains. Our findings provide actionable insights into the performance
trade-offs and effectiveness of different backbones, facilitating informed
decision-making in model selection for a broad spectrum of computer vision
domains. Our code is available here: https://github.com/pranavphoenix/Backbones
| no_new_dataset | 0.949669 |
2406.08564 | Parsa Hassani Shariat Panahi | Parsa Hassani Shariat Panahi, Amir Hossein Jalilvand, Abolfazl Diyanat | Machine Learning-Driven Open-Source Framework for Assessing QoE in
Multimedia Networks | 11 pages, 6 figures | null | 10.1109/OJCOMS.2025.3543750 | null | cs.NI cs.AI cs.MM | http://creativecommons.org/licenses/by/4.0/ | The Internet is integral to modern life, influencing communication, business,
and lifestyles globally. As dependence on Internet services grows, the demand
for high-quality service delivery increases. Service providers must maintain
high standards of quality of service and quality of experience (QoE) to ensure
user satisfaction. QoE, which reflects user satisfaction with service quality,
is a key metric for multimedia services, yet it is challenging to measure due
to its subjective nature and the complexities of real-time feedback. This paper
introduces a machine learning-based framework for objectively assessing QoE in
multimedia networks. The open-source framework complies with the ITU-T P.1203
standard. It automates data collection and user satisfaction prediction using
key network parameters such as delay, jitter, packet loss, bitrate, and
throughput. Using a dataset of over 20,000 records from various network
conditions, the Random Forest model predicts the mean opinion score with 95.8%
accuracy. Our framework addresses the limitations of existing QoE models by
integrating real-time data collection, machine learning predictions, and
adherence to international standards. This approach enhances QoE evaluation
accuracy and allows dynamic network resource management, optimizing performance
and cost-efficiency. Its open-source nature encourages adaptation and extension
for various multimedia services. The findings significantly affect the
telecommunications industry in managing and optimizing multimedia services. The
network-centric QoE prediction of the framework offers a scalable solution to
improve user satisfaction without the need for content-specific data. Future
enhancements could include advanced machine learning models and broader
applicability to digital services. This research contributes a practical,
standardized tool for QoE assessment across diverse networks and platforms.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2024 18:07:06 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Sep 2024 07:30:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Panahi",
"Parsa Hassani Shariat",
""
],
[
"Jalilvand",
"Amir Hossein",
""
],
[
"Diyanat",
"Abolfazl",
""
]
]
| TITLE: Machine Learning-Driven Open-Source Framework for Assessing QoE in
Multimedia Networks
ABSTRACT: The Internet is integral to modern life, influencing communication, business,
and lifestyles globally. As dependence on Internet services grows, the demand
for high-quality service delivery increases. Service providers must maintain
high standards of quality of service and quality of experience (QoE) to ensure
user satisfaction. QoE, which reflects user satisfaction with service quality,
is a key metric for multimedia services, yet it is challenging to measure due
to its subjective nature and the complexities of real-time feedback. This paper
introduces a machine learning-based framework for objectively assessing QoE in
multimedia networks. The open-source framework complies with the ITU-T P.1203
standard. It automates data collection and user satisfaction prediction using
key network parameters such as delay, jitter, packet loss, bitrate, and
throughput. Using a dataset of over 20,000 records from various network
conditions, the Random Forest model predicts the mean opinion score with 95.8%
accuracy. Our framework addresses the limitations of existing QoE models by
integrating real-time data collection, machine learning predictions, and
adherence to international standards. This approach enhances QoE evaluation
accuracy and allows dynamic network resource management, optimizing performance
and cost-efficiency. Its open-source nature encourages adaptation and extension
for various multimedia services. The findings significantly affect the
telecommunications industry in managing and optimizing multimedia services. The
network-centric QoE prediction of the framework offers a scalable solution to
improve user satisfaction without the need for content-specific data. Future
enhancements could include advanced machine learning models and broader
applicability to digital services. This research contributes a practical,
standardized tool for QoE assessment across diverse networks and platforms.
| no_new_dataset | 0.946051 |
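The QoE record above describes predicting a mean opinion score from network-level features with a Random Forest. A minimal sketch of that kind of model is shown below; it is not the authors' code, and the CSV path, column names, and hyperparameters are assumptions.

```python
# Minimal sketch, assuming a hypothetical qoe_records.csv with the feature columns
# named in the abstract; not the authors' implementation.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("qoe_records.csv")  # ~20k records of network conditions (hypothetical file)
X = df[["delay", "jitter", "packet_loss", "bitrate", "throughput"]]
y = df["mos"]  # mean opinion score target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```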
2406.16855 | Yuang Peng | Yuang Peng, Yuxin Cui, Haomiao Tang, Zekun Qi, Runpei Dong, Jing Bai,
Chunrui Han, Zheng Ge, Xiangyu Zhang, Shu-Tao Xia | DreamBench++: A Human-Aligned Benchmark for Personalized Image
Generation | ICLR 2025, Project page: https://dreambenchplus.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalized image generation holds great promise in assisting humans in
everyday work and life due to its impressive ability to creatively generate
personalized content across various contexts. However, current evaluations
either are automated but misalign with humans or require human evaluations that
are time-consuming and expensive. In this work, we present DreamBench++, a
human-aligned benchmark automated by advanced multimodal GPT models.
Specifically, we systematically design the prompts to let GPT be both
human-aligned and self-aligned, empowered with task reinforcement. Further, we
construct a comprehensive dataset comprising diverse images and prompts. By
benchmarking 7 modern generative models, we demonstrate that DreamBench++
results in significantly more human-aligned evaluation, helping boost the
community with innovative findings.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2024 17:58:47 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 02:57:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Peng",
"Yuang",
""
],
[
"Cui",
"Yuxin",
""
],
[
"Tang",
"Haomiao",
""
],
[
"Qi",
"Zekun",
""
],
[
"Dong",
"Runpei",
""
],
[
"Bai",
"Jing",
""
],
[
"Han",
"Chunrui",
""
],
[
"Ge",
"Zheng",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Xia",
"Shu-Tao",
""
]
]
| TITLE: DreamBench++: A Human-Aligned Benchmark for Personalized Image
Generation
ABSTRACT: Personalized image generation holds great promise in assisting humans in
everyday work and life due to its impressive ability to creatively generate
personalized content across various contexts. However, current evaluations
either are automated but misalign with humans or require human evaluations that
are time-consuming and expensive. In this work, we present DreamBench++, a
human-aligned benchmark automated by advanced multimodal GPT models.
Specifically, we systematically design the prompts to let GPT be both
human-aligned and self-aligned, empowered with task reinforcement. Further, we
construct a comprehensive dataset comprising diverse images and prompts. By
benchmarking 7 modern generative models, we demonstrate that DreamBench++
results in significantly more human-aligned evaluation, helping boost the
community with innovative findings.
| new_dataset | 0.950915 |
2406.17055 | Ryan Liu | Ryan Liu, Jiayi Geng, Joshua C. Peterson, Ilia Sucholutsky, Thomas L.
Griffiths | Large Language Models Assume People are More Rational than We Really are | null | null | null | null | cs.CL cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In order for AI systems to communicate effectively with people, they must
understand how we make decisions. However, people's decisions are not always
rational, so the implicit internal models of human decision-making in Large
Language Models (LLMs) must account for this. Previous empirical evidence seems
to suggest that these implicit models are accurate -- LLMs offer believable
proxies of human behavior, acting how we expect humans would in everyday
interactions. However, by comparing LLM behavior and predictions to a large
dataset of human decisions, we find that this is actually not the case: when
both simulating and predicting people's choices, a suite of cutting-edge LLMs
(GPT-4o & 4-Turbo, Llama-3-8B & 70B, Claude 3 Opus) assume that people are more
rational than we really are. Specifically, these models deviate from human
behavior and align more closely with a classic model of rational choice --
expected value theory. Interestingly, people also tend to assume that other
people are rational when interpreting their behavior. As a consequence, when we
compare the inferences that LLMs and people draw from the decisions of others
using another psychological dataset, we find that these inferences are highly
correlated. Thus, the implicit decision-making models of LLMs appear to be
aligned with the human expectation that other people will act rationally,
rather than with how people actually act.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2024 18:15:27 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Jul 2024 17:29:54 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Jul 2024 14:22:26 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Mar 2025 17:42:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Ryan",
""
],
[
"Geng",
"Jiayi",
""
],
[
"Peterson",
"Joshua C.",
""
],
[
"Sucholutsky",
"Ilia",
""
],
[
"Griffiths",
"Thomas L.",
""
]
]
| TITLE: Large Language Models Assume People are More Rational than We Really are
ABSTRACT: In order for AI systems to communicate effectively with people, they must
understand how we make decisions. However, people's decisions are not always
rational, so the implicit internal models of human decision-making in Large
Language Models (LLMs) must account for this. Previous empirical evidence seems
to suggest that these implicit models are accurate -- LLMs offer believable
proxies of human behavior, acting how we expect humans would in everyday
interactions. However, by comparing LLM behavior and predictions to a large
dataset of human decisions, we find that this is actually not the case: when
both simulating and predicting people's choices, a suite of cutting-edge LLMs
(GPT-4o & 4-Turbo, Llama-3-8B & 70B, Claude 3 Opus) assume that people are more
rational than we really are. Specifically, these models deviate from human
behavior and align more closely with a classic model of rational choice --
expected value theory. Interestingly, people also tend to assume that other
people are rational when interpreting their behavior. As a consequence, when we
compare the inferences that LLMs and people draw from the decisions of others
using another psychological dataset, we find that these inferences are highly
correlated. Thus, the implicit decision-making models of LLMs appear to be
aligned with the human expectation that other people will act rationally,
rather than with how people actually act.
| no_new_dataset | 0.880181 |
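The record above contrasts human choices with expected value theory. A tiny worked example of what that "rational" baseline computes; the gamble payoffs are made up for illustration.

```python
# Expected value theory picks the option with the highest probability-weighted payoff.
gambles = {
    "A": [(0.8, 40), (0.2, 0)],   # (probability, payoff): risky option
    "B": [(1.0, 30)],             # sure thing
}
ev = {name: sum(p * v for p, v in outcomes) for name, outcomes in gambles.items()}
print(ev)                          # {'A': 32.0, 'B': 30.0}
print(max(ev, key=ev.get))         # 'A' under expected value theory; many humans prefer the certain 'B'
```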
2407.02437 | Zhe Yu | Yu Zhe, Jun Sakuma | Beyond Full Poisoning: Effective Availability Attacks with Partial
Perturbation | 10 pages; updated, previous title <Parameter Matching Attack:
Enhancing Practical Applicability of Availability Attacks> | null | null | null | cs.LG cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread use of publicly available datasets for training machine
learning models raises significant concerns about data misuse. Availability
attacks have emerged as a means for data owners to safeguard their data by
designing imperceptible perturbations that degrade model performance when
incorporated into training datasets. However, existing availability attacks are
ineffective when only a portion of the data can be perturbed. To address this
challenge, we propose a novel availability attack approach termed Parameter
Matching Attack (PMA). PMA is the first availability attack capable of causing
more than a 30\% performance drop when only a portion of data can be perturbed.
PMA optimizes perturbations so that when the model is trained on a mixture of
clean and perturbed data, the resulting model will approach a model designed to
perform poorly. Experimental results across four datasets demonstrate that PMA
outperforms existing methods, achieving significant model performance
degradation when a part of the training data is perturbed. Our code is
available in the supplementary materials.
| [
{
"version": "v1",
"created": "Tue, 2 Jul 2024 17:15:12 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 16:27:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhe",
"Yu",
""
],
[
"Sakuma",
"Jun",
""
]
]
| TITLE: Beyond Full Poisoning: Effective Availability Attacks with Partial
Perturbation
ABSTRACT: The widespread use of publicly available datasets for training machine
learning models raises significant concerns about data misuse. Availability
attacks have emerged as a means for data owners to safeguard their data by
designing imperceptible perturbations that degrade model performance when
incorporated into training datasets. However, existing availability attacks are
ineffective when only a portion of the data can be perturbed. To address this
challenge, we propose a novel availability attack approach termed Parameter
Matching Attack (PMA). PMA is the first availability attack capable of causing
more than a 30\% performance drop when only a portion of data can be perturbed.
PMA optimizes perturbations so that when the model is trained on a mixture of
clean and perturbed data, the resulting model will approach a model designed to
perform poorly. Experimental results across four datasets demonstrate that PMA
outperforms existing methods, achieving significant model performance
degradation when a part of the training data is perturbed. Our code is
available in the supplementary materials.
| no_new_dataset | 0.946051 |
2407.02883 | Xiangyang Li | Xiangyang Li, Kuicai Dong, Yi Quan Lee, Wei Xia, Hao Zhang, Xinyi Dai,
Yasheng Wang, Ruiming Tang | CoIR: A Comprehensive Benchmark for Code Information Retrieval Models | null | null | null | null | cs.IR cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | Despite the substantial success of Information Retrieval (IR) in various NLP
tasks, most IR systems predominantly handle queries and corpora in natural
language, neglecting the domain of code retrieval. Code retrieval is critically
important yet remains under-explored, with existing methods and benchmarks
inadequately representing the diversity of code in various domains and tasks.
Addressing this gap, we present COIR (Code Information Retrieval Benchmark), a
robust and comprehensive benchmark specifically designed to assess code
retrieval capabilities. COIR comprises ten meticulously curated code datasets,
spanning eight distinctive retrieval tasks across seven diverse domains. We
first discuss the construction of COIR and its diverse dataset composition.
Further, we evaluate nine widely used retrieval models using COIR, uncovering
significant difficulties in performing code retrieval tasks even with
state-of-the-art systems. To facilitate easy adoption and integration within
existing research workflows, COIR has been developed as a user-friendly Python
framework, readily installable via pip. It shares the same data schema as other
popular benchmarks like MTEB and BEIR, enabling seamless cross-benchmark
evaluations. Through COIR, we aim to invigorate research in the code retrieval
domain, providing a versatile benchmarking tool that encourages further
development and exploration of code retrieval systems
https://github.com/CoIR-team/coir.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2024 07:58:20 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 08:48:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Xiangyang",
""
],
[
"Dong",
"Kuicai",
""
],
[
"Lee",
"Yi Quan",
""
],
[
"Xia",
"Wei",
""
],
[
"Zhang",
"Hao",
""
],
[
"Dai",
"Xinyi",
""
],
[
"Wang",
"Yasheng",
""
],
[
"Tang",
"Ruiming",
""
]
]
| TITLE: CoIR: A Comprehensive Benchmark for Code Information Retrieval Models
ABSTRACT: Despite the substantial success of Information Retrieval (IR) in various NLP
tasks, most IR systems predominantly handle queries and corpora in natural
language, neglecting the domain of code retrieval. Code retrieval is critically
important yet remains under-explored, with existing methods and benchmarks
inadequately representing the diversity of code in various domains and tasks.
Addressing this gap, we present COIR (Code Information Retrieval Benchmark), a
robust and comprehensive benchmark specifically designed to assess code
retrieval capabilities. COIR comprises ten meticulously curated code datasets,
spanning eight distinctive retrieval tasks across seven diverse domains. We
first discuss the construction of COIR and its diverse dataset composition.
Further, we evaluate nine widely used retrieval models using COIR, uncovering
significant difficulties in performing code retrieval tasks even with
state-of-the-art systems. To facilitate easy adoption and integration within
existing research workflows, COIR has been developed as a user-friendly Python
framework, readily installable via pip. It shares the same data schema as other
popular benchmarks like MTEB and BEIR, enabling seamless cross-benchmark
evaluations. Through COIR, we aim to invigorate research in the code retrieval
domain, providing a versatile benchmarking tool that encourages further
development and exploration of code retrieval systems
https://github.com/CoIR-team/coir.
| new_dataset | 0.724627 |
2407.08272 | Tomasz Kryjak | Dominika Przewlocka-Rus, Tomasz Kryjak, Marek Gorgon | PowerYOLO: Mixed Precision Model for Hardware Efficient Object Detection
with Event Data | The paper has been accepted for the 27th Euromicro Conference Series
on Digital System Design (DSD) 2024 | null | 10.1109/DSD64264.2024.00036 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | The performance of object detection systems in automotive solutions must be
as high as possible, with minimal response time and, due to the often
battery-powered operation, low energy consumption. When designing such
solutions, we therefore face challenges typical for embedded vision systems:
the problem of fitting algorithms of high memory and computational complexity
into small low-power devices. In this paper we propose PowerYOLO - a mixed
precision solution, which targets three essential elements of such application.
First, we propose a system based on a Dynamic Vision Sensor (DVS), a novel
sensor that offers low power requirements and operates well in conditions with
variable illumination. It is these features that may make event cameras a
preferential choice over frame cameras in some applications. Second, to ensure
high accuracy and low memory and computational complexity, we propose to use
4-bit width Powers-of-Two (PoT) quantisation for convolution weights of the
YOLO detector, with all other parameters quantised linearly. Finally, we
build on the PoT scheme and replace multiplication with bit-shifting to
increase the efficiency of hardware acceleration of such a solution, with a
special convolution-batch normalisation fusion scheme. The use of a specific
sensor with PoT quantisation and special batch normalisation fusion leads to a
unique system with an almost 8x reduction in memory complexity and vast
computational simplifications relative to a standard approach. This
efficient system achieves a high accuracy of mAP 0.301 on the GEN1 DVS dataset,
marking a new state-of-the-art for such a compressed model.
| [
{
"version": "v1",
"created": "Thu, 11 Jul 2024 08:17:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Przewlocka-Rus",
"Dominika",
""
],
[
"Kryjak",
"Tomasz",
""
],
[
"Gorgon",
"Marek",
""
]
]
| TITLE: PowerYOLO: Mixed Precision Model for Hardware Efficient Object Detection
with Event Data
ABSTRACT: The performance of object detection systems in automotive solutions must be
as high as possible, with minimal response time and, due to the often
battery-powered operation, low energy consumption. When designing such
solutions, we therefore face challenges typical for embedded vision systems:
the problem of fitting algorithms of high memory and computational complexity
into small low-power devices. In this paper we propose PowerYOLO - a mixed
precision solution, which targets three essential elements of such application.
First, we propose a system based on a Dynamic Vision Sensor (DVS), a novel
sensor that offers low power requirements and operates well in conditions with
variable illumination. It is these features that may make event cameras a
preferential choice over frame cameras in some applications. Second, to ensure
high accuracy and low memory and computational complexity, we propose to use
4-bit width Powers-of-Two (PoT) quantisation for convolution weights of the
YOLO detector, with all other parameters quantised linearly. Finally, we
build on the PoT scheme and replace multiplication with bit-shifting to
increase the efficiency of hardware acceleration of such a solution, with a
special convolution-batch normalisation fusion scheme. The use of a specific
sensor with PoT quantisation and special batch normalisation fusion leads to a
unique system with an almost 8x reduction in memory complexity and vast
computational simplifications relative to a standard approach. This
efficient system achieves a high accuracy of mAP 0.301 on the GEN1 DVS dataset,
marking a new state-of-the-art for such a compressed model.
| no_new_dataset | 0.937096 |
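The PowerYOLO record above relies on power-of-two (PoT) weight quantisation so that multiplications can later become bit-shifts. A rough sketch of 4-bit PoT rounding follows, with an assumed exponent range for weights in (-1, 1); it is not the paper's exact scheme.

```python
import numpy as np

def pot_quantize(w, exp_min=-7, exp_max=0):
    # Map each weight to a signed power of two: sign(w) * 2^round(log2(|w|)).
    # With 4 bits (1 sign bit + 3 exponent bits) there are 8 exponent levels.
    sign = np.sign(w)
    exp = np.round(np.log2(np.clip(np.abs(w), 1e-8, None)))
    exp = np.clip(exp, exp_min, exp_max)
    return sign * np.power(2.0, exp)

w = np.array([0.31, -0.07, 0.002, -0.9])
print(pot_quantize(w))  # [ 0.25  -0.0625  0.0078125  -1. ]
```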
2407.10998 | Do Dat | Do Huu Dat, Do Duc Anh, Anh Tuan Luu, Wray Buntine | Discrete Diffusion Language Model for Efficient Text Summarization | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | While diffusion models excel at conditionally generating high-quality images,
prior works in discrete diffusion models were not evaluated on conditional
long-text generation. In this work, we address the limitations of prior
discrete diffusion models for conditional long-text generation, particularly in
long sequence-to-sequence tasks such as abstractive summarization. Despite fast
decoding speeds compared to autoregressive methods, previous diffusion models
failed on the abstractive summarization task due to the incompatibility between
the backbone architectures and the random noising process. To overcome these
challenges, we introduce a novel semantic-aware noising process that enables
Transformer backbones to handle long sequences effectively. Additionally, we
propose CrossMamba, an adaptation of the Mamba model to the encoder-decoder
paradigm, which integrates seamlessly with the random absorbing noising
process. Our approaches achieve state-of-the-art performance on three benchmark
summarization datasets: Gigaword, CNN/DailyMail, and Arxiv, outperforming
existing discrete diffusion models on ROUGE metrics as well as possessing much
faster speed in inference compared to autoregressive models.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2024 09:55:22 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 08:45:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Dat",
"Do Huu",
""
],
[
"Anh",
"Do Duc",
""
],
[
"Luu",
"Anh Tuan",
""
],
[
"Buntine",
"Wray",
""
]
]
| TITLE: Discrete Diffusion Language Model for Efficient Text Summarization
ABSTRACT: While diffusion models excel at conditionally generating high-quality images,
prior works in discrete diffusion models were not evaluated on conditional
long-text generation. In this work, we address the limitations of prior
discrete diffusion models for conditional long-text generation, particularly in
long sequence-to-sequence tasks such as abstractive summarization. Despite fast
decoding speeds compared to autoregressive methods, previous diffusion models
failed on the abstractive summarization task due to the incompatibility between
the backbone architectures and the random noising process. To overcome these
challenges, we introduce a novel semantic-aware noising process that enables
Transformer backbones to handle long sequences effectively. Additionally, we
propose CrossMamba, an adaptation of the Mamba model to the encoder-decoder
paradigm, which integrates seamlessly with the random absorbing noising
process. Our approaches achieve state-of-the-art performance on three benchmark
summarization datasets: Gigaword, CNN/DailyMail, and Arxiv, outperforming
existing discrete diffusion models on ROUGE metrics as well as possessing much
faster speed in inference compared to autoregressive models.
| no_new_dataset | 0.952397 |
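The summarization record above mentions a random absorbing noising process. A toy version is sketched below: at noise level t, each token is independently replaced by an absorbing mask id with probability t; the token ids and mask id are arbitrary stand-ins.

```python
import numpy as np

def absorb_noise(tokens, t, mask_id=0, seed=0):
    # At noise level t in [0, 1], each position collapses to the mask token with prob t.
    rng = np.random.default_rng(seed)
    tokens = np.asarray(tokens)
    return np.where(rng.random(tokens.shape) < t, mask_id, tokens)

sentence = [11, 42, 7, 99, 23, 5]
print(absorb_noise(sentence, t=0.25))  # some positions become the mask id 0
print(absorb_noise(sentence, t=0.90))  # at high noise almost everything is absorbed
```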
2407.12538 | Jiaxiang He | Juan Song, Jiaxiang He, Lijie Yang, Mingtao Feng, and Keyan Wang | High Frequency Matters: Uncertainty Guided Image Compression with
Wavelet Diffusion | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion probabilistic models have recently achieved remarkable success in
generating high-quality images. However, balancing high perceptual quality and
low distortion remains challenging in image compression applications. To
address this issue, we propose an efficient Uncertainty-Guided image
compression approach with wavelet Diffusion (UGDiff). Our approach focuses on
high frequency compression via the wavelet transform, since high frequency
components are crucial for reconstructing image details. We introduce a wavelet
conditional diffusion model for high frequency prediction, followed by a
residual codec that compresses and transmits prediction residuals to the
decoder. This diffusion prediction-then-residual compression paradigm
effectively addresses the low fidelity issue common in direct reconstructions
by existing diffusion models. Considering the uncertainty from the random
sampling of the diffusion model, we further design an uncertainty-weighted
rate-distortion (R-D) loss tailored for residual compression, providing a more
rational trade-off between rate and distortion. Comprehensive experiments on
two benchmark datasets validate the effectiveness of UGDiff, surpassing
state-of-the-art image compression methods in R-D performance, perceptual
quality, subjective quality, and inference time. Our code is available at:
https://github.com/hejiaxiang1/Wavelet-Diffusion/tree/main
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2024 13:21:31 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 06:25:14 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Song",
"Juan",
""
],
[
"He",
"Jiaxiang",
""
],
[
"Yang",
"Lijie",
""
],
[
"Feng",
"Mingtao",
""
],
[
"Wang",
"Keyan",
""
]
]
| TITLE: High Frequency Matters: Uncertainty Guided Image Compression with
Wavelet Diffusion
ABSTRACT: Diffusion probabilistic models have recently achieved remarkable success in
generating high-quality images. However, balancing high perceptual quality and
low distortion remains challenging in image compression applications. To
address this issue, we propose an efficient Uncertainty-Guided image
compression approach with wavelet Diffusion (UGDiff). Our approach focuses on
high frequency compression via the wavelet transform, since high frequency
components are crucial for reconstructing image details. We introduce a wavelet
conditional diffusion model for high frequency prediction, followed by a
residual codec that compresses and transmits prediction residuals to the
decoder. This diffusion prediction-then-residual compression paradigm
effectively addresses the low fidelity issue common in direct reconstructions
by existing diffusion models. Considering the uncertainty from the random
sampling of the diffusion model, we further design an uncertainty-weighted
rate-distortion (R-D) loss tailored for residual compression, providing a more
rational trade-off between rate and distortion. Comprehensive experiments on
two benchmark datasets validate the effectiveness of UGDiff, surpassing
state-of-the-art image compression methods in R-D performance, perceptual
quality, subjective quality, and inference time. Our code is available at:
https://github.com/hejiaxiang1/Wavelet-Diffusion/tree/main
| no_new_dataset | 0.946695 |
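The UGDiff record above compresses high-frequency wavelet bands separately from the low-frequency approximation. A small sketch of the underlying 2D wavelet split using PyWavelets, with a random array standing in for an image:

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)              # stand-in for a grayscale image
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")   # low-frequency approximation + detail bands
rec = pywt.idwt2((LL, (LH, HL, HH)), "haar")

print(LL.shape, LH.shape)                   # (128, 128) (128, 128)
print(np.allclose(rec, img))                # True: the transform is invertible
```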
2407.17417 | Michael-Andrei Panaitescu-Liess | Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu,
Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein,
Furong Huang | Can Watermarking Large Language Models Prevent Copyrighted Text
Generation and Hide Training Data? | 19 pages, 7 figures. Published at AAAI 2025. Code will be available
at https://github.com/michael-panaitescu/watermark_copyright_aaai25 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have demonstrated impressive capabilities in
generating diverse and contextually rich text. However, concerns regarding
copyright infringement arise as LLMs may inadvertently produce copyrighted
material. In this paper, we first investigate the effectiveness of watermarking
LLMs as a deterrent against the generation of copyrighted texts. Through
theoretical analysis and empirical evaluation, we demonstrate that
incorporating watermarks into LLMs significantly reduces the likelihood of
generating copyrighted content, thereby addressing a critical concern in the
deployment of LLMs. However, we also find that watermarking can have unintended
consequences on Membership Inference Attacks (MIAs), which aim to discern
whether a sample was part of the pretraining dataset and may be used to detect
copyright violations. Surprisingly, we find that watermarking adversely affects
the success rate of MIAs, complicating the task of detecting copyrighted text
in the pretraining dataset. These results reveal the complex interplay between
different regulatory measures, which may impact each other in unforeseen ways.
Finally, we propose an adaptive technique to improve the success rate of a
recent MIA under watermarking. Our findings underscore the importance of
developing adaptive methods to study critical problems in LLMs with potential
legal implications.
| [
{
"version": "v1",
"created": "Wed, 24 Jul 2024 16:53:09 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 06:18:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Panaitescu-Liess",
"Michael-Andrei",
""
],
[
"Che",
"Zora",
""
],
[
"An",
"Bang",
""
],
[
"Xu",
"Yuancheng",
""
],
[
"Pathmanathan",
"Pankayaraj",
""
],
[
"Chakraborty",
"Souradip",
""
],
[
"Zhu",
"Sicheng",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Huang",
"Furong",
""
]
]
| TITLE: Can Watermarking Large Language Models Prevent Copyrighted Text
Generation and Hide Training Data?
ABSTRACT: Large Language Models (LLMs) have demonstrated impressive capabilities in
generating diverse and contextually rich text. However, concerns regarding
copyright infringement arise as LLMs may inadvertently produce copyrighted
material. In this paper, we first investigate the effectiveness of watermarking
LLMs as a deterrent against the generation of copyrighted texts. Through
theoretical analysis and empirical evaluation, we demonstrate that
incorporating watermarks into LLMs significantly reduces the likelihood of
generating copyrighted content, thereby addressing a critical concern in the
deployment of LLMs. However, we also find that watermarking can have unintended
consequences on Membership Inference Attacks (MIAs), which aim to discern
whether a sample was part of the pretraining dataset and may be used to detect
copyright violations. Surprisingly, we find that watermarking adversely affects
the success rate of MIAs, complicating the task of detecting copyrighted text
in the pretraining dataset. These results reveal the complex interplay between
different regulatory measures, which may impact each other in unforeseen ways.
Finally, we propose an adaptive technique to improve the success rate of a
recent MIA under watermarking. Our findings underscore the importance of
developing adaptive methods to study critical problems in LLMs with potential
legal implications.
| no_new_dataset | 0.942823 |
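The watermarking record above builds on decoding-time LLM watermarks. Below is a generic "green list" sketch of that family of schemes, not the paper's specific setup: a hash of the previous token selects a pseudo-random vocabulary subset whose logits are softly boosted.

```python
import numpy as np

def watermarked_sample(logits, prev_token, gamma=0.5, delta=2.0, key=0):
    vocab = len(logits)
    rng = np.random.default_rng(hash((key, prev_token)) % (2**32))
    green = rng.random(vocab) < gamma            # keyed pseudo-random green list
    biased = logits + delta * green              # favour green tokens by delta
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(vocab, p=probs))

print(watermarked_sample(np.zeros(50), prev_token=7))
```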
2407.19547 | Yushi Huang | Yushi Huang, Ruihao Gong, Xianglong Liu, Jing Liu, Yuhang Li, Jiwen
Lu, Dacheng Tao | Temporal Feature Matters: A Framework for Diffusion Model Quantization | arXiv admin note: substantial text overlap with arXiv:2311.16503 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models, widely used for image generation, face significant
challenges related to their broad applicability due to prolonged inference
times and high memory demands. Efficient Post-Training Quantization (PTQ) is
crucial to address these issues. However, unlike traditional models, diffusion
models critically rely on the time-step for the multi-round denoising.
Typically, each time-step is encoded into a hypersensitive temporal feature by
several modules. Despite this, existing PTQ methods do not optimize these
modules individually. Instead, they employ unsuitable reconstruction objectives
and complex calibration methods, leading to significant disturbances in the
temporal feature and denoising trajectory, as well as reduced compression
efficiency. To address these challenges, we introduce a novel quantization
framework that includes three strategies: 1) TIB-based Maintenance: Based on
our innovative Temporal Information Block (TIB) definition, Temporal
Information-aware Reconstruction (TIAR) and Finite Set Calibration (FSC) are
developed to efficiently align original temporal features. 2) Cache-based
Maintenance: Instead of indirect and complex optimization for the related
modules, pre-computing and caching quantized counterparts of temporal features
are developed to minimize errors. 3) Disturbance-aware Selection: Employ
temporal feature errors to guide a fine-grained selection between the two
maintenance strategies for further disturbance reduction. This framework
preserves most of the temporal information and ensures high-quality end-to-end
generation. Extensive testing on various datasets, diffusion models and
hardware confirms our superior performance and acceleration.
| [
{
"version": "v1",
"created": "Sun, 28 Jul 2024 17:46:15 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Aug 2024 20:43:10 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 17:43:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Yushi",
""
],
[
"Gong",
"Ruihao",
""
],
[
"Liu",
"Xianglong",
""
],
[
"Liu",
"Jing",
""
],
[
"Li",
"Yuhang",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Tao",
"Dacheng",
""
]
]
| TITLE: Temporal Feature Matters: A Framework for Diffusion Model Quantization
ABSTRACT: Diffusion models, widely used for image generation, face significant
challenges related to their broad applicability due to prolonged inference
times and high memory demands. Efficient Post-Training Quantization (PTQ) is
crucial to address these issues. However, unlike traditional models, diffusion
models critically rely on the time-step for the multi-round denoising.
Typically, each time-step is encoded into a hypersensitive temporal feature by
several modules. Despite this, existing PTQ methods do not optimize these
modules individually. Instead, they employ unsuitable reconstruction objectives
and complex calibration methods, leading to significant disturbances in the
temporal feature and denoising trajectory, as well as reduced compression
efficiency. To address these challenges, we introduce a novel quantization
framework that includes three strategies: 1) TIB-based Maintenance: Based on
our innovative Temporal Information Block (TIB) definition, Temporal
Information-aware Reconstruction (TIAR) and Finite Set Calibration (FSC) are
developed to efficiently align original temporal features. 2) Cache-based
Maintenance: Instead of indirect and complex optimization for the related
modules, pre-computing and caching quantized counterparts of temporal features
are developed to minimize errors. 3) Disturbance-aware Selection: Employ
temporal feature errors to guide a fine-grained selection between the two
maintenance strategies for further disturbance reduction. This framework
preserves most of the temporal information and ensures high-quality end-to-end
generation. Extensive testing on various datasets, diffusion models and
hardware confirms our superior performance and acceleration.
| no_new_dataset | 0.942929 |
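The quantization record above precomputes and caches temporal features because diffusion models only ever use a finite set of time-steps. A toy illustration of that cache-based idea; the embedding and update functions are placeholders, not the paper's modules.

```python
import math

def time_embedding(t, dim=8):
    # sinusoidal stand-in for the model's temporal feature
    return [math.sin(t / 10000 ** (i / dim)) for i in range(dim)]

timesteps = range(0, 1000, 50)                     # the finite set of denoising steps
cache = {t: time_embedding(t) for t in timesteps}  # precomputed (and quantisable) features

def denoise_step(x, t):
    temb = cache[t]                                # O(1) lookup instead of recomputation
    return [xi + 0.01 * sum(temb) for xi in x]     # placeholder update

print(denoise_step([0.0, 1.0], 50))
```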
2408.01716 | Fabian Schmidt | Fabian Schmidt, Constantin Blessing, Markus Enzweiler, Abhinav Valada | Visual-Inertial SLAM for Unstructured Outdoor Environments: Benchmarking
the Benefits and Computational Costs of Loop Closing | 22 pages, 8 figures, 7 tables | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simultaneous Localization and Mapping (SLAM) is essential for mobile
robotics, enabling autonomous navigation in dynamic, unstructured outdoor
environments without relying on external positioning systems. These
environments pose significant challenges due to variable lighting, weather
conditions, and complex terrain. Visual-Inertial SLAM has emerged as a
promising solution for robust localization under such conditions. This paper
benchmarks several open-source Visual-Inertial SLAM systems, including
traditional methods (ORB-SLAM3, VINS-Fusion, OpenVINS, Kimera, and SVO Pro) and
learning-based approaches (HFNet-SLAM, AirSLAM), to evaluate their performance
in unstructured natural outdoor settings. We focus on the impact of loop
closing on localization accuracy and computational demands, providing a
comprehensive analysis of these systems' effectiveness in real-world
environments and especially their application to embedded systems in outdoor
robotics. Our contributions further include an assessment of varying frame
rates on localization accuracy and computational load. The findings highlight
the importance of loop closing in improving localization accuracy while
managing computational resources efficiently, offering valuable insights for
optimizing Visual-Inertial SLAM systems for practical outdoor applications in
mobile robotics. The dataset and the benchmark code are available under
https://github.com/iis-esslingen/vi-slam_lc_benchmark.
| [
{
"version": "v1",
"created": "Sat, 3 Aug 2024 09:10:38 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 21:40:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Schmidt",
"Fabian",
""
],
[
"Blessing",
"Constantin",
""
],
[
"Enzweiler",
"Markus",
""
],
[
"Valada",
"Abhinav",
""
]
]
| TITLE: Visual-Inertial SLAM for Unstructured Outdoor Environments: Benchmarking
the Benefits and Computational Costs of Loop Closing
ABSTRACT: Simultaneous Localization and Mapping (SLAM) is essential for mobile
robotics, enabling autonomous navigation in dynamic, unstructured outdoor
environments without relying on external positioning systems. These
environments pose significant challenges due to variable lighting, weather
conditions, and complex terrain. Visual-Inertial SLAM has emerged as a
promising solution for robust localization under such conditions. This paper
benchmarks several open-source Visual-Inertial SLAM systems, including
traditional methods (ORB-SLAM3, VINS-Fusion, OpenVINS, Kimera, and SVO Pro) and
learning-based approaches (HFNet-SLAM, AirSLAM), to evaluate their performance
in unstructured natural outdoor settings. We focus on the impact of loop
closing on localization accuracy and computational demands, providing a
comprehensive analysis of these systems' effectiveness in real-world
environments and especially their application to embedded systems in outdoor
robotics. Our contributions further include an assessment of varying frame
rates on localization accuracy and computational load. The findings highlight
the importance of loop closing in improving localization accuracy while
managing computational resources efficiently, offering valuable insights for
optimizing Visual-Inertial SLAM systems for practical outdoor applications in
mobile robotics. The dataset and the benchmark code are available under
https://github.com/iis-esslingen/vi-slam_lc_benchmark.
| no_new_dataset | 0.887009 |
2408.04034 | Zhuofan Zhang | Zhuofan Zhang, Ziyu Zhu, Junhao Li, Pengxiang Li, Tianxu Wang, Tengyu
Liu, Xiaojian Ma, Yixin Chen, Baoxiong Jia, Siyuan Huang, Qing Li | Task-oriented Sequential Grounding and Navigation in 3D Scenes | website: https://sg-3d.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grounding natural language in 3D environments is a critical step toward
achieving robust 3D vision-language alignment. Current datasets and models for
3D visual grounding predominantly focus on identifying and localizing objects
from static, object-centric descriptions. These approaches do not adequately
address the dynamic and sequential nature of task-oriented scenarios. In this
work, we introduce a novel task: Task-oriented Sequential Grounding and
Navigation in 3D Scenes, where models must interpret step-by-step instructions
for daily activities by either localizing a sequence of target objects in
indoor scenes or navigating toward them within a 3D simulator. To facilitate
this task, we present SG3D, a large-scale dataset comprising 22,346 tasks with
112,236 steps across 4,895 real-world 3D scenes. The dataset is constructed by
combining RGB-D scans from various 3D scene datasets with an automated task
generation pipeline, followed by human verification for quality assurance. We
benchmark contemporary methods on SG3D, revealing the significant challenges in
understanding task-oriented context across multiple steps. Furthermore, we
propose SG-LLM, a state-of-the-art approach leveraging a stepwise grounding
paradigm to tackle the sequential grounding task. Our findings underscore the
need for further research to advance the development of more capable and
context-aware embodied agents.
| [
{
"version": "v1",
"created": "Wed, 7 Aug 2024 18:30:18 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 01:37:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Zhuofan",
""
],
[
"Zhu",
"Ziyu",
""
],
[
"Li",
"Junhao",
""
],
[
"Li",
"Pengxiang",
""
],
[
"Wang",
"Tianxu",
""
],
[
"Liu",
"Tengyu",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Chen",
"Yixin",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Huang",
"Siyuan",
""
],
[
"Li",
"Qing",
""
]
]
| TITLE: Task-oriented Sequential Grounding and Navigation in 3D Scenes
ABSTRACT: Grounding natural language in 3D environments is a critical step toward
achieving robust 3D vision-language alignment. Current datasets and models for
3D visual grounding predominantly focus on identifying and localizing objects
from static, object-centric descriptions. These approaches do not adequately
address the dynamic and sequential nature of task-oriented scenarios. In this
work, we introduce a novel task: Task-oriented Sequential Grounding and
Navigation in 3D Scenes, where models must interpret step-by-step instructions
for daily activities by either localizing a sequence of target objects in
indoor scenes or navigating toward them within a 3D simulator. To facilitate
this task, we present SG3D, a large-scale dataset comprising 22,346 tasks with
112,236 steps across 4,895 real-world 3D scenes. The dataset is constructed by
combining RGB-D scans from various 3D scene datasets with an automated task
generation pipeline, followed by human verification for quality assurance. We
benchmark contemporary methods on SG3D, revealing the significant challenges in
understanding task-oriented context across multiple steps. Furthermore, we
propose SG-LLM, a state-of-the-art approach leveraging a stepwise grounding
paradigm to tackle the sequential grounding task. Our findings underscore the
need for further research to advance the development of more capable and
context-aware embodied agents.
| new_dataset | 0.959383 |
2408.11779 | Minjun Zhu | Minjun Zhu, Yixuan Weng, Linyi Yang, Yue Zhang | Personality Alignment of Large Language Models | Accepted at ICLR 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Aligning large language models (LLMs) typically aims to reflect general human
values and behaviors, but they often fail to capture the unique characteristics
and preferences of individual users. To address this gap, we introduce the
concept of Personality Alignment. This approach tailors LLMs' responses and
decisions to match the specific preferences of individual users or closely
related groups. Inspired by psychometrics, we created the Personality Alignment
with Personality Inventories (PAPI) dataset, which includes data from over
320,000 real subjects across multiple personality assessments, including both
the Big Five Personality Factors and Dark Triad traits. This comprehensive
dataset enables quantitative evaluation of LLMs' alignment capabilities across
both positive and potentially problematic personality dimensions. Recognizing
the challenges of personality alignments, such as limited personal data,
diverse preferences, and scalability requirements, we developed an activation
intervention optimization method. This method enhances LLMs' ability to
efficiently align with individual behavioral preferences using minimal data and
computational resources. Remarkably, our method, PAS, achieves superior
performance while requiring only 1/5 of the optimization time compared to DPO,
offering practical value for personality alignment. Our work paves the way for
future AI systems to make decisions and reason in truly personality-aligned ways,
enhancing the relevance and meaning of AI interactions for each user and
advancing human-centered artificial intelligence. The dataset and code are
released at https://github.com/zhu-minjun/PAlign.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 17:09:00 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 14:01:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhu",
"Minjun",
""
],
[
"Weng",
"Yixuan",
""
],
[
"Yang",
"Linyi",
""
],
[
"Zhang",
"Yue",
""
]
]
| TITLE: Personality Alignment of Large Language Models
ABSTRACT: Aligning large language models (LLMs) typically aims to reflect general human
values and behaviors, but they often fail to capture the unique characteristics
and preferences of individual users. To address this gap, we introduce the
concept of Personality Alignment. This approach tailors LLMs' responses and
decisions to match the specific preferences of individual users or closely
related groups. Inspired by psychometrics, we created the Personality Alignment
with Personality Inventories (PAPI) dataset, which includes data from over
320,000 real subjects across multiple personality assessments, including both
the Big Five Personality Factors and Dark Triad traits. This comprehensive
dataset enables quantitative evaluation of LLMs' alignment capabilities across
both positive and potentially problematic personality dimensions. Recognizing
the challenges of personality alignments, such as limited personal data,
diverse preferences, and scalability requirements, we developed an activation
intervention optimization method. This method enhances LLMs' ability to
efficiently align with individual behavioral preferences using minimal data and
computational resources. Remarkably, our method, PAS, achieves superior
performance while requiring only 1/5 of the optimization time compared to DPO,
offering practical value for personality alignment. Our work paves the way for
future AI systems to make decisions and reason in truly personality-aligned ways,
enhancing the relevance and meaning of AI interactions for each user and
advancing human-centered artificial intelligence. The dataset and code are
released at https://github.com/zhu-minjun/PAlign.
| new_dataset | 0.963882 |
2408.12246 | Guoting Wei | Guoting Wei, Xia Yuan, Yu Liu, Zhenhao Shang, Xizhe Xue, Peng Wang,
Kelu Yao, Chunxia Zhao, Haokui Zhang, Rong Xiao | OVA-Det: Open Vocabulary Aerial Object Detection with Image-Text
Collaboration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aerial object detection plays a crucial role in numerous applications.
However, most existing methods focus on detecting predefined object categories,
limiting their applicability in real-world open scenarios. In this paper, we
extend aerial object detection to open scenarios through image-text
collaboration and propose OVA-Det, a highly efficient open-vocabulary detector
for aerial scenes. Specifically, we first introduce an image-to-text alignment
loss to replace the conventional category regression loss, thereby eliminating
category limitations. Next, we propose a lightweight text-guided strategy that
enhances the feature extraction process in the encoder and enables queries to
focus on class-relevant image features within the decoder, further improving
detection accuracy without introducing significant additional costs. Extensive
comparison experiments demonstrate that the proposed OVA-Det outperforms
state-of-the-art methods on all three widely used benchmark datasets by a large
margin. For instance, for zero-shot detection on DIOR, OVA-Det achieves 37.2
mAP and 79.8 Recall, 12.4 and 42.0 higher than that of YOLO-World. In addition,
the inference speed of OVA-Det reaches 36 FPS on RTX 4090, meeting the
real-time detection requirements for various applications. The code is
available at
\href{https://github.com/GT-Wei/OVA-Det}{https://github.com/GT-Wei/OVA-Det}.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2024 09:33:25 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 06:32:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wei",
"Guoting",
""
],
[
"Yuan",
"Xia",
""
],
[
"Liu",
"Yu",
""
],
[
"Shang",
"Zhenhao",
""
],
[
"Xue",
"Xizhe",
""
],
[
"Wang",
"Peng",
""
],
[
"Yao",
"Kelu",
""
],
[
"Zhao",
"Chunxia",
""
],
[
"Zhang",
"Haokui",
""
],
[
"Xiao",
"Rong",
""
]
]
| TITLE: OVA-Det: Open Vocabulary Aerial Object Detection with Image-Text
Collaboration
ABSTRACT: Aerial object detection plays a crucial role in numerous applications.
However, most existing methods focus on detecting predefined object categories,
limiting their applicability in real-world open scenarios. In this paper, we
extend aerial object detection to open scenarios through image-text
collaboration and propose OVA-Det, a highly efficient open-vocabulary detector
for aerial scenes. Specifically, we first introduce an image-to-text alignment
loss to replace the conventional category regression loss, thereby eliminating
category limitations. Next, we propose a lightweight text-guided strategy that
enhances the feature extraction process in the encoder and enables queries to
focus on class-relevant image features within the decoder, further improving
detection accuracy without introducing significant additional costs. Extensive
comparison experiments demonstrate that the proposed OVA-Det outperforms
state-of-the-art methods on all three widely used benchmark datasets by a large
margin. For instance, for zero-shot detection on DIOR, OVA-Det achieves 37.2
mAP and 79.8 Recall, 12.4 and 42.0 higher than that of YOLO-World. In addition,
the inference speed of OVA-Det reaches 36 FPS on RTX 4090, meeting the
real-time detection requirements for various applications. The code is
available at
\href{https://github.com/GT-Wei/OVA-Det}{https://github.com/GT-Wei/OVA-Det}.
| no_new_dataset | 0.946892 |
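The OVA-Det record above replaces category regression with an image-to-text alignment loss. A CLIP-style contrastive sketch of that general idea; the paper's exact loss may differ, and the features here are random.

```python
import numpy as np

def alignment_loss(region_feats, text_feats, labels, temperature=0.07):
    region_feats = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    text_feats = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = region_feats @ text_feats.T / temperature              # similarity to every class name
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()        # cross-entropy over classes

regions = np.random.randn(4, 32)     # 4 detected regions
classes = np.random.randn(5, 32)     # 5 class-name text embeddings
print(alignment_loss(regions, classes, labels=np.array([0, 2, 1, 4])))
```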
2408.12362 | Mashael Al-Duwais Mrs. | Mashael Al-Duwais, Hend Al-Khalifa and Abdulmalik Al-Salman | CLEANANERCorp: Identifying and Correcting Incorrect Labels in the
ANERcorp Dataset | Proceedings of the 6th Workshop on Open-Source Arabic Corpora and
Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and
Dialect to MSA Machine Translation @ LREC-COLING 2024 | ELRA and ICCL 2024 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Label errors are a common issue in machine learning datasets, particularly
for tasks such as Named Entity Recognition. Such label errors might hurt model
training, affect evaluation results, and lead to an inaccurate assessment of
model performance. In this study, we dived deep into one of the widely adopted
Arabic NER benchmark datasets (ANERcorp) and found a significant number of
annotation errors, missing labels, and inconsistencies. Therefore, in this
study, we conducted empirical research to understand these errors, correct them
and propose a cleaner version of the dataset named CLEANANERCorp. CLEANANERCorp
will serve the research community as a more accurate and consistent benchmark.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2024 12:59:05 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 14:06:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Al-Duwais",
"Mashael",
""
],
[
"Al-Khalifa",
"Hend",
""
],
[
"Al-Salman",
"Abdulmalik",
""
]
]
| TITLE: CLEANANERCorp: Identifying and Correcting Incorrect Labels in the
ANERcorp Dataset
ABSTRACT: Label errors are a common issue in machine learning datasets, particularly
for tasks such as Named Entity Recognition. Such label errors might hurt model
training, affect evaluation results, and lead to an inaccurate assessment of
model performance. In this study, we dived deep into one of the widely adopted
Arabic NER benchmark datasets (ANERcorp) and found a significant number of
annotation errors, missing labels, and inconsistencies. Therefore, in this
study, we conducted empirical research to understand these errors, correct them
and propose a cleaner version of the dataset named CLEANANERCorp. CLEANANERCorp
will serve the research community as a more accurate and consistent benchmark.
| new_dataset | 0.89303 |
2408.13679 | George Tang | George Tang, William Zhao, Logan Ford, David Benhaim, Paul Zhang | Segment Any Mesh | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose Segment Any Mesh, a novel zero-shot mesh part segmentation method
that overcomes the limitations of shape analysis-based, learning-based, and
contemporary approaches. Our approach operates in two phases: multimodal
rendering and 2D-to-3D lifting. In the first phase, multiview renders of the
mesh are individually processed through Segment Anything to generate 2D masks.
These masks are then lifted into a mesh part segmentation by associating masks
that refer to the same mesh part across the multiview renders. We find that
applying Segment Anything to multimodal feature renders of normals and shape
diameter scalars achieves better results than using only untextured renders of
meshes. By building our method on top of Segment Anything, we seamlessly
inherit any future improvements made to 2D segmentation. We compare our method
with a robust, well-evaluated shape analysis method, Shape Diameter Function,
and show that our method is comparable to or exceeds its performance. Since
current benchmarks contain limited object diversity, we also curate and release
a dataset of generated meshes and use it to demonstrate our method's improved
generalization over Shape Diameter Function via human evaluation. We release
the code and dataset at https://github.com/gtangg12/samesh
| [
{
"version": "v1",
"created": "Sat, 24 Aug 2024 22:05:04 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 21:11:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Tang",
"George",
""
],
[
"Zhao",
"William",
""
],
[
"Ford",
"Logan",
""
],
[
"Benhaim",
"David",
""
],
[
"Zhang",
"Paul",
""
]
]
| TITLE: Segment Any Mesh
ABSTRACT: We propose Segment Any Mesh, a novel zero-shot mesh part segmentation method
that overcomes the limitations of shape analysis-based, learning-based, and
contemporary approaches. Our approach operates in two phases: multimodal
rendering and 2D-to-3D lifting. In the first phase, multiview renders of the
mesh are individually processed through Segment Anything to generate 2D masks.
These masks are then lifted into a mesh part segmentation by associating masks
that refer to the same mesh part across the multiview renders. We find that
applying Segment Anything to multimodal feature renders of normals and shape
diameter scalars achieves better results than using only untextured renders of
meshes. By building our method on top of Segment Anything, we seamlessly
inherit any future improvements made to 2D segmentation. We compare our method
with a robust, well-evaluated shape analysis method, Shape Diameter Function,
and show that our method is comparable to or exceeds its performance. Since
current benchmarks contain limited object diversity, we also curate and release
a dataset of generated meshes and use it to demonstrate our method's improved
generalization over Shape Diameter Function via human evaluation. We release
the code and dataset at https://github.com/gtangg12/samesh
| new_dataset | 0.956756 |
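The Segment Any Mesh record above lifts per-view SAM masks into a mesh part segmentation by associating masks across renders. A toy majority-vote version of that lifting step; it glosses over the harder problem of reconciling mask ids across views.

```python
from collections import Counter

# Hypothetical per-view output: face_id -> mask label for faces visible in that render.
votes_per_view = [
    {0: "A", 1: "A", 2: "B"},
    {0: "A", 2: "B", 3: "B"},
    {1: "A", 2: "A", 3: "B"},
]

face_votes = {}
for view in votes_per_view:
    for face, mask in view.items():
        face_votes.setdefault(face, []).append(mask)

parts = {f: Counter(ms).most_common(1)[0][0] for f, ms in sorted(face_votes.items())}
print(parts)  # {0: 'A', 1: 'A', 2: 'B', 3: 'B'}
```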
2408.17267 | Baichuan Zhou | Baichuan Zhou, Haote Yang, Dairong Chen, Junyan Ye, Tianyi Bai, Jinhua
Yu, Songyang Zhang, Dahua Lin, Conghui He, Weijia Li | UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal
Models in Multi-View Urban Scenarios | 9 pages, 6 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent evaluations of Large Multimodal Models (LMMs) have explored their
capabilities in various domains, with only a few benchmarks specifically focusing
on urban environments. Moreover, existing urban benchmarks have been limited to
evaluating LMMs with basic region-level urban tasks under singular views,
leading to incomplete evaluations of LMMs' abilities in urban environments. To
address these issues, we present UrBench, a comprehensive benchmark designed
for evaluating LMMs in complex multi-view urban scenarios. UrBench contains
11.6K meticulously curated questions at both region-level and role-level that
cover 4 task dimensions: Geo-Localization, Scene Reasoning, Scene
Understanding, and Object Understanding, totaling 14 task types. In
constructing UrBench, we utilize data from existing datasets and additionally
collect data from 11 cities, creating new annotations using a cross-view
detection-matching method. With these images and annotations, we then integrate
LMM-based, rule-based, and human-based methods to construct large-scale
high-quality questions. Our evaluations on 21 LMMs show that current LMMs
struggle in the urban environments in several aspects. Even the best performing
GPT-4o lags behind humans in most tasks, ranging from simple tasks such as
counting to complex tasks such as orientation, localization and object
attribute recognition, with an average performance gap of 17.4%. Our benchmark
also reveals that LMMs exhibit inconsistent behaviors with different urban
views, especially with respect to understanding cross-view relations.
| [
{
"version": "v1",
"created": "Fri, 30 Aug 2024 13:13:35 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Dec 2024 07:25:51 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 09:48:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhou",
"Baichuan",
""
],
[
"Yang",
"Haote",
""
],
[
"Chen",
"Dairong",
""
],
[
"Ye",
"Junyan",
""
],
[
"Bai",
"Tianyi",
""
],
[
"Yu",
"Jinhua",
""
],
[
"Zhang",
"Songyang",
""
],
[
"Lin",
"Dahua",
""
],
[
"He",
"Conghui",
""
],
[
"Li",
"Weijia",
""
]
]
| TITLE: UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal
Models in Multi-View Urban Scenarios
ABSTRACT: Recent evaluations of Large Multimodal Models (LMMs) have explored their
capabilities in various domains, with only a few benchmarks specifically focusing
on urban environments. Moreover, existing urban benchmarks have been limited to
evaluating LMMs with basic region-level urban tasks under singular views,
leading to incomplete evaluations of LMMs' abilities in urban environments. To
address these issues, we present UrBench, a comprehensive benchmark designed
for evaluating LMMs in complex multi-view urban scenarios. UrBench contains
11.6K meticulously curated questions at both region-level and role-level that
cover 4 task dimensions: Geo-Localization, Scene Reasoning, Scene
Understanding, and Object Understanding, totaling 14 task types. In
constructing UrBench, we utilize data from existing datasets and additionally
collect data from 11 cities, creating new annotations using a cross-view
detection-matching method. With these images and annotations, we then integrate
LMM-based, rule-based, and human-based methods to construct large-scale
high-quality questions. Our evaluations on 21 LMMs show that current LMMs
struggle in the urban environments in several aspects. Even the best performing
GPT-4o lags behind humans in most tasks, ranging from simple tasks such as
counting to complex tasks such as orientation, localization and object
attribute recognition, with an average performance gap of 17.4%. Our benchmark
also reveals that LMMs exhibit inconsistent behaviors with different urban
views, especially with respect to understanding cross-view relations.
| no_new_dataset | 0.617959 |
2409.02066 | Anton Kozyriev | Anton Kozyriev, Vladimir Norkin | Robust Clustering on High-Dimensional Data with Stochastic Quantization | 22 pages, 5 figures, published in the International Scientific
Technical Journal "Problems of Control and Informatics" | International Scientific Technical Journal "Problems of Control
and Informatics" 70 (2025) 32-48 | 10.34229/1028-0979-2025-1-3 | null | cs.LG math.OC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper addresses the limitations of conventional vector quantization
algorithms, particularly K-Means and its variant K-Means++, and investigates
the Stochastic Quantization (SQ) algorithm as a scalable alternative for
high-dimensional unsupervised and semi-supervised learning tasks. Traditional
clustering algorithms often suffer from inefficient memory utilization during
computation, necessitating the loading of all data samples into memory, which
becomes impractical for large-scale datasets. While variants such as Mini-Batch
K-Means partially mitigate this issue by reducing memory usage, they lack
robust theoretical convergence guarantees due to the non-convex nature of
clustering problems. In contrast, the Stochastic Quantization algorithm
provides strong theoretical convergence guarantees, making it a robust
alternative for clustering tasks. We demonstrate the computational efficiency
and rapid convergence of the algorithm on an image classification problem with
partially labeled data, comparing model accuracy across various ratios of
labeled to unlabeled data. To address the challenge of high dimensionality, we
employ a Triplet Network to encode images into low-dimensional representations
in a latent space, which serve as a basis for comparing the efficiency of both
the Stochastic Quantization algorithm and traditional quantization algorithms.
Furthermore, we enhance the algorithm's convergence speed by introducing
modifications with an adaptive learning rate.
| [
{
"version": "v1",
"created": "Tue, 3 Sep 2024 17:13:55 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Sep 2024 15:35:53 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Oct 2024 14:21:22 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Nov 2024 09:50:15 GMT"
},
{
"version": "v5",
"created": "Sun, 9 Mar 2025 16:53:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kozyriev",
"Anton",
""
],
[
"Norkin",
"Vladimir",
""
]
]
| TITLE: Robust Clustering on High-Dimensional Data with Stochastic Quantization
ABSTRACT: This paper addresses the limitations of conventional vector quantization
algorithms, particularly K-Means and its variant K-Means++, and investigates
the Stochastic Quantization (SQ) algorithm as a scalable alternative for
high-dimensional unsupervised and semi-supervised learning tasks. Traditional
clustering algorithms often suffer from inefficient memory utilization during
computation, necessitating the loading of all data samples into memory, which
becomes impractical for large-scale datasets. While variants such as Mini-Batch
K-Means partially mitigate this issue by reducing memory usage, they lack
robust theoretical convergence guarantees due to the non-convex nature of
clustering problems. In contrast, the Stochastic Quantization algorithm
provides strong theoretical convergence guarantees, making it a robust
alternative for clustering tasks. We demonstrate the computational efficiency
and rapid convergence of the algorithm on an image classification problem with
partially labeled data, comparing model accuracy across various ratios of
labeled to unlabeled data. To address the challenge of high dimensionality, we
employ a Triplet Network to encode images into low-dimensional representations
in a latent space, which serve as a basis for comparing the efficiency of both
the Stochastic Quantization algorithm and traditional quantization algorithms.
Furthermore, we enhance the algorithm's convergence speed by introducing
modifications with an adaptive learning rate.
| no_new_dataset | 0.950595 |
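To make the center-update idea in the abstract above concrete, here is a minimal sketch of an SGD-style stochastic quantization step: one sample is drawn at a time, its nearest center is nudged toward it, and the full dataset never needs to sit in memory. The random initialization, the 1/sqrt(t) step-size schedule, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stochastic_quantization(sample_stream, n_centers, dim, n_steps, seed=0):
    """Minimal sketch: SGD-style update of quantization centers.

    Each step draws one sample, finds its nearest center, and moves that
    center toward the sample with a decaying learning rate. Only one sample
    is held in memory at a time, which is the memory advantage discussed above.
    """
    rng = np.random.default_rng(seed)
    centers = rng.normal(size=(n_centers, dim))  # illustrative random init
    for t in range(1, n_steps + 1):
        x = next(sample_stream)                              # one sample at a time
        j = np.argmin(np.linalg.norm(centers - x, axis=1))   # nearest center
        lr = 1.0 / np.sqrt(t)                                # assumed simple schedule
        centers[j] += lr * (x - centers[j])                  # SGD step on ||x - y_j||^2
    return centers

# Toy usage: an endless stream of 2-D Gaussian-mixture samples.
def toy_stream(rng=np.random.default_rng(1)):
    means = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
    while True:
        yield means[rng.integers(3)] + rng.normal(scale=0.3, size=2)

centers = stochastic_quantization(toy_stream(), n_centers=3, dim=2, n_steps=5000)
```

Swapping the fixed 1/sqrt(t) schedule for an adaptive rule is the kind of modification the abstract mentions for improving convergence speed.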
2409.03765 | Martin Obschonka | Martin Obschonka, Christian Fisch, Tharindu Fernando, Clinton Fookes | AI, Entrepreneurs, and Privacy: Deep Learning Outperforms Humans in
Detecting Entrepreneurs from Image Data | 46 pages, 2 tables, 11 figures | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Occupational outcomes like entrepreneurship are generally considered personal
information that individuals should have the autonomy to disclose. With the
advancing capability of artificial intelligence (AI) to infer private details
from widely available human-centric data (e.g., social media), it is crucial to
investigate whether AI can accurately extract private occupational information
from such data. In this study, we demonstrate that deep neural networks can
classify individuals as entrepreneurs with high accuracy based on facial images
sourced from Crunchbase, a premier source for entrepreneurship data. Utilizing
a dataset comprising facial images of 40,728 individuals, including both
entrepreneurs and non-entrepreneurs, we train a Convolutional Neural Network
(CNN) using a contrastive learning approach based on pairs of facial images
(one entrepreneur and one non-entrepreneur per pair). While human experts
(n=650) and trained participants (n=133) were unable to classify entrepreneurs
with accuracy above chance levels (>50%), our AI model achieved a
classification accuracy of 79.51%. Several robustness tests indicate that this
high level of accuracy is maintained under various conditions. These results
indicate privacy risks for entrepreneurs.
| [
{
"version": "v1",
"created": "Mon, 19 Aug 2024 22:45:46 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Feb 2025 16:12:29 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 14:23:39 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Obschonka",
"Martin",
""
],
[
"Fisch",
"Christian",
""
],
[
"Fernando",
"Tharindu",
""
],
[
"Fookes",
"Clinton",
""
]
]
| TITLE: AI, Entrepreneurs, and Privacy: Deep Learning Outperforms Humans in
Detecting Entrepreneurs from Image Data
ABSTRACT: Occupational outcomes like entrepreneurship are generally considered personal
information that individuals should have the autonomy to disclose. With the
advancing capability of artificial intelligence (AI) to infer private details
from widely available human-centric data (e.g., social media), it is crucial to
investigate whether AI can accurately extract private occupational information
from such data. In this study, we demonstrate that deep neural networks can
classify individuals as entrepreneurs with high accuracy based on facial images
sourced from Crunchbase, a premier source for entrepreneurship data. Utilizing
a dataset comprising facial images of 40,728 individuals, including both
entrepreneurs and non-entrepreneurs, we train a Convolutional Neural Network
(CNN) using a contrastive learning approach based on pairs of facial images
(one entrepreneur and one non-entrepreneur per pair). While human experts
(n=650) and trained participants (n=133) were unable to classify entrepreneurs
with accuracy above chance levels (>50%), our AI model achieved a
classification accuracy of 79.51%. Several robustness tests indicate that this
high level of accuracy is maintained under various conditions. These results
indicate privacy risks for entrepreneurs.
| no_new_dataset | 0.893681 |
2409.06714 | Jiaze E | Jiaze E, Srutarshi Banerjee, Tekin Bicer, Guannan Wang, Yanfu Zhang,
Bin Ren | FCDM: A Physics-Guided Bidirectional Frequency Aware Convolution and
Diffusion-Based Model for Sinogram Inpainting | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Computed tomography (CT) is widely used in industrial and medical imaging,
but sparse-view scanning reduces radiation exposure at the cost of incomplete
sinograms and challenging reconstruction. Existing RGB-based inpainting models
struggle with severe feature entanglement, while sinogram-specific methods
often lack explicit physics constraints. We propose FCDM, a physics-guided,
frequency-aware sinogram inpainting framework. It integrates bidirectional
frequency-domain convolutions to disentangle overlapping features while
enforcing total absorption and frequency-domain consistency via a
physics-informed loss. To enhance diffusion-based restoration, we introduce a
Fourier-enhanced mask embedding to encode angular dependencies and a
frequency-adaptive noise scheduling strategy that incorporates a soft row-wise
absorption constraint to maintain physical realism. Experiments on synthetic
and real-world datasets show that FCDM outperforms existing methods, achieving
SSIM over 0.95 and PSNR above 30 dB, with up to 33% and 29% improvements over
baselines.
| [
{
"version": "v1",
"created": "Mon, 26 Aug 2024 12:31:38 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Nov 2024 21:17:56 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 22:31:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"E",
"Jiaze",
""
],
[
"Banerjee",
"Srutarshi",
""
],
[
"Bicer",
"Tekin",
""
],
[
"Wang",
"Guannan",
""
],
[
"Zhang",
"Yanfu",
""
],
[
"Ren",
"Bin",
""
]
]
| TITLE: FCDM: A Physics-Guided Bidirectional Frequency Aware Convolution and
Diffusion-Based Model for Sinogram Inpainting
ABSTRACT: Computed tomography (CT) is widely used in industrial and medical imaging,
but sparse-view scanning reduces radiation exposure at the cost of incomplete
sinograms and challenging reconstruction. Existing RGB-based inpainting models
struggle with severe feature entanglement, while sinogram-specific methods
often lack explicit physics constraints. We propose FCDM, a physics-guided,
frequency-aware sinogram inpainting framework. It integrates bidirectional
frequency-domain convolutions to disentangle overlapping features while
enforcing total absorption and frequency-domain consistency via a
physics-informed loss. To enhance diffusion-based restoration, we introduce a
Fourier-enhanced mask embedding to encode angular dependencies and a
frequency-adaptive noise scheduling strategy that incorporates a soft row-wise
absorption constraint to maintain physical realism. Experiments on synthetic
and real-world datasets show that FCDM outperforms existing methods, achieving
SSIM over 0.95 and PSNR above 30 dB, with up to 33% and 29% improvements over
baselines.
| no_new_dataset | 0.946399 |
2409.07215 | Lucile Ter-Minassian | Jake Fawkes, Lucile Ter-Minassian, Desi Ivanova, Uri Shalit, Chris
Holmes | Is merging worth it? Securely evaluating the information gain for causal
dataset acquisition | Published at AISTATS 2025 | null | null | null | stat.ML cs.CR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Merging datasets across institutions is a lengthy and costly procedure,
especially when it involves private information. Data hosts may therefore want
to prospectively gauge which datasets are most beneficial to merge with,
without revealing sensitive information. For causal estimation this is
particularly challenging as the value of a merge will depend not only on the
reduction in epistemic uncertainty but also the improvement in overlap. To
address this challenge, we introduce the first cryptographically secure
information-theoretic approach for quantifying the value of a merge in the
context of heterogeneous treatment effect estimation. We do this by evaluating
the Expected Information Gain (EIG) and utilising multi-party computation to
ensure it can be securely computed without revealing any raw data. As we
demonstrate, this can be used with differential privacy (DP) to ensure privacy
requirements whilst retaining greater computational accuracy than naive DP alone.
To the best of our knowledge, this work presents the first privacy-preserving
method for dataset acquisition tailored to causal estimation. We demonstrate
the effectiveness and reliability of our method on a range of simulated and
realistic benchmarks. The code is available anonymously.
| [
{
"version": "v1",
"created": "Wed, 11 Sep 2024 12:17:01 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 14:23:27 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fawkes",
"Jake",
""
],
[
"Ter-Minassian",
"Lucile",
""
],
[
"Ivanova",
"Desi",
""
],
[
"Shalit",
"Uri",
""
],
[
"Holmes",
"Chris",
""
]
]
| TITLE: Is merging worth it? Securely evaluating the information gain for causal
dataset acquisition
ABSTRACT: Merging datasets across institutions is a lengthy and costly procedure,
especially when it involves private information. Data hosts may therefore want
to prospectively gauge which datasets are most beneficial to merge with,
without revealing sensitive information. For causal estimation this is
particularly challenging as the value of a merge will depend not only on the
reduction in epistemic uncertainty but also the improvement in overlap. To
address this challenge, we introduce the first cryptographically secure
information-theoretic approach for quantifying the value of a merge in the
context of heterogeneous treatment effect estimation. We do this by evaluating
the Expected Information Gain (EIG) and utilising multi-party computation to
ensure it can be securely computed without revealing any raw data. As we
demonstrate, this can be used with differential privacy (DP) to ensure privacy
requirements whilst retaining greater computational accuracy than naive DP alone.
To the best of our knowledge, this work presents the first privacy-preserving
method for dataset acquisition tailored to causal estimation. We demonstrate
the effectiveness and reliability of our method on a range of simulated and
realistic benchmarks. The code is available anonymously.
| no_new_dataset | 0.937268 |
2409.10452 | Nikolaos Nakis | Nikolaos Nakis, Chrysoula Kosma, Giannis Nikolentzos, Michalis
Chatzianastasis, Iakovos Evdaimon, Michalis Vazirgiannis | Signed Graph Autoencoder for Explainable and Polarization-Aware Network
Embeddings | AISTATS 2025 Camera-ready version | null | null | null | cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | Autoencoders based on Graph Neural Networks (GNNs) have garnered significant
attention in recent years for their ability to extract informative latent
representations, characterizing the structure of complex topologies, such as
graphs. Despite the prevalence of Graph Autoencoders, there has been limited
focus on developing and evaluating explainable neural-based graph generative
models specifically designed for signed networks. To address this gap, we
propose the Signed Graph Archetypal Autoencoder (SGAAE) framework. SGAAE
extracts node-level representations that express node memberships over distinct
extreme profiles, referred to as archetypes, within the network. This is
achieved by projecting the graph onto a learned polytope, which governs its
polarization. The framework employs a recently proposed likelihood for
analyzing signed networks based on the Skellam distribution, combined with
relational archetypal analysis and GNNs. Our experimental evaluation
demonstrates the SGAAEs' capability to successfully infer node memberships over
the different underlying latent structures while extracting competing
communities formed through the participation of the opposing views in the
network. Additionally, we introduce the 2-level network polarization problem
and show how SGAAE is able to characterize such a setting. The proposed model
achieves high performance in different tasks of signed link prediction across
four real-world datasets, outperforming several baseline models.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2024 16:40:40 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Jan 2025 20:24:47 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 15:02:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Nakis",
"Nikolaos",
""
],
[
"Kosma",
"Chrysoula",
""
],
[
"Nikolentzos",
"Giannis",
""
],
[
"Chatzianastasis",
"Michalis",
""
],
[
"Evdaimon",
"Iakovos",
""
],
[
"Vazirgiannis",
"Michalis",
""
]
]
| TITLE: Signed Graph Autoencoder for Explainable and Polarization-Aware Network
Embeddings
ABSTRACT: Autoencoders based on Graph Neural Networks (GNNs) have garnered significant
attention in recent years for their ability to extract informative latent
representations, characterizing the structure of complex topologies, such as
graphs. Despite the prevalence of Graph Autoencoders, there has been limited
focus on developing and evaluating explainable neural-based graph generative
models specifically designed for signed networks. To address this gap, we
propose the Signed Graph Archetypal Autoencoder (SGAAE) framework. SGAAE
extracts node-level representations that express node memberships over distinct
extreme profiles, referred to as archetypes, within the network. This is
achieved by projecting the graph onto a learned polytope, which governs its
polarization. The framework employs a recently proposed likelihood for
analyzing signed networks based on the Skellam distribution, combined with
relational archetypal analysis and GNNs. Our experimental evaluation
demonstrates SGAAE's capability to successfully infer node memberships over
the different underlying latent structures while extracting competing
communities formed through the participation of the opposing views in the
network. Additionally, we introduce the 2-level network polarization problem
and show how SGAAE is able to characterize such a setting. The proposed model
achieves high performance in different tasks of signed link prediction across
four real-world datasets, outperforming several baseline models.
| no_new_dataset | 0.946547 |
2409.15054 | Guoyang Zhao | Guoyang Zhao, Yuxuan Liu, Weiqing Qi, Fulong Ma, Ming Liu and Jun Ma | FisheyeDepth: A Real Scale Self-Supervised Depth Estimation Model for
Fisheye Camera | null | ICRA 2025 IEEE International Conference on Robotics and Automation | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate depth estimation is crucial for 3D scene comprehension in robotics
and autonomous vehicles. Fisheye cameras, known for their wide field of view,
have inherent geometric benefits. However, their use in depth estimation is
restricted by a scarcity of ground truth data and image distortions. We present
FisheyeDepth, a self-supervised depth estimation model tailored for fisheye
cameras. We incorporate a fisheye camera model into the projection and
reprojection stages during training to handle image distortions, thereby
improving depth estimation accuracy and training stability. Furthermore, we
incorporate real-scale pose information into the geometric projection between
consecutive frames, replacing the poses estimated by the conventional pose
network. Essentially, this method offers the necessary physical depth for
robotic tasks, and also streamlines the training and inference procedures.
Additionally, we devise a multi-channel output strategy to improve robustness
by adaptively fusing features at various scales, which reduces the noise from
real pose data. We demonstrate the superior performance and robustness of our
model in fisheye image depth estimation through evaluations on public datasets
and real-world scenarios. The project website is available at:
https://github.com/guoyangzhao/FisheyeDepth.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 14:31:42 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 06:45:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhao",
"Guoyang",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Qi",
"Weiqing",
""
],
[
"Ma",
"Fulong",
""
],
[
"Liu",
"Ming",
""
],
[
"Ma",
"Jun",
""
]
]
| TITLE: FisheyeDepth: A Real Scale Self-Supervised Depth Estimation Model for
Fisheye Camera
ABSTRACT: Accurate depth estimation is crucial for 3D scene comprehension in robotics
and autonomous vehicles. Fisheye cameras, known for their wide field of view,
have inherent geometric benefits. However, their use in depth estimation is
restricted by a scarcity of ground truth data and image distortions. We present
FisheyeDepth, a self-supervised depth estimation model tailored for fisheye
cameras. We incorporate a fisheye camera model into the projection and
reprojection stages during training to handle image distortions, thereby
improving depth estimation accuracy and training stability. Furthermore, we
incorporate real-scale pose information into the geometric projection between
consecutive frames, replacing the poses estimated by the conventional pose
network. Essentially, this method offers the necessary physical depth for
robotic tasks, and also streamlines the training and inference procedures.
Additionally, we devise a multi-channel output strategy to improve robustness
by adaptively fusing features at various scales, which reduces the noise from
real pose data. We demonstrate the superior performance and robustness of our
model in fisheye image depth estimation through evaluations on public datasets
and real-world scenarios. The project website is available at:
https://github.com/guoyangzhao/FisheyeDepth.
| no_new_dataset | 0.949529 |
2409.15077 | Guoyang Zhao | Guoyang Zhao, Fulong Ma, Weiqing Qi, Chenguang Zhang, Yuxuan Liu, Ming
Liu and Jun Ma | TSCLIP: Robust CLIP Fine-Tuning for Worldwide Cross-Regional Traffic
Sign Recognition | null | ICRA 2025 IEEE International Conference on Robotics and Automation | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic signs are a critical map feature for navigation and traffic control.
Nevertheless, current methods for traffic sign recognition rely on traditional
deep learning models, which typically suffer from significant performance
degradation under variations in data distribution across different
regions. In this paper, we propose TSCLIP, a robust fine-tuning approach with
the contrastive language-image pre-training (CLIP) model for worldwide
cross-regional traffic sign recognition. We first curate a cross-regional
traffic sign benchmark dataset by combining data from ten different sources.
Then, we propose a prompt engineering scheme tailored to the characteristics of
traffic signs, which involves specific scene descriptions and corresponding
rules to generate targeted text descriptions. During the TSCLIP fine-tuning
process, we implement adaptive dynamic weight ensembling (ADWE) to seamlessly
incorporate outcomes from each training iteration with the zero-shot CLIP
model. This approach ensures that the model retains its ability to generalize
while acquiring new knowledge about traffic signs. To the best knowledge of
authors, TSCLIP is the first contrastive language-image model used for the
worldwide cross-regional traffic sign recognition task. The project website is
available at: https://github.com/guoyangzhao/TSCLIP.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 14:51:26 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 06:34:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhao",
"Guoyang",
""
],
[
"Ma",
"Fulong",
""
],
[
"Qi",
"Weiqing",
""
],
[
"Zhang",
"Chenguang",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Liu",
"Ming",
""
],
[
"Ma",
"Jun",
""
]
]
| TITLE: TSCLIP: Robust CLIP Fine-Tuning for Worldwide Cross-Regional Traffic
Sign Recognition
ABSTRACT: Traffic signs are a critical map feature for navigation and traffic control.
Nevertheless, current methods for traffic sign recognition rely on traditional
deep learning models, which typically suffer from significant performance
degradation under variations in data distribution across different
regions. In this paper, we propose TSCLIP, a robust fine-tuning approach with
the contrastive language-image pre-training (CLIP) model for worldwide
cross-regional traffic sign recognition. We first curate a cross-regional
traffic sign benchmark dataset by combining data from ten different sources.
Then, we propose a prompt engineering scheme tailored to the characteristics of
traffic signs, which involves specific scene descriptions and corresponding
rules to generate targeted text descriptions. During the TSCLIP fine-tuning
process, we implement adaptive dynamic weight ensembling (ADWE) to seamlessly
incorporate outcomes from each training iteration with the zero-shot CLIP
model. This approach ensures that the model retains its ability to generalize
while acquiring new knowledge about traffic signs. To the best of the authors'
knowledge, TSCLIP is the first contrastive language-image model used for the
worldwide cross-regional traffic sign recognition task. The project website is
available at: https://github.com/guoyangzhao/TSCLIP.
| no_new_dataset | 0.950869 |
2409.15861 | Abdulfattah Safa | Abdulfattah Safa, G\"ozde G\"ul \c{S}ahin | A Zero-Shot Open-Vocabulary Pipeline for Dialogue Understanding | Accepted to NAACL 2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Dialogue State Tracking (DST) is crucial for understanding user needs and
executing appropriate system actions in task-oriented dialogues. The majority of
existing DST methods are designed to work within predefined ontologies and
assume the availability of gold domain labels, struggling with adapting to new
slot values. While Large Language Model (LLM)-based systems show promising
zero-shot DST performance, they either require extensive computational
resources or they underperform existing fully-trained systems, limiting their
practicality. To address these limitations, we propose a zero-shot,
open-vocabulary system that integrates domain classification and DST in a
single pipeline. Our approach includes reformulating DST as a
question-answering task for less capable models and employing self-refining
prompts for more adaptable ones. Our system does not rely on fixed slot values
defined in the ontology, allowing the system to adapt dynamically. We compare
our approach with existing SOTA, and show that it provides up to 20% better
Joint Goal Accuracy (JGA) over previous methods on datasets like Multi-WOZ 2.1,
with up to 90% fewer requests to the LLM API.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2024 08:33:41 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jan 2025 17:41:51 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 19:50:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Safa",
"Abdulfattah",
""
],
[
"Şahin",
"Gözde Gül",
""
]
]
| TITLE: A Zero-Shot Open-Vocabulary Pipeline for Dialogue Understanding
ABSTRACT: Dialogue State Tracking (DST) is crucial for understanding user needs and
executing appropriate system actions in task-oriented dialogues. The majority of
existing DST methods are designed to work within predefined ontologies and
assume the availability of gold domain labels, struggling with adapting to new
slot values. While Large Language Model (LLM)-based systems show promising
zero-shot DST performance, they either require extensive computational
resources or they underperform existing fully-trained systems, limiting their
practicality. To address these limitations, we propose a zero-shot,
open-vocabulary system that integrates domain classification and DST in a
single pipeline. Our approach includes reformulating DST as a
question-answering task for less capable models and employing self-refining
prompts for more adaptable ones. Our system does not rely on fixed slot values
defined in the ontology, allowing the system to adapt dynamically. We compare
our approach with existing SOTA, and show that it provides up to 20% better
Joint Goal Accuracy (JGA) over previous methods on datasets like Multi-WOZ 2.1,
with up to 90% fewer requests to the LLM API.
| no_new_dataset | 0.94887 |
2409.16178 | Dimitrije Anti\'c | Dimitrije Anti\'c, Georgios Paschalidis, Shashank Tripathi, Theo
Gevers, Sai Kumar Dwivedi, Dimitrios Tzionas | SDFit: 3D Object Pose and Shape by Fitting a Morphable SDF to a Single
Image | 12 pages, 10 figures, 5 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovering 3D object pose and shape from a single image is a challenging and
highly ill-posed problem. This is due to strong (self-)occlusions, depth
ambiguities, the vast intra- and inter-class shape variance, and lack of 3D
ground truth for natural images. While existing methods train deep networks on
synthetic datasets to predict 3D shapes, they often struggle to generalize to
real-world scenarios, lack an explicit feedback loop for refining noisy
estimates, and primarily focus on geometry without explicitly considering pixel
alignment. To this end, we make two key observations: (1) a robust solution
requires a model that imposes a strong category-specific shape prior to
constrain the search space, and (2) foundational models embed 2D images and 3D
shapes in joint spaces; both help resolve ambiguities. Hence, we propose SDFit,
a novel optimization framework that is built on three key innovations: First,
we use a learned morphable signed-distance-function (mSDF) model that acts as a
strong shape prior, thus constraining the shape space. Second, we use
foundational models to establish rich 2D-to-3D correspondences between image
features and the mSDF. Third, we develop a fitting pipeline that iteratively
refines both shape and pose, aligning the mSDF to the image. We evaluate SDFit
on the Pix3D, Pascal3D+, and COMIC image datasets. SDFit performs on par with
SotA methods, while demonstrating exceptional robustness to occlusions and
requiring no retraining for unseen images. Therefore, SDFit contributes new
insights for generalizing in the wild, paving the way for future research. Code
will be released.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2024 15:22:04 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 14:43:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Antić",
"Dimitrije",
""
],
[
"Paschalidis",
"Georgios",
""
],
[
"Tripathi",
"Shashank",
""
],
[
"Gevers",
"Theo",
""
],
[
"Dwivedi",
"Sai Kumar",
""
],
[
"Tzionas",
"Dimitrios",
""
]
]
| TITLE: SDFit: 3D Object Pose and Shape by Fitting a Morphable SDF to a Single
Image
ABSTRACT: Recovering 3D object pose and shape from a single image is a challenging and
highly ill-posed problem. This is due to strong (self-)occlusions, depth
ambiguities, the vast intra- and inter-class shape variance, and lack of 3D
ground truth for natural images. While existing methods train deep networks on
synthetic datasets to predict 3D shapes, they often struggle to generalize to
real-world scenarios, lack an explicit feedback loop for refining noisy
estimates, and primarily focus on geometry without explicitly considering pixel
alignment. To this end, we make two key observations: (1) a robust solution
requires a model that imposes a strong category-specific shape prior to
constrain the search space, and (2) foundational models embed 2D images and 3D
shapes in joint spaces; both help resolve ambiguities. Hence, we propose SDFit,
a novel optimization framework that is built on three key innovations: First,
we use a learned morphable signed-distance-function (mSDF) model that acts as a
strong shape prior, thus constraining the shape space. Second, we use
foundational models to establish rich 2D-to-3D correspondences between image
features and the mSDF. Third, we develop a fitting pipeline that iteratively
refines both shape and pose, aligning the mSDF to the image. We evaluate SDFit
on the Pix3D, Pascal3D+, and COMIC image datasets. SDFit performs on par with
SotA methods, while demonstrating exceptional robustness to occlusions and
requiring no retraining for unseen images. Therefore, SDFit contributes new
insights for generalizing in the wild, paving the way for future research. Code
will be released.
| no_new_dataset | 0.94428 |
2409.17582 | Naoya Hasegawa | Naoya Hasegawa, Issei Sato | Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware
Decision Boundary Adjustment | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Real-world data distributions are often highly skewed. This has spurred a
growing body of research on long-tailed recognition, aimed at addressing the
imbalance in training classification models. Among the methods studied,
multiplicative logit adjustment (MLA) stands out as a simple and effective
method. What theoretical foundation explains the effectiveness of this
heuristic method? We provide a justification for the effectiveness of MLA with
the following two-step process. First, we develop a theory that adjusts optimal
decision boundaries by estimating feature spread on the basis of neural
collapse. Second, we demonstrate that MLA approximates this optimal method.
Additionally, through experiments on long-tailed datasets, we illustrate the
practical usefulness of MLA under more realistic conditions. We also offer
experimental insights to guide the tuning of MLA hyperparameters.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 07:01:06 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2024 02:17:59 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 01:47:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hasegawa",
"Naoya",
""
],
[
"Sato",
"Issei",
""
]
]
| TITLE: Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware
Decision Boundary Adjustment
ABSTRACT: Real-world data distributions are often highly skewed. This has spurred a
growing body of research on long-tailed recognition, aimed at addressing the
imbalance in training classification models. Among the methods studied,
multiplicative logit adjustment (MLA) stands out as a simple and effective
method. What theoretical foundation explains the effectiveness of this
heuristic method? We provide a justification for the effectiveness of MLA with
the following two-step process. First, we develop a theory that adjusts optimal
decision boundaries by estimating feature spread on the basis of neural
collapse. Second, we demonstrate that MLA approximates this optimal method.
Additionally, through experiments on long-tailed datasets, we illustrate the
practical usefulness of MLA under more realistic conditions. We also offer
experimental insights to guide the tuning of MLA hyperparameters.
| no_new_dataset | 0.946349 |
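For illustration, the sketch below shows one common multiplicative way to adjust logits after training: each class logit is divided by that class's classifier weight norm raised to a power tau, so the rescaling is class-dependent and tau is the tunable hyperparameter mentioned above. This weight-norm variant and all names are our own assumptions and not necessarily the exact formulation analyzed in the paper.

```python
import numpy as np

def multiplicative_logit_adjustment(features, W, b, tau=1.0):
    """Sketch of a post-hoc multiplicative logit adjustment.

    Logits for class y are divided by ||w_y||**tau, i.e. multiplied by a
    class-dependent factor. Head classes typically learn larger weight norms,
    so tau > 0 shifts decision boundaries in favor of tail classes.
    The weight-norm form is an illustrative assumption.
    """
    logits = features @ W.T + b              # (n_samples, n_classes)
    norms = np.linalg.norm(W, axis=1)        # per-class weight norms
    adjusted = logits / (norms ** tau)       # multiplicative rescaling
    return adjusted.argmax(axis=1)

# Toy usage with random numbers standing in for a trained classifier.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16))
W, b = rng.normal(size=(10, 16)), np.zeros(10)
print(multiplicative_logit_adjustment(feats, W, b, tau=1.5))
```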
2409.18586 | Chinnawut Nantabut | Chinnawut Nantabut | Analysis of Truncated Singular Value Decomposition for Koopman
Operator-Based Lane Change Model | Submitted to the 21st International Conference on Informatics in
Control, Automation and Robotics (ICINCO 2024) | null | 10.5220/0012997800003822 | null | eess.SY cs.AI cs.RO cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding and modeling complex dynamic systems is crucial for enhancing
vehicle performance and safety, especially in the context of autonomous
driving. Recently, popular methods such as Koopman operators and their
approximators, known as Extended Dynamic Mode Decomposition (EDMD), have
emerged for their effectiveness in transforming strongly nonlinear system
behavior into linear representations. This allows them to be integrated with
conventional linear controllers. To achieve this, Singular Value Decomposition
(SVD), specifically truncated SVD, is employed to approximate Koopman operators
from extensive datasets efficiently. This study evaluates different basis
functions used in EDMD and truncation ranks for SVD in representing lane change
behavior models, aiming to balance computational efficiency with information
loss. The findings, however, suggest that the technique of truncated SVD does
not necessarily achieve substantial reductions in computational training time
and results in significant information loss.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 09:45:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Nantabut",
"Chinnawut",
""
]
]
| TITLE: Analysis of Truncated Singular Value Decomposition for Koopman
Operator-Based Lane Change Model
ABSTRACT: Understanding and modeling complex dynamic systems is crucial for enhancing
vehicle performance and safety, especially in the context of autonomous
driving. Recently, popular methods such as Koopman operators and their
approximators, known as Extended Dynamic Mode Decomposition (EDMD), have
emerged for their effectiveness in transforming strongly nonlinear system
behavior into linear representations. This allows them to be integrated with
conventional linear controllers. To achieve this, Singular Value Decomposition
(SVD), specifically truncated SVD, is employed to approximate Koopman operators
from extensive datasets efficiently. This study evaluates different basis
functions used in EDMD and ranks for truncated SVD for representing lane change
behavior models, aiming to balance computational efficiency with information
loss. The findings, however, suggest that the technique of truncated SVD does
not necessarily achieve substantial reductions in computational training time
and results in significant information loss.
| no_new_dataset | 0.943086 |
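A compact sketch of where truncated SVD enters EDMD: snapshot pairs are lifted through a dictionary of observables, and the finite-dimensional Koopman approximation comes from a rank-r pseudo-inverse, so the truncation rank r directly trades computation against information loss. The polynomial dictionary, the chosen rank, and the toy dynamics below are illustrative assumptions.

```python
import numpy as np

def edmd_truncated(X, Y, lift, r):
    """Sketch of EDMD with a rank-r truncated-SVD pseudo-inverse.

    X, Y : snapshot matrices with columns x_k and y_k = F(x_k).
    lift : dictionary of observables mapping a state column to a feature column.
    Returns a finite-dimensional Koopman approximation K with lift(Y) ~= K @ lift(X).
    """
    PsiX = np.column_stack([lift(x) for x in X.T])
    PsiY = np.column_stack([lift(y) for y in Y.T])
    U, s, Vt = np.linalg.svd(PsiX, full_matrices=False)
    U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]           # keep r leading modes
    PsiX_pinv_r = Vt_r.T @ np.diag(1.0 / s_r) @ U_r.T     # truncated pseudo-inverse
    return PsiY @ PsiX_pinv_r                             # K, (n_obs, n_obs)

# Illustrative polynomial dictionary for a 2-D lane-change-like state.
def poly_lift(x):
    return np.array([1.0, x[0], x[1], x[0] * x[1], x[0] ** 2, x[1] ** 2])

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 200))
Y = 0.9 * X + 0.05 * rng.normal(size=(2, 200))   # stand-in dynamics
K = edmd_truncated(X, Y, poly_lift, r=4)
```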
2410.00486 | Dapeng Feng | Dapeng Feng, Zhiqiang Chen, Yizhen Yin, Shipeng Zhong, Yuhua Qi,
Hongbo Chen | CaRtGS: Computational Alignment for Real-Time Gaussian Splatting SLAM | Accepted by IEEE Robotics and Automation Letters (RA-L) | null | 10.1109/LRA.2025.3544928 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simultaneous Localization and Mapping (SLAM) is pivotal in robotics, with
photorealistic scene reconstruction emerging as a key challenge. To address
this, we introduce Computational Alignment for Real-Time Gaussian Splatting
SLAM (CaRtGS), a novel method enhancing the efficiency and quality of
photorealistic scene reconstruction in real-time environments. Leveraging 3D
Gaussian Splatting (3DGS), CaRtGS achieves superior rendering quality and
processing speed, which is crucial for photorealistic scene reconstruction. Our
approach tackles computational misalignment in Gaussian Splatting SLAM
(GS-SLAM) through an adaptive strategy that enhances optimization iterations,
addresses long-tail optimization, and refines densification. Experiments on
Replica, TUM-RGBD, and VECtor datasets demonstrate CaRtGS's effectiveness in
achieving high-fidelity rendering with fewer Gaussian primitives. This work
propels SLAM towards real-time, photorealistic dense rendering, significantly
advancing photorealistic scene representation. For the benefit of the research
community, we release the code and accompanying videos on our project website:
https://dapengfeng.github.io/cartgs.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 08:18:12 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Oct 2024 14:07:56 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Feb 2025 12:14:13 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Mar 2025 02:15:03 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Feng",
"Dapeng",
""
],
[
"Chen",
"Zhiqiang",
""
],
[
"Yin",
"Yizhen",
""
],
[
"Zhong",
"Shipeng",
""
],
[
"Qi",
"Yuhua",
""
],
[
"Chen",
"Hongbo",
""
]
]
| TITLE: CaRtGS: Computational Alignment for Real-Time Gaussian Splatting SLAM
ABSTRACT: Simultaneous Localization and Mapping (SLAM) is pivotal in robotics, with
photorealistic scene reconstruction emerging as a key challenge. To address
this, we introduce Computational Alignment for Real-Time Gaussian Splatting
SLAM (CaRtGS), a novel method enhancing the efficiency and quality of
photorealistic scene reconstruction in real-time environments. Leveraging 3D
Gaussian Splatting (3DGS), CaRtGS achieves superior rendering quality and
processing speed, which is crucial for photorealistic scene reconstruction. Our
approach tackles computational misalignment in Gaussian Splatting SLAM
(GS-SLAM) through an adaptive strategy that enhances optimization iterations,
addresses long-tail optimization, and refines densification. Experiments on
Replica, TUM-RGBD, and VECtor datasets demonstrate CaRtGS's effectiveness in
achieving high-fidelity rendering with fewer Gaussian primitives. This work
propels SLAM towards real-time, photorealistic dense rendering, significantly
advancing photorealistic scene representation. For the benefit of the research
community, we release the code and accompanying videos on our project website:
https://dapengfeng.github.io/cartgs.
| no_new_dataset | 0.951006 |
2410.00982 | Liang Shi | Liang Shi, Boyu Jiang, Tong Zeng, Feng Guo | ScVLM: Enhancing Vision-Language Model for Safety-Critical Event
Understanding | To appear in Proceedings of the IEEE/CVF Winter Conference on
Applications of Computer Vision (WACV) 2025 | Proceedings of the Winter Conference on Applications of Computer
Vision (WACV) Workshops, 2025, pp. 1061-1071 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurately identifying, understanding and describing traffic safety-critical
events (SCEs), including crashes, tire strikes, and near-crashes, is crucial
for advanced driver assistance systems, automated driving systems, and traffic
safety. As SCEs are rare events, most general vision-language models (VLMs)
have not been trained sufficiently to link SCE videos and narratives, which
could lead to hallucinations and missing key safety characteristics. Here, we
introduce ScVLM, a novel hybrid methodology that integrates supervised and
contrastive learning techniques to classify the severity and types of SCEs, as
well as to generate narrative descriptions of SCEs. This approach utilizes
classification to enhance VLMs' comprehension of driving videos and improve the
rationality of event descriptions. The proposed approach is trained on and
evaluated by more than 8,600 SCEs from the Second Strategic Highway Research
Program Naturalistic Driving Study dataset, the largest publicly accessible
driving dataset with videos and SCE annotations. The results demonstrate the
superiority of the proposed approach in generating contextually accurate event
descriptions and mitigating VLM hallucinations. The code will be available at
https://github.com/datadrivenwheels/ScVLM.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 18:10:23 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jan 2025 16:27:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shi",
"Liang",
""
],
[
"Jiang",
"Boyu",
""
],
[
"Zeng",
"Tong",
""
],
[
"Guo",
"Feng",
""
]
]
| TITLE: ScVLM: Enhancing Vision-Language Model for Safety-Critical Event
Understanding
ABSTRACT: Accurately identifying, understanding and describing traffic safety-critical
events (SCEs), including crashes, tire strikes, and near-crashes, is crucial
for advanced driver assistance systems, automated driving systems, and traffic
safety. As SCEs are rare events, most general vision-language models (VLMs)
have not been trained sufficiently to link SCE videos and narratives, which
could lead to hallucinations and missing key safety characteristics. Here, we
introduce ScVLM, a novel hybrid methodology that integrates supervised and
contrastive learning techniques to classify the severity and types of SCEs, as
well as to generate narrative descriptions of SCEs. This approach utilizes
classification to enhance VLMs' comprehension of driving videos and improve the
rationality of event descriptions. The proposed approach is trained on and
evaluated by more than 8,600 SCEs from the Second Strategic Highway Research
Program Naturalistic Driving Study dataset, the largest publicly accessible
driving dataset with videos and SCE annotations. The results demonstrate the
superiority of the proposed approach in generating contextually accurate event
descriptions and mitigating VLM hallucinations. The code will be available at
https://github.com/datadrivenwheels/ScVLM.
| new_dataset | 0.963984 |
2410.02467 | Yunhao Chen | Yunhao Chen, Shujie Wang, Difan Zou, Xingjun Ma | Extracting Training Data from Unconditional Diffusion Models | null | null | null | null | cs.LG cs.CR cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As diffusion probabilistic models (DPMs) are being employed as mainstream
models for Generative Artificial Intelligence (GenAI), the study of their
memorization has attracted growing attention. Existing works in this field aim
to establish an understanding of whether or to what extent DPMs learn via
memorization. Such an understanding is crucial for identifying potential risks
of data leakage and copyright infringement in diffusion models and, more
importantly, for trustworthy application of GenAI. Existing works revealed that
conditional DPMs are more prone to memorize training data than unconditional
DPMs. Moreover, most data extraction methods developed so far target conditional
DPMs. Although unconditional DPMs are less prone to data extraction, further
investigation into these attacks remains essential since they serve as the
foundation for conditional models like Stable Diffusion, and exploring these
attacks will enhance our understanding of memorization in DPMs. In this work,
we propose a novel data extraction method named \textbf{Surrogate condItional
Data Extraction (SIDE)} that leverages a time-dependent classifier trained on
generated data as surrogate conditions to extract training data from
unconditional DPMs. Empirical results demonstrate that it can extract training
data in challenging scenarios where previous methods fail, and it is, on
average, over 50\% more effective across different scales of the CelebA
dataset. Furthermore, we provide a theoretical understanding of memorization in
both conditional and unconditional DPMs and why SIDE is effective.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 13:17:06 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 14:30:08 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Oct 2024 14:23:28 GMT"
},
{
"version": "v4",
"created": "Sun, 13 Oct 2024 16:51:04 GMT"
},
{
"version": "v5",
"created": "Thu, 28 Nov 2024 10:54:10 GMT"
},
{
"version": "v6",
"created": "Mon, 10 Mar 2025 13:57:12 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Yunhao",
""
],
[
"Wang",
"Shujie",
""
],
[
"Zou",
"Difan",
""
],
[
"Ma",
"Xingjun",
""
]
]
| TITLE: Extracting Training Data from Unconditional Diffusion Models
ABSTRACT: As diffusion probabilistic models (DPMs) are being employed as mainstream
models for Generative Artificial Intelligence (GenAI), the study of their
memorization has attracted growing attention. Existing works in this field aim
to establish an understanding of whether or to what extent DPMs learn via
memorization. Such an understanding is crucial for identifying potential risks
of data leakage and copyright infringement in diffusion models and, more
importantly, for trustworthy application of GenAI. Existing works revealed that
conditional DPMs are more prone to memorize training data than unconditional
DPMs. Moreover, most data extraction methods developed so far target conditional
DPMs. Although unconditional DPMs are less prone to data extraction, further
investigation into these attacks remains essential since they serve as the
foundation for conditional models like Stable Diffusion, and exploring these
attacks will enhance our understanding of memorization in DPMs. In this work,
we propose a novel data extraction method named \textbf{Surrogate condItional
Data Extraction (SIDE)} that leverages a time-dependent classifier trained on
generated data as surrogate conditions to extract training data from
unconditional DPMs. Empirical results demonstrate that it can extract training
data in challenging scenarios where previous methods fail, and it is, on
average, over 50\% more effective across different scales of the CelebA
dataset. Furthermore, we provide a theoretical understanding of memorization in
both conditional and unconditional DPMs and why SIDE is effective.
| no_new_dataset | 0.949529 |
2410.03427 | Marius Miron | Marius Miron, Sara Keen, Jen-Yu Liu, Benjamin Hoffman, Masato
Hagiwara, Olivier Pietquin, Felix Effenberger, Maddie Cusimano | Biodenoising: Animal Vocalization Denoising without Access to Clean Data | 5 pages, 2 tables | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Animal vocalization denoising is a task similar to human speech enhancement,
which is relatively well-studied. In contrast to the latter, it comprises a
higher diversity of sound production mechanisms and recording environments, and
this higher diversity is a challenge for existing models. Adding to the
challenge and in contrast to speech, we lack large and diverse datasets
comprising clean vocalizations. As a solution we use as training data
pseudo-clean targets, i.e. pre-denoised vocalizations, and segments of
background noise without a vocalization. We propose a train set derived from
bioacoustics datasets and repositories representing diverse species, acoustic
environments, and geographic regions. Additionally, we introduce a non-overlapping
benchmark set comprising clean vocalizations from different taxa and noise
samples. We show that denoising models (demucs, CleanUNet) trained on
pseudo-clean targets obtained with speech enhancement models achieve
competitive results on the benchmarking set. We publish data, code, libraries,
and demos at https://earthspecies.github.io/biodenoising/.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 13:37:07 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jan 2025 15:55:13 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 14:33:11 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Miron",
"Marius",
""
],
[
"Keen",
"Sara",
""
],
[
"Liu",
"Jen-Yu",
""
],
[
"Hoffman",
"Benjamin",
""
],
[
"Hagiwara",
"Masato",
""
],
[
"Pietquin",
"Olivier",
""
],
[
"Effenberger",
"Felix",
""
],
[
"Cusimano",
"Maddie",
""
]
]
| TITLE: Biodenoising: Animal Vocalization Denoising without Access to Clean Data
ABSTRACT: Animal vocalization denoising is a task similar to human speech enhancement,
which is relatively well-studied. In contrast to the latter, it comprises a
higher diversity of sound production mechanisms and recording environments, and
this higher diversity is a challenge for existing models. Adding to the
challenge and in contrast to speech, we lack large and diverse datasets
comprising clean vocalizations. As a solution we use as training data
pseudo-clean targets, i.e. pre-denoised vocalizations, and segments of
background noise without a vocalization. We propose a train set derived from
bioacoustics datasets and repositories representing diverse species, acoustic
environments, and geographic regions. Additionally, we introduce a non-overlapping
benchmark set comprising clean vocalizations from different taxa and noise
samples. We show that denoising models (demucs, CleanUNet) trained on
pseudo-clean targets obtained with speech enhancement models achieve
competitive results on the benchmarking set. We publish data, code, libraries,
and demos at https://earthspecies.github.io/biodenoising/.
| new_dataset | 0.864539 |
2410.03522 | Songsong Xiong | Songsong Xiong, Hamidreza Kasaei | HMT-Grasp: A Hybrid Mamba-Transformer Approach for Robot Grasping in
Cluttered Environments | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot grasping, whether handling isolated objects, cluttered items, or
stacked objects, plays a critical role in industrial and service applications.
However, current visual grasp detection methods based on Convolutional Neural
Networks (CNNs) and Vision Transformers (ViTs) often struggle to adapt to
diverse scenarios, as they tend to emphasize either local or global features
exclusively, neglecting complementary cues. In this paper, we propose a novel
hybrid Mamba-Transformer approach to address these challenges. Our method
improves robotic visual grasping by effectively capturing both global and local
information through the integration of Vision Mamba and parallel
convolutional-transformer blocks. This hybrid architecture significantly
improves adaptability, precision, and flexibility across various robotic tasks.
To ensure a fair evaluation, we conducted extensive experiments on the Cornell,
Jacquard, and OCID-Grasp datasets, ranging from simple to complex scenarios.
Additionally, we performed both simulated and real-world robotic experiments.
The results demonstrate that our method not only surpasses state-of-the-art
techniques on standard grasping datasets but also delivers strong performance
in both simulation and real-world robot applications.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 15:43:01 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 17:26:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xiong",
"Songsong",
""
],
[
"Kasaei",
"Hamidreza",
""
]
]
| TITLE: HMT-Grasp: A Hybrid Mamba-Transformer Approach for Robot Grasping in
Cluttered Environments
ABSTRACT: Robot grasping, whether handling isolated objects, cluttered items, or
stacked objects, plays a critical role in industrial and service applications.
However, current visual grasp detection methods based on Convolutional Neural
Networks (CNNs) and Vision Transformers (ViTs) often struggle to adapt to
diverse scenarios, as they tend to emphasize either local or global features
exclusively, neglecting complementary cues. In this paper, we propose a novel
hybrid Mamba-Transformer approach to address these challenges. Our method
improves robotic visual grasping by effectively capturing both global and local
information through the integration of Vision Mamba and parallel
convolutional-transformer blocks. This hybrid architecture significantly
improves adaptability, precision, and flexibility across various robotic tasks.
To ensure a fair evaluation, we conducted extensive experiments on the Cornell,
Jacquard, and OCID-Grasp datasets, ranging from simple to complex scenarios.
Additionally, we performed both simulated and real-world robotic experiments.
The results demonstrate that our method not only surpasses state-of-the-art
techniques on standard grasping datasets but also delivers strong performance
in both simulation and real-world robot applications.
| no_new_dataset | 0.94868 |
2410.04415 | Javier Mar\'in | Javier Marin | Geometric Analysis of Reasoning Trajectories: A Phase Space Approach to
Understanding Valid and Invalid Multi-Hop Reasoning in LLMs | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a novel approach to analyzing multi-hop reasoning in
language models through Hamiltonian mechanics. We map reasoning chains in
embedding spaces to Hamiltonian systems, defining a function that balances
reasoning progression (kinetic energy) against question relevance (potential
energy). Analyzing reasoning chains from a question-answering dataset reveals
that valid reasoning shows lower Hamiltonian energy values, representing an
optimal trade-off between information gathering and targeted answering. While
our framework offers complex visualization and quantification methods, the
claimed ability to "steer" or "improve" reasoning algorithms requires more
rigorous empirical validation, as the connection between physical systems and
reasoning remains largely metaphorical. Nevertheless, our analysis reveals
consistent geometric patterns distinguishing valid reasoning, suggesting this
physics-inspired approach offers promising diagnostic tools and new
perspectives on reasoning processes in large language models.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2024 09:09:14 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 08:51:36 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 13:54:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Marin",
"Javier",
""
]
]
| TITLE: Geometric Analysis of Reasoning Trajectories: A Phase Space Approach to
Understanding Valid and Invalid Multi-Hop Reasoning in LLMs
ABSTRACT: This paper proposes a novel approach to analyzing multi-hop reasoning in
language models through Hamiltonian mechanics. We map reasoning chains in
embedding spaces to Hamiltonian systems, defining a function that balances
reasoning progression (kinetic energy) against question relevance (potential
energy). Analyzing reasoning chains from a question-answering dataset reveals
that valid reasoning shows lower Hamiltonian energy values, representing an
optimal trade-off between information gathering and targeted answering. While
our framework offers complex visualization and quantification methods, the
claimed ability to "steer" or "improve" reasoning algorithms requires more
rigorous empirical validation, as the connection between physical systems and
reasoning remains largely metaphorical. Nevertheless, our analysis reveals
consistent geometric patterns distinguishing valid reasoning, suggesting this
physics-inspired approach offers promising diagnostic tools and new
perspectives on reasoning processes in large language models.
| no_new_dataset | 0.929376 |
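The kinetic-plus-potential decomposition described above can be illustrated on a trajectory of step embeddings: a kinetic term from step-to-step displacement and a potential term from (negative) similarity to the question, summed into a single energy. The cosine-style potential, the unit masses, and the toy chains are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def hamiltonian_energy(steps, question):
    """Sketch: energy of a reasoning trajectory in embedding space.

    steps    : (T, d) array of step embeddings (a reasoning chain).
    question : (d,) question embedding.
    The kinetic term measures progression between consecutive steps; the
    potential term penalizes drifting away from the question (via cosine
    similarity on normalized vectors).
    """
    steps = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    question = question / np.linalg.norm(question)
    velocities = np.diff(steps, axis=0)                 # step-to-step displacement
    kinetic = 0.5 * np.sum(velocities ** 2, axis=1)     # assumes unit mass
    potential = -(steps[1:] @ question)                 # low when on-topic
    return float(np.sum(kinetic + potential))           # total energy over the chain

# Toy usage: a chain that stays close to the question should score lower.
rng = np.random.default_rng(0)
q = rng.normal(size=64)
on_topic = q + 0.1 * rng.normal(size=(6, 64))
off_topic = rng.normal(size=(6, 64))
print(hamiltonian_energy(on_topic, q), hamiltonian_energy(off_topic, q))
```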
2410.05217 | Mingxuan Liu | Mingxuan Liu, Zhun Zhong, Jun Li, Gianni Franchi, Subhankar Roy, Elisa
Ricci | Organizing Unstructured Image Collections using Natural Language | Preprint. Project webpage: https://oatmealliu.github.io/opensmc.html | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Organizing unstructured visual data into semantic clusters is a key challenge
in computer vision. Traditional deep clustering approaches focus on a single
partition of data, while multiple clustering (MC) methods address this
limitation by uncovering distinct clustering solutions. The rise of large
language models (LLMs) and multimodal LLMs has enhanced MC by allowing users to
define text clustering criteria. However, expecting users to manually define
such criteria for large datasets before understanding the data is impractical.
In this work, we introduce the task of Open-ended Semantic Multiple Clustering,
which aims to automatically discover clustering criteria from large,
unstructured image collections, uncovering interpretable substructures without
requiring human input. Our framework, X-Cluster: eXploratory Clustering, uses
text as a proxy to concurrently reason over large image collections, discover
partitioning criteria, expressed in natural language, and reveal semantic
substructures. To evaluate X-Cluster, we introduce the COCO-4c and Food-4c
benchmarks, each containing four grouping criteria and ground-truth
annotations. We apply X-Cluster to various real-world applications, such as
discovering biases and analyzing social media image popularity, demonstrating
its utility as a practical tool for organizing large unstructured image
collections and revealing novel insights. We will open-source our code and
benchmarks for reproducibility and future research.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 17:21:46 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Oct 2024 18:47:46 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 20:32:56 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Mingxuan",
""
],
[
"Zhong",
"Zhun",
""
],
[
"Li",
"Jun",
""
],
[
"Franchi",
"Gianni",
""
],
[
"Roy",
"Subhankar",
""
],
[
"Ricci",
"Elisa",
""
]
]
| TITLE: Organizing Unstructured Image Collections using Natural Language
ABSTRACT: Organizing unstructured visual data into semantic clusters is a key challenge
in computer vision. Traditional deep clustering approaches focus on a single
partition of data, while multiple clustering (MC) methods address this
limitation by uncovering distinct clustering solutions. The rise of large
language models (LLMs) and multimodal LLMs has enhanced MC by allowing users to
define text clustering criteria. However, expecting users to manually define
such criteria for large datasets before understanding the data is impractical.
In this work, we introduce the task of Open-ended Semantic Multiple Clustering,
which aims to automatically discover clustering criteria from large,
unstructured image collections, uncovering interpretable substructures without
requiring human input. Our framework, X-Cluster: eXploratory Clustering, uses
text as a proxy to concurrently reason over large image collections, discover
partitioning criteria, expressed in natural language, and reveal semantic
substructures. To evaluate X-Cluster, we introduce the COCO-4c and Food-4c
benchmarks, each containing four grouping criteria and ground-truth
annotations. We apply X-Cluster to various real-world applications, such as
discovering biases and analyzing social media image popularity, demonstrating
its utility as a practical tool for organizing large unstructured image
collections and revealing novel insights. We will open-source our code and
benchmarks for reproducibility and future research.
| no_new_dataset | 0.944022 |
2410.05664 | Saemi Moon | Saemi Moon, Minjong Lee, Sangdon Park, Dongwoo Kim | Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for
Text-to-Image Diffusion Model Unlearning | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | As text-to-image diffusion models gain widespread commercial applications,
there are increasing concerns about unethical or harmful use, including the
unauthorized generation of copyrighted or sensitive content. Concept unlearning
has emerged as a promising solution to these challenges by removing undesired
and harmful information from the pre-trained model. However, the previous
evaluations primarily focus on whether target concepts are removed while
preserving image quality, neglecting the broader impacts such as unintended
side effects. In this work, we propose Holistic Unlearning Benchmark (HUB), a
comprehensive framework for evaluating unlearning methods across six key
dimensions: faithfulness, alignment, pinpoint-ness, multilingual robustness,
attack robustness, and efficiency. Our benchmark covers 33 target concepts,
with 16,000 prompts per concept, spanning four categories: Celebrity,
Style, Intellectual Property, and NSFW. Our investigation reveals that no
single method excels across all evaluation criteria. By releasing our
evaluation code and dataset, we hope to inspire further research in this area,
leading to more reliable and effective unlearning methods.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 03:30:39 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 05:17:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Moon",
"Saemi",
""
],
[
"Lee",
"Minjong",
""
],
[
"Park",
"Sangdon",
""
],
[
"Kim",
"Dongwoo",
""
]
]
| TITLE: Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for
Text-to-Image Diffusion Model Unlearning
ABSTRACT: As text-to-image diffusion models gain widespread commercial applications,
there are increasing concerns about unethical or harmful use, including the
unauthorized generation of copyrighted or sensitive content. Concept unlearning
has emerged as a promising solution to these challenges by removing undesired
and harmful information from the pre-trained model. However, the previous
evaluations primarily focus on whether target concepts are removed while
preserving image quality, neglecting the broader impacts such as unintended
side effects. In this work, we propose Holistic Unlearning Benchmark (HUB), a
comprehensive framework for evaluating unlearning methods across six key
dimensions: faithfulness, alignment, pinpoint-ness, multilingual robustness,
attack robustness, and efficiency. Our benchmark covers 33 target concepts,
with 16,000 prompts per concept, spanning four categories: Celebrity,
Style, Intellectual Property, and NSFW. Our investigation reveals that no
single method excels across all evaluation criteria. By releasing our
evaluation code and dataset, we hope to inspire further research in this area,
leading to more reliable and effective unlearning methods.
| new_dataset | 0.961061 |
2410.05966 | Tao Ren | Tao Ren, Zishi Zhang, Jinyang Jiang, Guanghao Li, Zeliang Zhang,
Mingqian Feng, Yijie Peng | FLOPS: Forward Learning with OPtimal Sampling | Published in the Thirteenth International Conference on Learning
Representations(ICLR 2025) | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given the limitations of backpropagation, perturbation-based gradient
computation methods have recently gained focus for learning with only forward
passes, also referred to as queries. Conventional forward learning consumes
enormous queries on each data point for accurate gradient estimation through
Monte Carlo sampling, which hinders the scalability of those algorithms.
However, not all data points deserve equal queries for gradient estimation. In
this paper, we study the problem of improving the forward learning efficiency
from a novel perspective: how to reduce the gradient estimation variance with
minimum cost? For this, we propose to allocate the optimal number of queries
over each data point in one batch during training to achieve a good balance
between estimation accuracy and computational efficiency. Specifically, with a
simplified proxy objective and a reparameterization technique, we derive a
novel plug-and-play query allocator with minimal parameters. Theoretical
analysis is carried out to verify its optimality. We conduct extensive
experiments for fine-tuning Vision Transformers on various datasets and further
deploy the allocator to two black-box applications: prompt tuning and
multimodal alignment for foundation models. All findings demonstrate that our
proposed allocator significantly enhances the scalability of forward-learning
algorithms, paving the way for real-world applications.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 12:16:12 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2024 11:15:39 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 12:06:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ren",
"Tao",
""
],
[
"Zhang",
"Zishi",
""
],
[
"Jiang",
"Jinyang",
""
],
[
"Li",
"Guanghao",
""
],
[
"Zhang",
"Zeliang",
""
],
[
"Feng",
"Mingqian",
""
],
[
"Peng",
"Yijie",
""
]
]
| TITLE: FLOPS: Forward Learning with OPtimal Sampling
ABSTRACT: Given the limitations of backpropagation, perturbation-based gradient
computation methods have recently gained focus for learning with only forward
passes, also referred to as queries. Conventional forward learning consumes
enormous queries on each data point for accurate gradient estimation through
Monte Carlo sampling, which hinders the scalability of those algorithms.
However, not all data points deserve equal queries for gradient estimation. In
this paper, we study the problem of improving the forward learning efficiency
from a novel perspective: how to reduce the gradient estimation variance with
minimum cost? For this, we propose to allocate the optimal number of queries
over each data point in one batch during training to achieve a good balance
between estimation accuracy and computational efficiency. Specifically, with a
simplified proxy objective and a reparameterization technique, we derive a
novel plug-and-play query allocator with minimal parameters. Theoretical
analysis is carried out to verify its optimality. We conduct extensive
experiments for fine-tuning Vision Transformers on various datasets and further
deploy the allocator to two black-box applications: prompt tuning and
multimodal alignment for foundation models. All findings demonstrate that our
proposed allocator significantly enhances the scalability of forward-learning
algorithms, paving the way for real-world applications.
| no_new_dataset | 0.940079 |
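The FLOPS record above concerns perturbation-based, query-only gradient estimation and allocating different numbers of queries to different data points. The sketch below is a generic two-point Monte Carlo forward-gradient estimator with a hand-picked per-example query budget; the budgets, the estimator, and all names are illustrative assumptions and do not reproduce the paper's learned allocator.

```python
import numpy as np

def forward_gradient(loss_fn, theta, n_queries, eps=1e-3, rng=None):
    """Two-point Monte Carlo estimate of grad loss_fn(theta) using only
    forward evaluations (no backpropagation). Illustrative only."""
    rng = rng or np.random.default_rng()
    d = theta.size
    grad = np.zeros(d)
    base = loss_fn(theta)
    for _ in range(n_queries):
        u = rng.standard_normal(d)
        grad += (loss_fn(theta + eps * u) - base) / eps * u
    return grad / max(n_queries, 1)

# Toy problem: least squares on a small batch, with a hypothetical
# per-example query budget instead of a uniform one.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))
y = rng.normal(size=4)
theta = np.zeros(3)

budgets = [8, 2, 2, 4]  # more queries for "harder" examples (assumed heuristic)
grad = np.zeros(3)
for xi, yi, q in zip(X, y, budgets):
    per_example = lambda th, xi=xi, yi=yi: 0.5 * (xi @ th - yi) ** 2
    grad += forward_gradient(per_example, theta, n_queries=q, rng=rng)
grad /= len(X)
theta -= 0.1 * grad  # one SGD-style update driven purely by forward passes
```

The point of non-uniform budgets is the one the abstract makes: spending queries where they reduce estimation variance the most, rather than equally across the batch.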
2410.07093 | Zhe Li | Zhe Li, Weihao Yuan, Yisheng He, Lingteng Qiu, Shenhao Zhu, Xiaodong
Gu, Weichao Shen, Yuan Dong, Zilong Dong, Laurence T. Yang | LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and
Captioning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language plays a vital role in the realm of human motion. Existing methods
have largely depended on CLIP text embeddings for motion generation, yet they
fall short in effectively aligning language and motion due to CLIP's
pretraining on static image-text pairs. This work introduces LaMP, a novel
Language-Motion Pretraining model, which transitions from a language-vision to
a more suitable language-motion latent space. It addresses key limitations by
generating motion-informative text embeddings, significantly enhancing the
relevance and semantics of generated motion sequences. With LaMP, we advance
three key tasks: text-to-motion generation, motion-text retrieval, and motion
captioning through aligned language-motion representation learning. For
generation, we utilize LaMP to provide the text condition instead of CLIP, and
an autoregressive masked prediction is designed to achieve mask modeling
without rank collapse in transformers. For retrieval, motion features from
LaMP's motion transformer interact with query tokens to retrieve text features
from the text transformer, and vice versa. For captioning, we finetune a large
language model with the language-informative motion features to develop a
strong motion captioning model. In addition, we introduce the LaMP-BertScore
metric to assess the alignment of generated motions with textual descriptions.
Extensive experimental results on multiple datasets demonstrate substantial
improvements over previous methods across all three tasks. The code of our
method will be made public.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 17:33:03 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 06:09:23 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Zhe",
""
],
[
"Yuan",
"Weihao",
""
],
[
"He",
"Yisheng",
""
],
[
"Qiu",
"Lingteng",
""
],
[
"Zhu",
"Shenhao",
""
],
[
"Gu",
"Xiaodong",
""
],
[
"Shen",
"Weichao",
""
],
[
"Dong",
"Yuan",
""
],
[
"Dong",
"Zilong",
""
],
[
"Yang",
"Laurence T.",
""
]
]
| TITLE: LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and
Captioning
ABSTRACT: Language plays a vital role in the realm of human motion. Existing methods
have largely depended on CLIP text embeddings for motion generation, yet they
fall short in effectively aligning language and motion due to CLIP's
pretraining on static image-text pairs. This work introduces LaMP, a novel
Language-Motion Pretraining model, which transitions from a language-vision to
a more suitable language-motion latent space. It addresses key limitations by
generating motion-informative text embeddings, significantly enhancing the
relevance and semantics of generated motion sequences. With LaMP, we advance
three key tasks: text-to-motion generation, motion-text retrieval, and motion
captioning through aligned language-motion representation learning. For
generation, we utilize LaMP to provide the text condition instead of CLIP, and
an autoregressive masked prediction is designed to achieve mask modeling
without rank collapse in transformers. For retrieval, motion features from
LaMP's motion transformer interact with query tokens to retrieve text features
from the text transformer, and vice versa. For captioning, we finetune a large
language model with the language-informative motion features to develop a
strong motion captioning model. In addition, we introduce the LaMP-BertScore
metric to assess the alignment of generated motions with textual descriptions.
Extensive experimental results on multiple datasets demonstrate substantial
improvements over previous methods across all three tasks. The code of our
method will be made public.
| no_new_dataset | 0.950411 |
2410.07516 | Pengyu Xue | Pengyu Xue, Linhao Wu, Zhen Yang, Zhongxing Yu, Zhi Jin, Ge Li, Yan
Xiao, Shuo Liu, Xinyi Li, Hongyi Lin and Jingwen Wu | Exploring and Lifting the Robustness of LLM-powered Automated Program
Repair with Metamorphic Testing | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, Large language model-powered Automated Program Repair (LAPR)
techniques have achieved state-of-the-art bug-fixing performance and have been
pervasively applied and studied in both industry and academia. Nonetheless,
LLMs have been shown to be highly sensitive to input prompts, with slight
differences in the expressions of semantically equivalent programs potentially
causing repair failures. Therefore, it is crucial to conduct robustness testing
on LAPR techniques before their practical deployment. However, related research
is scarce. To this end, we propose MT-LAPR, a Metamorphic Testing framework
exclusively for LAPR techniques, which summarizes nine widely-recognized
Metamorphic Relations (MRs) by developers across three perturbation levels:
token, statement, and block. Afterward, our proposed MRs are applied to buggy
code to generate test cases that are semantically equivalent yet may still affect
the inference of LAPR. Experiments are carried out on two extensively examined
bug-fixing datasets, i.e., Defect4J and QuixBugs, and four recently released
LLMs with bug-fixing capability, demonstrating that 34.4% - 48.5% of the test cases
expose the instability of LAPR techniques on average, showing the effectiveness
of MT-LAPR and uncovering a positive correlation between code readability and
the robustness of LAPR techniques. Inspired by the above findings, this paper
uses the test cases generated by MT-LAPR as samples to train a CodeT5-based
code editing model aiming at improving code readability and then embeds it into
the LAPR workflow as a data preprocessing step. Extensive experiments
demonstrate that this approach significantly enhances the robustness of LAPR by
49.32% at most.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 01:14:58 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 09:37:03 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xue",
"Pengyu",
""
],
[
"Wu",
"Linhao",
""
],
[
"Yang",
"Zhen",
""
],
[
"Yu",
"Zhongxing",
""
],
[
"Jin",
"Zhi",
""
],
[
"Li",
"Ge",
""
],
[
"Xiao",
"Yan",
""
],
[
"Liu",
"Shuo",
""
],
[
"Li",
"Xinyi",
""
],
[
"Lin",
"Hongyi",
""
],
[
"Wu",
"Jingwen",
""
]
]
| TITLE: Exploring and Lifting the Robustness of LLM-powered Automated Program
Repair with Metamorphic Testing
ABSTRACT: In recent years, Large language model-powered Automated Program Repair (LAPR)
techniques have achieved state-of-the-art bug-fixing performance and have been
pervasively applied and studied in both industry and academia. Nonetheless,
LLMs have been shown to be highly sensitive to input prompts, with slight
differences in the expressions of semantically equivalent programs potentially
causing repair failures. Therefore, it is crucial to conduct robustness testing
on LAPR techniques before their practical deployment. However, related research
is scarce. To this end, we propose MT-LAPR, a Metamorphic Testing framework
exclusively for LAPR techniques, which summarizes nine widely-recognized
Metamorphic Relations (MRs) by developers across three perturbation levels:
token, statement, and block. Afterward, our proposed MRs are applied to buggy
code to generate test cases that are semantically equivalent yet may still affect
the inference of LAPR. Experiments are carried out on two extensively examined
bug-fixing datasets, i.e., Defect4J and QuixBugs, and four recently released
LLMs with bug-fixing capability, demonstrating that 34.4% - 48.5% of the test cases
expose the instability of LAPR techniques on average, showing the effectiveness
of MT-LAPR and uncovering a positive correlation between code readability and
the robustness of LAPR techniques. Inspired by the above findings, this paper
uses the test cases generated by MT-LAPR as samples to train a CodeT5-based
code editing model aiming at improving code readability and then embeds it into
the LAPR workflow as a data preprocessing step. Extensive experiments
demonstrate that this approach significantly enhances the robustness of LAPR by
49.32% at most.
| no_new_dataset | 0.942507 |
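The MT-LAPR record above builds semantically equivalent variants of buggy code via metamorphic relations at the token, statement, and block levels. The toy sketch below shows one token-level transformation of that flavor, identifier renaming, which preserves program semantics; it is not one of the paper's nine MRs, only an illustration of the idea.

```python
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    """Toy token-level metamorphic transformation: rename a local
    identifier. The result is semantically equivalent to the input,
    so an ideal repair tool should behave identically on both."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

buggy = """
def max_of(values):
    best = values[0]
    for v in values:
        if v < best:   # bug: should be '>'
            best = v
    return best
"""

variant = rename_identifier(buggy, "best", "current_max")
# 'buggy' and 'variant' are equivalent programs; feeding both to an
# automated repair model and comparing its fixes is the metamorphic test.
print(variant)
```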
2410.08793 | Ana-Maria Bucur | Ana-Maria Bucur, Andreea-Codrina Moldovan, Krutika Parvatikar, Marcos
Zampieri, Ashiqur R. KhudaBukhsh and Liviu P. Dinu | On the State of NLP Approaches to Modeling Depression in Social Media: A
Post-COVID-19 Outlook | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computational approaches to predicting mental health conditions in social
media have been substantially explored in the past years. Multiple reviews have
been published on this topic, providing the community with comprehensive
accounts of the research in this area. Among all mental health conditions,
depression is the most widely studied due to its worldwide prevalence. The
COVID-19 global pandemic, starting in early 2020, has had a great impact on
mental health worldwide. Harsh measures employed by governments to slow the
spread of the virus (e.g., lockdowns) and the subsequent economic downturn
experienced in many countries have significantly impacted people's lives and
mental health. Studies have shown a substantial increase of above 50% in the
rate of depression in the population. In this context, we present a review on
natural language processing (NLP) approaches to modeling depression in social
media, providing the reader with a post-COVID-19 outlook. This review
contributes to the understanding of the impacts of the pandemic on modeling
depression in social media. We outline how state-of-the-art approaches and new
datasets have been used in the context of the COVID-19 pandemic. Finally, we
also discuss ethical issues in collecting and processing mental health data,
considering fairness, accountability, and ethics.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 13:20:54 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 22:09:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bucur",
"Ana-Maria",
""
],
[
"Moldovan",
"Andreea-Codrina",
""
],
[
"Parvatikar",
"Krutika",
""
],
[
"Zampieri",
"Marcos",
""
],
[
"KhudaBukhsh",
"Ashiqur R.",
""
],
[
"Dinu",
"Liviu P.",
""
]
]
| TITLE: On the State of NLP Approaches to Modeling Depression in Social Media: A
Post-COVID-19 Outlook
ABSTRACT: Computational approaches to predicting mental health conditions in social
media have been substantially explored in the past years. Multiple reviews have
been published on this topic, providing the community with comprehensive
accounts of the research in this area. Among all mental health conditions,
depression is the most widely studied due to its worldwide prevalence. The
COVID-19 global pandemic, starting in early 2020, has had a great impact on
mental health worldwide. Harsh measures employed by governments to slow the
spread of the virus (e.g., lockdowns) and the subsequent economic downturn
experienced in many countries have significantly impacted people's lives and
mental health. Studies have shown a substantial increase of above 50% in the
rate of depression in the population. In this context, we present a review on
natural language processing (NLP) approaches to modeling depression in social
media, providing the reader with a post-COVID-19 outlook. This review
contributes to the understanding of the impacts of the pandemic on modeling
depression in social media. We outline how state-of-the-art approaches and new
datasets have been used in the context of the COVID-19 pandemic. Finally, we
also discuss ethical issues in collecting and processing mental health data,
considering fairness, accountability, and ethics.
| no_new_dataset | 0.946001 |
2410.09018 | Qiang Sun | Zichao Yu, Qiang Sun, and Wenyi Zhang | Data-Driven Neural Estimation of Indirect Rate-Distortion Function | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rate-distortion function (RDF) has long been an information-theoretic
benchmark for data compression. As its natural extension, the indirect
rate-distortion function (iRDF) corresponds to the scenario where the encoder
can only access an observation correlated with the source, rather than the
source itself. Such a scenario is also relevant for modern applications like
remote sensing and goal-oriented communication. The iRDF can be reduced to a
standard RDF with the distortion measure replaced by its conditional
expectation conditioned upon the observation. This reduction, however, leads to
a non-trivial challenge when one needs to estimate the iRDF given datasets
only, because without statistical knowledge of the joint probability
distribution between the source and its observation, the conditional
expectation cannot be evaluated. To tackle this challenge, starting from the
well known fact that conditional expectation is the minimum mean-squared error
estimator and exploiting a Markovian relationship, we identify a functional
equivalence between the reduced distortion measure in the iRDF and the solution
of a quadratic loss minimization problem, which can be efficiently approximated
by a neural network approach. We proceed to reformulate the iRDF as a variational
problem corresponding to the Lagrangian representation of the iRDF curve, and
propose a neural network based approximate solution, integrating the
aforementioned distortion measure estimator. Asymptotic analysis guarantees
consistency of the solution, and numerical experimental results demonstrate the
accuracy and effectiveness of the algorithm.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 17:31:57 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 13:07:23 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yu",
"Zichao",
""
],
[
"Sun",
"Qiang",
""
],
[
"Zhang",
"Wenyi",
""
]
]
| TITLE: Data-Driven Neural Estimation of Indirect Rate-Distortion Function
ABSTRACT: The rate-distortion function (RDF) has long been an information-theoretic
benchmark for data compression. As its natural extension, the indirect
rate-distortion function (iRDF) corresponds to the scenario where the encoder
can only access an observation correlated with the source, rather than the
source itself. Such a scenario is also relevant for modern applications like
remote sensing and goal-oriented communication. The iRDF can be reduced to a
standard RDF with the distortion measure replaced by its conditional
expectation conditioned upon the observation. This reduction, however, leads to
a non-trivial challenge when one needs to estimate the iRDF given datasets
only, because without statistical knowledge of the joint probability
distribution between the source and its observation, the conditional
expectation cannot be evaluated. To tackle this challenge, starting from the
well known fact that conditional expectation is the minimum mean-squared error
estimator and exploiting a Markovian relationship, we identify a functional
equivalence between the reduced distortion measure in the iRDF and the solution
of a quadratic loss minimization problem, which can be efficiently approximated
by a neural network approach. We proceed to reformulate the iRDF as a variational
problem corresponding to the Lagrangian representation of the iRDF curve, and
propose a neural network based approximate solution, integrating the
aforementioned distortion measure estimator. Asymptotic analysis guarantees
consistency of the solution, and numerical experimental results demonstrate the
accuracy and effectiveness of the algorithm.
| no_new_dataset | 0.940353 |
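The record above relies on the standard reduction of the indirect RDF to a direct RDF with the distortion replaced by its conditional expectation, and on the fact that a conditional expectation minimizes a quadratic loss. A compact statement of these two facts, in my own notation, is:

```latex
% Reduction of the indirect RDF to a direct RDF (notation is mine):
% the surrogate distortion is the conditional expectation of the original
% distortion, which is also the minimizer of a quadratic loss -- the MMSE
% property the abstract exploits for data-driven neural estimation.
\tilde{d}(y,\hat{x})
  = \mathbb{E}\!\left[d(X,\hat{x}) \mid Y = y\right]
  = \operatorname*{arg\,min}_{g}\;
    \mathbb{E}\!\left[\bigl(d(X,\hat{x}) - g(Y,\hat{x})\bigr)^{2}\right],
\qquad
R_{X|Y}(D)
  = \min_{p(\hat{x}\mid y)\,:\,\mathbb{E}[\tilde{d}(Y,\hat{X})]\le D}
    I(Y;\hat{X}).
```

The second identity is what allows the surrogate distortion to be fit by minimizing a squared loss over samples, without explicit knowledge of the joint distribution of the source and its observation.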
2410.10855 | Hokin Deng | Yijiang Li, Qingying Gao, Tianwei Zhao, Bingyang Wang, Haoran Sun,
Haiyun Lyu, Dezhi Luo, Hokin Deng | Core Knowledge Deficits in Multi-Modal Language Models | Website with this
$\href{https://growing-ai-like-a-child.github.io/}{link}$ | null | null | null | cs.CL cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | While Multimodal Large Language Models (MLLMs) demonstrate impressive
abilities over high level perception and reasoning, their robustness in the
wild still lags behind humans and exhibits diminished efficacy on simple tasks
that are intuitive for humans. We examine the hypothesis that these
deficiencies stem from the absence of core knowledge, rudimentary cognitive
abilities innate to humans from early childhood. To probe core knowledge
representation in MLLMs, we draw from developmental cognitive sciences and
develop a large-scale benchmark, the CoreCognition dataset, encompassing 12 core
cognitive concepts. We evaluate 219 models with 10 different prompts, leading
to a total of 2409 data points for analysis. Our findings reveal core knowledge
deficits in early-developed core abilities, while models demonstrate
human-comparable performance in high-level cognition. Moreover, we find that
low-level abilities show little to no scaling, in stark contrast to high-level
abilities. Finally, we introduce an evaluation technique, Concept Hacking,
through which we demonstrate that MLLMs do not genuinely advance toward core
knowledge but instead rely on illusory understanding and shortcut learning as
they scale. Website with this
$\href{https://growing-ai-like-a-child.github.io/}{link}$.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2024 20:13:11 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Nov 2024 21:07:54 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 04:39:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Yijiang",
""
],
[
"Gao",
"Qingying",
""
],
[
"Zhao",
"Tianwei",
""
],
[
"Wang",
"Bingyang",
""
],
[
"Sun",
"Haoran",
""
],
[
"Lyu",
"Haiyun",
""
],
[
"Luo",
"Dezhi",
""
],
[
"Deng",
"Hokin",
""
]
]
| TITLE: Core Knowledge Deficits in Multi-Modal Language Models
ABSTRACT: While Multimodal Large Language Models (MLLMs) demonstrate impressive
abilities over high level perception and reasoning, their robustness in the
wild still lags behind humans and exhibits diminished efficacy on simple tasks
that are intuitive for humans. We examine the hypothesis that these
deficiencies stem from the absence of core knowledge, rudimentary cognitive
abilities innate to humans from early childhood. To probe core knowledge
representation in MLLMs, we draw from developmental cognitive sciences and
develop a large-scale benchmark, the CoreCognition dataset, encompassing 12 core
cognitive concepts. We evaluate 219 models with 10 different prompts, leading
to a total of 2409 data points for analysis. Our findings reveal core knowledge
deficits in early-developed core abilities, while models demonstrate
human-comparable performance in high-level cognition. Moreover, we find that
low-level abilities show little to no scaling, in stark contrast to high-level
abilities. Finally, we introduce an evaluation technique, Concept Hacking,
through which we demonstrate that MLLMs do not genuinely advance toward core
knowledge but instead rely on illusory understanding and shortcut learning as
they scale. Website with this
$\href{https://growing-ai-like-a-child.github.io/}{link}$.
| new_dataset | 0.959649 |
2410.13061 | Adrian Ciotinga | Adrian Ciotinga and YooJung Choi | Optimal Transport for Probabilistic Circuits | null | null | null | null | cs.AI cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | We introduce a novel optimal transport framework for probabilistic circuits
(PCs). While it has been shown recently that divergences between distributions
represented as certain classes of PCs can be computed tractably, to the best of
our knowledge, there is no existing approach to compute the Wasserstein
distance between probability distributions given by PCs. We propose a
Wasserstein-type distance that restricts the coupling measure of the associated
optimal transport problem to be a probabilistic circuit. We then develop an
algorithm for computing this distance by solving a series of small linear
programs and derive the circuit conditions under which this is tractable.
Furthermore, we show that we can easily retrieve the optimal transport plan
between the PCs from the solutions to these linear programs. Lastly, we study
the empirical Wasserstein distance between a PC and a dataset, and show that we
can estimate the PC parameters to minimize this distance through an efficient
iterative algorithm.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 21:42:16 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 20:03:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ciotinga",
"Adrian",
""
],
[
"Choi",
"YooJung",
""
]
]
| TITLE: Optimal Transport for Probabilistic Circuits
ABSTRACT: We introduce a novel optimal transport framework for probabilistic circuits
(PCs). While it has been shown recently that divergences between distributions
represented as certain classes of PCs can be computed tractably, to the best of
our knowledge, there is no existing approach to compute the Wasserstein
distance between probability distributions given by PCs. We propose a
Wasserstein-type distance that restricts the coupling measure of the associated
optimal transport problem to be a probabilistic circuit. We then develop an
algorithm for computing this distance by solving a series of small linear
programs and derive the circuit conditions under which this is tractable.
Furthermore, we show that we can easily retrieve the optimal transport plan
between the PCs from the solutions to these linear programs. Lastly, we study
the empirical Wasserstein distance between a PC and a dataset, and show that we
can estimate the PC parameters to minimize this distance through an efficient
iterative algorithm.
| no_new_dataset | 0.94256 |
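For readers unfamiliar with the underlying optimization, the record above builds on the discrete optimal transport linear program. The sketch below solves the generic, unrestricted version of that LP with SciPy; the paper's contribution, restricting the coupling to a probabilistic circuit and decomposing the problem into a series of small LPs, is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_wasserstein(a, b, C):
    """Solve min_P <C, P> s.t. P 1 = a, P^T 1 = b, P >= 0
    for discrete distributions a (m,), b (n,) and cost matrix C (m, n)."""
    m, n = C.shape
    # Row-sum constraints: sum_j P[i, j] = a[i]  (P flattened row-major)
    A_rows = np.kron(np.eye(m), np.ones((1, n)))
    # Column-sum constraints: sum_i P[i, j] = b[j]
    A_cols = np.kron(np.ones((1, m)), np.eye(n))
    A_eq = np.vstack([A_rows, A_cols])
    b_eq = np.concatenate([a, b])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(m, n)

a = np.array([0.5, 0.5])
b = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
cost, plan = discrete_wasserstein(a, b, C)
print(cost)   # 0.25 for this toy instance
```

The returned `plan` is the optimal transport plan; the circuit-restricted variant in the abstract constrains its structure so that it remains tractable for distributions represented as PCs.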
2410.14405 | Denitsa Saynova | Denitsa Saynova, Lovisa Hagstr\"om, Moa Johansson, Richard Johansson,
Marco Kuhlmann | Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of
Language Models for Fact Completion | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Language models (LMs) can make a correct prediction based on many possible
signals in a prompt, not all corresponding to recall of factual associations.
However, current interpretations of LMs fail to take this into account. For
example, given the query "Astrid Lindgren was born in" with the corresponding
completion "Sweden", no difference is made between whether the prediction was
based on knowing where the author was born or assuming that a person with a
Swedish-sounding name was born in Sweden. In this paper, we present a
model-specific recipe - PrISM - for constructing datasets with examples of four
different prediction scenarios: generic language modeling, guesswork,
heuristics recall and exact fact recall. We apply two popular interpretability
methods to the scenarios: causal tracing (CT) and information flow analysis. We
find that both yield distinct results for each scenario. Results for exact fact
recall and generic language modeling scenarios confirm previous conclusions
about the importance of mid-range MLP sublayers for fact recall, while results
for guesswork and heuristics indicate a critical role of late last token
position MLP sublayers. In summary, we contribute resources for a more
extensive and granular study of fact completion in LMs, together with analyses
that provide a more nuanced understanding of how LMs process fact-related
queries.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 12:08:07 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Oct 2024 08:44:13 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 12:47:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Saynova",
"Denitsa",
""
],
[
"Hagström",
"Lovisa",
""
],
[
"Johansson",
"Moa",
""
],
[
"Johansson",
"Richard",
""
],
[
"Kuhlmann",
"Marco",
""
]
]
| TITLE: Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of
Language Models for Fact Completion
ABSTRACT: Language models (LMs) can make a correct prediction based on many possible
signals in a prompt, not all corresponding to recall of factual associations.
However, current interpretations of LMs fail to take this into account. For
example, given the query "Astrid Lindgren was born in" with the corresponding
completion "Sweden", no difference is made between whether the prediction was
based on knowing where the author was born or assuming that a person with a
Swedish-sounding name was born in Sweden. In this paper, we present a
model-specific recipe - PrISM - for constructing datasets with examples of four
different prediction scenarios: generic language modeling, guesswork,
heuristics recall and exact fact recall. We apply two popular interpretability
methods to the scenarios: causal tracing (CT) and information flow analysis. We
find that both yield distinct results for each scenario. Results for exact fact
recall and generic language modeling scenarios confirm previous conclusions
about the importance of mid-range MLP sublayers for fact recall, while results
for guesswork and heuristics indicate a critical role of late last token
position MLP sublayers. In summary, we contribute resources for a more
extensive and granular study of fact completion in LMs, together with analyses
that provide a more nuanced understanding of how LMs process fact-related
queries.
| no_new_dataset | 0.951908 |
2410.14695 | Willem Meijer | Willem Meijer, Mirela Riveni, Ayushi Rastogi | Ecosystem-wide influences on pull request decisions: insights from NPM | 52 pages, 3 figures, 7 tables, 1 appendix. The abstract in the arXiv
metadata is shortened due to size constraints | null | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The pull-based development model facilitates global collaboration within
open-source software projects. However, whereas it is increasingly common for
software to depend on other projects in their ecosystem, most research on the
pull request decision-making process has explored factors within projects, not the
broader software ecosystem they are part of. We uncover ecosystem-wide factors
that influence pull request acceptance decisions. We collected a dataset of
approximately 1.8 million pull requests and 2.1 million issues from 20,052
GitHub projects within the NPM ecosystem. Of these, 98% depend on another
project in the dataset, enabling studying collaboration across dependent
projects. We employed social network analysis to create a collaboration network
in the ecosystem, and mixed effects logistic regression and random forest
techniques to measure the impact and predictive strength of the tested
features. We find that gaining experience within the software ecosystem through
active participation in issue-tracking systems, submitting pull requests, and
collaborating with pull request integrators and experienced developers benefits
all open-source contributors, especially project newcomers. These results are
complemented with an exploratory qualitative analysis of 538 pull requests. We
find that developers with ecosystem experience make different contributions
than users without. Zooming in on a subset of 111 pull requests with clear
ecosystem involvement, we find 3 overarching and 10 specific reasons why
developers involve ecosystem projects in their pull requests. The results show
that combining ecosystem-wide factors with features studied in previous work to
predict the outcome of pull requests reached an overall F1 score of 0.92.
However, the outcomes of pull requests submitted by newcomers are harder to
predict.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 13:14:39 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 07:29:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Meijer",
"Willem",
""
],
[
"Riveni",
"Mirela",
""
],
[
"Rastogi",
"Ayushi",
""
]
]
| TITLE: Ecosystem-wide influences on pull request decisions: insights from NPM
ABSTRACT: The pull-based development model facilitates global collaboration within
open-source software projects. However, whereas it is increasingly common for
software to depend on other projects in their ecosystem, most research on the
pull request decision-making process has explored factors within projects, not the
broader software ecosystem they are part of. We uncover ecosystem-wide factors
that influence pull request acceptance decisions. We collected a dataset of
approximately 1.8 million pull requests and 2.1 million issues from 20,052
GitHub projects within the NPM ecosystem. Of these, 98% depend on another
project in the dataset, enabling studying collaboration across dependent
projects. We employed social network analysis to create a collaboration network
in the ecosystem, and mixed effects logistic regression and random forest
techniques to measure the impact and predictive strength of the tested
features. We find that gaining experience within the software ecosystem through
active participation in issue-tracking systems, submitting pull requests, and
collaborating with pull request integrators and experienced developers benefits
all open-source contributors, especially project newcomers. These results are
complemented with an exploratory qualitative analysis of 538 pull requests. We
find that developers with ecosystem experience make different contributions
than users without. Zooming in on a subset of 111 pull requests with clear
ecosystem involvement, we find 3 overarching and 10 specific reasons why
developers involve ecosystem projects in their pull requests. The results show
that combining ecosystem-wide factors with features studied in previous work to
predict the outcome of pull requests reached an overall F1 score of 0.92.
However, the outcomes of pull requests submitted by newcomers are harder to
predict.
| new_dataset | 0.648355 |
2410.15218 | Junyang He | Junyang He, Ying-Jung Chen, Alireza Jafari, Anushka Idamekorala,
Geoffrey Fox | Deep Learning Foundation and Pattern Models: Challenges in Hydrological
Time Series | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | There has been active investigation into deep learning approaches for time
series analysis, including foundation models. However, most studies do not
address significant scientific applications. This paper aims to identify key
features in time series by examining hydrology data. Our work advances computer
science by emphasizing critical application features and contributes to
hydrology and other scientific fields by identifying modeling approaches that
effectively capture these features. Scientific time series data are inherently
complex, involving observations from multiple locations, each with various
time-dependent data streams and exogenous factors that may be static or
time-varying and either application-dependent or purely mathematical. This
research analyzes hydrology time series from the CAMELS and Caravan global
datasets, which encompass rainfall and runoff data across catchments, featuring
up to six observed streams and 209 static parameters across approximately 8,000
locations. Our investigation assesses the impact of exogenous data through
eight different model configurations for key hydrology tasks. Results
demonstrate that integrating exogenous information enhances data
representation, reducing mean squared error by up to 40% in the largest
dataset. Additionally, we present a detailed performance comparison of over 20
state-of-the-art pattern and foundation models. The analysis is fully
open-source, facilitated by Jupyter Notebook on Google Colab for LSTM-based
modeling, data preprocessing, and model comparisons. Preliminary findings using
alternative deep learning architectures reveal that models incorporating
comprehensive observed and exogenous data outperform more limited approaches,
including foundation models. Notably, natural annual periodic exogenous time
series contribute the most significant improvements, though static and other
periodic factors are also valuable.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2024 21:23:48 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 21:54:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"He",
"Junyang",
""
],
[
"Chen",
"Ying-Jung",
""
],
[
"Jafari",
"Alireza",
""
],
[
"Idamekorala",
"Anushka",
""
],
[
"Fox",
"Geoffrey",
""
]
]
| TITLE: Deep Learning Foundation and Pattern Models: Challenges in Hydrological
Time Series
ABSTRACT: There has been active investigation into deep learning approaches for time
series analysis, including foundation models. However, most studies do not
address significant scientific applications. This paper aims to identify key
features in time series by examining hydrology data. Our work advances computer
science by emphasizing critical application features and contributes to
hydrology and other scientific fields by identifying modeling approaches that
effectively capture these features. Scientific time series data are inherently
complex, involving observations from multiple locations, each with various
time-dependent data streams and exogenous factors that may be static or
time-varying and either application-dependent or purely mathematical. This
research analyzes hydrology time series from the CAMELS and Caravan global
datasets, which encompass rainfall and runoff data across catchments, featuring
up to six observed streams and 209 static parameters across approximately 8,000
locations. Our investigation assesses the impact of exogenous data through
eight different model configurations for key hydrology tasks. Results
demonstrate that integrating exogenous information enhances data
representation, reducing mean squared error by up to 40% in the largest
dataset. Additionally, we present a detailed performance comparison of over 20
state-of-the-art pattern and foundation models. The analysis is fully
open-source, facilitated by Jupyter Notebook on Google Colab for LSTM-based
modeling, data preprocessing, and model comparisons. Preliminary findings using
alternative deep learning architectures reveal that models incorporating
comprehensive observed and exogenous data outperform more limited approaches,
including foundation models. Notably, natural annual periodic exogenous time
series contribute the most significant improvements, though static and other
periodic factors are also valuable.
| no_new_dataset | 0.94545 |
2410.15247 | Elynn Chen | Yujia Wu, Junyi Mo, Elynn Chen, Yuzhou Chen | Tensor-Fused Multi-View Graph Contrastive Learning | null | The 29th Pacific-Asia Conference on Knowledge Discovery and Data
Mining (PAKDD), 2025 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph contrastive learning (GCL) has emerged as a promising approach to
enhance graph neural networks' (GNNs) ability to learn rich representations
from unlabeled graph-structured data. However, current GCL models face
challenges with computational demands and limited feature utilization, often
relying only on basic graph properties like node degrees and edge attributes.
This constrains their capacity to fully capture the complex topological
characteristics of real-world phenomena represented by graphs. To address these
limitations, we propose Tensor-Fused Multi-View Graph Contrastive Learning
(TensorMV-GCL), a novel framework that integrates extended persistent homology
(EPH) with GCL representations and facilitates multi-scale feature extraction.
Our approach uniquely employs tensor aggregation and compression to fuse
information from graph and topological features obtained from multiple
augmented views of the same graph. By incorporating tensor concatenation and
contraction modules, we reduce computational overhead by separating feature
tensor aggregation and transformation. Furthermore, we enhance the quality of
learned topological features and model robustness through noise-injected EPH.
Experiments on molecular, bioinformatic, and social network datasets
demonstrate TensorMV-GCL's superiority, outperforming 15 state-of-the-art
methods in graph classification tasks across 9 out of 11 benchmarks while
achieving comparable results on the remaining two. The code for this paper is
publicly available at https://github.com/CS-SAIL/Tensor-MV-GCL.git.
| [
{
"version": "v1",
"created": "Sun, 20 Oct 2024 01:40:12 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 01:31:59 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wu",
"Yujia",
""
],
[
"Mo",
"Junyi",
""
],
[
"Chen",
"Elynn",
""
],
[
"Chen",
"Yuzhou",
""
]
]
| TITLE: Tensor-Fused Multi-View Graph Contrastive Learning
ABSTRACT: Graph contrastive learning (GCL) has emerged as a promising approach to
enhance graph neural networks' (GNNs) ability to learn rich representations
from unlabeled graph-structured data. However, current GCL models face
challenges with computational demands and limited feature utilization, often
relying only on basic graph properties like node degrees and edge attributes.
This constrains their capacity to fully capture the complex topological
characteristics of real-world phenomena represented by graphs. To address these
limitations, we propose Tensor-Fused Multi-View Graph Contrastive Learning
(TensorMV-GCL), a novel framework that integrates extended persistent homology
(EPH) with GCL representations and facilitates multi-scale feature extraction.
Our approach uniquely employs tensor aggregation and compression to fuse
information from graph and topological features obtained from multiple
augmented views of the same graph. By incorporating tensor concatenation and
contraction modules, we reduce computational overhead by separating feature
tensor aggregation and transformation. Furthermore, we enhance the quality of
learned topological features and model robustness through noise-injected EPH.
Experiments on molecular, bioinformatic, and social network datasets
demonstrate TensorMV-GCL's superiority, outperforming 15 state-of-the-art
methods in graph classification tasks across 9 out of 11 benchmarks while
achieving comparable results on the remaining two. The code for this paper is
publicly available at https://github.com/CS-SAIL/Tensor-MV-GCL.git.
| no_new_dataset | 0.946646 |
2410.16461 | Danial Namazifard | Paria Khoshtab, Danial Namazifard, Mostafa Masoudi, Ali Akhgary, Samin
Mahdizadeh Sani, Yadollah Yaghoobzadeh | Comparative Study of Multilingual Idioms and Similes in Large Language
Models | 22 pages, 4 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This study addresses the gap in the literature concerning the comparative
performance of LLMs in interpreting different types of figurative language
across multiple languages. By evaluating LLMs using two multilingual datasets
on simile and idiom interpretation, we explore the effectiveness of various
prompt engineering strategies, including chain-of-thought, few-shot, and
English translation prompts. We extend the language of these datasets to
Persian as well by building two new evaluation sets. Our comprehensive
assessment involves both closed-source (GPT-3.5, GPT-4o mini, Gemini 1.5), and
open-source models (Llama 3.1, Qwen2), highlighting significant differences in
performance across languages and figurative types. Our findings reveal that
while prompt engineering methods are generally effective, their success varies
by figurative type, language, and model. We also observe that open-source
models struggle particularly with low-resource languages in similes.
Additionally, idiom interpretation is nearing saturation for many languages,
necessitating more challenging evaluations.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 19:40:05 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 08:46:44 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Khoshtab",
"Paria",
""
],
[
"Namazifard",
"Danial",
""
],
[
"Masoudi",
"Mostafa",
""
],
[
"Akhgary",
"Ali",
""
],
[
"Sani",
"Samin Mahdizadeh",
""
],
[
"Yaghoobzadeh",
"Yadollah",
""
]
]
| TITLE: Comparative Study of Multilingual Idioms and Similes in Large Language
Models
ABSTRACT: This study addresses the gap in the literature concerning the comparative
performance of LLMs in interpreting different types of figurative language
across multiple languages. By evaluating LLMs using two multilingual datasets
on simile and idiom interpretation, we explore the effectiveness of various
prompt engineering strategies, including chain-of-thought, few-shot, and
English translation prompts. We extend the language of these datasets to
Persian as well by building two new evaluation sets. Our comprehensive
assessment involves both closed-source (GPT-3.5, GPT-4o mini, Gemini 1.5), and
open-source models (Llama 3.1, Qwen2), highlighting significant differences in
performance across languages and figurative types. Our findings reveal that
while prompt engineering methods are generally effective, their success varies
by figurative type, language, and model. We also observe that open-source
models struggle particularly with low-resource languages in similes.
Additionally, idiom interpretation is nearing saturation for many languages,
necessitating more challenging evaluations.
| no_new_dataset | 0.931898 |
2410.16512 | Kaifeng Chen | Kevis-Kokitsi Maninis, Kaifeng Chen, Soham Ghosh, Arjun Karpur, Koert
Chen, Ye Xia, Bingyi Cao, Daniel Salz, Guangxing Han, Jan Dlabal, Dan
Gnanapragasam, Mojtaba Seyedhosseini, Howard Zhou, Andre Araujo | TIPS: Text-Image Pretraining with Spatial awareness | ICLR2025 camera-ready + appendix | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | While image-text representation learning has become very popular in recent
years, existing models tend to lack spatial awareness and have limited direct
applicability for dense understanding tasks. For this reason, self-supervised
image-only pretraining is still the go-to method for many dense vision
applications (e.g. depth estimation, semantic segmentation), despite the lack
of explicit supervisory signals. In this paper, we close this gap between
image-text and self-supervised learning, by proposing a novel general-purpose
image-text model, which can be effectively used off the shelf for dense and
global vision tasks. Our method, which we refer to as Text-Image Pretraining
with Spatial awareness (TIPS), leverages two simple and effective insights.
First, on textual supervision: we reveal that replacing noisy web image
captions by synthetically generated textual descriptions boosts dense
understanding performance significantly, due to a much richer signal for
learning spatially aware representations. We propose an adapted training method
that combines noisy and synthetic captions, resulting in improvements across
both dense and global understanding tasks. Second, on the learning technique:
we propose to combine contrastive image-text learning with self-supervised
masked image modeling, to encourage spatial coherence, unlocking substantial
enhancements for downstream applications. Building on these two ideas, we scale
our model using the transformer architecture, trained on a curated set of
public images. Our experiments are conducted on 8 tasks involving 16 datasets
in total, demonstrating strong off-the-shelf performance on both dense and
global understanding, for several image-only and image-text tasks. Code and
models are released at https://github.com/google-deepmind/tips.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 21:05:04 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 19:38:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Maninis",
"Kevis-Kokitsi",
""
],
[
"Chen",
"Kaifeng",
""
],
[
"Ghosh",
"Soham",
""
],
[
"Karpur",
"Arjun",
""
],
[
"Chen",
"Koert",
""
],
[
"Xia",
"Ye",
""
],
[
"Cao",
"Bingyi",
""
],
[
"Salz",
"Daniel",
""
],
[
"Han",
"Guangxing",
""
],
[
"Dlabal",
"Jan",
""
],
[
"Gnanapragasam",
"Dan",
""
],
[
"Seyedhosseini",
"Mojtaba",
""
],
[
"Zhou",
"Howard",
""
],
[
"Araujo",
"Andre",
""
]
]
| TITLE: TIPS: Text-Image Pretraining with Spatial awareness
ABSTRACT: While image-text representation learning has become very popular in recent
years, existing models tend to lack spatial awareness and have limited direct
applicability for dense understanding tasks. For this reason, self-supervised
image-only pretraining is still the go-to method for many dense vision
applications (e.g. depth estimation, semantic segmentation), despite the lack
of explicit supervisory signals. In this paper, we close this gap between
image-text and self-supervised learning, by proposing a novel general-purpose
image-text model, which can be effectively used off the shelf for dense and
global vision tasks. Our method, which we refer to as Text-Image Pretraining
with Spatial awareness (TIPS), leverages two simple and effective insights.
First, on textual supervision: we reveal that replacing noisy web image
captions by synthetically generated textual descriptions boosts dense
understanding performance significantly, due to a much richer signal for
learning spatially aware representations. We propose an adapted training method
that combines noisy and synthetic captions, resulting in improvements across
both dense and global understanding tasks. Second, on the learning technique:
we propose to combine contrastive image-text learning with self-supervised
masked image modeling, to encourage spatial coherence, unlocking substantial
enhancements for downstream applications. Building on these two ideas, we scale
our model using the transformer architecture, trained on a curated set of
public images. Our experiments are conducted on 8 tasks involving 16 datasets
in total, demonstrating strong off-the-shelf performance on both dense and
global understanding, for several image-only and image-text tasks. Code and
models are released at https://github.com/google-deepmind/tips.
| no_new_dataset | 0.950227 |
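The TIPS record above combines contrastive image-text learning with masked image modeling. Purely as a schematic, the sketch below adds a symmetric InfoNCE term to a masked-reconstruction term; the shapes, the weighting `lam`, and all function names are assumptions and do not reflect the actual TIPS objective or implementation.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def info_nce(img, txt, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (B, B); diagonal = positives
    loss_i2t = -np.mean(np.diag(log_softmax(logits)))
    loss_t2i = -np.mean(np.diag(log_softmax(logits.T)))
    return 0.5 * (loss_i2t + loss_t2i)

def masked_recon(pred, target, mask):
    """Mean-squared error restricted to masked patch positions."""
    diff = (pred - target) ** 2                   # (B, P, D)
    return (diff * mask[..., None]).sum() / (mask.sum() * diff.shape[-1] + 1e-12)

# Toy tensors standing in for encoder outputs.
rng = np.random.default_rng(0)
B, P, D = 4, 9, 16
img_emb, txt_emb = rng.normal(size=(B, D)), rng.normal(size=(B, D))
pred, target = rng.normal(size=(B, P, D)), rng.normal(size=(B, P, D))
mask = rng.integers(0, 2, size=(B, P)).astype(float)

lam = 1.0  # hypothetical weighting between the two objectives
total_loss = info_nce(img_emb, txt_emb) + lam * masked_recon(pred, target, mask)
print(total_loss)
```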
2410.16701 | Veeramakali Vignesh Manivannan | Veeramakali Vignesh Manivannan, Yasaman Jafari, Srikar Eranky, Spencer
Ho, Rose Yu, Duncan Watson-Parris, Yian Ma, Leon Bergen, Taylor
Berg-Kirkpatrick | ClimaQA: An Automated Evaluation Framework for Climate Question
Answering Models | Accepted to ICLR 2025 | ICLR 2025 | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The use of Large Language Models (LLMs) in climate science has recently
gained significant attention. However, a critical issue remains: the lack of a
comprehensive evaluation framework capable of assessing the quality and
scientific validity of model outputs. To address this issue, we develop
ClimaGen (Climate QA Generator), an adaptive learning framework that generates
question-answer pairs from graduate textbooks with climate scientists in the
loop. As a result, we present ClimaQA-Gold, an expert-annotated benchmark
dataset alongside ClimaQA-Silver, a large-scale, comprehensive synthetic QA
dataset for climate science. Finally, we develop evaluation strategies and
compare different LLMs on our benchmarks. Our results offer novel insights into
various approaches used to enhance knowledge of climate LLMs. The source code
is publicly available at https://github.com/Rose-STL-Lab/genie-climaqa
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 05:12:19 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 18:31:12 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Manivannan",
"Veeramakali Vignesh",
""
],
[
"Jafari",
"Yasaman",
""
],
[
"Eranky",
"Srikar",
""
],
[
"Ho",
"Spencer",
""
],
[
"Yu",
"Rose",
""
],
[
"Watson-Parris",
"Duncan",
""
],
[
"Ma",
"Yian",
""
],
[
"Bergen",
"Leon",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
]
]
| TITLE: ClimaQA: An Automated Evaluation Framework for Climate Question
Answering Models
ABSTRACT: The use of Large Language Models (LLMs) in climate science has recently
gained significant attention. However, a critical issue remains: the lack of a
comprehensive evaluation framework capable of assessing the quality and
scientific validity of model outputs. To address this issue, we develop
ClimaGen (Climate QA Generator), an adaptive learning framework that generates
question-answer pairs from graduate textbooks with climate scientists in the
loop. As a result, we present ClimaQA-Gold, an expert-annotated benchmark
dataset alongside ClimaQA-Silver, a large-scale, comprehensive synthetic QA
dataset for climate science. Finally, we develop evaluation strategies and
compare different LLMs on our benchmarks. Our results offer novel insights into
various approaches used to enhance knowledge of climate LLMs. The source code
is publicly available at https://github.com/Rose-STL-Lab/genie-climaqa
| new_dataset | 0.961134 |
2410.16795 | Pei Liu | Pei Liu, Haipeng Liu, Xingyu Liu, Yiqun Li, Junlan Chen, Yangfan He,
and Jun Ma | Scene-Aware Explainable Multimodal Trajectory Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advancements in intelligent technologies have significantly improved
navigation in complex traffic environments by enhancing environment perception
and trajectory prediction for automated vehicles. However, current research
often overlooks the joint reasoning of scenario agents and lacks explainability
in trajectory prediction models, limiting their practical use in real-world
situations. To address this, we introduce the Explainable Conditional
Diffusion-based Multimodal Trajectory Prediction (DMTP) model, which is
designed to elucidate the environmental factors influencing predictions and
reveal the underlying mechanisms. Our model integrates a modified conditional
diffusion approach to capture multimodal trajectory patterns and employs a
revised Shapley Value model to assess the significance of global and
scenario-specific features. Experiments using the Waymo Open Motion Dataset
demonstrate that our explainable model excels in identifying critical inputs
and significantly outperforms baseline models in accuracy. Moreover, the
factors identified align with the human driving experience, underscoring the
model's effectiveness in learning accurate predictions. Code is available in
our open-source repository:
https://github.com/ocean-luna/Explainable-Prediction.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 08:17:33 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 01:33:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Pei",
""
],
[
"Liu",
"Haipeng",
""
],
[
"Liu",
"Xingyu",
""
],
[
"Li",
"Yiqun",
""
],
[
"Chen",
"Junlan",
""
],
[
"He",
"Yangfan",
""
],
[
"Ma",
"Jun",
""
]
]
| TITLE: Scene-Aware Explainable Multimodal Trajectory Prediction
ABSTRACT: Advancements in intelligent technologies have significantly improved
navigation in complex traffic environments by enhancing environment perception
and trajectory prediction for automated vehicles. However, current research
often overlooks the joint reasoning of scenario agents and lacks explainability
in trajectory prediction models, limiting their practical use in real-world
situations. To address this, we introduce the Explainable Conditional
Diffusion-based Multimodal Trajectory Prediction (DMTP) model, which is
designed to elucidate the environmental factors influencing predictions and
reveal the underlying mechanisms. Our model integrates a modified conditional
diffusion approach to capture multimodal trajectory patterns and employs a
revised Shapley Value model to assess the significance of global and
scenario-specific features. Experiments using the Waymo Open Motion Dataset
demonstrate that our explainable model excels in identifying critical inputs
and significantly outperforms baseline models in accuracy. Moreover, the
factors identified align with the human driving experience, underscoring the
model's effectiveness in learning accurate predictions. Code is available in
our open-source repository:
https://github.com/ocean-luna/Explainable-Prediction.
| no_new_dataset | 0.944638 |
2410.17031 | Shuyang Hou | Shuyang Hou, Zhangxiao Shen, Anqi Zhao, Jianyuan Liang, Zhipeng Gui,
Xuefeng Guan, Rui Li, Huayi Wu | GeoCode-GPT: A Large Language Model for Geospatial Code Generation Tasks | null | null | 10.1016/j.jag.2025.104456 | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing demand for spatiotemporal data and modeling tasks in
geosciences has made geospatial code generation technology a critical factor in
enhancing productivity. Although large language models (LLMs) have demonstrated
potential in code generation tasks, they often encounter issues such as refusal
to code or hallucination in geospatial code generation due to a lack of
domain-specific knowledge and code corpora. To address these challenges, this
paper presents and open-sources the GeoCode-PT and GeoCode-SFT corpora, along
with the GeoCode-Eval evaluation dataset. Additionally, by leveraging QLoRA and
LoRA for pretraining and fine-tuning, we introduce GeoCode-GPT-7B, the first
LLM focused on geospatial code generation, fine-tuned from Code Llama-7B.
Furthermore, we establish a comprehensive geospatial code evaluation framework,
incorporating option matching, expert validation, and prompt engineering
scoring for LLMs, and systematically evaluate GeoCode-GPT-7B using the
GeoCode-Eval dataset. Experimental results show that GeoCode-GPT outperforms
other models in multiple-choice accuracy by 9.1% to 32.1%, in code
summarization ability by 1.7% to 25.4%, and in code generation capability by
1.2% to 25.1%. This paper provides a solution and empirical validation for
enhancing LLMs' performance in geospatial code generation, extends the
boundaries of domain-specific model applications, and offers valuable insights
into unlocking their potential in geospatial code generation.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 13:57:55 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Oct 2024 13:52:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hou",
"Shuyang",
""
],
[
"Shen",
"Zhangxiao",
""
],
[
"Zhao",
"Anqi",
""
],
[
"Liang",
"Jianyuan",
""
],
[
"Gui",
"Zhipeng",
""
],
[
"Guan",
"Xuefeng",
""
],
[
"Li",
"Rui",
""
],
[
"Wu",
"Huayi",
""
]
]
| TITLE: GeoCode-GPT: A Large Language Model for Geospatial Code Generation Tasks
ABSTRACT: The increasing demand for spatiotemporal data and modeling tasks in
geosciences has made geospatial code generation technology a critical factor in
enhancing productivity. Although large language models (LLMs) have demonstrated
potential in code generation tasks, they often encounter issues such as refusal
to code or hallucination in geospatial code generation due to a lack of
domain-specific knowledge and code corpora. To address these challenges, this
paper presents and open-sources the GeoCode-PT and GeoCode-SFT corpora, along
with the GeoCode-Eval evaluation dataset. Additionally, by leveraging QLoRA and
LoRA for pretraining and fine-tuning, we introduce GeoCode-GPT-7B, the first
LLM focused on geospatial code generation, fine-tuned from Code Llama-7B.
Furthermore, we establish a comprehensive geospatial code evaluation framework,
incorporating option matching, expert validation, and prompt engineering
scoring for LLMs, and systematically evaluate GeoCode-GPT-7B using the
GeoCode-Eval dataset. Experimental results show that GeoCode-GPT outperforms
other models in multiple-choice accuracy by 9.1% to 32.1%, in code
summarization ability by 1.7% to 25.4%, and in code generation capability by
1.2% to 25.1%. This paper provides a solution and empirical validation for
enhancing LLMs' performance in geospatial code generation, extends the
boundaries of domain-specific model applications, and offers valuable insights
into unlocking their potential in geospatial code generation.
| no_new_dataset | 0.758645 |
2410.17547 | Sharath Matada | Sharath Matada, Luke Bhan, Yuanyuan Shi, Nikolay Atanasov | Generalizable Motion Planning via Operator Learning | Published in ICLR 2025 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | In this work, we introduce a planning neural operator (PNO) for predicting
the value function of a motion planning problem. We recast value function
approximation as learning a single operator from the cost function space to the
value function space, which is defined by an Eikonal partial differential
equation (PDE). Therefore, our PNO model, despite being trained with a finite
number of samples at coarse resolution, inherits the zero-shot super-resolution
property of neural operators. We demonstrate accurate value function
approximation at $16\times$ the training resolution on the MovingAI lab's 2D
city dataset, compare with state-of-the-art neural value function predictors on
3D scenes from the iGibson building dataset and showcase optimal planning with
4-DOF robotic manipulators. Lastly, we investigate employing the value function
output of PNO as a heuristic function to accelerate motion planning. We show
theoretically that the PNO heuristic is $\epsilon$-consistent by introducing an
inductive bias layer that guarantees our value functions satisfy the triangle
inequality. With our heuristic, we achieve a $30\%$ decrease in nodes visited
while obtaining near optimal path lengths on the MovingAI lab 2D city dataset,
compared to classical planning methods ($A^\ast$, $RRT^\ast$).
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2024 04:06:35 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 05:11:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Matada",
"Sharath",
""
],
[
"Bhan",
"Luke",
""
],
[
"Shi",
"Yuanyuan",
""
],
[
"Atanasov",
"Nikolay",
""
]
]
| TITLE: Generalizable Motion Planning via Operator Learning
ABSTRACT: In this work, we introduce a planning neural operator (PNO) for predicting
the value function of a motion planning problem. We recast value function
approximation as learning a single operator from the cost function space to the
value function space, which is defined by an Eikonal partial differential
equation (PDE). Therefore, our PNO model, despite being trained with a finite
number of samples at coarse resolution, inherits the zero-shot super-resolution
property of neural operators. We demonstrate accurate value function
approximation at $16\times$ the training resolution on the MovingAI lab's 2D
city dataset, compare with state-of-the-art neural value function predictors on
3D scenes from the iGibson building dataset and showcase optimal planning with
4-DOF robotic manipulators. Lastly, we investigate employing the value function
output of PNO as a heuristic function to accelerate motion planning. We show
theoretically that the PNO heuristic is $\epsilon$-consistent by introducing an
inductive bias layer that guarantees our value functions satisfy the triangle
inequality. With our heuristic, we achieve a $30\%$ decrease in nodes visited
while obtaining near optimal path lengths on the MovingAI lab 2D city dataset,
compared to classical planning methods ($A^\ast$, $RRT^\ast$).
| no_new_dataset | 0.948442 |
2410.18477 | Chuanxiang Yang | Chuanxiang Yang, Yuanfeng Zhou, Guangshun Wei, Long Ma, Junhui Hou,
Yuan Liu and Wenping Wang | Monge-Ampere Regularization for Learning Arbitrary Shapes from Point
Clouds | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As commonly used implicit geometry representations, the signed distance
function (SDF) is limited to modeling watertight shapes, while the unsigned
distance function (UDF) is capable of representing various surfaces. However,
its inherent theoretical shortcoming, i.e., the non-differentiability at the
zero level set, would result in sub-optimal reconstruction quality. In this
paper, we propose the scaled-squared distance function (S$^{2}$DF), a novel
implicit surface representation for modeling arbitrary surface types. S$^{2}$DF
does not distinguish between inside and outside regions while effectively
addressing the non-differentiability issue of UDF at the zero level set. We
demonstrate that S$^{2}$DF satisfies a second-order partial differential
equation of Monge-Ampere-type, allowing us to develop a learning pipeline that
leverages a novel Monge-Ampere regularization to directly learn S$^{2}$DF from
raw unoriented point clouds without supervision from ground-truth S$^{2}$DF
values. Extensive experiments across multiple datasets show that our method
significantly outperforms state-of-the-art supervised approaches that require
ground-truth surface information as supervision for training. The code will be
publicly available at https://github.com/chuanxiang-yang/S2DF.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 06:56:34 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 13:46:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yang",
"Chuanxiang",
""
],
[
"Zhou",
"Yuanfeng",
""
],
[
"Wei",
"Guangshun",
""
],
[
"Ma",
"Long",
""
],
[
"Hou",
"Junhui",
""
],
[
"Liu",
"Yuan",
""
],
[
"Wang",
"Wenping",
""
]
]
| TITLE: Monge-Ampere Regularization for Learning Arbitrary Shapes from Point
Clouds
ABSTRACT: As commonly used implicit geometry representations, the signed distance
function (SDF) is limited to modeling watertight shapes, while the unsigned
distance function (UDF) is capable of representing various surfaces. However,
its inherent theoretical shortcoming, i.e., the non-differentiability at the
zero level set, would result in sub-optimal reconstruction quality. In this
paper, we propose the scaled-squared distance function (S$^{2}$DF), a novel
implicit surface representation for modeling arbitrary surface types. S$^{2}$DF
does not distinguish between inside and outside regions while effectively
addressing the non-differentiability issue of UDF at the zero level set. We
demonstrate that S$^{2}$DF satisfies a second-order partial differential
equation of Monge-Ampere-type, allowing us to develop a learning pipeline that
leverages a novel Monge-Ampere regularization to directly learn S$^{2}$DF from
raw unoriented point clouds without supervision from ground-truth S$^{2}$DF
values. Extensive experiments across multiple datasets show that our method
significantly outperforms state-of-the-art supervised approaches that require
ground-truth surface information as supervision for training. The code will be
publicly available at https://github.com/chuanxiang-yang/S2DF.
| no_new_dataset | 0.945349 |
2410.18955 | Yujuan Fu | Yujuan Velvin Fu, Giridhar Kaushik Ramachandran, Namu Park, Kevin
Lybarger, Fei Xia, Ozlem Uzuner, Meliha Yetisgen | BioMistral-NLU: Towards More Generalizable Medical Language
Understanding through Instruction Tuning | 3 figures and 5 tables; Accepted by AMIA 2025 Informatics Summit | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) such as ChatGPT are fine-tuned on large and
diverse instruction-following corpora, and can generalize to new tasks.
However, those instruction-tuned LLMs often perform poorly in specialized
medical natural language understanding (NLU) tasks that require domain
knowledge, granular text comprehension, and structured data extraction. To
bridge the gap, we: (1) propose a unified prompting format for 7 important NLU
tasks, (2) curate an instruction-tuning dataset, MNLU-Instruct, utilizing
diverse existing open-source medical NLU corpora, and (3) develop
BioMistral-NLU, a generalizable medical NLU model, through fine-tuning
BioMistral on MNLU-Instruct. We evaluate BioMistral-NLU in a zero-shot setting,
across 6 important NLU tasks, from two widely adopted medical NLU benchmarks:
BLUE and BLURB. Our experiments show that our BioMistral-NLU outperforms the
original BioMistral, as well as the proprietary LLMs - ChatGPT and GPT-4. Our
dataset-agnostic prompting strategy and instruction tuning step over diverse
NLU tasks enhance LLMs' generalizability across diverse medical NLU tasks. Our
ablation experiments show that instruction-tuning on a wider variety of tasks,
even when the total number of training instances remains constant, enhances
downstream zero-shot generalization.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 17:53:53 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 07:21:04 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fu",
"Yujuan Velvin",
""
],
[
"Ramachandran",
"Giridhar Kaushik",
""
],
[
"Park",
"Namu",
""
],
[
"Lybarger",
"Kevin",
""
],
[
"Xia",
"Fei",
""
],
[
"Uzuner",
"Ozlem",
""
],
[
"Yetisgen",
"Meliha",
""
]
]
| TITLE: BioMistral-NLU: Towards More Generalizable Medical Language
Understanding through Instruction Tuning
ABSTRACT: Large language models (LLMs) such as ChatGPT are fine-tuned on large and
diverse instruction-following corpora, and can generalize to new tasks.
However, those instruction-tuned LLMs often perform poorly in specialized
medical natural language understanding (NLU) tasks that require domain
knowledge, granular text comprehension, and structured data extraction. To
bridge the gap, we: (1) propose a unified prompting format for 7 important NLU
tasks, (2) curate an instruction-tuning dataset, MNLU-Instruct, utilizing
diverse existing open-source medical NLU corpora, and (3) develop
BioMistral-NLU, a generalizable medical NLU model, through fine-tuning
BioMistral on MNLU-Instruct. We evaluate BioMistral-NLU in a zero-shot setting,
across 6 important NLU tasks, from two widely adopted medical NLU benchmarks:
BLUE and BLURB. Our experiments show that our BioMistral-NLU outperforms the
original BioMistral, as well as the proprietary LLMs - ChatGPT and GPT-4. Our
dataset-agnostic prompting strategy and instruction tuning step over diverse
NLU tasks enhance LLMs' generalizability across diverse medical NLU tasks. Our
ablation experiments show that instruction-tuning on a wider variety of tasks,
even when the total number of training instances remains constant, enhances
downstream zero-shot generalization.
| new_dataset | 0.776284 |
2410.18966 | Yujuan Fu | Yujuan Fu, Ozlem Uzuner, Meliha Yetisgen, Fei Xia | Does Data Contamination Detection Work (Well) for LLMs? A Survey and
Evaluation on Detection Assumptions | 3 tables and 1 figure in the main text. This paper is accepted by
NAACL 2025 findings | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have demonstrated great performance across
various benchmarks, showing potential as general-purpose task solvers. However,
as LLMs are typically trained on vast amounts of data, a significant concern in
their evaluation is data contamination, where overlap between training data and
evaluation datasets inflates performance assessments. Multiple approaches have
been developed to identify data contamination. These approaches rely on
specific assumptions that may not hold universally across different settings.
To bridge this gap, we systematically review 50 papers on data contamination
detection, categorize the underlying assumptions, and assess whether they have
been rigorously validated. We identify and analyze eight categories of
assumptions and test three of them as case studies. Our case studies focus on
detecting direct, instance-level data contamination, which is also referred to
as Membership Inference Attacks (MIA). Our analysis reveals that MIA approaches
based on these three assumptions can have similar performance to random
guessing, on datasets used in LLM pretraining, suggesting that current LLMs
might learn data distributions rather than memorizing individual instances.
Meanwhile, MIA can easily fail when there are data distribution shifts between
the seen and unseen instances.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 17:58:22 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 02:46:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Fu",
"Yujuan",
""
],
[
"Uzuner",
"Ozlem",
""
],
[
"Yetisgen",
"Meliha",
""
],
[
"Xia",
"Fei",
""
]
]
| TITLE: Does Data Contamination Detection Work (Well) for LLMs? A Survey and
Evaluation on Detection Assumptions
ABSTRACT: Large language models (LLMs) have demonstrated great performance across
various benchmarks, showing potential as general-purpose task solvers. However,
as LLMs are typically trained on vast amounts of data, a significant concern in
their evaluation is data contamination, where overlap between training data and
evaluation datasets inflates performance assessments. Multiple approaches have
been developed to identify data contamination. These approaches rely on
specific assumptions that may not hold universally across different settings.
To bridge this gap, we systematically review 50 papers on data contamination
detection, categorize the underlying assumptions, and assess whether they have
been rigorously validated. We identify and analyze eight categories of
assumptions and test three of them as case studies. Our case studies focus on
detecting direct, instance-level data contamination, which is also referred to
as Membership Inference Attacks (MIA). Our analysis reveals that MIA approaches
based on these three assumptions can have similar performance to random
guessing, on datasets used in LLM pretraining, suggesting that current LLMs
might learn data distributions rather than memorizing individual instances.
Meanwhile, MIA can easily fail when there are data distribution shifts between
the seen and unseen instances.
| no_new_dataset | 0.949669 |
2410.20327 | Xupeng Chen | Xupeng Chen, Zhixin Lai, Kangrui Ruan, Shichu Chen, Jiaxiang Liu,
Zuozhu Liu | R-LLaVA: Improving Med-VQA Understanding through Visual Region of
Interest | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence has made significant strides in medical visual
question answering (Med-VQA), yet prevalent studies often interpret images
holistically, overlooking the visual regions of interest that may contain
crucial information, potentially aligning with a doctor's prior knowledge that
can be incorporated with minimal annotations (e.g., bounding boxes). To address
this gap, this paper introduces R-LLaVA, designed to enhance biomedical VQA
understanding by integrating simple medical annotations as prior knowledge
directly into the image space through CLIP. These annotated visual regions of
interest are then fed into the LLaVA model during training, aiming to enrich
the model's understanding of biomedical queries. Experimental evaluation on
four standard Med-VQA datasets demonstrates R-LLaVA's superiority over existing
state-of-the-art (SoTA) methods. Additionally, to verify the model's capability
in visual comprehension, a novel multiple-choice medical visual understanding
dataset is introduced, confirming the positive impact of focusing on visual
regions of interest in advancing biomedical VQA understanding.
| [
{
"version": "v1",
"created": "Sun, 27 Oct 2024 03:56:56 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Nov 2024 21:47:53 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Jan 2025 18:16:17 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Feb 2025 10:33:52 GMT"
},
{
"version": "v5",
"created": "Sun, 9 Mar 2025 05:23:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Xupeng",
""
],
[
"Lai",
"Zhixin",
""
],
[
"Ruan",
"Kangrui",
""
],
[
"Chen",
"Shichu",
""
],
[
"Liu",
"Jiaxiang",
""
],
[
"Liu",
"Zuozhu",
""
]
]
| TITLE: R-LLaVA: Improving Med-VQA Understanding through Visual Region of
Interest
ABSTRACT: Artificial intelligence has made significant strides in medical visual
question answering (Med-VQA), yet prevalent studies often interpret images
holistically, overlooking the visual regions of interest that may contain
crucial information, potentially aligning with a doctor's prior knowledge that
can be incorporated with minimal annotations (e.g., bounding boxes). To address
this gap, this paper introduces R-LLaVA, designed to enhance biomedical VQA
understanding by integrating simple medical annotations as prior knowledge
directly into the image space through CLIP. These annotated visual regions of
interest are then fed into the LLaVA model during training, aiming to enrich
the model's understanding of biomedical queries. Experimental evaluation on
four standard Med-VQA datasets demonstrates R-LLaVA's superiority over existing
state-of-the-art (SoTA) methods. Additionally, to verify the model's capability
in visual comprehension, a novel multiple-choice medical visual understanding
dataset is introduced, confirming the positive impact of focusing on visual
regions of interest in advancing biomedical VQA understanding.
| new_dataset | 0.970465 |
2410.23252 | Haoyi Qiu | Haoyi Qiu, Alexander R. Fabbri, Divyansh Agarwal, Kung-Hsiang Huang,
Sarah Tan, Nanyun Peng, Chien-Sheng Wu | Evaluating Cultural and Social Awareness of LLM Web Agents | NAACL 2025 Findings | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) expand into performing as agents for
real-world applications beyond traditional NLP tasks, evaluating their
robustness becomes increasingly important. However, existing benchmarks often
overlook critical dimensions like cultural and social awareness. To address
these, we introduce CASA, a benchmark designed to assess LLM agents'
sensitivity to cultural and social norms across two web-based tasks: online
shopping and social discussion forums. Our approach evaluates LLM agents'
ability to detect and appropriately respond to norm-violating user queries and
observations. Furthermore, we propose a comprehensive evaluation framework that
measures awareness coverage, helpfulness in managing user queries, and the
violation rate when facing misleading web content. Experiments show that
current LLMs perform significantly better in non-agent than in web-based agent
environments, with agents achieving less than 10% awareness coverage and over
40% violation rates. To improve performance, we explore two methods: prompting
and fine-tuning, and find that combining both methods can offer complementary
advantages -- fine-tuning on culture-specific datasets significantly enhances
the agents' ability to generalize across different regions, while prompting
boosts the agents' ability to navigate complex tasks. These findings highlight
the importance of constantly benchmarking LLM agents' cultural and social
awareness during the development cycle.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 17:35:44 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Feb 2025 15:03:49 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 23:37:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qiu",
"Haoyi",
""
],
[
"Fabbri",
"Alexander R.",
""
],
[
"Agarwal",
"Divyansh",
""
],
[
"Huang",
"Kung-Hsiang",
""
],
[
"Tan",
"Sarah",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Wu",
"Chien-Sheng",
""
]
]
| TITLE: Evaluating Cultural and Social Awareness of LLM Web Agents
ABSTRACT: As large language models (LLMs) expand into performing as agents for
real-world applications beyond traditional NLP tasks, evaluating their
robustness becomes increasingly important. However, existing benchmarks often
overlook critical dimensions like cultural and social awareness. To address
these, we introduce CASA, a benchmark designed to assess LLM agents'
sensitivity to cultural and social norms across two web-based tasks: online
shopping and social discussion forums. Our approach evaluates LLM agents'
ability to detect and appropriately respond to norm-violating user queries and
observations. Furthermore, we propose a comprehensive evaluation framework that
measures awareness coverage, helpfulness in managing user queries, and the
violation rate when facing misleading web content. Experiments show that
current LLMs perform significantly better in non-agent than in web-based agent
environments, with agents achieving less than 10% awareness coverage and over
40% violation rates. To improve performance, we explore two methods: prompting
and fine-tuning, and find that combining both methods can offer complementary
advantages -- fine-tuning on culture-specific datasets significantly enhances
the agents' ability to generalize across different regions, while prompting
boosts the agents' ability to navigate complex tasks. These findings highlight
the importance of constantly benchmarking LLM agents' cultural and social
awareness during the development cycle.
| no_new_dataset | 0.940626 |
2411.00816 | Yixuan Weng | Yixuan Weng, Minjun Zhu, Guangsheng Bao, Hongbo Zhang, Jindong Wang,
Yue Zhang, Linyi Yang | CycleResearcher: Improving Automated Research via Automated Review | Accept in ICLR 2025 | null | null | null | cs.CL cs.AI cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The automation of scientific discovery has been a long-standing goal within
the research community, driven by the potential to accelerate knowledge
creation. While significant progress has been made using commercial large
language models (LLMs) as research assistants or idea generators, the
possibility of automating the entire research process with open-source LLMs
remains largely unexplored. This paper explores the feasibility of using
open-source post-trained LLMs as autonomous agents capable of performing the
full cycle of automated research and review, from literature review and
manuscript preparation to peer review and paper refinement. Our iterative
preference training framework consists of CycleResearcher, which conducts
research tasks, and CycleReviewer, which simulates the peer review process,
providing iterative feedback via reinforcement learning. To train these models,
we develop two new datasets, Review-5k and Research-14k, reflecting real-world
machine learning research and peer review dynamics. Our results demonstrate
that CycleReviewer achieves promising performance with a 26.89\% reduction in
mean absolute error (MAE) compared to individual human reviewers in predicting
paper scores, indicating the potential of LLMs to effectively assist
expert-level research evaluation. In research, the papers generated by the
CycleResearcher model achieved a score of 5.36 in simulated peer reviews,
showing some competitiveness in terms of simulated review scores compared to
the preprint level of 5.24 from human experts, while still having room for
improvement compared to the accepted paper level of 5.69. This work represents
a significant step toward fully automated scientific inquiry, providing ethical
safeguards and exploring AI-driven research capabilities. The code, dataset and
model weight are released at https://wengsyx.github.io/Researcher/.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2024 08:10:21 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 16:36:05 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 14:01:34 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Weng",
"Yixuan",
""
],
[
"Zhu",
"Minjun",
""
],
[
"Bao",
"Guangsheng",
""
],
[
"Zhang",
"Hongbo",
""
],
[
"Wang",
"Jindong",
""
],
[
"Zhang",
"Yue",
""
],
[
"Yang",
"Linyi",
""
]
]
| TITLE: CycleResearcher: Improving Automated Research via Automated Review
ABSTRACT: The automation of scientific discovery has been a long-standing goal within
the research community, driven by the potential to accelerate knowledge
creation. While significant progress has been made using commercial large
language models (LLMs) as research assistants or idea generators, the
possibility of automating the entire research process with open-source LLMs
remains largely unexplored. This paper explores the feasibility of using
open-source post-trained LLMs as autonomous agents capable of performing the
full cycle of automated research and review, from literature review and
manuscript preparation to peer review and paper refinement. Our iterative
preference training framework consists of CycleResearcher, which conducts
research tasks, and CycleReviewer, which simulates the peer review process,
providing iterative feedback via reinforcement learning. To train these models,
we develop two new datasets, Review-5k and Research-14k, reflecting real-world
machine learning research and peer review dynamics. Our results demonstrate
that CycleReviewer achieves promising performance with a 26.89\% reduction in
mean absolute error (MAE) compared to individual human reviewers in predicting
paper scores, indicating the potential of LLMs to effectively assist
expert-level research evaluation. In research, the papers generated by the
CycleResearcher model achieved a score of 5.36 in simulated peer reviews,
showing some competitiveness in terms of simulated review scores compared to
the preprint level of 5.24 from human experts, while still having room for
improvement compared to the accepted paper level of 5.69. This work represents
a significant step toward fully automated scientific inquiry, providing ethical
safeguards and exploring AI-driven research capabilities. The code, dataset and
model weight are released at https://wengsyx.github.io/Researcher/.
| no_new_dataset | 0.573529 |
2411.00827 | Ruofan Wang | Ruofan Wang, Juncheng Li, Yixu Wang, Bo Wang, Xiaosen Wang, Yan Teng,
Yingchun Wang, Xingjun Ma, Yu-Gang Jiang | IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models
Using Themselves | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As large Vision-Language Models (VLMs) gain prominence, ensuring their safe
deployment has become critical. Recent studies have explored VLM robustness
against jailbreak attacks-techniques that exploit model vulnerabilities to
elicit harmful outputs. However, the limited availability of diverse multimodal
data has constrained current approaches to rely heavily on adversarial or
manually crafted images derived from harmful text datasets, which often lack
effectiveness and diversity across different contexts. In this paper, we
propose IDEATOR, a novel jailbreak method that autonomously generates malicious
image-text pairs for black-box jailbreak attacks. IDEATOR is grounded in the
insight that VLMs themselves could serve as powerful red team models for
generating multimodal jailbreak prompts. Specifically, IDEATOR leverages a VLM
to create targeted jailbreak texts and pairs them with jailbreak images
generated by a state-of-the-art diffusion model. Extensive experiments
demonstrate IDEATOR's high effectiveness and transferability, achieving a 94%
attack success rate (ASR) in jailbreaking MiniGPT-4 with an average of only
5.34 queries, and high ASRs of 82%, 88%, and 75% when transferred to LLaVA,
InstructBLIP, and Chameleon, respectively. Building on IDEATOR's strong
transferability and automated process, we introduce the VLBreakBench, a safety
benchmark comprising 3,654 multimodal jailbreak samples. Our benchmark results
on 11 recently released VLMs reveal significant gaps in safety alignment. For
instance, our challenge set achieves ASRs of 46.31% on GPT-4o and 19.65% on
Claude-3.5-Sonnet, underscoring the urgent need for stronger defenses.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 07:15:56 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Nov 2024 05:41:50 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Mar 2025 17:39:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Ruofan",
""
],
[
"Li",
"Juncheng",
""
],
[
"Wang",
"Yixu",
""
],
[
"Wang",
"Bo",
""
],
[
"Wang",
"Xiaosen",
""
],
[
"Teng",
"Yan",
""
],
[
"Wang",
"Yingchun",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Jiang",
"Yu-Gang",
""
]
]
| TITLE: IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models
Using Themselves
ABSTRACT: As large Vision-Language Models (VLMs) gain prominence, ensuring their safe
deployment has become critical. Recent studies have explored VLM robustness
against jailbreak attacks-techniques that exploit model vulnerabilities to
elicit harmful outputs. However, the limited availability of diverse multimodal
data has constrained current approaches to rely heavily on adversarial or
manually crafted images derived from harmful text datasets, which often lack
effectiveness and diversity across different contexts. In this paper, we
propose IDEATOR, a novel jailbreak method that autonomously generates malicious
image-text pairs for black-box jailbreak attacks. IDEATOR is grounded in the
insight that VLMs themselves could serve as powerful red team models for
generating multimodal jailbreak prompts. Specifically, IDEATOR leverages a VLM
to create targeted jailbreak texts and pairs them with jailbreak images
generated by a state-of-the-art diffusion model. Extensive experiments
demonstrate IDEATOR's high effectiveness and transferability, achieving a 94%
attack success rate (ASR) in jailbreaking MiniGPT-4 with an average of only
5.34 queries, and high ASRs of 82%, 88%, and 75% when transferred to LLaVA,
InstructBLIP, and Chameleon, respectively. Building on IDEATOR's strong
transferability and automated process, we introduce the VLBreakBench, a safety
benchmark comprising 3,654 multimodal jailbreak samples. Our benchmark results
on 11 recently released VLMs reveal significant gaps in safety alignment. For
instance, our challenge set achieves ASRs of 46.31% on GPT-4o and 19.65% on
Claude-3.5-Sonnet, underscoring the urgent need for stronger defenses.
| no_new_dataset | 0.941007 |
2411.01386 | Abhijin Adiga | Abhijin Adiga, Ayush Chopra, Mandy L. Wilson, S. S. Ravi, Dawen Xie,
Samarth Swarup, Bryan Lewis, John Barnes, Ramesh Raskar and Madhav V. Marathe | A High-Resolution, US-scale Digital Similar of Interacting Livestock,
Wild Birds, and Human Ecosystems with Applications to Multi-host Epidemic
Spread | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | One Health issues, such as the spread of highly pathogenic avian
influenza~(HPAI), present significant challenges at the
human-animal-environmental interface. Recent H5N1 outbreaks underscore the need
for comprehensive modeling efforts that capture the complex interactions
between various entities in these interconnected ecosystems. To support such
efforts, we develop a methodology to construct a synthetic spatiotemporal
gridded dataset of livestock production and processing, human population, and
wild birds for the contiguous United States, called a \emph{digital similar}.
This representation is a result of fusing diverse datasets using statistical
and optimization techniques, followed by extensive verification and validation.
The livestock component includes farm-level representations of four major
livestock types -- cattle, poultry, swine, and sheep -- including further
categorization into subtypes such as dairy cows, beef cows, chickens, turkeys,
ducks, etc. Weekly abundance data for wild bird species identified in the
transmission of avian influenza are included. Gridded distributions of the
human population, along with demographic and occupational features, capture the
placement of agricultural workers and the general population. We demonstrate
how the digital similar can be applied to evaluate spillover risk to dairy cows
and poultry from wild bird population, then validate these results using
historical H5N1 incidences. The resulting subtype-specific spatiotemporal risk
maps identify hotspots of high risk from H5N1 infected wild bird population to
dairy cattle and poultry operations, thus guiding surveillance efforts.
| [
{
"version": "v1",
"created": "Sun, 3 Nov 2024 00:24:24 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 02:04:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Adiga",
"Abhijin",
""
],
[
"Chopra",
"Ayush",
""
],
[
"Wilson",
"Mandy L.",
""
],
[
"Ravi",
"S. S.",
""
],
[
"Xie",
"Dawen",
""
],
[
"Swarup",
"Samarth",
""
],
[
"Lewis",
"Bryan",
""
],
[
"Barnes",
"John",
""
],
[
"Raskar",
"Ramesh",
""
],
[
"Marathe",
"Madhav V.",
""
]
]
| TITLE: A High-Resolution, US-scale Digital Similar of Interacting Livestock,
Wild Birds, and Human Ecosystems with Applications to Multi-host Epidemic
Spread
ABSTRACT: One Health issues, such as the spread of highly pathogenic avian
influenza~(HPAI), present significant challenges at the
human-animal-environmental interface. Recent H5N1 outbreaks underscore the need
for comprehensive modeling efforts that capture the complex interactions
between various entities in these interconnected ecosystems. To support such
efforts, we develop a methodology to construct a synthetic spatiotemporal
gridded dataset of livestock production and processing, human population, and
wild birds for the contiguous United States, called a \emph{digital similar}.
This representation is a result of fusing diverse datasets using statistical
and optimization techniques, followed by extensive verification and validation.
The livestock component includes farm-level representations of four major
livestock types -- cattle, poultry, swine, and sheep -- including further
categorization into subtypes such as dairy cows, beef cows, chickens, turkeys,
ducks, etc. Weekly abundance data for wild bird species identified in the
transmission of avian influenza are included. Gridded distributions of the
human population, along with demographic and occupational features, capture the
placement of agricultural workers and the general population. We demonstrate
how the digital similar can be applied to evaluate spillover risk to dairy cows
and poultry from wild bird population, then validate these results using
historical H5N1 incidences. The resulting subtype-specific spatiotemporal risk
maps identify hotspots of high risk from H5N1 infected wild bird population to
dairy cattle and poultry operations, thus guiding surveillance efforts.
| no_new_dataset | 0.865793 |
2411.03260 | Xiujin Zhu | Xiujin Zhu, Chee-Onn Chow and Joon Huang Chuah | ShadowMamba: State-Space Model with Boundary-Region Selective Scan for
Shadow Removal | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image shadow removal is a common low-level vision problem. Shadows cause
sudden brightness changes in some areas, which can affect the accuracy of
downstream tasks. Currently, Transformer-based shadow removal methods improve
computational efficiency by using a window mechanism. However, this approach
reduces the effective receptive field and weakens the ability to model
long-range dependencies in shadow images. Recently, Mamba has achieved
significant success in computer vision by modeling long-sequence information
globally with linear complexity. However, when applied to shadow removal, its
original scanning mechanism overlooks the semantic continuity along shadow
boundaries, and the coherence within each region. To solve this issue, we
propose a new boundary-region selective scanning mechanism that scans shadow,
boundary, and non-shadow regions separately, making pixels of the same type
closer in the sequence. This increases semantic continuity and helps the model
understand local details better. Incorporating this idea, we design the first
Mamba-based lightweight shadow removal model, called ShadowMamba. It uses a
hierarchical combination U-Net structure, which effectively reduces the number
of parameters and computational complexity. Shallow layers rely on our
boundary-region selective scanning to capture local details, while deeper
layers use global cross-scanning to learn global brightness features. Extensive
experiments show that ShadowMamba outperforms current state-of-the-art models
on ISTD+, ISTD, and SRD datasets, and it also requires fewer parameters and
less computational cost. (Code will be made available upon paper acceptance.)
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 16:59:06 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 03:12:27 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhu",
"Xiujin",
""
],
[
"Chow",
"Chee-Onn",
""
],
[
"Chuah",
"Joon Huang",
""
]
]
| TITLE: ShadowMamba: State-Space Model with Boundary-Region Selective Scan for
Shadow Removal
ABSTRACT: Image shadow removal is a common low-level vision problem. Shadows cause
sudden brightness changes in some areas, which can affect the accuracy of
downstream tasks. Currently, Transformer-based shadow removal methods improve
computational efficiency by using a window mechanism. However, this approach
reduces the effective receptive field and weakens the ability to model
long-range dependencies in shadow images. Recently, Mamba has achieved
significant success in computer vision by modeling long-sequence information
globally with linear complexity. However, when applied to shadow removal, its
original scanning mechanism overlooks the semantic continuity along shadow
boundaries, and the coherence within each region. To solve this issue, we
propose a new boundary-region selective scanning mechanism that scans shadow,
boundary, and non-shadow regions separately, making pixels of the same type
closer in the sequence. This increases semantic continuity and helps the model
understand local details better. Incorporating this idea, we design the first
Mamba-based lightweight shadow removal model, called ShadowMamba. It uses a
hierarchical combination U-Net structure, which effectively reduces the number
of parameters and computational complexity. Shallow layers rely on our
boundary-region selective scanning to capture local details, while deeper
layers use global cross-scanning to learn global brightness features. Extensive
experiments show that ShadowMamba outperforms current state-of-the-art models
on ISTD+, ISTD, and SRD datasets, and it also requires fewer parameters and
less computational cost. (Code will be made available upon paper acceptance.)
| no_new_dataset | 0.953535 |
2411.06390 | Yutong Chen | Yutong Chen, Marko Mihajlovic, Xiyi Chen, Yiming Wang, Sergey Prokudin
and Siyu Tang | SplatFormer: Point Transformer for Robust 3D Gaussian Splatting | ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D Gaussian Splatting (3DGS) has recently transformed photorealistic
reconstruction, achieving high visual fidelity and real-time performance.
However, rendering quality significantly deteriorates when test views deviate
from the camera angles used during training, posing a major challenge for
applications in immersive free-viewpoint rendering and navigation. In this
work, we conduct a comprehensive evaluation of 3DGS and related novel view
synthesis methods under out-of-distribution (OOD) test camera scenarios. By
creating diverse test cases with synthetic and real-world datasets, we
demonstrate that most existing methods, including those incorporating various
regularization techniques and data-driven priors, struggle to generalize
effectively to OOD views. To address this limitation, we introduce SplatFormer,
the first point transformer model specifically designed to operate on Gaussian
splats. SplatFormer takes as input an initial 3DGS set optimized under limited
training views and refines it in a single forward pass, effectively removing
potential artifacts in OOD test views. To our knowledge, this is the first
successful application of point transformers directly on 3DGS sets, surpassing
the limitations of previous multi-scene training methods, which could handle
only a restricted number of input views during inference. Our model
significantly improves rendering quality under extreme novel views, achieving
state-of-the-art performance in these challenging scenarios and outperforming
various 3DGS regularization techniques, multi-scene models tailored for sparse
view synthesis, and diffusion-based frameworks.
| [
{
"version": "v1",
"created": "Sun, 10 Nov 2024 08:23:27 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Nov 2024 06:41:21 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 08:37:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Yutong",
""
],
[
"Mihajlovic",
"Marko",
""
],
[
"Chen",
"Xiyi",
""
],
[
"Wang",
"Yiming",
""
],
[
"Prokudin",
"Sergey",
""
],
[
"Tang",
"Siyu",
""
]
]
| TITLE: SplatFormer: Point Transformer for Robust 3D Gaussian Splatting
ABSTRACT: 3D Gaussian Splatting (3DGS) has recently transformed photorealistic
reconstruction, achieving high visual fidelity and real-time performance.
However, rendering quality significantly deteriorates when test views deviate
from the camera angles used during training, posing a major challenge for
applications in immersive free-viewpoint rendering and navigation. In this
work, we conduct a comprehensive evaluation of 3DGS and related novel view
synthesis methods under out-of-distribution (OOD) test camera scenarios. By
creating diverse test cases with synthetic and real-world datasets, we
demonstrate that most existing methods, including those incorporating various
regularization techniques and data-driven priors, struggle to generalize
effectively to OOD views. To address this limitation, we introduce SplatFormer,
the first point transformer model specifically designed to operate on Gaussian
splats. SplatFormer takes as input an initial 3DGS set optimized under limited
training views and refines it in a single forward pass, effectively removing
potential artifacts in OOD test views. To our knowledge, this is the first
successful application of point transformers directly on 3DGS sets, surpassing
the limitations of previous multi-scene training methods, which could handle
only a restricted number of input views during inference. Our model
significantly improves rendering quality under extreme novel views, achieving
state-of-the-art performance in these challenging scenarios and outperforming
various 3DGS regularization techniques, multi-scene models tailored for sparse
view synthesis, and diffusion-based frameworks.
| no_new_dataset | 0.945951 |
2411.08508 | David Svitov | David Svitov, Pietro Morerio, Lourdes Agapito, Alessio Del Bue | BillBoard Splatting (BBSplat): Learnable Textured Primitives for Novel
View Synthesis | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present billboard Splatting (BBSplat) - a novel approach for novel view
synthesis based on textured geometric primitives. BBSplat represents the scene
as a set of optimizable textured planar primitives with learnable RGB textures
and alpha-maps to control their shape. BBSplat primitives can be used in any
Gaussian Splatting pipeline as drop-in replacements for Gaussians. The proposed
primitives close the rendering quality gap between 2D and 3D Gaussian Splatting
(GS), enabling the accurate extraction of 3D mesh as in the 2DGS framework.
Additionally, the explicit nature of planar primitives enables the use of the
ray-tracing effects in rasterization. Our novel regularization term encourages
textures to have a sparser structure, enabling an efficient compression that
leads to a reduction in the storage space of the model up to x17 times compared
to 3DGS. Our experiments show the efficiency of BBSplat on standard datasets of
real indoor and outdoor scenes such as Tanks&Temples, DTU, and Mip-NeRF-360.
Namely, we achieve a state-of-the-art PSNR of 29.72 for DTU at Full HD
resolution.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 10:43:39 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Nov 2024 15:35:52 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Feb 2025 10:38:48 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Mar 2025 13:33:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Svitov",
"David",
""
],
[
"Morerio",
"Pietro",
""
],
[
"Agapito",
"Lourdes",
""
],
[
"Del Bue",
"Alessio",
""
]
]
| TITLE: BillBoard Splatting (BBSplat): Learnable Textured Primitives for Novel
View Synthesis
ABSTRACT: We present billboard Splatting (BBSplat) - a novel approach for novel view
synthesis based on textured geometric primitives. BBSplat represents the scene
as a set of optimizable textured planar primitives with learnable RGB textures
and alpha-maps to control their shape. BBSplat primitives can be used in any
Gaussian Splatting pipeline as drop-in replacements for Gaussians. The proposed
primitives close the rendering quality gap between 2D and 3D Gaussian Splatting
(GS), enabling the accurate extraction of 3D mesh as in the 2DGS framework.
Additionally, the explicit nature of planar primitives enables the use of the
ray-tracing effects in rasterization. Our novel regularization term encourages
textures to have a sparser structure, enabling an efficient compression that
leads to a reduction in the storage space of the model up to x17 times compared
to 3DGS. Our experiments show the efficiency of BBSplat on standard datasets of
real indoor and outdoor scenes such as Tanks&Temples, DTU, and Mip-NeRF-360.
Namely, we achieve a state-of-the-art PSNR of 29.72 for DTU at Full HD
resolution.
| no_new_dataset | 0.945298 |
2411.08592 | Xie Jun | Jun Xie, Wenxiao Li, Faqiang Wang, Liqiang Zhang, Zhengyang Hou, Jun
Liu | Slender Object Scene Segmentation in Remote Sensing Image Based on
Learnable Morphological Skeleton with Segment Anything Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Morphological methods play a crucial role in remote sensing image processing,
due to their ability to capture and preserve small structural details. However,
most of the existing deep learning models for semantic segmentation are based
on the encoder-decoder architecture including U-net and Segment Anything Model
(SAM), where the downsampling process tends to discard fine details. In this
paper, we propose a new approach that integrates learnable morphological
skeleton prior into deep neural networks using the variational method. To
address the difficulty in backpropagation in neural networks caused by the
non-differentiability presented in classical morphological operations, we
provide a smooth representation of the morphological skeleton and design a
variational segmentation model integrating morphological skeleton prior by
employing operator splitting and dual methods. Then, we integrate this model
into the network architecture of SAM, which is achieved by adding a token to
mask decoder and modifying the final sigmoid layer, ensuring the final
segmentation results preserve the skeleton structure as much as possible.
Experimental results on remote sensing datasets, including buildings, roads and
water, demonstrate that our method outperforms the original SAM on slender
object segmentation and exhibits better generalization capability.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 13:19:51 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 12:06:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xie",
"Jun",
""
],
[
"Li",
"Wenxiao",
""
],
[
"Wang",
"Faqiang",
""
],
[
"Zhang",
"Liqiang",
""
],
[
"Hou",
"Zhengyang",
""
],
[
"Liu",
"Jun",
""
]
]
| TITLE: Slender Object Scene Segmentation in Remote Sensing Image Based on
Learnable Morphological Skeleton with Segment Anything Model
ABSTRACT: Morphological methods play a crucial role in remote sensing image processing,
due to their ability to capture and preserve small structural details. However,
most of the existing deep learning models for semantic segmentation are based
on the encoder-decoder architecture including U-net and Segment Anything Model
(SAM), where the downsampling process tends to discard fine details. In this
paper, we propose a new approach that integrates learnable morphological
skeleton prior into deep neural networks using the variational method. To
address the difficulty in backpropagation in neural networks caused by the
non-differentiability presented in classical morphological operations, we
provide a smooth representation of the morphological skeleton and design a
variational segmentation model integrating morphological skeleton prior by
employing operator splitting and dual methods. Then, we integrate this model
into the network architecture of SAM, which is achieved by adding a token to
mask decoder and modifying the final sigmoid layer, ensuring the final
segmentation results preserve the skeleton structure as much as possible.
Experimental results on remote sensing datasets, including buildings, roads and
water, demonstrate that our method outperforms the original SAM on slender
object segmentation and exhibits better generalization capability.
| no_new_dataset | 0.951953 |
2411.08832 | Reece O'Mahoney | Reece O'Mahoney, Alexander L. Mitchell, Wanming Yu, Ingmar Posner,
Ioannis Havoutis | Offline Adaptation of Quadruped Locomotion using Diffusion Models | null | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a diffusion-based approach to quadrupedal locomotion that
simultaneously addresses the limitations of learning and interpolating between
multiple skills (modes) and of offline adapting to new locomotion behaviours
after training. This is the first framework to apply classifier-free guided
diffusion to quadruped locomotion and demonstrate its efficacy by extracting
goal-conditioned behaviour from an originally unlabelled dataset. We show that
these capabilities are compatible with a multi-skill policy and can be applied
with little modification and minimal compute overhead, i.e., running entirely
on the robot's onboard CPU. We verify the validity of our approach with hardware
experiments on the ANYmal quadruped platform.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 18:12:15 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 07:30:55 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"O'Mahoney",
"Reece",
""
],
[
"Mitchell",
"Alexander L.",
""
],
[
"Yu",
"Wanming",
""
],
[
"Posner",
"Ingmar",
""
],
[
"Havoutis",
"Ioannis",
""
]
]
| TITLE: Offline Adaptation of Quadruped Locomotion using Diffusion Models
ABSTRACT: We present a diffusion-based approach to quadrupedal locomotion that
simultaneously addresses the limitations of learning and interpolating between
multiple skills (modes) and of offline adapting to new locomotion behaviours
after training. This is the first framework to apply classifier-free guided
diffusion to quadruped locomotion and demonstrate its efficacy by extracting
goal-conditioned behaviour from an originally unlabelled dataset. We show that
these capabilities are compatible with a multi-skill policy and can be applied
with little modification and minimal compute overhead, i.e., running entirely
on the robot's onboard CPU. We verify the validity of our approach with hardware
experiments on the ANYmal quadruped platform.
| no_new_dataset | 0.946151 |
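The record above relies on classifier-free guided diffusion to extract goal-conditioned behaviour from an unlabelled dataset. The sketch below shows the standard classifier-free guidance rule at sampling time, eps = eps_uncond + w * (eps_cond - eps_uncond), applied to a toy trajectory denoiser; the TinyDenoiser architecture, the goal encoding, and the guidance weight are illustrative assumptions, not the authors' policy or noise schedule.

    import torch
    import torch.nn as nn

    class TinyDenoiser(nn.Module):
        # Stand-in network: predicts noise for a flat trajectory latent,
        # conditioned on a scalar timestep embedding and a goal vector.
        def __init__(self, dim=32, goal_dim=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim + 1 + goal_dim, 128),
                                     nn.ReLU(), nn.Linear(128, dim))

        def forward(self, x, t, goal):
            t_emb = t.float().unsqueeze(-1) / 1000.0
            return self.net(torch.cat([x, t_emb, goal], dim=-1))

    @torch.no_grad()
    def cfg_epsilon(model, x_t, t, goal, w=2.0):
        # Classifier-free guidance: query the denoiser with and without the
        # goal and extrapolate the two noise estimates with guidance weight w.
        eps_cond = model(x_t, t, goal)
        eps_uncond = model(x_t, t, torch.zeros_like(goal))
        return eps_uncond + w * (eps_cond - eps_uncond)

    model = TinyDenoiser()
    x_t = torch.randn(4, 32)                       # noisy trajectory latents
    t = torch.full((4,), 500, dtype=torch.long)    # diffusion timestep
    goal = torch.randn(4, 8)                       # e.g. target base velocity / heading
    eps = cfg_epsilon(model, x_t, t, goal, w=1.5)

Because guidance only changes how the noise estimate is combined at sampling time, this kind of conditioning can be bolted onto an already-trained multi-skill policy with little extra compute, which is the property the abstract emphasises.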
2411.10679 | Huan Kang | Huan Kang, Hui Li, Tianyang Xu, Rui Wang, Xiao-Jun Wu, Josef Kittler | SPDFusion: An Infrared and Visible Image Fusion Network Based on a
Non-Euclidean Representation of Riemannian Manifolds | 14 pages, 12 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Euclidean representation learning methods have achieved commendable results
in image fusion tasks, which can be attributed to their clear advantages in
handling linear spaces. However, data collected from a realistic scene
usually have a non-Euclidean structure, where the Euclidean metric might be limited
in representing the true data relationships, degrading fusion performance. To
address this issue, a novel SPD (symmetric positive definite) manifold learning
framework is proposed for multi-modal image fusion, named SPDFusion, which
extends the image fusion approach from the Euclidean space to the SPD
manifolds. Specifically, we encode images according to the Riemannian geometry
to exploit their intrinsic statistical correlations, thereby aligning with
human visual perception. The SPD matrix underpins our network
learning, with a cross-modal fusion strategy employed to harness
modality-specific dependencies and augment complementary information.
Subsequently, an attention module is designed to process the learned weight
matrix, facilitating the weighting of spatial global correlation semantics via
SPD matrix multiplication. Based on this, we design an end-to-end fusion
network based on cross-modal manifold learning. Extensive experiments on public
datasets demonstrate that our framework exhibits superior performance compared
to the current state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2024 03:09:49 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 15:12:15 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kang",
"Huan",
""
],
[
"Li",
"Hui",
""
],
[
"Xu",
"Tianyang",
""
],
[
"Wang",
"Rui",
""
],
[
"Wu",
"Xiao-Jun",
""
],
[
"Kittler",
"Josef",
""
]
]
| TITLE: SPDFusion: An Infrared and Visible Image Fusion Network Based on a
Non-Euclidean Representation of Riemannian Manifolds
ABSTRACT: Euclidean representation learning methods have achieved commendable results
in image fusion tasks, which can be attributed to their clear advantages in
handling linear spaces. However, data collected from a realistic scene
usually have a non-Euclidean structure, where the Euclidean metric might be limited
in representing the true data relationships, degrading fusion performance. To
address this issue, a novel SPD (symmetric positive definite) manifold learning
framework is proposed for multi-modal image fusion, named SPDFusion, which
extends the image fusion approach from the Euclidean space to the SPD
manifolds. Specifically, we encode images according to the Riemannian geometry
to exploit their intrinsic statistical correlations, thereby aligning with
human visual perception. The SPD matrix underpins our network
learning, with a cross-modal fusion strategy employed to harness
modality-specific dependencies and augment complementary information.
Subsequently, an attention module is designed to process the learned weight
matrix, facilitating the weighting of spatial global correlation semantics via
SPD matrix multiplication. Based on this, we design an end-to-end fusion
network based on cross-modal manifold learning. Extensive experiments on public
datasets demonstrate that our framework exhibits superior performance compared
to the current state-of-the-art methods.
| no_new_dataset | 0.947624 |
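The SPDFusion record above builds on SPD (covariance) matrices and Riemannian geometry. A common, simple way to handle such matrices in a network is the log-Euclidean map: form a channel covariance, regularize it to be positive definite, and take its matrix logarithm so that ordinary Euclidean operations afterwards respect the manifold structure. The sketch below shows only this generic construction with hypothetical infrared/visible feature maps; it is not the paper's fusion network or attention module.

    import torch

    def spd_from_features(feat, eps=1e-5):
        # Build a channel-covariance SPD matrix from a (B, C, H, W) feature map.
        b, c, h, w = feat.shape
        x = feat.flatten(2)                               # (B, C, H*W)
        x = x - x.mean(dim=2, keepdim=True)
        cov = x @ x.transpose(1, 2) / (h * w - 1)         # (B, C, C)
        return cov + eps * torch.eye(c, device=feat.device)  # enforce positive definiteness

    def log_euclidean(spd):
        # Matrix logarithm via eigendecomposition; Euclidean operations on the
        # result then correspond to the log-Euclidean metric on the SPD manifold.
        vals, vecs = torch.linalg.eigh(spd)
        log_vals = torch.log(vals.clamp_min(1e-8))
        return vecs @ torch.diag_embed(log_vals) @ vecs.transpose(-1, -2)

    # Hypothetical cross-modal use: compare infrared and visible feature statistics.
    ir = torch.randn(2, 64, 32, 32)
    vis = torch.randn(2, 64, 32, 32)
    dist = torch.linalg.norm(log_euclidean(spd_from_features(ir))
                             - log_euclidean(spd_from_features(vis)), dim=(-2, -1))

A log-Euclidean distance of this kind is one way second-order (covariance) statistics can enter a fusion objective while staying consistent with the manifold geometry the abstract appeals to.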
2411.10693 | Qi Wang | Qi Wang, Jinjia Zhou | Multi-perspective Contrastive Logit Distillation | 10 pages, 6 figures, 9 tabels, 12 formulas | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In previous studies on knowledge distillation, the significance of logit
distillation has frequently been overlooked. To revitalize logit distillation,
we present a novel perspective by reconsidering its computation based on the
semantic properties of logits and exploring how to utilize it more efficiently.
Logits often contain a substantial amount of high-level semantic information;
however, the conventional approach of employing logits to compute
Kullback-Leibler (KL) divergence does not account for their semantic
properties. Furthermore, this direct KL divergence computation fails to fully
exploit the potential of logits. To address these challenges, we introduce a
novel and efficient logit distillation method, Multi-perspective Contrastive
Logit Distillation (MCLD), which substantially improves the performance and
efficacy of logit distillation. In comparison to existing logit distillation
methods and complex feature distillation methods, MCLD attains state-of-the-art
performance in image classification and transfer learning tasks across
multiple datasets, including CIFAR-100, ImageNet, Tiny-ImageNet, and STL-10.
Additionally, MCLD exhibits superior training efficiency and outstanding
performance when distilling on Vision Transformers, further emphasizing its
notable advantages. This study unveils the vast potential of logits in
knowledge distillation and seeks to offer valuable insights for future
research.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2024 04:08:41 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 09:45:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Qi",
""
],
[
"Zhou",
"Jinjia",
""
]
]
| TITLE: Multi-perspective Contrastive Logit Distillation
ABSTRACT: In previous studies on knowledge distillation, the significance of logit
distillation has frequently been overlooked. To revitalize logit distillation,
we present a novel perspective by reconsidering its computation based on the
semantic properties of logits and exploring how to utilize it more efficiently.
Logits often contain a substantial amount of high-level semantic information;
however, the conventional approach of employing logits to compute
Kullback-Leibler (KL) divergence does not account for their semantic
properties. Furthermore, this direct KL divergence computation fails to fully
exploit the potential of logits. To address these challenges, we introduce a
novel and efficient logit distillation method, Multi-perspective Contrastive
Logit Distillation (MCLD), which substantially improves the performance and
efficacy of logit distillation. In comparison to existing logit distillation
methods and complex feature distillation methods, MCLD attains state-of-the-art
performance in image classification and transfer learning tasks across
multiple datasets, including CIFAR-100, ImageNet, Tiny-ImageNet, and STL-10.
Additionally, MCLD exhibits superior training efficiency and outstanding
performance when distilling on Vision Transformers, further emphasizing its
notable advantages. This study unveils the vast potential of logits in
knowledge distillation and seeks to offer valuable insights for future
research.
| no_new_dataset | 0.947721 |
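The MCLD record above argues that a plain per-sample KL divergence under-uses the semantic structure carried by logits. A generic way to add such structure is an InfoNCE-style contrastive term over normalized logits, where each sample's teacher logits act as the positive and the rest of the batch as negatives. The sketch below combines this with a conventional temperature-scaled KL term; it is a minimal stand-in under these assumptions, not the multi-perspective formulation of MCLD.

    import torch
    import torch.nn.functional as F

    def contrastive_logit_kd(student_logits, teacher_logits, tau=0.1):
        # Treat each sample's (student, teacher) logit pair as a positive and
        # all other teacher logits in the batch as negatives (InfoNCE-style),
        # so the student matches the teacher's cross-sample semantic structure
        # rather than only its per-sample distribution.
        s = F.normalize(student_logits, dim=1)
        t = F.normalize(teacher_logits, dim=1)
        sim = s @ t.t() / tau                          # (B, B) similarity matrix
        targets = torch.arange(s.size(0), device=s.device)
        return F.cross_entropy(sim, targets)

    # Hypothetical usage alongside a conventional KD term (temperature T = 4).
    student_logits = torch.randn(16, 100, requires_grad=True)
    teacher_logits = torch.randn(16, 100)
    kl = F.kl_div(F.log_softmax(student_logits / 4.0, dim=1),
                  F.softmax(teacher_logits / 4.0, dim=1),
                  reduction="batchmean") * 16.0        # scale by T^2
    loss = kl + contrastive_logit_kd(student_logits, teacher_logits)
    loss.backward()

The batch-level similarity matrix is what gives the contrastive term its extra signal: it penalises the student not only for mismatching its own teacher distribution but also for confusing samples that the teacher keeps apart.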
2411.10788 | Jeonghyeok Do | Jeonghyeok Do, Jaehyup Lee, Munchurl Kim | C-DiffSET: Leveraging Latent Diffusion for SAR-to-EO Image Translation
with Confidence-Guided Reliable Object Generation | Please visit our project page
https://kaist-viclab.github.io/C-DiffSET_site/ | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Synthetic Aperture Radar (SAR) imagery provides robust environmental and
temporal coverage (e.g., across cloud cover, seasons, and day-night cycles), yet its
noise and unique structural patterns pose interpretation challenges, especially
for non-experts. SAR-to-EO (Electro-Optical) image translation (SET) has
emerged to make SAR images more perceptually interpretable. However,
traditional approaches trained from scratch on limited SAR-EO datasets are
prone to overfitting. To address these challenges, we introduce Confidence
Diffusion for SAR-to-EO Translation, called C-DiffSET, a framework leveraging
a pretrained Latent Diffusion Model (LDM) extensively trained on natural images,
thus enabling effective adaptation to the EO domain. Remarkably, we find that
the pretrained VAE encoder aligns SAR and EO images in the same latent space,
even with varying noise levels in SAR inputs. To further improve pixel-wise
fidelity for SET, we propose a confidence-guided diffusion (C-Diff) loss that
mitigates artifacts from temporal discrepancies, such as appearing or
disappearing objects, thereby enhancing structural accuracy. C-DiffSET achieves
state-of-the-art (SOTA) results on multiple datasets, significantly
outperforming the very recent image-to-image translation methods and SET
methods with large margins.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2024 12:28:40 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Nov 2024 08:25:59 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 05:36:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Do",
"Jeonghyeok",
""
],
[
"Lee",
"Jaehyup",
""
],
[
"Kim",
"Munchurl",
""
]
]
| TITLE: C-DiffSET: Leveraging Latent Diffusion for SAR-to-EO Image Translation
with Confidence-Guided Reliable Object Generation
ABSTRACT: Synthetic Aperture Radar (SAR) imagery provides robust environmental and
temporal coverage (e.g., across cloud cover, seasons, and day-night cycles), yet its
noise and unique structural patterns pose interpretation challenges, especially
for non-experts. SAR-to-EO (Electro-Optical) image translation (SET) has
emerged to make SAR images more perceptually interpretable. However,
traditional approaches trained from scratch on limited SAR-EO datasets are
prone to overfitting. To address these challenges, we introduce Confidence
Diffusion for SAR-to-EO Translation, called C-DiffSET, a framework leveraging
a pretrained Latent Diffusion Model (LDM) extensively trained on natural images,
thus enabling effective adaptation to the EO domain. Remarkably, we find that
the pretrained VAE encoder aligns SAR and EO images in the same latent space,
even with varying noise levels in SAR inputs. To further improve pixel-wise
fidelity for SET, we propose a confidence-guided diffusion (C-Diff) loss that
mitigates artifacts from temporal discrepancies, such as appearing or
disappearing objects, thereby enhancing structural accuracy. C-DiffSET achieves
state-of-the-art (SOTA) results on multiple datasets, significantly
outperforming the very recent image-to-image translation methods and SET
methods with large margins.
| no_new_dataset | 0.950319 |
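The C-DiffSET record above introduces a confidence-guided diffusion (C-Diff) loss to down-weight pixels affected by temporal discrepancies between SAR and EO pairs. The sketch below shows the general pattern of a confidence-weighted noise-prediction loss; the confidence map here is a hypothetical robust-weighting heuristic derived from the residual itself, standing in for whatever confidence estimate the actual pipeline provides.

    import torch

    def confidence_weighted_diffusion_loss(eps_pred, eps_target, gamma=1.0):
        # Per-pixel diffusion loss reweighted by a confidence map. As a simple
        # stand-in, confidence is computed from the residual itself (large
        # residuals -> low confidence), so pixels that look like mismatched
        # content (e.g. objects present in only one modality) contribute less.
        residual = (eps_pred - eps_target) ** 2
        with torch.no_grad():
            confidence = torch.exp(-gamma * residual.mean(dim=1, keepdim=True))
        return (confidence * residual).mean()

    # Hypothetical usage inside a latent-diffusion training step.
    eps_pred = torch.randn(2, 4, 32, 32, requires_grad=True)   # predicted noise in latent space
    eps_target = torch.randn(2, 4, 32, 32)                     # noise added at this timestep
    loss = confidence_weighted_diffusion_loss(eps_pred, eps_target)
    loss.backward()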