id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
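For readers who want to work with these records programmatically, here is a minimal sketch for iterating over rows with the schema above. It assumes the records are stored as JSON Lines with these field names (as in the public arXiv metadata dumps); the file name is illustrative.

```python
import json

def iter_records(path):
    """Yield one paper-metadata record (a dict with the fields above)
    per line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Illustrative usage: print id, primary category, and unwrapped title.
for rec in iter_records("arxiv_metadata.jsonl"):  # hypothetical file name
    title = " ".join(rec["title"].split())  # collapse hard line wraps
    print(rec["id"], rec["categories"].split()[0], "-", title)
```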
2503.14064 | Xinhao Xiang | Xinhao Xiang, Xiao Liu, Zizhong Li, Zhuosheng Liu, Jiawei Zhang | AIGVE-Tool: AI-Generated Video Evaluation Toolkit with Multifaceted
Benchmark | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement in AI-generated video synthesis has led to a growing
demand for standardized and effective evaluation metrics. Existing metrics lack
a unified framework for systematically categorizing methodologies, limiting a
holistic understanding of the evaluation landscape. Additionally, fragmented
implementations and the absence of standardized interfaces lead to redundant
processing overhead. Furthermore, many prior approaches are constrained by
dataset-specific dependencies, limiting their applicability across diverse
video domains. To address these challenges, we introduce AIGVE-Tool
(AI-Generated Video Evaluation Toolkit), a unified framework that provides a
structured and extensible evaluation pipeline for comprehensive AI-generated
video evaluation. Organized within a novel five-category taxonomy, AIGVE-Tool
integrates multiple evaluation methodologies while allowing flexible
customization through a modular configuration system. Additionally, we propose
AIGVE-Bench, a large-scale benchmark dataset created with five SOTA video
generation models based on hand-crafted instructions and prompts. This dataset
systematically evaluates various video generation models across nine critical
quality dimensions. Extensive experiments demonstrate the effectiveness of
AIGVE-Tool in providing standardized and reliable evaluation results,
highlighting specific strengths and limitations of current models and
facilitating the advancements of next-generation AI-generated video techniques.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:36:33 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xiang",
"Xinhao",
""
],
[
"Liu",
"Xiao",
""
],
[
"Li",
"Zizhong",
""
],
[
"Liu",
"Zhuosheng",
""
],
[
"Zhang",
"Jiawei",
""
]
] | TITLE: AIGVE-Tool: AI-Generated Video Evaluation Toolkit with Multifaceted
Benchmark
ABSTRACT: The rapid advancement in AI-generated video synthesis has led to a growing
demand for standardized and effective evaluation metrics. Existing metrics lack
a unified framework for systematically categorizing methodologies, limiting a
holistic understanding of the evaluation landscape. Additionally, fragmented
implementations and the absence of standardized interfaces lead to redundant
processing overhead. Furthermore, many prior approaches are constrained by
dataset-specific dependencies, limiting their applicability across diverse
video domains. To address these challenges, we introduce AIGVE-Tool
(AI-Generated Video Evaluation Toolkit), a unified framework that provides a
structured and extensible evaluation pipeline for comprehensive AI-generated
video evaluation. Organized within a novel five-category taxonomy, AIGVE-Tool
integrates multiple evaluation methodologies while allowing flexible
customization through a modular configuration system. Additionally, we propose
AIGVE-Bench, a large-scale benchmark dataset created with five SOTA video
generation models based on hand-crafted instructions and prompts. This dataset
systematically evaluates various video generation models across nine critical
quality dimensions. Extensive experiments demonstrate the effectiveness of
AIGVE-Tool in providing standardized and reliable evaluation results,
highlighting specific strengths and limitations of current models and
facilitating the advancements of next-generation AI-generated video techniques.
|
2503.14070 | Junliang Guo | Yang Ye, Junliang Guo, Haoyu Wu, Tianyu He, Tim Pearce, Tabish Rashid,
Katja Hofmann, Jiang Bian | Fast Autoregressive Video Generation with Diagonal Decoding | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Autoregressive Transformer models have demonstrated impressive performance in
video generation, but their sequential token-by-token decoding process poses a
major bottleneck, particularly for long videos represented by tens of thousands
of tokens. In this paper, we propose Diagonal Decoding (DiagD), a training-free
inference acceleration algorithm for autoregressively pre-trained models that
exploits spatial and temporal correlations in videos. Our method generates
tokens along diagonal paths in the spatial-temporal token grid, enabling
parallel decoding within each frame as well as partially overlapping across
consecutive frames. The proposed algorithm is versatile and adaptive to various
generative models and tasks, while providing flexible control over the
trade-off between inference speed and visual quality. Furthermore, we propose a
cost-effective finetuning strategy that aligns the attention patterns of the
model with our decoding order, further mitigating the training-inference gap on
small-scale models. Experiments on multiple autoregressive video generation
models and datasets demonstrate that DiagD achieves up to $10\times$ speedup
compared to naive sequential decoding, while maintaining comparable visual
fidelity.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:42:55 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ye",
"Yang",
""
],
[
"Guo",
"Junliang",
""
],
[
"Wu",
"Haoyu",
""
],
[
"He",
"Tianyu",
""
],
[
"Pearce",
"Tim",
""
],
[
"Rashid",
"Tabish",
""
],
[
"Hofmann",
"Katja",
""
],
[
"Bian",
"Jiang",
""
]
] | TITLE: Fast Autoregressive Video Generation with Diagonal Decoding
ABSTRACT: Autoregressive Transformer models have demonstrated impressive performance in
video generation, but their sequential token-by-token decoding process poses a
major bottleneck, particularly for long videos represented by tens of thousands
of tokens. In this paper, we propose Diagonal Decoding (DiagD), a training-free
inference acceleration algorithm for autoregressively pre-trained models that
exploits spatial and temporal correlations in videos. Our method generates
tokens along diagonal paths in the spatial-temporal token grid, enabling
parallel decoding within each frame as well as partially overlapping across
consecutive frames. The proposed algorithm is versatile and adaptive to various
generative models and tasks, while providing flexible control over the
trade-off between inference speed and visual quality. Furthermore, we propose a
cost-effective finetuning strategy that aligns the attention patterns of the
model with our decoding order, further mitigating the training-inference gap on
small-scale models. Experiments on multiple autoregressive video generation
models and datasets demonstrate that DiagD achieves up to $10\times$ speedup
compared to naive sequential decoding, while maintaining comparable visual
fidelity.
|
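To make the decoding order of DiagD concrete, here is a small illustrative sketch (not the authors' implementation): it simply enumerates the anti-diagonals of a frames-by-tokens grid, i.e. the groups of positions that the diagonal schedule decodes in parallel.

```python
def diagonal_order(num_frames, tokens_per_frame):
    """Group grid cells (frame t, position i) by anti-diagonal d = t + i.

    Under a diagonal schedule, all cells in one group are decoded in
    parallel; groups are processed in increasing d, so decoding of
    consecutive frames partially overlaps.
    """
    groups = []
    for d in range(num_frames + tokens_per_frame - 1):
        group = [(t, d - t)
                 for t in range(num_frames)
                 if 0 <= d - t < tokens_per_frame]
        groups.append(group)
    return groups

# Toy example: 3 frames of 4 tokens -> 6 decoding steps instead of 12.
for step, cells in enumerate(diagonal_order(3, 4)):
    print(f"step {step}: decode {cells} in parallel")
```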
2503.14072 | Rossana Mastrandrea | Rossana Mastrandrea, Fabio Montobbio, Gabriele Pellegrino, Massimo
Riccaboni, Valerio Sterzi | Leveraging Knowledge Networks: Rethinking Technological Value
Distribution in mRNA Vaccine Innovations | null | null | null | null | physics.soc-ph econ.GN q-fin.EC | http://creativecommons.org/licenses/by/4.0/ | This study examines the roles of public and private sector actors in the
development of mRNA vaccines, a breakthrough innovation in modern medicine.
Using a dataset of 151 core patent families and 2,416 antecedent (cited)
patents, we analyze the structure and dynamics of the mRNA vaccine knowledge
network through network theory. Our findings highlight the central role of
biotechnology firms, such as Moderna and BioNTech, alongside the crucial
contributions of universities and public research organizations (PROs) in
providing foundational knowledge. We develop a novel credit allocation
framework, showing that universities, PROs, government and research centers
account for at least 27% of the external technological knowledge base behind
mRNA vaccine breakthroughs - representing a minimum threshold of their overall
contribution. Our study offers new insights into pharmaceutical and
biotechnology innovation dynamics, emphasizing how Moderna and BioNTech's mRNA
technologies have benefited from academic institutions, with notable
differences in their institutional knowledge sources.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:45:19 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Mastrandrea",
"Rossana",
""
],
[
"Montobbio",
"Fabio",
""
],
[
"Pellegrino",
"Gabriele",
""
],
[
"Riccaboni",
"Massimo",
""
],
[
"Sterzi",
"Valerio",
""
]
] | TITLE: Leveraging Knowledge Networks: Rethinking Technological Value
Distribution in mRNA Vaccine Innovations
ABSTRACT: This study examines the roles of public and private sector actors in the
development of mRNA vaccines, a breakthrough innovation in modern medicine.
Using a dataset of 151 core patent families and 2,416 antecedent (cited)
patents, we analyze the structure and dynamics of the mRNA vaccine knowledge
network through network theory. Our findings highlight the central role of
biotechnology firms, such as Moderna and BioNTech, alongside the crucial
contributions of universities and public research organizations (PROs) in
providing foundational knowledge. We develop a novel credit allocation
framework, showing that universities, PROs, government and research centers
account for at least 27% of the external technological knowledge base behind
mRNA vaccine breakthroughs - representing a minimum threshold of their overall
contribution. Our study offers new insights into pharmaceutical and
biotechnology innovation dynamics, emphasizing how Moderna and BioNTech's mRNA
technologies have benefited from academic institutions, with notable
differences in their institutional knowledge sources.
|
2503.14084 | Rongfei Fan | Xingrun Yan, Shiyuan Zuo, Yifeng Lyu, Rongfei Fan, Han Hu | Semantic Communication in Dynamic Channel Scenarios: Collaborative
Optimization of Dual-Pipeline Joint Source-Channel Coding and Personalized
Federated Learning | null | null | null | null | eess.IV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic communication is designed to tackle issues like bandwidth
constraints and high latency in communication systems. However, in complex
network topologies with multiple users, the enormous combinations of client
data and channel state information (CSI) pose significant challenges for
existing semantic communication architectures. To improve the generalization
ability of semantic communication models in complex scenarios while meeting the
personalized needs of each user in their local environments, we propose a novel
personalized federated learning framework with dual-pipeline joint
source-channel coding based on a channel awareness model (PFL-DPJSCCA). Within
this framework, we present a method that achieves zero optimization gap for
non-convex loss functions. Experiments conducted under varying SNR
distributions validate the outstanding performance of our framework across
diverse datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:02:22 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yan",
"Xingrun",
""
],
[
"Zuo",
"Shiyuan",
""
],
[
"Lyu",
"Yifeng",
""
],
[
"Fan",
"Rongfei",
""
],
[
"Hu",
"Han",
""
]
] | TITLE: Semantic Communication in Dynamic Channel Scenarios: Collaborative
Optimization of Dual-Pipeline Joint Source-Channel Coding and Personalized
Federated Learning
ABSTRACT: Semantic communication is designed to tackle issues like bandwidth
constraints and high latency in communication systems. However, in complex
network topologies with multiple users, the enormous combinations of client
data and channel state information (CSI) pose significant challenges for
existing semantic communication architectures. To improve the generalization
ability of semantic communication models in complex scenarios while meeting the
personalized needs of each user in their local environments, we propose a novel
personalized federated learning framework with dual-pipeline joint
source-channel coding based on a channel awareness model (PFL-DPJSCCA). Within
this framework, we present a method that achieves zero optimization gap for
non-convex loss functions. Experiments conducted under varying SNR
distributions validate the outstanding performance of our framework across
diverse datasets.
|
2503.14090 | Jan Göpfert | Jan Göpfert, Patrick Kuckertz, Jann M. Weinand, Detlef Stolten | Wiki-Quantities and Wiki-Measurements: Datasets of Quantities and their
Measurement Context from Wikipedia | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | To cope with the large number of publications, more and more researchers are
automatically extracting data of interest using natural language processing
methods based on supervised learning. Much data, especially in the natural and
engineering sciences, is quantitative, but there is a lack of datasets for
identifying quantities and their context in text. To address this issue, we
present two large datasets based on Wikipedia and Wikidata: Wiki-Quantities is
a dataset consisting of over 1.2 million annotated quantities in the
English-language Wikipedia. Wiki-Measurements is a dataset of 38,738 annotated
quantities in the English-language Wikipedia along with their respective
measured entity, property, and optional qualifiers. Manual validation of 100
samples each of Wiki-Quantities and Wiki-Measurements found 100% and 84-94%
correct, respectively. The datasets can be used in pipeline approaches to
measurement extraction, where quantities are first identified and then their
measurement context. To allow reproduction of this work using newer or
different versions of Wikipedia and Wikidata, we publish the code used to
create the datasets along with the data.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:09:10 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Göpfert",
"Jan",
""
],
[
"Kuckertz",
"Patrick",
""
],
[
"Weinand",
"Jann M.",
""
],
[
"Stolten",
"Detlef",
""
]
] | TITLE: Wiki-Quantities and Wiki-Measurements: Datasets of Quantities and their
Measurement Context from Wikipedia
ABSTRACT: To cope with the large number of publications, more and more researchers are
automatically extracting data of interest using natural language processing
methods based on supervised learning. Much data, especially in the natural and
engineering sciences, is quantitative, but there is a lack of datasets for
identifying quantities and their context in text. To address this issue, we
present two large datasets based on Wikipedia and Wikidata: Wiki-Quantities is
a dataset consisting of over 1.2 million annotated quantities in the
English-language Wikipedia. Wiki-Measurements is a dataset of 38,738 annotated
quantities in the English-language Wikipedia along with their respective
measured entity, property, and optional qualifiers. Manual validation of 100
samples each of Wiki-Quantities and Wiki-Measurements found 100% and 84-94%
correct, respectively. The datasets can be used in pipeline approaches to
measurement extraction, where quantities are first identified and then their
measurement context. To allow reproduction of this work using newer or
different versions of Wikipedia and Wikidata, we publish the code used to
create the datasets along with the data.
|
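The pipeline idea in the abstract above (first spot quantities, then resolve their measurement context) can be illustrated with a deliberately naive first stage. The regex and unit list below are toy assumptions, not how Wiki-Quantities was actually annotated:

```python
import re

# Toy unit list; word boundary only after letter units so "80%" still matches.
QUANTITY = re.compile(r"\d[\d,.]*\s*(?:(?:km|kg|GHz|m|s)\b|%|°C)")

def find_quantities(text):
    """Stage 1 of a two-stage pipeline: locate quantity spans. A second
    stage would then classify each span's measured entity and property."""
    return [(m.start(), m.end(), m.group()) for m in QUANTITY.finditer(text)]

print(find_quantities("The bridge is 1,991 m long, at 35 °C and 80% load."))
```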
2503.14095 | Bipin Kumar Dr. | Bipin Kumar, Bhvisy Kumar Yadav, Soumypdeep Mukhopadhyay, Rakshit
Rohan, Bhupendra Bahadur Singh, Rajib Chattopadhyay, Nagraju Chilukoti, Atul
Kumar Sahai | Towards Location-Specific Precipitation Projections Using Deep Neural
Networks | 21 pages, 9 figures | null | null | null | physics.ao-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate precipitation estimates at individual locations are crucial for
weather forecasting and spatial analysis. This study presents a paradigm shift
by leveraging Deep Neural Networks (DNNs) to surpass traditional methods like
Kriging for station-specific precipitation approximation. We propose two
innovative NN architectures: one utilizing precipitation, elevation, and
location, and another incorporating additional meteorological parameters like
humidity, temperature, and wind speed. Trained on a vast dataset (1980-2019),
these models outperform Kriging across various evaluation metrics (correlation
coefficient, root mean square error, bias, and skill score) on a five-year
validation set. This compelling evidence demonstrates the transformative power
of deep learning for spatial prediction, offering a robust and precise
alternative for station-specific precipitation estimation.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:12:17 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kumar",
"Bipin",
""
],
[
"Yadav",
"Bhvisy Kumar",
""
],
[
"Mukhopadhyay",
"Soumypdeep",
""
],
[
"Rohan",
"Rakshit",
""
],
[
"Singh",
"Bhupendra Bahadur",
""
],
[
"Chattopadhyay",
"Rajib",
""
],
[
"Chilukoti",
"Nagraju",
""
],
[
"Sahai",
"Atul Kumar",
""
]
] | TITLE: Towards Location-Specific Precipitation Projections Using Deep Neural
Networks
ABSTRACT: Accurate precipitation estimates at individual locations are crucial for
weather forecasting and spatial analysis. This study presents a paradigm shift
by leveraging Deep Neural Networks (DNNs) to surpass traditional methods like
Kriging for station-specific precipitation approximation. We propose two
innovative NN architectures: one utilizing precipitation, elevation, and
location, and another incorporating additional meteorological parameters like
humidity, temperature, and wind speed. Trained on a vast dataset (1980-2019),
these models outperform Kriging across various evaluation metrics (correlation
coefficient, root mean square error, bias, and skill score) on a five-year
validation set. This compelling evidence demonstrates the transformative power
of deep learning for spatial prediction, offering a robust and precise
alternative for station-specific precipitation estimation.
|
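As a rough illustration of the kind of station-specific regression network the abstract describes (the exact architectures are not given there), here is a minimal PyTorch sketch; the four-feature input layout and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical feature layout, loosely following the abstract's first
# variant: gridded precipitation, elevation, latitude, longitude.
class StationPrecipNet(nn.Module):
    def __init__(self, in_features=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # station precipitation estimate
        )

    def forward(self, x):
        return self.net(x)

model = StationPrecipNet()
x = torch.randn(8, 4)   # batch of 8 stations' input features
print(model(x).shape)   # torch.Size([8, 1])
```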
2503.14106 | Jef Jonkers | Jef Jonkers, Frank Coopman, Luc Duchateau, Glenn Van Wallendael, Sofie
Van Hoecke | Reliable uncertainty quantification for 2D/3D anatomical landmark
localization using multi-output conformal prediction | 33 pages, 10 figures | null | null | null | cs.CV cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Automatic anatomical landmark localization in medical imaging requires not
just accurate predictions but reliable uncertainty quantification for effective
clinical decision support. Current uncertainty quantification approaches often
fall short, particularly when combined with normality assumptions,
systematically underestimating total predictive uncertainty. This paper
introduces conformal prediction as a framework for reliable uncertainty
quantification in anatomical landmark localization, addressing a critical gap
in automatic landmark localization. We present two novel approaches
guaranteeing finite-sample validity for multi-output prediction: Multi-output
Regression-as-Classification Conformal Prediction (M-R2CCP) and its variant
Multi-output Regression to Classification Conformal Prediction set to Region
(M-R2C2R). Unlike conventional methods that produce axis-aligned
hyperrectangular or ellipsoidal regions, our approaches generate flexible,
non-convex prediction regions that better capture the underlying uncertainty
structure of landmark predictions. Through extensive empirical evaluation
across multiple 2D and 3D datasets, we demonstrate that our methods
consistently outperform existing multi-output conformal prediction approaches
in both validity and efficiency. This work represents a significant advancement
in reliable uncertainty estimation for anatomical landmark localization,
providing clinicians with trustworthy confidence measures for their diagnoses.
While developed for medical imaging, these methods show promise for broader
applications in multi-output regression problems.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:21:32 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jonkers",
"Jef",
""
],
[
"Coopman",
"Frank",
""
],
[
"Duchateau",
"Luc",
""
],
[
"Van Wallendael",
"Glenn",
""
],
[
"Van Hoecke",
"Sofie",
""
]
] | TITLE: Reliable uncertainty quantification for 2D/3D anatomical landmark
localization using multi-output conformal prediction
ABSTRACT: Automatic anatomical landmark localization in medical imaging requires not
just accurate predictions but reliable uncertainty quantification for effective
clinical decision support. Current uncertainty quantification approaches often
fall short, particularly when combined with normality assumptions,
systematically underestimating total predictive uncertainty. This paper
introduces conformal prediction as a framework for reliable uncertainty
quantification in anatomical landmark localization, addressing a critical gap
in automatic landmark localization. We present two novel approaches
guaranteeing finite-sample validity for multi-output prediction: Multi-output
Regression-as-Classification Conformal Prediction (M-R2CCP) and its variant
Multi-output Regression to Classification Conformal Prediction set to Region
(M-R2C2R). Unlike conventional methods that produce axis-aligned
hyperrectangular or ellipsoidal regions, our approaches generate flexible,
non-convex prediction regions that better capture the underlying uncertainty
structure of landmark predictions. Through extensive empirical evaluation
across multiple 2D and 3D datasets, we demonstrate that our methods
consistently outperform existing multi-output conformal prediction approaches
in both validity and efficiency. This work represents a significant advancement
in reliable uncertainty estimation for anatomical landmark localization,
providing clinicians with trustworthy confidence measures for their diagnoses.
While developed for medical imaging, these methods show promise for broader
applications in multi-output regression problems.
|
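For readers unfamiliar with the baseline this paper improves on, here is a sketch of plain split conformal prediction for multi-output regression. It uses a naive max-residual score, which yields exactly the axis-aligned (hyperrectangular) regions that M-R2CCP/M-R2C2R are designed to avoid; all data and numbers are illustrative.

```python
import numpy as np

def split_conformal_box(preds_cal, y_cal, preds_test, alpha=0.1):
    """Baseline split conformal prediction for multi-output regression.
    The max absolute residual across outputs is the nonconformity score,
    giving an axis-aligned box with finite-sample coverage >= 1 - alpha;
    M-R2CCP/M-R2C2R replace this score to obtain non-convex regions."""
    scores = np.max(np.abs(preds_cal - y_cal), axis=1)
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    return preds_test - q, preds_test + q  # lower/upper box corners

rng = np.random.default_rng(0)
y_cal = rng.normal(size=(200, 2))                        # 2D landmark targets
preds_cal = y_cal + rng.normal(0.0, 0.1, size=(200, 2))  # calibration preds
lo, hi = split_conformal_box(preds_cal, y_cal, preds_cal[:3])
```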
2503.14109 | Nicolas Gonthier | Nicolas Gonthier | Operational Change Detection for Geographical Information: Overview and
Challenges | Preprint under review | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Rapid evolution of territories due to climate change and human impact
requires prompt and effective updates to geospatial databases maintained by the
National Mapping Agency. This paper presents a comprehensive overview of change
detection methods tailored for the operational updating of large-scale
geographic databases. This review first outlines the fundamental definition of
change, emphasizing its multifaceted nature, from temporal to semantic
characterization. It categorizes automatic change detection methods into four
main families: rule-based, statistical, machine learning, and simulation
methods. The strengths, limitations, and applicability of every family are
discussed in the context of various input data. Then, key applications for
National Mapping Agencies are identified, particularly the optimization of
geospatial database updating, change-based phenomena, and dynamics monitoring.
Finally, the paper highlights the current challenges for leveraging change
detection such as the variability of change definition, the missing of relevant
large-scale datasets, the diversity of input data, the unstudied no-change
detection, the human in the loop integration and the operational constraints.
The discussion underscores the necessity for ongoing innovation in change
detection techniques to address the future needs of geographic information
systems for national mapping agencies.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:25:28 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Gonthier",
"Nicolas",
""
]
] | TITLE: Operational Change Detection for Geographical Information: Overview and
Challenges
ABSTRACT: Rapid evolution of territories due to climate change and human impact
requires prompt and effective updates to geospatial databases maintained by the
National Mapping Agency. This paper presents a comprehensive overview of change
detection methods tailored for the operational updating of large-scale
geographic databases. This review first outlines the fundamental definition of
change, emphasizing its multifaceted nature, from temporal to semantic
characterization. It categorizes automatic change detection methods into four
main families: rule-based, statistical, machine learning, and simulation
methods. The strengths, limitations, and applicability of every family are
discussed in the context of various input data. Then, key applications for
National Mapping Agencies are identified, particularly the optimization of
geospatial database updating, change-based phenomena, and dynamics monitoring.
Finally, the paper highlights the current challenges for leveraging change
detection such as the variability of change definition, the missing of relevant
large-scale datasets, the diversity of input data, the unstudied no-change
detection, the human in the loop integration and the operational constraints.
The discussion underscores the necessity for ongoing innovation in change
detection techniques to address the future needs of geographic information
systems for national mapping agencies.
|
2503.14112 | Guodong Ding Dr. | Guodong Ding, Rongyu Chen and Angela Yao | Condensing Action Segmentation Datasets via Generative Network Inversion | 10 pages, 3 figures, 5 tables, Accepted to CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This work presents the first condensation approach for procedural video
datasets used in temporal action segmentation. We propose a condensation
framework that leverages generative prior learned from the dataset and network
inversion to condense data into compact latent codes with significant storage
reduced across temporal and channel aspects. Orthogonally, we propose sampling
diverse and representative action sequences to minimize video-wise redundancy.
Our evaluation on standard benchmarks demonstrates consistent effectiveness in
condensing TAS datasets and achieving competitive performance. Specifically,
on the Breakfast dataset, our approach reduces storage by over 500$\times$
while retaining 83% of the performance compared to training with the full
dataset. Furthermore, when applied to a downstream incremental learning task,
it yields superior performance compared to the state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:29:47 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ding",
"Guodong",
""
],
[
"Chen",
"Rongyu",
""
],
[
"Yao",
"Angela",
""
]
] | TITLE: Condensing Action Segmentation Datasets via Generative Network Inversion
ABSTRACT: This work presents the first condensation approach for procedural video
datasets used in temporal action segmentation. We propose a condensation
framework that leverages a generative prior learned from the dataset and network
inversion to condense data into compact latent codes, with storage significantly
reduced across temporal and channel dimensions. Orthogonally, we propose sampling
diverse and representative action sequences to minimize video-wise redundancy.
Our evaluation on standard benchmarks demonstrates consistent effectiveness in
condensing TAS datasets and achieving competitive performance. Specifically,
on the Breakfast dataset, our approach reduces storage by over 500$\times$
while retaining 83% of the performance compared to training with the full
dataset. Furthermore, when applied to a downstream incremental learning task,
it yields superior performance compared to the state-of-the-art.
|
2503.14118 | Michele Ceriotti | Arslan Mazitov, Filippo Bigi, Matthias Kellner, Paolo Pegolo, Davide
Tisi, Guillaume Fraux, Sergey Pozdnyakov, Philip Loche, and Michele Ceriotti | PET-MAD, a universal interatomic potential for advanced materials
modeling | null | null | null | null | cond-mat.mtrl-sci cs.LG physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine-learning interatomic potentials (MLIPs) have greatly extended the
reach of atomic-scale simulations, offering the accuracy of first-principles
calculations at a fraction of the effort. Leveraging large quantum mechanical
databases and expressive architectures, recent "universal" models deliver
qualitative accuracy across the periodic table but are often biased toward
low-energy configurations. We introduce PET-MAD, a generally applicable MLIP
trained on a dataset combining stable inorganic and organic solids,
systematically modified to enhance atomic diversity. Using a moderate but
highly-consistent level of electronic-structure theory, we assess PET-MAD's
accuracy on established benchmarks and advanced simulations of six materials.
PET-MAD rivals state-of-the-art MLIPs for inorganic solids, while also being
reliable for molecules, organic materials, and surfaces. It is stable and fast,
enabling, out-of-the-box, the near-quantitative study of thermal and quantum
mechanical fluctuations, functional properties, and phase transitions. It can
be efficiently fine-tuned to deliver full quantum mechanical accuracy with a
minimal number of targeted calculations.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:35:30 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Mazitov",
"Arslan",
""
],
[
"Bigi",
"Filippo",
""
],
[
"Kellner",
"Matthias",
""
],
[
"Pegolo",
"Paolo",
""
],
[
"Tisi",
"Davide",
""
],
[
"Fraux",
"Guillaume",
""
],
[
"Pozdnyakov",
"Sergey",
""
],
[
"Loche",
"Philip",
""
],
[
"Ceriotti",
"Michele",
""
]
] | TITLE: PET-MAD, a universal interatomic potential for advanced materials
modeling
ABSTRACT: Machine-learning interatomic potentials (MLIPs) have greatly extended the
reach of atomic-scale simulations, offering the accuracy of first-principles
calculations at a fraction of the effort. Leveraging large quantum mechanical
databases and expressive architectures, recent "universal" models deliver
qualitative accuracy across the periodic table but are often biased toward
low-energy configurations. We introduce PET-MAD, a generally applicable MLIP
trained on a dataset combining stable inorganic and organic solids,
systematically modified to enhance atomic diversity. Using a moderate but
highly-consistent level of electronic-structure theory, we assess PET-MAD's
accuracy on established benchmarks and advanced simulations of six materials.
PET-MAD rivals state-of-the-art MLIPs for inorganic solids, while also being
reliable for molecules, organic materials, and surfaces. It is stable and fast,
enabling, out-of-the-box, the near-quantitative study of thermal and quantum
mechanical fluctuations, functional properties, and phase transitions. It can
be efficiently fine-tuned to deliver full quantum mechanical accuracy with a
minimal number of targeted calculations.
|
2503.14136 | Ankit Dutta | Ankit Dutta, Nabarup Ghosh, Ankush Chatterjee | CARE: A QLoRA-Fine Tuned Multi-Domain Chatbot With Fast Learning On
Minimal Hardware | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language models have demonstrated excellent domain-specific
question-answering capabilities when finetuned with a particular dataset of
that specific domain. However, fine-tuning the models requires a significant
amount of training time and a considerable amount of hardware. In this work, we
propose CARE (Customer Assistance and Response Engine), a lightweight model
made by fine-tuning Phi3.5-mini on very minimal hardware and data, designed to
handle queries primarily across three domains: telecommunications support,
medical support, and banking support. For telecommunications and banking, the
chatbot addresses issues and problems faced by customers regularly in the
above-mentioned domains. In the medical domain, CARE provides preliminary
support by offering basic diagnoses and medical suggestions that a user might
take before consulting a healthcare professional. Since CARE is built on
Phi3.5-mini, it can be used even on mobile devices, increasing its usability.
Our research also shows that CARE performs relatively well on various medical
benchmarks, indicating that it can be used to make basic medical suggestions.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 10:58:10 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Dutta",
"Ankit",
""
],
[
"Ghosh",
"Nabarup",
""
],
[
"Chatterjee",
"Ankush",
""
]
] | TITLE: CARE: A QLoRA-Fine Tuned Multi-Domain Chatbot With Fast Learning On
Minimal Hardware
ABSTRACT: Large Language models have demonstrated excellent domain-specific
question-answering capabilities when finetuned with a particular dataset of
that specific domain. However, fine-tuning the models requires a significant
amount of training time and a considerable amount of hardware. In this work, we
propose CARE (Customer Assistance and Response Engine), a lightweight model
made by fine-tuning Phi3.5-mini on very minimal hardware and data, designed to
handle queries primarily across three domains: telecommunications support,
medical support, and banking support. For telecommunications and banking, the
chatbot addresses issues and problems faced by customers regularly in the
above-mentioned domains. In the medical domain, CARE provides preliminary
support by offering basic diagnoses and medical suggestions that a user might
take before consulting a healthcare professional. Since CARE is built on
Phi3.5-mini, it can be used even on mobile devices, increasing its usability.
Our research also shows that CARE performs relatively well on various medical
benchmarks, indicating that it can be used to make basic medical suggestions.
|
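The abstract does not list CARE's training hyperparameters, but a typical QLoRA setup for Phi3.5-mini with Hugging Face transformers/peft/bitsandbytes looks roughly like the sketch below; the rank, alpha, and target modules are assumptions, not the paper's settings.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct", quantization_config=bnb,
)

# Small trainable low-rank adapters; r/alpha/targets are illustrative.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the base
```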
2503.14138 | Siddharth Jaiswal | Siddharth D Jaiswal, Sagnik Basu, Sandipan Sikdar, Animesh Mukherjee | Exploring Disparity-Accuracy Trade-offs in Face Recognition Systems: The
Role of Datasets, Architectures, and Loss Functions | This work has been accepted for publication at AAAI ICWSM 2025 | null | null | null | cs.CV cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Automated Face Recognition Systems (FRSs), developed using deep learning
models, are deployed worldwide for identity verification and facial attribute
analysis. The performance of these models is determined by a complex
interdependence among the model architecture, optimization/loss function and
datasets. Although FRSs have surpassed human-level accuracy, they continue to
be disparate against certain demographics. Due to the ubiquity of applications,
it is extremely important to understand the impact of the three components --
model architecture, loss function and face image dataset on the
accuracy-disparity trade-off to design better, unbiased platforms. In this
work, we perform an in-depth analysis of three FRSs for the task of gender
prediction, with various architectural modifications resulting in ten
deep-learning models coupled with four loss functions and benchmark them on
seven face datasets across 266 evaluation configurations. Our results show that
all three components have an individual as well as a combined impact on both
accuracy and disparity. We identify that datasets have an inherent property
that causes them to perform similarly across models, independent of the choice
of loss functions. Moreover, the choice of dataset determines the model's
perceived bias -- the same model reports bias in opposite directions for three
gender-balanced datasets of ``in-the-wild'' face images of popular individuals.
Studying the facial embeddings shows that the models are unable to generalize a
uniform definition of what constitutes a ``female face'' as opposed to a ``male
face'', due to dataset diversity. We provide recommendations to model
developers on using our study as a blueprint for model development and
subsequent deployment.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:04:57 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jaiswal",
"Siddharth D",
""
],
[
"Basu",
"Sagnik",
""
],
[
"Sikdar",
"Sandipan",
""
],
[
"Mukherjee",
"Animesh",
""
]
] | TITLE: Exploring Disparity-Accuracy Trade-offs in Face Recognition Systems: The
Role of Datasets, Architectures, and Loss Functions
ABSTRACT: Automated Face Recognition Systems (FRSs), developed using deep learning
models, are deployed worldwide for identity verification and facial attribute
analysis. The performance of these models is determined by a complex
interdependence among the model architecture, optimization/loss function and
datasets. Although FRSs have surpassed human-level accuracy, they continue to
be disparate against certain demographics. Due to the ubiquity of applications,
it is extremely important to understand the impact of the three components --
model architecture, loss function and face image dataset on the
accuracy-disparity trade-off to design better, unbiased platforms. In this
work, we perform an in-depth analysis of three FRSs for the task of gender
prediction, with various architectural modifications resulting in ten
deep-learning models coupled with four loss functions and benchmark them on
seven face datasets across 266 evaluation configurations. Our results show that
all three components have an individual as well as a combined impact on both
accuracy and disparity. We identify that datasets have an inherent property
that causes them to perform similarly across models, independent of the choice
of loss functions. Moreover, the choice of dataset determines the model's
perceived bias -- the same model reports bias in opposite directions for three
gender-balanced datasets of ``in-the-wild'' face images of popular individuals.
Studying the facial embeddings shows that the models are unable to generalize a
uniform definition of what constitutes a ``female face'' as opposed to a ``male
face'', due to dataset diversity. We provide recommendations to model
developers on using our study as a blueprint for model development and
subsequent deployment.
|
2503.14140 | Zining Wang | Zining Wang, Tongkun Guan, Pei Fu, Chen Duan, Qianyi Jiang, Zhentao
Guo, Shan Guo, Junfeng Luo, Wei Shen, Xiaokang Yang | Marten: Visual Question Answering with Mask Generation for Multi-modal
Document Understanding | Accepted by CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-modal Large Language Models (MLLMs) have introduced a novel dimension
to document understanding, i.e., they endow large language models with visual
comprehension capabilities; however, how to design a suitable image-text
pre-training task for bridging the visual and language modality in
document-level MLLMs remains underexplored. In this study, we introduce a novel
visual-language alignment method that casts the key issue as a Visual Question
Answering with Mask generation (VQAMask) task, optimizing two tasks
simultaneously: VQA-based text parsing and mask generation. The former allows
the model to implicitly align images and text at the semantic level. The latter
introduces an additional mask generator (discarded during inference) to
explicitly ensure alignment between visual texts within images and their
corresponding image regions at a spatially-aware level. Together, they can
prevent model hallucinations when parsing visual text and effectively promote
spatially-aware feature representation learning. To support the proposed
VQAMask task, we construct a comprehensive image-mask generation pipeline and
provide a large-scale dataset with 6M samples (MTMask6M). Subsequently, we
demonstrate that introducing the proposed mask generation task yields
competitive document-level understanding performance. Leveraging the proposed
VQAMask, we introduce Marten, a training-efficient MLLM tailored for
document-level understanding. Extensive experiments show that our Marten
consistently achieves significant improvements among 8B-MLLMs in
document-centric tasks. Code and datasets are available at
https://github.com/PriNing/Marten.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:07:14 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Zining",
""
],
[
"Guan",
"Tongkun",
""
],
[
"Fu",
"Pei",
""
],
[
"Duan",
"Chen",
""
],
[
"Jiang",
"Qianyi",
""
],
[
"Guo",
"Zhentao",
""
],
[
"Guo",
"Shan",
""
],
[
"Luo",
"Junfeng",
""
],
[
"Shen",
"Wei",
""
],
[
"Yang",
"Xiaokang",
""
]
] | TITLE: Marten: Visual Question Answering with Mask Generation for Multi-modal
Document Understanding
ABSTRACT: Multi-modal Large Language Models (MLLMs) have introduced a novel dimension
to document understanding, i.e., they endow large language models with visual
comprehension capabilities; however, how to design a suitable image-text
pre-training task for bridging the visual and language modality in
document-level MLLMs remains underexplored. In this study, we introduce a novel
visual-language alignment method that casts the key issue as a Visual Question
Answering with Mask generation (VQAMask) task, optimizing two tasks
simultaneously: VQA-based text parsing and mask generation. The former allows
the model to implicitly align images and text at the semantic level. The latter
introduces an additional mask generator (discarded during inference) to
explicitly ensure alignment between visual texts within images and their
corresponding image regions at a spatially-aware level. Together, they can
prevent model hallucinations when parsing visual text and effectively promote
spatially-aware feature representation learning. To support the proposed
VQAMask task, we construct a comprehensive image-mask generation pipeline and
provide a large-scale dataset with 6M samples (MTMask6M). Subsequently, we
demonstrate that introducing the proposed mask generation task yields
competitive document-level understanding performance. Leveraging the proposed
VQAMask, we introduce Marten, a training-efficient MLLM tailored for
document-level understanding. Extensive experiments show that our Marten
consistently achieves significant improvements among 8B-MLLMs in
document-centric tasks. Code and datasets are available at
https://github.com/PriNing/Marten.
|
2503.14150 | Yihang Zhou | Yihang Zhou, Ruige Kong, Zhengsen Xu, Linlin Xu, Sibo Cheng | Comparative and Interpretative Analysis of CNN and Transformer Models in
Predicting Wildfire Spread Using Remote Sensing Data | null | null | 10.1029/2024JH000409 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Facing the escalating threat of global wildfires, numerous computer vision
techniques using remote sensing data have been applied in this area. However,
the selection of deep learning methods for wildfire prediction remains
uncertain due to the lack of comparative analysis in a quantitative and
explainable manner, crucial for improving prevention measures and refining
models. This study aims to thoroughly compare the performance, efficiency, and
explainability of four prevalent deep learning architectures: Autoencoder,
ResNet, UNet, and Transformer-based Swin-UNet. Employing a real-world dataset
that includes nearly a decade of remote sensing data from California, U.S.,
these models predict the spread of wildfires for the following day. Through
detailed quantitative comparison analysis, we discovered that Transformer-based
Swin-UNet and UNet generally outperform Autoencoder and ResNet, particularly
due to the advanced attention mechanisms in Transformer-based Swin-UNet and the
efficient use of skip connections in both UNet and Transformer-based Swin-UNet,
which contribute to superior predictive accuracy and model interpretability.
We then applied XAI techniques to all four models; this not only enhances the
clarity and trustworthiness of the models but also promotes focused improvements in
wildfire prediction capabilities. The XAI analysis reveals that UNet and
Transformer-based Swin-UNet are able to focus on critical features such as
'Previous Fire Mask', 'Drought', and 'Vegetation' more effectively than the
other two models, while also maintaining balanced attention to the remaining
features, leading to their superior performance. The insights from our thorough
comparative analysis offer substantial implications for future model design and
also provide guidance for model selection in different scenarios.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:16:48 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhou",
"Yihang",
""
],
[
"Kong",
"Ruige",
""
],
[
"Xu",
"Zhengsen",
""
],
[
"Xu",
"Linlin",
""
],
[
"Cheng",
"Sibo",
""
]
] | TITLE: Comparative and Interpretative Analysis of CNN and Transformer Models in
Predicting Wildfire Spread Using Remote Sensing Data
ABSTRACT: Facing the escalating threat of global wildfires, numerous computer vision
techniques using remote sensing data have been applied in this area. However,
the selection of deep learning methods for wildfire prediction remains
uncertain due to the lack of comparative analysis in a quantitative and
explainable manner, crucial for improving prevention measures and refining
models. This study aims to thoroughly compare the performance, efficiency, and
explainability of four prevalent deep learning architectures: Autoencoder,
ResNet, UNet, and Transformer-based Swin-UNet. Employing a real-world dataset
that includes nearly a decade of remote sensing data from California, U.S.,
these models predict the spread of wildfires for the following day. Through
detailed quantitative comparison analysis, we discovered that Transformer-based
Swin-UNet and UNet generally outperform Autoencoder and ResNet, particularly
due to the advanced attention mechanisms in Transformer-based Swin-UNet and the
efficient use of skip connections in both UNet and Transformer-based Swin-UNet,
which contribute to superior predictive accuracy and model interpretability.
We then applied XAI techniques to all four models; this not only enhances the
clarity and trustworthiness of the models but also promotes focused improvements in
wildfire prediction capabilities. The XAI analysis reveals that UNet and
Transformer-based Swin-UNet are able to focus on critical features such as
'Previous Fire Mask', 'Drought', and 'Vegetation' more effectively than the
other two models, while also maintaining balanced attention to the remaining
features, leading to their superior performance. The insights from our thorough
comparative analysis offer substantial implications for future model design and
also provide guidance for model selection in different scenarios.
|
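A minimal example of the gradient-based feature attribution that XAI analyses of this kind rely on; the toy model and channel interpretation are stand-ins, not the study's models or data:

```python
import torch
import torch.nn as nn

# Toy stand-in for one compared model; input channels mimic features like
# "previous fire mask", "drought", "vegetation" (names from the abstract).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
x = torch.rand(1, 3, 32, 32, requires_grad=True)
model(x).sum().backward()

# Per-channel saliency: mean |d(output)/d(input)| over space, a crude
# score of how strongly each input feature drives the predicted fire map.
print(x.grad.abs().mean(dim=(0, 2, 3)))
```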
2503.14153 | Changran Xu | Changran Xu, Yi Liu, Yunhao Zhou, Shan Huang, Ningyi Xu, Qiang Xu | Speculative Decoding for Verilog: Speed and Quality, All in One | Accepted by the 62nd Design Automation Conference (DAC 2025) | null | null | null | cs.LG cs.AR cs.CL | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement of large language models (LLMs) has revolutionized code
generation tasks across various programming languages. However, the unique
characteristics of programming languages, particularly those like Verilog with
specific syntax and lower representation in training datasets, pose significant
challenges for conventional tokenization and decoding approaches. In this
paper, we introduce a novel application of speculative decoding for Verilog
code generation, showing that it can improve both inference speed and output
quality, effectively achieving speed and quality all in one. Unlike standard
LLM tokenization schemes, which often fragment meaningful code structures, our
approach aligns decoding stops with syntactically significant tokens, making it
easier for models to learn the token distribution. This refinement addresses
inherent tokenization issues and enhances the model's ability to capture
Verilog's logical constructs more effectively. Our experimental results show
that our method achieves up to a 5.05x speedup in Verilog code generation and
increases pass@10 functional accuracy on RTLLM by up to 17.19% compared to
conventional training strategies. These findings highlight speculative decoding
as a promising approach to bridge the quality gap in code generation for
specialized programming languages.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:21:53 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xu",
"Changran",
""
],
[
"Liu",
"Yi",
""
],
[
"Zhou",
"Yunhao",
""
],
[
"Huang",
"Shan",
""
],
[
"Xu",
"Ningyi",
""
],
[
"Xu",
"Qiang",
""
]
] | TITLE: Speculative Decoding for Verilog: Speed and Quality, All in One
ABSTRACT: The rapid advancement of large language models (LLMs) has revolutionized code
generation tasks across various programming languages. However, the unique
characteristics of programming languages, particularly those like Verilog with
specific syntax and lower representation in training datasets, pose significant
challenges for conventional tokenization and decoding approaches. In this
paper, we introduce a novel application of speculative decoding for Verilog
code generation, showing that it can improve both inference speed and output
quality, effectively achieving speed and quality all in one. Unlike standard
LLM tokenization schemes, which often fragment meaningful code structures, our
approach aligns decoding stops with syntactically significant tokens, making it
easier for models to learn the token distribution. This refinement addresses
inherent tokenization issues and enhances the model's ability to capture
Verilog's logical constructs more effectively. Our experimental results show
that our method achieves up to a 5.05x speedup in Verilog code generation and
increases pass@10 functional accuracy on RTLLM by up to 17.19% compared to
conventional training strategies. These findings highlight speculative decoding
as a promising approach to bridge the quality gap in code generation for
specialized programming languages.
|
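For context, the generic draft-and-verify loop that "speculative decoding" refers to can be sketched as below (greedy variant, toy stand-in models). The paper's actual contribution, the Verilog-aware tokenization and stop alignment, is not shown here.

```python
def speculative_decode(target_next, draft_next, prompt, k=4, steps=16):
    """Greedy speculative decoding: a cheap draft model proposes k tokens,
    the target model verifies them, and the longest agreeing prefix is
    accepted plus one corrected token at the first disagreement. A real
    implementation verifies all k positions in one target forward pass."""
    seq = list(prompt)
    while len(seq) < len(prompt) + steps:
        proposal, ctx = [], list(seq)
        for _ in range(k):                  # draft k tokens cheaply
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        for t in proposal:                  # verify with the target model
            expected = target_next(seq)
            if t == expected:
                seq.append(t)               # accepted draft token
            else:
                seq.append(expected)        # corrected token; stop here
                break
    return seq

# Toy stand-ins over a 7-token vocabulary; the draft is sometimes wrong.
target = lambda s: (len(s) * 3) % 7
draft = lambda s: (len(s) * 3) % 7 if len(s) % 5 else 0
print(speculative_decode(target, draft, [1, 2], k=4, steps=10))
```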
2503.14154 | Zhang Chen | Zhang Chen, Shuai Wan, Siyu Ren, Fuzheng Yang, Mengting Yu, and Junhui
Hou | RBFIM: Perceptual Quality Assessment for Compressed Point Clouds Using
Radial Basis Function Interpolation | null | null | null | null | cs.CV cs.MM eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | One of the main challenges in point cloud compression (PCC) is how to
evaluate the perceived distortion so that the codec can be optimized for
perceptual quality. Current standard practices in PCC highlight a primary
issue: while single-feature metrics are widely used to assess compression
distortion, the classic method of searching point-to-point nearest neighbors
frequently fails to adequately build precise correspondences between point
clouds, resulting in an ineffective capture of human perceptual features. To
overcome the related limitations, we propose a novel assessment method called
RBFIM, utilizing radial basis function (RBF) interpolation to convert discrete
point features into a continuous feature function for the distorted point
cloud. By substituting the geometry coordinates of the original point cloud
into the feature function, we obtain the bijective sets of point features. This
enables an establishment of precise corresponding features between distorted
and original point clouds and significantly improves the accuracy of quality
assessments. Moreover, this method avoids the complexity caused by
bidirectional searches. Extensive experiments on multiple subjective quality
datasets of compressed point clouds demonstrate that our RBFIM excels in
addressing human perception tasks, thereby providing robust support for PCC
optimization efforts.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:25:55 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Zhang",
""
],
[
"Wan",
"Shuai",
""
],
[
"Ren",
"Siyu",
""
],
[
"Yang",
"Fuzheng",
""
],
[
"Yu",
"Mengting",
""
],
[
"Hou",
"Junhui",
""
]
] | TITLE: RBFIM: Perceptual Quality Assessment for Compressed Point Clouds Using
Radial Basis Function Interpolation
ABSTRACT: One of the main challenges in point cloud compression (PCC) is how to
evaluate the perceived distortion so that the codec can be optimized for
perceptual quality. Current standard practices in PCC highlight a primary
issue: while single-feature metrics are widely used to assess compression
distortion, the classic method of searching point-to-point nearest neighbors
frequently fails to adequately build precise correspondences between point
clouds, resulting in an ineffective capture of human perceptual features. To
overcome the related limitations, we propose a novel assessment method called
RBFIM, utilizing radial basis function (RBF) interpolation to convert discrete
point features into a continuous feature function for the distorted point
cloud. By substituting the geometry coordinates of the original point cloud
into the feature function, we obtain the bijective sets of point features. This
enables an establishment of precise corresponding features between distorted
and original point clouds and significantly improves the accuracy of quality
assessments. Moreover, this method avoids the complexity caused by
bidirectional searches. Extensive experiments on multiple subjective quality
datasets of compressed point clouds demonstrate that our RBFIM excels in
addressing human perception tasks, thereby providing robust support for PCC
optimization efforts.
|
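The core interpolation step the abstract describes maps directly onto SciPy's RBFInterpolator. The sketch below is an illustration under stated assumptions (random data, arbitrary kernel and neighbor count), not the authors' code:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
distorted_xyz = rng.uniform(size=(500, 3))    # distorted point cloud coords
distorted_feat = rng.uniform(size=(500, 8))   # per-point features
original_xyz = distorted_xyz + rng.normal(0, 0.01, size=(500, 3))

# Fit a continuous feature function on the distorted cloud, then evaluate
# it at the original cloud's coordinates: each original point gets a
# matched feature without any point-to-point nearest-neighbor search.
feature_fn = RBFInterpolator(distorted_xyz, distorted_feat,
                             neighbors=32, kernel="thin_plate_spline")
matched_feat = feature_fn(original_xyz)       # shape (500, 8)
```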
2503.14162 | Zongyun Zhang | Zongyun Zhang, Jiacheng Ruan, Xian Gao, Ting Liu, Yuzhuo Fu | EIAD: Explainable Industrial Anomaly Detection Via Multi-Modal Large
Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Industrial Anomaly Detection (IAD) is critical to ensure product quality
during manufacturing. Although existing zero-shot defect segmentation and
detection methods have shown effectiveness, they cannot provide detailed
descriptions of the defects. Furthermore, the application of large multi-modal
models in IAD remains in its infancy, facing challenges in balancing
question-answering (QA) performance and mask-based grounding capabilities,
often owing to overfitting during the fine-tuning process. To address these
challenges, we propose a novel approach that introduces a dedicated multi-modal
defect localization module to decouple the dialog functionality from the core
feature extraction. This decoupling is achieved through independent
optimization objectives and tailored learning strategies. Additionally, we
contribute the first multi-modal industrial anomaly detection training
dataset, named Defect Detection Question Answering (DDQA), encompassing a wide
range of defect types and industrial scenarios. Unlike conventional datasets
that rely on GPT-generated data, DDQA ensures authenticity and reliability and
offers a robust foundation for model training. Experimental results demonstrate
that our proposed method, Explainable Industrial Anomaly Detection Assistant
(EIAD), achieves outstanding performance in defect detection and localization
tasks. It not only significantly enhances accuracy but also improves
interpretability. These advancements highlight the potential of EIAD for
practical applications in industrial settings.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:33:29 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Zongyun",
""
],
[
"Ruan",
"Jiacheng",
""
],
[
"Gao",
"Xian",
""
],
[
"Liu",
"Ting",
""
],
[
"Fu",
"Yuzhuo",
""
]
] | TITLE: EIAD: Explainable Industrial Anomaly Detection Via Multi-Modal Large
Language Models
ABSTRACT: Industrial Anomaly Detection (IAD) is critical to ensure product quality
during manufacturing. Although existing zero-shot defect segmentation and
detection methods have shown effectiveness, they cannot provide detailed
descriptions of the defects. Furthermore, the application of large multi-modal
models in IAD remains in its infancy, facing challenges in balancing
question-answering (QA) performance and mask-based grounding capabilities,
often owing to overfitting during the fine-tuning process. To address these
challenges, we propose a novel approach that introduces a dedicated multi-modal
defect localization module to decouple the dialog functionality from the core
feature extraction. This decoupling is achieved through independent
optimization objectives and tailored learning strategies. Additionally, we
contribute the first multi-modal industrial anomaly detection training
dataset, named Defect Detection Question Answering (DDQA), encompassing a wide
range of defect types and industrial scenarios. Unlike conventional datasets
that rely on GPT-generated data, DDQA ensures authenticity and reliability and
offers a robust foundation for model training. Experimental results demonstrate
that our proposed method, Explainable Industrial Anomaly Detection Assistant
(EIAD), achieves outstanding performance in defect detection and localization
tasks. It not only significantly enhances accuracy but also improves
interpretability. These advancements highlight the potential of EIAD for
practical applications in industrial settings.
|
2503.14167 | Christian Poelitz | Christian Poelitz, Nick McKenna | Synthetic Clarification and Correction Dialogues about Data-Centric
Tasks -- A Teacher-Student Approach | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Real dialogues with AI assistants for solving data-centric tasks often follow
dynamic, unpredictable paths due to imperfect information provided by the user
or in the data, which must be caught and handled. Developing datasets which
capture such user-AI interactions is difficult and time-consuming. In this
work, we develop a novel framework for synthetically generating controlled,
multi-turn conversations between a user and AI assistant for the task of
table-based question answering, which can be generated from an existing dataset
with fully specified table QA examples for any target domain. Each conversation
aims to solve a table-based reasoning question through collaborative effort,
modeling one of two real-world scenarios: (1) an AI-initiated clarification, or
(2) a user-initiated correction. Critically, we employ a strong teacher LLM to
verify the correctness of our synthetic conversations, ensuring high quality.
We demonstrate synthetic datasets generated from TAT-QA and WikiTableQuestions
as benchmarks of frontier LLMs. We find that even larger models struggle to
effectively issue clarification questions and accurately integrate user
feedback for corrections.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:37:25 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Poelitz",
"Christian",
""
],
[
"McKenna",
"Nick",
""
]
] | TITLE: Synthetic Clarification and Correction Dialogues about Data-Centric
Tasks -- A Teacher-Student Approach
ABSTRACT: Real dialogues with AI assistants for solving data-centric tasks often follow
dynamic, unpredictable paths due to imperfect information provided by the user
or in the data, which must be caught and handled. Developing datasets which
capture such user-AI interactions is difficult and time-consuming. In this
work, we develop a novel framework for synthetically generating controlled,
multi-turn conversations between a user and AI assistant for the task of
table-based question answering, which can be generated from an existing dataset
with fully specified table QA examples for any target domain. Each conversation
aims to solve a table-based reasoning question through collaborative effort,
modeling one of two real-world scenarios: (1) an AI-initiated clarification, or
(2) a user-initiated correction. Critically, we employ a strong teacher LLM to
verify the correctness of our synthetic conversations, ensuring high quality.
We demonstrate synthetic datasets generated from TAT-QA and WikiTableQuestions
as benchmarks of frontier LLMs. We find that even larger models struggle to
effectively issue clarification questions and accurately integrate user
feedback for corrections.
|
2503.14171 | Simon Niedermayr | Simon Niedermayr, Christoph Neuhauser, R\"udiger Westermann | Lightweight Gradient-Aware Upscaling of 3D Gaussian Splatting Images | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | We introduce an image upscaling technique tailored for 3D Gaussian Splatting
(3DGS) on lightweight GPUs. Compared to 3DGS, it achieves significantly higher
rendering speeds and reduces artifacts commonly observed in 3DGS
reconstructions. Our technique upscales low-resolution 3DGS renderings with a
marginal increase in cost by directly leveraging the analytical image gradients
of Gaussians for gradient-based bicubic spline interpolation. The technique is
agnostic to the specific 3DGS implementation, achieving novel view synthesis at
rates 3x-4x higher than the baseline implementation. Through extensive
experiments on multiple datasets, we showcase the performance improvements and
high reconstruction fidelity attainable with gradient-aware upscaling of 3DGS
images. We further demonstrate the integration of gradient-aware upscaling into
the gradient-based optimization of a 3DGS model and analyze its effects on
reconstruction quality and performance.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:42:52 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Niedermayr",
"Simon",
""
],
[
"Neuhauser",
"Christoph",
""
],
[
"Westermann",
"Rüdiger",
""
]
] | TITLE: Lightweight Gradient-Aware Upscaling of 3D Gaussian Splatting Images
ABSTRACT: We introduce an image upscaling technique tailored for 3D Gaussian Splatting
(3DGS) on lightweight GPUs. Compared to 3DGS, it achieves significantly higher
rendering speeds and reduces artifacts commonly observed in 3DGS
reconstructions. Our technique upscales low-resolution 3DGS renderings with a
marginal increase in cost by directly leveraging the analytical image gradients
of Gaussians for gradient-based bicubic spline interpolation. The technique is
agnostic to the specific 3DGS implementation, achieving novel view synthesis at
rates 3x-4x higher than the baseline implementation. Through extensive
experiments on multiple datasets, we showcase the performance improvements and
high reconstruction fidelity attainable with gradient-aware upscaling of 3DGS
images. We further demonstrate the integration of gradient-aware upscaling into
the gradient-based optimization of a 3DGS model and analyze its effects on
reconstruction quality and performance.
|
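The gradient-based interpolation idea above can be illustrated in one dimension: cubic Hermite splines consume both sample values and analytical derivatives, avoiding the finite-difference estimates that ordinary bicubic upscaling relies on. A minimal NumPy sketch, not the paper's 3DGS-integrated implementation:

```python
import numpy as np

def hermite_upsample_1d(values, grads, factor=4):
    """Upsample a 1D signal with cubic Hermite splines, using analytical
    gradients at the sample points instead of finite differences.
    `grads` must be derivatives w.r.t. the local parameter t in [0, 1],
    i.e. true derivatives scaled by the knot spacing."""
    n = len(values)
    xs = np.linspace(0, n - 1, (n - 1) * factor + 1)
    out = np.empty_like(xs)
    for k, x in enumerate(xs):
        i = min(int(x), n - 2)        # left knot index
        t = x - i                     # local coordinate in [0, 1]
        h00 = 2 * t**3 - 3 * t**2 + 1  # Hermite basis functions
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        out[k] = (h00 * values[i] + h10 * grads[i]
                  + h01 * values[i + 1] + h11 * grads[i + 1])
    return out

# Toy check: samples of sin(x) plus its exact derivative cos(x),
# scaled by the sample spacing as required above.
x = np.linspace(0, 2 * np.pi, 16)
hi_res = hermite_upsample_1d(np.sin(x), np.cos(x) * (x[1] - x[0]))
```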
2503.14173 | Raul Quijada | Guillem Cadevall Ferreres, Marc Serrano Sanz, Marc Bardeli G\'amez,
Pol Gerdt Basullas, Francesc Tarres Ruiz, Raul Quijada Ferrero | NERCat: Fine-Tuning for Enhanced Named Entity Recognition in Catalan | 7 pages, 1 table | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Named Entity Recognition (NER) is a critical component of Natural Language
Processing (NLP) for extracting structured information from unstructured text.
However, for low-resource languages like Catalan, the performance of NER
systems often suffers due to the lack of high-quality annotated datasets. This
paper introduces NERCat, a fine-tuned version of the GLiNER[1] model, designed
to improve NER performance specifically for Catalan text. We used a dataset of
manually annotated Catalan television transcriptions to train and fine-tune the
model, focusing on domains such as politics, sports, and culture. The
evaluation results show significant improvements in precision, recall, and
F1-score, particularly for underrepresented named entity categories such as
Law, Product, and Facility. This study demonstrates the effectiveness of
domain-specific fine-tuning in low-resource languages and highlights the
potential for enhancing Catalan NLP applications through manual annotation and
high-quality datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:44:19 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ferreres",
"Guillem Cadevall",
""
],
[
"Sanz",
"Marc Serrano",
""
],
[
"Gámez",
"Marc Bardeli",
""
],
[
"Basullas",
"Pol Gerdt",
""
],
[
"Ruiz",
"Francesc Tarres",
""
],
[
"Ferrero",
"Raul Quijada",
""
]
] | TITLE: NERCat: Fine-Tuning for Enhanced Named Entity Recognition in Catalan
ABSTRACT: Named Entity Recognition (NER) is a critical component of Natural Language
Processing (NLP) for extracting structured information from unstructured text.
However, for low-resource languages like Catalan, the performance of NER
systems often suffers due to the lack of high-quality annotated datasets. This
paper introduces NERCat, a fine-tuned version of the GLiNER[1] model, designed
to improve NER performance specifically for Catalan text. We used a dataset of
manually annotated Catalan television transcriptions to train and fine-tune the
model, focusing on domains such as politics, sports, and culture. The
evaluation results show significant improvements in precision, recall, and
F1-score, particularly for underrepresented named entity categories such as
Law, Product, and Facility. This study demonstrates the effectiveness of
domain-specific fine-tuning in low-resource languages and highlights the
potential for enhancing Catalan NLP applications through manual annotation and
high-quality datasets.
|
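For context, GLiNER exposes a simple prompt-label NER interface that fine-tuned variants such as NERCat build on. A minimal inference sketch assuming the open-source `gliner` package; the multilingual checkpoint name is illustrative, and NERCat's own weights would be substituted if released:

```python
# pip install gliner
from gliner import GLiNER

# Multilingual base checkpoint (assumed name); NERCat fine-tunes this family
# on manually annotated Catalan transcriptions.
model = GLiNER.from_pretrained("urchade/gliner_multi")
labels = ["person", "organization", "law", "product", "facility"]

text = "El Parlament de Catalunya va aprovar la llei al Palau de la Generalitat."
for ent in model.predict_entities(text, labels, threshold=0.5):
    print(ent["text"], "->", ent["label"], f"({ent['score']:.2f})")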
2503.14182 | Bozhou Zhang | Bozhou Zhang, Nan Song, Xin Jin, Li Zhang | Bridging Past and Future: End-to-End Autonomous Driving with Historical
Prediction and Planning | CVPR 2025 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end autonomous driving unifies tasks in a differentiable framework,
enabling planning-oriented optimization and attracting growing attention.
Current methods aggregate historical information either through dense
historical bird's-eye-view (BEV) features or by querying a sparse memory bank,
following paradigms inherited from detection. However, we argue that these
paradigms either omit historical information in motion planning or fail to
align with its multi-step nature, which requires predicting or planning
multiple future time steps. In line with the philosophy that the future is a
continuation of the past, we propose BridgeAD, which reformulates motion and
planning queries as multi-step queries to differentiate the queries for each
future time step. This design enables the effective use of historical
prediction and planning by applying them to the appropriate parts of the
end-to-end system based on the time steps, which improves both perception and
motion planning. Specifically, historical queries for the current frame are
combined with perception, while queries for future frames are integrated with
motion planning. In this way, we bridge the gap between past and future by
aggregating historical insights at every time step, enhancing the overall
coherence and accuracy of the end-to-end autonomous driving pipeline. Extensive
experiments on the nuScenes dataset in both open-loop and closed-loop settings
demonstrate that BridgeAD achieves state-of-the-art performance.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:57:31 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Bozhou",
""
],
[
"Song",
"Nan",
""
],
[
"Jin",
"Xin",
""
],
[
"Zhang",
"Li",
""
]
] | TITLE: Bridging Past and Future: End-to-End Autonomous Driving with Historical
Prediction and Planning
ABSTRACT: End-to-end autonomous driving unifies tasks in a differentiable framework,
enabling planning-oriented optimization and attracting growing attention.
Current methods aggregate historical information either through dense
historical bird's-eye-view (BEV) features or by querying a sparse memory bank,
following paradigms inherited from detection. However, we argue that these
paradigms either omit historical information in motion planning or fail to
align with its multi-step nature, which requires predicting or planning
multiple future time steps. In line with the philosophy that the future is a
continuation of the past, we propose BridgeAD, which reformulates motion and
planning queries as multi-step queries to differentiate the queries for each
future time step. This design enables the effective use of historical
prediction and planning by applying them to the appropriate parts of the
end-to-end system based on the time steps, which improves both perception and
motion planning. Specifically, historical queries for the current frame are
combined with perception, while queries for future frames are integrated with
motion planning. In this way, we bridge the gap between past and future by
aggregating historical insights at every time step, enhancing the overall
coherence and accuracy of the end-to-end autonomous driving pipeline. Extensive
experiments on the nuScenes dataset in both open-loop and closed-loop settings
demonstrate that BridgeAD achieves state-of-the-art performance.
|
2503.14183 | Ekaterina Verbitskaia | Aleksandr Shefer, Igor Engel, Stanislav Alekseev, Daniil Berezun,
Ekaterina Verbitskaia, Anton Podkopaev | Can LLMs Enable Verification in Mainstream Programming? | null | null | null | null | cs.SE cs.AI cs.PL | http://creativecommons.org/licenses/by/4.0/ | Although formal methods are capable of producing reliable software, they have
seen minimal adoption in everyday programming. Automatic code generation using
large language models is becoming increasingly widespread, but it rarely
considers producing strong correctness guarantees. In this study, we explore
the ability of LLMs to produce verified code in three verification languages
(Dafny, Nagini, and Verus). To do so, we use manually curated datasets derived
from the state-of-the-art Python benchmark, HumanEval. We also assess what types
of information are sufficient to achieve good-quality results.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:58:00 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Shefer",
"Aleksandr",
""
],
[
"Engel",
"Igor",
""
],
[
"Alekseev",
"Stanislav",
""
],
[
"Berezun",
"Daniil",
""
],
[
"Verbitskaia",
"Ekaterina",
""
],
[
"Podkopaev",
"Anton",
""
]
] | TITLE: Can LLMs Enable Verification in Mainstream Programming?
ABSTRACT: Although formal methods are capable of producing reliable software, they have
seen minimal adoption in everyday programming. Automatic code generation using
large language models is becoming increasingly widespread, but it rarely
considers producing strong correctness guarantees. In this study, we explore
the ability of LLMs to produce verified code in three verification languages
(Dafny, Nagini, and Verus). To do so, we use manually curated datasets derived
from the state-of-the-art Python benchmark, HumanEval. We also assess what types
of information are sufficient to achieve good-quality results.
|
2503.14185 | Wuwei Huang | Wuwei Huang, Dexin Wang, Deyi Xiong | AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End
Speech-to-Text Translation | ACL 2021 Findings | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In end-to-end speech translation, acoustic representations learned by the
encoder are usually fixed and static, from the perspective of the decoder,
which is not desirable for dealing with the cross-modal and cross-lingual
challenge in speech translation. In this paper, we show the benefits of varying
acoustic states according to decoder hidden states and propose an adaptive
speech-to-text translation model that is able to dynamically adapt acoustic
states in the decoder. We concatenate the acoustic state and target word
embedding sequence and feed the concatenated sequence into subsequent blocks in
the decoder. In order to model the deep interaction between acoustic states and
target hidden states, a speech-text mixed attention sublayer is introduced to
replace the conventional cross-attention network. Experimental results on two
widely-used datasets show that the proposed method significantly outperforms
state-of-the-art neural speech translation models.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 11:59:27 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Huang",
"Wuwei",
""
],
[
"Wang",
"Dexin",
""
],
[
"Xiong",
"Deyi",
""
]
] | TITLE: AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End
Speech-to-Text Translation
ABSTRACT: In end-to-end speech translation, acoustic representations learned by the
encoder are usually fixed and static, from the perspective of the decoder,
which is not desirable for dealing with the cross-modal and cross-lingual
challenge in speech translation. In this paper, we show the benefits of varying
acoustic states according to decoder hidden states and propose an adaptive
speech-to-text translation model that is able to dynamically adapt acoustic
states in the decoder. We concatenate the acoustic state and target word
embedding sequence and feed the concatenated sequence into subsequent blocks in
the decoder. In order to model the deep interaction between acoustic states and
target hidden states, a speech-text mixed attention sublayer is introduced to
replace the conventional cross-attention network. Experimental results on two
widely-used datasets show that the proposed method significantly outperforms
state-of-the-art neural speech translation models.
|
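The speech-text mixed attention sublayer can be sketched as a single attention over the concatenation of acoustic states and target embeddings, with a causal mask restricted to the text positions. A minimal PyTorch illustration with assumed dimensions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class SpeechTextMixedAttention(nn.Module):
    """Each target position attends jointly over all acoustic frames and the
    preceding target embeddings, replacing separate self- and cross-attention."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, acoustic, text):              # (B, S, D), (B, T, D)
        S, T = acoustic.size(1), text.size(1)
        mixed = torch.cat([acoustic, text], dim=1)  # (B, S+T, D)
        # Text queries may see every acoustic frame but only past text tokens.
        mask = torch.zeros(T, S + T, dtype=torch.bool)
        mask[:, S:] = torch.triu(torch.ones(T, T, dtype=torch.bool), 1)
        out, _ = self.attn(text, mixed, mixed, attn_mask=mask)
        return out

layer = SpeechTextMixedAttention()
y = layer(torch.randn(2, 50, 256), torch.randn(2, 7, 256))  # -> (2, 7, 256)
```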
2503.14189 | Yongqi Li | Yongqi Li, Lu Yang, Jian Wang, Runyang You, Wenjie Li, Liqiang Nie | Towards Harmless Multimodal Assistants with Blind Preference
Optimization | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multimodal Large Language Models (MLLMs) have demonstrated impressive
capabilities in multimodal understanding, reasoning, and interaction. Given the
extensive applications of MLLMs, the associated safety issues have become
increasingly critical. Due to the effectiveness of preference optimization in
aligning MLLMs with human preferences, there is an urgent need for
safety-related preference data for MLLMs. To address this, we construct the
MMSafe-PO preference dataset towards harmless multimodal assistants, featuring
multimodal instructions, the conversational format, and ranked paired responses
from human feedback. We also identify two insightful observations: modality
co-defense and modality cheating, which illustrate that MLLMs possess a certain
level of inherent defense while still presenting unique safety challenges.
Based on these observations, we propose the Blind Preference Optimization (BPO)
approach. Comprehensive experiments on three benchmarks show that BPO
effectively enhances the safety capabilities of MLLMs. Notably, BPO
significantly improves the safety rate of the base MLLM by 45.0%, outperforming
the DPO approach. Additionally, applying BPO to the MMSafe-PO dataset greatly
reduces the base MLLM's unsafe rate on other safety benchmarks (14.5% on
MM-SafetyBench and 82.9% on HarmEval), demonstrating the effectiveness and
robustness of both the dataset and the approach. We release code and data at
https://lu-yang666.github.io/MMsafe-PO-Web/.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 12:02:38 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Li",
"Yongqi",
""
],
[
"Yang",
"Lu",
""
],
[
"Wang",
"Jian",
""
],
[
"You",
"Runyang",
""
],
[
"Li",
"Wenjie",
""
],
[
"Nie",
"Liqiang",
""
]
] | TITLE: Towards Harmless Multimodal Assistants with Blind Preference
Optimization
ABSTRACT: Multimodal Large Language Models (MLLMs) have demonstrated impressive
capabilities in multimodal understanding, reasoning, and interaction. Given the
extensive applications of MLLMs, the associated safety issues have become
increasingly critical. Due to the effectiveness of preference optimization in
aligning MLLMs with human preferences, there is an urgent need for
safety-related preference data for MLLMs. To address this, we construct the
MMSafe-PO preference dataset towards harmless multimodal assistants, featuring
multimodal instructions, the conversational format, and ranked paired responses
from human feedback. We also identify two insightful observations: modality
co-defense and modality cheating, which illustrate that MLLMs possess a certain
level of inherent defense while still presenting unique safety challenges.
Based on these observations, we propose the Blind Preference Optimization (BPO)
approach. Comprehensive experiments on three benchmarks show that BPO
effectively enhances the safety capabilities of MLLMs. Notably, BPO
significantly improves the safety rate of the base MLLM by 45.0%, outperforming
the DPO approach. Additionally, applying BPO to the MMSafe-PO dataset greatly
reduces the base MLLM's unsafe rate on other safety benchmarks (14.5% on
MM-SafetyBench and 82.9% on HarmEval), demonstrating the effectiveness and
robustness of both the dataset and the approach. We release code and data at
https://lu-yang666.github.io/MMsafe-PO-Web/.
|
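Since BPO's exact objective is not given in the abstract, the sketch below shows only the standard DPO preference loss that the paper uses as its baseline, operating on summed sequence log-probabilities from the policy and a frozen reference model:

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective on per-example sequence log-probs; the chosen
    response is pushed above the rejected one relative to the reference."""
    logits = beta * ((pi_chosen - pi_rejected) - (ref_chosen - ref_rejected))
    return -F.logsigmoid(logits).mean()

# Toy tensors standing in for summed token log-probabilities.
loss = dpo_loss(torch.tensor([-4.2]), torch.tensor([-6.1]),
                torch.tensor([-4.5]), torch.tensor([-5.9]))
```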
2503.14194 | Yilin Wang | Yilin Wang | Driving behavior recognition via self-discovery learning | 9 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous driving systems require a deep understanding of human driving
behaviors to achieve higher intelligence and safety. Despite advancements in
deep learning, challenges such as long-tail distribution due to scarce samples
and confusion from similar behaviors hinder effective driving behavior
detection. Existing methods often fail to address sample confusion adequately,
as datasets frequently contain ambiguous samples that obscure unique semantic
information.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 12:13:08 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Yilin",
""
]
] | TITLE: Driving behavior recognition via self-discovery learning
ABSTRACT: Autonomous driving systems require a deep understanding of human driving
behaviors to achieve higher intelligence and safety. Despite advancements in
deep learning, challenges such as long-tail distribution due to scarce samples
and confusion from similar behaviors hinder effective driving behavior
detection. Existing methods often fail to address sample confusion adequately,
as datasets frequently contain ambiguous samples that obscure unique semantic
information.
|
2503.14198 | Junjin Xiao | Junjin Xiao, Qing Zhang, Yonewei Nie, Lei Zhu, Wei-Shi Zheng | RoGSplat: Learning Robust Generalizable Human Gaussian Splatting from
Sparse Multi-View Images | Accepted to CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents RoGSplat, a novel approach for synthesizing high-fidelity
novel views of unseen humans from sparse multi-view images, while requiring no
cumbersome per-subject optimization. Unlike previous methods that typically
struggle with sparse views with little overlap and are less effective in
reconstructing complex human geometry, the proposed method enables robust
reconstruction in such challenging conditions. Our key idea is to lift SMPL
vertices to dense and reliable 3D prior points representing accurate human body
geometry, and then regress human Gaussian parameters based on the points. To
account for possible misalignment between SMPL model and images, we propose to
predict image-aligned 3D prior points by leveraging both pixel-level features
and voxel-level features, from which we regress the coarse Gaussians. To
enhance the ability to capture high-frequency details, we further render depth
maps from the coarse 3D Gaussians to help regress fine-grained pixel-wise
Gaussians. Experiments on several benchmark datasets demonstrate that our
method outperforms state-of-the-art methods in novel view synthesis and
cross-dataset generalization. Our code is available at
https://github.com/iSEE-Laboratory/RoGSplat.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 12:18:34 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xiao",
"Junjin",
""
],
[
"Zhang",
"Qing",
""
],
[
"Nie",
"Yonewei",
""
],
[
"Zhu",
"Lei",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] | TITLE: RoGSplat: Learning Robust Generalizable Human Gaussian Splatting from
Sparse Multi-View Images
ABSTRACT: This paper presents RoGSplat, a novel approach for synthesizing high-fidelity
novel views of unseen humans from sparse multi-view images, while requiring no
cumbersome per-subject optimization. Unlike previous methods that typically
struggle with sparse views with little overlap and are less effective in
reconstructing complex human geometry, the proposed method enables robust
reconstruction in such challenging conditions. Our key idea is to lift SMPL
vertices to dense and reliable 3D prior points representing accurate human body
geometry, and then regress human Gaussian parameters based on the points. To
account for possible misalignment between SMPL model and images, we propose to
predict image-aligned 3D prior points by leveraging both pixel-level features
and voxel-level features, from which we regress the coarse Gaussians. To
enhance the ability to capture high-frequency details, we further render depth
maps from the coarse 3D Gaussians to help regress fine-grained pixel-wise
Gaussians. Experiments on several benchmark datasets demonstrate that our
method outperforms state-of-the-art methods in novel view synthesis and
cross-dataset generalization. Our code is available at
https://github.com/iSEE-Laboratory/RoGSplat.
|
2503.14201 | Alberto Martin-Lopez | Alessandro Giagnorio, Alberto Martin-Lopez, Gabriele Bavota | Why Personalizing Deep Learning-Based Code Completion Tools Matters | Accepted for publication at ACM TOSEM | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning (DL)-based code completion tools have transformed software
development by enabling advanced code generation. These tools leverage models
trained on vast amounts of code from numerous repositories, capturing general
coding patterns. However, the impact of fine-tuning these models for specific
organizations or developers to boost their performance on such subjects remains
unexplored. In this work, we fill this gap by presenting solid empirical
evidence answering this question. More specifically, we consider 136 developers
from two organizations (Apache and Spring), two model architectures (T5 and
Code Llama), and three model sizes (60M, 750M, and 7B trainable parameters). T5
models (60M, 750M) were pre-trained and fine-tuned on over 2,000 open-source
projects, excluding the subject organizations' data, and compared against
versions fine-tuned on organization- and developer-specific datasets. For the
Code Llama model (7B), we compared the performance of the already pre-trained
model publicly available online with the same model fine-tuned via
parameter-efficient fine-tuning on organization- and developer-specific
datasets. Our results show that there is a boost in prediction capabilities
provided by both an organization-specific and a developer-specific additional
fine-tuning, with the former being particularly performant. Such a finding
generalizes across (i) the two subject organizations (i.e., Apache and Spring)
and (ii) models of completely different magnitude (from 60M to 7B trainable
parameters). Finally, we show that DL models fine-tuned on an
organization-specific dataset achieve the same completion performance as
out-of-the-box pre-trained code models that are $\sim$10$\times$ larger,
with consequent savings in terms of deployment and inference cost (e.g.,
smaller GPUs needed).
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 12:26:06 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Giagnorio",
"Alessandro",
""
],
[
"Martin-Lopez",
"Alberto",
""
],
[
"Bavota",
"Gabriele",
""
]
] | TITLE: Why Personalizing Deep Learning-Based Code Completion Tools Matters
ABSTRACT: Deep learning (DL)-based code completion tools have transformed software
development by enabling advanced code generation. These tools leverage models
trained on vast amounts of code from numerous repositories, capturing general
coding patterns. However, the impact of fine-tuning these models for specific
organizations or developers to boost their performance on such subjects remains
unexplored. In this work, we fill this gap by presenting solid empirical
evidence answering this question. More specifically, we consider 136 developers
from two organizations (Apache and Spring), two model architectures (T5 and
Code Llama), and three model sizes (60M, 750M, and 7B trainable parameters). T5
models (60M, 750M) were pre-trained and fine-tuned on over 2,000 open-source
projects, excluding the subject organizations' data, and compared against
versions fine-tuned on organization- and developer-specific datasets. For the
Code Llama model (7B), we compared the performance of the already pre-trained
model publicly available online with the same model fine-tuned via
parameter-efficient fine-tuning on organization- and developer-specific
datasets. Our results show that there is a boost in prediction capabilities
provided by both an organization-specific and a developer-specific additional
fine-tuning, with the former being particularly performant. Such a finding
generalizes across (i) the two subject organizations (i.e., Apache and Spring)
and (ii) models of completely different magnitude (from 60M to 7B trainable
parameters). Finally, we show that DL models fine-tuned on an
organization-specific dataset achieve the same completion performance as
out-of-the-box pre-trained code models that are $\sim$10$\times$ larger,
with consequent savings in terms of deployment and inference cost (e.g.,
smaller GPUs needed).
|
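Parameter-efficient fine-tuning of a 7B code model, as described for Code Llama above, is commonly done with LoRA. A sketch using Hugging Face `peft`; the target modules and rank are illustrative assumptions, not the paper's configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# Low-rank adapters on the attention projections; only these train.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # a tiny fraction of the 7B weights
# ...then train as usual on the organization- or developer-specific corpus.
```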
2503.14209 | Saif Ur Rehman Khan Mr | Saif Ur Rehman Khan, Muhammad Nabeel Asim, Sebastian Vollmer, Andreas
Dengel | AI-Driven Diabetic Retinopathy Diagnosis Enhancement through Image
Processing and Salp Swarm Algorithm-Optimized Ensemble Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diabetic retinopathy is a leading cause of blindness in diabetic patients and
early detection plays a crucial role in preventing vision loss. Traditional
diagnostic methods are often time-consuming and prone to errors. The emergence
of deep learning techniques has provided innovative solutions to improve
diagnostic efficiency. However, single deep learning models frequently face
issues related to extracting key features from complex retinal images. To
handle this problem, we present an effective ensemble method for DR diagnosis
comprising four main phases: image pre-processing, selection of backbone
pre-trained models, feature enhancement, and optimization. Our methodology
begins with the pre-processing phase, where we apply CLAHE to enhance image
contrast and Gamma correction is then used to adjust the brightness for better
feature recognition. We then apply Discrete Wavelet Transform (DWT) for image
fusion by combining multi-resolution details to create a richer dataset. Then,
we select three pre-trained models with the best performance, namely
DenseNet169, MobileNetV1, and Xception for diverse feature extraction. To
further improve feature extraction, an improved residual block is integrated
into each model. Finally, the predictions from these base models are then
aggregated using a weighted ensemble approach, with the weights optimized by
the Salp Swarm Algorithm (SSA). SSA intelligently explores the weight space
and finds the optimal configuration of base architectures to maximize the
performance of the ensemble model. The proposed model is evaluated on the
multiclass Kaggle APTOS 2019 dataset and obtained 88.52% accuracy.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 12:35:56 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Khan",
"Saif Ur Rehman",
""
],
[
"Asim",
"Muhammad Nabeel",
""
],
[
"Vollmer",
"Sebastian",
""
],
[
"Dengel",
"Andreas",
""
]
] | TITLE: AI-Driven Diabetic Retinopathy Diagnosis Enhancement through Image
Processing and Salp Swarm Algorithm-Optimized Ensemble Network
ABSTRACT: Diabetic retinopathy is a leading cause of blindness in diabetic patients and
early detection plays a crucial role in preventing vision loss. Traditional
diagnostic methods are often time-consuming and prone to errors. The emergence
of deep learning techniques has provided innovative solutions to improve
diagnostic efficiency. However, single deep learning models frequently face
issues related to extracting key features from complex retinal images. To
handle this problem, we present an effective ensemble method for DR diagnosis
comprising four main phases: image pre-processing, selection of backbone
pre-trained models, feature enhancement, and optimization. Our methodology
begins with the pre-processing phase, where we apply CLAHE to enhance image
contrast and Gamma correction is then used to adjust the brightness for better
feature recognition. We then apply Discrete Wavelet Transform (DWT) for image
fusion by combining multi-resolution details to create a richer dataset. Then,
we select three pre-trained models with the best performance, namely
DenseNet169, MobileNetV1, and Xception for diverse feature extraction. To
further improve feature extraction, an improved residual block is integrated
into each model. Finally, the predictions from these base models are then
aggregated using a weighted ensemble approach, with the weights optimized by
the Salp Swarm Algorithm (SSA). SSA intelligently explores the weight space
and finds the optimal configuration of base architectures to maximize the
performance of the ensemble model. The proposed model is evaluated on the
multiclass Kaggle APTOS 2019 dataset and obtained 88.52% accuracy.
|
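The weight-optimization step can be illustrated with a bare-bones Salp Swarm Algorithm searching convex combination weights that maximize validation accuracy of the fused predictions. A sketch under simplified update rules, not the authors' exact implementation:

```python
import numpy as np

def ssa_ensemble_weights(probs, labels, n_salps=20, iters=50, seed=0):
    """Search ensemble weights with a minimal Salp Swarm Algorithm.
    probs: (n_models, n_samples, n_classes); labels: (n_samples,)."""
    rng = np.random.default_rng(seed)
    m = probs.shape[0]

    def acc(w):
        w = np.abs(w) / (np.abs(w).sum() + 1e-12)   # project onto the simplex
        fused = np.tensordot(w, probs, axes=1)      # weighted-average ensemble
        return (fused.argmax(-1) == labels).mean()

    salps = rng.random((n_salps, m))
    food = max(salps, key=acc).copy()               # best position found so far
    for l in range(1, iters + 1):
        c1 = 2 * np.exp(-(4 * l / iters) ** 2)      # exploration decays over time
        for i in range(n_salps):
            if i == 0:                              # leader moves around the food
                step = c1 * rng.random(m)
                salps[i] = food + np.where(rng.random(m) < 0.5, step, -step)
            else:                                   # followers chain to the leader
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = np.clip(salps[i], 0, 1)
        best = max(salps, key=acc)
        if acc(best) > acc(food):
            food = best.copy()
    return np.abs(food) / np.abs(food).sum()
```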
2503.14213 | Ashraf Ghiye | Ashraf Ghiye, Baptiste Barreau, Laurent Carlier, Michalis Vazirgiannis | Rolling Forward: Enhancing LightGCN with Causal Graph Convolution for
Credit Bond Recommendation | 8 pages, published in the international conference for AI in Finance
(ACM ICAIF'24) | null | 10.1145/3677052.3698683 | null | cs.IR cs.LG q-fin.CP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks have significantly advanced research in recommender
systems over the past few years. These methods typically capture global
interests using aggregated past interactions and rely on static embeddings of
users and items over extended periods of time. While effective in some domains,
these methods fall short in many real-world scenarios, especially in finance,
where user interests and item popularity evolve rapidly over time. To address
these challenges, we introduce a novel extension to Light Graph Convolutional
Network (LightGCN) designed to learn temporal node embeddings that capture
dynamic interests. Our approach employs causal convolution to maintain a
forward-looking model architecture. By preserving the chronological order of
user-item interactions and introducing a dynamic update mechanism for
embeddings through a sliding window, the proposed model generates well-timed
and contextually relevant recommendations. Extensive experiments on a
real-world dataset from BNP Paribas demonstrate that our approach significantly
enhances the performance of LightGCN while maintaining the simplicity and
efficiency of its architecture. Our findings provide new insights into
designing graph-based recommender systems in time-sensitive applications,
particularly for financial product recommendations.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 12:47:01 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ghiye",
"Ashraf",
""
],
[
"Barreau",
"Baptiste",
""
],
[
"Carlier",
"Laurent",
""
],
[
"Vazirgiannis",
"Michalis",
""
]
] | TITLE: Rolling Forward: Enhancing LightGCN with Causal Graph Convolution for
Credit Bond Recommendation
ABSTRACT: Graph Neural Networks have significantly advanced research in recommender
systems over the past few years. These methods typically capture global
interests using aggregated past interactions and rely on static embeddings of
users and items over extended periods of time. While effective in some domains,
these methods fall short in many real-world scenarios, especially in finance,
where user interests and item popularity evolve rapidly over time. To address
these challenges, we introduce a novel extension to Light Graph Convolutional
Network (LightGCN) designed to learn temporal node embeddings that capture
dynamic interests. Our approach employs causal convolution to maintain a
forward-looking model architecture. By preserving the chronological order of
user-item interactions and introducing a dynamic update mechanism for
embeddings through a sliding window, the proposed model generates well-timed
and contextually relevant recommendations. Extensive experiments on a
real-world dataset from BNP Paribas demonstrate that our approach significantly
enhances the performance of LightGCN while maintaining the simplicity and
efficiency of its architecture. Our findings provide new insights into
designing graph-based recommender systems in time-sensitive applications,
particularly for financial product recommendations.
|
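The causal-convolution mechanism can be sketched as a left-padded 1D convolution over a node's sequence of per-step embeddings, so each step aggregates only its past. An illustrative PyTorch fragment, not the full LightGCN extension:

```python
import torch
import torch.nn as nn

class CausalTemporalConv(nn.Module):
    """Left-padded Conv1d over per-step node embeddings: the output at step t
    depends only on steps <= t, keeping the model forward-looking."""
    def __init__(self, dim=64, kernel=3):
        super().__init__()
        self.pad = kernel - 1
        self.conv = nn.Conv1d(dim, dim, kernel)

    def forward(self, x):                            # x: (batch, steps, dim)
        x = x.transpose(1, 2)                        # (batch, dim, steps)
        x = nn.functional.pad(x, (self.pad, 0))      # pad the past, never the future
        return self.conv(x).transpose(1, 2)          # (batch, steps, dim)

head = CausalTemporalConv()
out = head(torch.randn(8, 12, 64))                   # temporal node embeddings
```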
2503.14228 | Yasunori Ishii Mr | Nobuhiko Wakai, Satoshi Sato, Yasunori Ishii, Takayoshi Yamashita | Panoramic Distortion-Aware Tokenization for Person Detection and
Localization Using Transformers in Overhead Fisheye Images | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Person detection methods are used widely in applications including visual
surveillance, pedestrian detection, and robotics. However, accurate detection
of persons from overhead fisheye images remains an open challenge because of
factors including person rotation and small-sized persons. To address the
person rotation problem, we convert the fisheye images into panoramic images.
For smaller people, we focus on the geometry of the panoramas. Conventional
detection methods tend to focus on larger people because these larger people
yield large significant areas for feature maps. In equirectangular panoramic
images, we find that a person's height decreases linearly near the top of the
images. Using this finding, we leverage the significance values and aggregate
tokens that are sorted based on these values to balance the significant areas.
In this leveraging process, we introduce panoramic distortion-aware
tokenization. This tokenization procedure divides a panoramic image using
self-similarity figures that enable determination of optimal divisions without
gaps, and we leverage the maximum significant values in each tile of token
groups to preserve the significant areas of smaller people. To achieve higher
detection accuracy, we propose a person detection and localization method that
combines panoramic-image remapping and the tokenization procedure. Extensive
experiments demonstrated that our method outperforms conventional methods when
applied to large-scale datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 13:05:41 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wakai",
"Nobuhiko",
""
],
[
"Sato",
"Satoshi",
""
],
[
"Ishii",
"Yasunori",
""
],
[
"Yamashita",
"Takayoshi",
""
]
] | TITLE: Panoramic Distortion-Aware Tokenization for Person Detection and
Localization Using Transformers in Overhead Fisheye Images
ABSTRACT: Person detection methods are used widely in applications including visual
surveillance, pedestrian detection, and robotics. However, accurate detection
of persons from overhead fisheye images remains an open challenge because of
factors including person rotation and small-sized persons. To address the
person rotation problem, we convert the fisheye images into panoramic images.
For smaller people, we focus on the geometry of the panoramas. Conventional
detection methods tend to focus on larger people because these larger people
yield large significant areas for feature maps. In equirectangular panoramic
images, we find that a person's height decreases linearly near the top of the
images. Using this finding, we leverage the significance values and aggregate
tokens that are sorted based on these values to balance the significant areas.
In this leveraging process, we introduce panoramic distortion-aware
tokenization. This tokenization procedure divides a panoramic image using
self-similarity figures that enable determination of optimal divisions without
gaps, and we leverage the maximum significant values in each tile of token
groups to preserve the significant areas of smaller people. To achieve higher
detection accuracy, we propose a person detection and localization method that
combines panoramic-image remapping and the tokenization procedure. Extensive
experiments demonstrated that our method outperforms conventional methods when
applied to large-scale datasets.
|
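The fisheye-to-panorama conversion step is standard polar unrolling: panorama columns sweep the azimuth around the image center and rows sweep the radius. A geometric sketch with OpenCV (the distortion-aware tokenization itself is a separate step):

```python
import cv2
import numpy as np

def fisheye_to_panorama(img, out_h=256, out_w=1024):
    """Unroll an overhead fisheye image into a panorama: columns sample the
    azimuth angle, rows sample the radius (row 0 = image center)."""
    cy, cx = img.shape[0] / 2, img.shape[1] / 2
    r_max = min(cx, cy)
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = np.linspace(0, r_max, out_h)
    rr, tt = np.meshgrid(r, theta, indexing="ij")    # (out_h, out_w) grids
    map_x = (cx + rr * np.cos(tt)).astype(np.float32)
    map_y = (cy + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```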
2503.14229 | Yifei Dong | Yifei Dong, Fengyi Wu, Qi He, Heng Li, Minghan Li, Zebang Cheng,
Yuxuan Zhou, Jingdong Sun, Qi Dai, Zhi-Qi Cheng, Alexander G Hauptmann | HA-VLN: A Benchmark for Human-Aware Navigation in Discrete-Continuous
Environments with Dynamic Multi-Human Interactions, Real-World Validation,
and an Open Leaderboard | 27 pages, website: https://ha-vln-project.vercel.app/ | null | null | null | cs.AI cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-and-Language Navigation (VLN) systems often focus on either discrete
(panoramic) or continuous (free-motion) paradigms alone, overlooking the
complexities of human-populated, dynamic environments. We introduce a unified
Human-Aware VLN (HA-VLN) benchmark that merges these paradigms under explicit
social-awareness constraints. Our contributions include: 1. A standardized task
definition that balances discrete-continuous navigation with personal-space
requirements; 2. An enhanced human motion dataset (HAPS 2.0) and upgraded
simulators capturing realistic multi-human interactions, outdoor contexts, and
refined motion-language alignment; 3. Extensive benchmarking on 16,844
human-centric instructions, revealing how multi-human dynamics and partial
observability pose substantial challenges for leading VLN agents; 4. Real-world
robot tests validating sim-to-real transfer in crowded indoor spaces; and 5. A
public leaderboard supporting transparent comparisons across discrete and
continuous tasks. Empirical results show improved navigation success and fewer
collisions when social context is integrated, underscoring the need for
human-centric design. By releasing all datasets, simulators, agent code, and
evaluation tools, we aim to advance safer, more capable, and socially
responsible VLN research.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 13:05:55 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Dong",
"Yifei",
""
],
[
"Wu",
"Fengyi",
""
],
[
"He",
"Qi",
""
],
[
"Li",
"Heng",
""
],
[
"Li",
"Minghan",
""
],
[
"Cheng",
"Zebang",
""
],
[
"Zhou",
"Yuxuan",
""
],
[
"Sun",
"Jingdong",
""
],
[
"Dai",
"Qi",
""
],
[
"Cheng",
"Zhi-Qi",
""
],
[
"Hauptmann",
"Alexander G",
""
]
] | TITLE: HA-VLN: A Benchmark for Human-Aware Navigation in Discrete-Continuous
Environments with Dynamic Multi-Human Interactions, Real-World Validation,
and an Open Leaderboard
ABSTRACT: Vision-and-Language Navigation (VLN) systems often focus on either discrete
(panoramic) or continuous (free-motion) paradigms alone, overlooking the
complexities of human-populated, dynamic environments. We introduce a unified
Human-Aware VLN (HA-VLN) benchmark that merges these paradigms under explicit
social-awareness constraints. Our contributions include: 1. A standardized task
definition that balances discrete-continuous navigation with personal-space
requirements; 2. An enhanced human motion dataset (HAPS 2.0) and upgraded
simulators capturing realistic multi-human interactions, outdoor contexts, and
refined motion-language alignment; 3. Extensive benchmarking on 16,844
human-centric instructions, revealing how multi-human dynamics and partial
observability pose substantial challenges for leading VLN agents; 4. Real-world
robot tests validating sim-to-real transfer in crowded indoor spaces; and 5. A
public leaderboard supporting transparent comparisons across discrete and
continuous tasks. Empirical results show improved navigation success and fewer
collisions when social context is integrated, underscoring the need for
human-centric design. By releasing all datasets, simulators, agent code, and
evaluation tools, we aim to advance safer, more capable, and socially
responsible VLN research.
|
2503.14231 | Giovanni Delnevo | Ziyao Ling, Giovanni Delnevo, Paola Salomoni, Silvia Mirri | Multi-task Learning for Identification of Porcelain in Song and Yuan
Dynasties | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chinese porcelain holds immense historical and cultural value, making its
accurate classification essential for archaeological research and cultural
heritage preservation. Traditional classification methods rely heavily on
expert analysis, which is time-consuming, subjective, and difficult to scale.
This paper explores the application of deep learning (DL) and transfer learning techniques to
automate the classification of porcelain artifacts across four key attributes:
dynasty, glaze, ware, and type. We evaluate four Convolutional Neural Networks
(CNNs) - ResNet50, MobileNetV2, VGG16, and InceptionV3 - comparing their
performance with and without pre-trained weights. Our results demonstrate that
transfer learning significantly enhances classification accuracy, particularly
for complex tasks like type classification, where models trained from scratch
exhibit lower performance. MobileNetV2 and ResNet50 consistently achieve high
accuracy and robustness across all tasks, while VGG16 struggles with more
diverse classifications. We further discuss the impact of dataset limitations
and propose future directions, including domain-specific pre-training,
integration of attention mechanisms, explainable AI methods, and generalization
to other cultural artifacts.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 13:09:00 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ling",
"Ziyao",
""
],
[
"Delnevo",
"Giovanni",
""
],
[
"Salomoni",
"Paola",
""
],
[
"Mirri",
"Silvia",
""
]
] | TITLE: Multi-task Learning for Identification of Porcelain in Song and Yuan
Dynasties
ABSTRACT: Chinese porcelain holds immense historical and cultural value, making its
accurate classification essential for archaeological research and cultural
heritage preservation. Traditional classification methods rely heavily on
expert analysis, which is time-consuming, subjective, and difficult to scale.
This paper explores the application of deep learning (DL) and transfer learning techniques to
automate the classification of porcelain artifacts across four key attributes:
dynasty, glaze, ware, and type. We evaluate four Convolutional Neural Networks
(CNNs) - ResNet50, MobileNetV2, VGG16, and InceptionV3 - comparing their
performance with and without pre-trained weights. Our results demonstrate that
transfer learning significantly enhances classification accuracy, particularly
for complex tasks like type classification, where models trained from scratch
exhibit lower performance. MobileNetV2 and ResNet50 consistently achieve high
accuracy and robustness across all tasks, while VGG16 struggles with more
diverse classifications. We further discuss the impact of dataset limitations
and propose future directions, including domain-specific pre-training,
integration of attention mechanisms, explainable AI methods, and generalization
to other cultural artifacts.
|
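The multi-task setup described above maps naturally to one shared pre-trained backbone with four classification heads. A torchvision sketch with placeholder class counts, not the paper's exact configuration:

```python
import torch.nn as nn
from torchvision import models

class PorcelainMultiTaskNet(nn.Module):
    """Shared pre-trained trunk, four heads (dynasty, glaze, ware, type).
    The class counts below are illustrative assumptions."""
    def __init__(self, n_dynasty=2, n_glaze=5, n_ware=10, n_type=20):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep only the shared feature trunk
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            "dynasty": nn.Linear(feat_dim, n_dynasty),
            "glaze": nn.Linear(feat_dim, n_glaze),
            "ware": nn.Linear(feat_dim, n_ware),
            "type": nn.Linear(feat_dim, n_type),
        })

    def forward(self, x):
        f = self.backbone(x)
        return {name: head(f) for name, head in self.heads.items()}
```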
2503.14247 | Tingyang Xiao | Tingyang Xiao, Xiaolin Zhou, Liu Liu, Wei Sui, Wei Feng, Jiaxiong Qiu,
Xinjie Wang, and Zhizhong Su | GeoFlow-SLAM: A Robust Tightly-Coupled RGBD-Inertial Fusion SLAM for
Dynamic Legged Robotics | 8 pages | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents GeoFlow-SLAM, a robust and effective Tightly-Coupled
RGBD-inertial SLAM for legged robots operating in highly dynamic
environments. By integrating geometric consistency, legged odometry constraints,
and dual-stream optical flow (GeoFlow), our method addresses three critical
challenges: feature matching and pose initialization failures during fast
locomotion and visual feature scarcity in texture-less scenes. Specifically, in
rapid motion scenarios, feature matching is notably enhanced by leveraging
dual-stream optical flow, which combines prior map points and poses.
Additionally, we propose a robust pose initialization method for fast
locomotion and IMU error in legged robots, integrating IMU/Legged odometry,
inter-frame Perspective-n-Point (PnP), and Generalized Iterative Closest Point
(GICP). Furthermore, a novel optimization framework that tightly couples
depth-to-map and GICP geometric constraints is first introduced to improve the
robustness and accuracy in long-duration, visually texture-less environments.
The proposed algorithms achieve state-of-the-art (SOTA) on collected legged
robots and open-source datasets. To further promote research and development,
the open-source datasets and code will be made publicly available at
https://github.com/NSN-Hello/GeoFlow-SLAM
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 13:35:49 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xiao",
"Tingyang",
""
],
[
"Zhou",
"Xiaolin",
""
],
[
"Liu",
"Liu",
""
],
[
"Sui",
"Wei",
""
],
[
"Feng",
"Wei",
""
],
[
"Qiu",
"Jiaxiong",
""
],
[
"Wang",
"Xinjie",
""
],
[
"Su",
"Zhizhong",
""
]
] | TITLE: GeoFlow-SLAM: A Robust Tightly-Coupled RGBD-Inertial Fusion SLAM for
Dynamic Legged Robotics
ABSTRACT: This paper presents GeoFlow-SLAM, a robust and effective Tightly-Coupled
RGBD-inertial SLAM for legged robots operating in highly dynamic
environments. By integrating geometric consistency, legged odometry constraints,
and dual-stream optical flow (GeoFlow), our method addresses three critical
challenges: feature matching and pose initialization failures during fast
locomotion and visual feature scarcity in texture-less scenes. Specifically, in
rapid motion scenarios, feature matching is notably enhanced by leveraging
dual-stream optical flow, which combines prior map points and poses.
Additionally, we propose a robust pose initialization method for fast
locomotion and IMU error in legged robots, integrating IMU/Legged odometry,
inter-frame Perspective-n-Point (PnP), and Generalized Iterative Closest Point
(GICP). Furthermore, a novel optimization framework that tightly couples
depth-to-map and GICP geometric constraints is first introduced to improve the
robustness and accuracy in long-duration, visually texture-less environments.
The proposed algorithms achieve state-of-the-art (SOTA) on collected legged
robots and open-source datasets. To further promote research and development,
the open-source datasets and code will be made publicly available at
https://github.com/NSN-Hello/GeoFlow-SLAM
|
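One building block named above, robust PnP-based pose initialization from 3D map points matched to 2D image features, can be sketched with OpenCV; the IMU/legged-odometry fusion and GICP refinement are omitted here:

```python
import cv2
import numpy as np

# Given 3D map points matched to 2D detections in the current frame,
# recover the camera pose with RANSAC PnP (toy correspondences below).
object_pts = np.random.rand(30, 3).astype(np.float32)   # 3D map points
image_pts = np.random.rand(30, 2).astype(np.float32)    # matched 2D features
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], np.float32)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_pts, image_pts, K, None,
    iterationsCount=100, reprojectionError=3.0)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the recovered pose
```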
2503.14284 | Jiacen Xu | Jiacen Xu, Chenang Li, Yu Zheng, Zhou Li | Entente: Cross-silo Intrusion Detection on Network Log Graphs with
Federated Learning | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Graph-based Network Intrusion Detection System (GNIDS) has gained significant
momentum in detecting sophisticated cyber-attacks, like Advanced Persistent
Threat (APT), in an organization or across organizations. Though achieving
satisfying detection accuracy and adapting to ever-changing attacks and normal
patterns, all prior GNIDSs assume the centralized data settings directly, but
non-trivial data collection is not always practical under privacy regulations
nowadays. We argue that training a GNIDS model has to consider privacy
regulations, and propose to leverage federated learning (FL) to address this
prominent challenge.
Yet, directly applying FL to GNIDS is unlikely to succeed, due to issues like
non-IID (independent and identically distributed) graph data over clients and
the diverse design choices taken by different GNIDS. We address these issues
with a set of novel techniques tailored to the graph datasets, including
reference graph synthesis, graph sketching and adaptive contribution scaling,
and develop a new system Entente. We evaluate Entente on the large-scale LANL,
OpTC and Pivoting datasets. The result shows Entente outperforms the other
baseline FL algorithms and sometimes even the non-FL GNIDS. We also evaluate
Entente under FL poisoning attacks tailored to the GNIDS setting, and show
Entente is able to bound the attack success rate to low values. Overall, our
result suggests building cross-silo GNIDS is feasible and we hope to encourage
more efforts in this direction.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 14:21:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xu",
"Jiacen",
""
],
[
"Li",
"Chenang",
""
],
[
"Zheng",
"Yu",
""
],
[
"Li",
"Zhou",
""
]
] | TITLE: Entente: Cross-silo Intrusion Detection on Network Log Graphs with
Federated Learning
ABSTRACT: Graph-based Network Intrusion Detection System (GNIDS) has gained significant
momentum in detecting sophisticated cyber-attacks, like Advanced Persistent
Threat (APT), in an organization or across organizations. Though achieving
satisfactory detection accuracy and adapting to ever-changing attacks and
normal patterns, all prior GNIDSs directly assume centralized data settings,
but non-trivial data collection is not always practical under today's privacy
regulations. We argue that training a GNIDS model has to consider privacy
regulations, and propose to leverage federated learning (FL) to address this
prominent challenge.
Yet, directly applying FL to GNIDS is unlikely to succeed, due to issues like
non-IID (independent and identically distributed) graph data over clients and
the diverse design choices taken by different GNIDS. We address these issues
with a set of novel techniques tailored to the graph datasets, including
reference graph synthesis, graph sketching and adaptive contribution scaling,
and develop a new system Entente. We evaluate Entente on the large-scale LANL,
OpTC and Pivoting datasets. The result shows Entente outperforms the other
baseline FL algorithms and sometimes even the non-FL GNIDS. We also evaluate
Entente under FL poisoning attacks tailored to the GNIDS setting, and show
Entente is able to bound the attack success rate to low values. Overall, our
result suggests building cross-silo GNIDS is feasible and we hope to encourage
more efforts in this direction.
|
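Cross-silo training of this kind builds on federated averaging; the sketch below shows FedAvg with a per-client contribution scale, where the scaling rule is an illustrative assumption rather than Entente's published adaptive scheme:

```python
import numpy as np

def fedavg_adaptive(client_updates, client_sizes, scores):
    """Federated averaging with per-client contribution scaling: each client's
    weight is its data share modulated by a server-side score (e.g., similarity
    to a reference graph). The modulation here is an assumed, illustrative rule."""
    w = np.asarray(client_sizes, dtype=float) * np.asarray(scores, dtype=float)
    w /= w.sum()
    agg = {}
    for name in client_updates[0]:
        agg[name] = sum(wi * upd[name] for wi, upd in zip(w, client_updates))
    return agg

# Two toy clients with one-parameter "models".
updates = [{"layer": np.array([1.0, 2.0])}, {"layer": np.array([3.0, 0.0])}]
global_step = fedavg_adaptive(updates, client_sizes=[100, 300], scores=[1.0, 0.5])
```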
2503.14287 | Enrico Tosi | Enrico Tosi, Panwei Hu, Aleksandar Ichkov, Marina Petrova, Ljiljana
Simi\'c | Cross-Environment Transfer Learning for Location-Aided Beam Prediction
in 5G and Beyond Millimeter-Wave Networks | null | null | null | null | eess.SP cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Millimeter-wave (mm-wave) communications require beamforming and consequent
precise beam alignment between the gNodeB (gNB) and the user equipment (UE) to
overcome high propagation losses. This beam alignment needs to be constantly
updated for different UE locations based on beam-sweeping radio frequency
measurements, leading to significant beam management overhead. One potential
solution involves using machine learning (ML) beam prediction algorithms that
leverage UE position information to select the serving beam without the
overhead of beam sweeping. However, the highly site-specific nature of mm-wave
propagation means that ML models require training from scratch for each
scenario, which is inefficient in practice. In this paper, we propose a robust
cross-environment transfer learning solution for location-aided beam
prediction, whereby the ML model trained on a reference gNB is transferred to a
target gNB by fine-tuning with a limited dataset. Extensive simulation results
based on ray-tracing in two urban environments show the effectiveness of our
solution for both inter- and intra-city model transfer. Our results show that
by training the model on a reference gNB and transferring the model by
fine-tuning with only 5% of the target gNB dataset, we can achieve 80% accuracy
in predicting the best beam for the target gNB. Importantly, our approach
improves the poor generalization accuracy of transferring the model to new
environments without fine-tuning by around 75 percentage points. This
demonstrates that transfer learning enables high prediction accuracy while
reducing the computational and training dataset collection burden of ML-based
beam prediction, making it practical for 5G-and-beyond deployments.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 14:24:50 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Tosi",
"Enrico",
""
],
[
"Hu",
"Panwei",
""
],
[
"Ichkov",
"Aleksandar",
""
],
[
"Petrova",
"Marina",
""
],
[
"Simić",
"Ljiljana",
""
]
] | TITLE: Cross-Environment Transfer Learning for Location-Aided Beam Prediction
in 5G and Beyond Millimeter-Wave Networks
ABSTRACT: Millimeter-wave (mm-wave) communications require beamforming and
consequent precise beam alignment between the gNodeB (gNB) and the user
equipment (UE) to overcome high propagation losses. This beam alignment needs
to be constantly updated for different UE locations based on beam sweeping
radio frequency measurements, leading to significant beam management overhead.
One potential solution involves using machine learning (ML) beam prediction
algorithms that leverage UE position information to select the serving beam
without the overhead of beam sweeping. However, the highly site-specific
nature of mm-wave propagation means that ML models require training from
scratch for each scenario, which is inefficient in practice. In this paper, we
propose a robust cross-environment transfer learning solution for
location-aided beam prediction, whereby the ML model trained on a reference
gNB is transferred to a target gNB by fine-tuning with a limited dataset.
Extensive simulation results based on ray-tracing in two urban environments
show the effectiveness of our solution for both inter- and intra-city model
transfer. Our results show that by training the model on a reference gNB and
transferring the model by fine-tuning with only 5% of the target gNB dataset,
we can achieve 80% accuracy in predicting the best beam for the target gNB.
Importantly, our approach improves the poor generalization accuracy of
transferring the model to new environments without fine-tuning by around 75
percentage points. This demonstrates that transfer learning enables high
prediction accuracy while reducing the computational and training dataset
collection burden of ML-based beam prediction, making it practical for
5G-and-beyond deployments.
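To make the fine-tuning step concrete, here is a minimal, hypothetical PyTorch sketch: a small position-to-beam classifier (standing in for the unspecified model of the paper) is adapted to a target gNB using only a small labeled slice of its data. All names, layer sizes, and data below are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical beam-prediction MLP: maps a 3-D UE position to one of 64 beams.
class BeamMLP(nn.Module):
    def __init__(self, n_beams: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_beams),
        )

    def forward(self, x):
        return self.net(x)

def fine_tune(model, target_xy, target_beam, epochs: int = 50, lr: float = 1e-4):
    """Fine-tune a reference-gNB model on a small target-gNB dataset."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(target_xy), target_beam)
        loss.backward()
        opt.step()
    return model

# Toy usage: pretend we kept only a 5% slice of the target-gNB samples.
model = BeamMLP()                      # in practice: load reference-gNB weights
xy = torch.randn(100, 3)               # UE positions (placeholder data)
beams = torch.randint(0, 64, (100,))   # best-beam labels (placeholder data)
fine_tune(model, xy, beams)
```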
|
2503.14304 | Yuheng Li | Yuheng Li, Mingzhe Hu, Richard L.J. Qiu, Maria Thor, Andre Williams,
Deborah Marshall and Xiaofeng Yang | RoMedFormer: A Rotary-Embedding Transformer Foundation Model for 3D
Genito-Pelvic Structure Segmentation in MRI and CT | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning-based segmentation of genito-pelvic structures in MRI and CT is
crucial for applications such as radiation therapy, surgical planning, and
disease diagnosis. However, existing segmentation models often struggle with
generalizability across imaging modalities, and anatomical variations. In this
work, we propose RoMedFormer, a rotary-embedding transformer-based foundation
model designed for 3D female genito-pelvic structure segmentation in both MRI
and CT. RoMedFormer leverages self-supervised learning and rotary positional
embeddings to enhance spatial feature representation and capture long-range
dependencies in 3D medical data. We pre-train our model using a diverse dataset
of 3D MRI and CT scans and fine-tune it for downstream segmentation tasks.
Experimental results demonstrate that RoMedFormer achieves superior
performance in segmenting genito-pelvic organs. Our findings highlight the
potential of
transformer-based architectures in medical image segmentation and pave the way
for more transferable segmentation frameworks.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 14:45:05 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Li",
"Yuheng",
""
],
[
"Hu",
"Mingzhe",
""
],
[
"Qiu",
"Richard L. J.",
""
],
[
"Thor",
"Maria",
""
],
[
"Williams",
"Andre",
""
],
[
"Marshall",
"Deborah",
""
],
[
"Yang",
"Xiaofeng",
""
]
] | TITLE: RoMedFormer: A Rotary-Embedding Transformer Foundation Model for 3D
Genito-Pelvic Structure Segmentation in MRI and CT
ABSTRACT: Deep learning-based segmentation of genito-pelvic structures in MRI and CT is
crucial for applications such as radiation therapy, surgical planning, and
disease diagnosis. However, existing segmentation models often struggle with
generalizability across imaging modalities, and anatomical variations. In this
work, we propose RoMedFormer, a rotary-embedding transformer-based foundation
model designed for 3D female genito-pelvic structure segmentation in both MRI
and CT. RoMedFormer leverages self-supervised learning and rotary positional
embeddings to enhance spatial feature representation and capture long-range
dependencies in 3D medical data. We pre-train our model using a diverse dataset
of 3D MRI and CT scans and fine-tune it for downstream segmentation tasks.
Experimental results demonstrate that RoMedFormer achieves superior
performance in segmenting genito-pelvic organs. Our findings highlight the
potential of
transformer-based architectures in medical image segmentation and pave the way
for more transferable segmentation frameworks.
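As background for the rotary-embedding component, here is a minimal generic sketch of rotary positional embeddings in PyTorch, in a 1-D rotate-half formulation; the paper's exact variant for 3-D medical volumes is not specified here, so this is an illustration of the general mechanism only.

```python
import torch

def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary positional embeddings to x of shape (seq_len, dim).

    Pairs of channels are rotated by a position-dependent angle, so relative
    positions are encoded directly in query/key dot products.
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(16, 64)      # 16 tokens (e.g., flattened 3D patches), dim 64
q_rot = rotary_embed(q)
print(q_rot.shape)           # torch.Size([16, 64])
```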
|
2503.14322 | Florian Heinrichs | Tiago Vasconcelos Afonso, Florian Heinrichs | Consumer-grade EEG-based Eye Tracking | Data descriptor, 13 pages, 8 figures, 5 tables | null | null | null | eess.SP cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electroencephalography-based eye tracking (EEG-ET) leverages eye movement
artifacts in EEG signals as an alternative to camera-based tracking. While
EEG-ET offers advantages such as robustness in low-light conditions and better
integration with brain-computer interfaces, its development lags behind
traditional methods, particularly in consumer-grade settings. To support
research in this area, we present a dataset comprising simultaneous EEG and
eye-tracking recordings from 113 participants across 116 sessions, amounting to
11 hours and 45 minutes of recordings. Data was collected using a
consumer-grade EEG headset and webcam-based eye tracking, capturing eye
movements under four experimental paradigms with varying complexity. The
dataset enables the evaluation of EEG-ET methods across different gaze
conditions and serves as a benchmark for assessing feasibility with affordable
hardware. Data preprocessing includes handling of missing values and filtering
to enhance usability. In addition to the dataset, code for data preprocessing
and analysis is available to support reproducibility and further research.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 14:53:20 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Afonso",
"Tiago Vasconcelos",
""
],
[
"Heinrichs",
"Florian",
""
]
] | TITLE: Consumer-grade EEG-based Eye Tracking
ABSTRACT: Electroencephalography-based eye tracking (EEG-ET) leverages eye movement
artifacts in EEG signals as an alternative to camera-based tracking. While
EEG-ET offers advantages such as robustness in low-light conditions and better
integration with brain-computer interfaces, its development lags behind
traditional methods, particularly in consumer-grade settings. To support
research in this area, we present a dataset comprising simultaneous EEG and
eye-tracking recordings from 113 participants across 116 sessions, amounting to
11 hours and 45 minutes of recordings. Data was collected using a
consumer-grade EEG headset and webcam-based eye tracking, capturing eye
movements under four experimental paradigms with varying complexity. The
dataset enables the evaluation of EEG-ET methods across different gaze
conditions and serves as a benchmark for assessing feasibility with affordable
hardware. Data preprocessing includes handling of missing values and filtering
to enhance usability. In addition to the dataset, code for data preprocessing
and analysis is available to support reproducibility and further research.
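A minimal sketch of the kind of preprocessing described (interpolating missing samples, then band-pass filtering); the sampling rate and cutoff frequencies below are assumptions for illustration, not the dataset's documented settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(eeg: np.ndarray, fs: float = 256.0) -> np.ndarray:
    """Illustrative cleanup: interpolate NaNs per channel, band-pass 0.5-40 Hz."""
    out = eeg.copy()
    for ch in range(out.shape[0]):
        nans = np.isnan(out[ch])
        if nans.any():
            idx = np.arange(out.shape[1])
            out[ch, nans] = np.interp(idx[nans], idx[~nans], out[ch, ~nans])
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, out, axis=1)

# Toy signal: 4 channels, 10 s at 256 Hz, with a few dropped samples.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2560))
x[1, 100:105] = np.nan
print(preprocess_eeg(x).shape)  # (4, 2560)
```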
|
2503.14346 | Xavier Anad\'on | X. Anad\'on, Javier Rodr\'iguez-Puigvert, J.M.M. Montiel | 3D Densification for Multi-Map Monocular VSLAM in Endoscopy | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-map Sparse Monocular visual Simultaneous Localization and Mapping
applied to monocular endoscopic sequences has proven effective for robustly
recovering tracking after the frequent losses in endoscopy due to motion blur,
temporal occlusion, tool interactions, or water jets. The sparse multi-maps are
adequate for robust camera localization; however, they are poor for
environment representation: they are noisy, contain a high percentage of
inaccurately reconstructed 3D points, including significant outliers, and,
more importantly, have an unacceptably low density for clinical applications.
We propose a method to remove outliers and densify the maps of the
state-of-the-art sparse endoscopy multi-map CudaSIFT-SLAM. The up-to-scale
dense depth predictions of the LightDepth neural network are aligned with the
sparse CudaSIFT submaps by means of LMedS, which is robust to spurious data.
Our system mitigates the
inherent scale ambiguity in monocular depth estimation while filtering
outliers, leading to reliable densified 3D maps.
We provide experimental evidence of accurately densified maps (4.15 mm RMS
accuracy) at affordable computing time on the C3VD phantom colon dataset. We
report qualitative results on the real colonoscopy from the Endomapper dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:25:38 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Anadón",
"X.",
""
],
[
"Rodríguez-Puigvert",
"Javier",
""
],
[
"Montiel",
"J. M. M.",
""
]
] | TITLE: 3D Densification for Multi-Map Monocular VSLAM in Endoscopy
ABSTRACT: Multi-map Sparse Monocular visual Simultaneous Localization and Mapping
applied to monocular endoscopic sequences has proven effective for robustly
recovering tracking after the frequent losses in endoscopy due to motion blur,
temporal occlusion, tool interactions, or water jets. The sparse multi-maps are
adequate for robust camera localization; however, they are poor for
environment representation: they are noisy, contain a high percentage of
inaccurately reconstructed 3D points, including significant outliers, and,
more importantly, have an unacceptably low density for clinical applications.
We propose a method to remove outliers and densify the maps of the
state-of-the-art sparse endoscopy multi-map CudaSIFT-SLAM. The up-to-scale
dense depth predictions of the LightDepth neural network are aligned with the
sparse CudaSIFT submaps by means of LMedS, which is robust to spurious data.
Our system mitigates the
inherent scale ambiguity in monocular depth estimation while filtering
outliers, leading to reliable densified 3D maps.
We provide experimental evidence of accurately densified maps (4.15 mm RMS
accuracy) at affordable computing time on the C3VD phantom colon dataset. We
report qualitative results on the real colonoscopy from the Endomapper dataset.
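To illustrate the robust scale-alignment step, here is a minimal NumPy sketch of least-median-of-squares (LMedS) scale estimation between up-to-scale depth predictions and sparse SLAM depths; the one-point hypothesis sampling and trial count are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def lmeds_scale(pred_depth: np.ndarray, sparse_depth: np.ndarray,
                n_trials: int = 500, seed: int = 0) -> float:
    """Each trial hypothesizes the scale from one correspondence and keeps
    the hypothesis with the smallest median squared residual, which
    tolerates up to ~50% outliers."""
    rng = np.random.default_rng(seed)
    best_s, best_med = 1.0, np.inf
    for _ in range(n_trials):
        i = rng.integers(len(pred_depth))
        s = sparse_depth[i] / pred_depth[i]
        med = np.median((s * pred_depth - sparse_depth) ** 2)
        if med < best_med:
            best_s, best_med = s, med
    return best_s

# Toy data: true scale 2.5, with 30% gross outliers among the sparse points.
rng = np.random.default_rng(1)
d = rng.uniform(0.5, 5.0, 200)
sparse = 2.5 * d + rng.normal(0, 0.01, 200)
sparse[:60] += rng.uniform(2, 5, 60)          # inject outliers
print(round(lmeds_scale(d, sparse), 2))       # ~2.5
```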
|
2503.14355 | Runqi Meng | Runqi Meng, Sifan Song, Pengfei Jin, Yujin Oh, Lin Teng, Yulin Wang,
Yiqun Sun, Ling Chen, Xiang Li, Quanzheng Li, Ning Guo, Dinggang Shen | MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of
Pan-Tumors with Knowledge-Driven Prompts | 10 pages, 2 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate tumor segmentation is crucial for cancer diagnosis and treatment.
While foundation models have advanced general-purpose segmentation, existing
methods still struggle with: (1) limited incorporation of medical priors, (2)
imbalance between generic and tumor-specific features, and (3) high
computational costs for clinical adaptation. To address these challenges, we
propose MAST-Pro (Mixture-of-experts for Adaptive Segmentation of pan-Tumors
with knowledge-driven Prompts), a novel framework that integrates dynamic
Mixture-of-Experts (D-MoE) and knowledge-driven prompts for pan-tumor
segmentation. Specifically, text and anatomical prompts provide domain-specific
priors, guiding tumor representation learning, while D-MoE dynamically selects
experts to balance generic and tumor-specific feature learning, improving
segmentation accuracy across diverse tumor types. To enhance efficiency, we
employ Parameter-Efficient Fine-Tuning (PEFT), optimizing MAST-Pro with
significantly reduced computational overhead. Experiments on multi-anatomical
tumor datasets demonstrate that MAST-Pro outperforms state-of-the-art
approaches, achieving up to a 5.20% improvement in average DSC while reducing
trainable parameters by 91.04%, without compromising accuracy.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:39:44 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Meng",
"Runqi",
""
],
[
"Song",
"Sifan",
""
],
[
"Jin",
"Pengfei",
""
],
[
"Oh",
"Yujin",
""
],
[
"Teng",
"Lin",
""
],
[
"Wang",
"Yulin",
""
],
[
"Sun",
"Yiqun",
""
],
[
"Chen",
"Ling",
""
],
[
"Li",
"Xiang",
""
],
[
"Li",
"Quanzheng",
""
],
[
"Guo",
"Ning",
""
],
[
"Shen",
"Dinggang",
""
]
] | TITLE: MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of
Pan-Tumors with Knowledge-Driven Prompts
ABSTRACT: Accurate tumor segmentation is crucial for cancer diagnosis and treatment.
While foundation models have advanced general-purpose segmentation, existing
methods still struggle with: (1) limited incorporation of medical priors, (2)
imbalance between generic and tumor-specific features, and (3) high
computational costs for clinical adaptation. To address these challenges, we
propose MAST-Pro (Mixture-of-experts for Adaptive Segmentation of pan-Tumors
with knowledge-driven Prompts), a novel framework that integrates dynamic
Mixture-of-Experts (D-MoE) and knowledge-driven prompts for pan-tumor
segmentation. Specifically, text and anatomical prompts provide domain-specific
priors, guiding tumor representation learning, while D-MoE dynamically selects
experts to balance generic and tumor-specific feature learning, improving
segmentation accuracy across diverse tumor types. To enhance efficiency, we
employ Parameter-Efficient Fine-Tuning (PEFT), optimizing MAST-Pro with
significantly reduced computational overhead. Experiments on multi-anatomical
tumor datasets demonstrate that MAST-Pro outperforms state-of-the-art
approaches, achieving up to a 5.20% improvement in average DSC while reducing
trainable parameters by 91.04%, without compromising accuracy.
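A toy sketch of the dynamic expert-selection idea: a gating network produces per-sample weights that blend expert outputs. The layer sizes and the plain linear experts are placeholders for illustration, not the MAST-Pro architecture.

```python
import torch
import torch.nn as nn

class DynamicMoE(nn.Module):
    """Toy dynamic mixture-of-experts with a softmax gate over experts."""
    def __init__(self, dim: int = 32, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(n_experts)]
        )
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):                                # x: (batch, dim)
        w = torch.softmax(self.gate(x), dim=-1)          # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)
        return (w.unsqueeze(-1) * outs).sum(dim=1)       # weighted blend

x = torch.randn(8, 32)
print(DynamicMoE()(x).shape)   # torch.Size([8, 32])
```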
|
2503.14356 | Alexander Partin | Alexander Partin (1), Priyanka Vasanthakumari (1), Oleksandr Narykov
(1), Andreas Wilke (1), Natasha Koussa (2), Sara E. Jones (2), Yitan Zhu (1),
Jamie C. Overbeek (1), Rajeev Jain (1), Gayara Demini Fernando (3), Cesar
Sanchez-Villalobos (4), Cristina Garcia-Cardona (5), Jamaludin Mohd-Yusof
(5), Nicholas Chia (1), Justin M. Wozniak (1), Souparno Ghosh (3), Ranadip
Pal (4), Thomas S. Brettin (1), M. Ryan Weil (2), Rick L. Stevens (1 and 6)
((1) Division of Data Science and Learning, Argonne National Laboratory,
Lemont, IL, USA, (2) Frederick National Laboratory for Cancer Research,
Cancer Data Science Initiatives, Cancer Research Technology Program,
Frederick, MD, USA, (3) Department of Statistics, University of
Nebraska-Lincoln, Lincoln, NE, USA, (4) Department of Electrical and Computer
Engineering, Texas Tech University, Lubbock, TX, USA, (5) Division of
Computer, Computational and Statistical Sciences, Los Alamos National
Laboratory, Los Alamos, NM, USA, (6) Department of Computer Science, The
University of Chicago, Chicago, IL, USA) | Benchmarking community drug response prediction models: datasets,
models, tools, and metrics for cross-dataset generalization analysis | 18 pages, 9 figures | null | null | null | cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Deep learning (DL) and machine learning (ML) models have shown promise in
drug response prediction (DRP), yet their ability to generalize across datasets
remains an open question, raising concerns about their real-world
applicability. Due to the lack of standardized benchmarking approaches, model
evaluations and comparisons often rely on inconsistent datasets and evaluation
criteria, making it difficult to assess true predictive capabilities. In this
work, we introduce a benchmarking framework for evaluating cross-dataset
prediction generalization in DRP models. Our framework incorporates five
publicly available drug screening datasets, six standardized DRP models, and a
scalable workflow for systematic evaluation. To assess model generalization, we
introduce a set of evaluation metrics that quantify both absolute performance
(e.g., predictive accuracy across datasets) and relative performance (e.g.,
performance drop compared to within-dataset results), enabling a more
comprehensive assessment of model transferability. Our results reveal
substantial performance drops when models are tested on unseen datasets,
underscoring the importance of rigorous generalization assessments. While
several models demonstrate relatively strong cross-dataset generalization, no
single model consistently outperforms across all datasets. Furthermore, we
identify CTRPv2 as the most effective source dataset for training, yielding
higher generalization scores across target datasets. By sharing this
standardized evaluation framework with the community, our study aims to
establish a rigorous foundation for model comparison, and accelerate the
development of robust DRP models for real-world applications.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:40:18 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Partin",
"Alexander",
"",
"1"
],
[
"Vasanthakumari",
"Priyanka",
"",
"1"
],
[
"Narykov",
"Oleksandr",
"",
"1"
],
[
"Wilke",
"Andreas",
"",
"1"
],
[
"Koussa",
"Natasha",
"",
"2"
],
[
"Jones",
"Sara E.",
"",
"2"
],
[
"Zhu",
"Yitan",
"",
"1"
],
[
"Overbeek",
"Jamie C.",
"",
"1"
],
[
"Jain",
"Rajeev",
"",
"1"
],
[
"Fernando",
"Gayara Demini",
"",
"3"
],
[
"Sanchez-Villalobos",
"Cesar",
"",
"4"
],
[
"Garcia-Cardona",
"Cristina",
"",
"5"
],
[
"Mohd-Yusof",
"Jamaludin",
"",
"5"
],
[
"Chia",
"Nicholas",
"",
"1"
],
[
"Wozniak",
"Justin M.",
"",
"1"
],
[
"Ghosh",
"Souparno",
"",
"3"
],
[
"Pal",
"Ranadip",
"",
"4"
],
[
"Brettin",
"Thomas S.",
"",
"1"
],
[
"Weil",
"M. Ryan",
"",
"2"
],
[
"Stevens",
"Rick L.",
"",
"1 and 6"
]
] | TITLE: Benchmarking community drug response prediction models: datasets,
models, tools, and metrics for cross-dataset generalization analysis
ABSTRACT: Deep learning (DL) and machine learning (ML) models have shown promise in
drug response prediction (DRP), yet their ability to generalize across datasets
remains an open question, raising concerns about their real-world
applicability. Due to the lack of standardized benchmarking approaches, model
evaluations and comparisons often rely on inconsistent datasets and evaluation
criteria, making it difficult to assess true predictive capabilities. In this
work, we introduce a benchmarking framework for evaluating cross-dataset
prediction generalization in DRP models. Our framework incorporates five
publicly available drug screening datasets, six standardized DRP models, and a
scalable workflow for systematic evaluation. To assess model generalization, we
introduce a set of evaluation metrics that quantify both absolute performance
(e.g., predictive accuracy across datasets) and relative performance (e.g.,
performance drop compared to within-dataset results), enabling a more
comprehensive assessment of model transferability. Our results reveal
substantial performance drops when models are tested on unseen datasets,
underscoring the importance of rigorous generalization assessments. While
several models demonstrate relatively strong cross-dataset generalization, no
single model consistently outperforms across all datasets. Furthermore, we
identify CTRPv2 as the most effective source dataset for training, yielding
higher generalization scores across target datasets. By sharing this
standardized evaluation framework with the community, our study aims to
establish a rigorous foundation for model comparison, and accelerate the
development of robust DRP models for real-world applications.
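A minimal sketch of the absolute/relative evaluation idea: given a table of (train dataset, test dataset) scores, report each cross-dataset score alongside its drop from the within-dataset result on the same test set. The metric values and dataset names below are illustrative.

```python
def generalization_metrics(scores: dict[tuple[str, str], float]):
    """scores maps (train_dataset, test_dataset) -> a score such as accuracy.
    Returns, per cross-dataset pair, the absolute score and the drop relative
    to the within-dataset result on the same test set."""
    out = {}
    for (src, tgt), s in scores.items():
        if src == tgt:
            continue
        within = scores[(tgt, tgt)]
        out[(src, tgt)] = {"absolute": s, "relative_drop": within - s}
    return out

scores = {
    ("CTRPv2", "CTRPv2"): 0.82, ("GDSCv2", "GDSCv2"): 0.79,
    ("CTRPv2", "GDSCv2"): 0.66, ("GDSCv2", "CTRPv2"): 0.58,
}
for pair, m in generalization_metrics(scores).items():
    print(pair, m)
```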
|
2503.14357 | Giovanni Sansavini | Alfredo Oneto, Blazhe Gjorgiev, Giovanni Sansavini | Wasserstein-based Kernels for Clustering: Application to Power
Distribution Graphs | null | null | null | null | cs.LG stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Many data clustering applications must handle objects that cannot be
represented as vector data. In this context, the bag-of-vectors representation
can be leveraged to describe complex objects through discrete distributions,
and the Wasserstein distance can effectively measure the dissimilarity between
them. Additionally, kernel methods can be used to embed data into feature
spaces that are easier to analyze. Despite significant progress in data
clustering, a method that simultaneously accounts for distributional and
vectorial dissimilarity measures is still lacking. To tackle this gap, this
work explores kernel methods and Wasserstein distance metrics to develop a
computationally tractable clustering framework. The compositional properties of
kernels allow the simultaneous handling of different metrics, enabling the
integration of both vectors and discrete distributions for object
representation. This approach is flexible enough to be applied in various
domains, such as graph analysis and image processing. The framework consists of
three main components. First, we efficiently approximate pairwise Wasserstein
distances using multiple reference distributions. Second, we employ kernel
functions based on Wasserstein distances and present ways of composing kernels
to express different types of information. Finally, we use the kernels to
cluster data and evaluate the quality of the results using scalable and
distance-agnostic validity indices. A case study involving two datasets of 879
and 34,920 power distribution graphs demonstrates the framework's effectiveness
and efficiency.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:40:55 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Oneto",
"Alfredo",
""
],
[
"Gjorgiev",
"Blazhe",
""
],
[
"Sansavini",
"Giovanni",
""
]
] | TITLE: Wasserstein-based Kernels for Clustering: Application to Power
Distribution Graphs
ABSTRACT: Many data clustering applications must handle objects that cannot be
represented as vector data. In this context, the bag-of-vectors representation
can be leveraged to describe complex objects through discrete distributions,
and the Wasserstein distance can effectively measure the dissimilarity between
them. Additionally, kernel methods can be used to embed data into feature
spaces that are easier to analyze. Despite significant progress in data
clustering, a method that simultaneously accounts for distributional and
vectorial dissimilarity measures is still lacking. To tackle this gap, this
work explores kernel methods and Wasserstein distance metrics to develop a
computationally tractable clustering framework. The compositional properties of
kernels allow the simultaneous handling of different metrics, enabling the
integration of both vectors and discrete distributions for object
representation. This approach is flexible enough to be applied in various
domains, such as graph analysis and image processing. The framework consists of
three main components. First, we efficiently approximate pairwise Wasserstein
distances using multiple reference distributions. Second, we employ kernel
functions based on Wasserstein distances and present ways of composing kernels
to express different types of information. Finally, we use the kernels to
cluster data and evaluate the quality of the results using scalable and
distance-agnostic validity indices. A case study involving two datasets of 879
and 34,920 power distribution graphs demonstrates the framework's effectiveness
and efficiency.
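A minimal sketch of a Wasserstein-distance kernel over bags of 1-D values using SciPy. Note that exp(-gamma * W1) is a common heuristic choice and is not guaranteed positive semi-definite in general; the bag construction below is a placeholder for object features such as per-graph line lengths.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_kernel(bags: list[np.ndarray], gamma: float = 1.0) -> np.ndarray:
    """Gram matrix K[i, j] = exp(-gamma * W1(bag_i, bag_j)); composing such
    kernels (e.g., multiplying with an RBF kernel on vector features) mixes
    distributional and vectorial information."""
    n = len(bags)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            d = wasserstein_distance(bags[i], bags[j])
            K[i, j] = K[j, i] = np.exp(-gamma * d)
    return K

# Each object is represented by a discrete distribution of values.
rng = np.random.default_rng(0)
bags = [rng.exponential(scale=s, size=50) for s in (1.0, 1.1, 3.0)]
print(wasserstein_kernel(bags).round(2))
```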
|
2503.14358 | Chao Wang | Chao Wang, Giulio Franzese, Alessandro Finamore, Pietro Michiardi | RFMI: Estimating Mutual Information on Rectified Flow for Text-to-Image
Alignment | to appear at ICLR 2025 Workshop on Deep Generative Model in Machine
Learning: Theory, Principle and Efficacy | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Rectified Flow (RF) models trained with a Flow matching framework have
achieved state-of-the-art performance on Text-to-Image (T2I) conditional
generation. Yet, multiple benchmarks show that synthetic images can still
suffer from poor alignment with the prompt, i.e., images show wrong attribute
binding, subject positioning, numeracy, etc. While the literature offers many
methods to improve T2I alignment, they all consider only Diffusion Models, and
require auxiliary datasets, scoring models, and linguistic analysis of the
prompt. In this paper we aim to address these gaps. First, we introduce RFMI, a
novel Mutual Information (MI) estimator for RF models that uses the pre-trained
model itself for the MI estimation. Then, we investigate a self-supervised
fine-tuning approach for T2I alignment based on RFMI that does not require
auxiliary information other than the pre-trained model itself. Specifically, a
fine-tuning set is constructed by selecting synthetic images generated from the
pre-trained RF model and having high point-wise MI between images and prompts.
Our experiments on MI estimation benchmarks demonstrate the validity of RFMI,
and empirical fine-tuning on SD3.5-Medium confirms the effectiveness of RFMI
for improving T2I alignment while maintaining image quality.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:41:45 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Chao",
""
],
[
"Franzese",
"Giulio",
""
],
[
"Finamore",
"Alessandro",
""
],
[
"Michiardi",
"Pietro",
""
]
] | TITLE: RFMI: Estimating Mutual Information on Rectified Flow for Text-to-Image
Alignment
ABSTRACT: Rectified Flow (RF) models trained with a Flow matching framework have
achieved state-of-the-art performance on Text-to-Image (T2I) conditional
generation. Yet, multiple benchmarks show that synthetic images can still
suffer from poor alignment with the prompt, i.e., images show wrong attribute
binding, subject positioning, numeracy, etc. While the literature offers many
methods to improve T2I alignment, they all consider only Diffusion Models, and
require auxiliary datasets, scoring models, and linguistic analysis of the
prompt. In this paper we aim to address these gaps. First, we introduce RFMI, a
novel Mutual Information (MI) estimator for RF models that uses the pre-trained
model itself for the MI estimation. Then, we investigate a self-supervised
fine-tuning approach for T2I alignment based on RFMI that does not require
auxiliary information other than the pre-trained model itself. Specifically, a
fine-tuning set is constructed by selecting synthetic images generated from the
pre-trained RF model and having high point-wise MI between images and prompts.
Our experiments on MI estimation benchmarks demonstrate the validity of RFMI,
and empirical fine-tuning on SD3.5-Medium confirms the effectiveness of RFMI
for improving T2I alignment while maintaining image quality.
|
2503.14359 | Zhengxian Yang | Zhengxian Yang, Shi Pan, Shengqi Wang, Haoxiang Wang, Li Lin, Guanjun
Li, Zhengqi Wen, Borong Lin, Jianhua Tao, Tao Yu | ImViD: Immersive Volumetric Videos for Enhanced VR Engagement | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User engagement is greatly enhanced by fully immersive multi-modal
experiences that combine visual and auditory stimuli. Consequently, the next
frontier in VR/AR technologies lies in immersive volumetric videos with
complete scene capture, large 6-DoF interaction space, multi-modal feedback,
and high resolution & frame-rate contents. To stimulate the reconstruction of
immersive volumetric videos, we introduce ImViD, a multi-view, multi-modal
dataset featuring complete space-oriented data capture and various
indoor/outdoor scenarios. Our capture rig supports multi-view video-audio
capture while on the move, a capability absent in existing datasets,
significantly enhancing the completeness, flexibility, and efficiency of data
capture.
The captured multi-view videos (with synchronized audios) are in 5K
resolution at 60FPS, lasting from 1-5 minutes, and include rich
foreground-background elements, and complex dynamics. We benchmark existing
methods using our dataset and establish a base pipeline for constructing
immersive volumetric videos from multi-view audiovisual inputs for 6-DoF
multi-modal immersive VR experiences. The benchmark and the reconstruction and
interaction results demonstrate the effectiveness of our dataset and baseline
method, which we believe will stimulate future research on immersive volumetric
video production.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:42:22 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yang",
"Zhengxian",
""
],
[
"Pan",
"Shi",
""
],
[
"Wang",
"Shengqi",
""
],
[
"Wang",
"Haoxiang",
""
],
[
"Lin",
"Li",
""
],
[
"Li",
"Guanjun",
""
],
[
"Wen",
"Zhengqi",
""
],
[
"Lin",
"Borong",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Yu",
"Tao",
""
]
] | TITLE: ImViD: Immersive Volumetric Videos for Enhanced VR Engagement
ABSTRACT: User engagement is greatly enhanced by fully immersive multi-modal
experiences that combine visual and auditory stimuli. Consequently, the next
frontier in VR/AR technologies lies in immersive volumetric videos with
complete scene capture, large 6-DoF interaction space, multi-modal feedback,
and high resolution & frame-rate contents. To stimulate the reconstruction of
immersive volumetric videos, we introduce ImViD, a multi-view, multi-modal
dataset featuring complete space-oriented data capture and various
indoor/outdoor scenarios. Our capture rig supports multi-view video-audio
capture while on the move, a capability absent in existing datasets,
significantly enhancing the completeness, flexibility, and efficiency of data
capture.
The captured multi-view videos (with synchronized audios) are in 5K
resolution at 60FPS, lasting from 1-5 minutes, and include rich
foreground-background elements, and complex dynamics. We benchmark existing
methods using our dataset and establish a base pipeline for constructing
immersive volumetric videos from multi-view audiovisual inputs for 6-DoF
multi-modal immersive VR experiences. The benchmark and the reconstruction and
interaction results demonstrate the effectiveness of our dataset and baseline
method, which we believe will stimulate future research on immersive volumetric
video production.
|
2503.14362 | Nicolas Menand | Nicolas Menand and Erik Waingarten | Streaming and Massively Parallel Algorithms for Euclidean Max-Cut | null | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | Given a set of vectors $X = \{ x_1,\dots, x_n \} \subset \mathbb{R}^d$, the
Euclidean max-cut problem asks to partition the vectors into two parts so as to
maximize the sum of Euclidean distances which cross the partition. We design
new algorithms for Euclidean max-cut in models for massive datasets:
$\bullet$ We give a fully-scalable constant-round MPC algorithm using $O(nd)
+ n \cdot \text{poly}( \log(n) / \epsilon)$ total space which gives a
$(1+\epsilon)$-approximate Euclidean max-cut.
$\bullet$ We give a dynamic streaming algorithm using $\text{poly}(d \log
\Delta / \epsilon)$ space when $X \subseteq [\Delta]^d$, which provides oracle
access to a $(1+\epsilon)$-approximate Euclidean max-cut.
Recently, Chen, Jiang, and Krauthgamer $[\text{STOC}~'23]$ gave a dynamic
streaming algorithm with space $\text{poly}(d\log\Delta/\epsilon)$ to
approximate the value of the Euclidean max-cut, but could not provide oracle
access to an approximately optimal cut. This was left open in that work, and we
resolve it here. Both algorithms follow from the same framework, which analyzes
a ``parallel'' and ``subsampled'' (Euclidean) version of a greedy algorithm of
Mathieu and Schudy $[\text{SODA}~'08]$ for dense max-cut.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:45:00 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Menand",
"Nicolas",
""
],
[
"Waingarten",
"Erik",
""
]
] | TITLE: Streaming and Massively Parallel Algorithms for Euclidean Max-Cut
ABSTRACT: Given a set of vectors $X = \{ x_1,\dots, x_n \} \subset \mathbb{R}^d$, the
Euclidean max-cut problem asks to partition the vectors into two parts so as to
maximize the sum of Euclidean distances which cross the partition. We design
new algorithms for Euclidean max-cut in models for massive datasets:
$\bullet$ We give a fully-scalable constant-round MPC algorithm using $O(nd)
+ n \cdot \text{poly}( \log(n) / \epsilon)$ total space which gives a
$(1+\epsilon)$-approximate Euclidean max-cut.
$\bullet$ We give a dynamic streaming algorithm using $\text{poly}(d \log
\Delta / \epsilon)$ space when $X \subseteq [\Delta]^d$, which provides oracle
access to a $(1+\epsilon)$-approximate Euclidean max-cut.
Recently, Chen, Jiang, and Krauthgamer $[\text{STOC}~'23]$ gave a dynamic
streaming algorithm with space $\text{poly}(d\log\Delta/\epsilon)$ to
approximate the value of the Euclidean max-cut, but could not provide oracle
access to an approximately optimal cut. This was left open in that work, and we
resolve it here. Both algorithms follow from the same framework, which analyzes
a ``parallel'' and ``subsampled'' (Euclidean) version of a greedy algorithm of
Mathieu and Schudy $[\text{SODA}~'08]$ for dense max-cut.
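For intuition, here is a tiny local-search heuristic in the spirit of the greedy algorithm referenced above (not the paper's MPC or streaming construction): each point is repeatedly moved to the side that maximizes the distance mass crossing the partition.

```python
import numpy as np

def greedy_euclidean_maxcut(X: np.ndarray, sweeps: int = 5) -> np.ndarray:
    """Start from a random partition, then move each point to the side that
    increases the total crossing Euclidean distance."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    side = np.random.default_rng(0).integers(0, 2, n)
    for _ in range(sweeps):
        for i in range(n):
            gain_if_0 = D[i, side == 1].sum()   # crossing mass if i on side 0
            gain_if_1 = D[i, side == 0].sum()
            side[i] = 0 if gain_if_0 >= gain_if_1 else 1
    return side

X = np.random.default_rng(1).standard_normal((100, 3))
side = greedy_euclidean_maxcut(X)
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(D[np.ix_(side == 0, side == 1)].sum())    # cut value
```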
|
2503.14369 | Giuseppe Bruni | Giuseppe Bruni, Sepehr Maleki, Senthil K Krishnababu | C(NN)FD -- Deep Learning Modelling of Multi-Stage Axial Compressors
Aerodynamics | null | null | null | null | physics.flu-dyn cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of scientific machine learning and its applications to numerical
analyses such as CFD has recently experienced a surge in interest. While its
viability has been demonstrated in different domains, it has not yet reached a
level of robustness and scalability to make it practical for industrial
applications in the turbomachinery field. The highly complex, turbulent, and
three-dimensional flows of multi-stage axial compressors for gas turbine
applications represent a remarkably challenging case. This is due to the
high-dimensionality of the regression of the flow-field from geometrical and
operational variables, and the high computational cost associated with the
large scale of the CFD domains. This paper demonstrates the development and
application of a generalized deep learning framework for predictions of the
flow field and aerodynamic performance of multi-stage axial compressors, also
potentially applicable to any type of turbomachinery. A physics-based
dimensionality reduction unlocks the potential for flow-field predictions for
large-scale domains, re-formulating the regression problem from an unstructured
to a structured one. The relevant physical equations are used to define a
multi-dimensional physical loss function. Compared to "black-box" approaches,
the proposed framework has the advantage of physically explainable predictions
of overall performance, as the corresponding aerodynamic drivers can be
identified on a 0D/1D/2D/3D level. An iterative architecture is employed,
improving the accuracy of the predictions, as well as estimating the associated
uncertainty. The model is trained on a series of datasets including
manufacturing and build variations, different geometries, compressor designs
and operating conditions. This demonstrates the capability to predict the
flow-field and the overall performance in a generalizable manner, with accuracy
comparable to the benchmark.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 15:58:58 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Bruni",
"Giuseppe",
""
],
[
"Maleki",
"Sepehr",
""
],
[
"Krishnababu",
"Senthil K",
""
]
] | TITLE: C(NN)FD -- Deep Learning Modelling of Multi-Stage Axial Compressors
Aerodynamics
ABSTRACT: The field of scientific machine learning and its applications to numerical
analyses such as CFD has recently experienced a surge in interest. While its
viability has been demonstrated in different domains, it has not yet reached a
level of robustness and scalability to make it practical for industrial
applications in the turbomachinery field. The highly complex, turbulent, and
three-dimensional flows of multi-stage axial compressors for gas turbine
applications represent a remarkably challenging case. This is due to the
high-dimensionality of the regression of the flow-field from geometrical and
operational variables, and the high computational cost associated with the
large scale of the CFD domains. This paper demonstrates the development and
application of a generalized deep learning framework for predictions of the
flow field and aerodynamic performance of multi-stage axial compressors, also
potentially applicable to any type of turbomachinery. A physics-based
dimensionality reduction unlocks the potential for flow-field predictions for
large-scale domains, re-formulating the regression problem from an unstructured
to a structured one. The relevant physical equations are used to define a
multi-dimensional physical loss function. Compared to "black-box" approaches,
the proposed framework has the advantage of physically explainable predictions
of overall performance, as the corresponding aerodynamic drivers can be
identified on a 0D/1D/2D/3D level. An iterative architecture is employed,
improving the accuracy of the predictions, as well as estimating the associated
uncertainty. The model is trained on a series of datasets including
manufacturing and build variations, different geometries, compressor designs
and operating conditions. This demonstrates the capability to predict the
flow-field and the overall performance in a generalizable manner, with accuracy
comparable to the benchmark.
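A heavily simplified sketch of a physics-informed loss of the kind described: a data-mismatch term plus a penalty on a conservation-law residual. The residual below is a stand-in, since the paper's actual physical equations are not reproduced here.

```python
import torch

def physics_informed_loss(pred: torch.Tensor, target: torch.Tensor,
                          conservation_residual: torch.Tensor,
                          lam: float = 0.1) -> torch.Tensor:
    """Composite loss: data term plus weighted physics-violation penalty."""
    data_term = torch.mean((pred - target) ** 2)
    physics_term = torch.mean(conservation_residual ** 2)
    return data_term + lam * physics_term

pred = torch.randn(4, 128, requires_grad=True)   # predicted flow field (toy)
target = torch.randn(4, 128)                     # CFD ground truth (toy)
residual = pred.sum(dim=1) - target.sum(dim=1)   # stand-in conservation check
loss = physics_informed_loss(pred, target, residual)
loss.backward()
print(float(loss))
```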
|
2503.14375 | Zachary Kingston | Sai Coumar, Zachary Kingston | Evaluating Machine Learning Approaches for ASCII Art Generation | 9 pages, 7 figures, 3 tables. Code available at
https://github.com/saiccoumar/deep_ascii_converter | null | null | null | cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating structured ASCII art using computational techniques demands a
careful interplay between aesthetic representation and computational precision,
requiring models that can effectively translate visual information into
symbolic text characters. Although Convolutional Neural Networks (CNNs) have
shown promise in this domain, the comparative performance of deep learning
architectures and classical machine learning methods remains unexplored. This
paper explores the application of contemporary ML and DL methods to generate
structured ASCII art, focusing on three key criteria: fidelity, character
classification accuracy, and output quality. We investigate deep learning
architectures, including Multilayer Perceptrons (MLPs), ResNet, and
MobileNetV2, alongside classical approaches such as Random Forests, Support
Vector Machines (SVMs) and k-Nearest Neighbors (k-NN), trained on an augmented
synthetic dataset of ASCII characters. Our results show that complex neural
network architectures often fall short in producing high-quality ASCII art,
whereas classical machine learning classifiers, despite their simplicity,
achieve performance similar to CNNs. Our findings highlight the strength of
classical methods in bridging model simplicity with output quality, offering
new insights into ASCII art synthesis and machine learning on image data with
low dimensionality.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:07:29 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Coumar",
"Sai",
""
],
[
"Kingston",
"Zachary",
""
]
] | TITLE: Evaluating Machine Learning Approaches for ASCII Art Generation
ABSTRACT: Generating structured ASCII art using computational techniques demands a
careful interplay between aesthetic representation and computational precision,
requiring models that can effectively translate visual information into
symbolic text characters. Although Convolutional Neural Networks (CNNs) have
shown promise in this domain, the comparative performance of deep learning
architectures and classical machine learning methods remains unexplored. This
paper explores the application of contemporary ML and DL methods to generate
structured ASCII art, focusing on three key criteria: fidelity, character
classification accuracy, and output quality. We investigate deep learning
architectures, including Multilayer Perceptrons (MLPs), ResNet, and
MobileNetV2, alongside classical approaches such as Random Forests, Support
Vector Machines (SVMs) and k-Nearest Neighbors (k-NN), trained on an augmented
synthetic dataset of ASCII characters. Our results show that complex neural
network architectures often fall short in producing high-quality ASCII art,
whereas classical machine learning classifiers, despite their simplicity,
achieve performance similar to CNNs. Our findings highlight the strength of
classical methods in bridging model simplicity with output quality, offering
new insights into ASCII art synthesis and machine learning on image data with
low dimensionality.
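A toy sketch of the classical k-NN route: classify image tiles against character glyph prototypes. The tiny hand-made 4x4 glyphs below stand in for a properly rasterized ASCII character set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Tiny hand-made 4x4 "glyph" prototypes; a real setup would rasterize the
# full printable ASCII set at the tile size used for conversion.
GLYPHS = {
    " ": np.zeros((4, 4)),
    ".": np.pad(np.ones((1, 1)), ((3, 0), (1, 2))),
    "|": np.tile([[0, 1, 1, 0]], (4, 1)),
    "#": np.ones((4, 4)),
}
X = np.stack([g.ravel() for g in GLYPHS.values()])
y = list(GLYPHS.keys())
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)

def image_to_ascii(img: np.ndarray) -> str:
    """Tile a binary image into 4x4 patches and classify each patch."""
    h, w = (s - s % 4 for s in img.shape)
    rows = []
    for r in range(0, h, 4):
        tiles = [img[r:r + 4, c:c + 4].ravel() for c in range(0, w, 4)]
        rows.append("".join(knn.predict(np.stack(tiles))))
    return "\n".join(rows)

img = np.zeros((8, 12))
img[:, 4:8] = 1.0
print(image_to_ascii(img))
```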
|
2503.14377 | Arash Afkanpour | Negin Baghbanzadeh, Adibvafa Fallahpour, Yasaman Parhizkar, Franklin
Ogidi, Shuvendu Roy, Sajad Ashkezari, Vahid Reza Khazaie, Michael Colacci,
Ali Etemad, Arash Afkanpour, Elham Dolatabadi | Advancing Medical Representation Learning Through High-Quality Data | null | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite the growing scale of medical Vision-Language datasets, the impact of
dataset quality on model performance remains under-explored. We introduce
Open-PMC, a high-quality medical dataset from PubMed Central, containing 2.2
million image-text pairs, enriched with image modality annotations, subfigures,
and summarized in-text references. Notably, the in-text references provide
richer medical context, extending beyond the abstract information typically
found in captions. Through extensive experiments, we benchmark Open-PMC against
larger datasets across retrieval and zero-shot classification tasks. Our
results show that dataset quality, not just size, drives significant performance
gains. We complement our benchmark with an in-depth analysis of feature
representation. Our findings highlight the crucial role of data curation
quality in advancing multimodal medical AI. We release Open-PMC, along with the
trained models and our codebase.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:10:11 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Baghbanzadeh",
"Negin",
""
],
[
"Fallahpour",
"Adibvafa",
""
],
[
"Parhizkar",
"Yasaman",
""
],
[
"Ogidi",
"Franklin",
""
],
[
"Roy",
"Shuvendu",
""
],
[
"Ashkezari",
"Sajad",
""
],
[
"Khazaie",
"Vahid Reza",
""
],
[
"Colacci",
"Michael",
""
],
[
"Etemad",
"Ali",
""
],
[
"Afkanpour",
"Arash",
""
],
[
"Dolatabadi",
"Elham",
""
]
] | TITLE: Advancing Medical Representation Learning Through High-Quality Data
ABSTRACT: Despite the growing scale of medical Vision-Language datasets, the impact of
dataset quality on model performance remains under-explored. We introduce
Open-PMC, a high-quality medical dataset from PubMed Central, containing 2.2
million image-text pairs, enriched with image modality annotations, subfigures,
and summarized in-text references. Notably, the in-text references provide
richer medical context, extending beyond the abstract information typically
found in captions. Through extensive experiments, we benchmark Open-PMC against
larger datasets across retrieval and zero-shot classification tasks. Our
results show that dataset quality, not just size, drives significant performance
gains. We complement our benchmark with an in-depth analysis of feature
representation. Our findings highlight the crucial role of data curation
quality in advancing multimodal medical AI. We release Open-PMC, along with the
trained models and our codebase.
|
2503.14378 | Zechen Bai | Zechen Bai, Hai Ci, Mike Zheng Shou | Impossible Videos | 26 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Synthetic videos are nowadays widely used to compensate for the scarcity and
limited diversity of real-world videos. Current synthetic datasets primarily replicate
real-world scenarios, leaving impossible, counterfactual and anti-reality video
concepts underexplored. This work aims to answer two questions: 1) Can today's
video generation models effectively follow prompts to create impossible video
content? 2) Are today's video understanding models good enough for
understanding impossible videos? To this end, we introduce IPV-Bench, a novel
benchmark designed to evaluate and foster progress in video understanding and
generation. IPV-Bench is underpinned by a comprehensive taxonomy, encompassing
4 domains and 14 categories. It features diverse scenes that defy physical,
biological, geographical, or social laws. Based on the taxonomy, a prompt suite
is constructed to evaluate video generation models, challenging their prompt
following and creativity capabilities. In addition, a video benchmark is
curated to assess Video-LLMs on their ability of understanding impossible
videos, which particularly requires reasoning on temporal dynamics and world
knowledge. Comprehensive evaluations reveal limitations and insights for future
directions of video models, paving the way for next-generation video models.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:10:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Bai",
"Zechen",
""
],
[
"Ci",
"Hai",
""
],
[
"Shou",
"Mike Zheng",
""
]
] | TITLE: Impossible Videos
ABSTRACT: Synthetic videos are nowadays widely used to compensate for the
scarcity and limited diversity of real-world videos. Current synthetic
datasets primarily replicate
real-world scenarios, leaving impossible, counterfactual and anti-reality video
concepts underexplored. This work aims to answer two questions: 1) Can today's
video generation models effectively follow prompts to create impossible video
content? 2) Are today's video understanding models good enough for
understanding impossible videos? To this end, we introduce IPV-Bench, a novel
benchmark designed to evaluate and foster progress in video understanding and
generation. IPV-Bench is underpinned by a comprehensive taxonomy, encompassing
4 domains and 14 categories. It features diverse scenes that defy physical,
biological, geographical, or social laws. Based on the taxonomy, a prompt suite
is constructed to evaluate video generation models, challenging their prompt
following and creativity capabilities. In addition, a video benchmark is
curated to assess Video-LLMs on their ability of understanding impossible
videos, which particularly requires reasoning on temporal dynamics and world
knowledge. Comprehensive evaluations reveal limitations and insights for future
directions of video models, paving the way for next-generation video models.
|
2503.14388 | Yekatierina Churakova | Yekatierina Churakova, Mathias Ekstedt | Vexed by VEX tools: Consistency evaluation of container vulnerability
scanners | 22 pages, 1 listing, 18 tables | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a study that analyzed state-of-the-art vulnerability
scanning tools applied to containers. We have focused the work on tools
following the Vulnerability Exploitability eXchange (VEX) format, which has
been introduced to complement Software Bills of Material (SBOM) with security
advisories of known vulnerabilities. Being able to get an accurate
understanding of vulnerabilities found in the dependencies of third-party
software is critical for secure software development and risk analysis.
Rather than attempting the overwhelming challenge of estimating the precise
accuracy and precision of a vulnerability scanner, we have in this study set
out to explore how consistently different tools perform. By doing this, we
aim to
assess the maturity of the VEX tool field as a whole (rather than any
particular tool). We have used the Jaccard and Tversky indices to produce
similarity scores of tool performance for several different datasets created
from container images. Overall, our results show a low level of consistency
among the tools, thus indicating a low level of maturity in VEX tool space. We
have performed a number of experiments to find an explanation for our
results, but they are largely inconclusive and further research is needed to
understand the underlying causes of our findings.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:22:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Churakova",
"Yekatierina",
""
],
[
"Ekstedt",
"Mathias",
""
]
] | TITLE: Vexed by VEX tools: Consistency evaluation of container vulnerability
scanners
ABSTRACT: This paper presents a study that analyzed state-of-the-art vulnerability
scanning tools applied to containers. We have focused the work on tools
following the Vulnerability Exploitability eXchange (VEX) format, which has
been introduced to complement Software Bills of Material (SBOM) with security
advisories of known vulnerabilities. Being able to get an accurate
understanding of vulnerabilities found in the dependencies of third-party
software is critical for secure software development and risk analysis.
Rather than attempting the overwhelming challenge of estimating the precise
accuracy and precision of a vulnerability scanner, we have in this study set
out to explore how consistently different tools perform. By doing this, we
aim to
assess the maturity of the VEX tool field as a whole (rather than any
particular tool). We have used the Jaccard and Tversky indices to produce
similarity scores of tool performance for several different datasets created
from container images. Overall, our results show a low level of consistency
among the tools, thus indicating a low level of maturity in VEX tool space. We
have performed a number of experiments to find an explanation for our
results, but they are largely inconclusive and further research is needed to
understand the underlying causes of our findings.
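For reference, the two similarity indices used above are straightforward to compute over sets of scanner findings; the CVE identifiers below are made up for illustration.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index: intersection over union of two finding sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def tversky(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Tversky index; alpha/beta weight the two one-sided differences
    (alpha = beta = 0.5 gives Dice, alpha = beta = 1.0 gives Jaccard)."""
    inter = len(a & b)
    denom = inter + alpha * len(a - b) + beta * len(b - a)
    return inter / denom if denom else 1.0

# Findings of two hypothetical scanners on the same container image.
scanner_a = {"CVE-2023-0001", "CVE-2023-0002", "CVE-2024-1111"}
scanner_b = {"CVE-2023-0002", "CVE-2024-1111", "CVE-2024-2222"}
print(jaccard(scanner_a, scanner_b))   # 0.5
print(tversky(scanner_a, scanner_b))   # 0.666...
```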
|
2503.14395 | Ruirui Liu | Jing Wang, Ruirui Liu, Yu Lei, Michael J. Baine, Tian Liu, Yang Lei | Weakly Supervised Spatial Implicit Neural Representation Learning for 3D
MRI-Ultrasound Deformable Image Registration in HDR Prostate Brachytherapy | null | null | null | null | physics.med-ph cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: Accurate 3D MRI-ultrasound (US) deformable registration is critical
for real-time guidance in high-dose-rate (HDR) prostate brachytherapy. We
present a weakly supervised spatial implicit neural representation (SINR)
method to address modality differences and pelvic anatomy challenges.
Methods: The framework uses sparse surface supervision from MRI/US
segmentations instead of dense intensity matching. SINR models deformations as
continuous spatial functions, with patient-specific surface priors guiding a
stationary velocity field for biologically plausible deformations. Validation
included 20 public Prostate-MRI-US-Biopsy cases and 10 institutional HDR cases,
evaluated via Dice similarity coefficient (DSC), mean surface distance (MSD),
and 95% Hausdorff distance (HD95).
Results: The proposed method achieved robust registration. For the public
dataset, prostate DSC was $0.93 \pm 0.05$, MSD $0.87 \pm 0.10$ mm, and HD95
$1.58 \pm 0.37$ mm. For the institutional dataset, prostate CTV achieved DSC
$0.88 \pm 0.09$, MSD $1.21 \pm 0.38$ mm, and HD95 $2.09 \pm 1.48$ mm. Bladder
and rectum performance was lower due to ultrasound's limited field of view.
Visual assessments confirmed accurate alignment with minimal discrepancies.
Conclusion: This study introduces a novel weakly supervised SINR-based
approach for 3D MRI-US deformable registration. By leveraging sparse surface
supervision and spatial priors, it achieves accurate, robust, and
computationally efficient registration, enhancing real-time image guidance in
HDR prostate brachytherapy and improving treatment precision.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:30:08 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Jing",
""
],
[
"Liu",
"Ruirui",
""
],
[
"Lei",
"Yu",
""
],
[
"Baine",
"Michael J.",
""
],
[
"Liu",
"Tian",
""
],
[
"Lei",
"Yang",
""
]
] | TITLE: Weakly Supervised Spatial Implicit Neural Representation Learning for 3D
MRI-Ultrasound Deformable Image Registration in HDR Prostate Brachytherapy
ABSTRACT: Purpose: Accurate 3D MRI-ultrasound (US) deformable registration is critical
for real-time guidance in high-dose-rate (HDR) prostate brachytherapy. We
present a weakly supervised spatial implicit neural representation (SINR)
method to address modality differences and pelvic anatomy challenges.
Methods: The framework uses sparse surface supervision from MRI/US
segmentations instead of dense intensity matching. SINR models deformations as
continuous spatial functions, with patient-specific surface priors guiding a
stationary velocity field for biologically plausible deformations. Validation
included 20 public Prostate-MRI-US-Biopsy cases and 10 institutional HDR cases,
evaluated via Dice similarity coefficient (DSC), mean surface distance (MSD),
and 95% Hausdorff distance (HD95).
Results: The proposed method achieved robust registration. For the public
dataset, prostate DSC was $0.93 \pm 0.05$, MSD $0.87 \pm 0.10$ mm, and HD95
$1.58 \pm 0.37$ mm. For the institutional dataset, prostate CTV achieved DSC
$0.88 \pm 0.09$, MSD $1.21 \pm 0.38$ mm, and HD95 $2.09 \pm 1.48$ mm. Bladder
and rectum performance was lower due to ultrasound's limited field of view.
Visual assessments confirmed accurate alignment with minimal discrepancies.
Conclusion: This study introduces a novel weakly supervised SINR-based
approach for 3D MRI-US deformable registration. By leveraging sparse surface
supervision and spatial priors, it achieves accurate, robust, and
computationally efficient registration, enhancing real-time image guidance in
HDR prostate brachytherapy and improving treatment precision.
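A minimal sketch of the primary evaluation metric reported above, the Dice similarity coefficient (DSC), computed on toy 3D binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping toy organ masks on a small 3D grid.
m1 = np.zeros((16, 16, 16), dtype=bool); m1[4:12, 4:12, 4:12] = True
m2 = np.zeros_like(m1);                  m2[5:13, 4:12, 4:12] = True
print(round(dice(m1, m2), 3))   # 0.875
```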
|
2503.14409 | Merijn Floren | Merijn Floren, Jean-Philippe No\"el, Jan Swevers | Inference and Learning of Nonlinear LFR State-space Models | null | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by/4.0/ | Estimating the parameters of nonlinear block-oriented state-space models from
input-output data typically involves solving a highly non-convex optimization
problem, making it susceptible to poor local minima and slow convergence. This
paper presents a computationally efficient initialization method for fully
parametrizing nonlinear linear fractional representation (NL-LFR) models using
periodic data. The approach first infers the latent variables and then
estimates the model parameters, yielding initial estimates that serve as a
starting point for further nonlinear optimization. The proposed method shows
robustness against poor local minima, and achieves a twofold error reduction
compared to the state-of-the-art on a challenging benchmark dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:49:56 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Floren",
"Merijn",
""
],
[
"Noël",
"Jean-Philippe",
""
],
[
"Swevers",
"Jan",
""
]
] | TITLE: Inference and Learning of Nonlinear LFR State-space Models
ABSTRACT: Estimating the parameters of nonlinear block-oriented state-space models from
input-output data typically involves solving a highly non-convex optimization
problem, making it susceptible to poor local minima and slow convergence. This
paper presents a computationally efficient initialization method for fully
parametrizing nonlinear linear fractional representation (NL-LFR) models using
periodic data. The approach first infers the latent variables and then
estimates the model parameters, yielding initial estimates that serve as a
starting point for further nonlinear optimization. The proposed method shows
robustness against poor local minima, and achieves a twofold error reduction
compared to the state-of-the-art on a challenging benchmark dataset.
|
2503.14411 | Siwei Zhang | Siwei Zhang, Yun Xiong, Yateng Tang, Xi Chen, Zian Jia, Zehao Gu,
Jiarong Xu, Jiawei Zhang | Unifying Text Semantics and Graph Structures for Temporal
Text-attributed Graphs with Large Language Models | Submit to ICML2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Temporal graph neural networks (TGNNs) have shown remarkable performance in
temporal graph modeling. However, real-world temporal graphs often possess rich
textual information, giving rise to temporal text-attributed graphs (TTAGs).
Such a combination of dynamic text semantics and evolving graph structures
introduces heightened complexity. Existing TGNNs embed texts statically and
rely heavily on encoding mechanisms that are biased toward structural
information, overlooking the temporal evolution of text semantics and the
essential interplay between semantics and structures for synergistic
reinforcement. To tackle these issues, we present \textbf{{Cross}}, a novel
framework that seamlessly extends existing TGNNs for TTAG modeling. The key
idea is to employ the advanced large language models (LLMs) to extract the
dynamic semantics in text space and then generate expressive representations
unifying both semantics and structures. Specifically, we propose a Temporal
Semantics Extractor in the {Cross} framework, which empowers the LLM to offer
the temporal semantic understanding of node's evolving contexts of textual
neighborhoods, facilitating semantic dynamics. Subsequently, we introduce the
Semantic-structural Co-encoder, which collaborates with the above Extractor for
synthesizing illuminating representations by jointly considering both semantic
and structural information while encouraging their mutual reinforcement.
Extensive experimental results on four public datasets and one practical
industrial dataset demonstrate {Cross}'s significant effectiveness and
robustness.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:50:10 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Siwei",
""
],
[
"Xiong",
"Yun",
""
],
[
"Tang",
"Yateng",
""
],
[
"Chen",
"Xi",
""
],
[
"Jia",
"Zian",
""
],
[
"Gu",
"Zehao",
""
],
[
"Xu",
"Jiarong",
""
],
[
"Zhang",
"Jiawei",
""
]
] | TITLE: Unifying Text Semantics and Graph Structures for Temporal
Text-attributed Graphs with Large Language Models
ABSTRACT: Temporal graph neural networks (TGNNs) have shown remarkable performance in
temporal graph modeling. However, real-world temporal graphs often possess rich
textual information, giving rise to temporal text-attributed graphs (TTAGs).
Such a combination of dynamic text semantics and evolving graph structures
introduces heightened complexity. Existing TGNNs embed texts statically and
rely heavily on encoding mechanisms that disproportionately prioritize structural
information, overlooking the temporal evolution of text semantics and the
essential interplay between semantics and structures for synergistic
reinforcement. To tackle these issues, we present \textbf{{Cross}}, a novel
framework that seamlessly extends existing TGNNs for TTAG modeling. The key
idea is to employ advanced large language models (LLMs) to extract the
dynamic semantics in text space and then generate expressive representations
unifying both semantics and structures. Specifically, we propose a Temporal
Semantics Extractor in the {Cross} framework, which empowers the LLM to offer
a temporal semantic understanding of each node's evolving textual neighborhood
context, capturing semantic dynamics. Subsequently, we introduce the
Semantic-structural Co-encoder, which collaborates with the Extractor to
synthesize expressive representations by jointly considering both semantic
and structural information while encouraging their mutual reinforcement.
Extensive experimental results on four public datasets and one practical
industrial dataset demonstrate {Cross}'s significant effectiveness and
robustness.
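As an illustration of the co-encoder idea described above, the following is a minimal, hypothetical PyTorch sketch that fuses LLM-derived semantic embeddings with TGNN-derived structural embeddings through a learned gate; all module names and dimensions are assumptions, not the authors' implementation.

```python
# Hypothetical semantic-structural fusion: a gate decides, per dimension,
# how much LLM semantics vs. TGNN structure flows into the unified embedding.
import torch
import torch.nn as nn

class SemanticStructuralCoEncoder(nn.Module):
    def __init__(self, sem_dim: int, struct_dim: int, out_dim: int):
        super().__init__()
        self.sem_proj = nn.Linear(sem_dim, out_dim)
        self.struct_proj = nn.Linear(struct_dim, out_dim)
        self.gate = nn.Sequential(nn.Linear(2 * out_dim, out_dim), nn.Sigmoid())

    def forward(self, sem_emb: torch.Tensor, struct_emb: torch.Tensor) -> torch.Tensor:
        s = self.sem_proj(sem_emb)        # LLM-derived temporal semantics
        g = self.struct_proj(struct_emb)  # TGNN-derived structural signal
        alpha = self.gate(torch.cat([s, g], dim=-1))
        return alpha * s + (1 - alpha) * g  # mutually reinforced representation

# Usage: fused = SemanticStructuralCoEncoder(768, 128, 128)(llm_vecs, tgnn_vecs)
```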
|
2503.14421 | Radu Tudor Ionescu | Vlad Hondru, Eduard Hogea, Darian Onchis, Radu Tudor Ionescu | ExDDV: A New Dataset for Explainable Deepfake Detection in Video | null | null | null | null | cs.CV cs.AI cs.CL cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ever-growing realism and quality of generated videos make it
increasingly hard for humans to spot deepfake content, forcing them to rely more
and more on automatic deepfake detectors. However, deepfake detectors are also
prone to errors, and their decisions are not explainable, leaving humans
vulnerable to deepfake-based fraud and misinformation. To this end, we
introduce ExDDV, the first dataset and benchmark for Explainable Deepfake
Detection in Video. ExDDV comprises around 5.4K real and deepfake videos that
are manually annotated with text descriptions (to explain the artifacts) and
clicks (to point out the artifacts). We evaluate a number of vision-language
models on ExDDV, performing experiments with various fine-tuning and in-context
learning strategies. Our results show that text and click supervision are both
required to develop robust explainable models for deepfake videos, which are
able to localize and describe the observed artifacts. Our novel dataset and
code to reproduce the results are available at
https://github.com/vladhondru25/ExDDV.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 16:55:07 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Hondru",
"Vlad",
""
],
[
"Hogea",
"Eduard",
""
],
[
"Onchis",
"Darian",
""
],
[
"Ionescu",
"Radu Tudor",
""
]
] | TITLE: ExDDV: A New Dataset for Explainable Deepfake Detection in Video
ABSTRACT: The ever-growing realism and quality of generated videos make it
increasingly hard for humans to spot deepfake content, forcing them to rely more
and more on automatic deepfake detectors. However, deepfake detectors are also
prone to errors, and their decisions are not explainable, leaving humans
vulnerable to deepfake-based fraud and misinformation. To this end, we
introduce ExDDV, the first dataset and benchmark for Explainable Deepfake
Detection in Video. ExDDV comprises around 5.4K real and deepfake videos that
are manually annotated with text descriptions (to explain the artifacts) and
clicks (to point out the artifacts). We evaluate a number of vision-language
models on ExDDV, performing experiments with various fine-tuning and in-context
learning strategies. Our results show that text and click supervision are both
required to develop robust explainable models for deepfake videos, which are
able to localize and describe the observed artifacts. Our novel dataset and
code to reproduce the results are available at
https://github.com/vladhondru25/ExDDV.
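A hedged sketch of how the two ExDDV supervision signals might be combined during fine-tuning: a token-level loss on the textual explanation plus a regression loss on the annotated click location. The shapes, the loss weighting, and the padding index are illustrative assumptions, not the authors' training code.

```python
# Joint text + click supervision for explainable deepfake detection (sketch).
import torch
import torch.nn.functional as F

def joint_loss(text_logits, text_targets, click_pred, click_target, w_click=1.0):
    # Cross-entropy over explanation tokens; -100 marks padded positions.
    lm_loss = F.cross_entropy(
        text_logits.view(-1, text_logits.size(-1)),
        text_targets.view(-1),
        ignore_index=-100,
    )
    # Regression toward the annotated (x, y) click that points out the artifact.
    loc_loss = F.mse_loss(click_pred, click_target)
    return lm_loss + w_click * loc_loss
```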
|
2503.14443 | Yaroslav Zharov | Aleksandra Eliseeva, Alexander Kovrigin, Ilia Kholkin, Egor Bogomolov,
Yaroslav Zharov | EnvBench: A Benchmark for Automated Environment Setup | Accepted at the DL4Code workshop at ICLR'25 | null | null | null | cs.LG cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in Large Language Models (LLMs) have enabled researchers to
focus on practical repository-level tasks in the software engineering domain. In
this work, we consider a cornerstone task for automating work with software
repositories: environment setup, i.e., the task of configuring a
repository-specific development environment on a system. Existing studies on
environment setup introduce innovative agentic strategies, but their evaluation
is often based on small datasets that may not capture the full range of
configuration challenges encountered in practice. To address this gap, we
introduce a comprehensive environment setup benchmark EnvBench. It encompasses
329 Python and 665 JVM-based (Java, Kotlin) repositories, with a focus on
repositories that present genuine configuration challenges, excluding projects
that can be fully configured by simple deterministic scripts. To enable further
benchmark extension and usage for model tuning, we implement two automatic
metrics: a static analysis check for missing imports in Python and a
compilation check for JVM languages. We demonstrate the applicability of our
benchmark by evaluating three environment setup approaches, including a simple
zero-shot baseline and two agentic workflows, that we test with two powerful
LLM backbones, GPT-4o and GPT-4o-mini. The best approach manages to
successfully configure 6.69% repositories for Python and 29.47% repositories
for JVM, suggesting that EnvBench remains challenging for current approaches.
Our benchmark suite is publicly available at
https://github.com/JetBrains-Research/EnvBench. The dataset and experiment
trajectories are available at https://jb.gg/envbench.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:19:12 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Eliseeva",
"Aleksandra",
""
],
[
"Kovrigin",
"Alexander",
""
],
[
"Kholkin",
"Ilia",
""
],
[
"Bogomolov",
"Egor",
""
],
[
"Zharov",
"Yaroslav",
""
]
] | TITLE: EnvBench: A Benchmark for Automated Environment Setup
ABSTRACT: Recent advances in Large Language Models (LLMs) have enabled researchers to
focus on practical repository-level tasks in the software engineering domain. In
this work, we consider a cornerstone task for automating work with software
repositories: environment setup, i.e., the task of configuring a
repository-specific development environment on a system. Existing studies on
environment setup introduce innovative agentic strategies, but their evaluation
is often based on small datasets that may not capture the full range of
configuration challenges encountered in practice. To address this gap, we
introduce a comprehensive environment setup benchmark EnvBench. It encompasses
329 Python and 665 JVM-based (Java, Kotlin) repositories, with a focus on
repositories that present genuine configuration challenges, excluding projects
that can be fully configured by simple deterministic scripts. To enable further
benchmark extension and usage for model tuning, we implement two automatic
metrics: a static analysis check for missing imports in Python and a
compilation check for JVM languages. We demonstrate the applicability of our
benchmark by evaluating three environment setup approaches, including a simple
zero-shot baseline and two agentic workflows, that we test with two powerful
LLM backbones, GPT-4o and GPT-4o-mini. The best approach manages to
successfully configure 6.69% repositories for Python and 29.47% repositories
for JVM, suggesting that EnvBench remains challenging for current approaches.
Our benchmark suite is publicly available at
https://github.com/JetBrains-Research/EnvBench. The dataset and experiment
trajectories are available at https://jb.gg/envbench.
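To make the Python metric concrete, here is a minimal sketch of a missing-import check in the spirit of the one described: collect top-level imported modules via the AST and flag those that cannot be resolved in the active environment. The benchmark's actual static analysis is more thorough; this is only illustrative.

```python
# Sketch: report top-level imports of a file that the current environment
# cannot resolve -- a proxy for an incompletely configured environment.
import ast
import importlib.util

def missing_imports(path: str) -> set:
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read())
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    missing = set()
    for m in modules:
        try:
            if importlib.util.find_spec(m) is None:
                missing.add(m)
        except (ImportError, ValueError):
            missing.add(m)
    return missing

# Usage: print(missing_imports("some_repo/main.py"))
```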
|
2503.14445 | Stanislaw Szymanowicz | Stanislaw Szymanowicz and Jason Y. Zhang and Pratul Srinivasan and
Ruiqi Gao and Arthur Brussee and Aleksander Holynski and Ricardo
Martin-Brualla and Jonathan T. Barron and Philipp Henzler | Bolt3D: Generating 3D Scenes in Seconds | Project page: https://szymanowiczs.github.io/bolt3d | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | We present a latent diffusion model for fast feed-forward 3D scene
generation. Given one or more images, our model Bolt3D directly samples a 3D
scene representation in less than seven seconds on a single GPU. We achieve
this by leveraging powerful and scalable existing 2D diffusion network
architectures to produce consistent high-fidelity 3D scene representations. To
train this model, we create a large-scale multiview-consistent dataset of 3D
geometry and appearance by applying state-of-the-art dense 3D reconstruction
techniques to existing multiview image datasets. Compared to prior multiview
generative models that require per-scene optimization for 3D reconstruction,
Bolt3D reduces the inference cost by a factor of up to 300.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:24:19 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Szymanowicz",
"Stanislaw",
""
],
[
"Zhang",
"Jason Y.",
""
],
[
"Srinivasan",
"Pratul",
""
],
[
"Gao",
"Ruiqi",
""
],
[
"Brussee",
"Arthur",
""
],
[
"Holynski",
"Aleksander",
""
],
[
"Martin-Brualla",
"Ricardo",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Henzler",
"Philipp",
""
]
] | TITLE: Bolt3D: Generating 3D Scenes in Seconds
ABSTRACT: We present a latent diffusion model for fast feed-forward 3D scene
generation. Given one or more images, our model Bolt3D directly samples a 3D
scene representation in less than seven seconds on a single GPU. We achieve
this by leveraging powerful and scalable existing 2D diffusion network
architectures to produce consistent high-fidelity 3D scene representations. To
train this model, we create a large-scale multiview-consistent dataset of 3D
geometry and appearance by applying state-of-the-art dense 3D reconstruction
techniques to existing multiview image datasets. Compared to prior multiview
generative models that require per-scene optimization for 3D reconstruction,
Bolt3D reduces the inference cost by a factor of up to 300.
|
2503.14459 | Piersilvio De Bartolomeis | Piersilvio De Bartolomeis, Julia Kostin, Javier Abad, Yixin Wang,
Fanny Yang | Doubly robust identification of treatment effects from multiple
environments | Accepted for presentation at the International Conference on Learning
Representations (ICLR) 2025 | null | null | null | stat.ML cs.LG stat.ME | http://creativecommons.org/licenses/by/4.0/ | Practical and ethical constraints often require the use of observational data
for causal inference, particularly in medicine and social sciences. Yet,
observational datasets are prone to confounding, potentially compromising the
validity of causal conclusions. While it is possible to correct for biases if
the underlying causal graph is known, this is rarely feasible in
practical scenarios. A common strategy is to adjust for all available
covariates, yet this approach can yield biased treatment effect estimates,
especially when post-treatment or unobserved variables are present. We propose
RAMEN, an algorithm that produces unbiased treatment effect estimates by
leveraging the heterogeneity of multiple data sources without the need to know
or learn the underlying causal graph. Notably, RAMEN achieves doubly robust
identification: it can identify the treatment effect whenever the causal
parents of the treatment or those of the outcome are observed, and the node
whose parents are observed satisfies an invariance assumption. Empirical
evaluations on synthetic and real-world datasets show that our approach
outperforms existing methods.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:33:10 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"De Bartolomeis",
"Piersilvio",
""
],
[
"Kostin",
"Julia",
""
],
[
"Abad",
"Javier",
""
],
[
"Wang",
"Yixin",
""
],
[
"Yang",
"Fanny",
""
]
] | TITLE: Doubly robust identification of treatment effects from multiple
environments
ABSTRACT: Practical and ethical constraints often require the use of observational data
for causal inference, particularly in medicine and social sciences. Yet,
observational datasets are prone to confounding, potentially compromising the
validity of causal conclusions. While it is possible to correct for biases if
the underlying causal graph is known, this is rarely feasible in
practical scenarios. A common strategy is to adjust for all available
covariates, yet this approach can yield biased treatment effect estimates,
especially when post-treatment or unobserved variables are present. We propose
RAMEN, an algorithm that produces unbiased treatment effect estimates by
leveraging the heterogeneity of multiple data sources without the need to know
or learn the underlying causal graph. Notably, RAMEN achieves doubly robust
identification: it can identify the treatment effect whenever the causal
parents of the treatment or those of the outcome are observed, and the node
whose parents are observed satisfies an invariance assumption. Empirical
evaluations on synthetic and real-world datasets show that our approach
outperforms existing methods.
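The following is a rough, hypothetical numpy sketch of the invariance intuition behind the method: an adjustment set is more credible when the treatment coefficient of a simple outcome regression is stable across environments. This heuristic is illustrative only and is not the RAMEN algorithm.

```python
# Heuristic invariance check across environments (illustrative, not RAMEN).
import numpy as np

def coefficient_instability(envs, features):
    # envs: list of (X, t, y) arrays per environment; features: column indices.
    coefs = []
    for X, t, y in envs:
        Z = np.column_stack([t, X[:, features], np.ones(len(y))])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        coefs.append(beta)
    coefs = np.stack(coefs)
    # Variance of the treatment coefficient across environments:
    # small values suggest an invariant treatment-effect estimate.
    return coefs[:, 0].var()
```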
|
2503.14473 | Jason Han | Jason Han, Nicholas S. DiBrita, Younghyun Cho, Hengrui Luo, Tirthak
Patel | EnQode: Fast Amplitude Embedding for Quantum Machine Learning Using
Classical Data | EnQode will appear in the Proceedings of the Design Automation
Conference (DAC), 2025 | null | null | null | quant-ph cs.ET cs.LG | http://creativecommons.org/licenses/by/4.0/ | Amplitude embedding (AE) is essential in quantum machine learning (QML) for
encoding classical data onto quantum circuits. However, conventional AE methods
suffer from deep, variable-length circuits that introduce high output error due
to extensive gate usage and variable error rates across samples, resulting in
noise-driven inconsistencies that degrade model accuracy. We introduce EnQode,
a fast AE technique based on symbolic representation that addresses these
limitations by clustering dataset samples and solving for cluster mean states
through a low-depth, machine-specific ansatz. Optimized to reduce physical
gates and SWAP operations, EnQode ensures all samples face consistent, low
noise levels by standardizing circuit depth and composition. With over 90%
fidelity in data mapping, EnQode enables robust, high-performance QML on noisy
intermediate-scale quantum (NISQ) devices. Our open-source solution provides a
scalable and efficient alternative for integrating classical data with quantum
models.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:48:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Han",
"Jason",
""
],
[
"DiBrita",
"Nicholas S.",
""
],
[
"Cho",
"Younghyun",
""
],
[
"Luo",
"Hengrui",
""
],
[
"Patel",
"Tirthak",
""
]
] | TITLE: EnQode: Fast Amplitude Embedding for Quantum Machine Learning Using
Classical Data
ABSTRACT: Amplitude embedding (AE) is essential in quantum machine learning (QML) for
encoding classical data onto quantum circuits. However, conventional AE methods
suffer from deep, variable-length circuits that introduce high output error due
to extensive gate usage and variable error rates across samples, resulting in
noise-driven inconsistencies that degrade model accuracy. We introduce EnQode,
a fast AE technique based on symbolic representation that addresses these
limitations by clustering dataset samples and solving for cluster mean states
through a low-depth, machine-specific ansatz. Optimized to reduce physical
gates and SWAP operations, EnQode ensures all samples face consistent, low
noise levels by standardizing circuit depth and composition. With over 90%
fidelity in data mapping, EnQode enables robust, high-performance QML on noisy
intermediate-scale quantum (NISQ) devices. Our open-source solution provides a
scalable and efficient alternative for integrating classical data with quantum
models.
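A minimal sketch of the clustering front end described above, assuming features of power-of-two dimension: k-means groups the samples and each cluster mean is normalized into a valid amplitude vector. The low-depth, machine-specific ansatz synthesis that EnQode performs for these states is not shown.

```python
# Sketch: approximate each sample by its cluster's amplitude-encoded mean state.
import numpy as np
from sklearn.cluster import KMeans

def cluster_mean_states(X: np.ndarray, n_clusters: int):
    # Assumes X has power-of-two feature dimension (pad with zeros otherwise).
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    states = []
    for c in km.cluster_centers_:
        v = c / np.linalg.norm(c)  # unit norm -> valid amplitude vector
        states.append(v)
    # All samples in a cluster share one low-depth circuit, so noise levels
    # are consistent across the dataset.
    return km.labels_, np.stack(states)
```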
|
2503.14476 | Qiying Yu | Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue,
Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole
Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang
Zhu, Jinhua Zhu, Jiaze Chen, Jiangjie Chen, Chengyi Wang, Hongli Yu, Weinan
Dai, Yuxuan Song, Xiangpeng Wei, Hao Zhou, Jingjing Liu, Wei-Ying Ma, Ya-Qin
Zhang, Lin Yan, Mu Qiao, Yonghui Wu, Mingxuan Wang | DAPO: An Open-Source LLM Reinforcement Learning System at Scale | Project Page: https://dapo-sia.github.io/ | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference scaling empowers LLMs with unprecedented reasoning ability, with
reinforcement learning as the core technique to elicit complex reasoning.
However, key technical details of state-of-the-art reasoning LLMs are concealed
(such as in the OpenAI o1 blog and the DeepSeek R1 technical report), so the
community still struggles to reproduce their RL training results. We propose
the $\textbf{D}$ecoupled Clip and $\textbf{D}$ynamic s$\textbf{A}$mpling
$\textbf{P}$olicy $\textbf{O}$ptimization ($\textbf{DAPO}$) algorithm, and
fully open-source a state-of-the-art large-scale RL system that achieves 50
points on AIME 2024 using the Qwen2.5-32B base model. Unlike previous works that
withhold training details, we introduce four key techniques of our algorithm
that make large-scale LLM RL a success. In addition, we open-source our
training code, which is built on the verl framework, along with a carefully
curated and processed dataset. These components of our open-source system
enhance reproducibility and support future research in large-scale LLM RL.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:49:06 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yu",
"Qiying",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Zhu",
"Ruofei",
""
],
[
"Yuan",
"Yufeng",
""
],
[
"Zuo",
"Xiaochen",
""
],
[
"Yue",
"Yu",
""
],
[
"Fan",
"Tiantian",
""
],
[
"Liu",
"Gaohong",
""
],
[
"Liu",
"Lingjun",
""
],
[
"Liu",
"Xin",
""
],
[
"Lin",
"Haibin",
""
],
[
"Lin",
"Zhiqi",
""
],
[
"Ma",
"Bole",
""
],
[
"Sheng",
"Guangming",
""
],
[
"Tong",
"Yuxuan",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhang",
"Mofan",
""
],
[
"Zhang",
"Wang",
""
],
[
"Zhu",
"Hang",
""
],
[
"Zhu",
"Jinhua",
""
],
[
"Chen",
"Jiaze",
""
],
[
"Chen",
"Jiangjie",
""
],
[
"Wang",
"Chengyi",
""
],
[
"Yu",
"Hongli",
""
],
[
"Dai",
"Weinan",
""
],
[
"Song",
"Yuxuan",
""
],
[
"Wei",
"Xiangpeng",
""
],
[
"Zhou",
"Hao",
""
],
[
"Liu",
"Jingjing",
""
],
[
"Ma",
"Wei-Ying",
""
],
[
"Zhang",
"Ya-Qin",
""
],
[
"Yan",
"Lin",
""
],
[
"Qiao",
"Mu",
""
],
[
"Wu",
"Yonghui",
""
],
[
"Wang",
"Mingxuan",
""
]
] | TITLE: DAPO: An Open-Source LLM Reinforcement Learning System at Scale
ABSTRACT: Inference scaling empowers LLMs with unprecedented reasoning ability, with
reinforcement learning as the core technique to elicit complex reasoning.
However, key technical details of state-of-the-art reasoning LLMs are concealed
(such as in the OpenAI o1 blog and the DeepSeek R1 technical report), so the
community still struggles to reproduce their RL training results. We propose
the $\textbf{D}$ecoupled Clip and $\textbf{D}$ynamic s$\textbf{A}$mpling
$\textbf{P}$olicy $\textbf{O}$ptimization ($\textbf{DAPO}$) algorithm, and
fully open-source a state-of-the-art large-scale RL system that achieves 50
points on AIME 2024 using the Qwen2.5-32B base model. Unlike previous works that
withhold training details, we introduce four key techniques of our algorithm
that make large-scale LLM RL a success. In addition, we open-source our
training code, which is built on the verl framework, along with a carefully
curated and processed dataset. These components of our open-source system
enhance reproducibility and support future research in large-scale LLM RL.
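As a hedged illustration of the "decoupled clip" ingredient named in the algorithm, the sketch below replaces PPO's single clipping range with independent lower and upper ranges; the epsilon values are placeholders, and the full DAPO recipe (dynamic sampling and the remaining techniques) is not reproduced here.

```python
# Asymmetric ("decoupled") clipping sketch for a PPO-style policy loss.
import torch

def decoupled_clip_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.28):
    ratio = torch.exp(logp_new - logp_old)
    # Unlike vanilla PPO, the lower and upper clip ranges are set independently.
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Standard pessimistic objective, with the asymmetric clipping applied.
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```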
|
2503.14482 | Yulin Pan | Yulin Pan, Xiangteng He, Chaojie Mao, Zhen Han, Zeyinzi Jiang,
Jingfeng Zhang, Yu Liu | ICE-Bench: A Unified and Comprehensive Benchmark for Image Creating and
Editing | 17 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image generation has witnessed significant advancements in the past few
years. However, evaluating the performance of image generation models remains a
formidable challenge. In this paper, we propose ICE-Bench, a unified and
comprehensive benchmark designed to rigorously assess image generation models.
Its comprehensiveness can be summarized in the following key features: (1)
Coarse-to-Fine Tasks: We systematically deconstruct image generation into four
task categories: No-ref/Ref Image Creating/Editing, based on the presence or
absence of source images and reference images. We further decompose these into
31 fine-grained tasks covering a broad spectrum of image generation
requirements, culminating in a comprehensive benchmark. (2) Multi-dimensional
Metrics: The evaluation framework assesses image generation capabilities across
6 dimensions: aesthetic quality, imaging quality, prompt following, source
consistency, reference consistency, and controllability. 11 metrics are
introduced to support the multi-dimensional evaluation. Notably, we introduce
VLLM-QA, an innovative metric designed to assess the success of image editing
by leveraging large models. (3) Hybrid Data: The data comes from real scenes
and virtual generation, which effectively improves data diversity and
alleviates the bias problem in model evaluation. Through ICE-Bench, we conduct
a thorough analysis of existing generation models, revealing both the
challenging nature of our benchmark and the gap between current model
capabilities and real-world generation requirements. To foster further
advancements in the field, we will open-source ICE-Bench, including its
dataset, evaluation code, and models, thereby providing a valuable resource for
the research community.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:53:29 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Pan",
"Yulin",
""
],
[
"He",
"Xiangteng",
""
],
[
"Mao",
"Chaojie",
""
],
[
"Han",
"Zhen",
""
],
[
"Jiang",
"Zeyinzi",
""
],
[
"Zhang",
"Jingfeng",
""
],
[
"Liu",
"Yu",
""
]
] | TITLE: ICE-Bench: A Unified and Comprehensive Benchmark for Image Creating and
Editing
ABSTRACT: Image generation has witnessed significant advancements in the past few
years. However, evaluating the performance of image generation models remains a
formidable challenge. In this paper, we propose ICE-Bench, a unified and
comprehensive benchmark designed to rigorously assess image generation models.
Its comprehensiveness can be summarized in the following key features: (1)
Coarse-to-Fine Tasks: We systematically deconstruct image generation into four
task categories: No-ref/Ref Image Creating/Editing, based on the presence or
absence of source images and reference images. We further decompose these into
31 fine-grained tasks covering a broad spectrum of image generation
requirements, culminating in a comprehensive benchmark. (2) Multi-dimensional
Metrics: The evaluation framework assesses image generation capabilities across
6 dimensions: aesthetic quality, imaging quality, prompt following, source
consistency, reference consistency, and controllability. 11 metrics are
introduced to support the multi-dimensional evaluation. Notably, we introduce
VLLM-QA, an innovative metric designed to assess the success of image editing
by leveraging large models. (3) Hybrid Data: The data comes from real scenes
and virtual generation, which effectively improves data diversity and
alleviates the bias problem in model evaluation. Through ICE-Bench, we conduct
a thorough analysis of existing generation models, revealing both the
challenging nature of our benchmark and the gap between current model
capabilities and real-world generation requirements. To foster further
advancements in the field, we will open-source ICE-Bench, including its
dataset, evaluation code, and models, thereby providing a valuable resource for
the research community.
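For concreteness, a small sketch of how per-dimension scores along the six evaluation dimensions listed above could be aggregated into a report; the equal-weight aggregation rule is an assumption, not part of ICE-Bench.

```python
# Aggregate per-dimension scores into an overall report (illustrative).
DIMENSIONS = ["aesthetic_quality", "imaging_quality", "prompt_following",
              "source_consistency", "reference_consistency", "controllability"]

def score_report(per_dim_scores: dict) -> dict:
    missing = set(DIMENSIONS) - per_dim_scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    # Equal weighting across dimensions is an assumption for this sketch.
    overall = sum(per_dim_scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return {**per_dim_scores, "overall": overall}
```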
|
2503.14483 | Haoyu Guo | Haoyu Guo, He Zhu, Sida Peng, Haotong Lin, Yunzhi Yan, Tao Xie,
Wenguan Wang, Xiaowei Zhou, Hujun Bao | Multi-view Reconstruction via SfM-guided Monocular Depth Estimation | CVPR 2025. Project page: https://zju3dv.github.io/murre/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a new method for multi-view geometric
reconstruction. In recent years, large vision models have rapidly developed,
performing excellently across various tasks and demonstrating remarkable
generalization capabilities. Some works use large vision models for monocular
depth estimation, which have been applied to facilitate multi-view
reconstruction tasks in an indirect manner. Due to the ambiguity of the
monocular depth estimation task, the estimated depth values are usually not
accurate enough, limiting their utility in aiding multi-view reconstruction. We
propose to incorporate SfM information, a strong multi-view prior, into the
depth estimation process, thus enhancing the quality of depth prediction and
enabling their direct application in multi-view geometric reconstruction.
Experimental results on public real-world datasets show that our method
significantly improves the quality of depth estimation compared to previous
monocular depth estimation works. Additionally, we evaluate the reconstruction
quality of our approach in various types of scenes including indoor,
streetscape, and aerial views, surpassing state-of-the-art MVS methods. The
code and supplementary materials are available at
https://zju3dv.github.io/murre/ .
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 17:54:06 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Guo",
"Haoyu",
""
],
[
"Zhu",
"He",
""
],
[
"Peng",
"Sida",
""
],
[
"Lin",
"Haotong",
""
],
[
"Yan",
"Yunzhi",
""
],
[
"Xie",
"Tao",
""
],
[
"Wang",
"Wenguan",
""
],
[
"Zhou",
"Xiaowei",
""
],
[
"Bao",
"Hujun",
""
]
] | TITLE: Multi-view Reconstruction via SfM-guided Monocular Depth Estimation
ABSTRACT: In this paper, we present a new method for multi-view geometric
reconstruction. In recent years, large vision models have rapidly developed,
performing excellently across various tasks and demonstrating remarkable
generalization capabilities. Some works use large vision models for monocular
depth estimation, which have been applied to facilitate multi-view
reconstruction tasks in an indirect manner. Due to the ambiguity of the
monocular depth estimation task, the estimated depth values are usually not
accurate enough, limiting their utility in aiding multi-view reconstruction. We
propose to incorporate SfM information, a strong multi-view prior, into the
depth estimation process, thus enhancing the quality of depth prediction and
enabling their direct application in multi-view geometric reconstruction.
Experimental results on public real-world datasets show that our method
significantly improves the quality of depth estimation compared to previous
monocular depth estimation works. Additionally, we evaluate the reconstruction
quality of our approach in various types of scenes including indoor,
streetscape, and aerial views, surpassing state-of-the-art MVS methods. The
code and supplementary materials are available at
https://zju3dv.github.io/murre/ .
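A simplified sketch of one way to inject SfM information into a depth network, consistent with the high-level description: rasterize the sparse SfM points into a depth channel with a validity mask, stacked with the RGB image. The paper's actual conditioning mechanism may differ.

```python
# Build a 5-channel network input: RGB + sparse SfM depth + validity mask.
import torch

def build_depth_input(rgb: torch.Tensor, sfm_uv: torch.Tensor, sfm_depth: torch.Tensor):
    # rgb: (3, H, W); sfm_uv: (N, 2) pixel coords; sfm_depth: (N,) depths.
    _, H, W = rgb.shape
    sparse = torch.zeros(1, H, W)
    mask = torch.zeros(1, H, W)
    u, v = sfm_uv[:, 0].long(), sfm_uv[:, 1].long()
    sparse[0, v, u] = sfm_depth  # zeros wherever no SfM point projects
    mask[0, v, u] = 1.0
    return torch.cat([rgb, sparse, mask], dim=0)  # (5, H, W) conditioned input
```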
|
1902.02595 | Antonino Sabetta | Serena E. Ponta, Henrik Plate, Antonino Sabetta, Michele Bezzi,
C\'edric Dangremont | A Manually-Curated Dataset of Fixes to Vulnerabilities of Open-Source
Software | This is a pre-print version of the paper that appears in the
proceedings of The 16th International Conference on Mining Software
Repositories (MSR), Data Showcase track | Proceedings of The 16th International Conference on Mining
Software Repositories (Data Showcase track), 2019 | 10.1109/MSR.2019.0006 | null | cs.SE cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advancing our understanding of software vulnerabilities, automating their
identification, the analysis of their impact, and ultimately their mitigation
is necessary to enable the development of software that is more secure. While
operating a vulnerability assessment tool that we developed and that is
currently used by hundreds of development units at SAP, we manually collected
and curated a dataset of vulnerabilities of open-source software and the
commits fixing them. The data was obtained both from the National Vulnerability
Database (NVD) and from project-specific Web resources that we monitor on a
continuous basis. From that data, we extracted a dataset that maps 624 publicly
disclosed vulnerabilities affecting 205 distinct open-source Java projects,
used in SAP products or internal tools, onto the 1282 commits that fix them.
Out of 624 vulnerabilities, 29 do not have a CVE identifier at all and 46,
which do have a CVE identifier assigned by a numbering authority, are not
available in the NVD yet. The dataset is released under an open-source license,
together with supporting scripts that allow researchers to automatically
retrieve the actual content of the commits from the corresponding repositories
and to augment the attributes available for each instance. Also, these scripts
allow researchers to complement the dataset with additional instances that are not security
fixes (which is useful, for example, in machine learning applications). Our
dataset has been successfully used to train classifiers that could
automatically identify security-relevant commits in code repositories. The
release of this dataset and the supporting code as open-source will allow
future research to be based on data of industrial relevance; also, it
represents a concrete step towards making the maintenance of this dataset a
shared effort involving open-source communities, academia, and the industry.
| [
{
"version": "v1",
"created": "Thu, 7 Feb 2019 12:47:13 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Feb 2019 09:06:58 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Mar 2019 10:33:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ponta",
"Serena E.",
""
],
[
"Plate",
"Henrik",
""
],
[
"Sabetta",
"Antonino",
""
],
[
"Bezzi",
"Michele",
""
],
[
"Dangremont",
"Cédric",
""
]
] | TITLE: A Manually-Curated Dataset of Fixes to Vulnerabilities of Open-Source
Software
ABSTRACT: Advancing our understanding of software vulnerabilities, automating their
identification, the analysis of their impact, and ultimately their mitigation
is necessary to enable the development of software that is more secure. While
operating a vulnerability assessment tool that we developed and that is
currently used by hundreds of development units at SAP, we manually collected
and curated a dataset of vulnerabilities of open-source software and the
commits fixing them. The data was obtained both from the National Vulnerability
Database (NVD) and from project-specific Web resources that we monitor on a
continuous basis. From that data, we extracted a dataset that maps 624 publicly
disclosed vulnerabilities affecting 205 distinct open-source Java projects,
used in SAP products or internal tools, onto the 1282 commits that fix them.
Out of 624 vulnerabilities, 29 do not have a CVE identifier at all and 46,
which do have a CVE identifier assigned by a numbering authority, are not
available in the NVD yet. The dataset is released under an open-source license,
together with supporting scripts that allow researchers to automatically
retrieve the actual content of the commits from the corresponding repositories
and to augment the attributes available for each instance. Also, these scripts
allow to complement the dataset with additional instances that are not security
fixes (which is useful, for example, in machine learning applications). Our
dataset has been successfully used to train classifiers that could
automatically identify security-relevant commits in code repositories. The
release of this dataset and the supporting code as open-source will allow
future research to be based on data of industrial relevance; also, it
represents a concrete step towards making the maintenance of this dataset a
shared effort involving open-source communities, academia, and the industry.
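A hypothetical sketch, in the spirit of the supporting scripts described, of retrieving the content of a fix commit given a repository URL and commit id from the dataset; the GitPython calls are standard, but the paths and argument names here are assumptions.

```python
# Fetch the actual content of a vulnerability-fixing commit (sketch).
import git  # pip install GitPython

def fetch_fix_commit(repo_url: str, sha: str, workdir: str):
    repo = git.Repo.clone_from(repo_url, workdir)   # clone once per repository
    commit = repo.commit(sha)
    patch = repo.git.show(sha)                      # full patch text of the fix
    return patch, list(commit.stats.files)          # diff plus touched file paths

# patch, files = fetch_fix_commit(
#     "https://github.com/apache/commons-compress", "<fix-commit-sha>", "/tmp/cc")
```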
|
1906.08256 | Bala Krishnamoorthy | Dustin L. Arendt, Matthew Broussard, Bala Krishnamoorthy, Nathaniel
Saul, Amber Thrall | Steinhaus Filtration and Stable Paths in the Mapper | Proof of stability added; to appear in SoCG 2025 | null | null | null | cs.LG cs.CG math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define a new filtration called the Steinhaus filtration built from a
single cover based on a generalized Steinhaus distance, a generalization of
Jaccard distance. The homology persistence module of a Steinhaus filtration
with infinitely many cover elements may not be $q$-tame, even when the covers
are in a totally bounded space. While this may pose a challenge for deriving
stability results, we show that the Steinhaus filtration is stable when the
cover is finite. We show that while the \v{C}ech and Steinhaus filtrations are
not isomorphic in general, they are isomorphic for a finite point set in
dimension one. Furthermore, the VR filtration completely determines the
$1$-skeleton of the Steinhaus filtration in arbitrary dimension.
We then develop a language and theory for stable paths within the Steinhaus
filtration. We demonstrate how the framework can be applied to several
applications where a standard metric may not be defined but a cover is readily
available. We introduce a new perspective for modeling recommendation system
datasets. As an example, we look at a movies dataset and find that the stable
paths identified in our framework represent a sequence of movies constituting a
gentle transition and ordering from one genre to another.
For explainable machine learning, we apply the Mapper algorithm for model
induction by building a filtration from a single Mapper complex, and provide
explanations in the form of stable paths between subpopulations. For
illustration, we build a Mapper complex from a supervised machine learning
model trained on the FashionMNIST dataset. Stable paths in the Steinhaus
filtration provide improved explanations of relationships between
subpopulations of images.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2019 05:02:42 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Sep 2020 09:23:47 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 18:18:17 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Arendt",
"Dustin L.",
""
],
[
"Broussard",
"Matthew",
""
],
[
"Krishnamoorthy",
"Bala",
""
],
[
"Saul",
"Nathaniel",
""
],
[
"Thrall",
"Amber",
""
]
] | TITLE: Steinhaus Filtration and Stable Paths in the Mapper
ABSTRACT: We define a new filtration called the Steinhaus filtration built from a
single cover based on a generalized Steinhaus distance, a generalization of
Jaccard distance. The homology persistence module of a Steinhaus filtration
with infinitely many cover elements may not be $q$-tame, even when the covers
are in a totally bounded space. While this may pose a challenge for deriving
stability results, we show that the Steinhaus filtration is stable when the
cover is finite. We show that while the \v{C}ech and Steinhaus filtrations are
not isomorphic in general, they are isomorphic for a finite point set in
dimension one. Furthermore, the VR filtration completely determines the
$1$-skeleton of the Steinhaus filtration in arbitrary dimension.
We then develop a language and theory for stable paths within the Steinhaus
filtration. We demonstrate how the framework can be applied to several
applications where a standard metric may not be defined but a cover is readily
available. We introduce a new perspective for modeling recommendation system
datasets. As an example, we look at a movies dataset and find that the stable
paths identified in our framework represent a sequence of movies constituting a
gentle transition and ordering from one genre to another.
For explainable machine learning, we apply the Mapper algorithm for model
induction by building a filtration from a single Mapper complex, and provide
explanations in the form of stable paths between subpopulations. For
illustration, we build a Mapper complex from a supervised machine learning
model trained on the FashionMNIST dataset. Stable paths in the Steinhaus
filtration provide improved explanations of relationships between
subpopulations of images.
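For the finite case discussed above, the Jaccard specialization of the generalized Steinhaus distance is easy to state; the sketch below computes it for finite cover elements, which is the pairwise weight from which the finite Steinhaus filtration is built. The fully general measure-based distance is not reproduced here.

```python
# Jaccard special case of the Steinhaus distance on finite cover elements.
def steinhaus_jaccard(A: set, B: set) -> float:
    union = A | B
    if not union:
        return 0.0
    return 1.0 - len(A & B) / len(union)

# Cover elements at small distance enter the filtration early, e.g.
# steinhaus_jaccard({1, 2, 3}, {2, 3, 4}) == 0.5
```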
|
1906.10969 | Mirco Schoenfeld | Mirco Schoenfeld, Steffen Eckhard, Ronny Patz, Hilde van Meegdenburg,
Antonio Pires | The UN Security Council debates 1992-2023 | The UN Security Council Debates corpus is available online at
https://doi.org/10.7910/DVN/KGVSYH | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an updated dataset containing 106,302 speeches held in
the public meetings of the UN Security Council (UNSC) between 1992 and 2023.
The dataset is based on publicly available meeting transcripts with the S/PV
document symbol and includes the full substance of individual speeches as well
as automatically extracted and manually corrected metadata on the speaker, the
position of the speech in the sequence of speeches of a meeting, and the date
of the speech. After contextualizing the dataset in recent research on the
UNSC, the paper presents descriptive statistics on UNSC meetings and speeches
that characterize the period covered by the dataset. Data highlight the
extensive presence of the UN bureaucracy in UNSC meetings as well as an
emerging trend towards more lengthy open UNSC debates. These open debates cover
key issues that have emerged only during the period that is covered by the
dataset, for example the debates relating to Women, Peace and Security or
Climate-related Disasters. The corpus is available online:
https://doi.org/10.7910/DVN/KGVSYH
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2019 10:57:34 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Oct 2019 12:30:38 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 10:07:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Schoenfeld",
"Mirco",
""
],
[
"Eckhard",
"Steffen",
""
],
[
"Patz",
"Ronny",
""
],
[
"van Meegdenburg",
"Hilde",
""
],
[
"Pires",
"Antonio",
""
]
] | TITLE: The UN Security Council debates 1992-2023
ABSTRACT: This paper presents an updated dataset containing 106,302 speeches held in
the public meetings of the UN Security Council (UNSC) between 1992 and 2023.
The dataset is based on publicly available meeting transcripts with the S/PV
document symbol and includes the full substance of individual speeches as well
as automatically extracted and manually corrected metadata on the speaker, the
position of the speech in the sequence of speeches of a meeting, and the date
of the speech. After contextualizing the dataset in recent research on the
UNSC, the paper presents descriptive statistics on UNSC meetings and speeches
that characterize the period covered by the dataset. Data highlight the
extensive presence of the UN bureaucracy in UNSC meetings as well as an
emerging trend towards more lengthy open UNSC debates. These open debates cover
key issues that have emerged only during the period that is covered by the
dataset, for example the debates relating to Women, Peace and Security or
Climate-related Disasters. The corpus is available online:
https://doi.org/10.7910/DVN/KGVSYH
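A small illustrative snippet for exploring the corpus once downloaded from the Dataverse link above; the file name and column names are assumptions rather than the published schema.

```python
# Count UNSC speeches per year, echoing the paper's descriptive statistics.
import pandas as pd

speeches = pd.read_csv("unsc_speeches.csv", parse_dates=["date"])  # assumed schema
per_year = speeches.groupby(speeches["date"].dt.year).size()
print(per_year.tail())
```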
|
1911.07308 | Tsu-Jui Fu | Tsu-Jui Fu, Xin Eric Wang, Matthew Peterson, Scott Grafton, Miguel
Eckstein, William Yang Wang | Counterfactual Vision-and-Language Navigation via Adversarial Path
Sampling | ECCV'20 (Spotlight) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-and-Language Navigation (VLN) is a task where agents must decide how
to move through a 3D environment to reach a goal by grounding natural language
instructions to the visual surroundings. One of the problems of the VLN task is
data scarcity since it is difficult to collect enough navigation paths with
human-annotated instructions for interactive environments. In this paper, we
explore the use of counterfactual thinking as a human-inspired data
augmentation method that results in robust models. Counterfactual thinking is a
concept that describes the human propensity to create possible alternatives to
life events that have already occurred. We propose an adversarial-driven
counterfactual reasoning model that can consider effective conditions instead
of low-quality augmented data. In particular, we present a model-agnostic
adversarial path sampler (APS) that learns to sample challenging paths that
force the navigator to improve based on the navigation performance. APS also
serves to do pre-exploration of unseen environments to strengthen the model's
ability to generalize. We evaluate the influence of APS on the performance of
different VLN baseline models using the room-to-room dataset (R2R). The results
show that the adversarial training process with our proposed APS benefits VLN
models under both seen and unseen environments. And the pre-exploration process
can further gain additional improvements under unseen environments.
| [
{
"version": "v1",
"created": "Sun, 17 Nov 2019 18:02:51 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jul 2020 15:46:58 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jul 2020 00:18:45 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 16:53:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Fu",
"Tsu-Jui",
""
],
[
"Wang",
"Xin Eric",
""
],
[
"Peterson",
"Matthew",
""
],
[
"Grafton",
"Scott",
""
],
[
"Eckstein",
"Miguel",
""
],
[
"Wang",
"William Yang",
""
]
] | TITLE: Counterfactual Vision-and-Language Navigation via Adversarial Path
Sampling
ABSTRACT: Vision-and-Language Navigation (VLN) is a task where agents must decide how
to move through a 3D environment to reach a goal by grounding natural language
instructions to the visual surroundings. One of the problems of the VLN task is
data scarcity since it is difficult to collect enough navigation paths with
human-annotated instructions for interactive environments. In this paper, we
explore the use of counterfactual thinking as a human-inspired data
augmentation method that results in robust models. Counterfactual thinking is a
concept that describes the human propensity to create possible alternatives to
life events that have already occurred. We propose an adversarial-driven
counterfactual reasoning model that can consider effective conditions instead
of low-quality augmented data. In particular, we present a model-agnostic
adversarial path sampler (APS) that learns to sample challenging paths that
force the navigator to improve based on the navigation performance. APS also
serves to do pre-exploration of unseen environments to strengthen the model's
ability to generalize. We evaluate the influence of APS on the performance of
different VLN baseline models using the room-to-room dataset (R2R). The results
show that the adversarial training process with our proposed APS benefits VLN
models under both seen and unseen environments. Moreover, the pre-exploration
process yields further improvements in unseen environments.
|
2009.09566 | Tsu-Jui Fu | Tsu-Jui Fu, Xin Eric Wang, Scott Grafton, Miguel Eckstein, William
Yang Wang | SSCR: Iterative Language-Based Image Editing via Self-Supervised
Counterfactual Reasoning | EMNLP'20 (Oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Iterative Language-Based Image Editing (ILBIE) tasks follow iterative
instructions to edit images step by step. Data scarcity is a significant issue
for ILBIE as it is challenging to collect large-scale examples of images before
and after instruction-based changes. However, humans still accomplish these
editing tasks even when presented with an unfamiliar image-instruction pair.
Such ability results from counterfactual thinking and the ability to think
about alternatives to events that have happened already. In this paper, we
introduce a Self-Supervised Counterfactual Reasoning (SSCR) framework that
incorporates counterfactual thinking to overcome data scarcity. SSCR allows the
model to consider out-of-distribution instructions paired with previous images.
With the help of cross-task consistency (CTC), we train these counterfactual
instructions in a self-supervised scenario. Extensive results show that SSCR
improves the correctness of ILBIE in terms of both object identity and
position, establishing a new state of the art (SOTA) on two IBLIE datasets
(i-CLEVR and CoDraw). Even with only 50% of the training data, SSCR achieves a
comparable result to using complete data.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2020 01:45:58 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Sep 2020 00:24:25 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 16:55:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Fu",
"Tsu-Jui",
""
],
[
"Wang",
"Xin Eric",
""
],
[
"Grafton",
"Scott",
""
],
[
"Eckstein",
"Miguel",
""
],
[
"Wang",
"William Yang",
""
]
] | TITLE: SSCR: Iterative Language-Based Image Editing via Self-Supervised
Counterfactual Reasoning
ABSTRACT: Iterative Language-Based Image Editing (ILBIE) tasks follow iterative
instructions to edit images step by step. Data scarcity is a significant issue
for ILBIE as it is challenging to collect large-scale examples of images before
and after instruction-based changes. However, humans still accomplish these
editing tasks even when presented with an unfamiliar image-instruction pair.
Such ability results from counterfactual thinking and the ability to think
about alternatives to events that have happened already. In this paper, we
introduce a Self-Supervised Counterfactual Reasoning (SSCR) framework that
incorporates counterfactual thinking to overcome data scarcity. SSCR allows the
model to consider out-of-distribution instructions paired with previous images.
With the help of cross-task consistency (CTC), we train these counterfactual
instructions in a self-supervised scenario. Extensive results show that SSCR
improves the correctness of ILBIE in terms of both object identity and
position, establishing a new state of the art (SOTA) on two ILBIE datasets
(i-CLEVR and CoDraw). Even with only 50% of the training data, SSCR achieves a
comparable result to using complete data.
|
2111.09304 | Archismita Dalal | Archismita Dalal, Mohsen Bagherimehrab and Barry C. Sanders | Quantum-Assisted Support Vector Regression | 15 pages, 5 figures | Quantum Inf Process 24, 82 (2025) | 10.1007/s11128-025-04674-0 | null | quant-ph cs.LG | http://creativecommons.org/licenses/by/4.0/ | A popular machine-learning model for regression tasks, including stock-market
prediction, weather forecasting and real-estate pricing, is the classical
support vector regression (SVR). However, a practically realisable quantum SVR
remains to be formulated. We devise annealing-based algorithms, namely
simulated annealing and quantum-classical hybrid annealing, for training two SVR models and compare
their empirical performances against the SVR implementation of Python's
scikit-learn package for facial-landmark detection (FLD), a particular use case
for SVR. Our method is to derive a quadratic-unconstrained-binary formulation
for the optimisation problem used for training an SVR model and solve this
problem using annealing. Using D-Wave's hybrid solver, we construct a
quantum-assisted SVR model, thereby demonstrating a slight advantage over
classical models regarding FLD accuracy. Furthermore, we observe that
annealing-based SVR models predict landmarks with lower variances compared to
the SVR models trained by gradient-based methods. Our work is a
proof-of-concept example for applying quantum-assisted SVR to a
supervised-learning task with a small training dataset.
| [
{
"version": "v1",
"created": "Wed, 17 Nov 2021 18:57:10 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 19:33:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Dalal",
"Archismita",
""
],
[
"Bagherimehrab",
"Mohsen",
""
],
[
"Sanders",
"Barry C.",
""
]
] | TITLE: Quantum-Assisted Support Vector Regression
ABSTRACT: A popular machine-learning model for regression tasks, including stock-market
prediction, weather forecasting and real-estate pricing, is the classical
support vector regression (SVR). However, a practically realisable quantum SVR
remains to be formulated. We devise annealing-based algorithms, namely
simulated annealing and quantum-classical hybrid annealing, for training two SVR models and compare
their empirical performances against the SVR implementation of Python's
scikit-learn package for facial-landmark detection (FLD), a particular use case
for SVR. Our method is to derive a quadratic-unconstrained-binary formulation
for the optimisation problem used for training an SVR model and solve this
problem using annealing. Using D-Wave's hybrid solver, we construct a
quantum-assisted SVR model, thereby demonstrating a slight advantage over
classical models regarding FLD accuracy. Furthermore, we observe that
annealing-based SVR models predict landmarks with lower variances compared to
the SVR models trained by gradient-based methods. Our work is a
proof-of-concept example for applying quantum-assisted SVR to a
supervised-learning task with a small training dataset.
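To illustrate the quadratic-unconstrained-binary idea, here is a deliberately simplified numpy sketch: each non-negative coefficient is expanded in B binary bits, turning a quadratic training objective of the form (1/2)a^T K a - y^T a into a QUBO matrix. The paper's full SVR formulation (epsilon-insensitive loss, bias, regularization) is richer than this toy version.

```python
# Toy QUBO construction for an annealer: binary-encode each coefficient
# with B bits and expand the quadratic objective in bit variables.
import numpy as np

def svr_qubo(K: np.ndarray, y: np.ndarray, B: int = 3) -> np.ndarray:
    n = len(y)
    w = 2.0 ** np.arange(B)                  # bit weights per coefficient
    Q = np.zeros((n * B, n * B))
    for i in range(n):
        for j in range(n):
            for a in range(B):
                for b in range(B):
                    # quadratic term (1/2) a^T K a in bit variables
                    Q[i * B + a, j * B + b] += 0.5 * w[a] * w[b] * K[i, j]
    for i in range(n):
        for a in range(B):
            Q[i * B + a, i * B + a] -= w[a] * y[i]   # linear term -y^T a
    return Q  # minimize x^T Q x over x in {0, 1}^(n*B)
```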
|
2202.01602 | Shahin Jabbari | Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, Shahin Jabbari,
Himabindu Lakkaraju | The Disagreement Problem in Explainable Machine Learning: A
Practitioner's Perspective | Published in Transactions on Machine Learning Research (TMLR). Added
a misplaced reference | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | As various post hoc explanation methods are increasingly being leveraged to
explain complex models in high-stakes settings, it becomes critical to develop
a deeper understanding of if and when the explanations output by these methods
disagree with each other, and how such disagreements are resolved in practice.
However, there is little to no research that provides answers to these critical
questions. In this work, we introduce and study the disagreement problem in
explainable machine learning. More specifically, we formalize the notion of
disagreement between explanations, analyze how often such disagreements occur
in practice, and how practitioners resolve these disagreements. We first
conduct interviews with data scientists to understand what constitutes
disagreement between explanations generated by different methods for the same
model prediction and introduce a novel quantitative framework to formalize this
understanding. We then leverage this framework to carry out a rigorous
empirical analysis with four real-world datasets, six state-of-the-art post hoc
explanation methods, and six different predictive models, to measure the extent
of disagreement between the explanations generated by various popular
explanation methods. In addition, we carry out an online user study with data
scientists to understand how they resolve the aforementioned disagreements. Our
results indicate that (1) state-of-the-art explanation methods often disagree
in terms of the explanations they output, and (2) machine learning
practitioners often employ ad hoc heuristics when resolving such disagreements.
These findings suggest that practitioners may be relying on misleading
explanations when making consequential decisions. They also underscore the
importance of developing principled frameworks for effectively evaluating and
comparing explanations output by various explanation techniques.
| [
{
"version": "v1",
"created": "Thu, 3 Feb 2022 14:19:23 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 01:46:00 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Feb 2022 07:24:09 GMT"
},
{
"version": "v4",
"created": "Mon, 8 Jul 2024 12:11:38 GMT"
},
{
"version": "v5",
"created": "Mon, 17 Mar 2025 15:00:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Krishna",
"Satyapriya",
""
],
[
"Han",
"Tessa",
""
],
[
"Gu",
"Alex",
""
],
[
"Wu",
"Steven",
""
],
[
"Jabbari",
"Shahin",
""
],
[
"Lakkaraju",
"Himabindu",
""
]
] | TITLE: The Disagreement Problem in Explainable Machine Learning: A
Practitioner's Perspective
ABSTRACT: As various post hoc explanation methods are increasingly being leveraged to
explain complex models in high-stakes settings, it becomes critical to develop
a deeper understanding of if and when the explanations output by these methods
disagree with each other, and how such disagreements are resolved in practice.
However, there is little to no research that provides answers to these critical
questions. In this work, we introduce and study the disagreement problem in
explainable machine learning. More specifically, we formalize the notion of
disagreement between explanations, analyze how often such disagreements occur
in practice, and how practitioners resolve these disagreements. We first
conduct interviews with data scientists to understand what constitutes
disagreement between explanations generated by different methods for the same
model prediction and introduce a novel quantitative framework to formalize this
understanding. We then leverage this framework to carry out a rigorous
empirical analysis with four real-world datasets, six state-of-the-art post hoc
explanation methods, and six different predictive models, to measure the extent
of disagreement between the explanations generated by various popular
explanation methods. In addition, we carry out an online user study with data
scientists to understand how they resolve the aforementioned disagreements. Our
results indicate that (1) state-of-the-art explanation methods often disagree
in terms of the explanations they output, and (2) machine learning
practitioners often employ ad hoc heuristics when resolving such disagreements.
These findings suggest that practitioners may be relying on misleading
explanations when making consequential decisions. They also underscore the
importance of developing principled frameworks for effectively evaluating and
comparing explanations output by various explanation techniques.
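One metric from the paper's quantitative framework, feature agreement (the fraction of top-k features shared by two explanations), is simple enough to sketch; the argument names and the use of absolute attributions are illustrative choices.

```python
# Top-k feature agreement between two attribution vectors.
import numpy as np

def feature_agreement(attr_a: np.ndarray, attr_b: np.ndarray, k: int = 5) -> float:
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

# feature_agreement(shap_values, lime_values, k=5) -> value in [0, 1];
# low values are an instance of the disagreement problem in practice.
```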
|
2211.03983 | Chengchun Shi | Liyuan Hu and Mengbing Li and Chengchun Shi and Zhenke Wu and Piotr
Fryzlewicz | Doubly Inhomogeneous Reinforcement Learning | null | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies reinforcement learning (RL) in doubly inhomogeneous
environments under temporal non-stationarity and subject heterogeneity. In a
number of applications, it is commonplace to encounter datasets generated by
system dynamics that may change over time and population, challenging
high-quality sequential decision making. Nonetheless, most existing RL
solutions require either temporal stationarity or subject homogeneity, which
would result in sub-optimal policies if both assumptions were violated. To
address both challenges simultaneously, we propose an original algorithm to
determine the "best data chunks" that display similar dynamics over time and
across individuals for policy learning, which alternates between most recent
change point detection and cluster identification. Our method is general, and
works with a wide range of clustering and change point detection algorithms. It
is multiply robust in the sense that it takes multiple initial estimators as
input and only requires one of them to be consistent. Moreover, by borrowing
information over time and population, it allows us to detect weaker signals and
has better convergence properties when compared to applying the clustering
algorithm per time or the change point detection algorithm per subject.
Empirically, we demonstrate the usefulness of our method through extensive
simulations and a real data application.
| [
{
"version": "v1",
"created": "Tue, 8 Nov 2022 03:41:14 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Nov 2022 09:35:42 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 17:25:47 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Hu",
"Liyuan",
""
],
[
"Li",
"Mengbing",
""
],
[
"Shi",
"Chengchun",
""
],
[
"Wu",
"Zhenke",
""
],
[
"Fryzlewicz",
"Piotr",
""
]
] | TITLE: Doubly Inhomogeneous Reinforcement Learning
ABSTRACT: This paper studies reinforcement learning (RL) in doubly inhomogeneous
environments under temporal non-stationarity and subject heterogeneity. In a
number of applications, it is commonplace to encounter datasets generated by
system dynamics that may change over time and population, challenging
high-quality sequential decision making. Nonetheless, most existing RL
solutions require either temporal stationarity or subject homogeneity, which
would result in sub-optimal policies if both assumptions were violated. To
address both challenges simultaneously, we propose an original algorithm to
determine the "best data chunks" that display similar dynamics over time and
across individuals for policy learning, which alternates between most recent
change point detection and cluster identification. Our method is general, and
works with a wide range of clustering and change point detection algorithms. It
is multiply robust in the sense that it takes multiple initial estimators as
input and only requires one of them to be consistent. Moreover, by borrowing
information over time and population, it allows us to detect weaker signals and
has better convergence properties when compared to applying the clustering
algorithm per time or the change point detection algorithm per subject.
Empirically, we demonstrate the usefulness of our method through extensive
simulations and a real data application.
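A schematic sketch of the alternation the abstract describes, with a toy CUSUM-style detector and K-means standing in for the paper's change-point and clustering estimators (every routine here is a simplifying assumption, not the paper's algorithm):

```python
# Alternate between most-recent change point detection and subject
# clustering to identify "best data chunks" on a toy reward panel.
import numpy as np
from sklearn.cluster import KMeans

def detect_change_point(series):
    """Toy detector: index with the largest before/after mean shift."""
    T = len(series)
    scores = [abs(series[:t].mean() - series[t:].mean()) for t in range(1, T)]
    return int(np.argmax(scores)) + 1

def best_data_chunks(data, n_clusters=2, n_iters=5):
    """data: (n_subjects, T) array of per-subject reward trajectories."""
    tau = 0  # assume temporal stationarity initially
    for _ in range(n_iters):
        post = data[:, tau:]                          # post-change segment
        labels = KMeans(n_clusters, n_init=10).fit_predict(post)
        # re-detect the change point on each cluster's pooled series
        taus = [detect_change_point(data[labels == c].mean(axis=0))
                for c in range(n_clusters)]
        tau = max(taus)                               # most recent change
    return tau, labels

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, (10, 50)),
                       rng.normal(2, 1, (10, 50))])   # two subject groups
data[:, 30:] += 1.5                                   # common change at t=30
print(best_data_chunks(data))
```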
|
2211.10724 | Youwei Huang | Youwei Huang, Sen Fang, Jianwen Li, Jiachun Tao, Bin Hu, and Tao Zhang | Deep Smart Contract Intent Detection | 12 pages, 8 figures, conference | null | 10.1109/SANER64311.2025.00020 | null | cs.SE cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years, research in software security has concentrated on
identifying vulnerabilities in smart contracts to prevent significant losses of
crypto assets on blockchains. Despite early successes in this area, detecting
developers' intents in smart contracts has become a more pressing issue, as
malicious intents have caused substantial financial losses. Unfortunately,
existing research lacks effective methods for detecting development intents in
smart contracts.
To address this gap, we propose \textsc{SmartIntentNN} (Smart Contract Intent
Neural Network), a deep learning model designed to automatically detect
development intents in smart contracts. \textsc{SmartIntentNN} leverages a
pre-trained sentence encoder to generate contextual representations of smart
contracts, employs a K-means clustering model to identify and highlight
prominent intent features, and utilizes a bidirectional LSTM-based deep neural
network for multi-label classification.
We trained and evaluated \textsc{SmartIntentNN} on a dataset containing over
40,000 real-world smart contracts, employing self-comparison baselines in our
experimental setup. The results show that \textsc{SmartIntentNN} achieves an
F1-score of 0.8633 in identifying intents across 10 distinct categories,
outperforming all baselines and addressing the gap in smart contract detection
by incorporating intent analysis.
| [
{
"version": "v1",
"created": "Sat, 19 Nov 2022 15:40:26 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2024 02:48:51 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Dec 2024 13:10:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Youwei",
""
],
[
"Fang",
"Sen",
""
],
[
"Li",
"Jianwen",
""
],
[
"Tao",
"Jiachun",
""
],
[
"Hu",
"Bin",
""
],
[
"Zhang",
"Tao",
""
]
] | TITLE: Deep Smart Contract Intent Detection
ABSTRACT: In recent years, research in software security has concentrated on
identifying vulnerabilities in smart contracts to prevent significant losses of
crypto assets on blockchains. Despite early successes in this area, detecting
developers' intents in smart contracts has become a more pressing issue, as
malicious intents have caused substantial financial losses. Unfortunately,
existing research lacks effective methods for detecting development intents in
smart contracts.
To address this gap, we propose \textsc{SmartIntentNN} (Smart Contract Intent
Neural Network), a deep learning model designed to automatically detect
development intents in smart contracts. \textsc{SmartIntentNN} leverages a
pre-trained sentence encoder to generate contextual representations of smart
contracts, employs a K-means clustering model to identify and highlight
prominent intent features, and utilizes a bidirectional LSTM-based deep neural
network for multi-label classification.
We trained and evaluated \textsc{SmartIntentNN} on a dataset containing over
40,000 real-world smart contracts, employing self-comparison baselines in our
experimental setup. The results show that \textsc{SmartIntentNN} achieves an
F1-score of 0.8633 in identifying intents across 10 distinct categories,
outperforming all baselines and addressing the gap in smart contract detection
by incorporating intent analysis.
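A hedged sketch of the pipeline as the abstract outlines it: sentence-encoder embeddings, a K-means-based highlighting step, and a bidirectional LSTM with a sigmoid multi-label head. The dimensions and the highlighting rule are assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    def __init__(self, emb_dim=512, hidden=128, n_intents=10):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_intents)

    def forward(self, x, centroids, scale=2.0):
        # x: (batch, n_functions, emb_dim) sentence-encoder embeddings.
        # Highlight step: scale up embeddings close to a K-means intent
        # centroid (this particular distance rule is an assumption).
        d = torch.cdist(x, centroids.unsqueeze(0).expand(x.size(0), -1, -1))
        near = (d.min(dim=-1).values < d.mean()).unsqueeze(-1)
        x = torch.where(near, x * scale, x)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # multi-label probabilities

model = IntentClassifier()
x = torch.randn(4, 16, 512)       # 4 contracts, 16 encoded functions each
centroids = torch.randn(8, 512)   # K-means centroids of intent features
print(model(x, centroids).shape)  # torch.Size([4, 10])
```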
|
2211.10760 | Javier Mar\'in | Javier Marin | Evaluating Synthetic Tabular Data Generated To Augment Small Sample
Datasets | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | This work proposes a method to evaluate synthetic tabular data generated to
augment small sample datasets. While data augmentation techniques can increase
sample counts for machine learning applications, traditional validation
approaches fail when applied to extremely limited sample sizes. Our experiments
across four datasets reveal significant inconsistencies between global metrics
and topological measures, with statistical tests producing unreliable
significance values due to insufficient sample sizes. We demonstrate that
common metrics like propensity scoring and MMD often suggest similarity where
fundamental topological differences exist. Our proposed normalized
Bottleneck-distance-based metric provides complementary insights but suffers from high
variability across experimental runs and occasional values exceeding
theoretical bounds, showing inherent instability in topological approaches for
very small datasets. These findings highlight the critical need for
multi-faceted evaluation methodologies when validating synthetic data generated
from limited samples, as no single metric reliably captures both distributional
and structural similarity.
| [
{
"version": "v1",
"created": "Sat, 19 Nov 2022 18:18:52 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2022 15:00:12 GMT"
},
{
"version": "v3",
"created": "Sat, 21 Jan 2023 09:50:45 GMT"
},
{
"version": "v4",
"created": "Mon, 11 Nov 2024 11:04:06 GMT"
},
{
"version": "v5",
"created": "Fri, 14 Mar 2025 18:08:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Marin",
"Javier",
""
]
] | TITLE: Evaluating Synthetic Tabular Data Generated To Augment Small Sample
Datasets
ABSTRACT: This work proposes a method to evaluate synthetic tabular data generated to
augment small sample datasets. While data augmentation techniques can increase
sample counts for machine learning applications, traditional validation
approaches fail when applied to extremely limited sample sizes. Our experiments
across four datasets reveal significant inconsistencies between global metrics
and topological measures, with statistical tests producing unreliable
significance values due to insufficient sample sizes. We demonstrate that
common metrics like propensity scoring and MMD often suggest similarity where
fundamental topological differences exist. Our proposed normalized
Bottleneck-distance-based metric provides complementary insights but suffers from high
variability across experimental runs and occasional values exceeding
theoretical bounds, showing inherent instability in topological approaches for
very small datasets. These findings highlight the critical need for
multi-faceted evaluation methodologies when validating synthetic data generated
from limited samples, as no single metric reliably captures both distributional
and structural similarity.
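A hedged sketch of a normalized bottleneck-distance comparison of the kind the abstract proposes, using the ripser and persim packages; normalizing by the real data's maximum persistence is an assumption made for illustration:

```python
import numpy as np
from ripser import ripser
from persim import bottleneck

def normalized_bottleneck(real, synth, dim=0):
    dgm_r = ripser(real)['dgms'][dim]
    dgm_s = ripser(synth)['dgms'][dim]
    dgm_r = dgm_r[np.isfinite(dgm_r[:, 1])]    # drop the infinite H0 bar
    dgm_s = dgm_s[np.isfinite(dgm_s[:, 1])]
    d = bottleneck(dgm_r, dgm_s)
    scale = (dgm_r[:, 1] - dgm_r[:, 0]).max()  # max persistence of real data
    return d / scale

rng = np.random.default_rng(0)
real = rng.normal(size=(30, 4))                     # small-sample table
synth = real + rng.normal(0, 0.3, size=real.shape)  # perturbed "synthetic" copy
print(normalized_bottleneck(real, synth))
```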
|
2302.02150 | Dimitris Iakovidis | Dimitrios E. Diamantis, Panagiota Gatoula, Anastasios Koulaouzidis,
and Dimitris K. Iakovidis | This Intestine Does Not Exist: Multiscale Residual Variational
Autoencoder for Realistic Wireless Capsule Endoscopy Image Generation | 10 pages | IEEE Access, 12, 25668-25683 (2024) | 10.1109/ACCESS.2024.3366801 | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Medical image synthesis has emerged as a promising solution to address the
limited availability of annotated medical data needed for training machine
learning algorithms in the context of image-based Clinical Decision Support
(CDS) systems. To this end, Generative Adversarial Networks (GANs) have been
mainly applied to support the algorithm training process by generating
synthetic images for data augmentation. However, in the field of Wireless
Capsule Endoscopy (WCE), the limited content diversity and size of existing
publicly available annotated datasets adversely affect both the training
stability and synthesis performance of GANs. Aiming at a viable solution for
WCE image synthesis, a novel Variational Autoencoder architecture is proposed,
namely "This Intestine Does not Exist" (TIDE). The proposed architecture
comprises multiscale feature extraction convolutional blocks and residual
connections, which enable the generation of high-quality and diverse datasets
even with a limited number of training images. Contrary to the current
approaches, which are oriented towards the augmentation of the available
datasets, this study demonstrates that using TIDE, real WCE datasets can be
fully substituted by artificially generated ones, without compromising
classification performance. Furthermore, qualitative and user evaluation
studies by experienced WCE specialists, validate from a medical viewpoint that
both the normal and abnormal WCE images synthesized by TIDE are sufficiently
realistic.
| [
{
"version": "v1",
"created": "Sat, 4 Feb 2023 11:49:38 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 03:50:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Diamantis",
"Dimitrios E.",
""
],
[
"Gatoula",
"Panagiota",
""
],
[
"Koulaouzidis",
"Anastasios",
""
],
[
"Iakovidis",
"Dimitris K.",
""
]
] | TITLE: This Intestine Does Not Exist: Multiscale Residual Variational
Autoencoder for Realistic Wireless Capsule Endoscopy Image Generation
ABSTRACT: Medical image synthesis has emerged as a promising solution to address the
limited availability of annotated medical data needed for training machine
learning algorithms in the context of image-based Clinical Decision Support
(CDS) systems. To this end, Generative Adversarial Networks (GANs) have been
mainly applied to support the algorithm training process by generating
synthetic images for data augmentation. However, in the field of Wireless
Capsule Endoscopy (WCE), the limited content diversity and size of existing
publicly available annotated datasets adversely affect both the training
stability and synthesis performance of GANs. Aiming at a viable solution for
WCE image synthesis, a novel Variational Autoencoder architecture is proposed,
namely "This Intestine Does not Exist" (TIDE). The proposed architecture
comprises multiscale feature extraction convolutional blocks and residual
connections, which enable the generation of high-quality and diverse datasets
even with a limited number of training images. Contrary to the current
approaches, which are oriented towards the augmentation of the available
datasets, this study demonstrates that using TIDE, real WCE datasets can be
fully substituted by artificially generated ones, without compromising
classification performance. Furthermore, qualitative and user evaluation
studies by experienced WCE specialists, validate from a medical viewpoint that
both the normal and abnormal WCE images synthesized by TIDE are sufficiently
realistic.
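A minimal sketch (with assumed dimensions) of a multiscale residual block of the kind TIDE is described as using: parallel convolutions at several kernel sizes, fused and added back through a residual connection:

```python
import torch
import torch.nn as nn

class MultiscaleResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # parallel branches extract features at several receptive fields
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in (1, 3, 5))
        self.fuse = nn.Conv2d(3 * ch, ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.fuse(feats))   # residual connection

block = MultiscaleResidualBlock(32)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```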
|
2305.00767 | Cong Cao | Huanjing Yue, Cong Cao, Lei Liao, and Jingyu Yang | RViDeformer: Efficient Raw Video Denoising Transformer with a Larger
Benchmark Dataset | Accepted by TCSVT 2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, raw video denoising has garnered increased attention due to
the consistency with the imaging process and well-studied noise modeling in the
raw domain. However, two problems still hinder the denoising performance.
Firstly, there is no large dataset with realistic motions for supervised raw
video denoising, as capturing noisy and clean frames for real dynamic scenes is
difficult. To address this, we propose recapturing existing high-resolution
videos displayed on a 4K screen with high-low ISO settings to construct
noisy-clean paired frames. In this way, we construct a video denoising dataset
(named ReCRVD) with 120 groups of noisy-clean videos, with ISO values
ranging from 1600 to 25600. Secondly, while non-local temporal-spatial
attention is beneficial for denoising, it often leads to heavy computation
costs. We propose an efficient raw video denoising transformer network
(RViDeformer) that explores both short and long-distance correlations.
Specifically, we propose multi-branch spatial and temporal attention modules,
which explore the patch correlations from local window, local low-resolution
window, global downsampled window, and neighbor-involved window, and then they
are fused together. We employ reparameterization to reduce computation costs.
Our network is trained in both supervised and unsupervised manners, achieving
the best performance compared with state-of-the-art methods. Additionally, the
model trained with our proposed dataset (ReCRVD) outperforms the model trained
with the previous benchmark dataset (CRVD) when evaluated on real-world outdoor
noisy videos. Our code and dataset are available at
https://github.com/cao-cong/RViDeformer.
| [
{
"version": "v1",
"created": "Mon, 1 May 2023 11:06:58 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 10:07:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yue",
"Huanjing",
""
],
[
"Cao",
"Cong",
""
],
[
"Liao",
"Lei",
""
],
[
"Yang",
"Jingyu",
""
]
] | TITLE: RViDeformer: Efficient Raw Video Denoising Transformer with a Larger
Benchmark Dataset
ABSTRACT: In recent years, raw video denoising has garnered increased attention due to
the consistency with the imaging process and well-studied noise modeling in the
raw domain. However, two problems still hinder the denoising performance.
Firstly, there is no large dataset with realistic motions for supervised raw
video denoising, as capturing noisy and clean frames for real dynamic scenes is
difficult. To address this, we propose recapturing existing high-resolution
videos displayed on a 4K screen with high-low ISO settings to construct
noisy-clean paired frames. In this way, we construct a video denoising dataset
(named ReCRVD) with 120 groups of noisy-clean videos, with ISO values
ranging from 1600 to 25600. Secondly, while non-local temporal-spatial
attention is beneficial for denoising, it often leads to heavy computation
costs. We propose an efficient raw video denoising transformer network
(RViDeformer) that explores both short and long-distance correlations.
Specifically, we propose multi-branch spatial and temporal attention modules,
which explore the patch correlations from local window, local low-resolution
window, global downsampled window, and neighbor-involved window, and then they
are fused together. We employ reparameterization to reduce computation costs.
Our network is trained in both supervised and unsupervised manners, achieving
the best performance compared with state-of-the-art methods. Additionally, the
model trained with our proposed dataset (ReCRVD) outperforms the model trained
with the previous benchmark dataset (CRVD) when evaluated on real-world outdoor
noisy videos. Our code and dataset are available at
https://github.com/cao-cong/RViDeformer.
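The reparameterization trick the abstract mentions can be illustrated in isolation: a parallel 3x3 + 1x1 convolution pair used during training is folded into a single 3x3 convolution for inference. This sketch shows only the folding, not RViDeformer's actual modules:

```python
import torch
import torch.nn as nn

conv3 = nn.Conv2d(8, 8, 3, padding=1, bias=True)
conv1 = nn.Conv2d(8, 8, 1, bias=True)

# fold the 1x1 kernel into the center tap of a 3x3 kernel; sum the biases
merged = nn.Conv2d(8, 8, 3, padding=1, bias=True)
w = conv3.weight.data.clone()
w[:, :, 1:2, 1:2] += conv1.weight.data
merged.weight.data = w
merged.bias.data = conv3.bias.data + conv1.bias.data

x = torch.randn(1, 8, 16, 16)
out_train = conv3(x) + conv1(x)   # training-time two-branch form
out_infer = merged(x)             # single merged convolution at inference
print(torch.allclose(out_train, out_infer, atol=1e-6))  # True
```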
|
2305.03944 | Deyi Ji | Deyi Ji, Haoran Wang, Mingyuan Tao, Jianqiang Huang, Xian-Sheng Hua,
Hongtao Lu | Structural and Statistical Texture Knowledge Distillation for Semantic
Segmentation | Accepted to CVPR 2022. Extended TPAMI 2025 Journal Version:
arXiv:2503.08043 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing knowledge distillation works for semantic segmentation mainly focus
on transferring high-level contextual knowledge from teacher to student.
However, low-level texture knowledge is also of vital importance for
characterizing the local structural pattern and global statistical property,
such as boundary, smoothness, regularity and color contrast, which may not be
well addressed by high-level deep features. In this paper, we intend to
take full advantage of both structural and statistical texture knowledge and
propose a novel Structural and Statistical Texture Knowledge Distillation
(SSTKD) framework for semantic segmentation. Specifically, for structural
texture knowledge, we introduce a Contourlet Decomposition Module (CDM) that
decomposes low-level features with iterative Laplacian pyramid and directional
filter bank to mine the structural texture knowledge. For statistical
knowledge, we propose a Denoised Texture Intensity Equalization Module (DTIEM)
to adaptively extract and enhance statistical texture knowledge through
heuristic iterative quantization and denoising operations. Finally, each
knowledge learning is supervised by an individual loss function, forcing the
student network to mimic the teacher better from a broader perspective.
Experiments show that the proposed method achieves state-of-the-art performance
on Cityscapes, Pascal VOC 2012 and ADE20K datasets.
| [
{
"version": "v1",
"created": "Sat, 6 May 2023 06:01:11 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 02:43:50 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 11:07:28 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ji",
"Deyi",
""
],
[
"Wang",
"Haoran",
""
],
[
"Tao",
"Mingyuan",
""
],
[
"Huang",
"Jianqiang",
""
],
[
"Hua",
"Xian-Sheng",
""
],
[
"Lu",
"Hongtao",
""
]
] | TITLE: Structural and Statistical Texture Knowledge Distillation for Semantic
Segmentation
ABSTRACT: Existing knowledge distillation works for semantic segmentation mainly focus
on transferring high-level contextual knowledge from teacher to student.
However, low-level texture knowledge is also of vital importance for
characterizing the local structural pattern and global statistical property,
such as boundary, smoothness, regularity and color contrast, which may not be
well addressed by high-level deep features. In this paper, we intend to
take full advantage of both structural and statistical texture knowledge and
propose a novel Structural and Statistical Texture Knowledge Distillation
(SSTKD) framework for semantic segmentation. Specifically, for structural
texture knowledge, we introduce a Contourlet Decomposition Module (CDM) that
decomposes low-level features with iterative Laplacian pyramid and directional
filter bank to mine the structural texture knowledge. For statistical
knowledge, we propose a Denoised Texture Intensity Equalization Module (DTIEM)
to adaptively extract and enhance statistical texture knowledge through
heuristic iterative quantization and denoising operations. Finally, each
knowledge learning is supervised by an individual loss function, forcing the
student network to mimic the teacher better from a broader perspective.
Experiments show that the proposed method achieves state-of-the-art performance
on Cityscapes, Pascal VOC 2012 and ADE20K datasets.
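A hedged sketch of the loss composition the abstract describes, with each texture-knowledge term supervised by its own loss and summed with the task loss; the MSE forms and the weights are assumptions:

```python
import torch
import torch.nn.functional as F

def sstkd_loss(seg_logits, labels,
               s_struct, t_struct,   # structural texture features (from CDM)
               s_stat, t_stat,       # statistical texture features (from DTIEM)
               w_struct=1.0, w_stat=1.0):
    task = F.cross_entropy(seg_logits, labels)          # segmentation loss
    kd_struct = F.mse_loss(s_struct, t_struct.detach()) # teacher is frozen
    kd_stat = F.mse_loss(s_stat, t_stat.detach())
    return task + w_struct * kd_struct + w_stat * kd_stat

logits = torch.randn(2, 19, 64, 64)          # e.g. 19 Cityscapes classes
labels = torch.randint(0, 19, (2, 64, 64))
f_s, f_t = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
print(sstkd_loss(logits, labels, f_s, f_t, f_s, f_t).item())
```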
|
2305.13584 | Pan Li | Li Pan, Lv Peizhuo, Chen Kai, Zhang Shengzhi, Cai Yuling, Xiang Fan | A Model Stealing Attack Against Multi-Exit Networks | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compared to traditional neural networks with a single output channel, a
multi-exit network has multiple exits that allow for early outputs from the
model's intermediate layers, thus significantly improving computational
efficiency while maintaining similar main task accuracy. Existing model
stealing attacks can only steal the model's utility while failing to capture
its output strategy, i.e., a set of thresholds used to determine from which
exit to output. This leads to a significant decrease in computational
efficiency for the extracted model, thereby losing the advantage of multi-exit
networks. In this paper, we propose the first model stealing attack against
multi-exit networks to extract both the model utility and the output strategy.
We employ Kernel Density Estimation to analyze the target model's output
strategy and use performance loss and strategy loss to guide the training of
the extracted model. Furthermore, we design a novel output strategy search
algorithm to maximize the consistency between the victim model and the
extracted model's output behaviors. In experiments across multiple multi-exit
networks and benchmark datasets, our method always achieves accuracy and
efficiency closest to the victim models.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 01:24:39 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 00:56:01 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Pan",
"Li",
""
],
[
"Peizhuo",
"Lv",
""
],
[
"Kai",
"Chen",
""
],
[
"Shengzhi",
"Zhang",
""
],
[
"Yuling",
"Cai",
""
],
[
"Fan",
"Xiang",
""
]
] | TITLE: A Model Stealing Attack Against Multi-Exit Networks
ABSTRACT: Compared to traditional neural networks with a single output channel, a
multi-exit network has multiple exits that allow for early outputs from the
model's intermediate layers, thus significantly improving computational
efficiency while maintaining similar main task accuracy. Existing model
stealing attacks can only steal the model's utility while failing to capture
its output strategy, i.e., a set of thresholds used to determine from which
exit to output. This leads to a significant decrease in computational
efficiency for the extracted model, thereby losing the advantage of multi-exit
networks. In this paper, we propose the first model stealing attack against
multi-exit networks to extract both the model utility and the output strategy.
We employ Kernel Density Estimation to analyze the target model's output
strategy and use performance loss and strategy loss to guide the training of
the extracted model. Furthermore, we design a novel output strategy search
algorithm to maximize the consistency between the victim model and the
extracted model's output behaviors. In experiments across multiple multi-exit
networks and benchmark datasets, our method always achieves accuracy and
efficiency closest to the victim models.
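A hedged sketch of the strategy-matching idea: estimate the density of exit confidences with a KDE and grid-search a threshold that maximizes agreement with the victim's observed exit behaviour. The single-exit, grid-search setup is a simplification of the paper's search algorithm:

```python
import numpy as np
from scipy.stats import gaussian_kde

def search_threshold(confidences, victim_exited_early):
    """confidences: surrogate's exit-1 confidence per query;
    victim_exited_early: bool array of the victim's observed exits."""
    kde = gaussian_kde(confidences)          # density of the output strategy
    grid = np.linspace(confidences.min(), confidences.max(), 200)
    consistency = [np.mean((confidences >= t) == victim_exited_early)
                   for t in grid]
    best = grid[int(np.argmax(consistency))]
    return best, kde

rng = np.random.default_rng(0)
conf = rng.uniform(0, 1, 1000)
victim = conf >= 0.62                        # victim's hidden threshold
print(search_threshold(conf, victim)[0])     # recovers roughly 0.62
```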
|
2305.17473 | Farhad Mortezapour Shiri | Farhad Mortezapour Shiri, Thinagaran Perumal, Norwati Mustapha,
Raihani Mohamed | A Comprehensive Overview and Comparative Analysis on Deep Learning
Models: CNN, RNN, LSTM, GRU | 62 pages, 37 figures | Journal on Artificial Intelligence 2024 Vol. 6 Issue 1 Pages
301-360 | 10.32604/jai.2024.054314 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning (DL) has emerged as a powerful subset of machine learning (ML)
and artificial intelligence (AI), outperforming traditional ML methods,
especially in handling unstructured and large datasets. Its impact spans across
various domains, including speech recognition, healthcare, autonomous vehicles,
cybersecurity, predictive analytics, and more. However, the complexity and
dynamic nature of real-world problems present challenges in designing effective
deep learning models. Consequently, several deep learning models have been
developed to address different problems and applications. In this article, we
conduct a comprehensive survey of various deep learning models, including
Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Temporal
Convolutional Networks (TCN), Transformer, Kolmogorov-Arnold networks (KAN),
Generative Models, Deep Reinforcement Learning (DRL), and Deep Transfer
Learning. We examine the structure, applications, benefits, and limitations of
each model. Furthermore, we perform an analysis using three publicly available
datasets: IMDB, ARAS, and Fruit-360. We compared the performance of six
renowned deep learning models: CNN, RNN, Long Short-Term Memory (LSTM),
Bidirectional LSTM, Gated Recurrent Unit (GRU), and Bidirectional GRU alongside
two newer models, TCN and Transformer, using the IMDB and ARAS datasets.
Additionally, we evaluated the performance of eight CNN-based models, including
VGG (Visual Geometry Group), Inception, ResNet (Residual Network),
InceptionResNet, Xception (Extreme Inception), MobileNet, DenseNet (Dense
Convolutional Network), and NASNet (Neural Architecture Search Network), for
image classification tasks using the Fruit-360 dataset.
| [
{
"version": "v1",
"created": "Sat, 27 May 2023 13:23:21 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 16:53:28 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Oct 2024 17:41:58 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 10:18:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Shiri",
"Farhad Mortezapour",
""
],
[
"Perumal",
"Thinagaran",
""
],
[
"Mustapha",
"Norwati",
""
],
[
"Mohamed",
"Raihani",
""
]
] | TITLE: A Comprehensive Overview and Comparative Analysis on Deep Learning
Models: CNN, RNN, LSTM, GRU
ABSTRACT: Deep learning (DL) has emerged as a powerful subset of machine learning (ML)
and artificial intelligence (AI), outperforming traditional ML methods,
especially in handling unstructured and large datasets. Its impact spans across
various domains, including speech recognition, healthcare, autonomous vehicles,
cybersecurity, predictive analytics, and more. However, the complexity and
dynamic nature of real-world problems present challenges in designing effective
deep learning models. Consequently, several deep learning models have been
developed to address different problems and applications. In this article, we
conduct a comprehensive survey of various deep learning models, including
Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Temporal
Convolutional Networks (TCN), Transformer, Kolmogorov-Arnold networks (KAN),
Generative Models, Deep Reinforcement Learning (DRL), and Deep Transfer
Learning. We examine the structure, applications, benefits, and limitations of
each model. Furthermore, we perform an analysis using three publicly available
datasets: IMDB, ARAS, and Fruit-360. We compared the performance of six
renowned deep learning models: CNN, RNN, Long Short-Term Memory (LSTM),
Bidirectional LSTM, Gated Recurrent Unit (GRU), and Bidirectional GRU alongside
two newer models, TCN and Transformer, using the IMDB and ARAS datasets.
Additionally, we evaluated the performance of eight CNN-based models, including
VGG (Visual Geometry Group), Inception, ResNet (Residual Network),
InceptionResNet, Xception (Extreme Inception), MobileNet, DenseNet (Dense
Convolutional Network), and NASNet (Neural Architecture Search Network), for
image classification tasks using the Fruit-360 dataset.
|
2306.07207 | Ziwang Zhao | Ruipu Luo, Ziwang Zhao, Min Yang, Zheming Yang, Minghui Qiu, Tao Wang,
Zhongyu Wei, Yanhao Wang, Cen Chen | Valley: Video Assistant with Large Language model Enhanced abilitY | null | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs), with remarkable conversational capability, have
emerged as AI assistants that can handle both visual and textual modalities.
However, their effectiveness in joint video and language understanding has not
been extensively explored. In this paper, we introduce Valley, a multi-modal
foundation model that is designed to enable enhanced video comprehension and
instruction-following capabilities. To this end, we construct two datasets,
namely Valley-702k and Valley-instruct-73k, to cover a diverse range of
video-text alignment and video-based instruction tasks, such as multi-shot
captions, long video descriptions, action recognition, causal inference, etc.
Then, we adopt ViT-L/14 as the vision encoder and explore three different
temporal modeling modules to learn multifaceted features for enhanced video
understanding. In addition, we implement a two-phase training approach for
Valley: the first phase focuses solely on training the projection module to
facilitate the LLM's capacity to understand visual input, and the second phase
jointly trains the projection module and the LLM to improve their instruction
following ability. Extensive experiments demonstrate that Valley has the
potential to serve as an effective video assistant, simplifying complex
video-understanding scenarios. Our code and data are published anonymously at
https://github.com/valley-vl/Valley.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2023 16:11:10 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Oct 2023 09:49:53 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 13:51:51 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Luo",
"Ruipu",
""
],
[
"Zhao",
"Ziwang",
""
],
[
"Yang",
"Min",
""
],
[
"Yang",
"Zheming",
""
],
[
"Qiu",
"Minghui",
""
],
[
"Wang",
"Tao",
""
],
[
"Wei",
"Zhongyu",
""
],
[
"Wang",
"Yanhao",
""
],
[
"Chen",
"Cen",
""
]
] | TITLE: Valley: Video Assistant with Large Language model Enhanced abilitY
ABSTRACT: Large Language Models (LLMs), with remarkable conversational capability, have
emerged as AI assistants that can handle both visual and textual modalities.
However, their effectiveness in joint video and language understanding has not
been extensively explored. In this paper, we introduce Valley, a multi-modal
foundation model that is designed to enable enhanced video comprehension and
instruction-following capabilities. To this end, we construct two datasets,
namely Valley-702k and Valley-instruct-73k, to cover a diverse range of
video-text alignment and video-based instruction tasks, such as multi-shot
captions, long video descriptions, action recognition, causal inference, etc.
Then, we adopt ViT-L/14 as the vision encoder and explore three different
temporal modeling modules to learn multifaceted features for enhanced video
understanding. In addition, we implement a two-phase training approach for
Valley: the first phase focuses solely on training the projection module to
facilitate the LLM's capacity to understand visual input, and the second phase
jointly trains the projection module and the LLM to improve their instruction
following ability. Extensive experiments demonstrate that Valley has the
potential to serve as an effective video assistant, simplifying complex
video-understanding scenarios. Our code and data are published anonymously at
https://github.com/valley-vl/Valley.
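A hedged sketch of the two-phase recipe described above, with placeholder modules: phase one trains only the projection, phase two unfreezes the LLM as well (module names, sizes, and learning rates are assumptions, not Valley's code):

```python
import torch
import torch.nn as nn

class ValleyLike(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision_encoder = nn.Linear(768, 768)   # stand-in for ViT-L/14
        self.projection = nn.Linear(768, 4096)      # visual -> LLM space
        self.llm = nn.Linear(4096, 4096)            # stand-in for the LLM

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def configure_phase(model, phase):
    set_trainable(model.vision_encoder, False)      # encoder stays frozen
    set_trainable(model.projection, True)           # trained in both phases
    set_trainable(model.llm, phase == 2)            # unfrozen in phase two
    params = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(params, lr=1e-3 if phase == 1 else 2e-5)

model = ValleyLike()
opt1 = configure_phase(model, phase=1)   # projection module only
opt2 = configure_phase(model, phase=2)   # projection + LLM jointly
print(len(opt1.param_groups[0]['params']), len(opt2.param_groups[0]['params']))
```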
|
2306.14858 | Dominik Peters | Nikhil Chandak, Shashwat Goel, Dominik Peters | Proportional Aggregation of Preferences for Sequential Decision Making | Updated version with improved exposition. Axioms were renamed to
better fit the literature | null | null | null | cs.GT cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of fair sequential decision making given voter
preferences. In each round, a decision rule must choose a decision from a set
of alternatives where each voter reports which of these alternatives they
approve. Instead of going with the most popular choice in each round, we aim
for proportional representation across rounds, using axioms inspired by the
multi-winner voting literature. The axioms require that every group of
$\alpha\%$ of the voters that agrees in every round (i.e., approves a common
alternative) must approve at least $\alpha\%$ of the decisions. A stronger
version of the axioms requires that every group of $\alpha\%$ of the voters
that agrees in a $\beta$ fraction of rounds must approve $\beta\cdot\alpha\%$
of the decisions. We show that three attractive voting rules satisfy axioms of
this style. One of them (Sequential Phragm\'en) makes its decisions online, and
the other two satisfy strengthened versions of the axioms but make decisions
semi-online (Method of Equal Shares) or fully offline (Proportional Approval
Voting). We present empirical results for these rules based on synthetic data
and U.S. political elections. We also run experiments using the moral machine
dataset about ethical dilemmas: We train preference models on user responses
from different countries and let the models cast votes. We find that
aggregating these votes using our rules leads to a more equal utility
distribution across demographics than making decisions using a single global
preference model.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2023 17:10:10 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 14:25:48 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chandak",
"Nikhil",
""
],
[
"Goel",
"Shashwat",
""
],
[
"Peters",
"Dominik",
""
]
] | TITLE: Proportional Aggregation of Preferences for Sequential Decision Making
ABSTRACT: We study the problem of fair sequential decision making given voter
preferences. In each round, a decision rule must choose a decision from a set
of alternatives where each voter reports which of these alternatives they
approve. Instead of going with the most popular choice in each round, we aim
for proportional representation across rounds, using axioms inspired by the
multi-winner voting literature. The axioms require that every group of
$\alpha\%$ of the voters that agrees in every round (i.e., approves a common
alternative) must approve at least $\alpha\%$ of the decisions. A stronger
version of the axioms requires that every group of $\alpha\%$ of the voters
that agrees in a $\beta$ fraction of rounds must approve $\beta\cdot\alpha\%$
of the decisions. We show that three attractive voting rules satisfy axioms of
this style. One of them (Sequential Phragm\'en) makes its decisions online, and
the other two satisfy strengthened versions of the axioms but make decisions
semi-online (Method of Equal Shares) or fully offline (Proportional Approval
Voting). We present empirical results for these rules based on synthetic data
and U.S. political elections. We also run experiments using the moral machine
dataset about ethical dilemmas: We train preference models on user responses
from different countries and let the models cast votes. We find that
aggregating these votes using our rules leads to a more equal utility
distribution across demographics than making decisions using a single global
preference model.
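A hedged sketch of Sequential Phragmen applied round by round, in the commonly used simplification where selecting an alternative costs one unit of load spread over its approvers so that their loads equalize; each round picks the alternative with the smallest resulting load:

```python
def sequential_phragmen(rounds, voters):
    """rounds: list of dicts mapping alternative -> set of approving voters."""
    load = {v: 0.0 for v in voters}
    decisions = []
    for approvals in rounds:
        def new_load(alt):
            supp = approvals[alt]
            return (1.0 + sum(load[v] for v in supp)) / len(supp)
        choice = min((a for a in approvals if approvals[a]), key=new_load)
        level = new_load(choice)
        for v in approvals[choice]:
            load[v] = level          # approvers share the unit cost equally
        decisions.append(choice)
    return decisions

rounds = [{"a": {1, 2, 3}, "b": {4}},
          {"a": {1, 2}, "b": {3, 4}},
          {"a": {1}, "b": {2, 3, 4}}]
print(sequential_phragmen(rounds, voters={1, 2, 3, 4}))  # ['a', 'b', 'b']
```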
|
2307.03601 | Shilong Zhang | Shilong Zhang, Peize Sun, Shoufa Chen, Min Xiao, Wenqi Shao, Wenwei
Zhang, Yu Liu, Kai Chen, Ping Luo | GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest | Code has been released at https://github.com/jshilong/GPT4RoI | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual instruction tuning large language model (LLM) on image-text pairs has
achieved general-purpose vision-language abilities. However, the lack of
region-text pairs limits their advancements to fine-grained multimodal
understanding. In this paper, we propose spatial instruction tuning, which
introduces the reference to the region-of-interest (RoI) in the instruction.
Before sending to LLM, the reference is replaced by RoI features and
interleaved with language embeddings as a sequence. Our model GPT4RoI, trained
on 7 region-text pair datasets, brings an unprecedented interactive and
conversational experience compared to previous image-level models. (1)
Interaction beyond language: Users can interact with our model by both language
and drawing bounding boxes to flexibly adjust the referring granularity. (2)
Versatile multimodal abilities: A variety of attribute information within each
RoI can be mined by GPT4RoI, e.g., color, shape, material, action, etc.
Furthermore, it can reason about multiple RoIs based on common sense. On the
Visual Commonsense Reasoning (VCR) dataset, GPT4RoI achieves a remarkable
accuracy of 81.6%, surpassing all existing models by a significant margin (the
second place is 75.6%) and almost reaching human-level performance of 85.0%.
The code and model can be found at https://github.com/jshilong/GPT4RoI.
| [
{
"version": "v1",
"created": "Fri, 7 Jul 2023 13:43:44 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Oct 2023 03:25:34 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Jun 2024 08:50:14 GMT"
},
{
"version": "v4",
"created": "Sun, 16 Mar 2025 02:50:51 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Shilong",
""
],
[
"Sun",
"Peize",
""
],
[
"Chen",
"Shoufa",
""
],
[
"Xiao",
"Min",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Liu",
"Yu",
""
],
[
"Chen",
"Kai",
""
],
[
"Luo",
"Ping",
""
]
] | TITLE: GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
ABSTRACT: Visual instruction tuning large language model (LLM) on image-text pairs has
achieved general-purpose vision-language abilities. However, the lack of
region-text pairs limits their advancements to fine-grained multimodal
understanding. In this paper, we propose spatial instruction tuning, which
introduces the reference to the region-of-interest (RoI) in the instruction.
Before sending to LLM, the reference is replaced by RoI features and
interleaved with language embeddings as a sequence. Our model GPT4RoI, trained
on 7 region-text pair datasets, brings an unprecedented interactive and
conversational experience compared to previous image-level models. (1)
Interaction beyond language: Users can interact with our model by both language
and drawing bounding boxes to flexibly adjust the referring granularity. (2)
Versatile multimodal abilities: A variety of attribute information within each
RoI can be mined by GPT4RoI, e.g., color, shape, material, action, etc.
Furthermore, it can reason about multiple RoIs based on common sense. On the
Visual Commonsense Reasoning (VCR) dataset, GPT4RoI achieves a remarkable
accuracy of 81.6%, surpassing all existing models by a significant margin (the
second place is 75.6%) and almost reaching human-level performance of 85.0%.
The code and model can be found at https://github.com/jshilong/GPT4RoI.
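A hedged sketch of the interleaving step described above: occurrences of a region placeholder token in the instruction are swapped for RoI feature vectors before the sequence reaches the LLM (token ids and dimensions are placeholders):

```python
import torch

def interleave(token_ids, token_emb, roi_feats, roi_token_id):
    """token_ids: (L,) instruction ids; token_emb: embedding layer;
    roi_feats: (n_rois, d) features aligned with <roi> occurrences."""
    embs, r = [], 0
    for tid in token_ids.tolist():
        if tid == roi_token_id:
            embs.append(roi_feats[r]); r += 1   # swap in the RoI feature
        else:
            embs.append(token_emb(torch.tensor(tid)))
    return torch.stack(embs)                    # (L, d) sequence for the LLM

emb = torch.nn.Embedding(100, 16)
ids = torch.tensor([5, 7, 99, 8, 99, 9])        # 99 = hypothetical <roi> id
rois = torch.randn(2, 16)
print(interleave(ids, emb, rois, roi_token_id=99).shape)  # torch.Size([6, 16])
```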
|
2307.03972 | Fanyi Qu | Fanyi Qu, Chenming Tang and Yunfang Wu | Evaluating the Capability of Large-scale Language Models on Chinese
Grammatical Error Correction Task | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large-scale language models (LLMs) have shown remarkable capability in
various Natural Language Processing (NLP) tasks and have attracted much
attention recently. However, some studies indicated that large language models
fail to achieve promising results beyond state-of-the-art models in English
grammatical error correction (GEC) tasks. In this report, we aim to explore how
large language models perform on Chinese grammatical error correction tasks and
provide guidance for future work. We conduct experiments with 3 LLMs of
different model scales on 4 Chinese GEC datasets. Our experimental results
indicate that the performance of LLMs on automatic evaluation metrics falls
short of previous SOTA models because of the problem of over-correction.
Furthermore, we also discover notable variations in the performance of LLMs
when evaluated on different data distributions. Our findings demonstrate that
further investigation is required for the application of LLMs to the Chinese
GEC task.
| [
{
"version": "v1",
"created": "Sat, 8 Jul 2023 13:10:59 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 11:21:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Qu",
"Fanyi",
""
],
[
"Tang",
"Chenming",
""
],
[
"Wu",
"Yunfang",
""
]
] | TITLE: Evaluating the Capability of Large-scale Language Models on Chinese
Grammatical Error Correction Task
ABSTRACT: Large-scale language models (LLMs) have shown remarkable capability
in various Natural Language Processing (NLP) tasks and have attracted much
attention recently. However, some studies indicated that large language models
fail to achieve promising results beyond state-of-the-art models in English
grammatical error correction (GEC) tasks. In this report, we aim to explore how
large language models perform on Chinese grammatical error correction tasks and
provide guidance for future work. We conduct experiments with 3 LLMs of
different model scales on 4 Chinese GEC datasets. Our experimental results
indicate that the performance of LLMs on automatic evaluation metrics falls
short of previous SOTA models because of the problem of over-correction.
Furthermore, we also discover notable variations in the performance of LLMs
when evaluated on different data distributions. Our findings demonstrate that
further investigation is required for the application of LLMs to the Chinese
GEC task.
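An illustrative sketch, not the papers' scorer, of why over-correction depresses automatic metrics: under edit-level F0.5, spurious edits cut precision, which the beta = 0.5 weighting punishes heavily:

```python
def f_beta(system_edits, gold_edits, beta=0.5):
    """Edit-level precision, recall and F-beta over sets of edits."""
    tp = len(system_edits & gold_edits)
    p = tp / len(system_edits) if system_edits else 1.0
    r = tp / len(gold_edits) if gold_edits else 1.0
    if p + r == 0:
        return 0.0, p, r
    f = (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
    return f, p, r

gold = {("append", 3, "的"), ("delete", 7, "了")}
modest = {("append", 3, "的")}                            # under-corrects
eager = gold | {("replace", 1, "在"), ("delete", 9, "把")} # over-corrects
print(f_beta(modest, gold))  # high precision, low recall -> F0.5 ~ 0.83
print(f_beta(eager, gold))   # perfect recall, low precision -> F0.5 ~ 0.56
```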
|
2307.08789 | Ranjan Sapkota | Ranjan Sapkota, Manoj Karkee | Generative AI in Agriculture: Creating Image Datasets Using DALL.E's
Advanced Large Language Model Capabilities | 9 Figures, 1 table, 19 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This research investigated the role of artificial intelligence (AI),
specifically the DALL.E model by OpenAI, in advancing data generation and
visualization techniques in agriculture. DALL.E, an advanced AI image
generator, works alongside ChatGPT's language processing to transform text
descriptions and image clues into realistic visual representations of the
content. The study used both approaches of image generation: text-to-image and
image-to-image (variation). Six types of datasets depicting fruit crop
environments were generated. These AI-generated images were then compared
against ground truth images captured by sensors in real agricultural fields.
The comparison was based on Peak Signal-to-Noise Ratio (PSNR) and Feature
Similarity Index (FSIM) metrics. The image-to-image generation exhibited a
5.78% increase in average PSNR over text-to-image methods, signifying superior
image clarity and quality. However, this method also resulted in a 10.23%
decrease in average FSIM, indicating a diminished structural and textural
similarity to the original images. Consistent with these measures, human
evaluation also showed that images generated using the image-to-image method
were more realistic than those generated with the text-to-image approach. The results
highlighted DALL.E's potential in generating realistic agricultural image
datasets and thus accelerating the development and adoption of imaging-based
precision agricultural solutions. In the future, DALL.E, along with other
LLM-based image generation models such as MidJourney, Stable
Diffusion, Craiyon, Imagen, Parti, DreamStudio, Make-A-Scene, DeepDream, and
VQ-GAN + CLIP could demonstrate further significant potential for enhancing
image clarity, quality, and realism in depicting agricultural environments,
which could revolutionize precision farming practices.
| [
{
"version": "v1",
"created": "Mon, 17 Jul 2023 19:17:10 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Mar 2024 17:47:38 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Mar 2024 18:38:18 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Aug 2024 16:43:17 GMT"
},
{
"version": "v5",
"created": "Sat, 15 Mar 2025 17:13:08 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sapkota",
"Ranjan",
""
],
[
"Karkee",
"Manoj",
""
]
] | TITLE: Generative AI in Agriculture: Creating Image Datasets Using DALL.E's
Advanced Large Language Model Capabilities
ABSTRACT: This research investigated the role of artificial intelligence (AI),
specifically the DALL.E model by OpenAI, in advancing data generation and
visualization techniques in agriculture. DALL.E, an advanced AI image
generator, works alongside ChatGPT's language processing to transform text
descriptions and image clues into realistic visual representations of the
content. The study used both approaches of image generation: text-to-image and
image-to-image (variation). Six types of datasets depicting fruit crop
environments were generated. These AI-generated images were then compared
against ground truth images captured by sensors in real agricultural fields.
The comparison was based on Peak Signal-to-Noise Ratio (PSNR) and Feature
Similarity Index (FSIM) metrics. The image-to-image generation exhibited a
5.78% increase in average PSNR over text-to-image methods, signifying superior
image clarity and quality. However, this method also resulted in a 10.23%
decrease in average FSIM, indicating a diminished structural and textural
similarity to the original images. Consistent with these measures, human
evaluation also showed that images generated using the image-to-image method
were more realistic than those generated with the text-to-image approach. The results
highlighted DALL.E's potential in generating realistic agricultural image
datasets and thus accelerating the development and adoption of imaging-based
precision agricultural solutions. In the future, DALL.E, along with other
LLM-based image generation models such as MidJourney, Stable
Diffusion, Craiyon, Imagen, Parti, DreamStudio, Make-A-Scene, DeepDream, and
VQ-GAN + CLIP could demonstrate further significant potential for enhancing
image clarity, quality, and realism in depicting agricultural environments,
which could revolutionize precision farming practices.
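A minimal sketch of the PSNR computation underlying the comparison above (FSIM requires a more involved implementation, e.g. phase congruency maps, and is omitted here):

```python
import numpy as np

def psnr(reference, generated, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64)
                   - generated.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 256, (256, 256, 3))          # field image
candidate = np.clip(ground_truth
                    + rng.normal(0, 8, ground_truth.shape), 0, 255)
print(f"PSNR: {psnr(ground_truth, candidate):.2f} dB")
```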
|
2308.06743 | Baolin Liu | Baolin Liu and Zongyuan Yang and Pengfei Wang and Junjie Zhou and Ziqi
Liu and Ziyi Song and Yan Liu and Yongping Xiong | TextDiff: Mask-Guided Residual Diffusion Models for Scene Text Image
Super-Resolution | Correct and update some data | Pattern Recognition (2025) | 10.1016/j.patcog.2025.111513 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of scene text image super-resolution is to reconstruct
high-resolution text-line images from unrecognizable low-resolution inputs. The
existing methods relying on the optimization of pixel-level loss tend to yield
text edges that exhibit a notable degree of blurring, thereby exerting a
substantial impact on both the readability and recognizability of the text. To
address these issues, we propose TextDiff, the first diffusion-based framework
tailored for scene text image super-resolution. It contains two modules: the
Text Enhancement Module (TEM) and the Mask-Guided Residual Diffusion Module
(MRD). The TEM generates an initial deblurred text image and a mask that
encodes the spatial location of the text. The MRD is responsible for
effectively sharpening the text edge by modeling the residuals between the
ground-truth images and the initial deblurred images. Extensive experiments
demonstrate that our TextDiff achieves state-of-the-art (SOTA) performance on
public benchmark datasets and can improve the readability of scene text images.
Moreover, our proposed MRD module is plug-and-play and effectively sharpens
the text edges produced by SOTA methods. This enhancement not only improves the
readability and recognizability of the results generated by SOTA methods but
also does not require any additional joint training. Code is available at
https://github.com/Lenubolim/TextDiff.
| [
{
"version": "v1",
"created": "Sun, 13 Aug 2023 11:02:16 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 08:22:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Baolin",
""
],
[
"Yang",
"Zongyuan",
""
],
[
"Wang",
"Pengfei",
""
],
[
"Zhou",
"Junjie",
""
],
[
"Liu",
"Ziqi",
""
],
[
"Song",
"Ziyi",
""
],
[
"Liu",
"Yan",
""
],
[
"Xiong",
"Yongping",
""
]
] | TITLE: TextDiff: Mask-Guided Residual Diffusion Models for Scene Text Image
Super-Resolution
ABSTRACT: The goal of scene text image super-resolution is to reconstruct
high-resolution text-line images from unrecognizable low-resolution inputs. The
existing methods relying on the optimization of pixel-level loss tend to yield
text edges that exhibit a notable degree of blurring, thereby exerting a
substantial impact on both the readability and recognizability of the text. To
address these issues, we propose TextDiff, the first diffusion-based framework
tailored for scene text image super-resolution. It contains two modules: the
Text Enhancement Module (TEM) and the Mask-Guided Residual Diffusion Module
(MRD). The TEM generates an initial deblurred text image and a mask that
encodes the spatial location of the text. The MRD is responsible for
effectively sharpening the text edge by modeling the residuals between the
ground-truth images and the initial deblurred images. Extensive experiments
demonstrate that our TextDiff achieves state-of-the-art (SOTA) performance on
public benchmark datasets and can improve the readability of scene text images.
Moreover, our proposed MRD module is plug-and-play and effectively sharpens
the text edges produced by SOTA methods. This enhancement not only improves the
readability and recognizability of the results generated by SOTA methods but
also does not require any additional joint training. Code is available at
https://github.com/Lenubolim/TextDiff.
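A hedged sketch of a mask-guided residual objective consistent with the description above: the model is trained to predict the residual between the ground truth and the initial deblurred image, with the text mask up-weighting text pixels. The exact loss form and weighting are assumptions:

```python
import torch

def mrd_loss(pred_residual, hr_image, init_deblurred, text_mask, lam=2.0):
    target = hr_image - init_deblurred        # residual to be modeled
    weight = 1.0 + lam * text_mask            # emphasize text-region pixels
    return (weight * (pred_residual - target) ** 2).mean()

hr = torch.rand(1, 3, 32, 128)                    # text-line image
init = hr + 0.1 * torch.randn_like(hr)            # TEM's initial deblur
mask = (torch.rand(1, 1, 32, 128) > 0.7).float()  # TEM's text mask
pred = torch.zeros_like(hr)
print(mrd_loss(pred, hr, init, mask).item())
```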
|
2308.15568 | Singh Akansha | Singh Akansha | Over-Squashing in Graph Neural Networks: A Comprehensive survey | 18 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) revolutionize machine learning for
graph-structured data, effectively capturing complex relationships. They
disseminate information through interconnected nodes, but long-range
interactions face challenges known as "over-squashing". This survey delves into
the challenge of over-squashing in GNNs, where
long-range information dissemination is hindered, impacting tasks reliant on
intricate long-distance interactions. It comprehensively explores the causes,
consequences, and mitigation strategies for over-squashing. Various
methodologies are reviewed, including graph rewiring, novel normalization,
spectral analysis, and curvature-based strategies, with a focus on their
trade-offs and effectiveness. The survey also discusses the interplay between
over-squashing and other GNN limitations, such as over-smoothing, and provides
a taxonomy of models designed to address these issues in node and graph-level
tasks. Benchmark datasets for performance evaluation are also detailed, making
this survey a valuable resource for researchers and practitioners in the GNN
field.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 18:46:15 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 11:54:33 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Sep 2023 13:06:01 GMT"
},
{
"version": "v4",
"created": "Sat, 21 Oct 2023 09:39:48 GMT"
},
{
"version": "v5",
"created": "Tue, 28 Nov 2023 11:03:06 GMT"
},
{
"version": "v6",
"created": "Mon, 29 Apr 2024 14:15:42 GMT"
},
{
"version": "v7",
"created": "Fri, 14 Mar 2025 20:10:31 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Akansha",
"Singh",
""
]
] | TITLE: Over-Squashing in Graph Neural Networks: A Comprehensive survey
ABSTRACT: Graph Neural Networks (GNNs) revolutionize machine learning for
graph-structured data, effectively capturing complex relationships. They
disseminate information through interconnected nodes, but long-range
interactions face challenges known as "over-squashing". This survey delves into
the challenge of over-squashing in GNNs, where
long-range information dissemination is hindered, impacting tasks reliant on
intricate long-distance interactions. It comprehensively explores the causes,
consequences, and mitigation strategies for over-squashing. Various
methodologies are reviewed, including graph rewiring, novel normalization,
spectral analysis, and curvature-based strategies, with a focus on their
trade-offs and effectiveness. The survey also discusses the interplay between
over-squashing and other GNN limitations, such as over-smoothing, and provides
a taxonomy of models designed to address these issues in node and graph-level
tasks. Benchmark datasets for performance evaluation are also detailed, making
this survey a valuable resource for researchers and practitioners in the GNN
field.
|
2310.07049 | Yiming Chen | Yiming Chen, Chi Chen, Inhui Hwang, Michael J. Davis, Wanli Yang,
Chengjun Sun, Gi-Hyeok Lee, Dylan McReynolds, Daniel Allen, Juan Marulanda
Arias, Shyue Ping Ong and Maria K.Y. Chan | Robust Machine Learning Inference from X-ray Absorption Near Edge
Spectra through Featurization | null | Chemistry of Materials 36.5 (2024): 2304-2313 | 10.1021/acs.chemmater.3c02584 | null | physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | X-ray absorption spectroscopy (XAS) is a commonly-employed technique for
characterizing functional materials. In particular, x-ray absorption near edge
spectra (XANES) encode local coordination and electronic information, and
machine learning approaches to extract this information are of significant
interest. To date, most ML approaches for XANES have primarily focused on using
the raw spectral intensities as input, overlooking the potential benefits of
incorporating spectral transformations and dimensionality reduction techniques
into ML predictions. In this work, we focused on systematically comparing the
impact of different featurization methods on the performance of ML models for
XAS analysis. We evaluated the classification and regression capabilities of
these models on computed datasets and validated their performance on previously
unseen experimental datasets. Our analysis revealed an intriguing discovery:
the cumulative distribution function (CDF) feature achieves both high
prediction accuracy and exceptional transferability. This remarkably robust
performance can be attributed to its tolerance to horizontal shifts in spectra,
which is crucial when validating models using experimental data. While this
work exclusively focuses on XANES analysis, we anticipate that the methodology
presented here will hold promise as a versatile asset to the broader
spectroscopy community.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 22:23:36 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 23:05:41 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Yiming",
""
],
[
"Chen",
"Chi",
""
],
[
"Hwang",
"Inhui",
""
],
[
"Davis",
"Michael J.",
""
],
[
"Yang",
"Wanli",
""
],
[
"Sun",
"Chengjun",
""
],
[
"Lee",
"Gi-Hyeok",
""
],
[
"McReynolds",
"Dylan",
""
],
[
"Allen",
"Daniel",
""
],
[
"Arias",
"Juan Marulanda",
""
],
[
"Ong",
"Shyue Ping",
""
],
[
"Chan",
"Maria K. Y.",
""
]
] | TITLE: Robust Machine Learning Inference from X-ray Absorption Near Edge
Spectra through Featurization
ABSTRACT: X-ray absorption spectroscopy (XAS) is a commonly-employed technique for
characterizing functional materials. In particular, x-ray absorption near edge
spectra (XANES) encode local coordination and electronic information, and
machine learning approaches to extract this information are of significant
interest. To date, most ML approaches for XANES have primarily focused on using
the raw spectral intensities as input, overlooking the potential benefits of
incorporating spectral transformations and dimensionality reduction techniques
into ML predictions. In this work, we focused on systematically comparing the
impact of different featurization methods on the performance of ML models for
XAS analysis. We evaluated the classification and regression capabilities of
these models on computed datasets and validated their performance on previously
unseen experimental datasets. Our analysis revealed an intriguing discovery:
the cumulative distribution function (CDF) feature achieves both high
prediction accuracy and exceptional transferability. This remarkably robust
performance can be attributed to its tolerance to horizontal shifts in spectra,
which is crucial when validating models using experimental data. While this
work exclusively focuses on XANES analysis, we anticipate that the methodology
presented here will hold promise as a versatile asset to the broader
spectroscopy community.
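A hedged sketch of the CDF featurization highlighted above: the spectrum is normalized and cumulatively summed, which damps the effect of small horizontal (energy-axis) shifts relative to raw intensities. The grid and spectra here are synthetic placeholders:

```python
import numpy as np

def cdf_feature(intensity):
    intensity = np.clip(intensity, 0, None)        # guard against noise dips
    return np.cumsum(intensity) / intensity.sum()  # unit-normalized CDF

e = np.linspace(7100, 7200, 400)                   # e.g. Fe K-edge grid (eV)
spec = np.exp(-0.5 * ((e - 7130) / 5) ** 2)
shifted = np.exp(-0.5 * ((e - 7131) / 5) ** 2)     # 1 eV horizontal shift
print(np.abs(spec - shifted).max())                # raw-intensity change
print(np.abs(cdf_feature(spec) - cdf_feature(shifted)).max())  # smaller
```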
|
2311.05589 | Yida Yin | Yida Yin, Zhiqiu Xu, Zhiyuan Li, Trevor Darrell, Zhuang Liu | A Coefficient Makes SVRG Effective | Published in ICLR 2025 | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic Variance Reduced Gradient (SVRG), introduced by Johnson & Zhang
(2013), is a theoretically compelling optimization method. However, as Defazio
& Bottou (2019) highlight, its effectiveness in deep learning is yet to be
proven. In this work, we demonstrate the potential of SVRG in optimizing
real-world neural networks. Our empirical analysis finds that, for deeper
neural networks, the strength of the variance reduction term in SVRG should be
smaller and decrease as training progresses. Inspired by this, we introduce a
multiplicative coefficient $\alpha$ to control the strength and adjust it
through a linear decay schedule. We name our method $\alpha$-SVRG. Our results
show $\alpha$-SVRG better optimizes models, consistently reducing training loss
compared to the baseline and standard SVRG across various model architectures
and multiple image classification datasets. We hope our findings encourage
further exploration into variance reduction techniques in deep learning. Code
is available at github.com/davidyyd/alpha-SVRG.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2023 18:47:44 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 11:14:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yin",
"Yida",
""
],
[
"Xu",
"Zhiqiu",
""
],
[
"Li",
"Zhiyuan",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Liu",
"Zhuang",
""
]
] | TITLE: A Coefficient Makes SVRG Effective
ABSTRACT: Stochastic Variance Reduced Gradient (SVRG), introduced by Johnson & Zhang
(2013), is a theoretically compelling optimization method. However, as Defazio
& Bottou (2019) highlight, its effectiveness in deep learning is yet to be
proven. In this work, we demonstrate the potential of SVRG in optimizing
real-world neural networks. Our empirical analysis finds that, for deeper
neural networks, the strength of the variance reduction term in SVRG should be
smaller and decrease as training progresses. Inspired by this, we introduce a
multiplicative coefficient $\alpha$ to control the strength and adjust it
through a linear decay schedule. We name our method $\alpha$-SVRG. Our results
show $\alpha$-SVRG better optimizes models, consistently reducing training loss
compared to the baseline and standard SVRG across various model architectures
and multiple image classification datasets. We hope our findings encourage
further exploration into variance reduction techniques in deep learning. Code
is available at github.com/davidyyd/alpha-SVRG.
|
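The modification described in the record above reduces to scaling SVRG's control variate by a coefficient that decays linearly. A sketch of the update under that description (tensor layout, function names, and the decay endpoint are assumptions; the authors' reference code is at github.com/davidyyd/alpha-SVRG):

```python
import torch

def alpha_svrg_step(params, grads, snapshot_grads, full_grad, alpha, lr):
    """One alpha-SVRG update on a list of parameter tensors.

    grads:          minibatch gradients at the current iterate
    snapshot_grads: minibatch gradients at the snapshot iterate
    full_grad:      full-batch gradient at the snapshot
    alpha = 1 recovers standard SVRG; alpha = 0 recovers plain SGD.
    """
    with torch.no_grad():
        for p, g, g_snap, mu in zip(params, grads, snapshot_grads, full_grad):
            p -= lr * (g - alpha * (g_snap - mu))

def linear_decay(alpha0: float, step: int, total_steps: int) -> float:
    """Linearly decay the coefficient from alpha0 toward 0 over training."""
    return alpha0 * max(0.0, 1.0 - step / total_steps)
```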
2312.10766 | Xiaoyu Zhang | Xiaoyu Zhang, Cen Zhang, Tianlin Li, Yihao Huang, Xiaojun Jia, Ming
Hu, Jie Zhang, Yang Liu, Shiqing Ma, Chao Shen | JailGuard: A Universal Detection Framework for LLM Prompt-based Attacks | 40 pages, 12 figures | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | The systems and software powered by Large Language Models (LLMs) and
Multi-Modal LLMs (MLLMs) have played a critical role in numerous scenarios.
However, current LLM systems are vulnerable to prompt-based attacks, with
jailbreaking attacks enabling the LLM system to generate harmful content, while
hijacking attacks manipulate the LLM system to perform attacker-desired tasks,
underscoring the necessity for detection tools. Unfortunately, existing
detection approaches are usually tailored to specific attacks, resulting in
poor generalization in detecting various attacks across different modalities.
To address this, we propose JailGuard, a universal detection framework deployed
on top of LLM systems for prompt-based attacks across text and image
modalities. JailGuard operates on the principle that attacks are inherently
less robust than benign inputs. Specifically, JailGuard mutates untrusted inputs
to generate variants and leverages the discrepancy of the variants' responses
on the target model to distinguish attack samples from benign samples. We
implement 18 mutators for text and image inputs and design a mutator
combination policy to further improve detection generalization. The evaluation
on the dataset containing 15 known attack types suggests that JailGuard
achieves the best detection accuracy of 86.14%/82.90% on text and image inputs,
outperforming state-of-the-art methods by 11.81%-25.73% and 12.20%-21.40%.
| [
{
"version": "v1",
"created": "Sun, 17 Dec 2023 17:02:14 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Dec 2023 14:17:31 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jun 2024 02:21:02 GMT"
},
{
"version": "v4",
"created": "Sat, 15 Mar 2025 00:49:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Xiaoyu",
""
],
[
"Zhang",
"Cen",
""
],
[
"Li",
"Tianlin",
""
],
[
"Huang",
"Yihao",
""
],
[
"Jia",
"Xiaojun",
""
],
[
"Hu",
"Ming",
""
],
[
"Zhang",
"Jie",
""
],
[
"Liu",
"Yang",
""
],
[
"Ma",
"Shiqing",
""
],
[
"Shen",
"Chao",
""
]
] | TITLE: JailGuard: A Universal Detection Framework for LLM Prompt-based Attacks
ABSTRACT: The systems and software powered by Large Language Models (LLMs) and
Multi-Modal LLMs (MLLMs) have played a critical role in numerous scenarios.
However, current LLM systems are vulnerable to prompt-based attacks, with
jailbreaking attacks enabling the LLM system to generate harmful content, while
hijacking attacks manipulate the LLM system to perform attacker-desired tasks,
underscoring the necessity for detection tools. Unfortunately, existing
detection approaches are usually tailored to specific attacks, resulting in
poor generalization in detecting various attacks across different modalities.
To address this, we propose JailGuard, a universal detection framework deployed
on top of LLM systems for prompt-based attacks across text and image
modalities. JailGuard operates on the principle that attacks are inherently
less robust than benign inputs. Specifically, JailGuard mutates untrusted inputs
to generate variants and leverages the discrepancy of the variants' responses
on the target model to distinguish attack samples from benign samples. We
implement 18 mutators for text and image inputs and design a mutator
combination policy to further improve detection generalization. The evaluation
on the dataset containing 15 known attack types suggests that JailGuard
achieves the best detection accuracy of 86.14%/82.90% on text and image inputs,
outperforming state-of-the-art methods by 11.81%-25.73% and 12.20%-21.40%.
|
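The detection principle in the record above, that attacks survive input mutation less gracefully than benign inputs, fits in a short sketch. Everything below is a placeholder rather than the released tool: query_model, embed, the single toy mutator, and the threshold are all caller-supplied assumptions, whereas JailGuard itself implements 18 mutators and a combination policy.

```python
import random
import numpy as np
from typing import Callable

def drop_words(prompt: str, p: float = 0.1) -> str:
    """Toy text mutator: drop each word with probability p."""
    kept = [w for w in prompt.split() if random.random() > p]
    return " ".join(kept) if kept else prompt

def looks_like_attack(prompt: str,
                      query_model: Callable[[str], str],
                      embed: Callable[[str], np.ndarray],
                      n_variants: int = 8,
                      threshold: float = 0.35) -> bool:
    """Flag inputs whose mutated variants yield unusually divergent responses."""
    responses = [query_model(drop_words(prompt)) for _ in range(n_variants)]
    vecs = np.stack([embed(r) for r in responses])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12
    sims = vecs @ vecs.T  # pairwise cosine similarities
    n = n_variants
    mean_distance = 1.0 - (sims.sum() - n) / (n * (n - 1))
    return mean_distance > threshold  # benign inputs tend to stay consistent
```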
2312.12634 | Payam Jome Yazdian | Payam Jome Yazdian, Rachel Lagasse, Hamid Mohammadi, Eric Liu, Li
Cheng, Angelica Lim | MotionScript: Natural Language Descriptions for Expressive 3D Human
Motions | Project webpage: https://pjyazdian.github.io/MotionScript | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce MotionScript, a novel framework for generating highly detailed,
natural language descriptions of 3D human motions. Unlike existing motion
datasets that rely on broad action labels or generic captions, MotionScript
provides fine-grained, structured descriptions that capture the full complexity
of human movement including expressive actions (e.g., emotions, stylistic
walking) and interactions beyond standard motion capture datasets. MotionScript
serves as both a descriptive tool and a training resource for text-to-motion
models, enabling the synthesis of highly realistic and diverse human motions
from text. By augmenting motion datasets with MotionScript captions, we
demonstrate significant improvements in out-of-distribution motion generation,
allowing large language models (LLMs) to generate motions that extend beyond
existing data. Additionally, MotionScript opens new applications in animation,
virtual human simulation, and robotics, providing an interpretable bridge
between intuitive descriptions and motion synthesis. To the best of our
knowledge, this is the first attempt to systematically translate 3D motion into
structured natural language without requiring training data.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 22:33:17 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Sep 2024 20:24:27 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 21:16:45 GMT"
},
{
"version": "v4",
"created": "Sun, 16 Mar 2025 17:50:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yazdian",
"Payam Jome",
""
],
[
"Lagasse",
"Rachel",
""
],
[
"Mohammadi",
"Hamid",
""
],
[
"Liu",
"Eric",
""
],
[
"Cheng",
"Li",
""
],
[
"Lim",
"Angelica",
""
]
] | TITLE: MotionScript: Natural Language Descriptions for Expressive 3D Human
Motions
ABSTRACT: We introduce MotionScript, a novel framework for generating highly detailed,
natural language descriptions of 3D human motions. Unlike existing motion
datasets that rely on broad action labels or generic captions, MotionScript
provides fine-grained, structured descriptions that capture the full complexity
of human movement including expressive actions (e.g., emotions, stylistic
walking) and interactions beyond standard motion capture datasets. MotionScript
serves as both a descriptive tool and a training resource for text-to-motion
models, enabling the synthesis of highly realistic and diverse human motions
from text. By augmenting motion datasets with MotionScript captions, we
demonstrate significant improvements in out-of-distribution motion generation,
allowing large language models (LLMs) to generate motions that extend beyond
existing data. Additionally, MotionScript opens new applications in animation,
virtual human simulation, and robotics, providing an interpretable bridge
between intuitive descriptions and motion synthesis. To the best of our
knowledge, this is the first attempt to systematically translate 3D motion into
structured natural language without requiring training data.
|
2401.05787 | Md Rizwan Parvez | Md Rizwan Parvez | Chain of Evidences and Evidence to Generate: Prompting for Context
Grounded and Retrieval Augmented Reasoning | Accepted at NAACL KnowledgeNLP 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | While chain-of-thoughts (CoT) prompting has revolutionized how LLMs perform
reasoning tasks, its current methods and variations (e.g., Self-consistency,
ReACT, Reflexion, Tree-of-Thoughts (ToT), Cumulative Reasoning (CR), etc.)
suffer from limitations like limited context grounding,
hallucination/inconsistent output generation, and iterative sluggishness. To
overcome these challenges, we introduce a novel mono/dual-step zero-shot
prompting framework built upon two unique strategies, Chain of Evidences (CoE)
and Evidence to Generate (E2G). Instead of unverified reasoning claims, our
innovative approaches leverage the power of "evidence for decision making" by
first focusing exclusively on the thought sequences explicitly mentioned in the
context which then serve as extracted evidence, guiding the LLM's output
generation process with greater precision and efficiency. This simple yet
potent approach unlocks the full potential of chain-of-thoughts prompting,
facilitating faster, more reliable, and contextually aware reasoning in LLMs.
Our framework consistently achieves remarkable results across various
knowledge-intensive reasoning and generation tasks, surpassing baseline
approaches with state-of-the-art LLMs. For instance, (i) on the LogiQA
benchmark using GPT-4, CoE achieves a new state-of-the-art accuracy of 53.8%,
surpassing CoT by 18%, ToT by 11%, and CR by 9%; (ii) CoE with PaLM-2
outperforms the variable-shot performance of Gemini Ultra by 0.9 F1 points,
achieving an F1 score of 83.3 on DROP. We release our prompts and outputs on
these benchmarks as a new instruction tuning dataset for future research at
https://huggingface.co/datasets/kagnlp/Chain-of-Evidences/.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2024 09:49:15 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 10:35:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Parvez",
"Md Rizwan",
""
]
] | TITLE: Chain of Evidences and Evidence to Generate: Prompting for Context
Grounded and Retrieval Augmented Reasoning
ABSTRACT: While chain-of-thoughts (CoT) prompting has revolutionized how LLMs perform
reasoning tasks, its current methods and variations (e.g., Self-consistency,
ReACT, Reflexion, Tree-of-Thoughts (ToT), Cumulative Reasoning (CR), etc.)
suffer from limitations like limited context grounding,
hallucination/inconsistent output generation, and iterative sluggishness. To
overcome these challenges, we introduce a novel mono/dual-step zero-shot
prompting framework built upon two unique strategies, Chain of Evidences (CoE)
and Evidence to Generate (E2G). Instead of unverified reasoning claims, our
innovative approaches leverage the power of "evidence for decision making" by
first focusing exclusively on the thought sequences explicitly mentioned in the
context which then serve as extracted evidence, guiding the LLM's output
generation process with greater precision and efficiency. This simple yet
potent approach unlocks the full potential of chain-of-thoughts prompting,
facilitating faster, more reliable, and contextually aware reasoning in LLMs.
Our framework consistently achieves remarkable results across various
knowledge-intensive reasoning and generation tasks, surpassing baseline
approaches with state-of-the-art LLMs. For instance, (i) on the LogiQA
benchmark using GPT-4, CoE achieves a new state-of-the-art accuracy of 53.8%,
surpassing CoT by 18%, ToT by 11%, and CR by 9%; (ii) CoE with PaLM-2
outperforms the variable-shot performance of Gemini Ultra by 0.9 F1 points,
achieving an F1 score of 83.3 on DROP. We release our prompts and outputs on
these benchmarks as a new instruction tuning dataset for future research at
https://huggingface.co/datasets/kagnlp/Chain-of-Evidences/.
|
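The dual-step scheme in the record above separates evidence extraction from answer generation. A hedged sketch of such a pipeline follows; the prompt wording is invented for illustration, and the authors' actual prompts are in their released dataset:

```python
EVIDENCE_PROMPT = """Context:
{context}

Question: {question}

List only the sentences from the context above that are directly relevant to
answering the question. Do not add reasoning of your own."""

ANSWER_PROMPT = """Evidence:
{evidence}

Question: {question}

Using only the evidence above, answer the question."""

def evidence_to_generate(llm, context: str, question: str) -> str:
    """Dual-step zero-shot prompting in the spirit of E2G: extract
    context-grounded evidence first, then generate an answer from it.
    `llm` is any callable mapping a prompt string to a completion."""
    evidence = llm(EVIDENCE_PROMPT.format(context=context, question=question))
    return llm(ANSWER_PROMPT.format(evidence=evidence, question=question))
```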
2401.08957 | Kun Wu | Kun Wu, Ning Liu, Zhen Zhao, Di Qiu, Jinming Li, Zhengping Che,
Zhiyuan Xu, Jian Tang | Learning from Imperfect Demonstrations with Self-Supervision for Robotic
Manipulation | 8 pages, 4 figures | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Improving data utilization, especially for imperfect data from task failures,
is crucial for robotic manipulation due to the challenging, time-consuming, and
expensive data collection process in the real world. Current imitation learning
(IL) typically discards imperfect data, focusing solely on successful expert
data. While reinforcement learning (RL) can learn from explorations and
failures, the sim2real gap and its reliance on dense reward and online
exploration make it difficult to apply effectively in real-world scenarios. In
this work, we aim to address the challenge of leveraging imperfect data without
the need for reward information to improve the model performance for robotic
manipulation in an offline manner. Specifically, we introduce a Self-Supervised
Data Filtering framework (SSDF) that combines expert and imperfect data to
compute quality scores for failed trajectory segments. High-quality segments
from the failed data are used to expand the training dataset. Then, the
enhanced dataset can be used with any downstream policy learning method for
robotic manipulation tasks. Extensive experiments on the ManiSkill2 benchmark
built on the high-fidelity Sapien simulator and real-world robotic manipulation
tasks using the Franka robot arm demonstrated that the SSDF can accurately
expand the training dataset with high-quality imperfect data and improve the
success rates for all robotic manipulation tasks.
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2024 04:15:56 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2025 06:41:03 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 06:17:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wu",
"Kun",
""
],
[
"Liu",
"Ning",
""
],
[
"Zhao",
"Zhen",
""
],
[
"Qiu",
"Di",
""
],
[
"Li",
"Jinming",
""
],
[
"Che",
"Zhengping",
""
],
[
"Xu",
"Zhiyuan",
""
],
[
"Tang",
"Jian",
""
]
] | TITLE: Learning from Imperfect Demonstrations with Self-Supervision for Robotic
Manipulation
ABSTRACT: Improving data utilization, especially for imperfect data from task failures,
is crucial for robotic manipulation due to the challenging, time-consuming, and
expensive data collection process in the real world. Current imitation learning
(IL) typically discards imperfect data, focusing solely on successful expert
data. While reinforcement learning (RL) can learn from explorations and
failures, the sim2real gap and its reliance on dense reward and online
exploration make it difficult to apply effectively in real-world scenarios. In
this work, we aim to address the challenge of leveraging imperfect data without
the need for reward information to improve the model performance for robotic
manipulation in an offline manner. Specifically, we introduce a Self-Supervised
Data Filtering framework (SSDF) that combines expert and imperfect data to
compute quality scores for failed trajectory segments. High-quality segments
from the failed data are used to expand the training dataset. Then, the
enhanced dataset can be used with any downstream policy learning method for
robotic manipulation tasks. Extensive experiments on the ManiSkill2 benchmark
built on the high-fidelity Sapien simulator and real-world robotic manipulation
tasks using the Franka robot arm demonstrated that the SSDF can accurately
expand the training dataset with high-quality imperfect data and improve the
success rates for all robotic manipulation tasks.
|
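The core loop in the record above, scoring failed-trajectory segments and promoting the best into the training set, can be sketched as follows. The quality score here is a deliberate placeholder (mean distance to the nearest expert state); SSDF learns its score with self-supervision, which is the paper's actual contribution.

```python
import numpy as np

def filter_failed_segments(expert_states: np.ndarray,
                           failed_segments: list,
                           keep_ratio: float = 0.3) -> list:
    """Keep the most expert-like segments from failed trajectories.

    expert_states:   (N, d) states drawn from expert demonstrations
    failed_segments: list of (T_i, d) state arrays from failed rollouts
    """
    def score(segment: np.ndarray) -> float:
        # Lower = closer to expert behavior (placeholder quality score).
        dists = np.linalg.norm(segment[:, None, :] - expert_states[None], axis=-1)
        return float(dists.min(axis=1).mean())

    ranked = sorted(failed_segments, key=score)
    n_keep = max(1, int(keep_ratio * len(ranked)))
    return ranked[:n_keep]  # append these to the imitation training set
```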
2401.16515 | Sean Lam | Sean Lam, Ahmed Khaled, Simon Bilodeau, Bicky A. Marquez, Paul R.
Prucnal, Lukas Chrostowski, Bhavin J. Shastri, Sudip Shekhar | Neuromorphic Photonic Computing with an Electro-Optic Analog Memory | null | null | null | null | cs.ET cs.SY eess.SP eess.SY physics.optics | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence (AI) has seen remarkable advancements across various
domains, including natural language processing, computer vision, autonomous
vehicles, and biology. However, the rapid expansion of AI technologies has
escalated the demand for more powerful computing resources. As digital
computing approaches fundamental limits, neuromorphic photonics emerges as a
promising platform to complement existing digital systems. In neuromorphic
photonic computing, photonic devices are controlled using analog signals. This
necessitates the use of digital-to-analog converters (DAC) and
analog-to-digital converters (ADC) for interfacing with these devices during
inference and training. However, data movement between memory and these
converters in conventional von Neumann computing architectures consumes energy.
To address this, analog memory co-located with photonic computing devices is
proposed. This approach aims to reduce the reliance on DACs and minimize data
movement to enhance compute efficiency. This paper demonstrates a
monolithically integrated neuromorphic photonic circuit with co-located
capacitive analog memory and analyzes analog memory specifications for
neuromorphic photonic computing using the MNIST dataset as a benchmark.
| [
{
"version": "v1",
"created": "Mon, 29 Jan 2024 19:37:50 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Sep 2024 23:55:57 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 21:58:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lam",
"Sean",
""
],
[
"Khaled",
"Ahmed",
""
],
[
"Bilodeau",
"Simon",
""
],
[
"Marquez",
"Bicky A.",
""
],
[
"Prucnal",
"Paul R.",
""
],
[
"Chrostowski",
"Lukas",
""
],
[
"Shastri",
"Bhavin J.",
""
],
[
"Shekhar",
"Sudip",
""
]
] | TITLE: Neuromorphic Photonic Computing with an Electro-Optic Analog Memory
ABSTRACT: Artificial intelligence (AI) has seen remarkable advancements across various
domains, including natural language processing, computer vision, autonomous
vehicles, and biology. However, the rapid expansion of AI technologies has
escalated the demand for more powerful computing resources. As digital
computing approaches fundamental limits, neuromorphic photonics emerges as a
promising platform to complement existing digital systems. In neuromorphic
photonic computing, photonic devices are controlled using analog signals. This
necessitates the use of digital-to-analog converters (DAC) and
analog-to-digital converters (ADC) for interfacing with these devices during
inference and training. However, data movement between memory and these
converters in conventional von Neumann computing architectures consumes energy.
To address this, analog memory co-located with photonic computing devices is
proposed. This approach aims to reduce the reliance on DACs and minimize data
movement to enhance compute efficiency. This paper demonstrates a
monolithically integrated neuromorphic photonic circuit with co-located
capacitive analog memory and analyzes analog memory specifications for
neuromorphic photonic computing using the MNIST dataset as a benchmark.
|
2402.04398 | Sujay Nagaraj | Sujay Nagaraj, Walter Gerych, Sana Tonekaboni, Anna Goldenberg, Berk
Ustun, Thomas Hartvigsen | Learning under Temporal Label Noise | The Thirteenth International Conference on Learning Representations
(ICLR 2025) | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Many time series classification tasks, where labels vary over time, are
affected by label noise that also varies over time. Such noise can cause label
quality to improve, worsen, or periodically change over time. We first propose
and formalize temporal label noise, an unstudied problem for sequential
classification of time series. In this setting, multiple labels are recorded
over time while being corrupted by a time-dependent noise function. We
demonstrate the importance of modeling the temporal nature of the label noise
function and how existing methods will consistently underperform. We then
propose methods to train noise-tolerant classifiers by estimating the temporal
label noise function directly from data. We show that our methods lead to
state-of-the-art performance under diverse types of temporal label noise on
real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 6 Feb 2024 20:56:31 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 09:14:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Nagaraj",
"Sujay",
""
],
[
"Gerych",
"Walter",
""
],
[
"Tonekaboni",
"Sana",
""
],
[
"Goldenberg",
"Anna",
""
],
[
"Ustun",
"Berk",
""
],
[
"Hartvigsen",
"Thomas",
""
]
] | TITLE: Learning under Temporal Label Noise
ABSTRACT: Many time series classification tasks, where labels vary over time, are
affected by label noise that also varies over time. Such noise can cause label
quality to improve, worsen, or periodically change over time. We first propose
and formalize temporal label noise, an unstudied problem for sequential
classification of time series. In this setting, multiple labels are recorded
over time while being corrupted by a time-dependent noise function. We
demonstrate the importance of modeling the temporal nature of the label noise
function and how existing methods will consistently underperform. We then
propose methods to train noise-tolerant classifiers by estimating the temporal
label noise function directly from data. We show that our methods lead to
state-of-the-art performance under diverse types of temporal label noise on
real-world datasets.
|
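Once a time-dependent noise function is in hand, one standard way to train a noise-tolerant classifier is forward loss correction with a per-step noise matrix. A sketch assuming the matrix is already estimated (estimating it directly from data is the contribution described above):

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits: torch.Tensor,
                           noisy_labels: torch.Tensor,
                           Q_t: torch.Tensor) -> torch.Tensor:
    """Forward correction with a time-dependent noise matrix.

    logits:       (T, C) model outputs over a length-T sequence
    noisy_labels: (T,)   observed, possibly corrupted labels
    Q_t:          (T, C, C) with Q_t[t, i, j] = P(observed j | true i),
                  assumed known here for the sake of the sketch
    """
    clean_probs = F.softmax(logits, dim=-1)                  # (T, C)
    noisy_probs = torch.einsum("tc,tcj->tj", clean_probs, Q_t)
    return F.nll_loss(torch.log(noisy_probs + 1e-8), noisy_labels)
```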
2402.06423 | Erkang Cheng | Yifeng Bai, Zhirong Chen, Pengpeng Liang, Bo Song, Erkang Cheng | CurveFormer++: 3D Lane Detection by Curve Propagation with Temporal
Curve Queries and Attention | arXiv admin note: text overlap with arXiv:2209.07989 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In autonomous driving, accurate 3D lane detection using monocular cameras is
important for downstream tasks. Recent CNN and Transformer approaches usually
apply a two-stage model design. The first stage transforms the image feature
from a front image into a bird's-eye-view (BEV) representation. Subsequently, a
sub-network processes the BEV feature to generate the 3D detection results.
However, these approaches heavily rely on a challenging image feature
transformation module from a perspective view to a BEV representation. In our
work, we present CurveFormer++, a single-stage Transformer-based method that
does not require the view transform module and directly infers 3D lane results
from the perspective image features. Specifically, our approach models the 3D
lane detection task as a curve propagation problem, where each lane is
represented by a curve query with a dynamic and ordered anchor point set. By
employing a Transformer decoder, the model can iteratively refine the 3D lane
results. A curve cross-attention module is introduced to calculate similarities
between image features and curve queries. To handle varying lane lengths, we
employ context sampling and anchor point restriction techniques to compute more
relevant image features. Furthermore, we apply a temporal fusion module that
incorporates selected informative sparse curve queries and their corresponding
anchor point sets to leverage historical information. In the experiments, we
evaluate our approach on two publicly available real-world datasets. The results
demonstrate that our method provides outstanding performance compared with both
CNN and Transformer based methods. We also conduct ablation studies to analyze
the impact of each component.
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2024 14:13:40 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 14:40:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bai",
"Yifeng",
""
],
[
"Chen",
"Zhirong",
""
],
[
"Liang",
"Pengpeng",
""
],
[
"Song",
"Bo",
""
],
[
"Cheng",
"Erkang",
""
]
] | TITLE: CurveFormer++: 3D Lane Detection by Curve Propagation with Temporal
Curve Queries and Attention
ABSTRACT: In autonomous driving, accurate 3D lane detection using monocular cameras is
important for downstream tasks. Recent CNN and Transformer approaches usually
apply a two-stage model design. The first stage transforms the image feature
from a front image into a bird's-eye-view (BEV) representation. Subsequently, a
sub-network processes the BEV feature to generate the 3D detection results.
However, these approaches heavily rely on a challenging image feature
transformation module from a perspective view to a BEV representation. In our
work, we present CurveFormer++, a single-stage Transformer-based method that
does not require the view transform module and directly infers 3D lane results
from the perspective image features. Specifically, our approach models the 3D
lane detection task as a curve propagation problem, where each lane is
represented by a curve query with a dynamic and ordered anchor point set. By
employing a Transformer decoder, the model can iteratively refine the 3D lane
results. A curve cross-attention module is introduced to calculate similarities
between image features and curve queries. To handle varying lane lengths, we
employ context sampling and anchor point restriction techniques to compute more
relevant image features. Furthermore, we apply a temporal fusion module that
incorporates selected informative sparse curve queries and their corresponding
anchor point sets to leverage historical information. In the experiments, we
evaluate our approach on two publicly available real-world datasets. The results
demonstrate that our method provides outstanding performance compared with both
CNN and Transformer based methods. We also conduct ablation studies to analyze
the impact of each component.
|
2402.07927 | Pranab Sahoo | Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat
Mondal, and Aman Chadha | A Systematic Survey of Prompt Engineering in Large Language Models:
Techniques and Applications | 12 pages, 2 figures | null | null | null | cs.AI cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | Prompt engineering has emerged as an indispensable technique for extending
the capabilities of large language models (LLMs) and vision-language models
(VLMs). This approach leverages task-specific instructions, known as prompts,
to enhance model efficacy without modifying the core model parameters. Rather
than updating the model parameters, prompts allow seamless integration of
pre-trained models into downstream tasks by eliciting desired model behaviors
solely based on the given prompt. Prompts can be natural language instructions
that provide context to guide the model or learned vector representations that
activate relevant knowledge. This burgeoning field has enabled success across
various applications, from question-answering to commonsense reasoning.
However, there remains a lack of systematic organization and understanding of
the diverse prompt engineering methods and techniques. This survey paper
addresses the gap by providing a structured overview of recent advancements in
prompt engineering, categorized by application area. For each prompting
approach, we provide a summary detailing the prompting methodology, its
applications, the models involved, and the datasets utilized. We also delve
into the strengths and limitations of each approach and include a taxonomy
diagram and table summarizing datasets, models, and critical points of each
prompting technique. This systematic analysis enables a better understanding of
this rapidly developing field and facilitates future research by illuminating
open challenges and opportunities for prompt engineering.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2024 19:49:13 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 06:23:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sahoo",
"Pranab",
""
],
[
"Singh",
"Ayush Kumar",
""
],
[
"Saha",
"Sriparna",
""
],
[
"Jain",
"Vinija",
""
],
[
"Mondal",
"Samrat",
""
],
[
"Chadha",
"Aman",
""
]
] | TITLE: A Systematic Survey of Prompt Engineering in Large Language Models:
Techniques and Applications
ABSTRACT: Prompt engineering has emerged as an indispensable technique for extending
the capabilities of large language models (LLMs) and vision-language models
(VLMs). This approach leverages task-specific instructions, known as prompts,
to enhance model efficacy without modifying the core model parameters. Rather
than updating the model parameters, prompts allow seamless integration of
pre-trained models into downstream tasks by eliciting desired model behaviors
solely based on the given prompt. Prompts can be natural language instructions
that provide context to guide the model or learned vector representations that
activate relevant knowledge. This burgeoning field has enabled success across
various applications, from question-answering to commonsense reasoning.
However, there remains a lack of systematic organization and understanding of
the diverse prompt engineering methods and techniques. This survey paper
addresses the gap by providing a structured overview of recent advancements in
prompt engineering, categorized by application area. For each prompting
approach, we provide a summary detailing the prompting methodology, its
applications, the models involved, and the datasets utilized. We also delve
into the strengths and limitations of each approach and include a taxonomy
diagram and table summarizing datasets, models, and critical points of each
prompting technique. This systematic analysis enables a better understanding of
this rapidly developing field and facilitates future research by illuminating
open challenges and opportunities for prompt engineering.
|
2402.12265 | Christophe Roux | Christophe Roux, Max Zimmer, Sebastian Pokutta | On the Byzantine-Resilience of Distillation-Based Federated Learning | null | null | null | null | cs.LG cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) algorithms using Knowledge Distillation (KD) have
received increasing attention due to their favorable properties with respect to
privacy, non-i.i.d. data and communication cost. These methods depart from
transmitting model parameters and instead communicate information about a
learning task by sharing predictions on a public dataset. In this work, we
study the performance of such approaches in the byzantine setting, where a
subset of the clients act in an adversarial manner aiming to disrupt the
learning process. We show that KD-based FL algorithms are remarkably resilient
and analyze how byzantine clients can influence the learning process. Based on
these insights, we introduce two new byzantine attacks and demonstrate their
ability to break existing byzantine-resilient methods. Additionally, we propose
a novel defence method which enhances the byzantine resilience of KD-based FL
algorithms. Finally, we provide a general framework to obfuscate attacks,
making them significantly harder to detect, thereby improving their
effectiveness. Our findings serve as an important building block in the
analysis of byzantine FL, contributing through the development of new attacks
and new defence mechanisms, further advancing the robustness of KD-based FL
algorithms.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2024 16:26:40 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Oct 2024 12:38:26 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 14:08:19 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Roux",
"Christophe",
""
],
[
"Zimmer",
"Max",
""
],
[
"Pokutta",
"Sebastian",
""
]
] | TITLE: On the Byzantine-Resilience of Distillation-Based Federated Learning
ABSTRACT: Federated Learning (FL) algorithms using Knowledge Distillation (KD) have
received increasing attention due to their favorable properties with respect to
privacy, non-i.i.d. data and communication cost. These methods depart from
transmitting model parameters and instead communicate information about a
learning task by sharing predictions on a public dataset. In this work, we
study the performance of such approaches in the byzantine setting, where a
subset of the clients act in an adversarial manner aiming to disrupt the
learning process. We show that KD-based FL algorithms are remarkably resilient
and analyze how byzantine clients can influence the learning process. Based on
these insights, we introduce two new byzantine attacks and demonstrate their
ability to break existing byzantine-resilient methods. Additionally, we propose
a novel defence method which enhances the byzantine resilience of KD-based FL
algorithms. Finally, we provide a general framework to obfuscate attacks,
making them significantly harder to detect, thereby improving their
effectiveness. Our findings serve as an important building block in the
analysis of byzantine FL, contributing through the development of new attacks
and new defence mechanisms, further advancing the robustness of KD-based FL
algorithms.
|
2402.14598 | Depin Liang | Jianming Lv, Chengjun Wang, Depin Liang, Qianli Ma, Wei Chen, Xueqi
Cheng | EMN: Brain-inspired Elastic Memory Network for Quick Domain Adaptive
Feature Mapping | 15 pages,15 figures | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Utilizing unlabeled data in the target domain to perform continuous
optimization is critical to enhance the generalization ability of neural
networks. Most domain adaptation methods focus on time-consuming optimization
of deep feature extractors, which limits the deployment on lightweight edge
devices. Inspired by the memory mechanism and powerful generalization ability
of biological neural networks in human brains, we propose a novel gradient-free
Elastic Memory Network, namely EMN, to support quick fine-tuning of the mapping
between features and prediction without heavy optimization of deep features. In
particular, EMN adopts randomly connected neurons to memorize the association
of features and labels, where the signals in the network are propagated as
impulses, and the prediction is made by associating the memories stored on
neurons based on their confidence. More importantly, EMN supports reinforced
memorization of feature mapping based on unlabeled data to quickly adapt to a
new domain. Experiments based on four cross-domain real-world datasets show
that EMN can achieve up to 10% performance enhancement while requiring less
than 1% of the timing cost of traditional domain adaptation methods.
| [
{
"version": "v1",
"created": "Sun, 4 Feb 2024 09:58:17 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 08:34:07 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lv",
"Jianming",
""
],
[
"Wang",
"Chengjun",
""
],
[
"Liang",
"Depin",
""
],
[
"Ma",
"Qianli",
""
],
[
"Chen",
"Wei",
""
],
[
"Cheng",
"Xueqi",
""
]
] | TITLE: EMN: Brain-inspired Elastic Memory Network for Quick Domain Adaptive
Feature Mapping
ABSTRACT: Utilizing unlabeled data in the target domain to perform continuous
optimization is critical to enhance the generalization ability of neural
networks. Most domain adaptation methods focus on time-consuming optimization
of deep feature extractors, which limits the deployment on lightweight edge
devices. Inspired by the memory mechanism and powerful generalization ability
of biological neural networks in human brains, we propose a novel gradient-free
Elastic Memory Network, namely EMN, to support quick fine-tuning of the mapping
between features and prediction without heavy optimization of deep features. In
particular, EMN adopts randomly connected neurons to memorize the association
of features and labels, where the signals in the network are propagated as
impulses, and the prediction is made by associating the memories stored on
neurons based on their confidence. More importantly, EMN supports reinforced
memorization of feature mapping based on unlabeled data to quickly adapt to a
new domain. Experiments based on four cross-domain real-world datasets show
that EMN can achieve up to 10% performance enhancement while requiring less
than 1% of the timing cost of traditional domain adaptation methods.
|
2402.17555 | Xinliang Zhang | Xinliang Zhang, Lei Zhu, Hangzhou He, Lujia Jin, Yanye Lu | Scribble Hides Class: Promoting Scribble-Based Weakly-Supervised
Semantic Segmentation with Its Class Label | null | null | 10.1609/aaai.v38i7.28563 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scribble-based weakly-supervised semantic segmentation using sparse scribble
supervision is gaining traction as it reduces annotation costs when compared to
fully annotated alternatives. Existing methods primarily generate pseudo-labels
by diffusing labeled pixels to unlabeled ones with local cues for supervision.
However, this diffusion process fails to exploit global semantics and
class-specific cues, which are important for semantic segmentation. In this
study, we propose a class-driven scribble promotion network, which utilizes
both scribble annotations and pseudo-labels informed by image-level classes and
global semantics for supervision. Directly adopting pseudo-labels might
misguide the segmentation model, thus we design a localization rectification
module to correct foreground representations in the feature space. To further
combine the advantages of both supervisions, we also introduce a distance
entropy loss for uncertainty reduction, which adapts per-pixel confidence
weights according to the reliable region determined by the scribble and
pseudo-label's boundary. In experiments on the ScribbleSup dataset with
different qualities of scribble annotations, our method outperforms all
previous methods, demonstrating its superiority and robustness. The code is
available at
https://github.com/Zxl19990529/Class-driven-Scribble-Promotion-Network.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2024 14:51:56 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Xinliang",
""
],
[
"Zhu",
"Lei",
""
],
[
"He",
"Hangzhou",
""
],
[
"Jin",
"Lujia",
""
],
[
"Lu",
"Yanye",
""
]
] | TITLE: Scribble Hides Class: Promoting Scribble-Based Weakly-Supervised
Semantic Segmentation with Its Class Label
ABSTRACT: Scribble-based weakly-supervised semantic segmentation using sparse scribble
supervision is gaining traction as it reduces annotation costs when compared to
fully annotated alternatives. Existing methods primarily generate pseudo-labels
by diffusing labeled pixels to unlabeled ones with local cues for supervision.
However, this diffusion process fails to exploit global semantics and
class-specific cues, which are important for semantic segmentation. In this
study, we propose a class-driven scribble promotion network, which utilizes
both scribble annotations and pseudo-labels informed by image-level classes and
global semantics for supervision. Directly adopting pseudo-labels might
misguide the segmentation model, thus we design a localization rectification
module to correct foreground representations in the feature space. To further
combine the advantages of both supervisions, we also introduce a distance
entropy loss for uncertainty reduction, which adapts per-pixel confidence
weights according to the reliable region determined by the scribble and
pseudo-label's boundary. In experiments on the ScribbleSup dataset with
different qualities of scribble annotations, our method outperforms all
previous methods, demonstrating its superiority and robustness. The code is
available at
https://github.com/Zxl19990529/Class-driven-Scribble-Promotion-Network.
|
2403.02971 | Xiaoyi Zhu | Xiaoyi Zhu, Yuxiang Tian, Lingxiao Huang, Zengfeng Huang | Space Complexity of Euclidean Clustering | Accepted by SoCG2024, TIT2025, in IEEE Transactions on Information
Theory, 2025 | null | 10.1109/TIT.2025.3550192 | null | cs.CG cs.DS | http://creativecommons.org/licenses/by/4.0/ | The $(k, z)$-Clustering problem in Euclidean space $\mathbb{R}^d$ has been
extensively studied. Given the scale of data involved, compression methods for
the Euclidean $(k, z)$-Clustering problem, such as data compression and
dimension reduction, have received significant attention in the literature.
However, the space complexity of the clustering problem, specifically, the
number of bits required to compress the cost function within a multiplicative
error $\varepsilon$, remains unclear in existing literature. This paper
initiates the study of space complexity for Euclidean $(k, z)$-Clustering and
offers both upper and lower bounds. Our space bounds are nearly tight when $k$
is constant, indicating that storing a coreset, a well-known data compression
approach, serves as the optimal compression scheme. Furthermore, our lower
bound result for $(k, z)$-Clustering establishes a tight space bound of
$\Theta( n d )$ for terminal embedding, where $n$ represents the dataset size.
Our technical approach leverages new geometric insights for principal angles
and discrepancy methods, which may hold independent interest.
| [
{
"version": "v1",
"created": "Tue, 5 Mar 2024 13:49:32 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2024 02:05:36 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Mar 2025 01:58:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhu",
"Xiaoyi",
""
],
[
"Tian",
"Yuxiang",
""
],
[
"Huang",
"Lingxiao",
""
],
[
"Huang",
"Zengfeng",
""
]
] | TITLE: Space Complexity of Euclidean Clustering
ABSTRACT: The $(k, z)$-Clustering problem in Euclidean space $\mathbb{R}^d$ has been
extensively studied. Given the scale of data involved, compression methods for
the Euclidean $(k, z)$-Clustering problem, such as data compression and
dimension reduction, have received significant attention in the literature.
However, the space complexity of the clustering problem, specifically, the
number of bits required to compress the cost function within a multiplicative
error $\varepsilon$, remains unclear in existing literature. This paper
initiates the study of space complexity for Euclidean $(k, z)$-Clustering and
offers both upper and lower bounds. Our space bounds are nearly tight when $k$
is constant, indicating that storing a coreset, a well-known data compression
approach, serves as the optimal compression scheme. Furthermore, our lower
bound result for $(k, z)$-Clustering establishes a tight space bound of
$\Theta( n d )$ for terminal embedding, where $n$ represents the dataset size.
Our technical approach leverages new geometric insights for principal angles
and discrepancy methods, which may hold independent interest.
|
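For reference, the objective being compressed in the record above, written under the standard definition (assumed here to match the paper's conventions):

```latex
\[
  \operatorname{cost}_z(X, C) \;=\; \sum_{x \in X} \min_{c \in C} \lVert x - c \rVert_2^{\,z},
  \qquad X \subset \mathbb{R}^d,\; |X| = n,\; |C| = k.
\]
% z = 1 gives k-median and z = 2 gives k-means. The space-complexity question
% is how few bits suffice to report cost_z(X, C) within a (1 +/- eps) factor
% for every candidate center set C.
```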
2403.06869 | Hao Chen | Hao Chen, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi
Sugiyama, Bhiksha Raj, Jindong Wang | Impact of Noisy Supervision in Foundation Model Learning | 18 pages, 10 figures, 6 tables, preprint. arXiv admin note:
substantial text overlap with arXiv:2309.17002 | null | null | null | cs.LG cs.AI cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Foundation models are usually pre-trained on large-scale datasets and then
adapted to downstream tasks through tuning. However, the large-scale
pre-training datasets, often inaccessible or too expensive to handle, can
contain label noise that may adversely affect the generalization of the model
and pose unexpected risks. This paper stands out as the first work to
comprehensively understand and analyze the nature of noise in pre-training
datasets and then effectively mitigate its impacts on downstream tasks.
Specifically, through extensive experiments of fully-supervised and image-text
contrastive pre-training on synthetic noisy ImageNet-1K, YFCC15M, and CC12M
datasets, we demonstrate that, while slight noise in pre-training can benefit
in-domain (ID) performance, where the training and testing data share a similar
distribution, it always deteriorates out-of-domain (OOD) performance, where
training and testing distributions are significantly different. These
observations are agnostic to scales of pre-training datasets, pre-training
noise types, model architectures, pre-training objectives, downstream tuning
methods, and downstream applications. We empirically ascertain that the reason
behind this is that the pre-training noise shapes the feature space
differently. We then propose a tuning method (NMTune) to affine the feature
space to mitigate the malignant effect of noise and improve generalization,
which is applicable in both parameter-efficient and black-box tuning manners.
We additionally conduct extensive experiments on popular vision and language
models, including APIs, which are supervised and self-supervised pre-trained on
realistic noisy data for evaluation. Our analysis and results demonstrate the
importance of this novel and fundamental research direction, which we term as
Noisy Model Learning.
| [
{
"version": "v1",
"created": "Mon, 11 Mar 2024 16:22:41 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 22:46:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Hao",
""
],
[
"Wang",
"Zihan",
""
],
[
"Tao",
"Ran",
""
],
[
"Wei",
"Hongxin",
""
],
[
"Xie",
"Xing",
""
],
[
"Sugiyama",
"Masashi",
""
],
[
"Raj",
"Bhiksha",
""
],
[
"Wang",
"Jindong",
""
]
] | TITLE: Impact of Noisy Supervision in Foundation Model Learning
ABSTRACT: Foundation models are usually pre-trained on large-scale datasets and then
adapted to downstream tasks through tuning. However, the large-scale
pre-training datasets, often inaccessible or too expensive to handle, can
contain label noise that may adversely affect the generalization of the model
and pose unexpected risks. This paper stands out as the first work to
comprehensively understand and analyze the nature of noise in pre-training
datasets and then effectively mitigate its impacts on downstream tasks.
Specifically, through extensive experiments of fully-supervised and image-text
contrastive pre-training on synthetic noisy ImageNet-1K, YFCC15M, and CC12M
datasets, we demonstrate that, while slight noise in pre-training can benefit
in-domain (ID) performance, where the training and testing data share a similar
distribution, it always deteriorates out-of-domain (OOD) performance, where
training and testing distributions are significantly different. These
observations are agnostic to scales of pre-training datasets, pre-training
noise types, model architectures, pre-training objectives, downstream tuning
methods, and downstream applications. We empirically ascertain that the reason
behind this is that the pre-training noise shapes the feature space
differently. We then propose a tuning method (NMTune) to affine the feature
space to mitigate the malignant effect of noise and improve generalization,
which is applicable in both parameter-efficient and black-box tuning manners.
We additionally conduct extensive experiments on popular vision and language
models, including APIs, which are supervised and self-supervised pre-trained on
realistic noisy data for evaluation. Our analysis and results demonstrate the
importance of this novel and fundamental research direction, which we term as
Noisy Model Learning.
|
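The remedy named in the record above is to affine-transform the feature space during tuning. A minimal sketch of such a module; NMTune's actual objectives and parameterization are in the paper, and only the affine map itself is illustrated here:

```python
import torch
import torch.nn as nn

class AffineFeatureTune(nn.Module):
    """Learnable affine map on frozen backbone features, z' = W z + b.

    Initialized at the identity so tuning starts from the pre-trained
    feature space and only deviates as far as the downstream loss asks.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.eye(dim))
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return features @ self.weight.T + self.bias
```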
2403.07378 | Xin Wang | Xin Wang, Yu Zheng, Zhongwei Wan, Mi Zhang | SVD-LLM: Truncation-aware Singular Value Decomposition for Large
Language Model Compression | ICLR 2025; Code available at:
https://github.com/AIoT-MLSys-Lab/SVD-LLM | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advancements in Large Language Models (LLMs) have been hindered by their
substantial sizes, which necessitates LLM compression methods for practical
deployment. Singular Value Decomposition (SVD) offers a promising solution for
LLM compression. However, state-of-the-art SVD-based LLM compression methods
have two key limitations: truncating smaller singular values may lead to higher
compression loss, and the compressed weights are not updated after SVD
truncation. In this work, we propose SVD-LLM, an SVD-based post-training LLM
compression method that addresses the limitations of existing methods. SVD-LLM
incorporates a truncation-aware data whitening technique to ensure a direct
mapping between singular values and compression loss. Moreover, SVD-LLM adopts
a parameter update with sequential low-rank approximation to compensate for the
accuracy degradation after SVD compression. We evaluate SVD-LLM on 10 datasets
and seven models from three different LLM families at three different scales.
Our results demonstrate the superiority of SVD-LLM over state-of-the-arts,
especially at high model compression ratios. Our code is available at
https://github.com/AIoT-MLSys-Lab/SVD-LLM
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 07:31:18 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Mar 2024 02:59:10 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Apr 2024 15:04:15 GMT"
},
{
"version": "v4",
"created": "Tue, 28 May 2024 13:41:26 GMT"
},
{
"version": "v5",
"created": "Sun, 16 Mar 2025 03:27:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Xin",
""
],
[
"Zheng",
"Yu",
""
],
[
"Wan",
"Zhongwei",
""
],
[
"Zhang",
"Mi",
""
]
] | TITLE: SVD-LLM: Truncation-aware Singular Value Decomposition for Large
Language Model Compression
ABSTRACT: The advancements in Large Language Models (LLMs) have been hindered by their
substantial sizes, which necessitates LLM compression methods for practical
deployment. Singular Value Decomposition (SVD) offers a promising solution for
LLM compression. However, state-of-the-art SVD-based LLM compression methods
have two key limitations: truncating smaller singular values may lead to higher
compression loss, and the compressed weights are not updated after SVD
truncation. In this work, we propose SVD-LLM, an SVD-based post-training LLM
compression method that addresses the limitations of existing methods. SVD-LLM
incorporates a truncation-aware data whitening technique to ensure a direct
mapping between singular values and compression loss. Moreover, SVD-LLM adopts
a parameter update with sequential low-rank approximation to compensate for the
accuracy degradation after SVD compression. We evaluate SVD-LLM on 10 datasets
and seven models from three different LLM families at three different scales.
Our results demonstrate the superiority of SVD-LLM over state-of-the-arts,
especially at high model compression ratios. Our code is available at
https://github.com/AIoT-MLSys-Lab/SVD-LLM
|
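The whitening idea in the record above, making singular-value truncation map directly onto output error, admits a compact sketch. It rests on the textbook identity that ||(W - W')S||_F equals ||WX - W'X||_F whenever S S^T = X X^T; calibration activations X are assumed available, and the authors' actual implementation is in the linked repository.

```python
import torch

def svd_compress_whitened(W: torch.Tensor, X: torch.Tensor, rank: int) -> torch.Tensor:
    """Truncation-aware low-rank compression of a linear layer weight.

    W: (out, in) weight; X: (in, n) calibration activations.
    Truncating the SVD of W @ S (with S S^T = X X^T) drops exactly the
    smallest contributions to the layer's output error.
    """
    cov = X @ X.T
    cov = cov + 1e-6 * torch.eye(cov.shape[0], dtype=cov.dtype)  # stabilize
    S = torch.linalg.cholesky(cov)                   # whitening factor
    U, sigma, Vh = torch.linalg.svd(W @ S, full_matrices=False)
    W_low = U[:, :rank] @ torch.diag(sigma[:rank]) @ Vh[:rank]
    return W_low @ torch.linalg.inv(S)               # back to the original basis
```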
2403.09964 | Zixin Yang | Zixin Yang, Richard Simon, Kelly Merrell, Cristian A. Linte | Boundary Constraint-free Biomechanical Model-Based Surface Matching for
Intraoperative Liver Deformation Correction | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In image-guided liver surgery, 3D-3D non-rigid registration methods play a
crucial role in estimating the mapping between the preoperative model and the
intraoperative surface represented as point clouds, addressing the challenge of
tissue deformation. Typically, these methods incorporate a biomechanical model,
represented as a finite element model (FEM), into the strain energy term to
regularize a surface matching term. We propose a 3D-3D non-rigid registration
method that incorporates a modified FEM into the surface matching term. The
modified FEM alleviates the need to specify boundary conditions, which is
achieved by modifying the stiffness matrix of a FEM and using diagonal loading
for stabilization. As a result, the modified surface matching term does not
require the specification of boundary conditions or an additional strain energy
term to regularize the surface matching term. Optimization is achieved through
an accelerated gradient algorithm, further enhanced by our proposed method for
determining the optimal step size. We evaluated our method and compared it to
several state-of-the-art methods across various datasets. Our straightforward
and effective approach consistently outperformed or achieved comparable
performance to the state-of-the-art methods. Our code and datasets are
available at https://github.com/zixinyang9109/BCF-FEM.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 02:05:20 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Sep 2024 10:41:31 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Feb 2025 17:29:11 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 15:19:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"Zixin",
""
],
[
"Simon",
"Richard",
""
],
[
"Merrell",
"Kelly",
""
],
[
"Linte",
"Cristian. A.",
""
]
] | TITLE: Boundary Constraint-free Biomechanical Model-Based Surface Matching for
Intraoperative Liver Deformation Correction
ABSTRACT: In image-guided liver surgery, 3D-3D non-rigid registration methods play a
crucial role in estimating the mapping between the preoperative model and the
intraoperative surface represented as point clouds, addressing the challenge of
tissue deformation. Typically, these methods incorporate a biomechanical model,
represented as a finite element model (FEM), into the strain energy term to
regularize a surface matching term. We propose a 3D-3D non-rigid registration
method that incorporates a modified FEM into the surface matching term. The
modified FEM alleviates the need to specify boundary conditions, which is
achieved by modifying the stiffness matrix of a FEM and using diagonal loading
for stabilization. As a result, the modified surface matching term does not
require the specification of boundary conditions or an additional strain energy
term to regularize the surface matching term. Optimization is achieved through
an accelerated gradient algorithm, further enhanced by our proposed method for
determining the optimal step size. We evaluated our method and compared it to
several state-of-the-art methods across various datasets. Our straightforward
and effective approach consistently outperformed or achieved comparable
performance to the state-of-the-art methods. Our code and datasets are
available at https://github.com/zixinyang9109/BCF-FEM.
|
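Diagonal loading, the stabilization step named in the record above, is the classic regularization sketched below; this toy solve only shows why it removes the need for boundary conditions, and it is not the full registration method.

```python
import numpy as np

def diagonally_loaded_solve(K: np.ndarray, f: np.ndarray,
                            lam: float = 1e-3) -> np.ndarray:
    """Solve (K + lam * I) u = f.

    Without fixed boundary conditions a stiffness matrix K is singular,
    since rigid-body motions lie in its null space; adding lam * I makes
    the system positive definite so the displacement solve is stable.
    """
    K_loaded = K + lam * np.eye(K.shape[0])
    return np.linalg.solve(K_loaded, f)
```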
2403.13683 | Chen Zhao | Chen Zhao, Tong Zhang, Zheng Dang, Mathieu Salzmann | DVMNet++: Rethinking Relative Pose Estimation for Unseen Objects | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Determining the relative pose of a previously unseen object between two
images is pivotal to the success of generalizable object pose estimation.
Existing approaches typically predict 3D translation utilizing the ground-truth
object bounding box and approximate 3D rotation with a large number of discrete
hypotheses. This strategy makes unrealistic assumptions about the availability
of ground truth and incurs a computationally expensive process of scoring each
hypothesis at test time. By contrast, we rethink the problem of relative pose
estimation for unseen objects by presenting a Deep Voxel Matching Network
(DVMNet++). Our method computes the relative object pose in a single pass,
eliminating the need for ground-truth object bounding boxes and rotation
hypotheses. We achieve open-set object detection by leveraging image feature
embedding and natural language understanding as reference. The detection result
is then employed to approximate the translation parameters and crop the object
from the query image. For rotation estimation, we map the two RGB images, i.e.,
reference and cropped query, to their respective voxelized 3D representations.
The resulting voxels are passed through a rotation estimation module, which
aligns the voxels and computes the rotation in an end-to-end fashion by solving
a least-squares problem. To enhance robustness, we introduce a weighted closest
voxel algorithm capable of mitigating the impact of noisy voxels. We conduct
extensive experiments on the CO3D, Objaverse, LINEMOD, and LINEMOD-O datasets,
demonstrating that our approach delivers more accurate relative pose estimates
for novel objects at a lower computational cost compared to state-of-the-art
methods. Our code is released at https://github.com/sailor-z/DVMNet/.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2024 15:41:32 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Mar 2025 04:15:35 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhao",
"Chen",
""
],
[
"Zhang",
"Tong",
""
],
[
"Dang",
"Zheng",
""
],
[
"Salzmann",
"Mathieu",
""
]
] | TITLE: DVMNet++: Rethinking Relative Pose Estimation for Unseen Objects
ABSTRACT: Determining the relative pose of a previously unseen object between two
images is pivotal to the success of generalizable object pose estimation.
Existing approaches typically predict 3D translation utilizing the ground-truth
object bounding box and approximate 3D rotation with a large number of discrete
hypotheses. This strategy makes unrealistic assumptions about the availability
of ground truth and incurs a computationally expensive process of scoring each
hypothesis at test time. By contrast, we rethink the problem of relative pose
estimation for unseen objects by presenting a Deep Voxel Matching Network
(DVMNet++). Our method computes the relative object pose in a single pass,
eliminating the need for ground-truth object bounding boxes and rotation
hypotheses. We achieve open-set object detection by leveraging image feature
embedding and natural language understanding as reference. The detection result
is then employed to approximate the translation parameters and crop the object
from the query image. For rotation estimation, we map the two RGB images, i.e.,
reference and cropped query, to their respective voxelized 3D representations.
The resulting voxels are passed through a rotation estimation module, which
aligns the voxels and computes the rotation in an end-to-end fashion by solving
a least-squares problem. To enhance robustness, we introduce a weighted closest
voxel algorithm capable of mitigating the impact of noisy voxels. We conduct
extensive experiments on the CO3D, Objaverse, LINEMOD, and LINEMOD-O datasets,
demonstrating that our approach delivers more accurate relative pose estimates
for novel objects at a lower computational cost compared to state-of-the-art
methods. Our code is released at https://github.com/sailor-z/DVMNet/.
|
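A weighted alignment that downweights noisy correspondences, in the spirit of the weighted closest voxel algorithm in the record above, can be illustrated with a closed-form weighted Kabsch solve. DVMNet++'s module is learned end-to-end, so this is an analogy rather than the paper's method.

```python
import numpy as np

def weighted_kabsch(src: np.ndarray, dst: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Weighted least-squares rotation R minimizing sum_i w_i ||R src_i - dst_i||^2.

    src, dst: (N, 3) corresponding points; w: (N,) non-negative weights,
    so noisy correspondences can be suppressed by giving them small weight.
    """
    w = w / w.sum()
    src_c = src - (w[:, None] * src).sum(axis=0)   # weighted centering
    dst_c = dst - (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * src_c).T @ dst_c             # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # avoid reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```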