id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prompt (string) | label (string) | prob (float64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2406.13547 | Giuseppe Floris Floris | Christian Scano, Giuseppe Floris, Biagio Montaruli, Luca Demetrio,
Andrea Valenza, Luca Compagna, Davide Ariu, Luca Piras, Davide Balzarotti,
and Battista Biggio | ModSec-Learn: Boosting ModSecurity with Machine Learning | arXiv admin note: text overlap with arXiv:2308.04964 | null | 10.1007/978-3-031-76459-2_3 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | ModSecurity is widely recognized as the standard open-source Web Application
Firewall (WAF), maintained by the OWASP Foundation. It detects malicious
requests by matching them against the Core Rule Set (CRS), identifying
well-known attack patterns. Each rule is manually assigned a weight based on
the severity of the corresponding attack, and a request is blocked if the sum
of the weights of matched rules exceeds a given threshold. However, we argue
that this strategy is largely ineffective against web attacks, as detection is
only based on heuristics and not customized on the application to protect. In
this work, we overcome this issue by proposing a machine-learning model that
uses the CRS rules as input features. Through training, ModSec-Learn is able to
tune the contribution of each CRS rule to predictions, thus adapting the
severity level to the web applications to protect. Our experiments show that
ModSec-Learn achieves a significantly better trade-off between detection and
false positive rates. Finally, we analyze how sparse regularization can reduce
the number of rules that are relevant at inference time, by discarding more
than 30% of the CRS rules. We release our open-source code and the dataset at
https://github.com/pralab/modsec-learn and
https://github.com/pralab/http-traffic-dataset, respectively.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 13:32:47 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Scano",
"Christian",
""
],
[
"Floris",
"Giuseppe",
""
],
[
"Montaruli",
"Biagio",
""
],
[
"Demetrio",
"Luca",
""
],
[
"Valenza",
"Andrea",
""
],
[
"Compagna",
"Luca",
""
],
[
"Ariu",
"Davide",
""
],
[
"Piras",
"Luca",
""
],
[
"Balzarotti",
"Davide",
""
],
[
"Biggio",
"Battista",
""
]
]
| TITLE: ModSec-Learn: Boosting ModSecurity with Machine Learning
ABSTRACT: ModSecurity is widely recognized as the standard open-source Web Application
Firewall (WAF), maintained by the OWASP Foundation. It detects malicious
requests by matching them against the Core Rule Set (CRS), identifying
well-known attack patterns. Each rule is manually assigned a weight based on
the severity of the corresponding attack, and a request is blocked if the sum
of the weights of matched rules exceeds a given threshold. However, we argue
that this strategy is largely ineffective against web attacks, as detection is
only based on heuristics and not customized on the application to protect. In
this work, we overcome this issue by proposing a machine-learning model that
uses the CRS rules as input features. Through training, ModSec-Learn is able to
tune the contribution of each CRS rule to predictions, thus adapting the
severity level to the web applications to protect. Our experiments show that
ModSec-Learn achieves a significantly better trade-off between detection and
false positive rates. Finally, we analyze how sparse regularization can reduce
the number of rules that are relevant at inference time, by discarding more
than 30% of the CRS rules. We release our open-source code and the dataset at
https://github.com/pralab/modsec-learn and
https://github.com/pralab/http-traffic-dataset, respectively.
| no_new_dataset | 0.938181 |
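As a rough illustration of the idea in the ModSec-Learn abstract above (and not the authors' released code), the sketch below treats each CRS rule match as a binary feature and learns per-rule weights with an L1-penalized logistic regression, so sparse regularization can zero out rules; all sizes, features, and labels are synthetic placeholders.

```python
# Minimal sketch: learn per-CRS-rule weights from binary rule-match features,
# with L1 regularization so unused rules get zero weight. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_requests, n_rules = 1000, 300                      # hypothetical sizes
X = rng.integers(0, 2, size=(n_requests, n_rules))   # 1 = CRS rule matched
y = rng.integers(0, 2, size=n_requests)              # 1 = malicious (toy labels)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

kept = int(np.sum(np.abs(clf.coef_) > 1e-6))
print(f"rules with non-zero weight: {kept}/{n_rules}")
```

With real traffic, `X` would instead be built by replaying requests through ModSecurity and recording which CRS rules fire.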
2406.16810 | William F. Shen | Xinchi Qiu, William F. Shen, Yihong Chen, Meghdad Kurmanji, Nicola
Cancedda, Pontus Stenetorp, Nicholas D. Lane | How Data Inter-connectivity Shapes LLMs Unlearning: A Structural
Unlearning Perspective | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | While unlearning knowledge from large language models (LLMs) is receiving
increasing attention, one important aspect remains unexplored. Existing
approaches and benchmarks assume data points to-be-forgotten are independent,
ignoring their inter-connectivity - a fundamental characteristic of real-world
data structures. In this paper, we propose PISTOL, a method for compiling
structural datasets. PISTOL leverages the inherently structured nature of
contractual relationships, offering several key benefits. First, it enables
insights into the impact of structural data on unlearning effectiveness.
Second, it provides precise and concise ground truths for clearer evaluation.
Third, its attribute generation does not require input from pre-trained LLMs,
mitigating confounding risks. Leveraging datasets synthesized using PISTOL, we
demonstrate how data inter-connectivity impacts LLM unlearning. Specifically,
(a) in both the pre-trained and fine-tuned models, unlearning difficulty
increases as data inter-connectivity grows, (b) there is a positive correlation
between the density of the knowledge graph and unlearning difficulty, and (c)
when the to-be-forgotten data is skewed towards one domain, balancing retaining
performance across all domains is challenging.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2024 17:22:36 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 21:33:53 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Qiu",
"Xinchi",
""
],
[
"Shen",
"William F.",
""
],
[
"Chen",
"Yihong",
""
],
[
"Kurmanji",
"Meghdad",
""
],
[
"Cancedda",
"Nicola",
""
],
[
"Stenetorp",
"Pontus",
""
],
[
"Lane",
"Nicholas D.",
""
]
]
| TITLE: How Data Inter-connectivity Shapes LLMs Unlearning: A Structural
Unlearning Perspective
ABSTRACT: While unlearning knowledge from large language models (LLMs) is receiving
increasing attention, one important aspect remains unexplored. Existing
approaches and benchmarks assume data points to-be-forgotten are independent,
ignoring their inter-connectivity - a fundamental characteristic of real-world
data structures. In this paper, we propose PISTOL, a method for compiling
structural datasets. PISTOL leverages the inherently structured nature of
contractual relationships, offering several key benefits. First, it enables
insights into the impact of structural data on unlearning effectiveness.
Second, it provides precise and concise ground truths for clearer evaluation.
Third, its attribute generation does not require input from pre-trained LLMs,
mitigating confounding risks. Leveraging datasets synthesized using PISTOL, we
demonstrate how data inter-connectivity impacts LLM unlearning. Specifically,
(a) in both the pre-trained and fine-tuned models, unlearning difficulty
increases as data inter-connectivity grows, (b) there is a positive correlation
between the density of the knowledge graph and unlearning difficulty, and (c)
when the to-be-forgotten data is skewed towards one domain, balancing retaining
performance across all domains is challenging.
| no_new_dataset | 0.943867 |
2406.18113 | Boris Meinardus | Boris Meinardus, Hector Rodriguez, Anil Batra, Anna Rohrbach, Marcus
Rohrbach | Chrono: A Simple Blueprint for Representing Time in MLLMs | Code: https://github.com/sudo-Boris/mr-Blip | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent success of Large Language Models (LLMs) has prompted the extension
to the multimodal domain developing image-text Multimodal LLMs (MLLMs) and then
video-text models. In this work, we investigate the challenge of contextual and
temporal comprehension in video-language models by exploring the task of
temporal localization in videos. To address this problem, prior works have
developed complex task-specific architectures, novel modules to embed time into
MLLMs, or leveraged additional input signals such as video transcripts to best
encode contextual and temporal information. Interestingly, we find that most of
these efforts are surpassed by a much simpler design. We introduce Chrono, a
universal sequence blueprint that can be applied to an image-text pretrained
MLLM. Through extensive ablations across different MLLM architectures,
finetuning and zero-shot settings, and different datasets, we achieve a new
SOTA in moment retrieval on the most widely used benchmarks Charades-STA,
QVHighlights, ActivityNet Captions, and grounded video question answering on
NeXT-GQA.
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2024 06:59:09 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jul 2024 06:43:07 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Oct 2024 06:50:19 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Feb 2025 00:49:07 GMT"
},
{
"version": "v5",
"created": "Tue, 11 Mar 2025 10:03:46 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Meinardus",
"Boris",
""
],
[
"Rodriguez",
"Hector",
""
],
[
"Batra",
"Anil",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
]
]
| TITLE: Chrono: A Simple Blueprint for Representing Time in MLLMs
ABSTRACT: The recent success of Large Language Models (LLMs) has prompted the extension
to the multimodal domain developing image-text Multimodal LLMs (MLLMs) and then
video-text models. In this work, we investigate the challenge of contextual and
temporal comprehension in video-language models by exploring the task of
temporal localization in videos. To address this problem, prior works have
developed complex task-specific architectures, novel modules to embed time into
MLLMs, or leveraged additional input signals such as video transcripts to best
encode contextual and temporal information. Interestingly, we find that most of
these efforts are surpassed by a much simpler design. We introduce Chrono, a
universal sequence blueprint that can be applied to an image-text pretrained
MLLM. Through extensive ablations across different MLLM architectures,
finetuning and zero-shot settings, and different datasets, we achieve a new
SOTA in moment retrieval on the most widely used benchmarks Charades-STA,
QVHighlights, ActivityNet Captions, and grounded video question answering on
NeXT-GQA.
| no_new_dataset | 0.941277 |
2407.02906 | Zhanglei Yang | Zhanglei Yang, Haipeng Li, Mingbo Hong, Chen-Lin Zhang, Jiajun Li,
Shuaicheng Liu | Single Image Rolling Shutter Removal with Diffusion Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present RS-Diffusion, the first Diffusion Models-based method for
single-frame Rolling Shutter (RS) correction. RS artifacts compromise visual
quality of frames due to the row-wise exposure of CMOS sensors. Most previous
methods have focused on multi-frame approaches, using temporal information from
consecutive frames for the motion rectification. However, few approaches
address the more challenging but important single frame RS correction. In this
work, we present an "image-to-motion" framework via diffusion techniques, with
a designed patch-attention module. In addition, we present the RS-Real dataset,
comprised of captured RS frames alongside their corresponding Global Shutter
(GS) ground-truth pairs. The GS frames are corrected from the RS ones, guided
by the corresponding Inertial Measurement Unit (IMU) gyroscope data acquired
during capture. Experiments show that RS-Diffusion surpasses previous
single-frame RS methods, demonstrates the potential of diffusion-based
approaches, and provides a valuable dataset for further research.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2024 08:25:02 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 08:29:34 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yang",
"Zhanglei",
""
],
[
"Li",
"Haipeng",
""
],
[
"Hong",
"Mingbo",
""
],
[
"Zhang",
"Chen-Lin",
""
],
[
"Li",
"Jiajun",
""
],
[
"Liu",
"Shuaicheng",
""
]
]
| TITLE: Single Image Rolling Shutter Removal with Diffusion Models
ABSTRACT: We present RS-Diffusion, the first Diffusion Models-based method for
single-frame Rolling Shutter (RS) correction. RS artifacts compromise visual
quality of frames due to the row-wise exposure of CMOS sensors. Most previous
methods have focused on multi-frame approaches, using temporal information from
consecutive frames for the motion rectification. However, few approaches
address the more challenging but important single frame RS correction. In this
work, we present an "image-to-motion" framework via diffusion techniques, with
a designed patch-attention module. In addition, we present the RS-Real dataset,
comprised of captured RS frames alongside their corresponding Global Shutter
(GS) ground-truth pairs. The GS frames are corrected from the RS ones, guided
by the corresponding Inertial Measurement Unit (IMU) gyroscope data acquired
during capture. Experiments show that RS-Diffusion surpasses previous
single-frame RS methods, demonstrates the potential of diffusion-based
approaches, and provides a valuable dataset for further research.
| new_dataset | 0.961098 |
2407.04465 | Tanujit Chakraborty | Tanujit Chakraborty, Swarup Chattopadhyay, Suchismita Das, Shraddha M.
Naik, Chittaranjan Hens | Learning Patterns from Biological Networks: A Compounded Burr
Probability Model | null | null | null | null | stat.AP cs.SI physics.data-an | http://creativecommons.org/licenses/by/4.0/ | Complex biological networks, encompassing metabolic reactions, gene
interactions, and protein-protein interactions, often exhibit scale-free
characteristics with power-law degree distributions. However, empirical
evidence reveals significant deviations from ideal power-law fits,
necessitating more flexible and accurate modeling approaches. To address this
challenge, we introduce the Compounded Burr (CBurr) distribution, a novel
probability model derived from the Burr family, designed to capture the
intricate structural properties of biological networks. We rigorously establish
its statistical properties, including moment analysis, hazard functions, and
tail behavior, and provide a robust parameter estimation framework using the
maximum likelihood method. The CBurr distribution is broadly applicable to
networks with fat-tailed degree distributions, making it highly relevant for
modeling biological, social, and technological networks. To validate its
efficacy, we conduct an extensive empirical study on large-scale biological
network datasets, demonstrating that CBurr consistently outperforms
conventional power-law and alternative heavy-tailed models in fitting the
entire range of node degree distributions. Our proposed CBurr probability
distribution holds great promise for accurately capturing the complex nature of
biological networks and advancing our understanding of their underlying
mechanisms.
| [
{
"version": "v1",
"created": "Fri, 5 Jul 2024 12:26:21 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 14:35:17 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chakraborty",
"Tanujit",
""
],
[
"Chattopadhyay",
"Swarup",
""
],
[
"Das",
"Suchismita",
""
],
[
"Naik",
"Shraddha M.",
""
],
[
"Hens",
"Chittaranjan",
""
]
]
| TITLE: Learning Patterns from Biological Networks: A Compounded Burr
Probability Model
ABSTRACT: Complex biological networks, encompassing metabolic reactions, gene
interactions, and protein-protein interactions, often exhibit scale-free
characteristics with power-law degree distributions. However, empirical
evidence reveals significant deviations from ideal power-law fits,
necessitating more flexible and accurate modeling approaches. To address this
challenge, we introduce the Compounded Burr (CBurr) distribution, a novel
probability model derived from the Burr family, designed to capture the
intricate structural properties of biological networks. We rigorously establish
its statistical properties, including moment analysis, hazard functions, and
tail behavior, and provide a robust parameter estimation framework using the
maximum likelihood method. The CBurr distribution is broadly applicable to
networks with fat-tailed degree distributions, making it highly relevant for
modeling biological, social, and technological networks. To validate its
efficacy, we conduct an extensive empirical study on large-scale biological
network datasets, demonstrating that CBurr consistently outperforms
conventional power-law and alternative heavy-tailed models in fitting the
entire range of node degree distributions. Our proposed CBurr probability
distribution holds great promise for accurately capturing the complex nature of
biological networks and advancing our understanding of their underlying
mechanisms.
| no_new_dataset | 0.947478 |
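The compounded CBurr model itself is not available in standard libraries, but a minimal sketch of the kind of heavy-tailed degree-distribution fit the abstract argues for can use SciPy's plain Burr XII as a stand-in; the synthetic "degrees" below are placeholders.

```python
# Sketch only: fit a standard Burr XII distribution (a stand-in for the
# proposed CBurr model) to a synthetic heavy-tailed degree sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
degrees = stats.burr12.rvs(c=2.0, d=1.5, size=5000, random_state=rng)

c_hat, d_hat, loc_hat, scale_hat = stats.burr12.fit(degrees, floc=0)
print(f"fitted Burr XII: c={c_hat:.2f}, d={d_hat:.2f}, scale={scale_hat:.2f}")
```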
2407.13579 | Matthieu Futeral | Matthieu Futeral and Cordelia Schmid and Benoît Sagot and Rachel
Bawden | Towards Zero-Shot Multimodal Machine Translation | NAACL 2025 (Findings) | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Current multimodal machine translation (MMT) systems rely on fully supervised
data (i.e., models are trained on sentences with their translations and
accompanying images). However, this type of data is costly to collect, limiting
the extension of MMT to other language pairs for which such data does not
exist. In this work, we propose a method to bypass the need for fully
supervised data to train MMT systems, using multimodal English data only. Our
method, called ZeroMMT, consists in adapting a strong text-only machine
translation (MT) model by training it on a mixture of two objectives: visually
conditioned masked language modelling and the Kullback-Leibler divergence
between the original and new MMT outputs. We evaluate on standard MMT
benchmarks and the recently released CoMMuTE, a contrastive benchmark aiming to
evaluate how well models use images to disambiguate English sentences. We
obtain disambiguation performance close to state-of-the-art MMT models trained
additionally on fully supervised examples. To prove that our method generalizes
to languages with no fully supervised training data available, we extend the
CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
We further show that we can control the trade-off between disambiguation
capabilities and translation fidelity at inference time using classifier-free
guidance and without any additional data. Our code, data and trained models are
publicly accessible.
| [
{
"version": "v1",
"created": "Thu, 18 Jul 2024 15:20:31 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 13:07:09 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Futeral",
"Matthieu",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Sagot",
"Benoît",
""
],
[
"Bawden",
"Rachel",
""
]
]
| TITLE: Towards Zero-Shot Multimodal Machine Translation
ABSTRACT: Current multimodal machine translation (MMT) systems rely on fully supervised
data (i.e., models are trained on sentences with their translations and
accompanying images). However, this type of data is costly to collect, limiting
the extension of MMT to other language pairs for which such data does not
exist. In this work, we propose a method to bypass the need for fully
supervised data to train MMT systems, using multimodal English data only. Our
method, called ZeroMMT, consists in adapting a strong text-only machine
translation (MT) model by training it on a mixture of two objectives: visually
conditioned masked language modelling and the Kullback-Leibler divergence
between the original and new MMT outputs. We evaluate on standard MMT
benchmarks and the recently released CoMMuTE, a contrastive benchmark aiming to
evaluate how well models use images to disambiguate English sentences. We
obtain disambiguation performance close to state-of-the-art MMT models trained
additionally on fully supervised examples. To prove that our method generalizes
to languages with no fully supervised training data available, we extend the
CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
We further show that we can control the trade-off between disambiguation
capabilities and translation fidelity at inference time using classifier-free
guidance and without any additional data. Our code, data and trained models are
publicly accessible.
| no_new_dataset | 0.562687 |
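A hedged sketch of the two-objective mixture described in the ZeroMMT abstract: a visually conditioned masked-LM term plus a KL term keeping the adapted model close to the original text-only MT model. The function name, the `alpha` weighting, and the tensor shapes are assumptions, not the paper's exact formulation.

```python
# Sketch of a masked-LM + KL training objective (shapes: batch x seq x vocab).
import torch
import torch.nn.functional as F

def zerommt_style_loss(mmt_logits, masked_labels, mt_logits, alpha=0.5):
    # Masked-LM term on the adapted (multimodal) model's predictions.
    mlm = F.cross_entropy(mmt_logits.reshape(-1, mmt_logits.size(-1)),
                          masked_labels.reshape(-1), ignore_index=-100)
    # KL term pulling the adapted model toward the original text-only MT model.
    kl = F.kl_div(F.log_softmax(mmt_logits, dim=-1),
                  F.softmax(mt_logits, dim=-1), reduction="batchmean")
    return alpha * mlm + (1 - alpha) * kl
```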
2407.13766 | Tsung-Han Wu | Tsung-Han Wu, Giscard Biamby, Jerome Quenum, Ritwik Gupta, Joseph E.
Gonzalez, Trevor Darrell, David M. Chan | Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark | Accepted to ICLR 2025; Project page:
https://visual-haystacks.github.io | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Multimodal Models (LMMs) have made significant strides in visual
question-answering for single images. Recent advancements like long-context
LMMs have allowed them to ingest larger, or even multiple, images. However, the
ability to process a large number of visual tokens does not guarantee effective
retrieval and reasoning for multi-image question answering (MIQA), especially
in real-world applications like photo album searches or satellite imagery
analysis. In this work, we first assess the limitations of current benchmarks
for long-context LMMs. We address these limitations by introducing a new
vision-centric, long-context benchmark, "Visual Haystacks (VHs)". We
comprehensively evaluate both open-source and proprietary models on VHs, and
demonstrate that these models struggle when reasoning across potentially
unrelated images, perform poorly on cross-image reasoning, as well as exhibit
biases based on the placement of key information within the context window.
Towards a solution, we introduce MIRAGE (Multi-Image Retrieval Augmented
Generation), an open-source, lightweight visual-RAG framework that processes up
to 10k images on a single 40G A100 GPU -- far surpassing the 1k-image limit of
contemporary models. MIRAGE demonstrates up to 13% performance improvement over
existing open-source LMMs on VHs, sets a new state-of-the-art on the RetVQA
multi-image QA benchmark, and achieves competitive performance on single-image
QA with state-of-the-art LMMs. Our dataset, model, and code are available at:
https://visual-haystacks.github.io.
| [
{
"version": "v1",
"created": "Thu, 18 Jul 2024 17:59:30 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Oct 2024 21:03:15 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Feb 2025 17:56:16 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Mar 2025 17:31:27 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wu",
"Tsung-Han",
""
],
[
"Biamby",
"Giscard",
""
],
[
"Quenum",
"Jerome",
""
],
[
"Gupta",
"Ritwik",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Chan",
"David M.",
""
]
]
| TITLE: Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark
ABSTRACT: Large Multimodal Models (LMMs) have made significant strides in visual
question-answering for single images. Recent advancements like long-context
LMMs have allowed them to ingest larger, or even multiple, images. However, the
ability to process a large number of visual tokens does not guarantee effective
retrieval and reasoning for multi-image question answering (MIQA), especially
in real-world applications like photo album searches or satellite imagery
analysis. In this work, we first assess the limitations of current benchmarks
for long-context LMMs. We address these limitations by introducing a new
vision-centric, long-context benchmark, "Visual Haystacks (VHs)". We
comprehensively evaluate both open-source and proprietary models on VHs, and
demonstrate that these models struggle when reasoning across potentially
unrelated images, perform poorly on cross-image reasoning, as well as exhibit
biases based on the placement of key information within the context window.
Towards a solution, we introduce MIRAGE (Multi-Image Retrieval Augmented
Generation), an open-source, lightweight visual-RAG framework that processes up
to 10k images on a single 40G A100 GPU -- far surpassing the 1k-image limit of
contemporary models. MIRAGE demonstrates up to 13% performance improvement over
existing open-source LMMs on VHs, sets a new state-of-the-art on the RetVQA
multi-image QA benchmark, and achieves competitive performance on single-image
QA with state-of-the-art LMMs. Our dataset, model, and code are available at:
https://visual-haystacks.github.io.
| no_new_dataset | 0.628464 |
2407.19345 | Gleb Kuzmin | Gleb Kuzmin, Neemesh Yadav, Ivan Smirnov, Timothy Baldwin, Artem
Shelmanov | Inference-Time Selective Debiasing to Enhance Fairness in Text
Classification Models | Accepted to NAACL 2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose selective debiasing -- an inference-time safety mechanism designed
to enhance the overall model quality in terms of prediction performance and
fairness, especially in scenarios where retraining the model is impractical.
The method draws inspiration from selective classification, where at inference
time, predictions with low quality, as indicated by their uncertainty scores,
are discarded. In our approach, we identify the potentially biased model
predictions and, instead of discarding them, we remove bias from these
predictions using LEACE -- a post-processing debiasing method. To select
problematic predictions, we propose a bias quantification approach based on KL
divergence, which achieves better results than standard uncertainty
quantification methods. Experiments on text classification datasets with
encoder-based classification models demonstrate that selective debiasing helps
to reduce the performance gap between post-processing methods and debiasing
techniques from the at-training and pre-processing categories.
| [
{
"version": "v1",
"created": "Sat, 27 Jul 2024 21:56:23 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Aug 2024 12:22:51 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Feb 2025 13:18:25 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Mar 2025 08:39:45 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Kuzmin",
"Gleb",
""
],
[
"Yadav",
"Neemesh",
""
],
[
"Smirnov",
"Ivan",
""
],
[
"Baldwin",
"Timothy",
""
],
[
"Shelmanov",
"Artem",
""
]
]
| TITLE: Inference-Time Selective Debiasing to Enhance Fairness in Text
Classification Models
ABSTRACT: We propose selective debiasing -- an inference-time safety mechanism designed
to enhance the overall model quality in terms of prediction performance and
fairness, especially in scenarios where retraining the model is impractical.
The method draws inspiration from selective classification, where at inference
time, predictions with low quality, as indicated by their uncertainty scores,
are discarded. In our approach, we identify the potentially biased model
predictions and, instead of discarding them, we remove bias from these
predictions using LEACE -- a post-processing debiasing method. To select
problematic predictions, we propose a bias quantification approach based on KL
divergence, which achieves better results than standard uncertainty
quantification methods. Experiments on text classification datasets with
encoder-based classification models demonstrate that selective debiasing helps
to reduce the performance gap between post-processing methods and debiasing
techniques from the at-training and pre-processing categories.
| no_new_dataset | 0.947332 |
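One plausible reading of the selection step described above, sketched in code: score each prediction by the KL divergence between its original and post-processed class distributions and keep the debiased output only where the shift is large. The threshold and the `debiased_logits` input (standing in for a post-processing debiaser such as LEACE) are assumptions; the paper's exact bias score may differ.

```python
# Sketch: KL-based selection of which predictions to replace with debiased ones.
import numpy as np
from scipy.special import softmax, rel_entr

def select_for_debiasing(logits, debiased_logits, threshold=0.05):
    p = softmax(logits, axis=-1)
    q = softmax(debiased_logits, axis=-1)
    kl = rel_entr(p, q).sum(axis=-1)          # per-example KL(p || q)
    biased = kl > threshold                   # large shift -> flag as biased
    out = np.where(biased[:, None], debiased_logits, logits)
    return out, biased
```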
2408.06631 | Mingning Guo | Mingning Guo, Mengwei Wu, Yuxiang Shen, Haifeng Li and Chao Tao | IFShip: Interpretable Fine-grained Ship Classification with Domain
Knowledge-Enhanced Vision-Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end interpretation currently dominates the remote sensing fine-grained
ship classification (RS-FGSC) task. However, the inference process remains
uninterpretable, leading to criticisms of these models as "black box" systems.
To address this issue, we propose a domain knowledge-enhanced Chain-of-Thought
(CoT) prompt generation mechanism, which is used to semi-automatically
construct a task-specific instruction-following dataset, TITANIC-FGS. By
training on TITANIC-FGS, we adapt general-domain vision-language models (VLMs)
to the FGSC task, resulting in a model named IFShip. Building upon IFShip, we
develop an FGSC visual chatbot that redefines the FGSC problem as a
step-by-step reasoning task and conveys the reasoning process in natural
language. Experimental results show that IFShip outperforms state-of-the-art
FGSC algorithms in both interpretability and classification accuracy.
Furthermore, compared to VLMs such as LLaVA and MiniGPT-4, IFShip demonstrates
superior performance on the FGSC task. It provides an accurate chain of
reasoning when fine-grained ship types are recognizable to the human eye and
offers interpretable explanations when they are not.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 04:36:18 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 12:02:01 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Guo",
"Mingning",
""
],
[
"Wu",
"Mengwei",
""
],
[
"Shen",
"Yuxiang",
""
],
[
"Li",
"Haifeng",
""
],
[
"Tao",
"Chao",
""
]
]
| TITLE: IFShip: Interpretable Fine-grained Ship Classification with Domain
Knowledge-Enhanced Vision-Language Models
ABSTRACT: End-to-end interpretation currently dominates the remote sensing fine-grained
ship classification (RS-FGSC) task. However, the inference process remains
uninterpretable, leading to criticisms of these models as "black box" systems.
To address this issue, we propose a domain knowledge-enhanced Chain-of-Thought
(CoT) prompt generation mechanism, which is used to semi-automatically
construct a task-specific instruction-following dataset, TITANIC-FGS. By
training on TITANIC-FGS, we adapt general-domain vision-language models (VLMs)
to the FGSC task, resulting in a model named IFShip. Building upon IFShip, we
develop an FGSC visual chatbot that redefines the FGSC problem as a
step-by-step reasoning task and conveys the reasoning process in natural
language. Experimental results show that IFShip outperforms state-of-the-art
FGSC algorithms in both interpretability and classification accuracy.
Furthermore, compared to VLMs such as LLaVA and MiniGPT-4, IFShip demonstrates
superior performance on the FGSC task. It provides an accurate chain of
reasoning when fine-grained ship types are recognizable to the human eye and
offers interpretable explanations when they are not.
| new_dataset | 0.966976 |
2408.07514 | András Kalapos | András Kalapos, Bálint Gyires-Tóth | CNN-JEPA: Self-Supervised Pretraining Convolutional Neural Networks
Using Joint Embedding Predictive Architecture | Preprint | 2024 International Conference on Machine Learning and Applications
(ICMLA), Miami, FL, USA, 2024, pp. 1111-1114 | 10.1109/ICMLA61862.2024.00169 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised learning (SSL) has become an important approach in
pretraining large neural networks, enabling unprecedented scaling of model and
dataset sizes. While recent advances like I-JEPA have shown promising results
for Vision Transformers, adapting such methods to Convolutional Neural Networks
(CNNs) presents unique challenges. In this paper, we introduce CNN-JEPA, a
novel SSL method that successfully applies the joint embedding predictive
architecture approach to CNNs. Our method incorporates a sparse CNN encoder to
handle masked inputs, a fully convolutional predictor using depthwise separable
convolutions, and an improved masking strategy. We demonstrate that CNN-JEPA
outperforms I-JEPA with ViT architectures on ImageNet-100, achieving a 73.3%
linear top-1 accuracy using a standard ResNet-50 encoder. Compared to other
CNN-based SSL methods, CNN-JEPA requires 17-35% less training time for the same
number of epochs and approaches the linear and k-NN top-1 accuracies of BYOL,
SimCLR, and VICReg. Our approach offers a simpler, more efficient alternative
to existing SSL methods for CNNs, requiring minimal augmentations and no
separate projector network.
| [
{
"version": "v1",
"created": "Wed, 14 Aug 2024 12:48:37 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 09:42:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Kalapos",
"András",
""
],
[
"Gyires-Tóth",
"Bálint",
""
]
]
| TITLE: CNN-JEPA: Self-Supervised Pretraining Convolutional Neural Networks
Using Joint Embedding Predictive Architecture
ABSTRACT: Self-supervised learning (SSL) has become an important approach in
pretraining large neural networks, enabling unprecedented scaling of model and
dataset sizes. While recent advances like I-JEPA have shown promising results
for Vision Transformers, adapting such methods to Convolutional Neural Networks
(CNNs) presents unique challenges. In this paper, we introduce CNN-JEPA, a
novel SSL method that successfully applies the joint embedding predictive
architecture approach to CNNs. Our method incorporates a sparse CNN encoder to
handle masked inputs, a fully convolutional predictor using depthwise separable
convolutions, and an improved masking strategy. We demonstrate that CNN-JEPA
outperforms I-JEPA with ViT architectures on ImageNet-100, achieving a 73.3%
linear top-1 accuracy using a standard ResNet-50 encoder. Compared to other
CNN-based SSL methods, CNN-JEPA requires 17-35% less training time for the same
number of epochs and approaches the linear and k-NN top-1 accuracies of BYOL,
SimCLR, and VICReg. Our approach offers a simpler, more efficient alternative
to existing SSL methods for CNNs, requiring minimal augmentations and no
separate projector network.
| no_new_dataset | 0.946399 |
2408.10883 | Xinqi Su | Xinqi Su, Zitong Yu, Yawen Cui, Ajian Liu, Xun Lin, Yuhao Wang,
Haochen Liang, Wenhui Li, Li Shen, Xiaochun Cao | Dynamic Analysis and Adaptive Discriminator for Fake News Detection | null | null | null | null | cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the current web environment, fake news spreads rapidly across online social
networks, posing serious threats to society. Existing multimodal fake news
detection methods can generally be classified into knowledge-based and
semantic-based approaches. However, these methods rely heavily on human
expertise and feedback, lacking flexibility. To address this challenge, we
propose a Dynamic Analysis and Adaptive Discriminator (DAAD) approach for fake
news detection. For knowledge-based methods, we introduce the Monte Carlo Tree
Search algorithm to leverage the self-reflective capabilities of large language
models (LLMs) for prompt optimization, providing richer, domain-specific
details and guidance to the LLMs, while enabling more flexible integration of
LLM comment on news content. For semantic-based methods, we define four typical
deceit patterns: emotional exaggeration, logical inconsistency, image
manipulation, and semantic inconsistency, to reveal the mechanisms behind fake
news creation. To detect these patterns, we carefully design four
discriminators and expand them in depth and breadth, using the soft-routing
mechanism to explore optimal detection models. Experimental results on three
real-world datasets demonstrate the superiority of our approach. The code will
be available at: https://github.com/SuXinqi/DAAD.
| [
{
"version": "v1",
"created": "Tue, 20 Aug 2024 14:13:54 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 03:05:45 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Su",
"Xinqi",
""
],
[
"Yu",
"Zitong",
""
],
[
"Cui",
"Yawen",
""
],
[
"Liu",
"Ajian",
""
],
[
"Lin",
"Xun",
""
],
[
"Wang",
"Yuhao",
""
],
[
"Liang",
"Haochen",
""
],
[
"Li",
"Wenhui",
""
],
[
"Shen",
"Li",
""
],
[
"Cao",
"Xiaochun",
""
]
]
| TITLE: Dynamic Analysis and Adaptive Discriminator for Fake News Detection
ABSTRACT: In the current web environment, fake news spreads rapidly across online social
networks, posing serious threats to society. Existing multimodal fake news
detection methods can generally be classified into knowledge-based and
semantic-based approaches. However, these methods rely heavily on human
expertise and feedback, lacking flexibility. To address this challenge, we
propose a Dynamic Analysis and Adaptive Discriminator (DAAD) approach for fake
news detection. For knowledge-based methods, we introduce the Monte Carlo Tree
Search algorithm to leverage the self-reflective capabilities of large language
models (LLMs) for prompt optimization, providing richer, domain-specific
details and guidance to the LLMs, while enabling more flexible integration of
LLM comment on news content. For semantic-based methods, we define four typical
deceit patterns: emotional exaggeration, logical inconsistency, image
manipulation, and semantic inconsistency, to reveal the mechanisms behind fake
news creation. To detect these patterns, we carefully design four
discriminators and expand them in depth and breadth, using the soft-routing
mechanism to explore optimal detection models. Experimental results on three
real-world datasets demonstrate the superiority of our approach. The code will
be available at: https://github.com/SuXinqi/DAAD.
| no_new_dataset | 0.942981 |
2408.15993 | Sungduk Yu | Sungduk Yu, Brian L. White, Anahita Bhiwandiwalla, Musashi Hinck,
Matthew Lyle Olson, Yaniv Gurwicz, Raanan Y. Rohekar, Tung Nguyen, Vasudev
Lal | ClimDetect: A Benchmark Dataset for Climate Change Detection and
Attribution | null | null | null | null | cs.CV cs.LG physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting and attributing temperature increases driven by climate change is
crucial for understanding global warming and informing adaptation strategies.
However, distinguishing human-induced climate signals from natural variability
remains challenging for traditional detection and attribution (D&A) methods,
which rely on identifying specific "fingerprints" -- spatial patterns expected
to emerge from external forcings such as greenhouse gas emissions. Deep
learning offers promise in discerning these complex patterns within expansive
spatial datasets, yet the lack of standardized protocols has hindered
consistent comparisons across studies.
To address this gap, we introduce ClimDetect, a standardized dataset
comprising 1.17M daily climate snapshots paired with target climate change
indicator variables. The dataset is curated from both CMIP6 climate model
simulations and real-world observation-assimilated reanalysis datasets (ERA5,
JRA-3Q, and MERRA-2), and is designed to enhance model accuracy in detecting
climate change signals. ClimDetect integrates various input and target
variables used in previous research, ensuring comparability and consistency
across studies. We also explore the application of vision transformers (ViT) to
climate data -- a novel approach that, to our knowledge, has not been attempted
before for climate change detection tasks. Our open-access data serve as a
benchmark for advancing climate science by enabling end-to-end model
development and evaluation. ClimDetect is publicly accessible via Hugging Face
dataset repository at: https://huggingface.co/datasets/ClimDetect/ClimDetect.
| [
{
"version": "v1",
"created": "Wed, 28 Aug 2024 17:58:53 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 20:45:11 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yu",
"Sungduk",
""
],
[
"White",
"Brian L.",
""
],
[
"Bhiwandiwalla",
"Anahita",
""
],
[
"Hinck",
"Musashi",
""
],
[
"Olson",
"Matthew Lyle",
""
],
[
"Gurwicz",
"Yaniv",
""
],
[
"Rohekar",
"Raanan Y.",
""
],
[
"Nguyen",
"Tung",
""
],
[
"Lal",
"Vasudev",
""
]
]
| TITLE: ClimDetect: A Benchmark Dataset for Climate Change Detection and
Attribution
ABSTRACT: Detecting and attributing temperature increases driven by climate change is
crucial for understanding global warming and informing adaptation strategies.
However, distinguishing human-induced climate signals from natural variability
remains challenging for traditional detection and attribution (D&A) methods,
which rely on identifying specific "fingerprints" -- spatial patterns expected
to emerge from external forcings such as greenhouse gas emissions. Deep
learning offers promise in discerning these complex patterns within expansive
spatial datasets, yet the lack of standardized protocols has hindered
consistent comparisons across studies.
To address this gap, we introduce ClimDetect, a standardized dataset
comprising 1.17M daily climate snapshots paired with target climate change
indicator variables. The dataset is curated from both CMIP6 climate model
simulations and real-world observation-assimilated reanalysis datasets (ERA5,
JRA-3Q, and MERRA-2), and is designed to enhance model accuracy in detecting
climate change signals. ClimDetect integrates various input and target
variables used in previous research, ensuring comparability and consistency
across studies. We also explore the application of vision transformers (ViT) to
climate data -- a novel approach that, to our knowledge, has not been attempted
before for climate change detection tasks. Our open-access data serve as a
benchmark for advancing climate science by enabling end-to-end model
development and evaluation. ClimDetect is publicly accessible via Hugging Face
dataset repository at: https://huggingface.co/datasets/ClimDetect/ClimDetect.
| new_dataset | 0.966976 |
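Since the abstract gives the Hugging Face repository id, a minimal sketch for pulling the ClimDetect data looks like the following; split names, configurations, and field names are not assumed and should be read off the returned object.

```python
# Sketch: load the dataset from the Hub and inspect its splits and fields.
from datasets import load_dataset

ds = load_dataset("ClimDetect/ClimDetect")   # repo id from the abstract above
print(ds)                                    # available splits and features
split = next(iter(ds))
print(ds[split][0].keys())                   # field names of the first record
```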
2409.17932 | Mathieu Bazinet | Mathieu Bazinet, Valentina Zantedeschi, Pascal Germain | Sample Compression Unleashed: New Generalization Bounds for Real Valued
Losses | Proceedings of the 28th International Conference on Artificial
Intelligence and Statistics (AISTATS) 2025, Mai Khao, Thailand. PMLR: Volume
258 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The sample compression theory provides generalization guarantees for
predictors that can be fully defined using a subset of the training dataset and
a (short) message string, generally defined as a binary sequence. Previous
works provided generalization bounds for the zero-one loss, which is
restrictive notably when applied to deep learning approaches. In this paper, we
present a general framework for deriving new sample compression bounds that
hold for real-valued unbounded losses. Using the Pick-To-Learn (P2L)
meta-algorithm, which transforms the training method of any machine-learning
predictor to yield sample-compressed predictors, we empirically demonstrate the
tightness of the bounds and their versatility by evaluating them on random
forests and multiple types of neural networks.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 15:08:52 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Oct 2024 17:16:43 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 12:12:13 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Bazinet",
"Mathieu",
""
],
[
"Zantedeschi",
"Valentina",
""
],
[
"Germain",
"Pascal",
""
]
]
| TITLE: Sample Compression Unleashed: New Generalization Bounds for Real Valued
Losses
ABSTRACT: The sample compression theory provides generalization guarantees for
predictors that can be fully defined using a subset of the training dataset and
a (short) message string, generally defined as a binary sequence. Previous
works provided generalization bounds for the zero-one loss, which is
restrictive notably when applied to deep learning approaches. In this paper, we
present a general framework for deriving new sample compression bounds that
hold for real-valued unbounded losses. Using the Pick-To-Learn (P2L)
meta-algorithm, which transforms the training method of any machine-learning
predictor to yield sample-compressed predictors, we empirically demonstrate the
tightness of the bounds and their versatility by evaluating them on random
forests and multiple types of neural networks.
| no_new_dataset | 0.943086 |
2409.20503 | Xingfang Wu | Xingfang Wu, Heng Li, Foutse Khomh | What Information Contributes to Log-based Anomaly Detection? Insights
from a Configurable Transformer-Based Approach | 30 pages | null | null | null | cs.SE cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Log data are generated from logging statements in the source code, providing
insights into the execution processes of software applications and systems.
State-of-the-art log-based anomaly detection approaches typically leverage deep
learning models to capture the semantic or sequential information in the log
data and detect anomalous runtime behaviors. However, the impacts of these
different types of information are not clear. In addition, most existing
approaches ignore the timestamps in log data, which can potentially provide
fine-grained sequential and temporal information. In this work, we propose a
configurable Transformer-based anomaly detection model that can capture the
semantic, sequential, and temporal information in the log data and allows us to
configure the different types of information as the model's features.
Additionally, we train and evaluate the proposed model using log sequences of
different lengths, thus overcoming the constraint of existing methods that rely
on fixed-length or time-windowed log sequences as inputs. With the proposed
model, we conduct a series of experiments with different combinations of input
features to evaluate the roles of different types of information in anomaly
detection. The model can attain competitive and consistently stable performance
compared to the baselines when presented with log sequences of varying lengths.
The results indicate that the event occurrence information plays a key role in
identifying anomalies, while the impact of the sequential and temporal
information is not significant for anomaly detection on the studied public
datasets. On the other hand, the findings also reveal the simplicity of the
studied public datasets and highlight the importance of constructing new
datasets that contain different types of anomalies to better evaluate the
performance of anomaly detection models.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 17:03:13 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 01:55:49 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wu",
"Xingfang",
""
],
[
"Li",
"Heng",
""
],
[
"Khomh",
"Foutse",
""
]
]
| TITLE: What Information Contributes to Log-based Anomaly Detection? Insights
from a Configurable Transformer-Based Approach
ABSTRACT: Log data are generated from logging statements in the source code, providing
insights into the execution processes of software applications and systems.
State-of-the-art log-based anomaly detection approaches typically leverage deep
learning models to capture the semantic or sequential information in the log
data and detect anomalous runtime behaviors. However, the impacts of these
different types of information are not clear. In addition, most existing
approaches ignore the timestamps in log data, which can potentially provide
fine-grained sequential and temporal information. In this work, we propose a
configurable Transformer-based anomaly detection model that can capture the
semantic, sequential, and temporal information in the log data and allows us to
configure the different types of information as the model's features.
Additionally, we train and evaluate the proposed model using log sequences of
different lengths, thus overcoming the constraint of existing methods that rely
on fixed-length or time-windowed log sequences as inputs. With the proposed
model, we conduct a series of experiments with different combinations of input
features to evaluate the roles of different types of information in anomaly
detection. The model can attain competitive and consistently stable performance
compared to the baselines when presented with log sequences of varying lengths.
The results indicate that the event occurrence information plays a key role in
identifying anomalies, while the impact of the sequential and temporal
information is not significant for anomaly detection on the studied public
datasets. On the other hand, the findings also reveal the simplicity of the
studied public datasets and highlight the importance of constructing new
datasets that contain different types of anomalies to better evaluate the
performance of anomaly detection models.
| no_new_dataset | 0.944638 |
2410.03735 | David Grangier | David Grangier, Simin Fan, Skyler Seto, Pierre Ablin | Task-Adaptive Pretrained Language Models via Clustered-Importance
Sampling | 23 pages, presented at the International Conference on Learning
Representation (ICLR), 2025 | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Specialist language models (LMs) focus on a specific task or domain on which
they often outperform generalist LMs of the same size. However, the specialist
data needed to pretrain these models is only available in limited amount for
most tasks. In this work, we build specialist models from large generalist
training sets instead. We propose a novel method, ClusteRed Importance SamPling
(CRISP). CRISP clusters the generalist dataset and samples from these clusters
based on their frequencies in the smaller specialist dataset. It is scalable,
suitable for both pretraining and continued pretraining, and works well in
multi-task settings. CRISP performs favorably compared to other methods that
adjust the training distribution of the generalist data with guidance from the
limited domain-specific data. Our findings demonstrate improvements across
different domains in terms of language modeling perplexity and accuracy on
multiple-choice question tasks. We also present ablation studies that examine
the impact of dataset sizes, clustering configurations, and model sizes.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 20:49:54 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 00:20:30 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Grangier",
"David",
""
],
[
"Fan",
"Simin",
""
],
[
"Seto",
"Skyler",
""
],
[
"Ablin",
"Pierre",
""
]
]
| TITLE: Task-Adaptive Pretrained Language Models via Clustered-Importance
Sampling
ABSTRACT: Specialist language models (LMs) focus on a specific task or domain on which
they often outperform generalist LMs of the same size. However, the specialist
data needed to pretrain these models is only available in limited amount for
most tasks. In this work, we build specialist models from large generalist
training sets instead. We propose a novel method, ClusteRed Importance SamPling
(CRISP). CRISP clusters the generalist dataset and samples from these clusters
based on their frequencies in the smaller specialist dataset. It is scalable,
suitable for both pretraining and continued pretraining, and works well in
multi-task settings. CRISP performs favorably compared to other methods that
adjust the training distribution of the generalist data with guidance from the
limited domain-specific data. Our findings demonstrate improvements across
different domains in terms of language modeling perplexity and accuracy on
multiple-choice question tasks. We also present ablation studies that examine
the impact of dataset sizes, clustering configurations, and model sizes.
| no_new_dataset | 0.95297 |
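A small sketch in the spirit of clustered importance sampling as described above: cluster the generalist corpus, estimate cluster frequencies on the specialist set, and resample generalist documents with weights proportional to the specialist-to-generalist frequency ratio. Embeddings, cluster count, and sample sizes are arbitrary placeholders.

```python
# Sketch: clustered importance sampling with toy embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
general_emb = rng.normal(size=(10_000, 32))   # placeholder document embeddings
special_emb = rng.normal(size=(500, 32))

km = KMeans(n_clusters=64, n_init=10, random_state=0).fit(general_emb)
spec_freq = np.bincount(km.predict(special_emb), minlength=64) + 1e-6
gen_count = np.bincount(km.labels_, minlength=64)

weights = spec_freq[km.labels_] / gen_count[km.labels_]   # favor clusters the
weights /= weights.sum()                                  # specialist set uses
sampled_idx = rng.choice(len(general_emb), size=2_000, replace=True, p=weights)
```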
2410.05331 | Guanchu Wang | Guanchu Wang, Yu-Neng Chuang, Ruixiang Tang, Shaochen Zhong, Jiayi
Yuan, Hongye Jin, Zirui Liu, Vipin Chaudhary, Shuai Xu, James Caverlee, Xia
Hu | Taylor Unswift: Secured Weight Release for Large Language Models via
Taylor Expansion | null | null | null | null | cs.CR cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring the security of released large language models (LLMs) poses a
significant dilemma, as existing mechanisms either compromise ownership rights
or raise data privacy concerns. To address this dilemma, we introduce TaylorMLP
to protect the ownership of released LLMs and prevent their abuse.
Specifically, TaylorMLP preserves the ownership of LLMs by transforming the
weights of LLMs into parameters of Taylor-series. Instead of releasing the
original weights, developers can release the Taylor-series parameters with
users, thereby ensuring the security of LLMs. Moreover, TaylorMLP can prevent
abuse of LLMs by adjusting the generation speed. It can induce low-speed token
generation for the protected LLMs by increasing the terms in the Taylor-series.
This intentional delay helps LLM developers prevent potential large-scale
unauthorized uses of their models. Empirical experiments across five datasets
and three LLM architectures demonstrate that TaylorMLP induces over 4x increase
in latency, producing the tokens precisely matched with original LLMs.
Subsequent defensive experiments further confirm that TaylorMLP effectively
prevents users from reconstructing the weight values based on downstream
datasets.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2024 01:13:49 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 02:16:12 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Guanchu",
""
],
[
"Chuang",
"Yu-Neng",
""
],
[
"Tang",
"Ruixiang",
""
],
[
"Zhong",
"Shaochen",
""
],
[
"Yuan",
"Jiayi",
""
],
[
"Jin",
"Hongye",
""
],
[
"Liu",
"Zirui",
""
],
[
"Chaudhary",
"Vipin",
""
],
[
"Xu",
"Shuai",
""
],
[
"Caverlee",
"James",
""
],
[
"Hu",
"Xia",
""
]
]
| TITLE: Taylor Unswift: Secured Weight Release for Large Language Models via
Taylor Expansion
ABSTRACT: Ensuring the security of released large language models (LLMs) poses a
significant dilemma, as existing mechanisms either compromise ownership rights
or raise data privacy concerns. To address this dilemma, we introduce TaylorMLP
to protect the ownership of released LLMs and prevent their abuse.
Specifically, TaylorMLP preserves the ownership of LLMs by transforming the
weights of LLMs into parameters of Taylor-series. Instead of releasing the
original weights, developers can release the Taylor-series parameters with
users, thereby ensuring the security of LLMs. Moreover, TaylorMLP can prevent
abuse of LLMs by adjusting the generation speed. It can induce low-speed token
generation for the protected LLMs by increasing the terms in the Taylor-series.
This intentional delay helps LLM developers prevent potential large-scale
unauthorized uses of their models. Empirical experiments across five datasets
and three LLM architectures demonstrate that TaylorMLP induces over 4x increase
in latency, producing the tokens precisely matched with original LLMs.
Subsequent defensive experiments further confirm that TaylorMLP effectively
prevents users from reconstructing the weight values based on downstream
datasets.
| no_new_dataset | 0.948728 |
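As a toy analogy only (not the TaylorMLP construction), the snippet below shows the accuracy/latency knob a truncated Taylor series provides: more terms cost more compute while converging to the original function, mirroring the abstract's claim that adding series terms slows generation while matching the original model's tokens.

```python
# Toy analogy: more Taylor terms -> more compute, closer to the original value.
import math

def taylor_exp(x: float, n_terms: int) -> float:
    """Approximate exp(x) with the first n_terms of its Taylor series."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

for n in (2, 4, 8, 16):
    print(f"{n:2d} terms: {taylor_exp(1.0, n):.6f}  (exp(1) = {math.e:.6f})")
```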
2410.06502 | Yuchen Shen | Yuchen Shen, Chenhao Zhang, Sijie Fu, Chenghui Zhou, Newell Washburn,
Barnabás Póczos | Chemistry-Inspired Diffusion with Non-Differentiable Guidance | accepted by ICLR 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in diffusion models have shown remarkable potential in the
conditional generation of novel molecules. These models can be guided in two
ways: (i) explicitly, through additional features representing the condition,
or (ii) implicitly, using a property predictor. However, training property
predictors or conditional diffusion models requires an abundance of labeled
data and is inherently challenging in real-world applications. We propose a
novel approach that attenuates the limitations of acquiring large labeled
datasets by leveraging domain knowledge from quantum chemistry as a
non-differentiable oracle to guide an unconditional diffusion model. Instead of
relying on neural networks, the oracle provides accurate guidance in the form
of estimated gradients, allowing the diffusion process to sample from a
conditional distribution specified by quantum chemistry. We show that this
results in more precise conditional generation of novel and stable molecular
structures. Our experiments demonstrate that our method: (1) significantly
reduces atomic forces, enhancing the validity of generated molecules when used
for stability optimization; (2) is compatible with both explicit and implicit
guidance in diffusion models, enabling joint optimization of molecular
properties and stability; and (3) generalizes effectively to molecular
optimization tasks beyond stability optimization.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 03:10:21 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 14:58:58 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Shen",
"Yuchen",
""
],
[
"Zhang",
"Chenhao",
""
],
[
"Fu",
"Sijie",
""
],
[
"Zhou",
"Chenghui",
""
],
[
"Washburn",
"Newell",
""
],
[
"Póczos",
"Barnabás",
""
]
]
| TITLE: Chemistry-Inspired Diffusion with Non-Differentiable Guidance
ABSTRACT: Recent advances in diffusion models have shown remarkable potential in the
conditional generation of novel molecules. These models can be guided in two
ways: (i) explicitly, through additional features representing the condition,
or (ii) implicitly, using a property predictor. However, training property
predictors or conditional diffusion models requires an abundance of labeled
data and is inherently challenging in real-world applications. We propose a
novel approach that attenuates the limitations of acquiring large labeled
datasets by leveraging domain knowledge from quantum chemistry as a
non-differentiable oracle to guide an unconditional diffusion model. Instead of
relying on neural networks, the oracle provides accurate guidance in the form
of estimated gradients, allowing the diffusion process to sample from a
conditional distribution specified by quantum chemistry. We show that this
results in more precise conditional generation of novel and stable molecular
structures. Our experiments demonstrate that our method: (1) significantly
reduces atomic forces, enhancing the validity of generated molecules when used
for stability optimization; (2) is compatible with both explicit and implicit
guidance in diffusion models, enabling joint optimization of molecular
properties and stability; and (3) generalizes effectively to molecular
optimization tasks beyond stability optimization.
| no_new_dataset | 0.951818 |
2410.07659 | Sparsh Mittal | Onkar Susladkar, Jishu Sen Gupta, Chirag Sehgal, Sparsh Mittal, Rekha
Singhal | MotionAura: Generating High-Quality and Motion Consistent Videos using
Discrete Diffusion | Accepted in ICLR 2025 (spotlight paper) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The spatio-temporal complexity of video data presents significant challenges
in tasks such as compression, generation, and inpainting. We present four key
contributions to address the challenges of spatiotemporal video processing.
First, we introduce the 3D Mobile Inverted Vector-Quantization Variational
Autoencoder (3D-MBQ-VAE), which combines Variational Autoencoders (VAEs) with
masked token modeling to enhance spatiotemporal video compression. The model
achieves superior temporal consistency and state-of-the-art (SOTA)
reconstruction quality by employing a novel training strategy with full frame
masking. Second, we present MotionAura, a text-to-video generation framework
that utilizes vector-quantized diffusion models to discretize the latent space
and capture complex motion dynamics, producing temporally coherent videos
aligned with text prompts. Third, we propose a spectral transformer-based
denoising network that processes video data in the frequency domain using the
Fourier Transform. This method effectively captures global context and
long-range dependencies for high-quality video generation and denoising.
Lastly, we introduce a downstream task of Sketch Guided Video Inpainting. This
task leverages Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning.
Our models achieve SOTA performance on a range of benchmarks. Our work offers
robust frameworks for spatiotemporal modeling and user-driven video content
manipulation. We will release the code, datasets, and models in open-source.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 07:07:56 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 05:19:31 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Susladkar",
"Onkar",
""
],
[
"Gupta",
"Jishu Sen",
""
],
[
"Sehgal",
"Chirag",
""
],
[
"Mittal",
"Sparsh",
""
],
[
"Singhal",
"Rekha",
""
]
]
| TITLE: MotionAura: Generating High-Quality and Motion Consistent Videos using
Discrete Diffusion
ABSTRACT: The spatio-temporal complexity of video data presents significant challenges
in tasks such as compression, generation, and inpainting. We present four key
contributions to address the challenges of spatiotemporal video processing.
First, we introduce the 3D Mobile Inverted Vector-Quantization Variational
Autoencoder (3D-MBQ-VAE), which combines Variational Autoencoders (VAEs) with
masked token modeling to enhance spatiotemporal video compression. The model
achieves superior temporal consistency and state-of-the-art (SOTA)
reconstruction quality by employing a novel training strategy with full frame
masking. Second, we present MotionAura, a text-to-video generation framework
that utilizes vector-quantized diffusion models to discretize the latent space
and capture complex motion dynamics, producing temporally coherent videos
aligned with text prompts. Third, we propose a spectral transformer-based
denoising network that processes video data in the frequency domain using the
Fourier Transform. This method effectively captures global context and
long-range dependencies for high-quality video generation and denoising.
Lastly, we introduce a downstream task of Sketch Guided Video Inpainting. This
task leverages Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning.
Our models achieve SOTA performance on a range of benchmarks. Our work offers
robust frameworks for spatiotemporal modeling and user-driven video content
manipulation. We will release the code, datasets, and models in open-source.
| no_new_dataset | 0.946101 |
2410.10663 | Zhengwei Yang | Zhengwei Yang, Yuke Li, Qiang Sun, Basura Fernando, Heng Huang, Zheng
Wang | Cross-Modal Few-Shot Learning: a Generative Transfer Learning Framework | 15 pages, 9 figures, 7 tables | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Most existing studies on few-shot learning focus on unimodal settings, where
models are trained to generalize to unseen data using a limited amount of
labeled examples from a single modality. However, real-world data are
inherently multi-modal, and such unimodal approaches limit the practical
applications of few-shot learning. To bridge this gap, this paper introduces
the Cross-modal Few-Shot Learning (CFSL) task, which aims to recognize
instances across multiple modalities while relying on scarce labeled data. This
task presents unique challenges compared to classical few-shot learning arising
from the distinct visual attributes and structural disparities inherent to each
modality. To tackle these challenges, we propose a Generative Transfer Learning
(GTL) framework by simulating how humans abstract and generalize concepts.
Specifically, the GTL jointly estimates the latent shared concept across
modalities and the in-modality disturbance through a generative structure.
Establishing the relationship between latent concepts and visual content among
abundant unimodal data enables GTL to effectively transfer knowledge from
unimodal to novel multimodal data, as humans did. Comprehensive experiments
demonstrate that the GTL achieves state-of-the-art performance across seven
multi-modal datasets across RGB-Sketch, RGB-Infrared, and RGB-Depth.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 16:09:38 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 08:58:21 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yang",
"Zhengwei",
""
],
[
"Li",
"Yuke",
""
],
[
"Sun",
"Qiang",
""
],
[
"Fernando",
"Basura",
""
],
[
"Huang",
"Heng",
""
],
[
"Wang",
"Zheng",
""
]
]
| TITLE: Cross-Modal Few-Shot Learning: a Generative Transfer Learning Framework
ABSTRACT: Most existing studies on few-shot learning focus on unimodal settings, where
models are trained to generalize to unseen data using a limited amount of
labeled examples from a single modality. However, real-world data are
inherently multi-modal, and such unimodal approaches limit the practical
applications of few-shot learning. To bridge this gap, this paper introduces
the Cross-modal Few-Shot Learning (CFSL) task, which aims to recognize
instances across multiple modalities while relying on scarce labeled data. This
task presents unique challenges compared to classical few-shot learning arising
from the distinct visual attributes and structural disparities inherent to each
modality. To tackle these challenges, we propose a Generative Transfer Learning
(GTL) framework by simulating how humans abstract and generalize concepts.
Specifically, the GTL jointly estimates the latent shared concept across
modalities and the in-modality disturbance through a generative structure.
Establishing the relationship between latent concepts and visual content among
abundant unimodal data enables GTL to effectively transfer knowledge from
unimodal to novel multimodal data, as humans did. Comprehensive experiments
demonstrate that the GTL achieves state-of-the-art performance across seven
multi-modal datasets across RGB-Sketch, RGB-Infrared, and RGB-Depth.
| no_new_dataset | 0.942981 |
2410.10995 | Giuseppe Attanasio | Emmanouil Zaranis, Giuseppe Attanasio, Sweta Agrawal, Andr\'e F.T.
Martins | Watching the Watchers: Exposing Gender Disparities in Machine
Translation Quality Estimation | Work in progress | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Quality estimation (QE) -- the automatic assessment of translation quality --
has recently become crucial across several stages of the translation pipeline,
from data curation to training and decoding. While QE metrics have been
optimized to align with human judgments, whether they encode social biases has
been largely overlooked. Biased QE risks favoring certain demographic groups
over others, e.g., by exacerbating gaps in visibility and usability. This paper
defines and investigates gender bias of QE metrics and discusses its downstream
implications for machine translation (MT). Experiments with state-of-the-art QE
metrics across multiple domains, datasets, and languages reveal significant
bias. When a human entity's gender in the source is undisclosed,
masculine-inflected translations score higher than feminine-inflected ones and
gender-neutral translations are penalized. Even when contextual cues
disambiguate gender, using context-aware QE metrics leads to more errors in
picking the correct translation inflection for feminine than masculine
referents. Moreover, a biased QE metric affects data filtering and
quality-aware decoding. Our findings highlight the need for renewed focus in
developing and evaluating QE metrics centered around gender.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 18:24:52 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Nov 2024 23:50:46 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 10:13:54 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zaranis",
"Emmanouil",
""
],
[
"Attanasio",
"Giuseppe",
""
],
[
"Agrawal",
"Sweta",
""
],
[
"Martins",
"André F. T.",
""
]
]
| TITLE: Watching the Watchers: Exposing Gender Disparities in Machine
Translation Quality Estimation
ABSTRACT: Quality estimation (QE) -- the automatic assessment of translation quality --
has recently become crucial across several stages of the translation pipeline,
from data curation to training and decoding. While QE metrics have been
optimized to align with human judgments, whether they encode social biases has
been largely overlooked. Biased QE risks favoring certain demographic groups
over others, e.g., by exacerbating gaps in visibility and usability. This paper
defines and investigates gender bias of QE metrics and discusses its downstream
implications for machine translation (MT). Experiments with state-of-the-art QE
metrics across multiple domains, datasets, and languages reveal significant
bias. When a human entity's gender in the source is undisclosed,
masculine-inflected translations score higher than feminine-inflected ones and
gender-neutral translations are penalized. Even when contextual cues
disambiguate gender, using context-aware QE metrics leads to more errors in
picking the correct translation inflection for feminine than masculine
referents. Moreover, a biased QE metric affects data filtering and
quality-aware decoding. Our findings highlight the need for renewed focus in
developing and evaluating QE metrics centered around gender.
| no_new_dataset | 0.949153 |
2410.15068 | Hrishav Bakul Barua | Hrishav Bakul Barua, Kalin Stefanov, Lemuel Lai En Che, Abhinav Dhall,
KokSheik Wong, Ganesh Krishnasamy | LLM-HDR: Bridging LLM-based Perception and Self-Supervision for Unpaired
LDR-to-HDR Image Reconstruction | null | null | null | null | cs.CV cs.AI cs.GR cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The translation of Low Dynamic Range (LDR) to High Dynamic Range (HDR) images
is an important computer vision task. There is a significant amount of research
utilizing both conventional non-learning methods and modern data-driven
approaches, focusing on using both single-exposed and multi-exposed LDR for HDR
image reconstruction. However, most current state-of-the-art methods require
high-quality paired {LDR,HDR} datasets for model training. In addition, there
is limited literature on using unpaired datasets for this task, that is, the
model learns a mapping between domains, i.e., {LDR,HDR}. This paper proposes
LLM-HDR, a method that integrates the perception of Large Language Models (LLM)
into a modified semantic- and cycle-consistent adversarial architecture that
utilizes unpaired {LDR,HDR} datasets for training. The method introduces novel
artifact- and exposure-aware generators to address visual artifact removal and
an encoder and loss to address semantic consistency, another under-explored
topic. LLM-HDR is the first to use an LLM for the {LDR,HDR} translation task in
a self-supervised setup. The method achieves state-of-the-art performance
across several benchmark datasets and reconstructs high-quality HDR images. The
official website of this work is available at:
https://github.com/HrishavBakulBarua/LLM-HDR
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2024 11:11:58 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 06:46:42 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Barua",
"Hrishav Bakul",
""
],
[
"Stefanov",
"Kalin",
""
],
[
"Che",
"Lemuel Lai En",
""
],
[
"Dhall",
"Abhinav",
""
],
[
"Wong",
"KokSheik",
""
],
[
"Krishnasamy",
"Ganesh",
""
]
]
| TITLE: LLM-HDR: Bridging LLM-based Perception and Self-Supervision for Unpaired
LDR-to-HDR Image Reconstruction
ABSTRACT: The translation of Low Dynamic Range (LDR) to High Dynamic Range (HDR) images
is an important computer vision task. There is a significant amount of research
utilizing both conventional non-learning methods and modern data-driven
approaches, focusing on using both single-exposed and multi-exposed LDR for HDR
image reconstruction. However, most current state-of-the-art methods require
high-quality paired {LDR,HDR} datasets for model training. In addition, there
is limited literature on using unpaired datasets for this task, that is, the
model learns a mapping between domains, i.e., {LDR,HDR}. This paper proposes
LLM-HDR, a method that integrates the perception of Large Language Models (LLM)
into a modified semantic- and cycle-consistent adversarial architecture that
utilizes unpaired {LDR,HDR} datasets for training. The method introduces novel
artifact- and exposure-aware generators to address visual artifact removal and
an encoder and loss to address semantic consistency, another under-explored
topic. LLM-HDR is the first to use an LLM for the {LDR,HDR} translation task in
a self-supervised setup. The method achieves state-of-the-art performance
across several benchmark datasets and reconstructs high-quality HDR images. The
official website of this work is available at:
https://github.com/HrishavBakulBarua/LLM-HDR
| no_new_dataset | 0.950778 |
2410.15180 | Xin Liu | Xin Liu, Weijia Zhang, Min-Ling Zhang | HACSurv: A Hierarchical Copula-Based Approach for Survival Analysis with
Dependent Competing Risks | Accepted at AISTATS 2025 | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In survival analysis, subjects often face competing risks; for example,
individuals with cancer may also suffer from heart disease or other illnesses,
which can jointly influence the prognosis of risks and censoring. Traditional
survival analysis methods often treat competing risks as independent and fail
to accommodate the dependencies between different conditions. In this paper, we
introduce HACSurv, a survival analysis method that learns Hierarchical
Archimedean Copulas structures and cause-specific survival functions from data
with competing risks. HACSurv employs a flexible dependency structure using
hierarchical Archimedean copulas to represent the relationships between
competing risks and censoring. By capturing the dependencies between risks and
censoring, HACSurv improves the accuracy of survival predictions and offers
insights into risk interactions. Experiments on synthetic dataset demonstrate
that our method can accurately identify the complex dependency structure and
precisely predict survival distributions, whereas the compared methods exhibit
significant deviations between their predictions and the true distributions.
Experiments on multiple real-world datasets also demonstrate that our method
achieves better survival prediction compared to previous state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2024 18:52:18 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 16:00:06 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Liu",
"Xin",
""
],
[
"Zhang",
"Weijia",
""
],
[
"Zhang",
"Min-Ling",
""
]
]
| TITLE: HACSurv: A Hierarchical Copula-Based Approach for Survival Analysis with
Dependent Competing Risks
ABSTRACT: In survival analysis, subjects often face competing risks; for example,
individuals with cancer may also suffer from heart disease or other illnesses,
which can jointly influence the prognosis of risks and censoring. Traditional
survival analysis methods often treat competing risks as independent and fail
to accommodate the dependencies between different conditions. In this paper, we
introduce HACSurv, a survival analysis method that learns Hierarchical
Archimedean Copulas structures and cause-specific survival functions from data
with competing risks. HACSurv employs a flexible dependency structure using
hierarchical Archimedean copulas to represent the relationships between
competing risks and censoring. By capturing the dependencies between risks and
censoring, HACSurv improves the accuracy of survival predictions and offers
insights into risk interactions. Experiments on synthetic dataset demonstrate
that our method can accurately identify the complex dependency structure and
precisely predict survival distributions, whereas the compared methods exhibit
significant deviations between their predictions and the true distributions.
Experiments on multiple real-world datasets also demonstrate that our method
achieves better survival prediction compared to previous state-of-the-art
methods.
| no_new_dataset | 0.945901 |
2410.16162 | Yihong Tang | Yihong Tang, Ao Qu, Zhaokai Wang, Dingyi Zhuang, Zhaofeng Wu, Wei Ma,
Shenhao Wang, Yunhan Zheng, Zhan Zhao, Jinhua Zhao | Sparkle: Mastering Basic Spatial Capabilities in Vision Language Models
Elicits Generalization to Spatial Reasoning | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision language models (VLMs) have demonstrated impressive performance across
a wide range of downstream tasks. However, their proficiency in spatial
reasoning remains limited, despite its crucial role in tasks involving
navigation and interaction with physical environments. Specifically, most of
these tasks rely on the core spatial reasoning capabilities in two-dimensional
(2D) environments, and our evaluation reveals that state-of-the-art VLMs
frequently generate implausible and incorrect responses to composite spatial
reasoning problems, including simple pathfinding tasks that humans can solve
effortlessly at a glance. To address this, we explore an effective approach to
enhance 2D spatial reasoning within VLMs by training the model solely on basic
spatial capabilities. We begin by disentangling the key components of 2D
spatial reasoning: direction comprehension, distance estimation, and
localization. Our central hypothesis is that mastering these basic spatial
capabilities can significantly enhance a model's performance on composite
spatial tasks requiring advanced spatial understanding and combinatorial
problem-solving, with generalized improvements in real-world visual-spatial
tasks. To investigate this hypothesis, we introduce Sparkle: a framework that
uses synthetic data generation to provide targeted supervision for vision
language models (VLMs) in three basic spatial capabilities, creating an
instruction dataset for each capability. Our experiments demonstrate that VLMs
fine-tuned with Sparkle achieve significant performance gains, not only in the
basic tasks themselves but also in generalizing to composite and
out-of-distribution real-world spatial reasoning tasks. These findings offer
insights into systematic strategies for improving VLMs' spatial reasoning
capabilities.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 16:26:09 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Nov 2024 18:05:04 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 22:01:59 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Tang",
"Yihong",
""
],
[
"Qu",
"Ao",
""
],
[
"Wang",
"Zhaokai",
""
],
[
"Zhuang",
"Dingyi",
""
],
[
"Wu",
"Zhaofeng",
""
],
[
"Ma",
"Wei",
""
],
[
"Wang",
"Shenhao",
""
],
[
"Zheng",
"Yunhan",
""
],
[
"Zhao",
"Zhan",
""
],
[
"Zhao",
"Jinhua",
""
]
]
| TITLE: Sparkle: Mastering Basic Spatial Capabilities in Vision Language Models
Elicits Generalization to Spatial Reasoning
ABSTRACT: Vision language models (VLMs) have demonstrated impressive performance across
a wide range of downstream tasks. However, their proficiency in spatial
reasoning remains limited, despite its crucial role in tasks involving
navigation and interaction with physical environments. Specifically, most of
these tasks rely on the core spatial reasoning capabilities in two-dimensional
(2D) environments, and our evaluation reveals that state-of-the-art VLMs
frequently generate implausible and incorrect responses to composite spatial
reasoning problems, including simple pathfinding tasks that humans can solve
effortlessly at a glance. To address this, we explore an effective approach to
enhance 2D spatial reasoning within VLMs by training the model solely on basic
spatial capabilities. We begin by disentangling the key components of 2D
spatial reasoning: direction comprehension, distance estimation, and
localization. Our central hypothesis is that mastering these basic spatial
capabilities can significantly enhance a model's performance on composite
spatial tasks requiring advanced spatial understanding and combinatorial
problem-solving, with generalized improvements in real-world visual-spatial
tasks. To investigate this hypothesis, we introduce Sparkle: a framework that
uses synthetic data generation to provide targeted supervision for vision
language models (VLMs) in three basic spatial capabilities, creating an
instruction dataset for each capability. Our experiments demonstrate that VLMs
fine-tuned with Sparkle achieve significant performance gains, not only in the
basic tasks themselves but also in generalizing to composite and
out-of-distribution real-world spatial reasoning tasks. These findings offer
insights into systematic strategies for improving VLMs' spatial reasoning
capabilities.
| new_dataset | 0.942082 |
2410.16888 | Kai Zhao | Kai Zhao, Zhihao Zhuang, Chenjuan Guo, Hao Miao, Yunyao Cheng and Bin
Yang | Unsupervised Time Series Anomaly Prediction with Importance-based
Generative Contrastive Learning | revised | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series anomaly prediction plays an essential role in many real-world
scenarios, such as environmental prevention and prompt maintenance of
cyber-physical systems. However, existing time series anomaly prediction
methods mainly require supervised training with plenty of manually labeled
data, which are difficult to obtain in practice. Besides, unseen anomalies can
occur during inference, which could differ from the labeled training data and
make these models fail to predict such new anomalies. In this paper, we study a
novel problem of unsupervised time series anomaly prediction. We provide a
theoretical analysis and propose Importance-based Generative Contrastive
Learning (IGCL) to address the aforementioned problems. IGCL distinguishes
between normal and anomaly precursors, which are generated by our anomaly
precursor pattern generation module. To address the efficiency issues caused by
the potential complex anomaly precursor combinations, we propose a memory bank
with importance-based scores to adaptively store representative anomaly
precursors and generate more complicated anomaly precursors. Extensive
experiments on seven benchmark datasets show our method outperforms
state-of-the-art baselines on unsupervised time series anomaly prediction
problems.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 10:46:36 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 14:46:34 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhao",
"Kai",
""
],
[
"Zhuang",
"Zhihao",
""
],
[
"Guo",
"Chenjuan",
""
],
[
"Miao",
"Hao",
""
],
[
"Cheng",
"Yunyao",
""
],
[
"Yang",
"Bin",
""
]
]
| TITLE: Unsupervised Time Series Anomaly Prediction with Importance-based
Generative Contrastive Learning
ABSTRACT: Time series anomaly prediction plays an essential role in many real-world
scenarios, such as environmental prevention and prompt maintenance of
cyber-physical systems. However, existing time series anomaly prediction
methods mainly require supervised training with plenty of manually labeled
data, which are difficult to obtain in practice. Besides, unseen anomalies can
occur during inference, which could differ from the labeled training data and
make these models fail to predict such new anomalies. In this paper, we study a
novel problem of unsupervised time series anomaly prediction. We provide a
theoretical analysis and propose Importance-based Generative Contrastive
Learning (IGCL) to address the aforementioned problems. IGCL distinguishes
between normal and anomaly precursors, which are generated by our anomaly
precursor pattern generation module. To address the efficiency issues caused by
the potential complex anomaly precursor combinations, we propose a memory bank
with importance-based scores to adaptively store representative anomaly
precursors and generate more complicated anomaly precursors. Extensive
experiments on seven benchmark datasets show our method outperforms
state-of-the-art baselines on unsupervised time series anomaly prediction
problems.
| no_new_dataset | 0.948058 |
2410.19256 | Yiqing Guo | Yiqing Guo, Karel Mokany, Shaun R. Levick, Jinyan Yang, Peyman
Moghadam | Spatioformer: A Geo-encoded Transformer for Large-Scale Plant Species
Richness Prediction | Published in IEEE Transactions on Geoscience and Remote Sensing. Link
to the paper: https://ieeexplore.ieee.org/abstract/document/10854505 | IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp.
1-16, 2025, Art no. 4403216 | 10.1109/tgrs.2025.3534654 | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Earth observation data have shown promise in predicting species richness of
vascular plants ($\alpha$-diversity), but extending this approach to large
spatial scales is challenging because geographically distant regions may
exhibit different compositions of plant species ($\beta$-diversity), resulting
in a location-dependent relationship between richness and spectral
measurements. In order to handle such geolocation dependency, we propose
\textit{Spatioformer}, where a novel geolocation encoder is coupled with the
transformer model to encode geolocation context into remote sensing imagery.
The Spatioformer model compares favourably to state-of-the-art models in
richness predictions on a large-scale ground-truth richness dataset (HAVPlot)
that consists of 68,170 in-situ richness samples covering diverse landscapes
across Australia. The results demonstrate that geolocational information is
advantageous in predicting species richness from satellite observations over
large spatial scales. With Spatioformer, plant species richness maps over
Australia are compiled from Landsat archive for the years from 2015 to 2023.
The richness maps produced in this study reveal the spatiotemporal dynamics of
plant species richness in Australia, providing supporting evidence to inform
effective planning and policy development for plant diversity conservation.
Regions of high richness prediction uncertainties are identified, highlighting
the need for future in-situ surveys to be conducted in these areas to enhance
the prediction accuracy.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 02:21:01 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Nov 2024 01:15:13 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 06:27:10 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Guo",
"Yiqing",
""
],
[
"Mokany",
"Karel",
""
],
[
"Levick",
"Shaun R.",
""
],
[
"Yang",
"Jinyan",
""
],
[
"Moghadam",
"Peyman",
""
]
]
| TITLE: Spatioformer: A Geo-encoded Transformer for Large-Scale Plant Species
Richness Prediction
ABSTRACT: Earth observation data have shown promise in predicting species richness of
vascular plants ($\alpha$-diversity), but extending this approach to large
spatial scales is challenging because geographically distant regions may
exhibit different compositions of plant species ($\beta$-diversity), resulting
in a location-dependent relationship between richness and spectral
measurements. In order to handle such geolocation dependency, we propose
\textit{Spatioformer}, where a novel geolocation encoder is coupled with the
transformer model to encode geolocation context into remote sensing imagery.
The Spatioformer model compares favourably to state-of-the-art models in
richness predictions on a large-scale ground-truth richness dataset (HAVPlot)
that consists of 68,170 in-situ richness samples covering diverse landscapes
across Australia. The results demonstrate that geolocational information is
advantageous in predicting species richness from satellite observations over
large spatial scales. With Spatioformer, plant species richness maps over
Australia are compiled from Landsat archive for the years from 2015 to 2023.
The richness maps produced in this study reveal the spatiotemporal dynamics of
plant species richness in Australia, providing supporting evidence to inform
effective planning and policy development for plant diversity conservation.
Regions of high richness prediction uncertainties are identified, highlighting
the need for future in-situ surveys to be conducted in these areas to enhance
the prediction accuracy.
| new_dataset | 0.817283 |
2410.19780 | Peter Archibald Whalley | Daniel Paulin, Peter A. Whalley, Neil K. Chada, Benedict Leimkuhler | Sampling from Bayesian Neural Network Posteriors with Symmetric
Minibatch Splitting Langevin Dynamics | 33 pages, 7 figures. The first two authors contributed equally | null | null | null | stat.ML cs.LG cs.NA math.NA math.PR stat.CO stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a scalable kinetic Langevin dynamics algorithm for sampling
parameter spaces of big data and AI applications. Our scheme combines a
symmetric forward/backward sweep over minibatches with a symmetric
discretization of Langevin dynamics. For a particular Langevin splitting method
(UBU), we show that the resulting Symmetric Minibatch Splitting-UBU (SMS-UBU)
integrator has bias $O(h^2 d^{1/2})$ in dimension $d>0$ with stepsize $h>0$,
despite only using one minibatch per iteration, thus providing excellent
control of the sampling bias as a function of the stepsize. We apply the
algorithm to explore local modes of the posterior distribution of Bayesian
neural networks (BNNs) and evaluate the calibration performance of the
posterior predictive probabilities for neural networks with convolutional
neural network architectures for classification problems on three different
datasets (Fashion-MNIST, Celeb-A and chest X-ray). Our results indicate that
BNNs sampled with SMS-UBU can offer significantly better calibration
performance compared to standard methods of training and stochastic weight
averaging.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 13:47:02 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 11:04:40 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Paulin",
"Daniel",
""
],
[
"Whalley",
"Peter A.",
""
],
[
"Chada",
"Neil K.",
""
],
[
"Leimkuhler",
"Benedict",
""
]
]
| TITLE: Sampling from Bayesian Neural Network Posteriors with Symmetric
Minibatch Splitting Langevin Dynamics
ABSTRACT: We propose a scalable kinetic Langevin dynamics algorithm for sampling
parameter spaces of big data and AI applications. Our scheme combines a
symmetric forward/backward sweep over minibatches with a symmetric
discretization of Langevin dynamics. For a particular Langevin splitting method
(UBU), we show that the resulting Symmetric Minibatch Splitting-UBU (SMS-UBU)
integrator has bias $O(h^2 d^{1/2})$ in dimension $d>0$ with stepsize $h>0$,
despite only using one minibatch per iteration, thus providing excellent
control of the sampling bias as a function of the stepsize. We apply the
algorithm to explore local modes of the posterior distribution of Bayesian
neural networks (BNNs) and evaluate the calibration performance of the
posterior predictive probabilities for neural networks with convolutional
neural network architectures for classification problems on three different
datasets (Fashion-MNIST, Celeb-A and chest X-ray). Our results indicate that
BNNs sampled with SMS-UBU can offer significantly better calibration
performance compared to standard methods of training and stochastic weight
averaging.
| no_new_dataset | 0.9462 |
2410.21826 | Suhyun Ahn | Suhyun Ahn, Wonjung Park, Jihoon Cho, Seunghyuck Park, Jinah Park | Volumetric Conditioning Module to Control Pretrained Diffusion Models
for 3D Medical Images | 17 pages, 18 figures, accepted @ WACV 2025 | Proceedings of the Winter Conference on Applications of Computer
Vision (WACV), pp. 85-95, Feb. 2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial control methods using additional modules on pretrained diffusion
models have gained attention for enabling conditional generation in natural
images. These methods guide the generation process with new conditions while
leveraging the capabilities of large models. They could be beneficial as
training strategies in the context of 3D medical imaging, where training a
diffusion model from scratch is challenging due to high computational costs and
data scarcity. However, the potential application of spatial control methods
with additional modules to 3D medical images has not yet been explored. In this
paper, we present a tailored spatial control method for 3D medical images with
a novel lightweight module, Volumetric Conditioning Module (VCM). Our VCM
employs an asymmetric U-Net architecture to effectively encode complex
information from various levels of 3D conditions, providing detailed guidance
in image synthesis. To examine the applicability of spatial control methods and
the effectiveness of VCM for 3D medical data, we conduct experiments under
single- and multimodal conditions scenarios across a wide range of dataset
sizes, from extremely small datasets with 10 samples to large datasets with 500
samples. The experimental results show that the VCM is effective for
conditional generation and efficient in terms of requiring less training data
and computational resources. We further investigate the potential applications
for our spatial control method through axial super-resolution for medical
images. Our code is available at \url{https://github.com/Ahn-Ssu/VCM}
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 07:48:52 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ahn",
"Suhyun",
""
],
[
"Park",
"Wonjung",
""
],
[
"Cho",
"Jihoon",
""
],
[
"Park",
"Seunghyuck",
""
],
[
"Park",
"Jinah",
""
]
]
| TITLE: Volumetric Conditioning Module to Control Pretrained Diffusion Models
for 3D Medical Images
ABSTRACT: Spatial control methods using additional modules on pretrained diffusion
models have gained attention for enabling conditional generation in natural
images. These methods guide the generation process with new conditions while
leveraging the capabilities of large models. They could be beneficial as
training strategies in the context of 3D medical imaging, where training a
diffusion model from scratch is challenging due to high computational costs and
data scarcity. However, the potential application of spatial control methods
with additional modules to 3D medical images has not yet been explored. In this
paper, we present a tailored spatial control method for 3D medical images with
a novel lightweight module, Volumetric Conditioning Module (VCM). Our VCM
employs an asymmetric U-Net architecture to effectively encode complex
information from various levels of 3D conditions, providing detailed guidance
in image synthesis. To examine the applicability of spatial control methods and
the effectiveness of VCM for 3D medical data, we conduct experiments under
single- and multimodal conditions scenarios across a wide range of dataset
sizes, from extremely small datasets with 10 samples to large datasets with 500
samples. The experimental results show that the VCM is effective for
conditional generation and efficient in terms of requiring less training data
and computational resources. We further investigate the potential applications
for our spatial control method through axial super-resolution for medical
images. Our code is available at \url{https://github.com/Ahn-Ssu/VCM}
| no_new_dataset | 0.949059 |
2410.22269 | Nate Gillman | Nate Gillman, Daksh Aggarwal, Michael Freeman, Saurabh Singh, Chen Sun | Fourier Head: Helping Large Language Models Learn Complex Probability
Distributions | Camera ready version (ICLR 2025). Code at
https://nategillman.com/fourier-head | null | null | null | cs.LG cs.AI cs.CL stat.ML | http://creativecommons.org/licenses/by/4.0/ | As the quality of large language models has improved, there has been
increased interest in using them to model non-linguistic tokens. For example,
the Decision Transformer recasts agentic decision making as a sequence modeling
problem, using a decoder-only LLM to model the distribution over the discrete
action space for an Atari agent. However, when adapting LLMs to non-linguistic
domains, it remains unclear if softmax over discrete bins captures the
continuous structure of the tokens and the potentially complex distributions
needed for high quality token generation. We introduce a neural network layer,
constructed using Fourier series, which we can easily substitute for any linear
layer if we want the outputs to have a more continuous structure. We perform
extensive analysis on synthetic datasets, as well as on large-scale decision
making and time series forecasting tasks. We also provide theoretical evidence
that this layer can better learn signal from data while ignoring high-frequency
noise. All of our results support the effectiveness of our proposed Fourier
head in scenarios where the underlying data distribution has a natural
continuous structure. For example, the Fourier head improves a Decision
Transformer agent's returns across four benchmark Atari games by as much as
377%, and increases a state-of-the-art time series foundation model's
forecasting performance by 3.5% across 20 benchmarks unseen during training.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 17:27:58 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 23:59:12 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Gillman",
"Nate",
""
],
[
"Aggarwal",
"Daksh",
""
],
[
"Freeman",
"Michael",
""
],
[
"Singh",
"Saurabh",
""
],
[
"Sun",
"Chen",
""
]
]
| TITLE: Fourier Head: Helping Large Language Models Learn Complex Probability
Distributions
ABSTRACT: As the quality of large language models has improved, there has been
increased interest in using them to model non-linguistic tokens. For example,
the Decision Transformer recasts agentic decision making as a sequence modeling
problem, using a decoder-only LLM to model the distribution over the discrete
action space for an Atari agent. However, when adapting LLMs to non-linguistic
domains, it remains unclear if softmax over discrete bins captures the
continuous structure of the tokens and the potentially complex distributions
needed for high quality token generation. We introduce a neural network layer,
constructed using Fourier series, which we can easily substitute for any linear
layer if we want the outputs to have a more continuous structure. We perform
extensive analysis on synthetic datasets, as well as on large-scale decision
making and time series forecasting tasks. We also provide theoretical evidence
that this layer can better learn signal from data while ignoring high-frequency
noise. All of our results support the effectiveness of our proposed Fourier
head in scenarios where the underlying data distribution has a natural
continuous structure. For example, the Fourier head improves a Decision
Transformer agent's returns across four benchmark Atari games by as much as
377%, and increases a state-of-the-art time series foundation model's
forecasting performance by 3.5% across 20 benchmarks unseen during training.
| no_new_dataset | 0.949059 |
2411.05837 | Zhuorui Ye | Zhuorui Ye, Farzan Farnia | Gaussian Smoothing in Saliency Maps: The Stability-Fidelity Trade-Off in
Neural Network Interpretability | Accepted at AISTATS 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Saliency maps have been widely used to interpret the decisions of neural
network classifiers and discover phenomena from their learned functions.
However, standard gradient-based maps are frequently observed to be highly
sensitive to the randomness of training data and the stochasticity in the
training process. In this work, we study the role of Gaussian smoothing in the
well-known Smooth-Grad algorithm in the stability of the gradient-based maps to
the randomness of training samples. We extend the algorithmic stability
framework to gradient-based interpretation maps and prove bounds on the
stability error of standard Simple-Grad, Integrated-Gradients, and Smooth-Grad
saliency maps. Our theoretical results suggest the role of Gaussian smoothing
in boosting the stability of gradient-based maps to the randomness of training
settings. On the other hand, we analyze the faithfulness of the Smooth-Grad
maps to the original Simple-Grad and show the lower fidelity under a more
intense Gaussian smoothing. We support our theoretical results by performing
several numerical experiments on standard image datasets. Our empirical results
confirm our hypothesis on the fidelity-stability trade-off in the application
of Gaussian smoothing to gradient-based interpretation maps.
| [
{
"version": "v1",
"created": "Wed, 6 Nov 2024 13:26:57 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 10:19:52 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ye",
"Zhuorui",
""
],
[
"Farnia",
"Farzan",
""
]
]
| TITLE: Gaussian Smoothing in Saliency Maps: The Stability-Fidelity Trade-Off in
Neural Network Interpretability
ABSTRACT: Saliency maps have been widely used to interpret the decisions of neural
network classifiers and discover phenomena from their learned functions.
However, standard gradient-based maps are frequently observed to be highly
sensitive to the randomness of training data and the stochasticity in the
training process. In this work, we study the role of Gaussian smoothing in the
well-known Smooth-Grad algorithm in the stability of the gradient-based maps to
the randomness of training samples. We extend the algorithmic stability
framework to gradient-based interpretation maps and prove bounds on the
stability error of standard Simple-Grad, Integrated-Gradients, and Smooth-Grad
saliency maps. Our theoretical results suggest the role of Gaussian smoothing
in boosting the stability of gradient-based maps to the randomness of training
settings. On the other hand, we analyze the faithfulness of the Smooth-Grad
maps to the original Simple-Grad and show the lower fidelity under a more
intense Gaussian smoothing. We support our theoretical results by performing
several numerical experiments on standard image datasets. Our empirical results
confirm our hypothesis on the fidelity-stability trade-off in the application
of Gaussian smoothing to gradient-based interpretation maps.
| no_new_dataset | 0.952618 |
2411.05979 | Ha Manh Bui | Ha Manh Bui, Enrique Mallada, Anqi Liu | Variance-Aware Linear UCB with Deep Representation for Neural Contextual
Bandits | International Conference on Artificial Intelligence and Statistics,
2025 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By leveraging the representation power of deep neural networks, neural upper
confidence bound (UCB) algorithms have shown success in contextual bandits. To
further balance the exploration and exploitation, we propose
Neural-$\sigma^2$-LinearUCB, a variance-aware algorithm that utilizes
$\sigma^2_t$, i.e., an upper bound of the reward noise variance at round $t$,
to enhance the uncertainty quantification quality of the UCB, resulting in a
regret performance improvement. We provide an oracle version for our algorithm
characterized by an oracle variance upper bound $\sigma^2_t$ and a practical
version with a novel estimation for this variance bound. Theoretically, we
provide rigorous regret analysis for both versions and prove that our oracle
algorithm achieves a better regret guarantee than other neural-UCB algorithms
in the neural contextual bandits setting. Empirically, our practical method
enjoys a similar computational efficiency, while outperforming state-of-the-art
techniques by having a better calibration and lower regret across multiple
standard settings, including on the synthetic, UCI, MNIST, and CIFAR-10
datasets.
| [
{
"version": "v1",
"created": "Fri, 8 Nov 2024 21:24:14 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 02:32:48 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Bui",
"Ha Manh",
""
],
[
"Mallada",
"Enrique",
""
],
[
"Liu",
"Anqi",
""
]
]
| TITLE: Variance-Aware Linear UCB with Deep Representation for Neural Contextual
Bandits
ABSTRACT: By leveraging the representation power of deep neural networks, neural upper
confidence bound (UCB) algorithms have shown success in contextual bandits. To
further balance the exploration and exploitation, we propose
Neural-$\sigma^2$-LinearUCB, a variance-aware algorithm that utilizes
$\sigma^2_t$, i.e., an upper bound of the reward noise variance at round $t$,
to enhance the uncertainty quantification quality of the UCB, resulting in a
regret performance improvement. We provide an oracle version for our algorithm
characterized by an oracle variance upper bound $\sigma^2_t$ and a practical
version with a novel estimation for this variance bound. Theoretically, we
provide rigorous regret analysis for both versions and prove that our oracle
algorithm achieves a better regret guarantee than other neural-UCB algorithms
in the neural contextual bandits setting. Empirically, our practical method
enjoys a similar computational efficiency, while outperforming state-of-the-art
techniques by having a better calibration and lower regret across multiple
standard settings, including on the synthetic, UCI, MNIST, and CIFAR-10
datasets.
| no_new_dataset | 0.945951 |
2411.10573 | Moshe Kimhi | Moshe Kimhi, Idan Kashani, Avi Mendelson, Chaim Baskin | Hysteresis Activation Function for Efficient Inference | Accepted to 4th NeurIPS Efficient Natural Language and Speech
Processing Workshop (ENLSP-IV 2024) | Proceedings of Machine Learning Research, Volume 262, Pages 414-422, 2024 | null | null | cs.LG cs.CL cs.NE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The widely used ReLU is favored for its hardware efficiency, {as the
implementation at inference is a one bit sign case,} yet suffers from issues
such as the ``dying ReLU'' problem, where during training, neurons fail to
activate and constantly remain at zero, as highlighted by Lu et al. Traditional
approaches to mitigate this issue often introduce more complex and less
hardware-friendly activation functions. In this work, we propose a Hysteresis
Rectified Linear Unit (HeLU), an efficient activation function designed to
address the ``dying ReLU'' problem with minimal complexity. Unlike traditional
activation functions with fixed thresholds for training and inference, HeLU
employs a variable threshold that refines the backpropagation. This refined
mechanism allows simpler activation functions to achieve competitive
performance comparable to their more complex counterparts without introducing
unnecessary complexity or requiring inductive biases. Empirical evaluations
demonstrate that HeLU enhances model generalization across diverse datasets,
offering a promising solution for efficient and effective inference suitable
for a wide range of neural network architectures.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 20:46:58 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 13:41:59 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Kimhi",
"Moshe",
""
],
[
"Kashani",
"Idan",
""
],
[
"Mendelson",
"Avi",
""
],
[
"Baskin",
"Chaim",
""
]
]
| TITLE: Hysteresis Activation Function for Efficient Inference
ABSTRACT: The widely used ReLU is favored for its hardware efficiency, {as the
implementation at inference is a one bit sign case,} yet suffers from issues
such as the ``dying ReLU'' problem, where during training, neurons fail to
activate and constantly remain at zero, as highlighted by Lu et al. Traditional
approaches to mitigate this issue often introduce more complex and less
hardware-friendly activation functions. In this work, we propose a Hysteresis
Rectified Linear Unit (HeLU), an efficient activation function designed to
address the ``dying ReLU'' problem with minimal complexity. Unlike traditional
activation functions with fixed thresholds for training and inference, HeLU
employs a variable threshold that refines the backpropagation. This refined
mechanism allows simpler activation functions to achieve competitive
performance comparable to their more complex counterparts without introducing
unnecessary complexity or requiring inductive biases. Empirical evaluations
demonstrate that HeLU enhances model generalization across diverse datasets,
offering a promising solution for efficient and effective inference suitable
for a wide range of neural network architectures.
| no_new_dataset | 0.94887 |
2411.10639 | Yunsheng Ma | Yunsheng Ma, Burhaneddin Yaman, Xin Ye, Jingru Luo, Feng Tao, Abhirup
Mallik, Ziran Wang, Liu Ren | MTA: Multimodal Task Alignment for BEV Perception and Captioning | 10 pages | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bird's eye view (BEV)-based 3D perception plays a crucial role in autonomous
driving applications. The rise of large language models has spurred interest in
BEV-based captioning to understand object behavior in the surrounding
environment. However, existing approaches treat perception and captioning as
separate tasks, focusing on the performance of only one task and overlooking
the potential benefits of multimodal alignment. To bridge this gap between
modalities, we introduce MTA, a novel multimodal task alignment framework that
boosts both BEV perception and captioning. MTA consists of two key components:
(1) BEV-Language Alignment (BLA), a contextual learning mechanism that aligns
the BEV scene representations with ground-truth language representations, and
(2) Detection-Captioning Alignment (DCA), a cross-modal prompting mechanism
that aligns detection and captioning outputs. MTA seamlessly integrates into
state-of-the-art baselines during training, adding no extra computational
complexity at runtime. Extensive experiments on the nuScenes and TOD3Cap
datasets show that MTA significantly outperforms state-of-the-art baselines in
both tasks, achieving a 10.7% improvement in challenging rare perception
scenarios and a 9.2% improvement in captioning. These results underscore the
effectiveness of unified alignment in reconciling BEV-based perception and
captioning.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2024 00:14:13 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 20:59:22 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ma",
"Yunsheng",
""
],
[
"Yaman",
"Burhaneddin",
""
],
[
"Ye",
"Xin",
""
],
[
"Luo",
"Jingru",
""
],
[
"Tao",
"Feng",
""
],
[
"Mallik",
"Abhirup",
""
],
[
"Wang",
"Ziran",
""
],
[
"Ren",
"Liu",
""
]
]
| TITLE: MTA: Multimodal Task Alignment for BEV Perception and Captioning
ABSTRACT: Bird's eye view (BEV)-based 3D perception plays a crucial role in autonomous
driving applications. The rise of large language models has spurred interest in
BEV-based captioning to understand object behavior in the surrounding
environment. However, existing approaches treat perception and captioning as
separate tasks, focusing on the performance of only one task and overlooking
the potential benefits of multimodal alignment. To bridge this gap between
modalities, we introduce MTA, a novel multimodal task alignment framework that
boosts both BEV perception and captioning. MTA consists of two key components:
(1) BEV-Language Alignment (BLA), a contextual learning mechanism that aligns
the BEV scene representations with ground-truth language representations, and
(2) Detection-Captioning Alignment (DCA), a cross-modal prompting mechanism
that aligns detection and captioning outputs. MTA seamlessly integrates into
state-of-the-art baselines during training, adding no extra computational
complexity at runtime. Extensive experiments on the nuScenes and TOD3Cap
datasets show that MTA significantly outperforms state-of-the-art baselines in
both tasks, achieving a 10.7% improvement in challenging rare perception
scenarios and a 9.2% improvement in captioning. These results underscore the
effectiveness of unified alignment in reconciling BEV-based perception and
captioning.
| no_new_dataset | 0.942981 |
2411.10794 | Sudarshan Regmi | Sudarshan Regmi | Going Beyond Conventional OOD Detection | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Out-of-distribution (OOD) detection is critical to ensure the safe deployment
of deep learning models in critical applications. Deep learning models can
often misidentify OOD samples as in-distribution (ID) samples. This
vulnerability worsens in the presence of spurious correlation in the training
set. Likewise, in fine-grained classification settings, detection of
fine-grained OOD samples becomes inherently challenging due to their high
similarity to ID samples. However, current research on OOD detection has
largely ignored these challenging scenarios, focusing instead on relatively
easier (conventional) cases. In this work, we present a unified Approach to
Spurious, fine-grained, and Conventional OOD Detection (ASCOOD). First, we
propose synthesizing virtual outliers from ID data by approximating the
destruction of invariant features. To this end, we identify invariant features
with the pixel attribution method using the model being learned. This approach
eliminates the burden of curating external OOD datasets. Then, we
simultaneously incentivize ID classification and predictive uncertainty towards
virtual outliers leveraging standardized feature representation. Our approach
effectively mitigates the impact of spurious correlations and encourages
capturing fine-grained attributes. Extensive experiments across seven datasets
demonstrate the merit of ASCOOD in spurious, fine-grained, and conventional
settings. The code is available at: https://github.com/sudarshanregmi/ASCOOD/
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2024 13:04:52 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Dec 2024 17:22:30 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 17:21:00 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Regmi",
"Sudarshan",
""
]
]
| TITLE: Going Beyond Conventional OOD Detection
ABSTRACT: Out-of-distribution (OOD) detection is critical to ensure the safe deployment
of deep learning models in critical applications. Deep learning models can
often misidentify OOD samples as in-distribution (ID) samples. This
vulnerability worsens in the presence of spurious correlation in the training
set. Likewise, in fine-grained classification settings, detection of
fine-grained OOD samples becomes inherently challenging due to their high
similarity to ID samples. However, current research on OOD detection has
largely ignored these challenging scenarios, focusing instead on relatively
easier (conventional) cases. In this work, we present a unified Approach to
Spurious, fine-grained, and Conventional OOD Detection (ASCOOD). First, we
propose synthesizing virtual outliers from ID data by approximating the
destruction of invariant features. To this end, we identify invariant features
with the pixel attribution method using the model being learned. This approach
eliminates the burden of curating external OOD datasets. Then, we
simultaneously incentivize ID classification and predictive uncertainty towards
virtual outliers leveraging standardized feature representation. Our approach
effectively mitigates the impact of spurious correlations and encourages
capturing fine-grained attributes. Extensive experiments across seven datasets
demonstrate the merit of ASCOOD in spurious, fine-grained, and conventional
settings. The code is available at: https://github.com/sudarshanregmi/ASCOOD/
| no_new_dataset | 0.949248 |
2411.11278 | Jinxing Zhou | Jinxing Zhou, Dan Guo, Ruohao Guo, Yuxin Mao, Jingjing Hu, Yiran
Zhong, Xiaojun Chang, Meng Wang | Towards Open-Vocabulary Audio-Visual Event Localization | accepted by CVPR 2025; Project page:
https://github.com/jasongief/OV-AVEL | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Audio-Visual Event Localization (AVEL) task aims to temporally locate and
classify video events that are both audible and visible. Most research in this
field assumes a closed-set setting, which restricts these models' ability to
handle test data containing event categories absent (unseen) during training.
Recently, a few studies have explored AVEL in an open-set setting, enabling the
recognition of unseen events as ``unknown'', but without providing
category-specific semantics. In this paper, we advance the field by introducing
the Open-Vocabulary Audio-Visual Event Localization (OV-AVEL) problem, which
requires localizing audio-visual events and predicting explicit categories for
both seen and unseen data at inference. To address this new task, we propose
the OV-AVEBench dataset, comprising 24,800 videos across 67 real-life
audio-visual scenes (seen:unseen = 46:21), each with manual segment-level
annotation. We also establish three evaluation metrics for this task. Moreover,
we investigate two baseline approaches, one training-free and one using a
further fine-tuning paradigm. Specifically, we utilize the unified multimodal
space from the pretrained ImageBind model to extract audio, visual, and textual
(event classes) features. The training-free baseline then determines
predictions by comparing the consistency of audio-text and visual-text feature
similarities. The fine-tuning baseline incorporates lightweight temporal layers
to encode temporal relations within the audio and visual modalities, using
OV-AVEBench training data for model fine-tuning. We evaluate these baselines on
the proposed OV-AVEBench dataset and discuss potential directions for future
work in this new field.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 04:35:20 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 11:30:09 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 05:22:20 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhou",
"Jinxing",
""
],
[
"Guo",
"Dan",
""
],
[
"Guo",
"Ruohao",
""
],
[
"Mao",
"Yuxin",
""
],
[
"Hu",
"Jingjing",
""
],
[
"Zhong",
"Yiran",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Wang",
"Meng",
""
]
]
| TITLE: Towards Open-Vocabulary Audio-Visual Event Localization
ABSTRACT: The Audio-Visual Event Localization (AVEL) task aims to temporally locate and
classify video events that are both audible and visible. Most research in this
field assumes a closed-set setting, which restricts these models' ability to
handle test data containing event categories absent (unseen) during training.
Recently, a few studies have explored AVEL in an open-set setting, enabling the
recognition of unseen events as ``unknown'', but without providing
category-specific semantics. In this paper, we advance the field by introducing
the Open-Vocabulary Audio-Visual Event Localization (OV-AVEL) problem, which
requires localizing audio-visual events and predicting explicit categories for
both seen and unseen data at inference. To address this new task, we propose
the OV-AVEBench dataset, comprising 24,800 videos across 67 real-life
audio-visual scenes (seen:unseen = 46:21), each with manual segment-level
annotation. We also establish three evaluation metrics for this task. Moreover,
we investigate two baseline approaches, one training-free and one using a
further fine-tuning paradigm. Specifically, we utilize the unified multimodal
space from the pretrained ImageBind model to extract audio, visual, and textual
(event classes) features. The training-free baseline then determines
predictions by comparing the consistency of audio-text and visual-text feature
similarities. The fine-tuning baseline incorporates lightweight temporal layers
to encode temporal relations within the audio and visual modalities, using
OV-AVEBench training data for model fine-tuning. We evaluate these baselines on
the proposed OV-AVEBench dataset and discuss potential directions for future
work in this new field.
| new_dataset | 0.962108 |
2411.12159 | Ayush Mohanty | Benjamin Peters, Ayush Mohanty, Xiaolei Fang, Stephen K. Robinson and
Nagi Gebraeel | Sensor-fusion based Prognostics Framework for Complex Engineering
Systems Exhibiting Multiple Failure Modes | null | null | null | null | stat.ML cs.LG cs.SY eess.SY stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Complex engineering systems are often subject to multiple failure modes.
Developing a remaining useful life (RUL) prediction model that does not
consider the failure mode causing degradation is likely to result in inaccurate
predictions. However, distinguishing between causes of failure without manually
inspecting the system is nontrivial. This challenge is increased when the
causes of historically observed failures are unknown. Sensors, which are useful
for monitoring the state-of-health of systems, can also be used for
distinguishing between multiple failure modes as the presence of multiple
failure modes results in discriminatory behavior of the sensor signals. When
systems are equipped with multiple sensors, some sensors may exhibit behavior
correlated with degradation, while other sensors do not. Furthermore, which
sensors exhibit this behavior may differ for each failure mode. In this paper,
we present a simultaneous clustering and sensor selection approach for
unlabeled training datasets of systems exhibiting multiple failure modes. The
cluster assignments and the selected sensors are then utilized in real-time to
first diagnose the active failure mode and then to predict the system RUL. We
validate the methodology using a simulated dataset of systems exhibiting two
failure modes and on the NASA turbofan degradation dataset.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2024 01:52:59 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 21:05:51 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Peters",
"Benjamin",
""
],
[
"Mohanty",
"Ayush",
""
],
[
"Fang",
"Xiaolei",
""
],
[
"Robinson",
"Stephen K.",
""
],
[
"Gebraeel",
"Nagi",
""
]
]
| TITLE: Sensor-fusion based Prognostics Framework for Complex Engineering
Systems Exhibiting Multiple Failure Modes
ABSTRACT: Complex engineering systems are often subject to multiple failure modes.
Developing a remaining useful life (RUL) prediction model that does not
consider the failure mode causing degradation is likely to result in inaccurate
predictions. However, distinguishing between causes of failure without manually
inspecting the system is nontrivial. This challenge is increased when the
causes of historically observed failures are unknown. Sensors, which are useful
for monitoring the state-of-health of systems, can also be used for
distinguishing between multiple failure modes as the presence of multiple
failure modes results in discriminatory behavior of the sensor signals. When
systems are equipped with multiple sensors, some sensors may exhibit behavior
correlated with degradation, while other sensors do not. Furthermore, which
sensors exhibit this behavior may differ for each failure mode. In this paper,
we present a simultaneous clustering and sensor selection approach for
unlabeled training datasets of systems exhibiting multiple failure modes. The
cluster assignments and the selected sensors are then utilized in real-time to
first diagnose the active failure mode and then to predict the system RUL. We
validate the methodology using a simulated dataset of systems exhibiting two
failure modes and on the NASA turbofan degradation dataset.
| no_new_dataset | 0.77886 |
2411.13901 | Sparsh Mittal | Gayatri Deshmukh, Somsubhra De, Chirag Sehgal, Jishu Sen Gupta, Sparsh
Mittal | Dressing the Imagination: A Dataset for AI-Powered Translation of Text
into Fashion Outfits and A Novel KAN Adapter for Enhanced Feature Adaptation | Under review at a conference | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Specialized datasets that capture the fashion industry's rich language and
styling elements can boost progress in AI-driven fashion design. We present
FLORA (Fashion Language Outfit Representation for Apparel Generation), the
first comprehensive dataset containing 4,330 curated pairs of fashion outfits
and corresponding textual descriptions. Each description utilizes
industry-specific terminology and jargon commonly used by professional fashion
designers, providing precise and detailed insights into the outfits. Hence, the
dataset captures the delicate features and subtle stylistic elements necessary
to create high-fidelity fashion designs. We demonstrate that fine-tuning
generative models on the FLORA dataset significantly enhances their capability
to generate accurate and stylistically rich images from textual descriptions of
fashion sketches. FLORA will catalyze the creation of advanced AI models
capable of comprehending and producing subtle, stylistically rich fashion
designs. It will also help fashion designers and end-users to bring their ideas
to life.
As a second orthogonal contribution, we introduce KAN Adapters, which
leverage Kolmogorov-Arnold Networks (KAN) as adaptive modules. They serve as
replacements for traditional MLP-based LoRA adapters. With learnable
spline-based activations, KAN Adapters excel in modeling complex, non-linear
relationships, achieving superior fidelity, faster convergence and semantic
alignment. Extensive experiments and ablation studies on our proposed FLORA
dataset validate the superiority of KAN Adapters over LoRA adapters. To foster
further research and collaboration, we will open-source both the FLORA and our
implementation code.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 07:27:45 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 09:55:48 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Deshmukh",
"Gayatri",
""
],
[
"De",
"Somsubhra",
""
],
[
"Sehgal",
"Chirag",
""
],
[
"Gupta",
"Jishu Sen",
""
],
[
"Mittal",
"Sparsh",
""
]
]
| TITLE: Dressing the Imagination: A Dataset for AI-Powered Translation of Text
into Fashion Outfits and A Novel KAN Adapter for Enhanced Feature Adaptation
ABSTRACT: Specialized datasets that capture the fashion industry's rich language and
styling elements can boost progress in AI-driven fashion design. We present
FLORA (Fashion Language Outfit Representation for Apparel Generation), the
first comprehensive dataset containing 4,330 curated pairs of fashion outfits
and corresponding textual descriptions. Each description utilizes
industry-specific terminology and jargon commonly used by professional fashion
designers, providing precise and detailed insights into the outfits. Hence, the
dataset captures the delicate features and subtle stylistic elements necessary
to create high-fidelity fashion designs. We demonstrate that fine-tuning
generative models on the FLORA dataset significantly enhances their capability
to generate accurate and stylistically rich images from textual descriptions of
fashion sketches. FLORA will catalyze the creation of advanced AI models
capable of comprehending and producing subtle, stylistically rich fashion
designs. It will also help fashion designers and end-users to bring their ideas
to life.
As a second orthogonal contribution, we introduce KAN Adapters, which
leverage Kolmogorov-Arnold Networks (KAN) as adaptive modules. They serve as
replacements for traditional MLP-based LoRA adapters. With learnable
spline-based activations, KAN Adapters excel in modeling complex, non-linear
relationships, achieving superior fidelity, faster convergence and semantic
alignment. Extensive experiments and ablation studies on our proposed FLORA
dataset validate the superiority of KAN Adapters over LoRA adapters. To foster
further research and collaboration, we will open-source both the FLORA and our
implementation code.
| no_new_dataset | 0.886125 |
2411.14137 | Heejeong Nam | Heejeong Nam, Jinwoo Ahn, Keummin Ka, Jiwan Chung, Youngjae Yu | VAGUE: Visual Contexts Clarify Ambiguous Expressions | 31 pages | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human communication often relies on visual cues to resolve ambiguity. While
humans can intuitively integrate these cues, AI systems often find it
challenging to engage in sophisticated multimodal reasoning. We introduce
VAGUE, a benchmark evaluating multimodal AI systems' ability to integrate
visual context for intent disambiguation. VAGUE consists of 1.6K ambiguous
textual expressions, each paired with an image and multiple-choice
interpretations, where the correct answer is only apparent with visual context.
The dataset spans both staged, complex (Visual Commonsense Reasoning) and
natural, personal (Ego4D) scenes, ensuring diversity. Our experiments reveal
that existing multimodal AI models struggle to infer the speaker's true intent.
While performance consistently improves from the introduction of more visual
cues, the overall accuracy remains far below human performance, highlighting a
critical gap in multimodal reasoning. Analysis of failure cases demonstrates
that current models fail to distinguish true intent from superficial
correlations in the visual scene, indicating that they perceive images but do
not effectively reason with them. We release our code and data at
https://github.com/Hazel-Heejeong-Nam/VAGUE.git.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 14:01:42 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 13:29:47 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Nam",
"Heejeong",
""
],
[
"Ahn",
"Jinwoo",
""
],
[
"Ka",
"Keummin",
""
],
[
"Chung",
"Jiwan",
""
],
[
"Yu",
"Youngjae",
""
]
]
| TITLE: VAGUE: Visual Contexts Clarify Ambiguous Expressions
ABSTRACT: Human communication often relies on visual cues to resolve ambiguity. While
humans can intuitively integrate these cues, AI systems often find it
challenging to engage in sophisticated multimodal reasoning. We introduce
VAGUE, a benchmark evaluating multimodal AI systems' ability to integrate
visual context for intent disambiguation. VAGUE consists of 1.6K ambiguous
textual expressions, each paired with an image and multiple-choice
interpretations, where the correct answer is only apparent with visual context.
The dataset spans both staged, complex (Visual Commonsense Reasoning) and
natural, personal (Ego4D) scenes, ensuring diversity. Our experiments reveal
that existing multimodal AI models struggle to infer the speaker's true intent.
While performance consistently improves from the introduction of more visual
cues, the overall accuracy remains far below human performance, highlighting a
critical gap in multimodal reasoning. Analysis of failure cases demonstrates
that current models fail to distinguish true intent from superficial
correlations in the visual scene, indicating that they perceive images but do
not effectively reason with them. We release our code and data at
https://github.com/Hazel-Heejeong-Nam/VAGUE.git.
| new_dataset | 0.967899 |
2411.15098 | Zhenxiong Tan | Zhenxiong Tan, Songhua Liu, Xingyi Yang, Qiaochu Xue, Xinchao Wang | OminiControl: Minimal and Universal Control for Diffusion Transformer | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present OminiControl, a novel approach that rethinks how image conditions
are integrated into Diffusion Transformer (DiT) architectures. Current image
conditioning methods either introduce substantial parameter overhead or handle
only specific control tasks effectively, limiting their practical versatility.
OminiControl addresses these limitations through three key innovations: (1) a
minimal architectural design that leverages the DiT's own VAE encoder and
transformer blocks, requiring just 0.1% additional parameters; (2) a unified
sequence processing strategy that combines condition tokens with image tokens
for flexible token interactions; and (3) a dynamic position encoding mechanism
that adapts to both spatially-aligned and non-aligned control tasks. Our
extensive experiments show that this streamlined approach not only matches but
surpasses the performance of specialized methods across multiple conditioning
tasks. To overcome data limitations in subject-driven generation, we also
introduce Subjects200K, a large-scale dataset of identity-consistent image
pairs synthesized using DiT models themselves. This work demonstrates that
effective image control can be achieved without architectural complexity,
opening new possibilities for efficient and versatile image generation systems.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 17:55:15 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Nov 2024 17:46:35 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Dec 2024 17:59:40 GMT"
},
{
"version": "v4",
"created": "Wed, 15 Jan 2025 07:30:29 GMT"
},
{
"version": "v5",
"created": "Tue, 11 Mar 2025 10:41:44 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Tan",
"Zhenxiong",
""
],
[
"Liu",
"Songhua",
""
],
[
"Yang",
"Xingyi",
""
],
[
"Xue",
"Qiaochu",
""
],
[
"Wang",
"Xinchao",
""
]
]
| TITLE: OminiControl: Minimal and Universal Control for Diffusion Transformer
ABSTRACT: We present OminiControl, a novel approach that rethinks how image conditions
are integrated into Diffusion Transformer (DiT) architectures. Current image
conditioning methods either introduce substantial parameter overhead or handle
only specific control tasks effectively, limiting their practical versatility.
OminiControl addresses these limitations through three key innovations: (1) a
minimal architectural design that leverages the DiT's own VAE encoder and
transformer blocks, requiring just 0.1% additional parameters; (2) a unified
sequence processing strategy that combines condition tokens with image tokens
for flexible token interactions; and (3) a dynamic position encoding mechanism
that adapts to both spatially-aligned and non-aligned control tasks. Our
extensive experiments show that this streamlined approach not only matches but
surpasses the performance of specialized methods across multiple conditioning
tasks. To overcome data limitations in subject-driven generation, we also
introduce Subjects200K, a large-scale dataset of identity-consistent image
pairs synthesized using DiT models themselves. This work demonstrates that
effective image control can be achieved without architectural complexity,
opening new possibilities for efficient and versatile image generation systems.
| new_dataset | 0.949576 |
2411.15210 | Yong Xie | Yong Xie and Weijie Zheng and Hanxun Huang and Guangnan Ye and Xingjun
Ma | Towards Million-Scale Adversarial Robustness Evaluation With Stronger
Individual Attacks | null | null | null | null | cs.LG cs.AI cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As deep learning models are increasingly deployed in safety-critical
applications, evaluating their vulnerabilities to adversarial perturbations is
essential for ensuring their reliability and trustworthiness. Over the past
decade, a large number of white-box adversarial robustness evaluation methods
(i.e., attacks) have been proposed, ranging from single-step to multi-step
methods and from individual to ensemble methods. Despite these advances,
challenges remain in conducting meaningful and comprehensive robustness
evaluations, particularly when it comes to large-scale testing and ensuring
evaluations reflect real-world adversarial risks. In this work, we focus on
image classification models and propose a novel individual attack method,
Probability Margin Attack (PMA), which defines the adversarial margin in the
probability space rather than the logits space. We analyze the relationship
between PMA and existing cross-entropy or logits-margin-based attacks, and show
that PMA can outperform the current state-of-the-art individual methods.
Building on PMA, we propose two types of ensemble attacks that balance
effectiveness and efficiency. Furthermore, we create a million-scale dataset,
CC1M, derived from the existing CC3M dataset, and use it to conduct the first
million-scale white-box adversarial robustness evaluation of
adversarially-trained ImageNet models. Our findings provide valuable insights
into the robustness gaps between individual versus ensemble attacks and
small-scale versus million-scale evaluations.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 10:41:23 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Nov 2024 02:21:07 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 02:39:40 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Mar 2025 02:56:08 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Xie",
"Yong",
""
],
[
"Zheng",
"Weijie",
""
],
[
"Huang",
"Hanxun",
""
],
[
"Ye",
"Guangnan",
""
],
[
"Ma",
"Xingjun",
""
]
]
| TITLE: Towards Million-Scale Adversarial Robustness Evaluation With Stronger
Individual Attacks
ABSTRACT: As deep learning models are increasingly deployed in safety-critical
applications, evaluating their vulnerabilities to adversarial perturbations is
essential for ensuring their reliability and trustworthiness. Over the past
decade, a large number of white-box adversarial robustness evaluation methods
(i.e., attacks) have been proposed, ranging from single-step to multi-step
methods and from individual to ensemble methods. Despite these advances,
challenges remain in conducting meaningful and comprehensive robustness
evaluations, particularly when it comes to large-scale testing and ensuring
evaluations reflect real-world adversarial risks. In this work, we focus on
image classification models and propose a novel individual attack method,
Probability Margin Attack (PMA), which defines the adversarial margin in the
probability space rather than the logits space. We analyze the relationship
between PMA and existing cross-entropy or logits-margin-based attacks, and show
that PMA can outperform the current state-of-the-art individual methods.
Building on PMA, we propose two types of ensemble attacks that balance
effectiveness and efficiency. Furthermore, we create a million-scale dataset,
CC1M, derived from the existing CC3M dataset, and use it to conduct the first
million-scale white-box adversarial robustness evaluation of
adversarially-trained ImageNet models. Our findings provide valuable insights
into the robustness gaps between individual versus ensemble attacks and
small-scale versus million-scale evaluations.
| new_dataset | 0.958538 |
2411.15472 | Pinxin Liu | Pengfei Zhang, Pinxin Liu, Hyeongwoo Kim, Pablo Garrido, Bindita
Chaudhuri | KinMo: Kinematic-aware Human Motion Understanding and Generation | null | null | null | null | cs.CV cs.AI cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current human motion synthesis frameworks rely on global action descriptions,
creating a modality gap that limits both motion understanding and generation
capabilities. A single coarse description, such as ``run'', fails to capture
details like variations in speed, limb positioning, and kinematic dynamics,
leading to ambiguities between text and motion modalities. To address this
challenge, we introduce \textbf{KinMo}, a unified framework built on a
hierarchical describable motion representation that extends beyond global
action by incorporating kinematic group movements and their interactions. We
design an automated annotation pipeline to generate high-quality, fine-grained
descriptions for this decomposition, resulting in the KinMo dataset. To
leverage these structured descriptions, we propose Hierarchical Text-Motion
Alignment, improving spatial understanding by integrating additional motion
details. Furthermore, we introduce a coarse-to-fine generation procedure to
leverage enhanced spatial understanding to improve motion synthesis.
Experimental results show that KinMo significantly improves motion
understanding, demonstrated by enhanced text-motion retrieval performance and
enabling more fine-grained motion generation and editing capabilities. Project
Page: https://andypinxinliu.github.io/KinMo
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2024 06:50:11 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 14:29:56 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Pengfei",
""
],
[
"Liu",
"Pinxin",
""
],
[
"Kim",
"Hyeongwoo",
""
],
[
"Garrido",
"Pablo",
""
],
[
"Chaudhuri",
"Bindita",
""
]
]
| TITLE: KinMo: Kinematic-aware Human Motion Understanding and Generation
ABSTRACT: Current human motion synthesis frameworks rely on global action descriptions,
creating a modality gap that limits both motion understanding and generation
capabilities. A single coarse description, such as ``run'', fails to capture
details like variations in speed, limb positioning, and kinematic dynamics,
leading to ambiguities between text and motion modalities. To address this
challenge, we introduce \textbf{KinMo}, a unified framework built on a
hierarchical describable motion representation that extends beyond global
action by incorporating kinematic group movements and their interactions. We
design an automated annotation pipeline to generate high-quality, fine-grained
descriptions for this decomposition, resulting in the KinMo dataset. To
leverage these structured descriptions, we propose Hierarchical Text-Motion
Alignment, improving spatial understanding by integrating additional motion
details. Furthermore, we introduce a coarse-to-fine generation procedure to
leverage enhanced spatial understanding to improve motion synthesis.
Experimental results show that KinMo significantly improves motion
understanding, demonstrated by enhanced text-motion retrieval performance and
enabling more fine-grained motion generation and editing capabilities. Project
Page: https://andypinxinliu.github.io/KinMo
| new_dataset | 0.954816 |
2411.15933 | Klara Janouskova | Klara Janouskova, Cristian Gavrus, Jiri Matas | Bringing the Context Back into Object Recognition, Robustly | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In object recognition, both the subject of interest (referred to as
foreground, FG, for simplicity) and its surrounding context (background, BG)
may play an important role. However, standard supervised learning often leads
to unintended over-reliance on the BG, limiting model robustness in real-world
deployment settings. The problem is mainly addressed by suppressing the BG,
sacrificing context information for improved generalization.
We propose "Localize to Recognize Robustly" (L2R2), a novel recognition
approach which exploits the benefits of context-aware classification while
maintaining robustness to distribution shifts. L2R2 leverages advances in
zero-shot detection to localize the FG before recognition. It improves the
performance of both standard recognition with supervised training, as well as
multimodal zero-shot recognition with VLMs, while being robust to long-tail BGs
and distribution shifts. The results confirm localization before recognition is
possible for a wide range of datasets and they highlight the limits of object
detection on others.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 17:39:39 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 12:08:58 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Janouskova",
"Klara",
""
],
[
"Gavrus",
"Cristian",
""
],
[
"Matas",
"Jiri",
""
]
]
| TITLE: Bringing the Context Back into Object Recognition, Robustly
ABSTRACT: In object recognition, both the subject of interest (referred to as
foreground, FG, for simplicity) and its surrounding context (background, BG)
may play an important role. However, standard supervised learning often leads
to unintended over-reliance on the BG, limiting model robustness in real-world
deployment settings. The problem is mainly addressed by suppressing the BG,
sacrificing context information for improved generalization.
We propose "Localize to Recognize Robustly" (L2R2), a novel recognition
approach which exploits the benefits of context-aware classification while
maintaining robustness to distribution shifts. L2R2 leverages advances in
zero-shot detection to localize the FG before recognition. It improves the
performance of both standard recognition with supervised training, as well as
multimodal zero-shot recognition with VLMs, while being robust to long-tail BGs
and distribution shifts. The results confirm localization before recognition is
possible for a wide range of datasets and they highlight the limits of object
detection on others.
| no_new_dataset | 0.949902 |
2411.17237 | Zheng Chen | Zheng Chen, Xun Zhang, Wenbo Li, Renjing Pei, Fenglong Song, Xiongkuo
Min, Xiaohong Liu, Xin Yuan, Yong Guo, Yulun Zhang | Grounding-IQA: Multimodal Language Grounding Model for Image Quality
Assessment | Code is available at: https://github.com/zhengchen1999/Grounding-IQA | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of multimodal large language models (MLLMs) enables the
evaluation of image quality through natural language descriptions. This
advancement allows for more detailed assessments. However, these MLLM-based IQA
methods primarily rely on general contextual descriptions, sometimes limiting
fine-grained quality assessment. To address this limitation, we introduce a new
image quality assessment (IQA) task paradigm, grounding-IQA. This paradigm
integrates multimodal referring and grounding with IQA to realize more
fine-grained quality perception. Specifically, grounding-IQA comprises two
subtasks: grounding-IQA-description (GIQA-DES) and visual question answering
(GIQA-VQA). GIQA-DES involves detailed descriptions with precise locations
(e.g., bounding boxes), while GIQA-VQA focuses on quality QA for local regions.
To realize grounding-IQA, we construct a corresponding dataset, GIQA-160K,
through our proposed automated annotation pipeline. Furthermore, we develop a
well-designed benchmark, GIQA-Bench. The benchmark comprehensively evaluates
the model grounding-IQA performance from three perspectives: description
quality, VQA accuracy, and grounding precision. Experiments demonstrate that
our proposed task paradigm, dataset, and benchmark facilitate the more
fine-grained IQA application. Code:
https://github.com/zhengchen1999/Grounding-IQA.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 09:03:16 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 02:18:29 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"Zheng",
""
],
[
"Zhang",
"Xun",
""
],
[
"Li",
"Wenbo",
""
],
[
"Pei",
"Renjing",
""
],
[
"Song",
"Fenglong",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Yuan",
"Xin",
""
],
[
"Guo",
"Yong",
""
],
[
"Zhang",
"Yulun",
""
]
]
| TITLE: Grounding-IQA: Multimodal Language Grounding Model for Image Quality
Assessment
ABSTRACT: The development of multimodal large language models (MLLMs) enables the
evaluation of image quality through natural language descriptions. This
advancement allows for more detailed assessments. However, these MLLM-based IQA
methods primarily rely on general contextual descriptions, sometimes limiting
fine-grained quality assessment. To address this limitation, we introduce a new
image quality assessment (IQA) task paradigm, grounding-IQA. This paradigm
integrates multimodal referring and grounding with IQA to realize more
fine-grained quality perception. Specifically, grounding-IQA comprises two
subtasks: grounding-IQA-description (GIQA-DES) and visual question answering
(GIQA-VQA). GIQA-DES involves detailed descriptions with precise locations
(e.g., bounding boxes), while GIQA-VQA focuses on quality QA for local regions.
To realize grounding-IQA, we construct a corresponding dataset, GIQA-160K,
through our proposed automated annotation pipeline. Furthermore, we develop a
well-designed benchmark, GIQA-Bench. The benchmark comprehensively evaluates
the model grounding-IQA performance from three perspectives: description
quality, VQA accuracy, and grounding precision. Experiments demonstrate that
our proposed task paradigm, dataset, and benchmark facilitate the more
fine-grained IQA application. Code:
https://github.com/zhengchen1999/Grounding-IQA.
| new_dataset | 0.96802 |
2411.17580 | Stuti Pathak | Stuti Pathak, Prashant Kumar, Dheeraj Baiju, Nicholus Mboga, Gunther
Steenackers, Rudi Penne | Revisiting Point Cloud Completion: Are We Ready For The Real-World? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Point clouds acquired in constrained, challenging, uncontrolled, and
multi-sensor real-world settings are noisy, incomplete, and non-uniformly
sparse. This presents acute challenges for the vital task of point cloud
completion. Using tools from Algebraic Topology and Persistent Homology (PH),
we demonstrate that current benchmark object point clouds lack rich topological
features that are integral part of point clouds captured in realistic
environments. To facilitate research in this direction, we contribute the first
real-world industrial dataset for point cloud completion, RealPC - a diverse,
rich and varied set of point clouds. It consists of ~ 40,000 pairs across 21
categories of industrial structures in railway establishments. Benchmark
results on several strong baselines reveal that existing methods fail in
real-world scenarios. We discover a striking observation - unlike current
datasets, RealPC consists of multiple 0- and 1-dimensional PH-based topological
features. We prove that integrating these topological priors into existing
works helps improve completion. We present how 0-dimensional PH priors extract
the global topology of a complete shape in the form of a 3D skeleton and assist
a model in generating topologically consistent complete shapes. Since computing
Homology is expensive, we present a simple, yet effective Homology Sampler
guided network, BOSHNet that bypasses the Homology computation by sampling
proxy backbones akin to 0-dim PH. These backbones provide similar benefits of
0-dim PH right from the start of the training, unlike similar methods where
accurate backbones are obtained only during later phases of the training.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 16:46:47 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Dec 2024 12:31:49 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 13:03:43 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Mar 2025 14:53:35 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Pathak",
"Stuti",
""
],
[
"Kumar",
"Prashant",
""
],
[
"Baiju",
"Dheeraj",
""
],
[
"Mboga",
"Nicholus",
""
],
[
"Steenackers",
"Gunther",
""
],
[
"Penne",
"Rudi",
""
]
]
| TITLE: Revisiting Point Cloud Completion: Are We Ready For The Real-World?
ABSTRACT: Point clouds acquired in constrained, challenging, uncontrolled, and
multi-sensor real-world settings are noisy, incomplete, and non-uniformly
sparse. This presents acute challenges for the vital task of point cloud
completion. Using tools from Algebraic Topology and Persistent Homology (PH),
we demonstrate that current benchmark object point clouds lack rich topological
features that are integral part of point clouds captured in realistic
environments. To facilitate research in this direction, we contribute the first
real-world industrial dataset for point cloud completion, RealPC - a diverse,
rich and varied set of point clouds. It consists of ~ 40,000 pairs across 21
categories of industrial structures in railway establishments. Benchmark
results on several strong baselines reveal that existing methods fail in
real-world scenarios. We discover a striking observation - unlike current
datasets, RealPC consists of multiple 0- and 1-dimensional PH-based topological
features. We prove that integrating these topological priors into existing
works helps improve completion. We present how 0-dimensional PH priors extract
the global topology of a complete shape in the form of a 3D skeleton and assist
a model in generating topologically consistent complete shapes. Since computing
Homology is expensive, we present a simple, yet effective Homology Sampler
guided network, BOSHNet that bypasses the Homology computation by sampling
proxy backbones akin to 0-dim PH. These backbones provide similar benefits of
0-dim PH right from the start of the training, unlike similar methods where
accurate backbones are obtained only during later phases of the training.
| new_dataset | 0.954435 |
2411.18203 | Junxian Li | Di Zhang, Junxian Li, Jingdi Lei, Xunzhi Wang, Yujie Liu, Zonglin
Yang, Jiatong Li, Weida Wang, Suorong Yang, Jianbo Wu, Peng Ye, Wanli Ouyang,
Dongzhan Zhou | Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning | 16 pages, 11 figures | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-language models (VLMs) have shown remarkable advancements in
multimodal reasoning tasks. However, they still often generate inaccurate or
irrelevant responses due to issues like hallucinated image understandings or
unrefined reasoning paths. To address these challenges, we introduce Critic-V,
a novel framework inspired by the Actor-Critic paradigm to boost the reasoning
capability of VLMs. This framework decouples the reasoning process and critic
process by integrating two independent components: the Reasoner, which
generates reasoning paths based on visual and textual inputs, and the Critic,
which provides constructive critique to refine these paths. In this approach,
the Reasoner generates reasoning responses according to text prompts, which can
evolve iteratively as a policy based on feedback from the Critic. This
interaction process was theoretically driven by a reinforcement learning
framework where the Critic offers natural language critiques instead of scalar
rewards, enabling more nuanced feedback to boost the Reasoner's capability on
complex reasoning tasks. The Critic model is trained using Direct Preference
Optimization (DPO), leveraging a preference dataset of critiques ranked by
Rule-based Reward~(RBR) to enhance its critic capabilities. Evaluation results
show that the Critic-V framework significantly outperforms existing methods,
including GPT-4V, on 5 out of 8 benchmarks, especially regarding reasoning
accuracy and efficiency. Combining a dynamic text-based policy for the Reasoner
and constructive feedback from the preference-optimized Critic enables a more
reliable and context-sensitive multimodal reasoning process. Our approach
provides a promising solution to enhance the reliability of VLMs, improving
their performance in real-world reasoning-heavy multimodal applications such as
autonomous driving and embodied intelligence.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 10:28:57 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 05:00:19 GMT"
},
{
"version": "v3",
"created": "Mon, 16 Dec 2024 08:12:17 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Mar 2025 15:46:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Di",
""
],
[
"Li",
"Junxian",
""
],
[
"Lei",
"Jingdi",
""
],
[
"Wang",
"Xunzhi",
""
],
[
"Liu",
"Yujie",
""
],
[
"Yang",
"Zonglin",
""
],
[
"Li",
"Jiatong",
""
],
[
"Wang",
"Weida",
""
],
[
"Yang",
"Suorong",
""
],
[
"Wu",
"Jianbo",
""
],
[
"Ye",
"Peng",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Zhou",
"Dongzhan",
""
]
]
| TITLE: Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning
ABSTRACT: Vision-language models (VLMs) have shown remarkable advancements in
multimodal reasoning tasks. However, they still often generate inaccurate or
irrelevant responses due to issues like hallucinated image understandings or
unrefined reasoning paths. To address these challenges, we introduce Critic-V,
a novel framework inspired by the Actor-Critic paradigm to boost the reasoning
capability of VLMs. This framework decouples the reasoning process and critic
process by integrating two independent components: the Reasoner, which
generates reasoning paths based on visual and textual inputs, and the Critic,
which provides constructive critique to refine these paths. In this approach,
the Reasoner generates reasoning responses according to text prompts, which can
evolve iteratively as a policy based on feedback from the Critic. This
interaction process was theoretically driven by a reinforcement learning
framework where the Critic offers natural language critiques instead of scalar
rewards, enabling more nuanced feedback to boost the Reasoner's capability on
complex reasoning tasks. The Critic model is trained using Direct Preference
Optimization (DPO), leveraging a preference dataset of critiques ranked by
Rule-based Reward~(RBR) to enhance its critic capabilities. Evaluation results
show that the Critic-V framework significantly outperforms existing methods,
including GPT-4V, on 5 out of 8 benchmarks, especially regarding reasoning
accuracy and efficiency. Combining a dynamic text-based policy for the Reasoner
and constructive feedback from the preference-optimized Critic enables a more
reliable and context-sensitive multimodal reasoning process. Our approach
provides a promising solution to enhance the reliability of VLMs, improving
their performance in real-world reasoning-heavy multimodal applications such as
autonomous driving and embodied intelligence.
| no_new_dataset | 0.944022 |
2411.18363 | Qing Jiang | Qing Jiang, Gen Luo, Yuqin Yang, Yuda Xiong, Yihao Chen, Zhaoyang
Zeng, Tianhe Ren, Lei Zhang | ChatRex: Taming Multimodal LLM for Joint Perception and Understanding | 35 pages, 19 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Perception and understanding are two pillars of computer vision. While
multimodal large language models (MLLM) have demonstrated remarkable visual
understanding capabilities, they arguably lack accurate perception abilities,
e.g. the state-of-the-art model Qwen2-VL only achieves a 43.9 recall rate on
the COCO dataset, limiting many tasks requiring the combination of perception
and understanding. In this work, we aim to bridge this perception gap from both
model designing and data development perspectives. We first introduce ChatRex,
an MLLM with a decoupled perception design. Instead of having the LLM directly
predict box coordinates, we feed the output boxes from a universal proposal
network into the LLM, allowing it to output the corresponding box indices to
represent its detection results, turning the regression task into a
retrieval-based task that LLM handles more proficiently. From the data
perspective, we build a fully automated data engine and construct the
Rexverse-2M dataset which possesses multiple granularities to support the joint
training of perception and understanding. After a three-stage training
approach, ChatRex demonstrates strong perception and understanding performance,
and the combination of these two capabilities also unlocks many attractive
applications, demonstrating their complementary roles in MLLM. Code is
available at https://github.com/IDEA-Research/ChatRex.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 14:11:10 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 07:04:40 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 14:19:42 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Jiang",
"Qing",
""
],
[
"Luo",
"Gen",
""
],
[
"Yang",
"Yuqin",
""
],
[
"Xiong",
"Yuda",
""
],
[
"Chen",
"Yihao",
""
],
[
"Zeng",
"Zhaoyang",
""
],
[
"Ren",
"Tianhe",
""
],
[
"Zhang",
"Lei",
""
]
]
| TITLE: ChatRex: Taming Multimodal LLM for Joint Perception and Understanding
ABSTRACT: Perception and understanding are two pillars of computer vision. While
multimodal large language models (MLLM) have demonstrated remarkable visual
understanding capabilities, they arguably lack accurate perception abilities,
e.g. the state-of-the-art model Qwen2-VL only achieves a 43.9 recall rate on
the COCO dataset, limiting many tasks requiring the combination of perception
and understanding. In this work, we aim to bridge this perception gap from both
model designing and data development perspectives. We first introduce ChatRex,
an MLLM with a decoupled perception design. Instead of having the LLM directly
predict box coordinates, we feed the output boxes from a universal proposal
network into the LLM, allowing it to output the corresponding box indices to
represent its detection results, turning the regression task into a
retrieval-based task that LLM handles more proficiently. From the data
perspective, we build a fully automated data engine and construct the
Rexverse-2M dataset which possesses multiple granularities to support the joint
training of perception and understanding. After a three-stage training
approach, ChatRex demonstrates strong perception and understanding performance,
and the combination of these two capabilities also unlocks many attractive
applications, demonstrating their complementary roles in MLLM. Code is
available at https://github.com/IDEA-Research/ChatRex.
| new_dataset | 0.962356 |
2412.02193 | Fan-Yun Sun | Fan-Yun Sun, Weiyu Liu, Siyi Gu, Dylan Lim, Goutam Bhat, Federico
Tombari, Manling Li, Nick Haber, Jiajun Wu | LayoutVLM: Differentiable Optimization of 3D Layout via Vision-Language
Models | CVPR 2025, project website:
https://ai.stanford.edu/~sunfanyun/layoutvlm/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Spatial reasoning is a fundamental aspect of human cognition, enabling
intuitive understanding and manipulation of objects in three-dimensional space.
While foundation models demonstrate remarkable performance on some benchmarks,
they still struggle with 3D reasoning tasks like arranging objects in space
according to open-ended language instructions, particularly in dense and
physically constrained environments. We introduce LayoutVLM, a framework and
scene layout representation that exploits the semantic knowledge of
Vision-Language Models (VLMs) and supports differentiable optimization to
ensure physical plausibility. LayoutVLM employs VLMs to generate two mutually
reinforcing representations from visually marked images, and a self-consistent
decoding process to improve VLMs spatial planning. Our experiments show that
LayoutVLM addresses the limitations of existing LLM and constraint-based
approaches, producing physically plausible 3D layouts better aligned with the
semantic intent of input language instructions. We also demonstrate that
fine-tuning VLMs with the proposed scene layout representation extracted from
existing scene datasets can improve their reasoning performance.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 06:15:04 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Mar 2025 07:05:27 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 05:58:39 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Sun",
"Fan-Yun",
""
],
[
"Liu",
"Weiyu",
""
],
[
"Gu",
"Siyi",
""
],
[
"Lim",
"Dylan",
""
],
[
"Bhat",
"Goutam",
""
],
[
"Tombari",
"Federico",
""
],
[
"Li",
"Manling",
""
],
[
"Haber",
"Nick",
""
],
[
"Wu",
"Jiajun",
""
]
]
| TITLE: LayoutVLM: Differentiable Optimization of 3D Layout via Vision-Language
Models
ABSTRACT: Spatial reasoning is a fundamental aspect of human cognition, enabling
intuitive understanding and manipulation of objects in three-dimensional space.
While foundation models demonstrate remarkable performance on some benchmarks,
they still struggle with 3D reasoning tasks like arranging objects in space
according to open-ended language instructions, particularly in dense and
physically constrained environments. We introduce LayoutVLM, a framework and
scene layout representation that exploits the semantic knowledge of
Vision-Language Models (VLMs) and supports differentiable optimization to
ensure physical plausibility. LayoutVLM employs VLMs to generate two mutually
reinforcing representations from visually marked images, and a self-consistent
decoding process to improve VLMs spatial planning. Our experiments show that
LayoutVLM addresses the limitations of existing LLM and constraint-based
approaches, producing physically plausible 3D layouts better aligned with the
semantic intent of input language instructions. We also demonstrate that
fine-tuning VLMs with the proposed scene layout representation extracted from
existing scene datasets can improve their reasoning performance.
| no_new_dataset | 0.948298 |
2412.07205 | Yingchu Wang | Yingchu Wang, Ji He, Shijie Yu | CrackESS: A Self-Prompting Crack Segmentation System for Edge Devices | null | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural Health Monitoring (SHM) is a sustainable and essential approach
for infrastructure maintenance, enabling the early detection of structural
defects. Leveraging computer vision (CV) methods for automated infrastructure
monitoring can significantly enhance monitoring efficiency and precision.
However, these methods often face challenges in efficiency and accuracy,
particularly in complex environments. Recent CNN-based and SAM-based approaches
have demonstrated excellent performance in crack segmentation, but their high
computational demands limit their applicability on edge devices. This paper
introduces CrackESS, a novel system for detecting and segmenting concrete
cracks. The approach first utilizes a YOLOv8 model for self-prompting and a
LoRA-based fine-tuned SAM model for crack segmentation, followed by refining
the segmentation masks through the proposed Crack Mask Refinement Module
(CMRM). We conduct experiments on three datasets (Khanhha's dataset, Crack500,
CrackCR) and validate CrackESS on a climbing robot system to demonstrate the
advantage and effectiveness of our approach.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 05:50:50 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Dec 2024 12:38:04 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 12:55:57 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Yingchu",
""
],
[
"He",
"Ji",
""
],
[
"Yu",
"Shijie",
""
]
]
| TITLE: CrackESS: A Self-Prompting Crack Segmentation System for Edge Devices
ABSTRACT: Structural Health Monitoring (SHM) is a sustainable and essential approach
for infrastructure maintenance, enabling the early detection of structural
defects. Leveraging computer vision (CV) methods for automated infrastructure
monitoring can significantly enhance monitoring efficiency and precision.
However, these methods often face challenges in efficiency and accuracy,
particularly in complex environments. Recent CNN-based and SAM-based approaches
have demonstrated excellent performance in crack segmentation, but their high
computational demands limit their applicability on edge devices. This paper
introduces CrackESS, a novel system for detecting and segmenting concrete
cracks. The approach first utilizes a YOLOv8 model for self-prompting and a
LoRA-based fine-tuned SAM model for crack segmentation, followed by refining
the segmentation masks through the proposed Crack Mask Refinement Module
(CMRM). We conduct experiments on three datasets (Khanhha's dataset, Crack500,
CrackCR) and validate CrackESS on a climbing robot system to demonstrate the
advantage and effectiveness of our approach.
| no_new_dataset | 0.946498 |
2412.09376 | Maria Eleftheria Vlontzou | Maria Eleftheria Vlontzou, Maria Athanasiou, Kalliopi Dalakleidi,
Ioanna Skampardoni, Christos Davatzikos, Konstantina Nikita | A comprehensive interpretable machine learning framework for Mild
Cognitive Impairment and Alzheimer's disease diagnosis | null | Sci Rep 15, 8410 (2025) | 10.1038/s41598-025-92577-6 | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An interpretable machine learning (ML) framework is introduced to enhance the
diagnosis of Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD) by
ensuring robustness of the ML models' interpretations. The dataset used
comprises volumetric measurements from brain MRI and genetic data from healthy
individuals and patients with MCI/AD, obtained through the Alzheimer's Disease
Neuroimaging Initiative. The existing class imbalance is addressed by an
ensemble learning approach, while various attribution-based and
counterfactual-based interpretability methods are leveraged towards producing
diverse explanations related to the pathophysiology of MCI/AD. A unification
method combining SHAP with counterfactual explanations assesses the
interpretability techniques' robustness. The best performing model yielded
87.5% balanced accuracy and 90.8% F1-score. The attribution-based
interpretability methods highlighted significant volumetric and genetic
features related to MCI/AD risk. The unification method provided useful
insights regarding those features' necessity and sufficiency, further
showcasing their significance in MCI/AD diagnosis.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 15:45:21 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 14:40:18 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Vlontzou",
"Maria Eleftheria",
""
],
[
"Athanasiou",
"Maria",
""
],
[
"Dalakleidi",
"Kalliopi",
""
],
[
"Skampardoni",
"Ioanna",
""
],
[
"Davatzikos",
"Christos",
""
],
[
"Nikita",
"Konstantina",
""
]
]
| TITLE: A comprehensive interpretable machine learning framework for Mild
Cognitive Impairment and Alzheimer's disease diagnosis
ABSTRACT: An interpretable machine learning (ML) framework is introduced to enhance the
diagnosis of Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD) by
ensuring robustness of the ML models' interpretations. The dataset used
comprises volumetric measurements from brain MRI and genetic data from healthy
individuals and patients with MCI/AD, obtained through the Alzheimer's Disease
Neuroimaging Initiative. The existing class imbalance is addressed by an
ensemble learning approach, while various attribution-based and
counterfactual-based interpretability methods are leveraged towards producing
diverse explanations related to the pathophysiology of MCI/AD. A unification
method combining SHAP with counterfactual explanations assesses the
interpretability techniques' robustness. The best performing model yielded
87.5% balanced accuracy and 90.8% F1-score. The attribution-based
interpretability methods highlighted significant volumetric and genetic
features related to MCI/AD risk. The unification method provided useful
insights regarding those features' necessity and sufficiency, further
showcasing their significance in MCI/AD diagnosis.
| no_new_dataset | 0.94801 |
2412.10443 | Zhentao Tan | Zhentao Tan, Ben Xue, Jian Jia, Junhao Wang, Wencai Ye, Shaoyun Shi,
Mingjie Sun, Wenjin Wu, Quan Chen, Peng Jiang | SweetTok: Semantic-Aware Spatial-Temporal Tokenizer for Compact Video
Discretization | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the \textbf{S}emantic-a\textbf{W}ar\textbf{E}
spatial-t\textbf{E}mporal \textbf{T}okenizer (SweetTok), a novel video
tokenizer to overcome the limitations in current video tokenization methods for
compacted yet effective discretization. Unlike previous approaches that process
flattened local visual patches via direct discretization or adaptive query
tokenization, SweetTok proposes a decoupling framework, compressing visual
inputs through distinct spatial and temporal queries via \textbf{D}ecoupled
\textbf{Q}uery \textbf{A}uto\textbf{E}ncoder (DQAE). This design allows
SweetTok to efficiently compress video token count while achieving superior
fidelity by capturing essential information across spatial and temporal
dimensions. Furthermore, we design a \textbf{M}otion-enhanced \textbf{L}anguage
\textbf{C}odebook (MLC) tailored for spatial and temporal compression to
address the differences in semantic representation between appearance and
motion information. SweetTok significantly improves video reconstruction
results by \textbf{42.8\%} w.r.t rFVD on UCF-101 dataset. With a better token
compression strategy, it also boosts downstream video generation results by
\textbf{15.1\%} w.r.t gFVD. Additionally, the compressed decoupled tokens are
imbued with semantic information, enabling few-shot recognition capabilities
powered by LLMs in downstream applications.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 13:48:06 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Dec 2024 03:55:34 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 03:19:42 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Tan",
"Zhentao",
""
],
[
"Xue",
"Ben",
""
],
[
"Jia",
"Jian",
""
],
[
"Wang",
"Junhao",
""
],
[
"Ye",
"Wencai",
""
],
[
"Shi",
"Shaoyun",
""
],
[
"Sun",
"Mingjie",
""
],
[
"Wu",
"Wenjin",
""
],
[
"Chen",
"Quan",
""
],
[
"Jiang",
"Peng",
""
]
]
| TITLE: SweetTok: Semantic-Aware Spatial-Temporal Tokenizer for Compact Video
Discretization
ABSTRACT: This paper presents the \textbf{S}emantic-a\textbf{W}ar\textbf{E}
spatial-t\textbf{E}mporal \textbf{T}okenizer (SweetTok), a novel video
tokenizer to overcome the limitations in current video tokenization methods for
compacted yet effective discretization. Unlike previous approaches that process
flattened local visual patches via direct discretization or adaptive query
tokenization, SweetTok proposes a decoupling framework, compressing visual
inputs through distinct spatial and temporal queries via \textbf{D}ecoupled
\textbf{Q}uery \textbf{A}uto\textbf{E}ncoder (DQAE). This design allows
SweetTok to efficiently compress video token count while achieving superior
fidelity by capturing essential information across spatial and temporal
dimensions. Furthermore, we design a \textbf{M}otion-enhanced \textbf{L}anguage
\textbf{C}odebook (MLC) tailored for spatial and temporal compression to
address the differences in semantic representation between appearance and
motion information. SweetTok significantly improves video reconstruction
results by \textbf{42.8\%} w.r.t rFVD on UCF-101 dataset. With a better token
compression strategy, it also boosts downstream video generation results by
\textbf{15.1\%} w.r.t gFVD. Additionally, the compressed decoupled tokens are
imbued with semantic information, enabling few-shot recognition capabilities
powered by LLMs in downstream applications.
| no_new_dataset | 0.945601 |
2412.14042 | Danila Rukhovich | Danila Rukhovich, Elona Dupont, Dimitrios Mallis, Kseniya Cherenkova,
Anis Kacem, Djamila Aouada | CAD-Recode: Reverse Engineering CAD Code from Point Clouds | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer-Aided Design (CAD) models are typically constructed by sequentially
drawing parametric sketches and applying CAD operations to obtain a 3D model.
The problem of 3D CAD reverse engineering consists of reconstructing the sketch
and CAD operation sequences from 3D representations such as point clouds. In
this paper, we address this challenge through novel contributions across three
levels: CAD sequence representation, network design, and training dataset. In
particular, we represent CAD sketch-extrude sequences as Python code. The
proposed CAD-Recode translates a point cloud into Python code that, when
executed, reconstructs the CAD model. Taking advantage of the exposure of
pre-trained Large Language Models (LLMs) to Python code, we leverage a
relatively small LLM as a decoder for CAD-Recode and combine it with a
lightweight point cloud projector. CAD-Recode is trained on a procedurally
generated dataset of one million CAD sequences. CAD-Recode significantly
outperforms existing methods across the DeepCAD, Fusion360 and real-world CC3D
datasets. Furthermore, we show that our CAD Python code output is interpretable
by off-the-shelf LLMs, enabling CAD editing and CAD-specific question answering
from point clouds.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 16:55:42 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 15:54:17 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Rukhovich",
"Danila",
""
],
[
"Dupont",
"Elona",
""
],
[
"Mallis",
"Dimitrios",
""
],
[
"Cherenkova",
"Kseniya",
""
],
[
"Kacem",
"Anis",
""
],
[
"Aouada",
"Djamila",
""
]
]
| TITLE: CAD-Recode: Reverse Engineering CAD Code from Point Clouds
ABSTRACT: Computer-Aided Design (CAD) models are typically constructed by sequentially
drawing parametric sketches and applying CAD operations to obtain a 3D model.
The problem of 3D CAD reverse engineering consists of reconstructing the sketch
and CAD operation sequences from 3D representations such as point clouds. In
this paper, we address this challenge through novel contributions across three
levels: CAD sequence representation, network design, and training dataset. In
particular, we represent CAD sketch-extrude sequences as Python code. The
proposed CAD-Recode translates a point cloud into Python code that, when
executed, reconstructs the CAD model. Taking advantage of the exposure of
pre-trained Large Language Models (LLMs) to Python code, we leverage a
relatively small LLM as a decoder for CAD-Recode and combine it with a
lightweight point cloud projector. CAD-Recode is trained on a procedurally
generated dataset of one million CAD sequences. CAD-Recode significantly
outperforms existing methods across the DeepCAD, Fusion360 and real-world CC3D
datasets. Furthermore, we show that our CAD Python code output is interpretable
by off-the-shelf LLMs, enabling CAD editing and CAD-specific question answering
from point clouds.
| new_dataset | 0.957557 |
2412.16563 | Xiangyue Zhang | Xiangyue Zhang, Jianfang Li, Jiaxu Zhang, Ziqiang Dang, Jianqiang Ren,
Liefeng Bo, Zhigang Tu | SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic
Emphasis | 11 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A good co-speech motion generation cannot be achieved without a careful
integration of common rhythmic motion and rare yet essential semantic motion.
In this work, we propose SemTalk for holistic co-speech motion generation with
frame-level semantic emphasis. Our key insight is to separately learn base
motions and sparse motions, and then adaptively fuse them. In particular,
a coarse2fine cross-attention module and rhythmic consistency learning are
explored to establish rhythm-related base motion, ensuring a coherent
foundation that synchronizes gestures with the speech rhythm. Subsequently,
semantic emphasis learning is designed to generate semantic-aware sparse
motion, focusing on frame-level semantic cues. Finally, to integrate sparse
motion into the base motion and generate semantic-emphasized co-speech
gestures, we further leverage a learned semantic score for adaptive synthesis.
Qualitative and quantitative comparisons on two public datasets demonstrate
that our method outperforms the state-of-the-art, delivering high-quality
co-speech motion with enhanced semantic richness over a stable base motion.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2024 10:16:07 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jan 2025 13:34:12 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 13:04:35 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Xiangyue",
""
],
[
"Li",
"Jianfang",
""
],
[
"Zhang",
"Jiaxu",
""
],
[
"Dang",
"Ziqiang",
""
],
[
"Ren",
"Jianqiang",
""
],
[
"Bo",
"Liefeng",
""
],
[
"Tu",
"Zhigang",
""
]
]
| TITLE: SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic
Emphasis
ABSTRACT: A good co-speech motion generation cannot be achieved without a careful
integration of common rhythmic motion and rare yet essential semantic motion.
In this work, we propose SemTalk for holistic co-speech motion generation with
frame-level semantic emphasis. Our key insight is to separately learn base
motions and sparse motions, and then adaptively fuse them. In particular,
a coarse2fine cross-attention module and rhythmic consistency learning are
explored to establish rhythm-related base motion, ensuring a coherent
foundation that synchronizes gestures with the speech rhythm. Subsequently,
semantic emphasis learning is designed to generate semantic-aware sparse
motion, focusing on frame-level semantic cues. Finally, to integrate sparse
motion into the base motion and generate semantic-emphasized co-speech
gestures, we further leverage a learned semantic score for adaptive synthesis.
Qualitative and quantitative comparisons on two public datasets demonstrate
that our method outperforms the state-of-the-art, delivering high-quality
co-speech motion with enhanced semantic richness over a stable base motion.
| no_new_dataset | 0.954351 |
2501.01428 | Zhangyang Qi | Zhangyang Qi, Zhixiong Zhang, Ye Fang, Jiaqi Wang, Hengshuang Zhao | GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models | Project page: https://gpt4scene.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, 2D Vision-Language Models (VLMs) have made significant
strides in image-text understanding tasks. However, their performance in 3D
spatial comprehension, which is critical for embodied intelligence, remains
limited. Recent advances have leveraged 3D point clouds and multi-view images
as inputs, yielding promising results. However, we propose exploring a purely
vision-based solution inspired by human perception, which merely relies on
visual cues for 3D spatial understanding. This paper empirically investigates
the limitations of VLMs in 3D spatial knowledge, revealing that their primary
shortcoming lies in the lack of global-local correspondence between the scene
and individual frames. To address this, we introduce GPT4Scene, a novel visual
prompting paradigm in VLM training and inference that helps build the
global-local relationship, significantly improving the 3D spatial understanding
of indoor scenes. Specifically, GPT4Scene constructs a Bird's Eye View (BEV)
image from the video and marks consistent object IDs across both frames and the
BEV image. The model then inputs the concatenated BEV image and video frames
with markers. In zero-shot evaluations, GPT4Scene improves performance over
closed-source VLMs like GPT-4o. Additionally, we prepare a processed video
dataset consisting of 165K text annotations to fine-tune open-source VLMs,
achieving state-of-the-art performance on all 3D understanding tasks.
Surprisingly, after training with the GPT4Scene paradigm, VLMs consistently
improve during inference, even without object marker prompting and BEV image as
explicit correspondence. It demonstrates that the proposed paradigm helps VLMs
develop an intrinsic ability to understand 3D scenes, which paves the way for a
seamless approach to extending pre-trained VLMs for 3D scene understanding.
| [
{
"version": "v1",
"created": "Thu, 2 Jan 2025 18:59:59 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 12:30:16 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jan 2025 16:41:07 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Mar 2025 07:54:04 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Qi",
"Zhangyang",
""
],
[
"Zhang",
"Zhixiong",
""
],
[
"Fang",
"Ye",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Zhao",
"Hengshuang",
""
]
]
| TITLE: GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models
ABSTRACT: In recent years, 2D Vision-Language Models (VLMs) have made significant
strides in image-text understanding tasks. However, their performance in 3D
spatial comprehension, which is critical for embodied intelligence, remains
limited. Recent advances have leveraged 3D point clouds and multi-view images
as inputs, yielding promising results. However, we propose exploring a purely
vision-based solution inspired by human perception, which merely relies on
visual cues for 3D spatial understanding. This paper empirically investigates
the limitations of VLMs in 3D spatial knowledge, revealing that their primary
shortcoming lies in the lack of global-local correspondence between the scene
and individual frames. To address this, we introduce GPT4Scene, a novel visual
prompting paradigm in VLM training and inference that helps build the
global-local relationship, significantly improving the 3D spatial understanding
of indoor scenes. Specifically, GPT4Scene constructs a Bird's Eye View (BEV)
image from the video and marks consistent object IDs across both frames and the
BEV image. The model then inputs the concatenated BEV image and video frames
with markers. In zero-shot evaluations, GPT4Scene improves performance over
closed-source VLMs like GPT-4o. Additionally, we prepare a processed video
dataset consisting of 165K text annotations to fine-tune open-source VLMs,
achieving state-of-the-art performance on all 3D understanding tasks.
Surprisingly, after training with the GPT4Scene paradigm, VLMs consistently
improve during inference, even without object marker prompting and BEV image as
explicit correspondence. It demonstrates that the proposed paradigm helps VLMs
develop an intrinsic ability to understand 3D scenes, which paves the way for a
seamless approach to extending pre-trained VLMs for 3D scene understanding.
| no_new_dataset | 0.949716 |
2501.04926 | JunHak Yun | Jun-Hak Yun, Seung-Bin Kim, Seong-Whan Lee | FLowHigh: Towards Efficient and High-Quality Audio Super-Resolution with
Single-Step Flow Matching | Accepted by ICASSP 2025 | null | 10.1109/ICASSP49660.2025.10888772 | null | eess.AS cs.AI cs.CL cs.SD | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Audio super-resolution is challenging owing to its ill-posed nature.
Recently, the application of diffusion models in audio super-resolution has
shown promising results in alleviating this challenge. However, diffusion-based
models have limitations, primarily the necessity for numerous sampling steps,
which causes significantly increased latency when synthesizing high-quality
audio samples. In this paper, we propose FLowHigh, a novel approach that
integrates flow matching, a highly efficient generative model, into audio
super-resolution. We also explore probability paths specially tailored for
audio super-resolution, which effectively capture high-resolution audio
distributions, thereby enhancing reconstruction quality. The proposed method
generates high-fidelity, high-resolution audio through a single-step sampling
process across various input sampling rates. The experimental results on the
VCTK benchmark dataset demonstrate that FLowHigh achieves state-of-the-art
performance in audio super-resolution, as evaluated by log-spectral distance
and ViSQOL while maintaining computational efficiency with only a single-step
sampling process.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 02:30:26 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yun",
"Jun-Hak",
""
],
[
"Kim",
"Seung-Bin",
""
],
[
"Lee",
"Seong-Whan",
""
]
]
| TITLE: FLowHigh: Towards Efficient and High-Quality Audio Super-Resolution with
Single-Step Flow Matching
ABSTRACT: Audio super-resolution is challenging owing to its ill-posed nature.
Recently, the application of diffusion models in audio super-resolution has
shown promising results in alleviating this challenge. However, diffusion-based
models have limitations, primarily the necessity for numerous sampling steps,
which causes significantly increased latency when synthesizing high-quality
audio samples. In this paper, we propose FLowHigh, a novel approach that
integrates flow matching, a highly efficient generative model, into audio
super-resolution. We also explore probability paths specially tailored for
audio super-resolution, which effectively capture high-resolution audio
distributions, thereby enhancing reconstruction quality. The proposed method
generates high-fidelity, high-resolution audio through a single-step sampling
process across various input sampling rates. The experimental results on the
VCTK benchmark dataset demonstrate that FLowHigh achieves state-of-the-art
performance in audio super-resolution, as evaluated by log-spectral distance
and ViSQOL while maintaining computational efficiency with only a single-step
sampling process.
| no_new_dataset | 0.95222 |
2501.06714 | Yuxin Wang | Yuxin Wang, Qianyi Wu, Dan Xu | F3D-Gaus: Feed-forward 3D-aware Generation on ImageNet with
Cycle-Aggregative Gaussian Splatting | Project Page: https://w-ted.github.io/publications/F3D-Gaus | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper tackles the problem of generalizable 3D-aware generation from
monocular datasets, e.g., ImageNet. The key challenge of this task is learning
a robust 3D-aware representation without multi-view or dynamic data, while
ensuring consistent texture and geometry across different viewpoints. Although
some baseline methods are capable of 3D-aware generation, the quality of the
generated images still lags behind state-of-the-art 2D generation approaches,
which excel in producing high-quality, detailed images. To address this severe
limitation, we propose a novel feed-forward pipeline based on pixel-aligned
Gaussian Splatting, coined as F3D-Gaus, which can produce more realistic and
reliable 3D renderings from monocular inputs. In addition, we introduce a
self-supervised cycle-aggregative constraint to enforce cross-view consistency
in the learned 3D representation. This training strategy naturally allows
aggregation of multiple aligned Gaussian primitives and significantly
alleviates the interpolation limitations inherent in single-view pixel-aligned
Gaussian Splatting. Furthermore, we incorporate video model priors to perform
geometry-aware refinement, enhancing the generation of fine details in
wide-viewpoint scenarios and improving the model's capability to capture
intricate 3D textures. Extensive experiments demonstrate that our approach not
only achieves high-quality, multi-view consistent 3D-aware generation from
monocular datasets, but also significantly improves training and inference
efficiency.
| [
{
"version": "v1",
"created": "Sun, 12 Jan 2025 04:44:44 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jan 2025 08:33:26 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 07:55:22 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Yuxin",
""
],
[
"Wu",
"Qianyi",
""
],
[
"Xu",
"Dan",
""
]
]
| TITLE: F3D-Gaus: Feed-forward 3D-aware Generation on ImageNet with
Cycle-Aggregative Gaussian Splatting
ABSTRACT: This paper tackles the problem of generalizable 3D-aware generation from
monocular datasets, e.g., ImageNet. The key challenge of this task is learning
a robust 3D-aware representation without multi-view or dynamic data, while
ensuring consistent texture and geometry across different viewpoints. Although
some baseline methods are capable of 3D-aware generation, the quality of the
generated images still lags behind state-of-the-art 2D generation approaches,
which excel in producing high-quality, detailed images. To address this severe
limitation, we propose a novel feed-forward pipeline based on pixel-aligned
Gaussian Splatting, coined as F3D-Gaus, which can produce more realistic and
reliable 3D renderings from monocular inputs. In addition, we introduce a
self-supervised cycle-aggregative constraint to enforce cross-view consistency
in the learned 3D representation. This training strategy naturally allows
aggregation of multiple aligned Gaussian primitives and significantly
alleviates the interpolation limitations inherent in single-view pixel-aligned
Gaussian Splatting. Furthermore, we incorporate video model priors to perform
geometry-aware refinement, enhancing the generation of fine details in
wide-viewpoint scenarios and improving the model's capability to capture
intricate 3D textures. Extensive experiments demonstrate that our approach not
only achieves high-quality, multi-view consistent 3D-aware generation from
monocular datasets, but also significantly improves training and inference
efficiency.
| no_new_dataset | 0.950134 |
2501.07397 | Shuo Zhang | Runpu Wei, Zijin Yin, Shuo Zhang, Lanxiang Zhou, Xueyi Wang, Chao Ban,
Tianwei Cao, Hao Sun, Zhongjiang He, Kongming Liang, Zhanyu Ma | OmniEraser: Remove Objects and Their Effects in Images with Paired
Video-Frame Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inpainting algorithms have achieved remarkable progress in removing objects
from images, yet still face two challenges: 1) struggle to handle the object's
visual effects such as shadow and reflection; 2) easily generate shape-like
artifacts and unintended content. In this paper, we propose Video4Removal, a
large-scale dataset comprising over 100,000 high-quality samples with realistic
object shadows and reflections. By constructing object-background pairs from
video frames with off-the-shelf vision models, the labor costs of data
acquisition can be significantly reduced. To avoid generating shape-like
artifacts and unintended content, we propose Object-Background Guidance, an
elaborated paradigm that takes both the foreground object and background
images. It can guide the diffusion process to harness richer contextual
information. Based on the above two designs, we present OmniEraser, a novel
method that seamlessly removes objects and their visual effects using only
object masks as input. Extensive experiments show that OmniEraser significantly
outperforms previous methods, particularly in complex in-the-wild scenes. And
it also exhibits a strong generalization ability in anime-style images.
Datasets, models, and codes will be published.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2025 15:12:40 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Jan 2025 06:41:24 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 14:04:38 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wei",
"Runpu",
""
],
[
"Yin",
"Zijin",
""
],
[
"Zhang",
"Shuo",
""
],
[
"Zhou",
"Lanxiang",
""
],
[
"Wang",
"Xueyi",
""
],
[
"Ban",
"Chao",
""
],
[
"Cao",
"Tianwei",
""
],
[
"Sun",
"Hao",
""
],
[
"He",
"Zhongjiang",
""
],
[
"Liang",
"Kongming",
""
],
[
"Ma",
"Zhanyu",
""
]
]
| TITLE: OmniEraser: Remove Objects and Their Effects in Images with Paired
Video-Frame Data
ABSTRACT: Inpainting algorithms have achieved remarkable progress in removing objects
from images, yet still face two challenges: 1) struggle to handle the object's
visual effects such as shadow and reflection; 2) easily generate shape-like
artifacts and unintended content. In this paper, we propose Video4Removal, a
large-scale dataset comprising over 100,000 high-quality samples with realistic
object shadows and reflections. By constructing object-background pairs from
video frames with off-the-shelf vision models, the labor costs of data
acquisition can be significantly reduced. To avoid generating shape-like
artifacts and unintended content, we propose Object-Background Guidance, an
elaborated paradigm that takes both the foreground object and background
images. It can guide the diffusion process to harness richer contextual
information. Based on the above two designs, we present OmniEraser, a novel
method that seamlessly removes objects and their visual effects using only
object masks as input. Extensive experiments show that OmniEraser significantly
outperforms previous methods, particularly in complex in-the-wild scenes. And
it also exhibits a strong generalization ability in anime-style images.
Datasets, models, and codes will be published.
| new_dataset | 0.957238 |
2501.08545 | Zelu Qi | Zelu Qi, Ping Shi, Shuqi Wang, Chaoyang Zhang, Fei Zhao, Zefeng Ying,
Da Pan, Xi Yang, Zheqi He, Teng Dai | T2VEval: Benchmark Dataset and Objective Evaluation Method for
T2V-generated Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in text-to-video (T2V) technology, as demonstrated by models
such as Runway Gen-3, Pika, Sora, and Kling, have significantly broadened the
applicability and popularity of the technology. This progress has created a
growing demand for accurate quality assessment metrics to evaluate the
perceptual quality of T2V-generated videos and optimize video generation
models. However, assessing the quality of text-to-video outputs remains
challenging due to the presence of highly complex distortions, such as
unnatural actions and phenomena that defy human cognition. To address these
challenges, we constructed T2VEval-Bench, a multi-dimensional benchmark dataset
for text-to-video quality evaluation, which contains 148 textual prompts and
1,783 videos generated by 13 T2V models. To ensure a comprehensive evaluation,
we scored each video on four dimensions in the subjective experiment, which are
overall impression, text-video consistency, realness, and technical quality.
Based on T2VEval-Bench, we developed T2VEval, a multi-branch fusion scheme for
T2V quality evaluation. T2VEval assesses videos across three branches:
text-video consistency, realness, and technical quality. Using an
attention-based fusion module, T2VEval effectively integrates features from
each branch and predicts scores with the aid of a large language model.
Additionally, we implemented a divide-and-conquer training strategy, enabling
each branch to learn targeted knowledge while maintaining synergy with the
others. Experimental results demonstrate that T2VEval achieves state-of-the-art
performance across multiple metrics.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 03:11:33 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Jan 2025 09:39:47 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Feb 2025 12:59:13 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Feb 2025 12:58:49 GMT"
},
{
"version": "v5",
"created": "Tue, 11 Mar 2025 04:47:57 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Qi",
"Zelu",
""
],
[
"Shi",
"Ping",
""
],
[
"Wang",
"Shuqi",
""
],
[
"Zhang",
"Chaoyang",
""
],
[
"Zhao",
"Fei",
""
],
[
"Ying",
"Zefeng",
""
],
[
"Pan",
"Da",
""
],
[
"Yang",
"Xi",
""
],
[
"He",
"Zheqi",
""
],
[
"Dai",
"Teng",
""
]
]
| TITLE: T2VEval: Benchmark Dataset and Objective Evaluation Method for
T2V-generated Videos
ABSTRACT: Recent advances in text-to-video (T2V) technology, as demonstrated by models
such as Runway Gen-3, Pika, Sora, and Kling, have significantly broadened the
applicability and popularity of the technology. This progress has created a
growing demand for accurate quality assessment metrics to evaluate the
perceptual quality of T2V-generated videos and optimize video generation
models. However, assessing the quality of text-to-video outputs remains
challenging due to the presence of highly complex distortions, such as
unnatural actions and phenomena that defy human cognition. To address these
challenges, we constructed T2VEval-Bench, a multi-dimensional benchmark dataset
for text-to-video quality evaluation, which contains 148 textual prompts and
1,783 videos generated by 13 T2V models. To ensure a comprehensive evaluation,
we scored each video on four dimensions in the subjective experiment, which are
overall impression, text-video consistency, realness, and technical quality.
Based on T2VEval-Bench, we developed T2VEval, a multi-branch fusion scheme for
T2V quality evaluation. T2VEval assesses videos across three branches:
text-video consistency, realness, and technical quality. Using an
attention-based fusion module, T2VEval effectively integrates features from
each branch and predicts scores with the aid of a large language model.
Additionally, we implemented a divide-and-conquer training strategy, enabling
each branch to learn targeted knowledge while maintaining synergy with the
others. Experimental results demonstrate that T2VEval achieves state-of-the-art
performance across multiple metrics.
| new_dataset | 0.961534 |
2501.08682 | Siqi Li | Siqi Li, Zhengkai Jiang, Jiawei Zhou, Zhihong Liu, Xiaowei Chi,
Haoqian Wang | RealVVT: Towards Photorealistic Video Virtual Try-on via Spatio-Temporal
Consistency | 10 pages (8 pages main text, 2 pages references), 5 figures in the
main text, and 4 pages supplementary materials with 3 additional figures | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | Virtual try-on has emerged as a pivotal task at the intersection of computer
vision and fashion, aimed at digitally simulating how clothing items fit on the
human body. Despite notable progress in single-image virtual try-on (VTO),
current methodologies often struggle to preserve a consistent and authentic
appearance of clothing across extended video sequences. This challenge arises
from the complexities of capturing dynamic human pose and maintaining target
clothing characteristics. We leverage pre-existing video foundation models to
introduce RealVVT, a photoRealistic Video Virtual Try-on framework tailored to
bolster stability and realism within dynamic video contexts. Our methodology
encompasses a Clothing & Temporal Consistency strategy, an Agnostic-guided
Attention Focus Loss mechanism to ensure spatial consistency, and a Pose-guided
Long Video VTO technique adept at handling extended video sequences. Extensive
experiments across various datasets confirm that our approach outperforms
existing state-of-the-art models in both single-image and video VTO tasks,
offering a viable solution for practical applications within the realms of
fashion e-commerce and virtual fitting environments.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 09:22:38 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 10:06:51 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Li",
"Siqi",
""
],
[
"Jiang",
"Zhengkai",
""
],
[
"Zhou",
"Jiawei",
""
],
[
"Liu",
"Zhihong",
""
],
[
"Chi",
"Xiaowei",
""
],
[
"Wang",
"Haoqian",
""
]
]
| TITLE: RealVVT: Towards Photorealistic Video Virtual Try-on via Spatio-Temporal
Consistency
ABSTRACT: Virtual try-on has emerged as a pivotal task at the intersection of computer
vision and fashion, aimed at digitally simulating how clothing items fit on the
human body. Despite notable progress in single-image virtual try-on (VTO),
current methodologies often struggle to preserve a consistent and authentic
appearance of clothing across extended video sequences. This challenge arises
from the complexities of capturing dynamic human pose and maintaining target
clothing characteristics. We leverage pre-existing video foundation models to
introduce RealVVT, a photoRealistic Video Virtual Try-on framework tailored to
bolster stability and realism within dynamic video contexts. Our methodology
encompasses a Clothing & Temporal Consistency strategy, an Agnostic-guided
Attention Focus Loss mechanism to ensure spatial consistency, and a Pose-guided
Long Video VTO technique adept at handling extended video sequences. Extensive
experiments across various datasets confirm that our approach outperforms
existing state-of-the-art models in both single-image and video VTO tasks,
offering a viable solution for practical applications within the realms of
fashion e-commerce and virtual fitting environments.
| no_new_dataset | 0.952264 |
2501.09096 | Badhan Kumar Das | Badhan Kumar Das, Gengyan Zhao, Han Liu, Thomas J. Re, Dorin
Comaniciu, Eli Gibson, and Andreas Maier | Self Pre-training with Adaptive Mask Autoencoders for Variable-Contrast
3D Medical Imaging | 5 pages, ISBI 2025 accepted | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | The Masked Autoencoder (MAE) has recently demonstrated effectiveness in
pre-training Vision Transformers (ViT) for analyzing natural images. By
reconstructing complete images from partially masked inputs, the ViT encoder
gathers contextual information to predict the missing regions. This capability
to aggregate context is especially important in medical imaging, where
anatomical structures are functionally and mechanically linked to surrounding
regions. However, current methods do not consider variations in the number of
input images, which is typically the case in real-world Magnetic Resonance (MR)
studies. To address this limitation, we propose a 3D Adaptive Masked
Autoencoders (AMAE) architecture that accommodates a variable number of 3D
input contrasts per subject. A magnetic resonance imaging (MRI) dataset of
45,364 subjects was used for pretraining and a subset of 1648 training, 193
validation and 215 test subjects were used for finetuning. The performance
demonstrates that self pre-training of this adaptive masked autoencoders can
enhance the infarct segmentation performance by 2.8%-3.7% for ViT-based
segmentation models.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 19:29:31 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 18:48:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Das",
"Badhan Kumar",
""
],
[
"Zhao",
"Gengyan",
""
],
[
"Liu",
"Han",
""
],
[
"Re",
"Thomas J.",
""
],
[
"Comaniciu",
"Dorin",
""
],
[
"Gibson",
"Eli",
""
],
[
"Maier",
"Andreas",
""
]
]
| TITLE: Self Pre-training with Adaptive Mask Autoencoders for Variable-Contrast
3D Medical Imaging
ABSTRACT: The Masked Autoencoder (MAE) has recently demonstrated effectiveness in
pre-training Vision Transformers (ViT) for analyzing natural images. By
reconstructing complete images from partially masked inputs, the ViT encoder
gathers contextual information to predict the missing regions. This capability
to aggregate context is especially important in medical imaging, where
anatomical structures are functionally and mechanically linked to surrounding
regions. However, current methods do not consider variations in the number of
input images, which is typically the case in real-world Magnetic Resonance (MR)
studies. To address this limitation, we propose a 3D Adaptive Masked
Autoencoders (AMAE) architecture that accommodates a variable number of 3D
input contrasts per subject. A magnetic resonance imaging (MRI) dataset of
45,364 subjects was used for pretraining and a subset of 1648 training, 193
validation and 215 test subjects were used for finetuning. The performance
demonstrates that self pre-training of this adaptive masked autoencoders can
enhance the infarct segmentation performance by 2.8%-3.7% for ViT-based
segmentation models.
| no_new_dataset | 0.939582 |
2501.09363 | Deepjyoti Chetia | Deepjyoti Chetia, Sanjib Kr Kalita, Prof Partha Pratim Baruah,
Debasish Dutta, Tanaz Akhter | Identification of Traditional Medicinal Plant Leaves Using an effective
Deep Learning model and Self-Curated Dataset | null | null | 10.1007/978-3-031-83793-7_22 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Medicinal plants have been a key component in producing traditional and
modern medicines, especially in the field of Ayurveda, an ancient Indian
medical system. Producing these medicines and collecting and extracting the
right plant is a crucial step due to the visually similar nature of some
plants. The extraction of these plants from nonmedicinal plants requires human
expert intervention. To solve the issue of accurate plant identification and
reduce the need for a human expert in the collection process, employing
computer vision methods will be efficient and beneficial. In this paper, we
have proposed a model that solves such issues. The proposed model is a custom
convolutional neural network (CNN) architecture with 6 convolution layers,
max-pooling layers, and dense layers. The model was tested on three different
datasets named Indian Medicinal Leaves Image Dataset, MED117 Medicinal Plant
Leaf Dataset, and the self-curated dataset by the authors. The proposed model
achieved respective accuracies of 99.5%, 98.4%, and 99.7% using various
optimizers including Adam, RMSprop, and SGD with momentum.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 08:18:03 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chetia",
"Deepjyoti",
""
],
[
"Kalita",
"Sanjib Kr",
""
],
[
"Baruah",
"Prof Partha Pratim",
""
],
[
"Dutta",
"Debasish",
""
],
[
"Akhter",
"Tanaz",
""
]
]
| TITLE: Identification of Traditional Medicinal Plant Leaves Using an effective
Deep Learning model and Self-Curated Dataset
ABSTRACT: Medicinal plants have been a key component in producing traditional and
modern medicines, especially in the field of Ayurveda, an ancient Indian
medical system. Producing these medicines and collecting and extracting the
right plant is a crucial step due to the visually similar nature of some
plants. The extraction of these plants from nonmedicinal plants requires human
expert intervention. To solve the issue of accurate plant identification and
reduce the need for a human expert in the collection process, employing
computer vision methods will be efficient and beneficial. In this paper, we
have proposed a model that solves such issues. The proposed model is a custom
convolutional neural network (CNN) architecture with 6 convolution layers,
max-pooling layers, and dense layers. The model was tested on three different
datasets named Indian Medicinal Leaves Image Dataset, MED117 Medicinal Plant
Leaf Dataset, and the self-curated dataset by the authors. The proposed model
achieved respective accuracies of 99.5%, 98.4%, and 99.7% using various
optimizers including Adam, RMSprop, and SGD with momentum.
| new_dataset | 0.963882 |
2501.10229 | \v{S}imon Kucharsk\'y | \v{S}imon Kucharsk\'y and Paul Christian B\"urkner | Amortized Bayesian Mixture Models | 34 pages, 17 figures | null | null | null | stat.ML cs.LG stat.CO | http://creativecommons.org/licenses/by-sa/4.0/ | Finite mixtures are a broad class of models useful in scenarios where
observed data is generated by multiple distinct processes but without explicit
information about the responsible process for each data point. Estimating
Bayesian mixture models is computationally challenging due to issues such as
high-dimensional posterior inference and label switching. Furthermore,
traditional methods such as MCMC are applicable only if the likelihoods for
each mixture component are analytically tractable.
Amortized Bayesian Inference (ABI) is a simulation-based framework for
estimating Bayesian models using generative neural networks. This allows the
fitting of models without explicit likelihoods, and provides fast inference.
ABI is therefore an attractive framework for estimating mixture models. This
paper introduces a novel extension of ABI tailored to mixture models. We
factorize the posterior into a distribution of the parameters and a
distribution of (categorical) mixture indicators, which allows us to use a
combination of generative neural networks for parameter inference, and
classification networks for mixture membership identification. The proposed
framework accommodates both independent and dependent mixture models, enabling
filtering and smoothing. We validate and demonstrate our approach through
synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 14:51:03 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 07:27:19 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Kucharský",
"Šimon",
""
],
[
"Bürkner",
"Paul Christian",
""
]
]
| TITLE: Amortized Bayesian Mixture Models
ABSTRACT: Finite mixtures are a broad class of models useful in scenarios where
observed data is generated by multiple distinct processes but without explicit
information about the responsible process for each data point. Estimating
Bayesian mixture models is computationally challenging due to issues such as
high-dimensional posterior inference and label switching. Furthermore,
traditional methods such as MCMC are applicable only if the likelihoods for
each mixture component are analytically tractable.
Amortized Bayesian Inference (ABI) is a simulation-based framework for
estimating Bayesian models using generative neural networks. This allows the
fitting of models without explicit likelihoods, and provides fast inference.
ABI is therefore an attractive framework for estimating mixture models. This
paper introduces a novel extension of ABI tailored to mixture models. We
factorize the posterior into a distribution of the parameters and a
distribution of (categorical) mixture indicators, which allows us to use a
combination of generative neural networks for parameter inference, and
classification networks for mixture membership identification. The proposed
framework accommodates both independent and dependent mixture models, enabling
filtering and smoothing. We validate and demonstrate our approach through
synthetic and real-world datasets.
| no_new_dataset | 0.950457 |
2501.10290 | Ishank Juneja | Ishank Juneja, Carlee Joe-Wong and Osman Ya\u{g}an | Pairwise Elimination with Instance-Dependent Guarantees for Bandits with
Cost Subsidy | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-armed bandits (MAB) are commonly used in sequential online
decision-making when the reward of each decision is an unknown random variable.
In practice however, the typical goal of maximizing total reward may be less
important than minimizing the total cost of the decisions taken, subject to a
reward constraint. For example, we may seek to make decisions that have at
least the reward of a reference ``default'' decision, with as low a cost as
possible. This problem was recently introduced in the Multi-Armed Bandits with
Cost Subsidy (MAB-CS) framework. MAB-CS is broadly applicable to problem
domains where a primary metric (cost) is constrained by a secondary metric
(reward), and the rewards are unknown. In our work, we address variants of
MAB-CS including ones with reward constrained by the reward of a known
reference arm or by the subsidized best reward. We introduce the
Pairwise-Elimination (PE) algorithm for the known reference arm variant and
generalize PE to PE-CS for the subsidized best reward variant. Our
instance-dependent analysis of PE and PE-CS reveals that both algorithms have
an order-wise logarithmic upper bound on Cost and Quality Regret, making our
policies the first with such a guarantee. Moreover, by comparing our upper and
lower bound results we establish that PE is order-optimal for all known
reference arm problem instances. Finally, experiments are conducted using the
MovieLens 25M and Goodreads datasets for both PE and PE-CS revealing the
effectiveness of PE and the superior balance between performance and
reliability offered by PE-CS compared to baselines from the literature.
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 16:34:45 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 18:55:40 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Juneja",
"Ishank",
""
],
[
"Joe-Wong",
"Carlee",
""
],
[
"Yağan",
"Osman",
""
]
]
| TITLE: Pairwise Elimination with Instance-Dependent Guarantees for Bandits with
Cost Subsidy
ABSTRACT: Multi-armed bandits (MAB) are commonly used in sequential online
decision-making when the reward of each decision is an unknown random variable.
In practice however, the typical goal of maximizing total reward may be less
important than minimizing the total cost of the decisions taken, subject to a
reward constraint. For example, we may seek to make decisions that have at
least the reward of a reference ``default'' decision, with as low a cost as
possible. This problem was recently introduced in the Multi-Armed Bandits with
Cost Subsidy (MAB-CS) framework. MAB-CS is broadly applicable to problem
domains where a primary metric (cost) is constrained by a secondary metric
(reward), and the rewards are unknown. In our work, we address variants of
MAB-CS including ones with reward constrained by the reward of a known
reference arm or by the subsidized best reward. We introduce the
Pairwise-Elimination (PE) algorithm for the known reference arm variant and
generalize PE to PE-CS for the subsidized best reward variant. Our
instance-dependent analysis of PE and PE-CS reveals that both algorithms have
an order-wise logarithmic upper bound on Cost and Quality Regret, making our
policies the first with such a guarantee. Moreover, by comparing our upper and
lower bound results we establish that PE is order-optimal for all known
reference arm problem instances. Finally, experiments are conducted using the
MovieLens 25M and Goodreads datasets for both PE and PE-CS revealing the
effectiveness of PE and the superior balance between performance and
reliability offered by PE-CS compared to baselines from the literature.
| no_new_dataset | 0.945701 |
2501.10360 | Kartik Narayan | Kartik Narayan, Vibashan VS, Vishal M. Patel | FaceXBench: Evaluating Multimodal LLMs on Face Understanding | Project Page: https://kartik-3004.github.io/facexbench/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models (MLLMs) demonstrate impressive
problem-solving abilities across a wide range of tasks and domains. However,
their capacity for face understanding has not been systematically studied. To
address this gap, we introduce FaceXBench, a comprehensive benchmark designed
to evaluate MLLMs on complex face understanding tasks. FaceXBench includes
5,000 multimodal multiple-choice questions derived from 25 public datasets and
a newly created dataset, FaceXAPI. These questions cover 14 tasks across 6
broad categories, assessing MLLMs' face understanding abilities in bias and
fairness, face authentication, recognition, analysis, localization and tool
retrieval. Using FaceXBench, we conduct an extensive evaluation of 26
open-source MLLMs alongside 2 proprietary models, revealing the unique
challenges in complex face understanding tasks. We analyze the models across
three evaluation settings: zero-shot, in-context task description, and
chain-of-thought prompting. Our detailed analysis reveals that current MLLMs,
including advanced models like GPT-4o, and GeminiPro 1.5, show significant room
for improvement. We believe FaceXBench will be a crucial resource for
developing MLLMs equipped to perform sophisticated face understanding. Code:
https://github.com/Kartik-3004/facexbench
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 18:59:55 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 18:19:52 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Narayan",
"Kartik",
""
],
[
"VS",
"Vibashan",
""
],
[
"Patel",
"Vishal M.",
""
]
]
| TITLE: FaceXBench: Evaluating Multimodal LLMs on Face Understanding
ABSTRACT: Multimodal Large Language Models (MLLMs) demonstrate impressive
problem-solving abilities across a wide range of tasks and domains. However,
their capacity for face understanding has not been systematically studied. To
address this gap, we introduce FaceXBench, a comprehensive benchmark designed
to evaluate MLLMs on complex face understanding tasks. FaceXBench includes
5,000 multimodal multiple-choice questions derived from 25 public datasets and
a newly created dataset, FaceXAPI. These questions cover 14 tasks across 6
broad categories, assessing MLLMs' face understanding abilities in bias and
fairness, face authentication, recognition, analysis, localization and tool
retrieval. Using FaceXBench, we conduct an extensive evaluation of 26
open-source MLLMs alongside 2 proprietary models, revealing the unique
challenges in complex face understanding tasks. We analyze the models across
three evaluation settings: zero-shot, in-context task description, and
chain-of-thought prompting. Our detailed analysis reveals that current MLLMs,
including advanced models like GPT-4o, and GeminiPro 1.5, show significant room
for improvement. We believe FaceXBench will be a crucial resource for
developing MLLMs equipped to perform sophisticated face understanding. Code:
https://github.com/Kartik-3004/facexbench
| new_dataset | 0.959001 |
2501.10459 | Qianru Zhang | Qianru Zhang, Xinyi Gao, Haixin Wang, Siu-Ming Yiu and Hongzhi Yin | Efficient Traffic Prediction Through Spatio-Temporal Distillation | 9 pages | AAAI'2025 | null | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) have gained considerable attention in recent
years for traffic flow prediction due to their ability to learn spatio-temporal
pattern representations through a graph-based message-passing framework.
Although GNNs have shown great promise in handling traffic datasets, their
deployment in real-life applications has been hindered by scalability
constraints arising from high-order message passing. Additionally, the
over-smoothing problem of GNNs may lead to indistinguishable region
representations as the number of layers increases, resulting in performance
degradation. To address these challenges, we propose a new knowledge
distillation paradigm termed LightST that transfers spatial and temporal
knowledge from a high-capacity teacher to a lightweight student. Specifically,
we introduce a spatio-temporal knowledge distillation framework that helps
student MLPs capture graph-structured global spatio-temporal patterns while
alleviating the over-smoothing effect with adaptive knowledge distillation.
Extensive experiments verify that LightST significantly speeds up traffic flow
predictions by 5X to 40X compared to state-of-the-art spatio-temporal GNNs, all
while maintaining superior accuracy.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 04:23:10 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 06:38:35 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Qianru",
""
],
[
"Gao",
"Xinyi",
""
],
[
"Wang",
"Haixin",
""
],
[
"Yiu",
"Siu-Ming",
""
],
[
"Yin",
"Hongzhi",
""
]
]
| TITLE: Efficient Traffic Prediction Through Spatio-Temporal Distillation
ABSTRACT: Graph neural networks (GNNs) have gained considerable attention in recent
years for traffic flow prediction due to their ability to learn spatio-temporal
pattern representations through a graph-based message-passing framework.
Although GNNs have shown great promise in handling traffic datasets, their
deployment in real-life applications has been hindered by scalability
constraints arising from high-order message passing. Additionally, the
over-smoothing problem of GNNs may lead to indistinguishable region
representations as the number of layers increases, resulting in performance
degradation. To address these challenges, we propose a new knowledge
distillation paradigm termed LightST that transfers spatial and temporal
knowledge from a high-capacity teacher to a lightweight student. Specifically,
we introduce a spatio-temporal knowledge distillation framework that helps
student MLPs capture graph-structured global spatio-temporal patterns while
alleviating the over-smoothing effect with adaptive knowledge distillation.
Extensive experiments verify that LightST significantly speeds up traffic flow
predictions by 5X to 40X compared to state-of-the-art spatio-temporal GNNs, all
while maintaining superior accuracy.
| no_new_dataset | 0.947769 |
2501.11803 | Riqiang Gao | Riqiang Gao, Mamadou Diallo, Han Liu, Anthony Magliari, Jonathan
Sackett, Wilko Verbakel, Sandra Meyers, Masoud Zarepisheh, Rafe Mcbeth, Simon
Arberet, Martin Kraus, Florin C. Ghesu, Ali Kamen | Automating High Quality RT Planning at Scale | radiotherapy planning | null | null | null | cs.HC cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Radiotherapy (RT) planning is complex, subjective, and time-intensive.
Advances in artificial intelligence (AI) promise to improve its precision,
efficiency, and consistency, but progress is often limited by the scarcity of
large, standardized datasets. To address this, we introduce the Automated
Iterative RT Planning (AIRTP) system, a scalable solution for generating
high-quality treatment plans. This scalable solution is designed to generate
substantial volumes of consistently high-quality treatment plans, overcoming a
key obstacle in the advancement of AI-driven RT planning. Our AIRTP pipeline
adheres to clinical guidelines and automates essential steps, including
organ-at-risk (OAR) contouring, helper structure creation, beam setup,
optimization, and plan quality improvement, using AI integrated with RT
planning software like Eclipse of Varian. Furthermore, we present a novel approach for
determining optimization parameters to reproduce 3D dose distributions, i.e., a
method to convert dose predictions to deliverable treatment plans constrained
by machine limitations. A comparative analysis of plan quality reveals that our
automated pipeline produces treatment plans of quality comparable to those
generated manually, which traditionally require several hours of labor per
plan. Committed to public research, the first data release of our AIRTP
pipeline includes nine cohorts covering head-and-neck and lung cancer sites to
support an AAPM 2025 challenge. This data set features more than 10 times the
number of plans compared to the largest existing well-curated public data set
to the best of our knowledge. Repo:
https://github.com/RiqiangGao/GDP-HMM_AAPMChallenge.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 00:44:18 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 14:53:10 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Gao",
"Riqiang",
""
],
[
"Diallo",
"Mamadou",
""
],
[
"Liu",
"Han",
""
],
[
"Magliari",
"Anthony",
""
],
[
"Sackett",
"Jonathan",
""
],
[
"Verbakel",
"Wilko",
""
],
[
"Meyers",
"Sandra",
""
],
[
"Zarepisheh",
"Masoud",
""
],
[
"Mcbeth",
"Rafe",
""
],
[
"Arberet",
"Simon",
""
],
[
"Kraus",
"Martin",
""
],
[
"Ghesu",
"Florin C.",
""
],
[
"Kamen",
"Ali",
""
]
]
| TITLE: Automating High Quality RT Planning at Scale
ABSTRACT: Radiotherapy (RT) planning is complex, subjective, and time-intensive.
Advances in artificial intelligence (AI) promise to improve its precision,
efficiency, and consistency, but progress is often limited by the scarcity of
large, standardized datasets. To address this, we introduce the Automated
Iterative RT Planning (AIRTP) system, a scalable solution for generating
high-quality treatment plans. This scalable solution is designed to generate
substantial volumes of consistently high-quality treatment plans, overcoming a
key obstacle in the advancement of AI-driven RT planning. Our AIRTP pipeline
adheres to clinical guidelines and automates essential steps, including
organ-at-risk (OAR) contouring, helper structure creation, beam setup,
optimization, and plan quality improvement, using AI integrated with RT
planning software like Eclipse of Varian. Furthermore, we present a novel approach for
determining optimization parameters to reproduce 3D dose distributions, i.e., a
method to convert dose predictions to deliverable treatment plans constrained
by machine limitations. A comparative analysis of plan quality reveals that our
automated pipeline produces treatment plans of quality comparable to those
generated manually, which traditionally require several hours of labor per
plan. Committed to public research, the first data release of our AIRTP
pipeline includes nine cohorts covering head-and-neck and lung cancer sites to
support an AAPM 2025 challenge. This data set features more than 10 times the
number of plans compared to the largest existing well-curated public data set
to the best of our knowledge. Repo:
https://github.com/RiqiangGao/GDP-HMM_AAPMChallenge.
| no_new_dataset | 0.938801 |
2501.12382 | Yiyang Wang | Yiyang Wang, Xi Chen, Xiaogang Xu, Sihui Ji, Yu Liu, Yujun Shen,
Hengshuang Zhao | DiffDoctor: Diagnosing Image Diffusion Models Before Treating | 8 pages of main body | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In spite of recent progress, image diffusion models still produce artifacts.
A common solution is to leverage the feedback provided by quality assessment
systems or human annotators to optimize the model, where images are generally
rated in their entirety. In this work, we believe problem-solving starts with
identification, yielding the request that the model should be aware of not just
the presence of defects in an image, but their specific locations. Motivated by
this, we propose DiffDoctor, a two-stage pipeline to assist image diffusion
models in generating fewer artifacts. Concretely, the first stage targets
developing a robust artifact detector, for which we collect a dataset of over
1M flawed synthesized images and set up an efficient human-in-the-loop
annotation process, incorporating a carefully designed class-balance strategy.
The learned artifact detector is then involved in the second stage to optimize
the diffusion model by providing pixel-level feedback. Extensive experiments on
text-to-image diffusion models demonstrate the effectiveness of our artifact
detector as well as the soundness of our diagnose-then-treat design.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 18:56:41 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 12:44:34 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Yiyang",
""
],
[
"Chen",
"Xi",
""
],
[
"Xu",
"Xiaogang",
""
],
[
"Ji",
"Sihui",
""
],
[
"Liu",
"Yu",
""
],
[
"Shen",
"Yujun",
""
],
[
"Zhao",
"Hengshuang",
""
]
]
| TITLE: DiffDoctor: Diagnosing Image Diffusion Models Before Treating
ABSTRACT: In spite of recent progress, image diffusion models still produce artifacts.
A common solution is to leverage the feedback provided by quality assessment
systems or human annotators to optimize the model, where images are generally
rated in their entirety. In this work, we believe problem-solving starts with
identification, which requires the model to be aware of not just the presence
of defects in an image but also their specific locations. Motivated by
this, we propose DiffDoctor, a two-stage pipeline to assist image diffusion
models in generating fewer artifacts. Concretely, the first stage targets
developing a robust artifact detector, for which we collect a dataset of over
1M flawed synthesized images and set up an efficient human-in-the-loop
annotation process, incorporating a carefully designed class-balance strategy.
The learned artifact detector is then involved in the second stage to optimize
the diffusion model by providing pixel-level feedback. Extensive experiments on
text-to-image diffusion models demonstrate the effectiveness of our artifact
detector as well as the soundness of our diagnose-then-treat design.
| new_dataset | 0.830663 |
2501.16663 | Dayong Ye | Dayong Ye, Tianqing Zhu, Jiayang Li, Kun Gao, Bo Liu, Leo Yu Zhang,
Wanlei Zhou, Yang Zhang | Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine
Unlearning | Accepted at USENIX Security 2025 | null | null | null | cs.CR cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Duplication is a prevalent issue within datasets. Existing research has
demonstrated that the presence of duplicated data in training datasets can
significantly influence both model performance and data privacy. However, the
impact of data duplication on the unlearning process remains largely
unexplored. This paper addresses this gap by pioneering a comprehensive
investigation into the role of data duplication, not only in standard machine
unlearning but also in federated and reinforcement unlearning paradigms.
Specifically, we propose an adversary who duplicates a subset of the target
model's training set and incorporates it into the training set. After training,
the adversary requests the model owner to unlearn this duplicated subset, and
analyzes the impact on the unlearned model. For example, the adversary can
challenge the model owner by revealing that, despite efforts to unlearn it, the
influence of the duplicated subset remains in the model. Moreover, to
circumvent detection by de-duplication techniques, we propose three novel
near-duplication methods for the adversary, each tailored to a specific
unlearning paradigm. We then examine their impacts on the unlearning process
when de-duplication techniques are applied. Our findings reveal several crucial
insights: 1) the gold standard unlearning method, retraining from scratch,
fails to effectively conduct unlearning under certain conditions; 2) unlearning
duplicated data can lead to significant model degradation in specific
scenarios; and 3) meticulously crafted duplicates can evade detection by
de-duplication methods.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 02:52:51 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 04:54:03 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ye",
"Dayong",
""
],
[
"Zhu",
"Tianqing",
""
],
[
"Li",
"Jiayang",
""
],
[
"Gao",
"Kun",
""
],
[
"Liu",
"Bo",
""
],
[
"Zhang",
"Leo Yu",
""
],
[
"Zhou",
"Wanlei",
""
],
[
"Zhang",
"Yang",
""
]
]
| TITLE: Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine
Unlearning
ABSTRACT: Duplication is a prevalent issue within datasets. Existing research has
demonstrated that the presence of duplicated data in training datasets can
significantly influence both model performance and data privacy. However, the
impact of data duplication on the unlearning process remains largely
unexplored. This paper addresses this gap by pioneering a comprehensive
investigation into the role of data duplication, not only in standard machine
unlearning but also in federated and reinforcement unlearning paradigms.
Specifically, we propose an adversary who duplicates a subset of the target
model's training set and incorporates it into the training set. After training,
the adversary requests the model owner to unlearn this duplicated subset, and
analyzes the impact on the unlearned model. For example, the adversary can
challenge the model owner by revealing that, despite efforts to unlearn it, the
influence of the duplicated subset remains in the model. Moreover, to
circumvent detection by de-duplication techniques, we propose three novel
near-duplication methods for the adversary, each tailored to a specific
unlearning paradigm. We then examine their impacts on the unlearning process
when de-duplication techniques are applied. Our findings reveal several crucial
insights: 1) the gold standard unlearning method, retraining from scratch,
fails to effectively conduct unlearning under certain conditions; 2) unlearning
duplicated data can lead to significant model degradation in specific
scenarios; and 3) meticulously crafted duplicates can evade detection by
de-duplication methods.
| no_new_dataset | 0.942082 |
2501.17328 | Tom Nuno Wolf | Tom Nuno Wolf and Emre Kavak and Fabian Bongratz and Christian
Wachinger | SIC: Similarity-Based Interpretable Image Classification with Neural
Networks | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The deployment of deep learning models in critical domains necessitates a
balance between high accuracy and interpretability. We introduce SIC, an
inherently interpretable neural network that provides local and global
explanations of its decision-making process. Leveraging the concept of
case-based reasoning, SIC extracts class-representative support vectors from
training images, ensuring they capture relevant features while suppressing
irrelevant ones. Classification decisions are made by calculating and
aggregating similarity scores between these support vectors and the input's
latent feature vector. We employ B-Cos transformations, which align model
weights with inputs, to yield coherent pixel-level explanations in addition to
global explanations of case-based reasoning. We evaluate SIC on three tasks:
fine-grained classification on Stanford Dogs and FunnyBirds, multi-label
classification on Pascal VOC, and pathology detection on the RSNA dataset.
Results indicate that SIC not only achieves competitive accuracy compared to
state-of-the-art black-box and inherently interpretable models but also offers
insightful explanations verified through practical evaluation on the FunnyBirds
benchmark. Our theoretical analysis proves that these explanations fulfill
established axioms for explanations. Our findings underscore SIC's potential
for applications where understanding model decisions is as critical as the
decisions themselves.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 22:39:03 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 19:36:39 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wolf",
"Tom Nuno",
""
],
[
"Kavak",
"Emre",
""
],
[
"Bongratz",
"Fabian",
""
],
[
"Wachinger",
"Christian",
""
]
]
| TITLE: SIC: Similarity-Based Interpretable Image Classification with Neural
Networks
ABSTRACT: The deployment of deep learning models in critical domains necessitates a
balance between high accuracy and interpretability. We introduce SIC, an
inherently interpretable neural network that provides local and global
explanations of its decision-making process. Leveraging the concept of
case-based reasoning, SIC extracts class-representative support vectors from
training images, ensuring they capture relevant features while suppressing
irrelevant ones. Classification decisions are made by calculating and
aggregating similarity scores between these support vectors and the input's
latent feature vector. We employ B-Cos transformations, which align model
weights with inputs, to yield coherent pixel-level explanations in addition to
global explanations of case-based reasoning. We evaluate SIC on three tasks:
fine-grained classification on Stanford Dogs and FunnyBirds, multi-label
classification on Pascal VOC, and pathology detection on the RSNA dataset.
Results indicate that SIC not only achieves competitive accuracy compared to
state-of-the-art black-box and inherently interpretable models but also offers
insightful explanations verified through practical evaluation on the FunnyBirds
benchmark. Our theoretical analysis proves that these explanations fulfill
established axioms for explanations. Our findings underscore SIC's potential
for applications where understanding model decisions is as critical as the
decisions themselves.
| no_new_dataset | 0.946646 |
2501.18509 | Faegheh Sardari | Faegheh Sardari, Armin Mustafa, Philip J. B. Jackson, Adrian Hilton | Reframing Dense Action Detection (RefDense): A Paradigm Shift in Problem
Solving & a Novel Optimization Strategy | Computer Vision | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dense action detection involves detecting multiple co-occurring actions while
action classes are often ambiguous and represent overlapping concepts. We argue
that handling the dual challenge of temporal and class overlaps is too complex
to effectively be tackled by a single network. To address this, we propose to
decompose the task of detecting dense ambiguous actions into detecting dense,
unambiguous sub-concepts that form the action classes (i.e., action entities
and action motions), and assigning these sub-tasks to distinct sub-networks. By
isolating these unambiguous concepts, the sub-networks can focus exclusively on
resolving a single challenge, dense temporal overlaps. Furthermore,
simultaneous actions in a video often exhibit interrelationships, and
exploiting these relationships can improve the method performance. However,
current dense action detection networks fail to effectively learn these
relationships due to their reliance on binary cross-entropy optimization, which
treats each class independently. To address this limitation, we propose
providing explicit supervision on co-occurring concepts during network
optimization through a novel language-guided contrastive learning loss. Our
extensive experiments demonstrate the superiority of our approach over
state-of-the-art methods, achieving substantial improvements of 3.8% and 1.7%
on average across all metrics on the challenging benchmark datasets, Charades
and MultiTHUMOS.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 17:20:42 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 12:34:08 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Sardari",
"Faegheh",
""
],
[
"Mustafa",
"Armin",
""
],
[
"Jackson",
"Philip J. B.",
""
],
[
"Hilton",
"Adrian",
""
]
]
| TITLE: Reframing Dense Action Detection (RefDense): A Paradigm Shift in Problem
Solving & a Novel Optimization Strategy
ABSTRACT: Dense action detection involves detecting multiple co-occurring actions while
action classes are often ambiguous and represent overlapping concepts. We argue
that handling the dual challenge of temporal and class overlaps is too complex
to effectively be tackled by a single network. To address this, we propose to
decompose the task of detecting dense ambiguous actions into detecting dense,
unambiguous sub-concepts that form the action classes (i.e., action entities
and action motions), and assigning these sub-tasks to distinct sub-networks. By
isolating these unambiguous concepts, the sub-networks can focus exclusively on
resolving a single challenge, dense temporal overlaps. Furthermore,
simultaneous actions in a video often exhibit interrelationships, and
exploiting these relationships can improve the method performance. However,
current dense action detection networks fail to effectively learn these
relationships due to their reliance on binary cross-entropy optimization, which
treats each class independently. To address this limitation, we propose
providing explicit supervision on co-occurring concepts during network
optimization through a novel language-guided contrastive learning loss. Our
extensive experiments demonstrate the superiority of our approach over
state-of-the-art methods, achieving substantial improvements of 3.8% and 1.7%
on average across all metrics on the challenging benchmark datasets, Charades
and MultiTHUMOS.
| no_new_dataset | 0.942401 |
2502.00412 | Ziyu Wang | Ziyu Wang, Tengyu Pan, Zhenyu Li, Ji Wu, Xiuxing Li and Jianyong Wang | TROI: Cross-Subject Pretraining with Sparse Voxel Selection for Enhanced
fMRI Visual Decoding | ICASSP 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | fMRI (functional Magnetic Resonance Imaging) visual decoding involves
decoding the original image from brain signals elicited by visual stimuli. This
often relies on manually labeled ROIs (Regions of Interest) to select brain
voxels. However, these ROIs can contain redundant information and noise,
reducing decoding performance. Additionally, the lack of automated ROI labeling
methods hinders the practical application of fMRI visual decoding technology,
especially for new subjects. This work presents TROI (Trainable Region of
Interest), a novel two-stage, data-driven ROI labeling method for cross-subject
fMRI decoding tasks, particularly when subject samples are limited. TROI
leverages labeled ROIs in the dataset to pretrain an image decoding backbone on
a cross-subject dataset, enabling efficient optimization of the input layer for
new subjects without retraining the entire model from scratch. In the first
stage, we introduce a voxel selection method that combines sparse mask training
and low-pass filtering to quickly generate the voxel mask and determine input
layer dimensions. In the second stage, we apply a learning rate rewinding
strategy to fine-tune the input layer for downstream tasks. Experimental
results on the same small sample dataset as the baseline method for brain
visual retrieval and reconstruction tasks show that our voxel selection method
surpasses the state-of-the-art method MindEye2 with an annotated ROI mask.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2025 12:20:17 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 07:44:46 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Ziyu",
""
],
[
"Pan",
"Tengyu",
""
],
[
"Li",
"Zhenyu",
""
],
[
"Wu",
"Ji",
""
],
[
"Li",
"Xiuxing",
""
],
[
"Wang",
"Jianyong",
""
]
]
| TITLE: TROI: Cross-Subject Pretraining with Sparse Voxel Selection for Enhanced
fMRI Visual Decoding
ABSTRACT: fMRI (functional Magnetic Resonance Imaging) visual decoding involves
decoding the original image from brain signals elicited by visual stimuli. This
often relies on manually labeled ROIs (Regions of Interest) to select brain
voxels. However, these ROIs can contain redundant information and noise,
reducing decoding performance. Additionally, the lack of automated ROI labeling
methods hinders the practical application of fMRI visual decoding technology,
especially for new subjects. This work presents TROI (Trainable Region of
Interest), a novel two-stage, data-driven ROI labeling method for cross-subject
fMRI decoding tasks, particularly when subject samples are limited. TROI
leverages labeled ROIs in the dataset to pretrain an image decoding backbone on
a cross-subject dataset, enabling efficient optimization of the input layer for
new subjects without retraining the entire model from scratch. In the first
stage, we introduce a voxel selection method that combines sparse mask training
and low-pass filtering to quickly generate the voxel mask and determine input
layer dimensions. In the second stage, we apply a learning rate rewinding
strategy to fine-tune the input layer for downstream tasks. Experimental
results on the same small sample dataset as the baseline method for brain
visual retrieval and reconstruction tasks show that our voxel selection method
surpasses the state-of-the-art method MindEye2 with an annotated ROI mask.
| no_new_dataset | 0.950869 |
2502.03424 | Yuan Xinjie | Yuan Xinjie and Khalid M. Mosalam | Prediction of the Most Fire-Sensitive Point in Building Structures with
Differentiable Agents for Thermal Simulators | This paper is currently under review at Computer-Aided Civil and
Infrastructure Engineering | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fire safety is crucial for ensuring the stability of building structures, yet
evaluating whether a structure meets fire safety requirements is challenging.
Fires can originate at any point within a structure, and simulating every
potential fire scenario is both expensive and time-consuming. To address this
challenge, we propose the concept of the Most Fire-Sensitive Point (MFSP) and
an efficient machine learning framework for its identification. The MFSP is
defined as the location at which a fire, if initiated, would cause the most
severe detrimental impact on the building's stability, effectively representing
the worst-case fire scenario. In our framework, a Graph Neural Network (GNN)
serves as an efficient and differentiable agent for conventional Finite Element
Analysis (FEA) simulators by predicting the Maximum Interstory Drift Ratio
(MIDR) under fire, which then guides the training and evaluation of the MFSP
predictor. Additionally, we enhance our framework with a novel edge update
mechanism and a transfer learning-based training scheme. Evaluations on a
large-scale simulation dataset demonstrate the good performance of the proposed
framework in identifying the MFSP, offering a transformative tool for
optimizing fire safety assessments in structural design. All developed datasets
and codes are open-sourced online.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 18:14:20 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 21:24:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Xinjie",
"Yuan",
""
],
[
"Mosalam",
"Khalid M.",
""
]
]
| TITLE: Prediction of the Most Fire-Sensitive Point in Building Structures with
Differentiable Agents for Thermal Simulators
ABSTRACT: Fire safety is crucial for ensuring the stability of building structures, yet
evaluating whether a structure meets fire safety requirements is challenging.
Fires can originate at any point within a structure, and simulating every
potential fire scenario is both expensive and time-consuming. To address this
challenge, we propose the concept of the Most Fire-Sensitive Point (MFSP) and
an efficient machine learning framework for its identification. The MFSP is
defined as the location at which a fire, if initiated, would cause the most
severe detrimental impact on the building's stability, effectively representing
the worst-case fire scenario. In our framework, a Graph Neural Network (GNN)
serves as an efficient and differentiable agent for conventional Finite Element
Analysis (FEA) simulators by predicting the Maximum Interstory Drift Ratio
(MIDR) under fire, which then guides the training and evaluation of the MFSP
predictor. Additionally, we enhance our framework with a novel edge update
mechanism and a transfer learning-based training scheme. Evaluations on a
large-scale simulation dataset demonstrate the good performance of the proposed
framework in identifying the MFSP, offering a transformative tool for
optimizing fire safety assessments in structural design. All developed datasets
and codes are open-sourced online.
| no_new_dataset | 0.944177 |
2502.07072 | Sayem Mohammad Imtiaz | Sayem Mohammad Imtiaz, Astha Singh, Fraol Batole, Hridesh Rajan | IRepair: An Intent-Aware Approach to Repair Data-Driven Errors in Large
Language Models | Accepted as full research paper at FSE'2025 | null | null | null | cs.CL cs.AI cs.SE | http://creativecommons.org/licenses/by/4.0/ | Not a day goes by without hearing about the impressive feats of large
language models (LLMs), and equally, not a day passes without hearing about
their challenges. LLMs are notoriously vulnerable to biases in their dataset,
leading to issues such as toxicity. While domain-adaptive training has been
employed to mitigate these issues, these techniques often address all model
parameters indiscriminately during the repair process, resulting in poor repair
quality and reduced model versatility. In this paper, we introduce a novel
dynamic slicing-based intent-aware LLM repair strategy, IRepair. This approach
selectively targets the most error-prone sections of the model for repair.
Specifically, we propose dynamically slicing the model's most sensitive layers
that require immediate attention, concentrating repair efforts on those areas.
This method enables more effective repairs with potentially less impact on the
model's overall performance by altering a smaller portion of the model. We
evaluated our technique on three models from the GPT2 and GPT-Neo families,
with parameters ranging from 800M to 1.6B, in a toxicity mitigation setup. Our
results show that IRepair repairs errors 43.6% more effectively while causing
46% less disruption to general performance compared to the closest baseline,
direct preference optimization. Our empirical analysis also reveals that errors
are more concentrated in a smaller section of the model, with the top 20% of
layers exhibiting 773% more error density than the remaining 80%. This
highlights the need for selective repair. Additionally, we demonstrate that a
dynamic selection approach is essential for addressing errors dispersed
throughout the model, ensuring a robust and efficient repair.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 22:07:02 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Feb 2025 05:14:41 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 17:08:05 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Imtiaz",
"Sayem Mohammad",
""
],
[
"Singh",
"Astha",
""
],
[
"Batole",
"Fraol",
""
],
[
"Rajan",
"Hridesh",
""
]
]
| TITLE: IRepair: An Intent-Aware Approach to Repair Data-Driven Errors in Large
Language Models
ABSTRACT: Not a day goes by without hearing about the impressive feats of large
language models (LLMs), and equally, not a day passes without hearing about
their challenges. LLMs are notoriously vulnerable to biases in their dataset,
leading to issues such as toxicity. While domain-adaptive training has been
employed to mitigate these issues, these techniques often address all model
parameters indiscriminately during the repair process, resulting in poor repair
quality and reduced model versatility. In this paper, we introduce a novel
dynamic slicing-based intent-aware LLM repair strategy, IRepair. This approach
selectively targets the most error-prone sections of the model for repair.
Specifically, we propose dynamically slicing the model's most sensitive layers
that require immediate attention, concentrating repair efforts on those areas.
This method enables more effective repairs with potentially less impact on the
model's overall performance by altering a smaller portion of the model. We
evaluated our technique on three models from the GPT2 and GPT-Neo families,
with parameters ranging from 800M to 1.6B, in a toxicity mitigation setup. Our
results show that IRepair repairs errors 43.6% more effectively while causing
46% less disruption to general performance compared to the closest baseline,
direct preference optimization. Our empirical analysis also reveals that errors
are more concentrated in a smaller section of the model, with the top 20% of
layers exhibiting 773% more error density than the remaining 80%. This
highlights the need for selective repair. Additionally, we demonstrate that a
dynamic selection approach is essential for addressing errors dispersed
throughout the model, ensuring a robust and efficient repair.
| no_new_dataset | 0.946547 |
2502.07302 | Ruining Deng | Ruining Deng, Yihe Yang, David J. Pisapia, Benjamin Liechty, Junchao
Zhu, Juming Xiong, Junlin Guo, Zhengyi Lu, Jiacheng Wang, Xing Yao, Runxuan
Yu, Rendong Zhang, Gaurav Rudravaram, Mengmeng Yin, Pinaki Sarder, Haichun
Yang, Yuankai Huo, Mert R. Sabuncu | CASC-AI: Consensus-aware Self-corrective Learning for Noise Cell
Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-class cell segmentation in high-resolution gigapixel whole slide images
(WSIs) is crucial for various clinical applications. However, training such
models typically requires labor-intensive, pixel-wise annotations by domain
experts. Recent efforts have democratized this process by involving lay
annotators without medical expertise. However, conventional non-corrective
approaches struggle to handle annotation noise adaptively because they lack
mechanisms to mitigate false positives (FP) and false negatives (FN) at both
the image-feature and pixel levels. In this paper, we propose a consensus-aware
self-corrective AI agent that leverages the Consensus Matrix to guide its
learning process. The Consensus Matrix defines regions where both the AI and
annotators agree on cell and non-cell annotations, which are prioritized with
stronger supervision. Conversely, areas of disagreement are adaptively weighted
based on their feature similarity to high-confidence consensus regions, with
more similar regions receiving greater attention. Additionally, contrastive
learning is employed to separate features of noisy regions from those of
reliable consensus regions by maximizing their dissimilarity. This paradigm
enables the model to iteratively refine noisy labels, enhancing its robustness.
Validated on one real-world lay-annotated cell dataset and two reasoning-guided
simulated noisy datasets, our method demonstrates improved segmentation
performance, effectively correcting FP and FN errors and showcasing its
potential for training robust models on noisy datasets. The official
implementation and cell annotations are publicly available at
https://github.com/ddrrnn123/CASC-AI.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 06:58:50 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 20:58:06 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Deng",
"Ruining",
""
],
[
"Yang",
"Yihe",
""
],
[
"Pisapia",
"David J.",
""
],
[
"Liechty",
"Benjamin",
""
],
[
"Zhu",
"Junchao",
""
],
[
"Xiong",
"Juming",
""
],
[
"Guo",
"Junlin",
""
],
[
"Lu",
"Zhengyi",
""
],
[
"Wang",
"Jiacheng",
""
],
[
"Yao",
"Xing",
""
],
[
"Yu",
"Runxuan",
""
],
[
"Zhang",
"Rendong",
""
],
[
"Rudravaram",
"Gaurav",
""
],
[
"Yin",
"Mengmeng",
""
],
[
"Sarder",
"Pinaki",
""
],
[
"Yang",
"Haichun",
""
],
[
"Huo",
"Yuankai",
""
],
[
"Sabuncu",
"Mert R.",
""
]
]
| TITLE: CASC-AI: Consensus-aware Self-corrective Learning for Noise Cell
Segmentation
ABSTRACT: Multi-class cell segmentation in high-resolution gigapixel whole slide images
(WSIs) is crucial for various clinical applications. However, training such
models typically requires labor-intensive, pixel-wise annotations by domain
experts. Recent efforts have democratized this process by involving lay
annotators without medical expertise. However, conventional non-corrective
approaches struggle to handle annotation noise adaptively because they lack
mechanisms to mitigate false positives (FP) and false negatives (FN) at both
the image-feature and pixel levels. In this paper, we propose a consensus-aware
self-corrective AI agent that leverages the Consensus Matrix to guide its
learning process. The Consensus Matrix defines regions where both the AI and
annotators agree on cell and non-cell annotations, which are prioritized with
stronger supervision. Conversely, areas of disagreement are adaptively weighted
based on their feature similarity to high-confidence consensus regions, with
more similar regions receiving greater attention. Additionally, contrastive
learning is employed to separate features of noisy regions from those of
reliable consensus regions by maximizing their dissimilarity. This paradigm
enables the model to iteratively refine noisy labels, enhancing its robustness.
Validated on one real-world lay-annotated cell dataset and two reasoning-guided
simulated noisy datasets, our method demonstrates improved segmentation
performance, effectively correcting FP and FN errors and showcasing its
potential for training robust models on noisy datasets. The official
implementation and cell annotations are publicly available at
https://github.com/ddrrnn123/CASC-AI.
| no_new_dataset | 0.954478 |
2502.10720 | Shutong Zhang | Shutong Zhang | NPSim: Nighttime Photorealistic Simulation From Daytime Images With
Monocular Inverse Rendering and Ray Tracing | null | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation is an important task for autonomous driving. A powerful
autonomous driving system should be capable of handling images under all
conditions, including nighttime. Generating accurate and diverse nighttime
semantic segmentation datasets is crucial for enhancing the performance of
computer vision algorithms in low-light conditions. In this thesis, we
introduce a novel approach named NPSim, which enables the simulation of
realistic nighttime images from real daytime counterparts with monocular
inverse rendering and ray tracing. NPSim comprises two key components: mesh
reconstruction and relighting. The mesh reconstruction component generates an
accurate representation of the scene structure by combining geometric
information extracted from the input RGB image and semantic information from
its corresponding semantic labels. The relighting component integrates
real-world nighttime light sources and material characteristics to simulate the
complex interplay of light and object surfaces under low-light conditions. The
scope of this thesis mainly focuses on the implementation and evaluation of the
mesh reconstruction component. Through experiments, we demonstrate the
effectiveness of the mesh reconstruction component in producing high-quality
scene meshes and their generality across different autonomous driving datasets.
We also propose a detailed experiment plan for evaluating the entire pipeline,
including both quantitative metrics in training state-of-the-art supervised and
unsupervised semantic segmentation approaches and human perceptual studies,
aiming to indicate the capability of our approach to generate realistic
nighttime images and the value of our dataset in steering future progress in
the field.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2025 08:24:19 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Feb 2025 09:03:48 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 18:47:24 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Shutong",
""
]
]
| TITLE: NPSim: Nighttime Photorealistic Simulation From Daytime Images With
Monocular Inverse Rendering and Ray Tracing
ABSTRACT: Semantic segmentation is an important task for autonomous driving. A powerful
autonomous driving system should be capable of handling images under all
conditions, including nighttime. Generating accurate and diverse nighttime
semantic segmentation datasets is crucial for enhancing the performance of
computer vision algorithms in low-light conditions. In this thesis, we
introduce a novel approach named NPSim, which enables the simulation of
realistic nighttime images from real daytime counterparts with monocular
inverse rendering and ray tracing. NPSim comprises two key components: mesh
reconstruction and relighting. The mesh reconstruction component generates an
accurate representation of the scene structure by combining geometric
information extracted from the input RGB image and semantic information from
its corresponding semantic labels. The relighting component integrates
real-world nighttime light sources and material characteristics to simulate the
complex interplay of light and object surfaces under low-light conditions. The
scope of this thesis mainly focuses on the implementation and evaluation of the
mesh reconstruction component. Through experiments, we demonstrate the
effectiveness of the mesh reconstruction component in producing high-quality
scene meshes and their generality across different autonomous driving datasets.
We also propose a detailed experiment plan for evaluating the entire pipeline,
including both quantitative metrics in training state-of-the-art supervised and
unsupervised semantic segmentation approaches and human perceptual studies,
aiming to indicate the capability of our approach to generate realistic
nighttime images and the value of our dataset in steering future progress in
the field.
| no_new_dataset | 0.94428 |
2502.10776 | Zhipeng Liu | Zhipeng Liu, Peibo Duan, Mingyang Geng, Bin Zhang | A Distillation-based Future-aware Graph Neural Network for Stock Trend
Prediction | null | null | 10.1109/ICASSP49660.2025.10889901 | null | cs.LG cs.AI q-fin.PM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stock trend prediction involves forecasting the future price movements by
analyzing historical data and various market indicators. With the advancement
of machine learning, graph neural networks (GNNs) have been extensively
employed in stock prediction due to their powerful capability to capture
spatiotemporal dependencies of stocks. However, despite the efforts of various
GNN stock predictors to enhance predictive performance, the improvements remain
limited, as they focus solely on analyzing historical spatiotemporal
dependencies, overlooking the correlation between historical and future
patterns. In this study, we propose a novel distillation-based future-aware GNN
framework (DishFT-GNN) for stock trend prediction. Specifically, DishFT-GNN
trains a teacher model and a student model, iteratively. The teacher model
learns to capture the correlation between distribution shifts of historical and
future data, which is then utilized as intermediate supervision to guide the
student model to learn future-aware spatiotemporal embeddings for accurate
prediction. Through extensive experiments on two real-world datasets, we verify
the state-of-the-art performance of DishFT-GNN.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2025 11:44:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Liu",
"Zhipeng",
""
],
[
"Duan",
"Peibo",
""
],
[
"Geng",
"Mingyang",
""
],
[
"Zhang",
"Bin",
""
]
]
| TITLE: A Distillation-based Future-aware Graph Neural Network for Stock Trend
Prediction
ABSTRACT: Stock trend prediction involves forecasting the future price movements by
analyzing historical data and various market indicators. With the advancement
of machine learning, graph neural networks (GNNs) have been extensively
employed in stock prediction due to their powerful capability to capture
spatiotemporal dependencies of stocks. However, despite the efforts of various
GNN stock predictors to enhance predictive performance, the improvements remain
limited, as they focus solely on analyzing historical spatiotemporal
dependencies, overlooking the correlation between historical and future
patterns. In this study, we propose a novel distillation-based future-aware GNN
framework (DishFT-GNN) for stock trend prediction. Specifically, DishFT-GNN
trains a teacher model and a student model, iteratively. The teacher model
learns to capture the correlation between distribution shifts of historical and
future data, which is then utilized as intermediate supervision to guide the
student model to learn future-aware spatiotemporal embeddings for accurate
prediction. Through extensive experiments on two real-world datasets, we verify
the state-of-the-art performance of DishFT-GNN.
| no_new_dataset | 0.948155 |
2502.12371 | Krishan Rana Dr | Krishan Rana, Robert Lee, David Pershouse, Niko Suenderhauf | IMLE Policy: Fast and Sample Efficient Visuomotor Policy Learning via
Implicit Maximum Likelihood Estimation | Videos and code are available at https://imle-policy.github.io/ | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent advances in imitation learning, particularly using generative
modelling techniques like diffusion, have enabled policies to capture complex
multi-modal action distributions. However, these methods often require large
datasets and multiple inference steps for action generation, posing challenges
in robotics where the cost for data collection is high and computation
resources are limited. To address this, we introduce IMLE Policy, a novel
behaviour cloning approach based on Implicit Maximum Likelihood Estimation
(IMLE). IMLE Policy excels in low-data regimes, effectively learning from
minimal demonstrations and requiring 38\% less data on average to match the
performance of baseline methods in learning complex multi-modal behaviours. Its
simple generator-based architecture enables single-step action generation,
improving inference speed by 97.3\% compared to Diffusion Policy, while
outperforming single-step Flow Matching. We validate our approach across
diverse manipulation tasks in simulated and real-world environments, showcasing
its ability to capture complex behaviours under data constraints. Videos and
code are provided on our project page: https://imle-policy.github.io/.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 23:22:49 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 00:38:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Rana",
"Krishan",
""
],
[
"Lee",
"Robert",
""
],
[
"Pershouse",
"David",
""
],
[
"Suenderhauf",
"Niko",
""
]
]
| TITLE: IMLE Policy: Fast and Sample Efficient Visuomotor Policy Learning via
Implicit Maximum Likelihood Estimation
ABSTRACT: Recent advances in imitation learning, particularly using generative
modelling techniques like diffusion, have enabled policies to capture complex
multi-modal action distributions. However, these methods often require large
datasets and multiple inference steps for action generation, posing challenges
in robotics where the cost for data collection is high and computation
resources are limited. To address this, we introduce IMLE Policy, a novel
behaviour cloning approach based on Implicit Maximum Likelihood Estimation
(IMLE). IMLE Policy excels in low-data regimes, effectively learning from
minimal demonstrations and requiring 38\% less data on average to match the
performance of baseline methods in learning complex multi-modal behaviours. Its
simple generator-based architecture enables single-step action generation,
improving inference speed by 97.3\% compared to Diffusion Policy, while
outperforming single-step Flow Matching. We validate our approach across
diverse manipulation tasks in simulated and real-world environments, showcasing
its ability to capture complex behaviours under data constraints. Videos and
code are provided on our project page: https://imle-policy.github.io/.
| no_new_dataset | 0.948442 |
2502.12691 | Stanislav Frolov | Timon Winter, Stanislav Frolov, Brian Bernhard Moser, Andreas Dengel | Spherical Dense Text-to-Image Synthesis | Link to project page https://sdt2i.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in text-to-image (T2I) have improved synthesis results,
but challenges remain in layout control and generating omnidirectional
panoramic images. Dense T2I (DT2I) and spherical T2I (ST2I) models address
these issues, but so far no unified approach exists. Trivial approaches, like
prompting a DT2I model to generate panoramas, cannot generate proper spherical
distortions and seamless transitions at the borders. Our work shows that
spherical dense text-to-image (SDT2I) can be achieved by integrating
training-free DT2I approaches into finetuned panorama models. Specifically, we
propose MultiStitchDiffusion (MSTD) and MultiPanFusion (MPF) by integrating
MultiDiffusion into StitchDiffusion and PanFusion, respectively. Since no
benchmark for SDT2I exists, we further construct Dense-Synthetic-View
(DSynView), a new synthetic dataset containing spherical layouts to evaluate
our models. Our results show that MSTD outperforms MPF across image quality as
well as prompt- and layout adherence. MultiPanFusion generates more diverse
images but struggles to synthesize flawless foreground objects. We propose
bootstrap-coupling and turning off equirectangular perspective-projection
attention in the foreground as an improvement of MPF. Link to code
https://github.com/sdt2i/spherical-dense-text-to-image
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 09:51:11 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 13:00:18 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 18:50:41 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Winter",
"Timon",
""
],
[
"Frolov",
"Stanislav",
""
],
[
"Moser",
"Brian Bernhard",
""
],
[
"Dengel",
"Andreas",
""
]
]
| TITLE: Spherical Dense Text-to-Image Synthesis
ABSTRACT: Recent advancements in text-to-image (T2I) have improved synthesis results,
but challenges remain in layout control and generating omnidirectional
panoramic images. Dense T2I (DT2I) and spherical T2I (ST2I) models address
these issues, but so far no unified approach exists. Trivial approaches, like
prompting a DT2I model to generate panoramas, cannot generate proper spherical
distortions and seamless transitions at the borders. Our work shows that
spherical dense text-to-image (SDT2I) can be achieved by integrating
training-free DT2I approaches into finetuned panorama models. Specifically, we
propose MultiStitchDiffusion (MSTD) and MultiPanFusion (MPF) by integrating
MultiDiffusion into StitchDiffusion and PanFusion, respectively. Since no
benchmark for SDT2I exists, we further construct Dense-Synthetic-View
(DSynView), a new synthetic dataset containing spherical layouts to evaluate
our models. Our results show that MSTD outperforms MPF across image quality as
well as prompt- and layout adherence. MultiPanFusion generates more diverse
images but struggles to synthesize flawless foreground objects. We propose
bootstrap-coupling and turning off equirectangular perspective-projection
attention in the foreground as an improvement of MPF. Link to code
https://github.com/sdt2i/spherical-dense-text-to-image
| new_dataset | 0.957755 |
2502.13335 | Ahmad Salimi | Ahmad Salimi, Tristan Aumentado-Armstrong, Marcus A. Brubaker,
Konstantinos G. Derpanis | Geometry-Aware Diffusion Models for Multiview Scene Inpainting | Our project page is available at https://geomvi.github.io | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we focus on 3D scene inpainting, where parts of an input image
set, captured from different viewpoints, are masked out. The main challenge
lies in generating plausible image completions that are geometrically
consistent across views. Most recent work addresses this challenge by combining
generative models with a 3D radiance field to fuse information across a
relatively dense set of viewpoints. However, a major drawback of these methods
is that they often produce blurry images due to the fusion of inconsistent
cross-view images. To avoid blurry inpaintings, we eschew the use of an
explicit or implicit radiance field altogether and instead fuse cross-view
information in a learned space. In particular, we introduce a geometry-aware
conditional generative model, capable of multi-view consistent inpainting using
reference-based geometric and appearance cues. A key advantage of our approach
over existing methods is its unique ability to inpaint masked scenes with a
limited number of views (i.e., few-view inpainting), whereas previous methods
require relatively large image sets for their 3D model fitting step.
Empirically, we evaluate and compare our scene-centric inpainting method on two
datasets, SPIn-NeRF and NeRFiller, which contain images captured at narrow and
wide baselines, respectively, and achieve state-of-the-art 3D inpainting
performance on both. Additionally, we demonstrate the efficacy of our approach
in the few-view setting compared to prior methods.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 23:30:10 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 19:26:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Salimi",
"Ahmad",
""
],
[
"Aumentado-Armstrong",
"Tristan",
""
],
[
"Brubaker",
"Marcus A.",
""
],
[
"Derpanis",
"Konstantinos G.",
""
]
]
| TITLE: Geometry-Aware Diffusion Models for Multiview Scene Inpainting
ABSTRACT: In this paper, we focus on 3D scene inpainting, where parts of an input image
set, captured from different viewpoints, are masked out. The main challenge
lies in generating plausible image completions that are geometrically
consistent across views. Most recent work addresses this challenge by combining
generative models with a 3D radiance field to fuse information across a
relatively dense set of viewpoints. However, a major drawback of these methods
is that they often produce blurry images due to the fusion of inconsistent
cross-view images. To avoid blurry inpaintings, we eschew the use of an
explicit or implicit radiance field altogether and instead fuse cross-view
information in a learned space. In particular, we introduce a geometry-aware
conditional generative model, capable of multi-view consistent inpainting using
reference-based geometric and appearance cues. A key advantage of our approach
over existing methods is its unique ability to inpaint masked scenes with a
limited number of views (i.e., few-view inpainting), whereas previous methods
require relatively large image sets for their 3D model fitting step.
Empirically, we evaluate and compare our scene-centric inpainting method on two
datasets, SPIn-NeRF and NeRFiller, which contain images captured at narrow and
wide baselines, respectively, and achieve state-of-the-art 3D inpainting
performance on both. Additionally, we demonstrate the efficacy of our approach
in the few-view setting compared to prior methods.
| no_new_dataset | 0.947624 |
2502.14856 | Weilin Zhao | Weilin Zhao, Tengyu Pan, Xu Han, Yudi Zhang, Ao Sun, Yuxiang Huang,
Kaihuo Zhang, Weilun Zhao, Yuxuan Li, Jianyong Wang, Zhiyuan Liu, Maosong Sun | FR-Spec: Accelerating Large-Vocabulary Language Models via
Frequency-Ranked Speculative Sampling | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speculative sampling has emerged as an important technique for accelerating
the auto-regressive generation process of large language models (LLMs) by
utilizing a draft-then-verify mechanism to produce multiple tokens per forward
pass. While state-of-the-art speculative sampling methods use only a single
layer and a language modeling (LM) head as the draft model to achieve
impressive layer compression, their efficiency gains are substantially reduced
for large-vocabulary LLMs, such as Llama-3-8B with a vocabulary of 128k tokens.
To address this, we present FR-Spec, a frequency-ranked speculative sampling
framework that optimizes draft candidate selection through vocabulary space
compression. By constraining the draft search to a frequency-prioritized token
subset, our method reduces LM Head computation overhead by 75% while ensuring
the equivalence of the final output distribution. Experiments across multiple
datasets demonstrate an average of 1.12$\times$ speedup over the
state-of-the-art speculative sampling method EAGLE-2. Code available at
https://github.com/thunlp/FR-Spec.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 18:58:10 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 08:54:55 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhao",
"Weilin",
""
],
[
"Pan",
"Tengyu",
""
],
[
"Han",
"Xu",
""
],
[
"Zhang",
"Yudi",
""
],
[
"Sun",
"Ao",
""
],
[
"Huang",
"Yuxiang",
""
],
[
"Zhang",
"Kaihuo",
""
],
[
"Zhao",
"Weilun",
""
],
[
"Li",
"Yuxuan",
""
],
[
"Wang",
"Jianyong",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
]
]
| TITLE: FR-Spec: Accelerating Large-Vocabulary Language Models via
Frequency-Ranked Speculative Sampling
ABSTRACT: Speculative sampling has emerged as an important technique for accelerating
the auto-regressive generation process of large language models (LLMs) by
utilizing a draft-then-verify mechanism to produce multiple tokens per forward
pass. While state-of-the-art speculative sampling methods use only a single
layer and a language modeling (LM) head as the draft model to achieve
impressive layer compression, their efficiency gains are substantially reduced
for large-vocabulary LLMs, such as Llama-3-8B with a vocabulary of 128k tokens.
To address this, we present FR-Spec, a frequency-ranked speculative sampling
framework that optimizes draft candidate selection through vocabulary space
compression. By constraining the draft search to a frequency-prioritized token
subset, our method reduces LM Head computation overhead by 75% while ensuring
the equivalence of the final output distribution. Experiments across multiple
datasets demonstrate an average of 1.12$\times$ speedup over the
state-of-the-art speculative sampling method EAGLE-2. Code available at
https://github.com/thunlp/FR-Spec.
| no_new_dataset | 0.945901 |
2502.15488 | Changyong Shu | Jiangyong Yu, Changyong Shu, Dawei Yang, Sifan Zhou, Zichen Yu, Xing
Hu, Yan Chen | Q-PETR: Quant-aware Position Embedding Transformation for Multi-View 3D
Object Detection | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Camera-based multi-view 3D detection has emerged as an attractive solution
for autonomous driving due to its low cost and broad applicability. However,
despite the strong performance of PETR-based methods in 3D perception
benchmarks, their direct INT8 quantization for onboard deployment leads to
drastic accuracy drops of up to 58.2% in mAP and 36.9% in NDS on the NuScenes
dataset. In this work, we propose Q-PETR, a quantization-aware position
embedding transformation that re-engineers key components of the PETR framework
to reconcile the discrepancy between the dynamic ranges of positional encodings
and image features, and to adapt the cross-attention mechanism for low-bit
inference. By redesigning the positional encoding module and introducing an
adaptive quantization strategy, Q-PETR maintains floating-point performance
with a performance degradation of less than 1% under standard 8-bit per-tensor
post-training quantization. Moreover, compared to its FP32 counterpart, Q-PETR
achieves a two-fold speedup and reduces memory usage by three times, thereby
offering a deployment-friendly solution for resource-constrained onboard
devices. Extensive experiments across various PETR-series models validate the
strong generalization and practical benefits of our approach.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 14:26:23 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 15:05:41 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yu",
"Jiangyong",
""
],
[
"Shu",
"Changyong",
""
],
[
"Yang",
"Dawei",
""
],
[
"Zhou",
"Sifan",
""
],
[
"Yu",
"Zichen",
""
],
[
"Hu",
"Xing",
""
],
[
"Chen",
"Yan",
""
]
]
| TITLE: Q-PETR: Quant-aware Position Embedding Transformation for Multi-View 3D
Object Detection
ABSTRACT: Camera-based multi-view 3D detection has emerged as an attractive solution
for autonomous driving due to its low cost and broad applicability. However,
despite the strong performance of PETR-based methods in 3D perception
benchmarks, their direct INT8 quantization for onboard deployment leads to
drastic accuracy drops of up to 58.2% in mAP and 36.9% in NDS on the NuScenes
dataset. In this work, we propose Q-PETR, a quantization-aware position
embedding transformation that re-engineers key components of the PETR framework
to reconcile the discrepancy between the dynamic ranges of positional encodings
and image features, and to adapt the cross-attention mechanism for low-bit
inference. By redesigning the positional encoding module and introducing an
adaptive quantization strategy, Q-PETR maintains floating-point performance
with a performance degradation of less than 1% under standard 8-bit per-tensor
post-training quantization. Moreover, compared to its FP32 counterpart, Q-PETR
achieves a two-fold speedup and reduces memory usage by three times, thereby
offering a deployment-friendly solution for resource-constrained onboard
devices. Extensive experiments across various PETR-series models validate the
strong generalization and practical benefits of our approach.
| no_new_dataset | 0.950227 |
2502.19170 | Luzius Moll | Emanuele Mengoli, Luzius Moll, Virgilio Strozzi, El-Mahdi El-Mhamdi | On the Byzantine Fault Tolerance of signSGD with Majority Vote | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In distributed learning, sign-based compression algorithms such as signSGD
with majority vote provide a lightweight alternative to SGD with an additional
advantage: fault tolerance (almost) for free. However, for signSGD with
majority vote, this fault tolerance has been shown to cover only the case of
weaker adversaries, i.e., ones that are not omniscient or cannot collude to
base their attack on common knowledge and strategy. In this work, we close this
gap and provide new insights into how signSGD with majority vote can be
resilient against omniscient and colluding adversaries, which craft an attack
after communicating with other adversaries, thus having better information to
perform the most damaging attack based on a common optimal strategy. Our core
contribution is in providing a proof that begins by defining the omniscience
framework and the strongest possible damage against signSGD with majority vote
without imposing any restrictions on the attacker. Thanks to the filtering
effect of the sign-based method, we upper-bound the space of attacks to the
optimal strategy for maximizing damage by an attacker. Hence, we derive an
explicit probabilistic bound in terms of incorrect aggregation without
resorting to unknown constants, providing a convergence bound on signSGD with
majority vote in the presence of Byzantine attackers, along with a precise
convergence rate. Our findings are supported by experiments on the MNIST
dataset in a distributed learning environment with adversaries of varying
strength.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 14:26:33 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 18:46:52 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Mengoli",
"Emanuele",
""
],
[
"Moll",
"Luzius",
""
],
[
"Strozzi",
"Virgilio",
""
],
[
"El-Mhamdi",
"El-Mahdi",
""
]
]
| TITLE: On the Byzantine Fault Tolerance of signSGD with Majority Vote
ABSTRACT: In distributed learning, sign-based compression algorithms such as signSGD
with majority vote provide a lightweight alternative to SGD with an additional
advantage: fault tolerance (almost) for free. However, for signSGD with
majority vote, this fault tolerance has been shown to cover only the case of
weaker adversaries, i.e., ones that are not omniscient or cannot collude to
base their attack on common knowledge and strategy. In this work, we close this
gap and provide new insights into how signSGD with majority vote can be
resilient against omniscient and colluding adversaries, which craft an attack
after communicating with other adversaries, thus having better information to
perform the most damaging attack based on a common optimal strategy. Our core
contribution is in providing a proof that begins by defining the omniscience
framework and the strongest possible damage against signSGD with majority vote
without imposing any restrictions on the attacker. Thanks to the filtering
effect of the sign-based method, we upper-bound the space of attacks to the
optimal strategy for maximizing damage by an attacker. Hence, we derive an
explicit probabilistic bound in terms of incorrect aggregation without
resorting to unknown constants, providing a convergence bound on signSGD with
majority vote in the presence of Byzantine attackers, along with a precise
convergence rate. Our findings are supported by experiments on the MNIST
dataset in a distributed learning environment with adversaries of varying
strength.
| no_new_dataset | 0.9463 |
2502.19902 | Zaijing Li | Zaijing Li, Yuquan Xie, Rui Shao, Gongwei Chen, Dongmei Jiang, Liqiang
Nie | Optimus-2: Multimodal Minecraft Agent with Goal-Observation-Action
Conditioned Policy | Accepted to CVPR 2025, Project page:
https://cybertronagent.github.io/Optimus-2.github.io/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building an agent that can mimic human behavior patterns to accomplish
various open-world tasks is a long-term goal. To enable agents to effectively
learn behavioral patterns across diverse tasks, a key challenge lies in
modeling the intricate relationships among observations, actions, and language.
To this end, we propose Optimus-2, a novel Minecraft agent that incorporates a
Multimodal Large Language Model (MLLM) for high-level planning, alongside a
Goal-Observation-Action Conditioned Policy (GOAP) for low-level control. GOAP
contains (1) an Action-guided Behavior Encoder that models causal relationships
between observations and actions at each timestep, then dynamically interacts
with the historical observation-action sequence, consolidating it into
fixed-length behavior tokens, and (2) an MLLM that aligns behavior tokens with
open-ended language instructions to predict actions auto-regressively.
Moreover, we introduce a high-quality Minecraft Goal-Observation-Action (MGOA)
dataset, which contains 25,000 videos across 8 atomic tasks, providing about
30M goal-observation-action pairs. The automated construction method, along
with the MGOA dataset, can contribute to the community's efforts to train
Minecraft agents. Extensive experimental results demonstrate that Optimus-2
exhibits superior performance across atomic tasks, long-horizon tasks, and
open-ended instruction tasks in Minecraft. Please see the project page at
https://cybertronagent.github.io/Optimus-2.github.io/.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 09:18:04 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 07:51:05 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Li",
"Zaijing",
""
],
[
"Xie",
"Yuquan",
""
],
[
"Shao",
"Rui",
""
],
[
"Chen",
"Gongwei",
""
],
[
"Jiang",
"Dongmei",
""
],
[
"Nie",
"Liqiang",
""
]
]
| TITLE: Optimus-2: Multimodal Minecraft Agent with Goal-Observation-Action
Conditioned Policy
ABSTRACT: Building an agent that can mimic human behavior patterns to accomplish
various open-world tasks is a long-term goal. To enable agents to effectively
learn behavioral patterns across diverse tasks, a key challenge lies in
modeling the intricate relationships among observations, actions, and language.
To this end, we propose Optimus-2, a novel Minecraft agent that incorporates a
Multimodal Large Language Model (MLLM) for high-level planning, alongside a
Goal-Observation-Action Conditioned Policy (GOAP) for low-level control. GOAP
contains (1) an Action-guided Behavior Encoder that models causal relationships
between observations and actions at each timestep, then dynamically interacts
with the historical observation-action sequence, consolidating it into
fixed-length behavior tokens, and (2) an MLLM that aligns behavior tokens with
open-ended language instructions to predict actions auto-regressively.
Moreover, we introduce a high-quality Minecraft Goal-Observation-Action (MGOA)
dataset, which contains 25,000 videos across 8 atomic tasks, providing about
30M goal-observation-action pairs. The automated construction method, along
with the MGOA dataset, can contribute to the community's efforts to train
Minecraft agents. Extensive experimental results demonstrate that Optimus-2
exhibits superior performance across atomic tasks, long-horizon tasks, and
open-ended instruction tasks in Minecraft. Please see the project page at
https://cybertronagent.github.io/Optimus-2.github.io/.
| new_dataset | 0.973241 |
2503.00852 | Shubham Gupta | Sidharth Agarwal, Tanishq Dubey, Shubham Gupta, Srikanta Bedathur | A Transfer Framework for Enhancing Temporal Graph Learning in
Data-Scarce Settings | null | null | null | null | cs.LG cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic interactions between entities are prevalent in domains like social
platforms, financial systems, healthcare, and e-commerce. These interactions
can be effectively represented as time-evolving graphs, where predicting future
connections is a key task in applications such as recommendation systems.
Temporal Graph Neural Networks (TGNNs) have achieved strong results for such
predictive tasks but typically require extensive training data, which is often
limited in real-world scenarios. One approach to mitigating data scarcity is
leveraging pre-trained models from related datasets. However, direct knowledge
transfer between TGNNs is challenging due to their reliance on node-specific
memory structures, making them inherently difficult to adapt across datasets.
To address this, we introduce a novel transfer approach that disentangles
node representations from their associated features through a structured
bipartite encoding mechanism. This decoupling enables more effective transfer
of memory components and other learned inductive patterns from one dataset to
another. Empirical evaluations on real-world benchmarks demonstrate that our
method significantly enhances TGNN performance in low-data regimes,
outperforming non-transfer baselines by up to 56\% and surpassing existing
transfer strategies by 36\%.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 11:10:29 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 05:03:25 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Agarwal",
"Sidharth",
""
],
[
"Dubey",
"Tanishq",
""
],
[
"Gupta",
"Shubham",
""
],
[
"Bedathur",
"Srikanta",
""
]
]
| TITLE: A Transfer Framework for Enhancing Temporal Graph Learning in
Data-Scarce Settings
ABSTRACT: Dynamic interactions between entities are prevalent in domains like social
platforms, financial systems, healthcare, and e-commerce. These interactions
can be effectively represented as time-evolving graphs, where predicting future
connections is a key task in applications such as recommendation systems.
Temporal Graph Neural Networks (TGNNs) have achieved strong results for such
predictive tasks but typically require extensive training data, which is often
limited in real-world scenarios. One approach to mitigating data scarcity is
leveraging pre-trained models from related datasets. However, direct knowledge
transfer between TGNNs is challenging due to their reliance on node-specific
memory structures, making them inherently difficult to adapt across datasets.
To address this, we introduce a novel transfer approach that disentangles
node representations from their associated features through a structured
bipartite encoding mechanism. This decoupling enables more effective transfer
of memory components and other learned inductive patterns from one dataset to
another. Empirical evaluations on real-world benchmarks demonstrate that our
method significantly enhances TGNN performance in low-data regimes,
outperforming non-transfer baselines by up to 56\% and surpassing existing
transfer strategies by 36\%.
| no_new_dataset | 0.948058 |
2503.01261 | Guotao Liang | Guotao Liang, Baoquan Zhang, Zhiyuan Wen, Junteng Zhao, Yunming Ye,
Kola Ye, Yao He | Towards Improved Text-Aligned Codebook Learning: Multi-Hierarchical
Codebook-Text Alignment with Long Text | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image quantization is a crucial technique in image generation, aimed at
learning a codebook that encodes an image into a discrete token sequence.
Recent advancements have seen researchers exploring learning multi-modal
codebook (i.e., text-aligned codebook) by utilizing image caption semantics,
aiming to enhance codebook performance in cross-modal tasks. However, existing
image-text paired datasets exhibit a notable flaw in that the text descriptions
tend to be overly concise, failing to adequately describe the images and
provide sufficient semantic knowledge, resulting in limited alignment of text
and codebook at a fine-grained level. In this paper, we propose a novel
Text-Augmented Codebook Learning framework, named TA-VQ, which generates longer
text for each image using the visual-language model for improved text-aligned
codebook learning. However, the long text presents two key challenges: how to
encode text and how to align codebook and text. To tackle two challenges, we
propose to split the long text into multiple granularities for encoding, i.e.,
word, phrase, and sentence, so that the long text can be fully encoded without
losing any key semantic knowledge. Following this, a hierarchical encoder and
novel sampling-based alignment strategy are designed to achieve fine-grained
codebook-text alignment. Additionally, our method can be seamlessly integrated
into existing VQ models. Extensive experiments in reconstruction and various
downstream tasks demonstrate its effectiveness compared to previous
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 07:38:18 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 06:09:18 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Liang",
"Guotao",
""
],
[
"Zhang",
"Baoquan",
""
],
[
"Wen",
"Zhiyuan",
""
],
[
"Zhao",
"Junteng",
""
],
[
"Ye",
"Yunming",
""
],
[
"Ye",
"Kola",
""
],
[
"He",
"Yao",
""
]
]
| TITLE: Towards Improved Text-Aligned Codebook Learning: Multi-Hierarchical
Codebook-Text Alignment with Long Text
ABSTRACT: Image quantization is a crucial technique in image generation, aimed at
learning a codebook that encodes an image into a discrete token sequence.
Recent advancements have seen researchers exploring learning multi-modal
codebook (i.e., text-aligned codebook) by utilizing image caption semantics,
aiming to enhance codebook performance in cross-modal tasks. However, existing
image-text paired datasets exhibit a notable flaw in that the text descriptions
tend to be overly concise, failing to adequately describe the images and
provide sufficient semantic knowledge, resulting in limited alignment of text
and codebook at a fine-grained level. In this paper, we propose a novel
Text-Augmented Codebook Learning framework, named TA-VQ, which generates longer
text for each image using the visual-language model for improved text-aligned
codebook learning. However, the long text presents two key challenges: how to
encode text and how to align codebook and text. To tackle two challenges, we
propose to split the long text into multiple granularities for encoding, i.e.,
word, phrase, and sentence, so that the long text can be fully encoded without
losing any key semantic knowledge. Following this, a hierarchical encoder and
novel sampling-based alignment strategy are designed to achieve fine-grained
codebook-text alignment. Additionally, our method can be seamlessly integrated
into existing VQ models. Extensive experiments in reconstruction and various
downstream tasks demonstrate its effectiveness compared to previous
state-of-the-art approaches.
| no_new_dataset | 0.9462 |
2503.01905 | Sunghyeon Woo | Sunghyeon Woo, Sol Namkung, Sunwoo Lee, Inho Jeong, Beomseok Kim,
Dongsuk Jeon | PaCA: Partial Connection Adaptation for Efficient Fine-Tuning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Prior parameter-efficient fine-tuning (PEFT) algorithms reduce memory usage
and computational costs of fine-tuning large neural network models by training
only a few additional adapter parameters, rather than the entire model.
However, the reduction in computational costs due to PEFT does not necessarily
translate to a reduction in training time; although the computational costs of
the adapter layers are much smaller than the pretrained layers, it is well
known that those two types of layers are processed sequentially on GPUs,
resulting in significant latency overhead. LoRA and its variants merge low-rank
adapter matrices with pretrained weights during inference to avoid latency
overhead, but during training, the pretrained weights remain frozen while the
adapter matrices are continuously updated, preventing such merging. To mitigate
this issue, we propose Partial Connection Adaptation (PaCA), which fine-tunes
randomly selected partial connections within the pretrained weights instead of
introducing adapter layers in the model. PaCA not only enhances training speed
by eliminating the time overhead due to the sequential processing of the
adapter and pretrained layers but also reduces activation memory since only
partial activations, rather than full activations, need to be stored for
gradient computation. Compared to LoRA, PaCA reduces training time by 22% and
total memory usage by 16%, while maintaining comparable accuracy across various
fine-tuning scenarios, such as fine-tuning on the MMLU dataset and instruction
tuning on the Oasst1 dataset. PaCA can also be combined with quantization,
enabling the fine-tuning of large models such as LLaMA3.1-70B. In addition,
PaCA enables training with 23% longer sequences and improves throughput by 16%
on both NVIDIA A100 GPU and INTEL Gaudi2 HPU compared to LoRA. The code is
available at https://github.com/WooSunghyeon/paca.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:30:10 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 15:24:13 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Woo",
"Sunghyeon",
""
],
[
"Namkung",
"Sol",
""
],
[
"Lee",
"Sunwoo",
""
],
[
"Jeong",
"Inho",
""
],
[
"Kim",
"Beomseok",
""
],
[
"Jeon",
"Dongsuk",
""
]
]
| TITLE: PaCA: Partial Connection Adaptation for Efficient Fine-Tuning
ABSTRACT: Prior parameter-efficient fine-tuning (PEFT) algorithms reduce memory usage
and computational costs of fine-tuning large neural network models by training
only a few additional adapter parameters, rather than the entire model.
However, the reduction in computational costs due to PEFT does not necessarily
translate to a reduction in training time; although the computational costs of
the adapter layers are much smaller than the pretrained layers, it is well
known that those two types of layers are processed sequentially on GPUs,
resulting in significant latency overhead. LoRA and its variants merge low-rank
adapter matrices with pretrained weights during inference to avoid latency
overhead, but during training, the pretrained weights remain frozen while the
adapter matrices are continuously updated, preventing such merging. To mitigate
this issue, we propose Partial Connection Adaptation (PaCA), which fine-tunes
randomly selected partial connections within the pretrained weights instead of
introducing adapter layers in the model. PaCA not only enhances training speed
by eliminating the time overhead due to the sequential processing of the
adapter and pretrained layers but also reduces activation memory since only
partial activations, rather than full activations, need to be stored for
gradient computation. Compared to LoRA, PaCA reduces training time by 22% and
total memory usage by 16%, while maintaining comparable accuracy across various
fine-tuning scenarios, such as fine-tuning on the MMLU dataset and instruction
tuning on the Oasst1 dataset. PaCA can also be combined with quantization,
enabling the fine-tuning of large models such as LLaMA3.1-70B. In addition,
PaCA enables training with 23% longer sequences and improves throughput by 16%
on both NVIDIA A100 GPU and INTEL Gaudi2 HPU compared to LoRA. The code is
available at https://github.com/WooSunghyeon/paca.
| no_new_dataset | 0.94801 |
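A minimal sketch of the partial-connection idea described in the PaCA abstract above: only a randomly selected subset of entries in a pretrained weight matrix receives gradient updates, implemented here by masking the gradient. The mask ratio, layer size, and training loop are illustrative assumptions rather than the authors' code.

```python
import torch

torch.manual_seed(0)
W = torch.randn(256, 256, requires_grad=True)    # toy stand-in for a pretrained weight
mask = (torch.rand_like(W) < 0.05).float()        # ~5% of connections are trainable

x = torch.randn(32, 256)
target = torch.randn(32, 256)
opt = torch.optim.SGD([W], lr=1e-2)

for _ in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(x @ W, target)
    loss.backward()
    W.grad.mul_(mask)   # only the selected partial connections are updated
    opt.step()
```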
2503.02162 | Jianzhong You | Jianzhong You, Yuan Gao, Sangwook Kim, Chris Mcintosh | X2CT-CLIP: Enable Multi-Abnormality Detection in Computed Tomography
from Chest Radiography via Tri-Modal Contrastive Learning | 11 pages, 1 figure, 5 tables | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computed tomography (CT) is a key imaging modality for diagnosis, yet its
clinical utility is marred by high radiation exposure and long turnaround
times, restricting its use for larger-scale screening. Although chest
radiography (CXR) is more accessible and safer, existing CXR foundation models
focus primarily on detecting diseases that are readily visible on the CXR.
Recently, works have explored training disease classification models on
simulated CXRs, but they remain limited to recognizing a single disease type
from CT. CT foundation models have also emerged with significantly improved
detection of pathologies in CT. However, the generalized application of
CT-derived labels on CXR has remained elusive. In this study, we propose
X2CT-CLIP, a tri-modal knowledge transfer learning framework that bridges the
modality gap between CT and CXR while reducing the computational burden of
model training. Our approach is the first work to enable multi-abnormality
classification in CT, using CXR, by transferring knowledge from 3D CT volumes
and associated radiology reports to a CXR encoder via a carefully designed
tri-modal alignment mechanism in latent space. Extensive evaluations on three
multi-label CT datasets demonstrate that our method outperforms
state-of-the-art baselines in cross-modal retrieval, few-shot adaptation, and
external validation. These results highlight the potential of CXR, enriched
with knowledge derived from CT, as a viable efficient alternative for disease
detection in resource-limited settings.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 00:48:09 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 00:50:53 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"You",
"Jianzhong",
""
],
[
"Gao",
"Yuan",
""
],
[
"Kim",
"Sangwook",
""
],
[
"Mcintosh",
"Chris",
""
]
]
| TITLE: X2CT-CLIP: Enable Multi-Abnormality Detection in Computed Tomography
from Chest Radiography via Tri-Modal Contrastive Learning
ABSTRACT: Computed tomography (CT) is a key imaging modality for diagnosis, yet its
clinical utility is marred by high radiation exposure and long turnaround
times, restricting its use for larger-scale screening. Although chest
radiography (CXR) is more accessible and safer, existing CXR foundation models
focus primarily on detecting diseases that are readily visible on the CXR.
Recently, works have explored training disease classification models on
simulated CXRs, but they remain limited to recognizing a single disease type
from CT. CT foundation models have also emerged with significantly improved
detection of pathologies in CT. However, the generalized application of
CT-derived labels on CXR has remained elusive. In this study, we propose
X2CT-CLIP, a tri-modal knowledge transfer learning framework that bridges the
modality gap between CT and CXR while reducing the computational burden of
model training. Our approach is the first work to enable multi-abnormality
classification in CT, using CXR, by transferring knowledge from 3D CT volumes
and associated radiology reports to a CXR encoder via a carefully designed
tri-modal alignment mechanism in latent space. Extensive evaluations on three
multi-label CT datasets demonstrate that our method outperforms
state-of-the-art baselines in cross-modal retrieval, few-shot adaptation, and
external validation. These results highlight the potential of CXR, enriched
with knowledge derived from CT, as a viable efficient alternative for disease
detection in resource-limited settings.
| no_new_dataset | 0.946051 |
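The tri-modal alignment mechanism in the X2CT-CLIP abstract above builds on CLIP-style contrastive alignment. The sketch below shows only the generic symmetric InfoNCE loss between two batches of paired embeddings, with made-up tensor shapes, not the paper's tri-modal formulation.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric InfoNCE loss aligning two batches of paired embeddings."""
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.t() / temperature                   # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)  # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Toy usage: pretend these come from a CXR encoder and a CT/report encoder.
loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512))
```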
2503.02770 | Michael Mior | Juan Cruz Viotti and Michael J. Mior | Blaze: Compiling JSON Schema for 10x Faster Validation | null | null | null | null | cs.DB cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | JSON Schemas provide useful guardrails for developers of Web APIs to
guarantee that the semi-structured JSON input provided by clients matches a
predefined structure. This is important both to ensure the correctness of the
data received as input and also to avoid potential security issues from
processing input that is not correctly validated. However, this validation
process can be time-consuming and adds overhead to every request. Different
keywords in the JSON Schema specification have complex interactions that may
increase validation time. Since popular APIs may process thousands of requests
per second and schemas change infrequently, we observe that we can resolve some
of the complexity ahead of time in order to achieve faster validation.
Our JSON Schema validator, Blaze, compiles complex schemas to an efficient
representation in seconds to minutes, adding minimal overhead at build time.
Blaze incorporates several unique optimizations to reduce the validation time
by an average of approximately 10x compared to existing validators on a variety of
datasets. In some cases, Blaze achieves a reduction in validation time of
multiple orders of magnitude compared to the next fastest validator. We also
demonstrate that several popular validators produce incorrect results in some
cases, while Blaze maintains strict adherence to the JSON Schema specification.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:35:51 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 15:54:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Viotti",
"Juan Cruz",
""
],
[
"Mior",
"Michael J.",
""
]
]
| TITLE: Blaze: Compiling JSON Schema for 10x Faster Validation
ABSTRACT: JSON Schemas provide useful guardrails for developers of Web APIs to
guarantee that the semi-structured JSON input provided by clients matches a
predefined structure. This is important both to ensure the correctness of the
data received as input and also to avoid potential security issues from
processing input that is not correctly validated. However, this validation
process can be time-consuming and adds overhead to every request. Different
keywords in the JSON Schema specification have complex interactions that may
increase validation time. Since popular APIs may process thousands of requests
per second and schemas change infrequently, we observe that we can resolve some
of the complexity ahead of time in order to achieve faster validation.
Our JSON Schema validator, Blaze, compiles complex schemas to an efficient
representation in seconds to minutes, adding minimal overhead at build time.
Blaze incorporates several unique optimizations to reduce the validation time
by an average of approximately 10x compared to existing validators on a variety of
datasets. In some cases, Blaze achieves a reduction in validation time of
multiple orders of magnitude compared to the next fastest validator. We also
demonstrate that several popular validators produce incorrect results in some
cases, while Blaze maintains strict adherence to the JSON Schema specification.
| no_new_dataset | 0.941654 |
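The compile-once, validate-many pattern that the Blaze abstract above argues for can be demonstrated with the Python `jsonschema` package (Blaze itself is a separate validator; this is only a sketch of the general idea, with a made-up schema and request bodies).

```python
from jsonschema import Draft7Validator

schema = {
    "type": "object",
    "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    "required": ["id", "name"],
}

# Resolve the schema once at startup instead of re-interpreting it per request.
validator = Draft7Validator(schema)

requests = [{"id": 1, "name": "ok"}, {"id": "oops", "name": "bad"}]
for body in requests:
    if not validator.is_valid(body):   # fast path: reuse the prepared validator
        print("rejected:", body)
```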
2503.02783 | Haoling Li | Jie Wu, Haoling Li, Xin Zhang, Jianwen Luo, Yangyu Huang, Ruihang Chu,
Yujiu Yang, Scarlett Li | IterPref: Focal Preference Learning for Code Generation via Iterative
Debugging | The code and data will be released soon | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Preference learning enhances Code LLMs beyond supervised fine-tuning by
leveraging relative quality comparisons. Existing methods construct preference
pairs from candidates based on test case success, treating the higher pass rate sample
as positive and the lower as negative. However, this approach does not pinpoint
specific errors in the code, which prevents the model from learning more
informative error correction patterns, as aligning failing code as a whole
lacks the granularity needed to capture meaningful error-resolution
relationships. To address these issues, we propose IterPref, a new preference
alignment framework that mimics human iterative debugging to refine Code LLMs.
IterPref explicitly locates error regions and aligns the corresponding tokens
via a tailored DPO algorithm. To generate informative pairs, we introduce the
CodeFlow dataset, where samples are iteratively refined until passing tests,
with modifications capturing error corrections. Extensive experiments show that
a diverse suite of Code LLMs equipped with IterPref achieves significant
performance gains in code generation and improves on challenging tasks like
BigCodeBench. In-depth analysis reveals that IterPref yields fewer errors. Our
code and data will be made publicly available.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:56:34 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 18:08:16 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wu",
"Jie",
""
],
[
"Li",
"Haoling",
""
],
[
"Zhang",
"Xin",
""
],
[
"Luo",
"Jianwen",
""
],
[
"Huang",
"Yangyu",
""
],
[
"Chu",
"Ruihang",
""
],
[
"Yang",
"Yujiu",
""
],
[
"Li",
"Scarlett",
""
]
]
| TITLE: IterPref: Focal Preference Learning for Code Generation via Iterative
Debugging
ABSTRACT: Preference learning enhances Code LLMs beyond supervised fine-tuning by
leveraging relative quality comparisons. Existing methods construct preference
pairs from candidates based on test case success, treating the higher pass rate sample
as positive and the lower as negative. However, this approach does not pinpoint
specific errors in the code, which prevents the model from learning more
informative error correction patterns, as aligning failing code as a whole
lacks the granularity needed to capture meaningful error-resolution
relationships. To address these issues, we propose IterPref, a new preference
alignment framework that mimics human iterative debugging to refine Code LLMs.
IterPref explicitly locates error regions and aligns the corresponding tokens
via a tailored DPO algorithm. To generate informative pairs, we introduce the
CodeFlow dataset, where samples are iteratively refined until passing tests,
with modifications capturing error corrections. Extensive experiments show that
a diverse suite of Code LLMs equipped with IterPref achieves significant
performance gains in code generation and improves on challenging tasks like
BigCodeBench. In-depth analysis reveals that IterPref yields fewer errors. Our
code and data will be made publicly available.
| new_dataset | 0.950732 |
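IterPref, described above, aligns preference pairs with a tailored DPO algorithm. The sketch below is the standard DPO objective on sequence log-probabilities of chosen versus rejected code, not IterPref's error-focused variant; the tensor values and beta are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective on sequence log-probs of preferred vs. dispreferred samples."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -F.logsigmoid(logits).mean()

# Toy usage with fake log-probabilities for a batch of 4 preference pairs.
loss = dpo_loss(torch.tensor([-10., -9., -12., -8.]),
                torch.tensor([-11., -10., -13., -9.]),
                torch.tensor([-10.5, -9.5, -12.5, -8.5]),
                torch.tensor([-10.8, -9.8, -12.8, -8.8]))
```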
2503.02800 | Alicia Russell-Gilbert | Alicia Russell-Gilbert, Sudip Mittal, Shahram Rahimi, Maria Seale,
Joseph Jabour, Thomas Arnold, Joshua Church | RAAD-LLM: Adaptive Anomaly Detection Using LLMs and RAG Integration | arXiv admin note: substantial text overlap with arXiv:2411.00914 | null | null | null | cs.LG cs.CE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Anomaly detection in complex industrial environments poses unique challenges,
particularly in contexts characterized by data sparsity and evolving
operational conditions. Predictive maintenance (PdM) in such settings demands
methodologies that are adaptive, transferable, and capable of integrating
domain-specific knowledge. In this paper, we present RAAD-LLM, a novel
framework for adaptive anomaly detection, leveraging large language models
(LLMs) integrated with Retrieval-Augmented Generation (RAG). This approach
addresses the aforementioned PdM challenges. By effectively utilizing
domain-specific knowledge, RAAD-LLM enhances the detection of anomalies in time
series data without requiring fine-tuning on specific datasets. The framework's
adaptability mechanism enables it to adjust its understanding of normal
operating conditions dynamically, thus increasing detection accuracy. We
validate this methodology through a real-world application for a plastics
manufacturing plant and the Skoltech Anomaly Benchmark (SKAB). Results show
significant improvements over our previous model with an accuracy increase from
70.7% to 88.6% on the real-world dataset. By allowing for the enriching of
input series data with semantics, RAAD-LLM incorporates multimodal capabilities
that facilitate more collaborative decision-making between the model and plant
operators. Overall, our findings support RAAD-LLM's ability to revolutionize
anomaly detection methodologies in PdM, potentially leading to a paradigm shift
in how anomaly detection is implemented across various industries.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 17:20:43 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 18:30:45 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 15:47:37 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Russell-Gilbert",
"Alicia",
""
],
[
"Mittal",
"Sudip",
""
],
[
"Rahimi",
"Shahram",
""
],
[
"Seale",
"Maria",
""
],
[
"Jabour",
"Joseph",
""
],
[
"Arnold",
"Thomas",
""
],
[
"Church",
"Joshua",
""
]
]
| TITLE: RAAD-LLM: Adaptive Anomaly Detection Using LLMs and RAG Integration
ABSTRACT: Anomaly detection in complex industrial environments poses unique challenges,
particularly in contexts characterized by data sparsity and evolving
operational conditions. Predictive maintenance (PdM) in such settings demands
methodologies that are adaptive, transferable, and capable of integrating
domain-specific knowledge. In this paper, we present RAAD-LLM, a novel
framework for adaptive anomaly detection, leveraging large language models
(LLMs) integrated with Retrieval-Augmented Generation (RAG). This approach
addresses the aforementioned PdM challenges. By effectively utilizing
domain-specific knowledge, RAAD-LLM enhances the detection of anomalies in time
series data without requiring fine-tuning on specific datasets. The framework's
adaptability mechanism enables it to adjust its understanding of normal
operating conditions dynamically, thus increasing detection accuracy. We
validate this methodology through a real-world application for a plastics
manufacturing plant and the Skoltech Anomaly Benchmark (SKAB). Results show
significant improvements over our previous model with an accuracy increase from
70.7% to 88.6% on the real-world dataset. By allowing for the enriching of
input series data with semantics, RAAD-LLM incorporates multimodal capabilities
that facilitate more collaborative decision-making between the model and plant
operators. Overall, our findings support RAAD-LLM's ability to revolutionize
anomaly detection methodologies in PdM, potentially leading to a paradigm shift
in how anomaly detection is implemented across various industries.
| no_new_dataset | 0.943348 |
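As a rough illustration of the retrieval step in a RAG pipeline like the one described in the RAAD-LLM abstract above, the sketch below retrieves the most similar plant-knowledge snippets for a query embedding and assembles a prompt. The embeddings are random placeholders and the snippet is not the authors' system.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Toy retrieval: cosine similarity between a query embedding and document embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    top = np.argsort(d @ q)[::-1][:k]
    return [docs[i] for i in top]

docs = ["Pump vibration above 5 mm/s indicates bearing wear.",
        "Normal extruder temperature range is 180-220 C.",
        "Line 3 was re-calibrated last week."]
rng = np.random.default_rng(0)
doc_vecs, query_vec = rng.normal(size=(3, 8)), rng.normal(size=8)
context = "\n".join(retrieve(query_vec, doc_vecs, docs))
prompt = f"Given this plant knowledge:\n{context}\n\nIs the following sensor window anomalous? ..."
```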
2503.03953 | Morteza Karimzadeh | Aidan Marler, Yannik Roell, Steffen Knoblauch, Jane P. Messina, Thomas
Jaenisch, Morteza Karimzadeh | GeoDEN: A Visual Exploration Tool for Analysing the Geographic Spread of
Dengue Serotypes | To appear in Computer Graphics Forum (2025) | null | 10.1111/cgf.70087 | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Static maps and animations remain popular in spatial epidemiology of dengue,
limiting the analytical depth and scope of visualisations. Over half of the
global population live in dengue endemic regions. Understanding the
spatiotemporal dynamics of the four closely related dengue serotypes, and their
immunological interactions, remains a challenge at a global scale. To
facilitate this understanding, we worked with dengue epidemiologists in a
user-centered design framework to create GeoDEN, an exploratory visualisation
tool that empowers experts to investigate spatiotemporal patterns in dengue
serotype reports. The tool has several linked visualisations and filtering
mechanisms, enabling analysis at a range of spatial and temporal scales. To
identify successes and failures, we present both insight-based and value-driven
evaluations. Our domain experts found GeoDEN valuable, verifying existing
hypotheses and uncovering novel insights that warrant further investigation by
the epidemiology community. The developed visual exploration approach can be
adapted for exploring other epidemiology and disease incident datasets.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 22:54:38 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Marler",
"Aidan",
""
],
[
"Roell",
"Yannik",
""
],
[
"Knoblauch",
"Steffen",
""
],
[
"Messina",
"Jane P.",
""
],
[
"Jaenisch",
"Thomas",
""
],
[
"Karimzadeh",
"Morteza",
""
]
]
| TITLE: GeoDEN: A Visual Exploration Tool for Analysing the Geographic Spread of
Dengue Serotypes
ABSTRACT: Static maps and animations remain popular in spatial epidemiology of dengue,
limiting the analytical depth and scope of visualisations. Over half of the
global population live in dengue endemic regions. Understanding the
spatiotemporal dynamics of the four closely related dengue serotypes, and their
immunological interactions, remains a challenge at a global scale. To
facilitate this understanding, we worked with dengue epidemiologists in a
user-centered design framework to create GeoDEN, an exploratory visualisation
tool that empowers experts to investigate spatiotemporal patterns in dengue
serotype reports. The tool has several linked visualisations and filtering
mechanisms, enabling analysis at a range of spatial and temporal scales. To
identify successes and failures, we present both insight-based and value-driven
evaluations. Our domain experts found GeoDEN valuable, verifying existing
hypotheses and uncovering novel insights that warrant further investigation by
the epidemiology community. The developed visual exploration approach can be
adapted for exploring other epidemiology and disease incident datasets.
| no_new_dataset | 0.948822 |
2503.04838 | Suman Ghosh | Thilo Reinold, Suman Ghosh, Guillermo Gallego | Combined Physics and Event Camera Simulator for Slip Detection | 9 pages, 8 figures, 2 tables, https://github.com/tub-rip/event_slip | Winter Conference on Applications of Computer Vision (WACV)
Workshops, Tucson (USA), 2025 | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Robot manipulation is a common task in fields like industrial manufacturing.
Detecting when objects slip from a robot's grasp is crucial for safe and
reliable operation. Event cameras, which register pixel-level brightness
changes at high temporal resolution (called ``events''), offer an elegant
feature when mounted on a robot's end effector: since they only detect motion
relative to their viewpoint, a properly grasped object produces no events,
while a slipping object immediately triggers them. To research this feature,
representative datasets are essential, both for analytic approaches and for
training machine learning models. The majority of current research on slip
detection with event-based data relies on real-world scenarios and manual data
collection, as well as additional setups for data labeling. This can result in
a significant increase in the time required for data collection, a lack of
flexibility in scene setups, and a high level of complexity in the repetition
of experiments. This paper presents a simulation pipeline for generating slip
data using the described camera-gripper configuration in a robot arm, and
demonstrates its effectiveness through initial data-driven experiments. The use
of a simulator, once it is set up, has the potential to reduce the time spent
on data collection, provide the ability to alter the setup at any time,
simplify the process of repetition and the generation of arbitrarily large data
sets. Two distinct datasets were created and validated through visual
inspection and artificial neural networks (ANNs). Visual inspection confirmed
photorealistic frame generation and accurate slip modeling, while three ANNs
trained on this data achieved high validation accuracy and demonstrated good
generalization capabilities on a separate test set, along with initial
applicability to real-world data. Project page:
https://github.com/tub-rip/event_slip
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:50:21 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 22:49:56 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Reinold",
"Thilo",
""
],
[
"Ghosh",
"Suman",
""
],
[
"Gallego",
"Guillermo",
""
]
]
| TITLE: Combined Physics and Event Camera Simulator for Slip Detection
ABSTRACT: Robot manipulation is a common task in fields like industrial manufacturing.
Detecting when objects slip from a robot's grasp is crucial for safe and
reliable operation. Event cameras, which register pixel-level brightness
changes at high temporal resolution (called ``events''), offer an elegant
feature when mounted on a robot's end effector: since they only detect motion
relative to their viewpoint, a properly grasped object produces no events,
while a slipping object immediately triggers them. To research this feature,
representative datasets are essential, both for analytic approaches and for
training machine learning models. The majority of current research on slip
detection with event-based data relies on real-world scenarios and manual data
collection, as well as additional setups for data labeling. This can result in
a significant increase in the time required for data collection, a lack of
flexibility in scene setups, and a high level of complexity in the repetition
of experiments. This paper presents a simulation pipeline for generating slip
data using the described camera-gripper configuration in a robot arm, and
demonstrates its effectiveness through initial data-driven experiments. The use
of a simulator, once it is set up, has the potential to reduce the time spent
on data collection, provide the ability to alter the setup at any time,
simplify the process of repetition and the generation of arbitrarily large data
sets. Two distinct datasets were created and validated through visual
inspection and artificial neural networks (ANNs). Visual inspection confirmed
photorealistic frame generation and accurate slip modeling, while three ANNs
trained on this data achieved high validation accuracy and demonstrated good
generalization capabilities on a separate test set, along with initial
applicability to real-world data. Project page:
https://github.com/tub-rip/event_slip
| no_new_dataset | 0.954052 |
2503.05810 | Derin Ozer | Derin Ozer, Sylvain Lamprier, Thomas Cauchy, Nicolas Gutowski, Benoit
Da Mota | A Transformer Model for Predicting Chemical Reaction Products from
Generic Templates | null | null | null | null | cs.LG cs.AI physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accurate prediction of chemical reaction outcomes is a major challenge in
computational chemistry. Current models rely heavily on either highly specific
reaction templates or template-free methods, both of which present limitations.
To address these limitations, this work proposes the Broad Reaction Set (BRS),
a dataset featuring 20 generic reaction templates that allow for the efficient
exploration of the chemical space. Additionally, ProPreT5 is introduced, a T5
model tailored to chemistry that achieves a balance between rigid templates and
template-free methods. ProPreT5 demonstrates its capability to generate
accurate, valid, and realistic reaction products, making it a promising
solution that goes beyond the current state-of-the-art on the complex reaction
product prediction task.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 10:18:32 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 08:22:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ozer",
"Derin",
""
],
[
"Lamprier",
"Sylvain",
""
],
[
"Cauchy",
"Thomas",
""
],
[
"Gutowski",
"Nicolas",
""
],
[
"Da Mota",
"Benoit",
""
]
]
| TITLE: A Transformer Model for Predicting Chemical Reaction Products from
Generic Templates
ABSTRACT: The accurate prediction of chemical reaction outcomes is a major challenge in
computational chemistry. Current models rely heavily on either highly specific
reaction templates or template-free methods, both of which present limitations.
To address these limitations, this work proposes the Broad Reaction Set (BRS),
a dataset featuring 20 generic reaction templates that allow for the efficient
exploration of the chemical space. Additionally, ProPreT5 is introduced, a T5
model tailored to chemistry that achieves a balance between rigid templates and
template-free methods. ProPreT5 demonstrates its capability to generate
accurate, valid, and realistic reaction products, making it a promising
solution that goes beyond the current state-of-the-art on the complex reaction
product prediction task.
| new_dataset | 0.958924 |
2503.06094 | Yong He | Yong He, Hongshan Yu, Mingtao Feng, Tongjia Chen, Zechuan Li, Anwaar
Ulhaq, Saeed Anwar, Ajmal Saeed Mian | PointDiffuse: A Dual-Conditional Diffusion Model for Enhanced Point
Cloud Semantic Segmentation | 8 pages, 3 figures, 7 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion probabilistic models are traditionally used to generate colors at
fixed pixel positions in 2D images. Building on this, we extend diffusion
models to point cloud semantic segmentation, where point positions also remain
fixed, and the diffusion model generates point labels instead of colors. To
accelerate the denoising process in reverse diffusion, we introduce a noisy
label embedding mechanism. This approach integrates semantic information into
the noisy label, providing an initial semantic reference that improves the
reverse diffusion efficiency. Additionally, we propose a point frequency
transformer that enhances the adjustment of high-level context in point clouds.
To reduce computational complexity, we introduce the position condition into
MLP and propose denoising PointNet to process the high-resolution point cloud
without sacrificing geometric details. Finally, we integrate the proposed noisy
label embedding, point frequency transformer and denoising PointNet in our
proposed dual conditional diffusion model-based network (PointDiffuse) to
perform large-scale point cloud semantic segmentation. Extensive experiments on
five benchmarks demonstrate the superiority of PointDiffuse, achieving the
state-of-the-art mIoU of 74.2\% on S3DIS Area 5, 81.2\% on S3DIS 6-fold and
64.8\% on the SWAN dataset.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 06:53:22 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 14:59:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"He",
"Yong",
""
],
[
"Yu",
"Hongshan",
""
],
[
"Feng",
"Mingtao",
""
],
[
"Chen",
"Tongjia",
""
],
[
"Li",
"Zechuan",
""
],
[
"Ulhaq",
"Anwaar",
""
],
[
"Anwar",
"Saeed",
""
],
[
"Mian",
"Ajmal Saeed",
""
]
]
| TITLE: PointDiffuse: A Dual-Conditional Diffusion Model for Enhanced Point
Cloud Semantic Segmentation
ABSTRACT: Diffusion probabilistic models are traditionally used to generate colors at
fixed pixel positions in 2D images. Building on this, we extend diffusion
models to point cloud semantic segmentation, where point positions also remain
fixed, and the diffusion model generates point labels instead of colors. To
accelerate the denoising process in reverse diffusion, we introduce a noisy
label embedding mechanism. This approach integrates semantic information into
the noisy label, providing an initial semantic reference that improves the
reverse diffusion efficiency. Additionally, we propose a point frequency
transformer that enhances the adjustment of high-level context in point clouds.
To reduce computational complexity, we introduce the position condition into
MLP and propose denoising PointNet to process the high-resolution point cloud
without sacrificing geometric details. Finally, we integrate the proposed noisy
label embedding, point frequency transformer and denoising PointNet in our
proposed dual conditional diffusion model-based network (PointDiffuse) to
perform large-scale point cloud semantic segmentation. Extensive experiments on
five benchmarks demonstrate the superiority of PointDiffuse, achieving the
state-of-the-art mIoU of 74.2\% on S3DIS Area 5, 81.2\% on S3DIS 6-fold and
64.8\% on the SWAN dataset.
| no_new_dataset | 0.957873 |
2503.06150 | Huan Tian | Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei
Zhou | Do Fairness Interventions Come at the Cost of Privacy: Evaluations for
Binary Classifiers | Accepted to IEEE Transactions on Dependable and Secure Computing
(TDSC) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | While in-processing fairness approaches show promise in mitigating biased
predictions, their potential impact on privacy leakage remains under-explored.
We aim to address this gap by assessing the privacy risks of fairness-enhanced
binary classifiers via membership inference attacks (MIAs) and attribute
inference attacks (AIAs). Surprisingly, our results reveal that enhancing
fairness does not necessarily lead to privacy compromises. For example, these
fairness interventions exhibit increased resilience against MIAs and AIAs. This
is because fairness interventions tend to remove sensitive information among
extracted features and reduce confidence scores for the majority of training
data for fairer predictions. However, during the evaluations, we uncover a
potential threat mechanism that exploits prediction discrepancies between fair
and biased models, leading to advanced attack results for both MIAs and AIAs.
This mechanism reveals potent vulnerabilities of fair models and poses
significant privacy risks of current fairness methods. Extensive experiments
across multiple datasets, attack methods, and representative fairness
approaches confirm our findings and demonstrate the efficacy of the uncovered
mechanism. Our study exposes the under-explored privacy threats in fairness
studies, advocating for thorough evaluations of potential security
vulnerabilities before model deployments.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 10:21:21 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 11:28:18 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Tian",
"Huan",
""
],
[
"Zhang",
"Guangsheng",
""
],
[
"Liu",
"Bo",
""
],
[
"Zhu",
"Tianqing",
""
],
[
"Ding",
"Ming",
""
],
[
"Zhou",
"Wanlei",
""
]
]
| TITLE: Do Fairness Interventions Come at the Cost of Privacy: Evaluations for
Binary Classifiers
ABSTRACT: While in-processing fairness approaches show promise in mitigating biased
predictions, their potential impact on privacy leakage remains under-explored.
We aim to address this gap by assessing the privacy risks of fairness-enhanced
binary classifiers via membership inference attacks (MIAs) and attribute
inference attacks (AIAs). Surprisingly, our results reveal that enhancing
fairness does not necessarily lead to privacy compromises. For example, these
fairness interventions exhibit increased resilience against MIAs and AIAs. This
is because fairness interventions tend to remove sensitive information among
extracted features and reduce confidence scores for the majority of training
data for fairer predictions. However, during the evaluations, we uncover a
potential threat mechanism that exploits prediction discrepancies between fair
and biased models, leading to advanced attack results for both MIAs and AIAs.
This mechanism reveals potent vulnerabilities of fair models and poses
significant privacy risks of current fairness methods. Extensive experiments
across multiple datasets, attack methods, and representative fairness
approaches confirm our findings and demonstrate the efficacy of the uncovered
mechanism. Our study exposes the under-explored privacy threats in fairness
studies, advocating for thorough evaluations of potential security
vulnerabilities before model deployments.
| no_new_dataset | 0.946597 |
2503.06364 | Chen Liu | Chen Liu, Tobias Ritschel | Generative Video Bi-flow | null | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel generative video model by robustly learning temporal
change as a neural Ordinary Differential Equation (ODE) flow with a bilinear
objective of combining two aspects: The first is to map from the past into
future video frames directly. Previous work has mapped the noise to new frames,
a more computationally expensive process. Unfortunately, starting from the
previous frame, instead of noise, is more prone to drifting errors. Hence,
second, we additionally learn how to remove the accumulated errors as the joint
objective by adding noise during training. We demonstrate unconditional video
generation in a streaming manner for various video datasets, all at competitive
quality compared to a baseline conditional diffusion but with higher speed,
i.e., fewer ODE solver steps.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 00:03:59 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Liu",
"Chen",
""
],
[
"Ritschel",
"Tobias",
""
]
]
| TITLE: Generative Video Bi-flow
ABSTRACT: We propose a novel generative video model by robustly learning temporal
change as a neural Ordinary Differential Equation (ODE) flow with a bilinear
objective of combining two aspects: The first is to map from the past into
future video frames directly. Previous work has mapped the noise to new frames,
a more computationally expensive process. Unfortunately, starting from the
previous frame, instead of noise, is more prone to drifting errors. Hence,
second, we additionally learn how to remove the accumulated errors as the joint
objective by adding noise during training. We demonstrate unconditional video
generation in a streaming manner for various video datasets, all at competitive
quality compared to a baseline conditional diffusion but with higher speed,
i.e., fewer ODE solver steps.
| no_new_dataset | 0.952574 |
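The bilinear objective in the Generative Video Bi-flow abstract above maps the previous frame toward the next one with an ODE flow while injecting noise during training to counter drift. The sketch below is a generic toy version: the network, Euler step size, and noise level are assumptions, not the paper's model.

```python
import torch

class VelocityNet(torch.nn.Module):
    """Placeholder velocity field f(x) for a frame flattened to a vector."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(dim, 128),
                                       torch.nn.ReLU(),
                                       torch.nn.Linear(128, dim))
    def forward(self, x):
        return self.net(x)

def training_step(model, prev_frame, next_frame, dt=1.0, noise_std=0.05):
    # Perturb the starting frame so the model also learns to undo accumulated drift.
    noisy_prev = prev_frame + noise_std * torch.randn_like(prev_frame)
    pred_next = noisy_prev + dt * model(noisy_prev)   # one Euler step of the ODE flow
    return torch.nn.functional.mse_loss(pred_next, next_frame)

model = VelocityNet(dim=64)
loss = training_step(model, torch.randn(4, 64), torch.randn(4, 64))
```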
2503.06749 | Wenxuan Huang | Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei
Zhao, Zhe Xu, Yao Hu, Shaohui Lin | Vision-R1: Incentivizing Reasoning Capability in Multimodal Large
Language Models | null | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DeepSeek-R1-Zero has successfully demonstrated the emergence of reasoning
capabilities in LLMs purely through Reinforcement Learning (RL). Inspired by
this breakthrough, we explore how RL can be utilized to enhance the reasoning
capability of MLLMs. However, direct training with RL struggles to activate
complex reasoning capabilities such as questioning and reflection in MLLMs, due
to the absence of substantial high-quality multimodal reasoning data. To
address this issue, we propose the reasoning MLLM, Vision-R1, to improve
multimodal reasoning capability. Specifically, we first construct a
high-quality multimodal CoT dataset without human annotations by leveraging an
existing MLLM and DeepSeek-R1 through modality bridging and data filtering to
obtain a 200K multimodal CoT dataset, the Vision-R1-cold dataset. It serves as
cold-start initialization data for Vision-R1. To mitigate the optimization
challenges caused by overthinking after cold start, we propose a Progressive
Thinking Suppression Training (PTST) strategy and employ Group Relative Policy
Optimization (GRPO) with the hard formatting result reward function to
gradually refine the model's ability to learn correct and complex reasoning
processes on a 10K multimodal math dataset. Comprehensive experiments show our
model achieves an average improvement of $\sim$6% across various multimodal
math reasoning benchmarks. Vision-R1-7B achieves a 73.5% accuracy on the widely
used MathVista benchmark, which is only 0.4% lower than the leading reasoning
model, OpenAI O1. The datasets and code will be released in:
https://github.com/Osilly/Vision-R1 .
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 20:06:45 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 09:47:44 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Huang",
"Wenxuan",
""
],
[
"Jia",
"Bohan",
""
],
[
"Zhai",
"Zijie",
""
],
[
"Cao",
"Shaosheng",
""
],
[
"Ye",
"Zheyu",
""
],
[
"Zhao",
"Fei",
""
],
[
"Xu",
"Zhe",
""
],
[
"Hu",
"Yao",
""
],
[
"Lin",
"Shaohui",
""
]
]
| TITLE: Vision-R1: Incentivizing Reasoning Capability in Multimodal Large
Language Models
ABSTRACT: DeepSeek-R1-Zero has successfully demonstrated the emergence of reasoning
capabilities in LLMs purely through Reinforcement Learning (RL). Inspired by
this breakthrough, we explore how RL can be utilized to enhance the reasoning
capability of MLLMs. However, direct training with RL struggles to activate
complex reasoning capabilities such as questioning and reflection in MLLMs, due
to the absence of substantial high-quality multimodal reasoning data. To
address this issue, we propose the reasoning MLLM, Vision-R1, to improve
multimodal reasoning capability. Specifically, we first construct a
high-quality multimodal CoT dataset without human annotations by leveraging an
existing MLLM and DeepSeek-R1 through modality bridging and data filtering to
obtain a 200K multimodal CoT dataset, the Vision-R1-cold dataset. It serves as
cold-start initialization data for Vision-R1. To mitigate the optimization
challenges caused by overthinking after cold start, we propose a Progressive
Thinking Suppression Training (PTST) strategy and employ Group Relative Policy
Optimization (GRPO) with the hard formatting result reward function to
gradually refine the model's ability to learn correct and complex reasoning
processes on a 10K multimodal math dataset. Comprehensive experiments show our
model achieves an average improvement of $\sim$6% across various multimodal
math reasoning benchmarks. Vision-R1-7B achieves a 73.5% accuracy on the widely
used MathVista benchmark, which is only 0.4% lower than the leading reasoning
model, OpenAI O1. The datasets and code will be released in:
https://github.com/Osilly/Vision-R1 .
| no_new_dataset | 0.80837 |
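Group Relative Policy Optimization (GRPO), used in the Vision-R1 abstract above, scores each sampled response relative to the other responses in its group. A minimal sketch of that group-relative advantage computation follows; group size, rewards, and epsilon are illustrative assumptions.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: standardize each reward within its sampled group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Toy usage: rewards for 6 responses sampled for the same prompt
# (e.g., 1.0 if the math answer and format are correct, else 0.0).
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0, 1.0, 0.0]))
```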
2503.06873 | Ta Duc Huy | Ta Duc Huy, Sen Kim Tran, Phan Nguyen, Nguyen Hoang Tran, Tran Bao
Sam, Anton van den Hengel, Zhibin Liao, Johan W. Verjans, Minh-Son To, Vu
Minh Hieu Phan | Interactive Medical Image Analysis with Concept-based Similarity
Reasoning | Accepted CVPR2025 | CVPR 2025 | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The ability to interpret and intervene model decisions is important for the
adoption of computer-aided diagnosis methods in clinical workflows. Recent
concept-based methods link the model predictions with interpretable concepts
and modify their activation scores to interact with the model. However, these
concepts are at the image level, which hinders the model from pinpointing the
exact patches the concepts are activated. Alternatively, prototype-based
methods learn representations from training image patches and compare these
with test image patches, using the similarity scores for final class
prediction. However, interpreting the underlying concepts of these patches can
be challenging and often necessitates post-hoc guesswork. To address this
issue, this paper introduces the novel Concept-based Similarity Reasoning
network (CSR), which offers (i) patch-level prototype with intrinsic concept
interpretation, and (ii) spatial interactivity. First, the proposed CSR
provides localized explanation by grounding prototypes of each concept on image
regions. Second, our model introduces novel spatial-level interaction, allowing
doctors to engage directly with specific image areas, making it an intuitive
and transparent tool for medical imaging. CSR improves upon prior
state-of-the-art interpretable methods by up to 4.5\% across three biomedical
datasets. Our code is released at https://github.com/tadeephuy/InteractCSR.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 02:52:47 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 09:06:03 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Huy",
"Ta Duc",
""
],
[
"Tran",
"Sen Kim",
""
],
[
"Nguyen",
"Phan",
""
],
[
"Tran",
"Nguyen Hoang",
""
],
[
"Sam",
"Tran Bao",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Liao",
"Zhibin",
""
],
[
"Verjans",
"Johan W.",
""
],
[
"To",
"Minh-Son",
""
],
[
"Phan",
"Vu Minh Hieu",
""
]
]
| TITLE: Interactive Medical Image Analysis with Concept-based Similarity
Reasoning
ABSTRACT: The ability to interpret and intervene model decisions is important for the
adoption of computer-aided diagnosis methods in clinical workflows. Recent
concept-based methods link the model predictions with interpretable concepts
and modify their activation scores to interact with the model. However, these
concepts are at the image level, which hinders the model from pinpointing the
exact patches where the concepts are activated. Alternatively, prototype-based
methods learn representations from training image patches and compare these
with test image patches, using the similarity scores for final class
prediction. However, interpreting the underlying concepts of these patches can
be challenging and often necessitates post-hoc guesswork. To address this
issue, this paper introduces the novel Concept-based Similarity Reasoning
network (CSR), which offers (i) patch-level prototype with intrinsic concept
interpretation, and (ii) spatial interactivity. First, the proposed CSR
provides localized explanation by grounding prototypes of each concept on image
regions. Second, our model introduces novel spatial-level interaction, allowing
doctors to engage directly with specific image areas, making it an intuitive
and transparent tool for medical imaging. CSR improves upon prior
state-of-the-art interpretable methods by up to 4.5\% across three biomedical
datasets. Our code is released at https://github.com/tadeephuy/InteractCSR.
| no_new_dataset | 0.949059 |
2503.06949 | Haotian Chen | Haotian Chen, Yanyu Xu, Boyan Wang, Chaoyue Zhao, Xiaoyu Han, Fang
Wang, Lizhen Cui, Yonghui Xu | LexPro-1.0 Technical Report | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this report, we introduce our first-generation reasoning model,
LexPro-1.0, a large language model designed for the highly specialized Chinese
legal domain, offering comprehensive capabilities to meet diverse realistic
needs. Existing legal LLMs face two primary challenges. Firstly, their design
and evaluation are predominantly driven by computer science perspectives,
leading to insufficient incorporation of legal expertise and logic, which is
crucial for high-precision legal applications, such as handling complex
prosecutorial tasks. Secondly, these models often underperform due to a lack of
comprehensive training data from the legal domain, limiting their ability to
effectively address real-world legal scenarios. To address this, we first
compile millions of legal documents covering over 20 types of crimes from 31
provinces in China for model training. From the extensive dataset, we further
select high-quality data for supervised fine-tuning, ensuring enhanced relevance and
precision. The model further undergoes large-scale reinforcement learning
without additional supervision, emphasizing the enhancement of its reasoning
capabilities and explainability. To validate its effectiveness in complex legal
applications, we also conduct human evaluations with legal experts. We develop
fine-tuned models based on DeepSeek-R1-Distilled versions, available in three
dense configurations: 14B, 32B, and 70B.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:54:23 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 04:58:27 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"Haotian",
""
],
[
"Xu",
"Yanyu",
""
],
[
"Wang",
"Boyan",
""
],
[
"Zhao",
"Chaoyue",
""
],
[
"Han",
"Xiaoyu",
""
],
[
"Wang",
"Fang",
""
],
[
"Cui",
"Lizhen",
""
],
[
"Xu",
"Yonghui",
""
]
]
| TITLE: LexPro-1.0 Technical Report
ABSTRACT: In this report, we introduce our first-generation reasoning model,
LexPro-1.0, a large language model designed for the highly specialized Chinese
legal domain, offering comprehensive capabilities to meet diverse realistic
needs. Existing legal LLMs face two primary challenges. Firstly, their design
and evaluation are predominantly driven by computer science perspectives,
leading to insufficient incorporation of legal expertise and logic, which is
crucial for high-precision legal applications, such as handling complex
prosecutorial tasks. Secondly, these models often underperform due to a lack of
comprehensive training data from the legal domain, limiting their ability to
effectively address real-world legal scenarios. To address this, we first
compile millions of legal documents covering over 20 types of crimes from 31
provinces in China for model training. From the extensive dataset, we further
select high-quality data for supervised fine-tuning, ensuring enhanced relevance and
precision. The model further undergoes large-scale reinforcement learning
without additional supervision, emphasizing the enhancement of its reasoning
capabilities and explainability. To validate its effectiveness in complex legal
applications, we also conduct human evaluations with legal experts. We develop
fine-tuned models based on DeepSeek-R1-Distilled versions, available in three
dense configurations: 14B, 32B, and 70B.
| new_dataset | 0.871584 |
2503.06966 | Guanghao Li | Guanghao Li, Mingzhi Chen, Hao Yu, Shuting Dong, Wenhao Jiang, Ming
Tang, Chun Yuan | MIGA: Mutual Information-Guided Attack on Denoising Models for Semantic
Manipulation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning-based denoising models have been widely employed in vision
tasks, functioning as filters to eliminate noise while retaining crucial
semantic information. Additionally, they play a vital role in defending against
adversarial perturbations that threaten downstream tasks. However, these models
can be intrinsically susceptible to adversarial attacks due to their dependence
on specific noise assumptions. Existing attacks on denoising models mainly aim
at deteriorating visual clarity while neglecting semantic manipulation,
rendering them either easily detectable or limited in effectiveness. In this
paper, we propose Mutual Information-Guided Attack (MIGA), the first method
designed to directly attack deep denoising models by strategically disrupting
their ability to preserve semantic content via adversarial perturbations. By
minimizing the mutual information between the original and denoised images, a
measure of semantic similarity. MIGA forces the denoiser to produce
perceptually clean yet semantically altered outputs. While these images appear
visually plausible, they encode systematically distorted semantics, revealing a
fundamental vulnerability in denoising models. These distortions persist in
denoised outputs and can be quantitatively assessed through downstream task
performance. We propose new evaluation metrics and systematically assess MIGA
on four denoising models across five datasets, demonstrating its consistent
effectiveness in disrupting semantic fidelity. Our findings suggest that
denoising models are not always robust and can introduce security risks in
real-world applications.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 06:26:34 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 06:01:25 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Li",
"Guanghao",
""
],
[
"Chen",
"Mingzhi",
""
],
[
"Yu",
"Hao",
""
],
[
"Dong",
"Shuting",
""
],
[
"Jiang",
"Wenhao",
""
],
[
"Tang",
"Ming",
""
],
[
"Yuan",
"Chun",
""
]
]
| TITLE: MIGA: Mutual Information-Guided Attack on Denoising Models for Semantic
Manipulation
ABSTRACT: Deep learning-based denoising models have been widely employed in vision
tasks, functioning as filters to eliminate noise while retaining crucial
semantic information. Additionally, they play a vital role in defending against
adversarial perturbations that threaten downstream tasks. However, these models
can be intrinsically susceptible to adversarial attacks due to their dependence
on specific noise assumptions. Existing attacks on denoising models mainly aim
at deteriorating visual clarity while neglecting semantic manipulation,
rendering them either easily detectable or limited in effectiveness. In this
paper, we propose Mutual Information-Guided Attack (MIGA), the first method
designed to directly attack deep denoising models by strategically disrupting
their ability to preserve semantic content via adversarial perturbations. By
minimizing the mutual information between the original and denoised images, a
measure of semantic similarity, MIGA forces the denoiser to produce
perceptually clean yet semantically altered outputs. While these images appear
visually plausible, they encode systematically distorted semantics, revealing a
fundamental vulnerability in denoising models. These distortions persist in
denoised outputs and can be quantitatively assessed through downstream task
performance. We propose new evaluation metrics and systematically assess MIGA
on four denoising models across five datasets, demonstrating its consistent
effectiveness in disrupting semantic fidelity. Our findings suggest that
denoising models are not always robust and can introduce security risks in
real-world applications.
| no_new_dataset | 0.941761 |
2503.06990 | Hyeonsoo Jo | Hyeonsoo Jo, Jongha Lee, Fanchen Bu, Kijung Shin | TiGer: Self-Supervised Purification for Time-evolving Graphs | PAKDD 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Time-evolving graphs, such as social and citation networks, often contain
noise that distorts structural and temporal patterns, adversely affecting
downstream tasks, such as node classification. Existing purification methods
focus on static graphs, limiting their ability to account for critical temporal
dependencies in dynamic graphs. In this work, we propose TiGer (Time-evolving
Graph purifier), a self-supervised method explicitly designed for time-evolving
graphs. TiGer assigns two different sub-scores to edges using (1)
self-attention for capturing long-term contextual patterns shaped by both
adjacent and distant past events of varying significance and (2) statistical
distance measures for detecting inconsistency over a short-term period. These
sub-scores are used to identify and filter out suspicious (i.e., noise-like)
edges through an ensemble strategy, ensuring robustness without requiring noise
labels. Our experiments on five real-world datasets show TiGer filters out
noise with up to 10.2% higher accuracy and improves node classification
performance by up to 5.3%, compared to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:10:45 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 05:17:04 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Jo",
"Hyeonsoo",
""
],
[
"Lee",
"Jongha",
""
],
[
"Bu",
"Fanchen",
""
],
[
"Shin",
"Kijung",
""
]
]
| TITLE: TiGer: Self-Supervised Purification for Time-evolving Graphs
ABSTRACT: Time-evolving graphs, such as social and citation networks, often contain
noise that distorts structural and temporal patterns, adversely affecting
downstream tasks, such as node classification. Existing purification methods
focus on static graphs, limiting their ability to account for critical temporal
dependencies in dynamic graphs. In this work, we propose TiGer (Time-evolving
Graph purifier), a self-supervised method explicitly designed for time-evolving
graphs. TiGer assigns two different sub-scores to edges using (1)
self-attention for capturing long-term contextual patterns shaped by both
adjacent and distant past events of varying significance and (2) statistical
distance measures for detecting inconsistency over a short-term period. These
sub-scores are used to identify and filter out suspicious (i.e., noise-like)
edges through an ensemble strategy, ensuring robustness without requiring noise
labels. Our experiments on five real-world datasets show TiGer filters out
noise with up to 10.2% higher accuracy and improves node classification
performance by up to 5.3%, compared to state-of-the-art methods.
| no_new_dataset | 0.954393 |
2503.07111 | Alan Dao | Alan Dao (Gia Tuan Dao), Dinh Bach Vu, Tuan Le Duc Anh, Bui Quang Huy | PoseLess: Depth-Free Vision-to-Joint Control via Direct Image Mapping
with VLM | null | null | null | null | cs.RO cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces PoseLess, a novel framework for robot hand control that
eliminates the need for explicit pose estimation by directly mapping 2D images
to joint angles using projected representations. Our approach leverages
synthetic training data generated through randomized joint configurations,
enabling zero-shot generalization to real-world scenarios and cross-morphology
transfer from robotic to human hands. By projecting visual inputs and employing
a transformer-based decoder, PoseLess achieves robust, low-latency control
while addressing challenges such as depth ambiguity and data scarcity.
Experimental results demonstrate competitive performance in joint angle
prediction accuracy without relying on any human-labelled dataset.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:34:05 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 02:26:42 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Dao",
"Alan",
"",
"Gia Tuan Dao"
],
[
"Vu",
"Dinh Bach",
""
],
[
"Anh",
"Tuan Le Duc",
""
],
[
"Huy",
"Bui Quang",
""
]
]
| TITLE: PoseLess: Depth-Free Vision-to-Joint Control via Direct Image Mapping
with VLM
ABSTRACT: This paper introduces PoseLess, a novel framework for robot hand control that
eliminates the need for explicit pose estimation by directly mapping 2D images
to joint angles using projected representations. Our approach leverages
synthetic training data generated through randomized joint configurations,
enabling zero-shot generalization to real-world scenarios and cross-morphology
transfer from robotic to human hands. By projecting visual inputs and employing
a transformer-based decoder, PoseLess achieves robust, low-latency control
while addressing challenges such as depth ambiguity and data scarcity.
Experimental results demonstrate competitive performance in joint angle
prediction accuracy without relying on any human-labelled dataset.
| no_new_dataset | 0.950411 |