Dataset schema (one record per row; field values appear in this order, separated by `|`):

- id: string (9-16 chars)
- submitter: string (3-64 chars, nullable)
- authors: string (5-6,630 chars)
- title: string (7-245 chars)
- comments: string (1-482 chars, nullable)
- journal-ref: string (4-382 chars, nullable)
- doi: string (9-151 chars, nullable)
- report-no: string (984 distinct values)
- categories: string (5-108 chars)
- license: string (9 distinct values)
- abstract: string (83-3,410 chars)
- versions: list (1-20 entries)
- update_date: timestamp[s] (2007-05-23 to 2025-04-11)
- authors_parsed: list (1-427 entries)
- prompt: string (166-3,490 chars)
- label: string (2 classes)
- prob: float64 (0.5-0.98)
2503.01292
|
Yurui Pan
|
Yurui Pan, Lidong Wang, Yuchao Chen, Wenbing Zhu, Bo Peng, Mingmin Chi
|
PA-CLIP: Enhancing Zero-Shot Anomaly Detection through Pseudo-Anomaly
Awareness
|
9 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In industrial anomaly detection (IAD), accurately identifying defects amidst
diverse anomalies and under varying imaging conditions remains a significant
challenge. Traditional approaches often struggle with high false-positive
rates, frequently misclassifying normal shadows and surface deformations as
defects, an issue that becomes particularly pronounced in products with complex
and intricate surface features. To address these challenges, we introduce
PA-CLIP, a zero-shot anomaly detection method that reduces background noise and
enhances defect detection through a pseudo-anomaly-based framework. The
proposed method integrates a multiscale feature aggregation strategy for
capturing detailed global and local information, two memory banks for
distinguishing background information, including normal patterns and
pseudo-anomalies, from true anomaly features, and a decision-making module
designed to minimize false positives caused by environmental variations while
maintaining high defect sensitivity. Demonstrated on the MVTec AD and VisA
datasets, PA-CLIP outperforms existing zero-shot methods, providing a robust
solution for industrial defect detection.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 08:29:27 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Pan",
"Yurui",
""
],
[
"Wang",
"Lidong",
""
],
[
"Chen",
"Yuchao",
""
],
[
"Zhu",
"Wenbing",
""
],
[
"Peng",
"Bo",
""
],
[
"Chi",
"Mingmin",
""
]
] |
TITLE: PA-CLIP: Enhancing Zero-Shot Anomaly Detection through Pseudo-Anomaly
Awareness
ABSTRACT: In industrial anomaly detection (IAD), accurately identifying defects amidst
diverse anomalies and under varying imaging conditions remains a significant
challenge. Traditional approaches often struggle with high false-positive
rates, frequently misclassifying normal shadows and surface deformations as
defects, an issue that becomes particularly pronounced in products with complex
and intricate surface features. To address these challenges, we introduce
PA-CLIP, a zero-shot anomaly detection method that reduces background noise and
enhances defect detection through a pseudo-anomaly-based framework. The
proposed method integrates a multiscale feature aggregation strategy for
capturing detailed global and local information, two memory banks for
distinguishing background information, including normal patterns and
pseudo-anomalies, from true anomaly features, and a decision-making module
designed to minimize false positives caused by environmental variations while
maintaining high defect sensitivity. Demonstrated on the MVTec AD and VisA
datasets, PA-CLIP outperforms existing zero-shot methods, providing a robust
solution for industrial defect detection.
|
no_new_dataset
| 0.947721 |
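The two-memory-bank idea in the PA-CLIP abstract above lends itself to a compact illustration. The sketch below is a minimal numpy stand-in, not the paper's actual method: each patch feature is scored by its nearest-neighbor distance to a normal-pattern bank, discounted by its distance to a pseudo-anomaly bank so that shadows and deformations already stored as pseudo-anomalies are not flagged. All names, the cosine metric, and the subtraction rule are illustrative assumptions.

```python
import numpy as np

def anomaly_score(patch_feats, normal_bank, pseudo_bank):
    """Score patch features against a normal-pattern memory bank and a
    pseudo-anomaly (background) memory bank. Illustrative only."""
    def norm(x):
        # Cosine-normalize so dot products act as similarities.
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)
    f, nb, pb = norm(patch_feats), norm(normal_bank), norm(pseudo_bank)
    # Distance to the closest normal prototype (higher = more anomalous).
    d_normal = 1.0 - (f @ nb.T).max(axis=1)
    # Distance to the closest pseudo-anomaly (shadow/deformation) prototype.
    d_pseudo = 1.0 - (f @ pb.T).max(axis=1)
    # A patch far from normal patterns but also far from known pseudo-anomalies
    # is treated as a true defect; closeness to the pseudo bank suppresses it.
    return d_normal - d_pseudo
```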
2503.01302
|
Sakiko Yahata
|
Sakiko Yahata, Zhen Wan, Fei Cheng, Sadao Kurohashi, Hisahiko Sato and
Ryozo Nagai
|
Causal Tree Extraction from Medical Case Reports: A Novel Task for
Experts-like Text Comprehension
|
Work in progress
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Extracting causal relationships from a medical case report is essential for
comprehending the case, particularly its diagnostic process. Since the
diagnostic process is regarded as a bottom-up inference, causal relationships
in cases naturally form a multi-layered tree structure. The existing tasks,
such as medical relation extraction, are insufficient for capturing the causal
relationships of an entire case, as they treat all relations equally without
considering the hierarchical structure inherent in the diagnostic process.
Thus, we propose a novel task, Causal Tree Extraction (CTE), which receives a
case report and generates a causal tree with the primary disease as the root,
providing an intuitive understanding of a case's diagnostic process.
Subsequently, we construct a Japanese case report CTE dataset, J-Casemap,
propose a generation-based CTE method that outperforms the baseline by 20.2
points in the human evaluation, and introduce evaluation metrics that reflect
clinician preferences. Further experiments also show that J-Casemap enhances
the performance of solving other medical tasks, such as question answering.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 08:40:01 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Yahata",
"Sakiko",
""
],
[
"Wan",
"Zhen",
""
],
[
"Cheng",
"Fei",
""
],
[
"Kurohashi",
"Sadao",
""
],
[
"Sato",
"Hisahiko",
""
],
[
"Nagai",
"Ryozo",
""
]
] |
TITLE: Causal Tree Extraction from Medical Case Reports: A Novel Task for
Experts-like Text Comprehension
ABSTRACT: Extracting causal relationships from a medical case report is essential for
comprehending the case, particularly its diagnostic process. Since the
diagnostic process is regarded as a bottom-up inference, causal relationships
in cases naturally form a multi-layered tree structure. The existing tasks,
such as medical relation extraction, are insufficient for capturing the causal
relationships of an entire case, as they treat all relations equally without
considering the hierarchical structure inherent in the diagnostic process.
Thus, we propose a novel task, Causal Tree Extraction (CTE), which receives a
case report and generates a causal tree with the primary disease as the root,
providing an intuitive understanding of a case's diagnostic process.
Subsequently, we construct a Japanese case report CTE dataset, J-Casemap,
propose a generation-based CTE method that outperforms the baseline by 20.2
points in the human evaluation, and introduce evaluation metrics that reflect
clinician preferences. Further experiments also show that J-Casemap enhances
the performance of solving other medical tasks, such as question answering.
|
new_dataset
| 0.9601 |
2503.01305
|
Ya-Hui An
|
Yu Peng and Ya-Hui An
|
HI-Series Algorithms A Hybrid of Substance Diffusion Algorithm and
Collaborative Filtering
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Recommendation systems face the challenge of balancing accuracy and
diversity, as traditional collaborative filtering (CF) and network-based
diffusion algorithms exhibit complementary limitations. While item-based CF
(ItemCF) enhances diversity through item similarity, it compromises accuracy.
Conversely, mass diffusion (MD) algorithms prioritize accuracy by favoring
popular items but lack diversity. To address this trade-off, we propose the
HI-series algorithms, hybrid models integrating ItemCF with diffusion-based
approaches (MD, HHP, BHC, BD) through a nonlinear combination controlled by
parameter $\epsilon$. This hybridization leverages ItemCF's diversity and MD's
accuracy, extending to advanced diffusion models (HI-HHP, HI-BHC, HI-BD) for
enhanced performance. Experiments on MovieLens, Netflix, and RYM datasets
demonstrate that HI-series algorithms significantly outperform their base
counterparts. In sparse data ($20\%$ training), HI-MD achieves a
$0.8\%$-$4.4\%$ improvement in F1-score over MD while maintaining higher
diversity (Diversity@20: 459 vs. 396 on MovieLens). For dense data ($80\%$
training), HI-BD improves F1-score by $2.3\%$-$5.2\%$ compared to BD, with
diversity gains up to $18.6\%$. Notably, hybrid models consistently enhance
novelty in sparse settings and exhibit robust parameter adaptability. The
results validate that strategic hybridization effectively breaks the
accuracy-diversity trade-off, offering a flexible framework for optimizing
recommendation systems across data sparsity levels.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 08:43:40 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Peng",
"Yu",
""
],
[
"An",
"Ya-Hui",
""
]
] |
TITLE: HI-Series Algorithms A Hybrid of Substance Diffusion Algorithm and
Collaborative Filtering
ABSTRACT: Recommendation systems face the challenge of balancing accuracy and
diversity, as traditional collaborative filtering (CF) and network-based
diffusion algorithms exhibit complementary limitations. While item-based CF
(ItemCF) enhances diversity through item similarity, it compromises accuracy.
Conversely, mass diffusion (MD) algorithms prioritize accuracy by favoring
popular items but lack diversity. To address this trade-off, we propose the
HI-series algorithms, hybrid models integrating ItemCF with diffusion-based
approaches (MD, HHP, BHC, BD) through a nonlinear combination controlled by
parameter $\epsilon$. This hybridization leverages ItemCF's diversity and MD's
accuracy, extending to advanced diffusion models (HI-HHP, HI-BHC, HI-BD) for
enhanced performance. Experiments on MovieLens, Netflix, and RYM datasets
demonstrate that HI-series algorithms significantly outperform their base
counterparts. In sparse data ($20\%$ training), HI-MD achieves a
$0.8\%$-$4.4\%$ improvement in F1-score over MD while maintaining higher
diversity (Diversity@20: 459 vs. 396 on MovieLens). For dense data ($80\%$
training), HI-BD improves F1-score by $2.3\%$-$5.2\%$ compared to BD, with
diversity gains up to $18.6\%$. Notably, hybrid models consistently enhance
novelty in sparse settings and exhibit robust parameter adaptability. The
results validate that strategic hybridization effectively breaks the
accuracy-diversity trade-off, offering a flexible framework for optimizing
recommendation systems across data sparsity levels.
|
no_new_dataset
| 0.953405 |
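The abstract above describes HI-series models as a nonlinear, $\epsilon$-controlled combination of ItemCF and mass diffusion. The sketch below gives one plausible reading of that idea on a binary user-item matrix; the elementwise power blend and all names are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def hi_md_scores(R, eps=0.5):
    """Toy HI-MD hybrid: blend item-based CF similarity with mass-diffusion
    (MD) transfer weights via an eps-controlled nonlinear combination."""
    R = R.astype(float)
    k_user = R.sum(axis=1, keepdims=True) + 1e-12   # user degrees, (n_users, 1)
    k_item = R.sum(axis=0, keepdims=True) + 1e-12   # item degrees, (1, n_items)
    # Mass diffusion: W_md[i, j] = (1 / k_j) * sum_u R[u, i] * R[u, j] / k_u
    co = R.T @ (R / k_user)          # user-normalized co-occurrence
    W_md = co / k_item               # favors popular items -> accuracy
    # ItemCF cosine similarity: diversity-oriented item-item weights.
    W_cf = (R.T @ R) / np.sqrt(k_item.T @ k_item)
    # Nonlinear combination controlled by eps in [0, 1]:
    # eps = 1 recovers pure ItemCF, eps = 0 pure mass diffusion.
    W = (W_cf ** eps) * (W_md ** (1.0 - eps))
    return R @ W.T                   # per-user recommendation scores
```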
2503.01306
|
Pooya Mohammadi Kazaj
|
Pooya Mohammadi Kazaj, Giovanni Baj, Yazdan Salimi, Anselm W. Stark,
Waldo Valenzuela, George CM. Siontis, Habib Zaidi, Mauricio Reyes, Christoph
Graeni, Isaac Shiri
|
From Claims to Evidence: A Unified Framework and Critical Analysis of
CNN vs. Transformer vs. Mamba in Medical Image Segmentation
| null | null | null | null |
eess.IV cs.AI cs.CV physics.med-ph
|
http://creativecommons.org/licenses/by-sa/4.0/
|
While numerous architectures for medical image segmentation have been
proposed, achieving competitive performance with state-of-the-art networks
such as nnUNet still leaves room for further innovation. In this work,
we introduce nnUZoo, an open source benchmarking framework built upon nnUNet,
which incorporates various deep learning architectures, including CNNs,
Transformers, and Mamba-based models. Using this framework, we provide a fair
comparison to demystify performance claims across different medical image
segmentation tasks. Additionally, in an effort to enrich the benchmarking, we
explored five new architectures based on Mamba and Transformers, collectively
named X2Net, and integrated them into nnUZoo for further evaluation. The
proposed models combine the features of conventional U2Net, nnUNet, CNN,
Transformer, and Mamba layers and architectures, called X2Net (UNETR2Net
(UNETR), SwT2Net (SwinTransformer), SS2D2Net (SwinUMamba), Alt1DM2Net
(LightUMamba), and MambaND2Net (MambaND)). We extensively evaluate the
performance of different models on six diverse medical image segmentation
datasets, including microscopy, ultrasound, CT, MRI, and PET, covering various
body parts, organs, and labels. We compare their performance, in terms of dice
score and computational efficiency, against their baseline models, U2Net, and
nnUNet. CNN models like nnUNet and U2Net demonstrated both speed and accuracy,
making them effective choices for medical image segmentation tasks.
Transformer-based models, while promising for certain imaging modalities,
exhibited high computational costs. The proposed Mamba-based X2Net architecture
(SS2D2Net) achieved competitive accuracy with no significant difference from
nnUNet and U2Net, while using fewer parameters. However, they required
significantly longer training time, highlighting a trade-off between model
efficiency and computational cost.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 08:44:51 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Kazaj",
"Pooya Mohammadi",
""
],
[
"Baj",
"Giovanni",
""
],
[
"Salimi",
"Yazdan",
""
],
[
"Stark",
"Anselm W.",
""
],
[
"Valenzuela",
"Waldo",
""
],
[
"Siontis",
"George CM.",
""
],
[
"Zaidi",
"Habib",
""
],
[
"Reyes",
"Mauricio",
""
],
[
"Graeni",
"Christoph",
""
],
[
"Shiri",
"Isaac",
""
]
] |
TITLE: From Claims to Evidence: A Unified Framework and Critical Analysis of
CNN vs. Transformer vs. Mamba in Medical Image Segmentation
ABSTRACT: While numerous architectures for medical image segmentation have been
proposed, achieving competitive performance with state-of-the-art networks
such as nnUNet still leaves room for further innovation. In this work,
we introduce nnUZoo, an open source benchmarking framework built upon nnUNet,
which incorporates various deep learning architectures, including CNNs,
Transformers, and Mamba-based models. Using this framework, we provide a fair
comparison to demystify performance claims across different medical image
segmentation tasks. Additionally, in an effort to enrich the benchmarking, we
explored five new architectures based on Mamba and Transformers, collectively
named X2Net, and integrated them into nnUZoo for further evaluation. The
proposed models combine the features of conventional U2Net, nnUNet, CNN,
Transformer, and Mamba layers and architectures, called X2Net (UNETR2Net
(UNETR), SwT2Net (SwinTransformer), SS2D2Net (SwinUMamba), Alt1DM2Net
(LightUMamba), and MambaND2Net (MambaND)). We extensively evaluate the
performance of different models on six diverse medical image segmentation
datasets, including microscopy, ultrasound, CT, MRI, and PET, covering various
body parts, organs, and labels. We compare their performance, in terms of dice
score and computational efficiency, against their baseline models, U2Net, and
nnUNet. CNN models like nnUNet and U2Net demonstrated both speed and accuracy,
making them effective choices for medical image segmentation tasks.
Transformer-based models, while promising for certain imaging modalities,
exhibited high computational costs. The proposed Mamba-based X2Net architecture
(SS2D2Net) achieved competitive accuracy with no significant difference from
nnUNet and U2Net, while using fewer parameters. However, they required
significantly longer training time, highlighting a trade-off between model
efficiency and computational cost.
|
no_new_dataset
| 0.953101 |
2503.01314
|
Zhenmei Shi
|
Yifang Chen, Xuyang Guo, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao
Song
|
Scaling Law Phenomena Across Regression Paradigms: Multiple and Kernel
Approaches
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, Large Language Models (LLMs) have achieved remarkable success. A
key factor behind this success is the scaling law observed by OpenAI.
Specifically, for models with Transformer architecture, the test loss exhibits
a power-law relationship with model size, dataset size, and the amount of
computation used in training, demonstrating trends that span more than seven
orders of magnitude. This scaling law challenges traditional machine learning
wisdom, notably Occam's razor, which suggests that an
overparametrized algorithm will overfit the training datasets, resulting in
poor test performance. Recent research has also identified the scaling law in
simpler machine learning contexts, such as linear regression. However, fully
explaining the scaling law in large practical models remains an elusive goal.
In this work, we advance our understanding by demonstrating that the scaling
law phenomenon extends to multiple regression and kernel regression settings,
which are significantly more expressive and powerful than linear methods. Our
analysis provides deeper insights into the scaling law, potentially enhancing
our understanding of LLMs.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 08:57:49 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Chen",
"Yifang",
""
],
[
"Guo",
"Xuyang",
""
],
[
"Li",
"Xiaoyu",
""
],
[
"Liang",
"Yingyu",
""
],
[
"Shi",
"Zhenmei",
""
],
[
"Song",
"Zhao",
""
]
] |
TITLE: Scaling Law Phenomena Across Regression Paradigms: Multiple and Kernel
Approaches
ABSTRACT: Recently, Large Language Models (LLMs) have achieved remarkable success. A
key factor behind this success is the scaling law observed by OpenAI.
Specifically, for models with Transformer architecture, the test loss exhibits
a power-law relationship with model size, dataset size, and the amount of
computation used in training, demonstrating trends that span more than seven
orders of magnitude. This scaling law challenges traditional machine learning
wisdom, notably Occam's razor, which suggests that an
overparametrized algorithm will overfit the training datasets, resulting in
poor test performance. Recent research has also identified the scaling law in
simpler machine learning contexts, such as linear regression. However, fully
explaining the scaling law in large practical models remains an elusive goal.
In this work, we advance our understanding by demonstrating that the scaling
law phenomenon extends to multiple regression and kernel regression settings,
which are significantly more expressive and powerful than linear methods. Our
analysis provides deeper insights into the scaling law, potentially enhancing
our understanding of LLMs.
|
no_new_dataset
| 0.950915 |
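As a worked illustration of the power-law relationship the abstract above refers to, the snippet below fits the exponent of $L(N) = a N^{-b}$ by linear regression in log-log space; the data are synthetic placeholders, not results from the paper.

```python
import numpy as np

# Power-law form of a scaling law, L(N) = a * N^(-b): estimate the exponent
# from (dataset size, test loss) pairs by a linear fit in log-log space.
N = np.array([1e3, 1e4, 1e5, 1e6, 1e7])   # dataset sizes (synthetic)
L = 2.5 * N ** -0.076                      # synthetic test losses
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
print(f"exponent b = {-slope:.3f}, prefactor a = {np.exp(intercept):.3f}")
# -> exponent b = 0.076, prefactor a = 2.500
```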
2503.01319
|
Mingxuan Xiao
|
Mingxuan Xiao, Yan Xiao, Shunhui Ji, Yunhe Li, Lei Xue, Pengcheng
Zhang
|
ABFS: Natural Robustness Testing for LLM-based NLP Software
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Owing to the exceptional performance of Large Language Models (LLMs) in
Natural Language Processing (NLP) tasks, LLM-based NLP software has rapidly
gained traction across various domains, such as financial analysis and content
moderation. However, these applications frequently exhibit robustness
deficiencies, where slight perturbations in input (prompt+example) may lead to
erroneous outputs. Current robustness testing methods face two main
limitations: (1) low testing effectiveness, limiting the applicability of
LLM-based software in safety-critical scenarios, and (2) insufficient
naturalness of test cases, reducing the practical value of testing outcomes. To
address these issues, this paper proposes ABFS, a straightforward yet effective
automated testing method that, for the first time, treats the input prompts and
examples as a unified whole for robustness testing. Specifically, ABFS
formulates the testing process as a combinatorial optimization problem,
employing Best-First Search to identify successful test cases within the
perturbation space and designing a novel Adaptive control strategy to enhance
test case naturalness. We evaluate the robustness testing performance of ABFS
on three datasets across five threat models. On Llama2-13b, the traditional
StressTest achieves only a 13.273% success rate, while ABFS attains a success
rate of 98.064%, supporting a more comprehensive robustness assessment before
software deployment. Compared to baseline methods, ABFS introduces fewer
modifications to the original input and consistently generates test cases with
superior naturalness. Furthermore, test cases generated by ABFS exhibit
stronger transferability and higher testing efficiency, significantly reducing
testing costs.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:02:06 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Xiao",
"Mingxuan",
""
],
[
"Xiao",
"Yan",
""
],
[
"Ji",
"Shunhui",
""
],
[
"Li",
"Yunhe",
""
],
[
"Xue",
"Lei",
""
],
[
"Zhang",
"Pengcheng",
""
]
] |
TITLE: ABFS: Natural Robustness Testing for LLM-based NLP Software
ABSTRACT: Owing to the exceptional performance of Large Language Models (LLMs) in
Natural Language Processing (NLP) tasks, LLM-based NLP software has rapidly
gained traction across various domains, such as financial analysis and content
moderation. However, these applications frequently exhibit robustness
deficiencies, where slight perturbations in input (prompt+example) may lead to
erroneous outputs. Current robustness testing methods face two main
limitations: (1) low testing effectiveness, limiting the applicability of
LLM-based software in safety-critical scenarios, and (2) insufficient
naturalness of test cases, reducing the practical value of testing outcomes. To
address these issues, this paper proposes ABFS, a straightforward yet effective
automated testing method that, for the first time, treats the input prompts and
examples as a unified whole for robustness testing. Specifically, ABFS
formulates the testing process as a combinatorial optimization problem,
employing Best-First Search to identify successful test cases within the
perturbation space and designing a novel Adaptive control strategy to enhance
test case naturalness. We evaluate the robustness testing performance of ABFS
on three datasets across five threat models. On Llama2-13b, the traditional
StressTest achieves only a 13.273% success rate, while ABFS attains a success
rate of 98.064%, supporting a more comprehensive robustness assessment before
software deployment. Compared to baseline methods, ABFS introduces fewer
modifications to the original input and consistently generates test cases with
superior naturalness. Furthermore, test cases generated by ABFS exhibit
stronger transferability and higher testing efficiency, significantly reducing
testing costs.
|
no_new_dataset
| 0.949389 |
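ABFS, per the abstract above, casts robustness testing as a search over the perturbation space using Best-First Search. The sketch below shows the generic best-first backbone such a method builds on; the adaptive naturalness control that gives ABFS its name is omitted, and `neighbors`, `score`, and the success threshold are placeholder assumptions.

```python
import heapq

def best_first_search(text, neighbors, score, budget=200):
    """Generic best-first search over a perturbation space.
    neighbors(t): yields single-edit variants of the prompt+example text t.
    score(t): higher when the target LLM's output is closer to flipping;
    a score >= 1.0 is taken to mean a successful test case."""
    frontier = [(-score(text), text)]   # max-heap via negated scores
    seen = {text}
    while frontier and budget > 0:
        neg_s, cur = heapq.heappop(frontier)
        if -neg_s >= 1.0:               # output flipped: test case found
            return cur
        for cand in neighbors(cur):
            if cand not in seen:        # expand each candidate once
                seen.add(cand)
                heapq.heappush(frontier, (-score(cand), cand))
                budget -= 1
    return None                         # budget exhausted without success
```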
2503.01329
|
Anh Tong
|
Anh Tong and Thanh Nguyen-Tang and Dongeun Lee and Duc Nguyen and Toan
Tran and David Hall and Cheongwoong Kang and Jaesik Choi
|
Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive
Fine-tuning
|
ICLR 2025
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in large language models (LLMs) based on transformer
architectures have sparked significant interest in understanding their inner
workings. In this paper, we introduce a novel approach to modeling transformer
architectures using highly flexible non-autonomous neural ordinary differential
equations (ODEs). Our proposed model parameterizes all weights of attention and
feed-forward blocks through neural networks, expressing these weights as
functions of a continuous layer index. Through spectral analysis of the model's
dynamics, we uncover an increase in eigenvalue magnitude that challenges the
weight-sharing assumption prevalent in existing theoretical studies. We also
leverage the Lyapunov exponent to examine token-level sensitivity, enhancing
model interpretability. Our neural ODE transformer demonstrates performance
comparable to or better than vanilla transformers across various configurations
and datasets, while offering flexible fine-tuning capabilities that can adapt
to different architectural constraints.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:12:14 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Tong",
"Anh",
""
],
[
"Nguyen-Tang",
"Thanh",
""
],
[
"Lee",
"Dongeun",
""
],
[
"Nguyen",
"Duc",
""
],
[
"Tran",
"Toan",
""
],
[
"Hall",
"David",
""
],
[
"Kang",
"Cheongwoong",
""
],
[
"Choi",
"Jaesik",
""
]
] |
TITLE: Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive
Fine-tuning
ABSTRACT: Recent advancements in large language models (LLMs) based on transformer
architectures have sparked significant interest in understanding their inner
workings. In this paper, we introduce a novel approach to modeling transformer
architectures using highly flexible non-autonomous neural ordinary differential
equations (ODEs). Our proposed model parameterizes all weights of attention and
feed-forward blocks through neural networks, expressing these weights as
functions of a continuous layer index. Through spectral analysis of the model's
dynamics, we uncover an increase in eigenvalue magnitude that challenges the
weight-sharing assumption prevalent in existing theoretical studies. We also
leverage the Lyapunov exponent to examine token-level sensitivity, enhancing
model interpretability. Our neural ODE transformer demonstrates performance
comparable to or better than vanilla transformers across various configurations
and datasets, while offering flexible fine-tuning capabilities that can adapt
to different architectural constraints.
|
no_new_dataset
| 0.944331 |
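The central construction in the abstract above, weights expressed as functions of a continuous layer index, can be illustrated with a small hypernetwork. The PyTorch sketch below is a minimal reading of that idea; the two-layer MLP, the sizes, and the single weight matrix per step are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthHyperNet(nn.Module):
    """Block weights as a function of a continuous layer index t in [0, 1],
    produced by a small hypernetwork (a minimal sketch of the idea)."""
    def __init__(self, d_model=64, hidden=128):
        super().__init__()
        self.d = d_model
        # Maps the scalar depth t to a flattened d_model x d_model matrix.
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, d_model * d_model),
        )

    def forward(self, x, t):
        # W(t) varies smoothly with depth, like a non-autonomous ODE's dynamics.
        W = self.net(torch.tensor([[float(t)]])).view(self.d, self.d)
        return x + x @ W.T      # one residual "ODE step", step size folded in

net = DepthHyperNet()
x = torch.randn(8, 64)
for t in torch.linspace(0, 1, 12):  # 12 virtual layers share one hypernetwork
    x = net(x, t)
```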
2503.01330
|
Jian Yuan
|
Jian Yuan, Ziwei He, Haoli Bai, Jingwen Leng, Bo Jiang
|
WeightedKV: Attention Scores Weighted Key-Value Cache Merging for Large
Language Models
|
Accepted by ICASSP 2025
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) use a key-value (KV) cache to reduce redundant
computation in autoregressive generation. However, the KV cache size increases
linearly during generation, leading to excessive memory usage, especially for
long texts. Most KV cache compression methods evict the unimportant KV pairs to
maintain a fixed cache size, which leads to the permanent loss of tokens during
generation. However, singular value decomposition shows that \textit{values} do
not exhibit a strong low-rank property as \textit{keys} do, suggesting that
information is distributed more evenly across \textit{values}, in contrast to
its more redundant distribution within \textit{keys}. Therefore, methods that
evict both \textit{keys} and \textit{values} risk losing crucial information
and compromise context integrity, ultimately degrading the output quality. To
address this problem, we propose WeightedKV, a novel, training-free approach
that discards the \textit{keys} of less important tokens, while merging their
\textit{values} into neighboring tokens via a convex combination weighted by
their average attention scores. In this way, the retained \textit{keys} serve
as anchors that guide the generation process, while the merged \textit{values}
provide a rich contextual backdrop. We assess our method on four widely used
language modeling datasets, demonstrating superior performance compared to all
baseline methods, particularly with a lower budget ratio.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:12:34 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Yuan",
"Jian",
""
],
[
"He",
"Ziwei",
""
],
[
"Bai",
"Haoli",
""
],
[
"Leng",
"Jingwen",
""
],
[
"Jiang",
"Bo",
""
]
] |
TITLE: WeightedKV: Attention Scores Weighted Key-Value Cache Merging for Large
Language Models
ABSTRACT: Large Language Models (LLMs) use a key-value (KV) cache to reduce redundant
computation in autoregressive generation. However, the KV cache size increases
linearly during generation, leading to excessive memory usage, especially for
long texts. Most KV cache compression methods evict the unimportant KV pairs to
maintain a fixed cache size, which leads to the permanent loss of tokens during
generation. However, singular value decomposition shows that \textit{values} do
not exhibit a strong low-rank property as \textit{keys} do, suggesting that
information is distributed more evenly across \textit{values}, in contrast to
its more redundant distribution within \textit{keys}. Therefore, methods that
evict both \textit{keys} and \textit{values} risk losing crucial information
and compromise context integrity, ultimately degrading the output quality. To
address this problem, we propose WeightedKV, a novel, training-free approach
that discards the \textit{keys} of less important tokens, while merging their
\textit{values} into neighboring tokens via a convex combination weighted by
their average attention scores. In this way, the retained \textit{keys} serve
as anchors that guide the generation process, while the merged \textit{values}
provide a rich contextual backdrop. We assess our method on four widely used
language modeling datasets, demonstrating superior performance compared to all
baseline methods, particularly with a lower budget ratio.
|
no_new_dataset
| 0.942533 |
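The merging rule in the WeightedKV abstract above is concrete enough to sketch: discard the keys of unimportant tokens but fold their values into a retained neighbor as a convex combination weighted by average attention scores. The numpy sketch below follows that description; the neighbor choice and the weighting details are assumptions, not the paper's exact rule.

```python
import numpy as np

def weighted_merge(keys, values, attn, keep_ratio=0.5):
    """Attention-weighted KV merging in the spirit of WeightedKV.
    keys, values: (T, d) cached tensors; attn: (queries, T) attention map."""
    avg_attn = attn.mean(axis=0)                    # importance per cached token
    n_keep = max(1, int(len(keys) * keep_ratio))
    keep = np.sort(np.argsort(avg_attn)[-n_keep:])  # retained positions, in order
    new_vals = values[keep].copy()
    for i in np.setdiff1d(np.arange(len(keys)), keep):
        j = min(np.searchsorted(keep, i), len(keep) - 1)  # a retained neighbor
        # Convex combination weighted by the two tokens' average attention.
        w = avg_attn[i] / (avg_attn[i] + avg_attn[keep[j]] + 1e-12)
        new_vals[j] = (1 - w) * new_vals[j] + w * values[i]
    # Retained keys act as anchors; merged values keep the evicted context.
    return keys[keep], new_vals
```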
2503.01347
|
Ruikun Zhang
|
Ruikun Zhang, Yan Yang, Liyuan Pan
|
Spatial Transcriptomics Analysis of Spatially Dense Gene Expression
Prediction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatial transcriptomics (ST) measures gene expression at fine-grained spatial
resolution, offering insights into tissue molecular landscapes. Previous
methods for spatial gene expression prediction usually crop spots of interest
from pathology tissue slide images, and learn a model that maps each spot to a
single gene expression profile. However, this approach fundamentally loses spatial
resolution of gene expression: 1) each spot often contains multiple cells with
distinct gene expression; 2) spots are cropped at fixed resolutions, limiting
the ability to predict gene expression at varying spatial scales. To address
these limitations, this paper presents PixNet, a dense prediction network
capable of predicting spatially resolved gene expression across spots of
varying sizes and scales directly from pathology images. Different from
previous methods that map individual spots to gene expression values, we
generate a dense continuous gene expression map from the pathology image, and
aggregate values within spots of interest to predict the gene expression. Our
PixNet outperforms state-of-the-art methods on 3 common ST datasets, while
showing superior performance in predicting gene expression across multiple
spatial scales. The source code will be publicly available.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:38:01 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Zhang",
"Ruikun",
""
],
[
"Yang",
"Yan",
""
],
[
"Pan",
"Liyuan",
""
]
] |
TITLE: Spatial Transcriptomics Analysis of Spatially Dense Gene Expression
Prediction
ABSTRACT: Spatial transcriptomics (ST) measures gene expression at fine-grained spatial
resolution, offering insights into tissue molecular landscapes. Previous
methods for spatial gene expression prediction usually crop spots of interest
from pathology tissue slide images, and learn a model that maps each spot to a
single gene expression profile. However, this approach fundamentally loses spatial
resolution of gene expression: 1) each spot often contains multiple cells with
distinct gene expression; 2) spots are cropped at fixed resolutions, limiting
the ability to predict gene expression at varying spatial scales. To address
these limitations, this paper presents PixNet, a dense prediction network
capable of predicting spatially resolved gene expression across spots of
varying sizes and scales directly from pathology images. Different from
previous methods that map individual spots to gene expression values, we
generate a dense continuous gene expression map from the pathology image, and
aggregate values within spots of interest to predict the gene expression. Our
PixNet outperforms state-of-the-art methods on 3 common ST datasets, while
showing superior performance in predicting gene expression across multiple
spatial scales. The source code will be publicly available.
|
no_new_dataset
| 0.953923 |
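The aggregation step PixNet's abstract describes, averaging a dense per-pixel gene-expression map within each spot of interest, reduces to a few lines. In the sketch below the shapes are assumed: an (H, W, G) dense map and boolean (H, W) spot masks.

```python
import numpy as np

def spot_expression(dense_map, spot_masks):
    """Average a dense per-pixel gene-expression map (H, W, G) inside each
    spot mask to obtain spot-level expression profiles, (n_spots, G)."""
    return np.stack([dense_map[m].mean(axis=0) for m in spot_masks])
```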
2503.01352
|
Jia-Xin Zhuang
|
Xiaoyu Zheng, Jing Wen, Jiaxin Zhuang, Yao Du, Jing Cong, Limei Guo,
Chao He, Lin Luo, and Hao Chen
|
Diffusion-based Virtual Staining from Polarimetric Mueller Matrix
Imaging
| null | null | null | null |
eess.IV cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polarization, as a new optical imaging tool, has been explored to assist in
the diagnosis of pathology. Moreover, converting the polarimetric Mueller
Matrix (MM) to standardized stained images becomes a promising approach to help
pathologists interpret the results. However, existing methods for
polarization-based virtual staining are still in the early stage, and the
diffusion-based model, which has shown great potential in enhancing the
fidelity of the generated images, has not been studied yet. In this paper, a
Regulated Bridge Diffusion Model (RBDM) for polarization-based virtual staining
is proposed. RBDM utilizes the bidirectional bridge diffusion process to learn
the mapping from polarization images to other modalities such as H\&E and
fluorescence. To demonstrate the effectiveness of our model, we conduct
experiments on our manually collected dataset, which consists of 18,000 paired
polarization, fluorescence, and H\&E images, due to the unavailability of a
public dataset. The experimental results show that our model greatly outperforms
other benchmark methods. Our dataset and code will be released upon acceptance.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:45:27 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Zheng",
"Xiaoyu",
""
],
[
"Wen",
"Jing",
""
],
[
"Zhuang",
"Jiaxin",
""
],
[
"Du",
"Yao",
""
],
[
"Cong",
"Jing",
""
],
[
"Guo",
"Limei",
""
],
[
"He",
"Chao",
""
],
[
"Luo",
"Lin",
""
],
[
"Chen",
"Hao",
""
]
] |
TITLE: Diffusion-based Virtual Staining from Polarimetric Mueller Matrix
Imaging
ABSTRACT: Polarization, as a new optical imaging tool, has been explored to assist in
the diagnosis of pathology. Moreover, converting the polarimetric Mueller
Matrix (MM) to standardized stained images becomes a promising approach to help
pathologists interpret the results. However, existing methods for
polarization-based virtual staining are still in the early stage, and the
diffusion-based model, which has shown great potential in enhancing the
fidelity of the generated images, has not been studied yet. In this paper, a
Regulated Bridge Diffusion Model (RBDM) for polarization-based virtual staining
is proposed. RBDM utilizes the bidirectional bridge diffusion process to learn
the mapping from polarization images to other modalities such as H\&E and
fluorescence. To demonstrate the effectiveness of our model, we conduct
experiments on our manually collected dataset, which consists of 18,000 paired
polarization, fluorescence, and H\&E images, due to the unavailability of a
public dataset. The experimental results show that our model greatly outperforms
other benchmark methods. Our dataset and code will be released upon acceptance.
|
new_dataset
| 0.959687 |
2503.01353
|
Hazem Hesham Yousef Shalby
|
Hazem Hesham Yousef Shalby and Manuel Roveri
|
Dendron: Enhancing Human Activity Recognition with On-Device TinyML
Learning
|
Accepted to IEEE SSCI
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Human activity recognition (HAR) is a research field that employs Machine
Learning (ML) techniques to identify user activities. Recent studies have
prioritized the development of HAR solutions directly executed on wearable
devices, enabling on-device activity recognition. This approach is
supported by the Tiny Machine Learning (TinyML) paradigm, which integrates ML
within embedded devices with limited resources. However, existing approaches in
the field lack the capability for on-device learning of new HAR tasks,
particularly when supervised data are scarce. To address this limitation, our
paper introduces Dendron, a novel TinyML methodology designed to facilitate the
on-device learning of new tasks for HAR, even in conditions of limited
supervised data. Experimental results on two publicly available datasets and an
off-the-shelf device (STM32-NUCLEO-F401RE) show the effectiveness and
efficiency of the proposed solution.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:45:52 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Shalby",
"Hazem Hesham Yousef",
""
],
[
"Roveri",
"Manuel",
""
]
] |
TITLE: Dendron: Enhancing Human Activity Recognition with On-Device TinyML
Learning
ABSTRACT: Human activity recognition (HAR) is a research field that employs Machine
Learning (ML) techniques to identify user activities. Recent studies have
prioritized the development of HAR solutions directly executed on wearable
devices, enabling on-device activity recognition. This approach is
supported by the Tiny Machine Learning (TinyML) paradigm, which integrates ML
within embedded devices with limited resources. However, existing approaches in
the field lack the capability for on-device learning of new HAR tasks,
particularly when supervised data are scarce. To address this limitation, our
paper introduces Dendron, a novel TinyML methodology designed to facilitate the
on-device learning of new tasks for HAR, even in conditions of limited
supervised data. Experimental results on two publicly available datasets and an
off-the-shelf device (STM32-NUCLEO-F401RE) show the effectiveness and
efficiency of the proposed solution.
|
no_new_dataset
| 0.945801 |
2503.01362
|
Weixing Wei
|
Weixing Wei, Jiahao Zhao, Yulun Wu, Kazuyoshi Yoshii
|
Streaming Piano Transcription Based on Consistent Onset and Offset
Decoding with Sustain Pedal Detection
|
Accepted to ISMIR 2024
| null | null | null |
cs.SD cs.IR cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a streaming audio-to-MIDI piano transcription approach
that aims to sequentially translate a music signal into a sequence of note
onset and offset events. The sequence-to-sequence nature of this task may call
for the computationally-intensive transformer model for better performance,
which has recently been used for offline transcription benchmarks and could be
extended for streaming transcription with causal attention mechanisms. We
assume that the performance limitation of this naive approach lies in the
decoder. Although time-frequency features useful for onset detection are
considerably different from those for offset detection, the single decoder is
trained to output a mixed sequence of onset and offset events without guarantee
of the correspondence between the onset and offset events of the same note. To
overcome this limitation, we propose a streaming encoder-decoder model that
uses a convolutional encoder aggregating local acoustic features, followed by
an autoregressive Transformer decoder detecting a variable number of onset
events and another decoder detecting the offset events for the active pitches
with validation of the sustain pedal at each time frame. Experiments using the
MAESTRO dataset showed that the proposed streaming method performed comparably
with or even better than the state-of-the-art offline methods while
significantly reducing the computational cost.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 09:55:54 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Wei",
"Weixing",
""
],
[
"Zhao",
"Jiahao",
""
],
[
"Wu",
"Yulun",
""
],
[
"Yoshii",
"Kazuyoshi",
""
]
] |
TITLE: Streaming Piano Transcription Based on Consistent Onset and Offset
Decoding with Sustain Pedal Detection
ABSTRACT: This paper describes a streaming audio-to-MIDI piano transcription approach
that aims to sequentially translate a music signal into a sequence of note
onset and offset events. The sequence-to-sequence nature of this task may call
for the computationally-intensive transformer model for better performance,
which has recently been used for offline transcription benchmarks and could be
extended for streaming transcription with causal attention mechanisms. We
assume that the performance limitation of this naive approach lies in the
decoder. Although time-frequency features useful for onset detection are
considerably different from those for offset detection, the single decoder is
trained to output a mixed sequence of onset and offset events without guarantee
of the correspondence between the onset and offset events of the same note. To
overcome this limitation, we propose a streaming encoder-decoder model that
uses a convolutional encoder aggregating local acoustic features, followed by
an autoregressive Transformer decoder detecting a variable number of onset
events and another decoder detecting the offset events for the active pitches
with validation of the sustain pedal at each time frame. Experiments using the
MAESTRO dataset showed that the proposed streaming method performed comparably
with or even better than the state-of-the-art offline methods while
significantly reducing the computational cost.
|
no_new_dataset
| 0.945399 |
2503.01378
|
Artem Lykov
|
Artem Lykov, Valerii Serpiva, Muhammad Haris Khan, Oleg Sautenkov,
Artyom Myshlyaev, Grik Tadevosyan, Yasheerah Yaqoot, and Dzmitry Tsetserukou
|
CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time
Cognitive Task Solving and Reasoning in UAVs
|
Paper submitted to the IEEE conference
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces CognitiveDrone, a novel Vision-Language-Action (VLA)
model tailored for complex Unmanned Aerial Vehicle (UAV) tasks that demand
advanced cognitive abilities. Trained on a dataset comprising over 8,000
simulated flight trajectories across three key categories (Human Recognition,
Symbol Understanding, and Reasoning), the model generates real-time 4D action
commands based on first-person visual inputs and textual instructions. To
further enhance performance in intricate scenarios, we propose
CognitiveDrone-R1, which integrates an additional Vision-Language Model (VLM)
reasoning module to simplify task directives prior to high-frequency control.
Experimental evaluations using our open-source benchmark, CognitiveDroneBench,
reveal that while a racing-oriented model (RaceVLA) achieves an overall success
rate of 31.3%, the base CognitiveDrone model reaches 59.6%, and
CognitiveDrone-R1 attains a success rate of 77.2%. These results demonstrate
improvements of up to 30% in critical cognitive tasks, underscoring the
effectiveness of incorporating advanced reasoning capabilities into UAV control
systems. Our contributions include the development of a state-of-the-art VLA
model for UAV control and the introduction of the first dedicated benchmark for
assessing cognitive tasks in drone operations. The complete repository is
available at cognitivedrone.github.io
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 10:21:36 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Lykov",
"Artem",
""
],
[
"Serpiva",
"Valerii",
""
],
[
"Khan",
"Muhammad Haris",
""
],
[
"Sautenkov",
"Oleg",
""
],
[
"Myshlyaev",
"Artyom",
""
],
[
"Tadevosyan",
"Grik",
""
],
[
"Yaqoot",
"Yasheerah",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
TITLE: CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time
Cognitive Task Solving and Reasoning in UAVs
ABSTRACT: This paper introduces CognitiveDrone, a novel Vision-Language-Action (VLA)
model tailored for complex Unmanned Aerial Vehicle (UAV) tasks that demand
advanced cognitive abilities. Trained on a dataset comprising over 8,000
simulated flight trajectories across three key categories (Human Recognition,
Symbol Understanding, and Reasoning), the model generates real-time 4D action
commands based on first-person visual inputs and textual instructions. To
further enhance performance in intricate scenarios, we propose
CognitiveDrone-R1, which integrates an additional Vision-Language Model (VLM)
reasoning module to simplify task directives prior to high-frequency control.
Experimental evaluations using our open-source benchmark, CognitiveDroneBench,
reveal that while a racing-oriented model (RaceVLA) achieves an overall success
rate of 31.3%, the base CognitiveDrone model reaches 59.6%, and
CognitiveDrone-R1 attains a success rate of 77.2%. These results demonstrate
improvements of up to 30% in critical cognitive tasks, underscoring the
effectiveness of incorporating advanced reasoning capabilities into UAV control
systems. Our contributions include the development of a state-of-the-art VLA
model for UAV control and the introduction of the first dedicated benchmark for
assessing cognitive tasks in drone operations. The complete repository is
available at cognitivedrone.github.io
|
new_dataset
| 0.960137 |
2503.01386
|
Stefano Cresci
|
Leonardo Nizzoli, Marco Avvenuti, Maurizio Tesconi, Stefano Cresci
|
Geo-Semantic-Parsing: AI-powered geoparsing by traversing semantic
knowledge graphs
|
Postprint of the article published in the Decision Support Systems
journal. Please, cite accordingly
|
Decision Support Systems 136:113346, 2020
|
10.1016/j.dss.2020.113346
| null |
cs.CL cs.AI cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online social networks convey rich information about geospatial facets of
reality. However, in most cases, geographic information is not explicit and
structured, thus preventing its exploitation in real-time applications. We
address this limitation by introducing a novel geoparsing and geotagging
technique called Geo-Semantic-Parsing (GSP). GSP identifies location references
in free text and extracts the corresponding geographic coordinates. To reach
this goal, we employ a semantic annotator to identify relevant portions of the
input text and to link them to the corresponding entity in a knowledge graph.
Then, we devise and experiment with several efficient strategies for traversing
the knowledge graph, thus expanding the available set of information for the
geoparsing task. Finally, we exploit all available information for learning a
regression model that selects the best entity with which to geotag the input
text. We evaluate GSP on a well-known reference dataset including almost 10k
event-related tweets, achieving $F1=0.66$. We extensively compare our results
with those of 2 baselines and 3 state-of-the-art geoparsing techniques,
achieving the best performance. On the same dataset, competitors obtain $F1
\leq 0.55$. We conclude by providing in-depth analyses of our results, showing
that the overall superior performance of GSP is mainly due to a large
improvement in recall, with respect to existing techniques.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 10:30:23 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Nizzoli",
"Leonardo",
""
],
[
"Avvenuti",
"Marco",
""
],
[
"Tesconi",
"Maurizio",
""
],
[
"Cresci",
"Stefano",
""
]
] |
TITLE: Geo-Semantic-Parsing: AI-powered geoparsing by traversing semantic
knowledge graphs
ABSTRACT: Online social networks convey rich information about geospatial facets of
reality. However, in most cases, geographic information is not explicit and
structured, thus preventing its exploitation in real-time applications. We
address this limitation by introducing a novel geoparsing and geotagging
technique called Geo-Semantic-Parsing (GSP). GSP identifies location references
in free text and extracts the corresponding geographic coordinates. To reach
this goal, we employ a semantic annotator to identify relevant portions of the
input text and to link them to the corresponding entity in a knowledge graph.
Then, we devise and experiment with several efficient strategies for traversing
the knowledge graph, thus expanding the available set of information for the
geoparsing task. Finally, we exploit all available information for learning a
regression model that selects the best entity with which to geotag the input
text. We evaluate GSP on a well-known reference dataset including almost 10k
event-related tweets, achieving $F1=0.66$. We extensively compare our results
with those of 2 baselines and 3 state-of-the-art geoparsing techniques,
achieving the best performance. On the same dataset, competitors obtain $F1
\leq 0.55$. We conclude by providing in-depth analyses of our results, showing
that the overall superior performance of GSP is mainly due to a large
improvement in recall, with respect to existing techniques.
|
no_new_dataset
| 0.943191 |
2503.01389
|
Josef Urban
|
Thibault Gauthier and Josef Urban
|
Learning Conjecturing from Scratch
| null | null | null | null |
cs.AI cs.LG cs.LO cs.NE cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a self-learning approach for conjecturing induction predicates
on a dataset of 16197 problems derived from the OEIS. These problems are hard
for today's SMT and ATP systems because they require a combination of inductive
and arithmetical reasoning.
Starting from scratch, our approach consists of a feedback loop that iterates
between (i) training a neural translator to learn the correspondence between
the problems solved so far and the induction predicates useful for them, (ii)
using the trained neural system to generate many new induction predicates for
the problems, (iii) fast runs of the z3 prover attempting to prove the problems
using the generated predicates, (iv) using heuristics such as predicate size
and solution speed on the proved problems to choose the best predicates for the
next iteration of training.
The algorithm discovers on its own many interesting induction predicates,
ultimately solving 5565 problems, compared to 2265 problems solved by CVC5,
Vampire or Z3 in 60 seconds.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 10:39:38 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Gauthier",
"Thibault",
""
],
[
"Urban",
"Josef",
""
]
] |
TITLE: Learning Conjecturing from Scratch
ABSTRACT: We develop a self-learning approach for conjecturing induction predicates
on a dataset of 16197 problems derived from the OEIS. These problems are hard
for today's SMT and ATP systems because they require a combination of inductive
and arithmetical reasoning.
Starting from scratch, our approach consists of a feedback loop that iterates
between (i) training a neural translator to learn the correspondence between
the problems solved so far and the induction predicates useful for them, (ii)
using the trained neural system to generate many new induction predicates for
the problems, (iii) fast runs of the z3 prover attempting to prove the problems
using the generated predicates, (iv) using heuristics such as predicate size
and solution speed on the proved problems to choose the best predicates for the
next iteration of training.
The algorithm discovers on its own many interesting induction predicates,
ultimately solving 5565 problems, compared to 2265 problems solved by CVC5,
Vampire or Z3 in 60 seconds.
|
no_new_dataset
| 0.941061 |
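The four-step feedback loop in the abstract above maps naturally onto a short control skeleton. In the sketch below every callable (`translator`, `z3_prove`) is a placeholder, and predicate size as the selection heuristic follows step (iv); this is a schematic of the loop, not the authors' implementation.

```python
def conjecturing_loop(problems, translator, z3_prove, rounds=10):
    """Self-learning loop for induction-predicate conjecturing (schematic).
    translator and z3_prove are caller-supplied placeholders."""
    solved = {}                                   # problem -> best predicate
    for _ in range(rounds):
        # (i) train the neural translator on (problem, predicate) pairs so far
        translator.train(list(solved.items()))
        for p in problems:
            # (ii) generate candidate induction predicates for the problem
            for pred in translator.generate(p, n=32):
                # (iii) fast z3 attempt to prove the problem with the predicate
                if z3_prove(p, pred):
                    best = solved.get(p)
                    # (iv) heuristic: prefer smaller predicates for training
                    if best is None or len(pred) < len(best):
                        solved[p] = pred
    return solved
```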
2503.01394
|
Liu Yunpeng
|
Liu Yan, Liu Yunpeng, Zhao Liang
|
Enhancing Social Media Rumor Detection: A Semantic and Graph Neural
Network Approach for the 2024 Global Election
| null | null | null | null |
cs.SI cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The development of social media platforms has revolutionized the speed and
manner in which information is disseminated, leading to both beneficial and
detrimental effects on society. While these platforms facilitate rapid
communication, they also accelerate the spread of rumors and extremist speech,
impacting public perception and behavior significantly. This issue is
particularly pronounced during election periods, where the influence of social
media on election outcomes has become a matter of global concern. Against the
backdrop of the unprecedented number of elections in 2024, the election
ecosystem has encountered unprecedented challenges. This study addresses the
urgent need for effective rumor detection on social media by proposing a novel
method that combines semantic analysis with graph neural networks. We have
meticulously collected a dataset from PolitiFact and Twitter, focusing on
politically relevant rumors. Our approach involves semantic analysis using a
fine-tuned BERT model to vectorize text content and construct a directed graph
where tweets and comments are nodes, and interactions are edges. The core of
our method is a graph neural network, SAGEWithEdgeAttention, which extends the
GraphSAGE model by incorporating first-order differences as edge attributes and
applying an attention mechanism to enhance feature aggregation. This innovative
approach allows for the fine-grained analysis of the complex social network
structure, improving rumor detection accuracy. The study concludes that our
method significantly outperforms traditional content analysis and time-based
models, offering a theoretically sound and practically efficient solution.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 10:49:33 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Yan",
"Liu",
""
],
[
"Yunpeng",
"Liu",
""
],
[
"Liang",
"Zhao",
""
]
] |
TITLE: Enhancing Social Media Rumor Detection: A Semantic and Graph Neural
Network Approach for the 2024 Global Election
ABSTRACT: The development of social media platforms has revolutionized the speed and
manner in which information is disseminated, leading to both beneficial and
detrimental effects on society. While these platforms facilitate rapid
communication, they also accelerate the spread of rumors and extremist speech,
impacting public perception and behavior significantly. This issue is
particularly pronounced during election periods, where the influence of social
media on election outcomes has become a matter of global concern. With the
unprecedented number of elections in 2024, against this backdrop, the election
ecosystem has encountered unprecedented challenges. This study addresses the
urgent need for effective rumor detection on social media by proposing a novel
method that combines semantic analysis with graph neural networks. We have
meticulously collected a dataset from PolitiFact and Twitter, focusing on
politically relevant rumors. Our approach involves semantic analysis using a
fine-tuned BERT model to vectorize text content and construct a directed graph
where tweets and comments are nodes, and interactions are edges. The core of
our method is a graph neural network, SAGEWithEdgeAttention, which extends the
GraphSAGE model by incorporating first-order differences as edge attributes and
applying an attention mechanism to enhance feature aggregation. This innovative
approach allows for the fine-grained analysis of the complex social network
structure, improving rumor detection accuracy. The study concludes that our
method significantly outperforms traditional content analysis and time-based
models, offering a theoretically sound and practically efficient solution.
|
no_new_dataset
| 0.927495 |
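The abstract above specifies SAGEWithEdgeAttention as GraphSAGE extended with edge attributes (first-order differences) feeding an attention mechanism over neighbor aggregation. The plain-PyTorch layer below sketches that pattern; the dimensions, the scoring layer, and the softmax-style normalization are illustrative assumptions rather than the paper's exact layer.

```python
import torch
import torch.nn as nn

class SAGEWithEdgeAttention(nn.Module):
    """GraphSAGE-style layer whose neighbor aggregation is attention-weighted,
    with edge attributes (e.g. first-order differences) entering the scores."""
    def __init__(self, d_in, d_edge, d_out):
        super().__init__()
        self.att = nn.Linear(2 * d_in + d_edge, 1)  # edge-aware attention score
        self.lin = nn.Linear(2 * d_in, d_out)       # SAGE-style update

    def forward(self, x, edge_index, edge_attr):
        src, dst = edge_index                        # each of shape (E,)
        # Score each edge from (target, source, edge-attribute) features.
        logits = self.att(torch.cat([x[dst], x[src], edge_attr], dim=-1)).squeeze(-1)
        w = torch.exp(logits)                        # unnormalized attention
        # Softmax-style weighted mean over each node's incoming messages.
        num = torch.zeros_like(x).index_add_(0, dst, w.unsqueeze(-1) * x[src])
        den = torch.zeros(x.size(0), 1).index_add_(0, dst, w.unsqueeze(-1)) + 1e-12
        agg = num / den
        return torch.relu(self.lin(torch.cat([x, agg], dim=-1)))
```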
2503.01396
|
Yash Sharma
|
Yash Sharma and Anshul Arora
|
CorrNetDroid: Android Malware Detector leveraging a Correlation-based
Feature Selection for Network Traffic features
| null | null | null | null |
cs.CR cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous mobile operating systems exist in the market, but Android remains the
user's choice. Meanwhile, its growing popularity has also attracted malware
developers. Researchers have proposed various static solutions for Android
malware detection. However, stealthier malware evades static analysis. This
raises the need for a robust Android malware detection system capable of
dealing with advanced threats and overcoming the shortcomings of static
analysis.
Hence, this work proposes a dynamic analysis-based Android malware detection
system, CorrNetDroid, that works over network traffic flows. Many traffic
features exhibit overlapping ranges in normal and malware datasets. Therefore,
we first rank the features using two statistical measures, crRelevance and
Normalized Mean Residue Similarity (NMRS), to assess feature-class and
feature-feature correlations. Thereafter, we introduce a novel
correlation-based feature selection algorithm that applies NMRS on crRelevance
rankings to identify the optimal feature subset for Android malware detection.
Experimental results highlight that our model effectively reduces the feature
set while detecting Android malware with 99.50 percent accuracy when
considering only two network traffic features. Furthermore, our experiments
demonstrate that the NMRS-based algorithm on crRelevance rankings outperforms
statistical tests such as chi-square, ANOVA, Mann-Whitney U test, and
Kruskal-Wallis test. In addition, our model surpasses various state-of-the-art
Android malware detection techniques in terms of detection accuracy.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 10:52:34 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Sharma",
"Yash",
""
],
[
"Arora",
"Anshul",
""
]
] |
TITLE: CorrNetDroid: Android Malware Detector leveraging a Correlation-based
Feature Selection for Network Traffic features
ABSTRACT: Numerous mobile operating systems exist in the market, but Android remains the
user's choice. Meanwhile, its growing popularity has also attracted malware
developers. Researchers have proposed various static solutions for Android
malware detection. However, stealthier malware evades static analysis. This
raises the need for a robust Android malware detection system capable of
dealing with advanced threats and overcoming the shortcomings of static
analysis.
Hence, this work proposes a dynamic analysis-based Android malware detection
system, CorrNetDroid, that works over network traffic flows. Many traffic
features exhibit overlapping ranges in normal and malware datasets. Therefore,
we first rank the features using two statistical measures, crRelevance and
Normalized Mean Residue Similarity (NMRS), to assess feature-class and
feature-feature correlations. Thereafter, we introduce a novel
correlation-based feature selection algorithm that applies NMRS on crRelevance
rankings to identify the optimal feature subset for Android malware detection.
Experimental results highlight that our model effectively reduces the feature
set while detecting Android malware with 99.50 percent accuracy when
considering only two network traffic features. Furthermore, our experiments
demonstrate that the NMRS-based algorithm on crRelevance rankings outperforms
statistical tests such as chi-square, ANOVA, Mann-Whitney U test, and
Kruskal-Wallis test. In addition, our model surpasses various state-of-the-art
Android malware detection techniques in terms of detection accuracy.
|
no_new_dataset
| 0.947527 |
2503.01416
|
Ramanathan Rajendiran
|
Ramanathan Rajendiran, Debaditya Roy, Basura Fernando
|
Learning to Generate Long-term Future Narrations Describing Activities
of Daily Living
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Anticipating future events is crucial for various application domains such as
healthcare, smart home technology, and surveillance. Narrative event
descriptions provide context-rich information, enhancing a system's future
planning and decision-making capabilities. We propose a novel task:
$\textit{long-term future narration generation}$, which extends beyond
traditional action anticipation by generating detailed narrations of future
daily activities. We introduce a visual-language model, ViNa, specifically
designed to address this challenging task. ViNa integrates long-term videos and
corresponding narrations to generate a sequence of future narrations that
predict subsequent events and actions over extended time horizons. ViNa extends
existing multimodal models that perform only short-term predictions or describe
observed videos by generating long-term future narrations for a broader range
of daily activities. We also present a novel downstream application that
leverages the generated narrations called future video retrieval to help users
improve planning for a task by visualizing the future. We evaluate future
narration generation on the largest egocentric dataset Ego4D.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:10:49 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Rajendiran",
"Ramanathan",
""
],
[
"Roy",
"Debaditya",
""
],
[
"Fernando",
"Basura",
""
]
] |
TITLE: Learning to Generate Long-term Future Narrations Describing Activities
of Daily Living
ABSTRACT: Anticipating future events is crucial for various application domains such as
healthcare, smart home technology, and surveillance. Narrative event
descriptions provide context-rich information, enhancing a system's future
planning and decision-making capabilities. We propose a novel task:
$\textit{long-term future narration generation}$, which extends beyond
traditional action anticipation by generating detailed narrations of future
daily activities. We introduce a visual-language model, ViNa, specifically
designed to address this challenging task. ViNa integrates long-term videos and
corresponding narrations to generate a sequence of future narrations that
predict subsequent events and actions over extended time horizons. ViNa extends
existing multimodal models that perform only short-term predictions or describe
observed videos by generating long-term future narrations for a broader range
of daily activities. We also present a novel downstream application that
leverages the generated narrations called future video retrieval to help users
improve planning for a task by visualizing the future. We evaluate future
narration generation on the largest egocentric dataset Ego4D.
|
new_dataset
| 0.535432 |
2503.01438
|
Zhiheng Li
|
Zhiheng Li, Yubo Cui, Ningyuan Huang, Chenglin Pang, Zheng Fang
|
CAO-RONet: A Robust 4D Radar Odometry with Exploring More Information
from Low-Quality Points
|
7 pages, 7 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, 4D millimetre-wave radar exhibits more stable perception ability
than LiDAR and camera under adverse conditions (e.g. rain and fog). However,
low-quality radar points hinder its application, especially the odometry task,
which requires dense and accurate matching. To fully explore the potential of
4D radar, we introduce a learning-based odometry framework, enabling robust
ego-motion estimation from finite and uncertain geometry information. First,
for sparse radar points, we propose a local completion to supplement missing
structures and provide a denser guideline for aligning two frames. Then, a
context-aware association with a hierarchical structure flexibly matches points
of different scales aided by feature similarity, and improves local matching
consistency through correlation balancing. Finally, we present a window-based
optimizer that uses historical priors to establish a coupling state estimation
and correct errors of inter-frame matching. The superiority of our algorithm is
confirmed on the View-of-Delft dataset, achieving around a 50% performance
improvement over previous approaches and delivering accuracy on par with LiDAR
odometry. Our code will be available.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:44:49 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Li",
"Zhiheng",
""
],
[
"Cui",
"Yubo",
""
],
[
"Huang",
"Ningyuan",
""
],
[
"Pang",
"Chenglin",
""
],
[
"Fang",
"Zheng",
""
]
] |
TITLE: CAO-RONet: A Robust 4D Radar Odometry with Exploring More Information
from Low-Quality Points
ABSTRACT: Recently, 4D millimetre-wave radar exhibits more stable perception ability
than LiDAR and camera under adverse conditions (e.g. rain and fog). However,
low-quality radar points hinder its application, especially the odometry task,
which requires dense and accurate matching. To fully explore the potential of
4D radar, we introduce a learning-based odometry framework, enabling robust
ego-motion estimation from finite and uncertain geometry information. First,
for sparse radar points, we propose a local completion to supplement missing
structures and provide a denser guideline for aligning two frames. Then, a
context-aware association with a hierarchical structure flexibly matches points
of different scales aided by feature similarity, and improves local matching
consistency through correlation balancing. Finally, we present a window-based
optimizer that uses historical priors to establish a coupling state estimation
and correct errors of inter-frame matching. The superiority of our algorithm is
confirmed on the View-of-Delft dataset, achieving around a 50% performance
improvement over previous approaches and delivering accuracy on par with LiDAR
odometry. Our code will be available.
|
no_new_dataset
| 0.945045 |
2503.01449
|
Ting Zhang
|
Ting Zhang, Chengran Yang, Yindu Su, Martin Weyssow, Hung Nguyen, Tan
Bui, Hong Jin Kang, Yikun Li, Eng Lieh Ouh, Lwin Khin Shar, David Lo
|
Benchmarking Large Language Models for Multi-Language Software
Vulnerability Detection
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in generative AI have led to the widespread adoption of
large language models (LLMs) in software engineering, addressing numerous
long-standing challenges. However, a comprehensive study examining the
capabilities of LLMs in software vulnerability detection (SVD), a crucial
aspect of software security, is currently lacking. Existing research primarily
focuses on evaluating LLMs using C/C++ datasets. It typically explores only one
or two strategies among prompt engineering, instruction tuning, and sequence
classification fine-tuning for open-source LLMs. Consequently, there is a
significant knowledge gap regarding the effectiveness of diverse LLMs in
detecting vulnerabilities across various programming languages. To address this
knowledge gap, we present a comprehensive empirical study evaluating the
performance of LLMs on the SVD task. We have compiled a comprehensive dataset
comprising 8,260 vulnerable functions in Python, 7,505 in Java, and 28,983 in
JavaScript. We assess five open-source LLMs using multiple approaches,
including prompt engineering, instruction tuning, and sequence classification
fine-tuning. These LLMs are benchmarked against five fine-tuned small language
models and two open-source static application security testing tools.
Furthermore, we explore two avenues to improve LLM performance on SVD: a) Data
perspective: Retraining models using downsampled balanced datasets. b) Model
perspective: Investigating ensemble learning methods that combine predictions
from multiple LLMs. Our comprehensive experiments demonstrate that SVD remains
a challenging task for LLMs. This study provides a thorough understanding of
the role of LLMs in SVD and offers practical insights for future advancements
in leveraging generative AI to enhance software security practices.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:56:00 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Zhang",
"Ting",
""
],
[
"Yang",
"Chengran",
""
],
[
"Su",
"Yindu",
""
],
[
"Weyssow",
"Martin",
""
],
[
"Nguyen",
"Hung",
""
],
[
"Bui",
"Tan",
""
],
[
"Kang",
"Hong Jin",
""
],
[
"Li",
"Yikun",
""
],
[
"Ouh",
"Eng Lieh",
""
],
[
"Shar",
"Lwin Khin",
""
],
[
"Lo",
"David",
""
]
] |
TITLE: Benchmarking Large Language Models for Multi-Language Software
Vulnerability Detection
ABSTRACT: Recent advancements in generative AI have led to the widespread adoption of
large language models (LLMs) in software engineering, addressing numerous
long-standing challenges. However, a comprehensive study examining the
capabilities of LLMs in software vulnerability detection (SVD), a crucial
aspect of software security, is currently lacking. Existing research primarily
focuses on evaluating LLMs using C/C++ datasets. It typically explores only one
or two strategies among prompt engineering, instruction tuning, and sequence
classification fine-tuning for open-source LLMs. Consequently, there is a
significant knowledge gap regarding the effectiveness of diverse LLMs in
detecting vulnerabilities across various programming languages. To address this
knowledge gap, we present a comprehensive empirical study evaluating the
performance of LLMs on the SVD task. We have compiled a comprehensive dataset
comprising 8,260 vulnerable functions in Python, 7,505 in Java, and 28,983 in
JavaScript. We assess five open-source LLMs using multiple approaches,
including prompt engineering, instruction tuning, and sequence classification
fine-tuning. These LLMs are benchmarked against five fine-tuned small language
models and two open-source static application security testing tools.
Furthermore, we explore two avenues to improve LLM performance on SVD: a) Data
perspective: Retraining models using downsampled balanced datasets. b) Model
perspective: Investigating ensemble learning methods that combine predictions
from multiple LLMs. Our comprehensive experiments demonstrate that SVD remains
a challenging task for LLMs. This study provides a thorough understanding of
the role of LLMs in SVD and offers practical insights for future advancements
in leveraging generative AI to enhance software security practices.
|
new_dataset
| 0.960063 |
2503.01453
|
Pankaj Choudhury
|
Pankaj Choudhury, Yogesh Aggarwal, Prithwijit Guha, Sukumar Nandi
|
AC-Lite: A Lightweight Image Captioning Model for Low-Resource Assamese
Language
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks have significantly advanced AI applications, yet their
real-world adoption remains constrained by high computational demands, hardware
limitations, and accessibility challenges. In image captioning, many
state-of-the-art models have achieved impressive performances while relying on
resource-intensive architectures. This makes them impractical for deployment on
resource-constrained devices. This limitation is particularly noticeable for
applications involving low-resource languages. We demonstrate the case of image
captioning in the Assamese language, where the lack of effective, scalable
systems can restrict the accessibility of AI-based solutions for native Assamese
speakers. This work presents AC-Lite, a computationally efficient model for
image captioning in the low-resource Assamese language. AC-Lite reduces computational
requirements by replacing computation-heavy visual feature extractors like
FasterRCNN with lightweight ShuffleNetv2x1.5. Additionally, Gated Recurrent
Units (GRUs) are used as the caption decoder to further reduce computational
demands and model parameters. Furthermore, the integration of bilinear
attention enhances the model's overall performance. AC-Lite can operate on edge
devices, thereby eliminating the need for computation on remote servers. The
proposed AC-Lite model achieves 82.3 CIDEr score on the COCO-AC dataset with
1.098 GFLOPs and 25.65M parameters.
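A rough PyTorch sketch of the architecture as described (a ShuffleNetV2-x1.5 feature extractor, a GRU caption decoder, and bilinear attention over spatial features) is given below; the wiring, dimensions, and the torchvision weights flag are assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_5

class ACLiteSketch(nn.Module):
    """Lightweight captioner skeleton: ShuffleNetV2-x1.5 features, a GRU
    decoder, and bilinear attention over spatial positions (hypothetical)."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, feat_dim=1024):
        super().__init__()
        backbone = shufflenet_v2_x1_5(weights="DEFAULT")
        # Keep everything up to conv5; output is a [B, 1024, H, W] feature map.
        self.cnn = nn.Sequential(backbone.conv1, backbone.maxpool,
                                 backbone.stage2, backbone.stage3,
                                 backbone.stage4, backbone.conv5)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRUCell(embed_dim + feat_dim, hidden_dim)
        self.W_bilinear = nn.Parameter(torch.randn(hidden_dim, feat_dim) * 0.01)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def attend(self, h, feats):
        # Bilinear attention: score(h, v_j) = h^T W v_j over spatial positions.
        scores = torch.einsum("bh,hf,bjf->bj", h, self.W_bilinear, feats)
        alpha = torch.softmax(scores, dim=-1)
        return torch.einsum("bj,bjf->bf", alpha, feats)

    def forward(self, images, captions):
        feats = self.cnn(images).flatten(2).transpose(1, 2)   # [B, H*W, 1024]
        h = feats.new_zeros(images.size(0), self.gru.hidden_size)
        logits = []
        for t in range(captions.size(1)):
            ctx = self.attend(h, feats)
            h = self.gru(torch.cat([self.embed(captions[:, t]), ctx], -1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)
```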
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 12:07:52 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Choudhury",
"Pankaj",
""
],
[
"Aggarwal",
"Yogesh",
""
],
[
"Guha",
"Prithwijit",
""
],
[
"Nandi",
"Sukumar",
""
]
] |
TITLE: AC-Lite: A Lightweight Image Captioning Model for Low-Resource Assamese
Language
ABSTRACT: Neural networks have significantly advanced AI applications, yet their
real-world adoption remains constrained by high computational demands, hardware
limitations, and accessibility challenges. In image captioning, many
state-of-the-art models have achieved impressive performances while relying on
resource-intensive architectures. This makes them impractical for deployment on
resource-constrained devices. This limitation is particularly noticeable for
applications involving low-resource languages. We demonstrate the case of image
captioning in the Assamese language, where the lack of effective, scalable
systems can restrict the accessibility of AI-based solutions for native Assamese
speakers. This work presents AC-Lite, a computationally efficient model for
image captioning in the low-resource Assamese language. AC-Lite reduces computational
requirements by replacing computation-heavy visual feature extractors like
FasterRCNN with lightweight ShuffleNetv2x1.5. Additionally, Gated Recurrent
Units (GRUs) are used as the caption decoder to further reduce computational
demands and model parameters. Furthermore, the integration of bilinear
attention enhances the model's overall performance. AC-Lite can operate on edge
devices, thereby eliminating the need for computation on remote servers. The
proposed AC-Lite model achieves 82.3 CIDEr score on the COCO-AC dataset with
1.098 GFLOPs and 25.65M parameters.
|
no_new_dataset
| 0.946695 |
2503.01506
|
Xiangyu Xi
|
Xiangyu Xi, Deyang Kong, Jian Yang, Jiawei Yang, Zhengyu Chen, Wei
Wang, Jingang Wang, Xunliang Cai, Shikun Zhang, Wei Ye
|
SampleMix: A Sample-wise Pre-training Data Mixing Strategy by
Coordinating Data Quality and Diversity
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Existing pretraining data mixing methods for large language models (LLMs)
typically follow a domain-wise methodology, a top-down process that first
determines domain weights and then performs uniform data sampling across each
domain. However, these approaches neglect significant inter-domain overlaps and
commonalities, failing to control the global diversity of the constructed
training dataset. Further, uniform sampling within domains ignores fine-grained
sample-specific features, potentially leading to suboptimal data distribution.
To address these shortcomings, we propose a novel sample-wise data mixture
approach based on a bottom-up paradigm. This method performs global
cross-domain sampling by systematically evaluating the quality and diversity of
each sample, thereby dynamically determining the optimal domain distribution.
Comprehensive experiments across multiple downstream tasks and perplexity
assessments demonstrate that SampleMix surpasses existing domain-based methods.
Meanwhile, SampleMix requires 1.4x to 2.1x training steps to achieve the
baselines' performance, highlighting the substantial potential of SampleMix to
optimize pre-training data.
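As a toy illustration of the bottom-up paradigm (score every sample globally on quality and diversity, then draw the training set from the induced distribution instead of sampling uniformly within domains), consider the sketch below; quality_fn, diversity_fn, and alpha are hypothetical stand-ins for the paper's estimators.

```python
import numpy as np

def sample_wise_mix(samples, quality_fn, diversity_fn, budget, alpha=0.5, seed=0):
    """Score every sample with a convex mix of quality and diversity, then
    draw a training set of size `budget` from the induced distribution."""
    rng = np.random.default_rng(seed)
    q = np.array([quality_fn(s) for s in samples], dtype=float)
    d = np.array([diversity_fn(s) for s in samples], dtype=float)
    score = alpha * q + (1.0 - alpha) * d
    p = score / score.sum()                          # global sampling distribution
    idx = rng.choice(len(samples), size=budget, replace=False, p=p)
    return [samples[i] for i in idx]
```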
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:22:11 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Xi",
"Xiangyu",
""
],
[
"Kong",
"Deyang",
""
],
[
"Yang",
"Jian",
""
],
[
"Yang",
"Jiawei",
""
],
[
"Chen",
"Zhengyu",
""
],
[
"Wang",
"Wei",
""
],
[
"Wang",
"Jingang",
""
],
[
"Cai",
"Xunliang",
""
],
[
"Zhang",
"Shikun",
""
],
[
"Ye",
"Wei",
""
]
] |
TITLE: SampleMix: A Sample-wise Pre-training Data Mixing Strategy by
Coordinating Data Quality and Diversity
ABSTRACT: Existing pretraining data mixing methods for large language models (LLMs)
typically follow a domain-wise methodology, a top-down process that first
determines domain weights and then performs uniform data sampling across each
domain. However, these approaches neglect significant inter-domain overlaps and
commonalities, failing to control the global diversity of the constructed
training dataset. Further, uniform sampling within domains ignores fine-grained
sample-specific features, potentially leading to suboptimal data distribution.
To address these shortcomings, we propose a novel sample-wise data mixture
approach based on a bottom-up paradigm. This method performs global
cross-domain sampling by systematically evaluating the quality and diversity of
each sample, thereby dynamically determining the optimal domain distribution.
Comprehensive experiments across multiple downstream tasks and perplexity
assessments demonstrate that SampleMix surpasses existing domain-based methods.
Meanwhile, SampleMix requires 1.4x to 2.1x training steps to achieve the
baselines' performance, highlighting the substantial potential of SampleMix to
optimize pre-training data.
|
no_new_dataset
| 0.946794 |
2503.01510
|
Alexander Baranov
|
Alexander Baranov, Anna Palatkina, Yulia Makovka, Pavel Braslavski
|
KoWit-24: A Richly Annotated Dataset of Wordplay in News Headlines
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present KoWit-24, a dataset with fine-grained annotation of wordplay in
2,700 Russian news headlines. KoWit-24 annotations include the presence of
wordplay, its type, wordplay anchors, and words/phrases the wordplay refers to.
Unlike the majority of existing humor collections of canned jokes, KoWit-24
provides wordplay contexts -- each headline is accompanied by the news lead and
summary. The most common type of wordplay in the dataset is the transformation
of collocations, idioms, and named entities -- the mechanism that has been
underrepresented in previous humor datasets. Our experiments with five LLMs
show that there is ample room for improvement in wordplay detection and
interpretation tasks. The dataset and evaluation scripts are available at
https://github.com/Humor-Research/KoWit-24
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:24:25 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Baranov",
"Alexander",
""
],
[
"Palatkina",
"Anna",
""
],
[
"Makovka",
"Yulia",
""
],
[
"Braslavski",
"Pavel",
""
]
] |
TITLE: KoWit-24: A Richly Annotated Dataset of Wordplay in News Headlines
ABSTRACT: We present KoWit-24, a dataset with fine-grained annotation of wordplay in
2,700 Russian news headlines. KoWit-24 annotations include the presence of
wordplay, its type, wordplay anchors, and words/phrases the wordplay refers to.
Unlike the majority of existing humor collections of canned jokes, KoWit-24
provides wordplay contexts -- each headline is accompanied by the news lead and
summary. The most common type of wordplay in the dataset is the transformation
of collocations, idioms, and named entities -- the mechanism that has been
underrepresented in previous humor datasets. Our experiments with five LLMs
show that there is ample room for improvement in wordplay detection and
interpretation tasks. The dataset and evaluation scripts are available at
https://github.com/Humor-Research/KoWit-24
|
new_dataset
| 0.959231 |
2503.01513
|
Katerina Korre
|
Katerina Korre, Dimitris Tsirmpas, Nikos Gkoumas, Emma Cabalé,
Dionysis Kontarinis, Danai Myrtzani, Theodoros Evgeniou, Ion Androutsopoulos,
John Pavlopoulos
|
Evaluation and Facilitation of Online Discussions in the LLM Era: A
Survey
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a survey of methods for assessing and enhancing the quality of
online discussions, focusing on the potential of Large Language Models (LLMs).
While online discourses aim, at least in theory, to foster mutual
understanding, they often devolve into harmful exchanges, such as hate speech,
threatening social cohesion and democratic values. Recent advancements in LLMs
enable facilitation agents that not only moderate content, but also actively
improve the quality of interactions. Our survey synthesizes ideas from Natural
Language Processing (NLP) and Social Sciences to provide (a) a new taxonomy on
discussion quality evaluation, (b) an overview of intervention and facilitation
strategies, along with a new taxonomy on conversation facilitation datasets,
(c) an LLM-oriented roadmap of good practices and future research directions,
from technological and societal perspectives.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:26:01 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Korre",
"Katerina",
""
],
[
"Tsirmpas",
"Dimitris",
""
],
[
"Gkoumas",
"Nikos",
""
],
[
"Cabalé",
"Emma",
""
],
[
"Kontarinis",
"Dionysis",
""
],
[
"Myrtzani",
"Danai",
""
],
[
"Evgeniou",
"Theodoros",
""
],
[
"Androutsopoulos",
"Ion",
""
],
[
"Pavlopoulos",
"John",
""
]
] |
TITLE: Evaluation and Facilitation of Online Discussions in the LLM Era: A
Survey
ABSTRACT: We present a survey of methods for assessing and enhancing the quality of
online discussions, focusing on the potential of Large Language Models (LLMs).
While online discourses aim, at least in theory, to foster mutual
understanding, they often devolve into harmful exchanges, such as hate speech,
threatening social cohesion and democratic values. Recent advancements in LLMs
enable facilitation agents that not only moderate content, but also actively
improve the quality of interactions. Our survey synthesizes ideas from Natural
Language Processing (NLP) and Social Sciences to provide (a) a new taxonomy on
discussion quality evaluation, (b) an overview of intervention and facilitation
strategies, along with a new taxonomy on conversation facilitation datasets,
(c) an LLM-oriented roadmap of good practices and future research directions,
from technological and societal perspectives.
|
no_new_dataset
| 0.943452 |
2503.01531
|
Songlin Dong
|
Songlin Dong, Zhengdong Zhou, Chenhao Ding, Xinyuan Gao, Alex Kot,
Yihong Gong
|
Diversity Covariance-Aware Prompt Learning for Vision-Language Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prompt tuning can further enhance the performance of visual-language models
across various downstream tasks (e.g., few-shot learning), enabling them to
better adapt to specific applications and needs. In this paper, we present a
Diversity Covariance-Aware framework that learns distributional information
from the data to enhance the few-shot ability of the prompt model. First, we
propose a covariance-aware method that models the covariance relationships
between visual features and uses anisotropic Mahalanobis distance, instead of
the suboptimal cosine distance, to measure the similarity between two
modalities. We rigorously derive and prove the validity of this modeling
process. Then, we propose the diversity-aware method, which learns multiple
diverse soft prompts to capture different attributes of categories and aligns
them independently with visual modalities. This method achieves multi-centered
covariance modeling, leading to more diverse decision boundaries. Extensive
experiments on 11 datasets in various tasks demonstrate the effectiveness of
our method.
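A small sketch of the covariance-aware scoring step, replacing cosine similarity with an anisotropic Mahalanobis distance between visual features and class prototypes, might look like this; the shapes and the regularized inverse are assumptions.

```python
import torch

def mahalanobis_logits(text_protos, visual_feats, cov):
    """Score visual features against class prototypes with an anisotropic
    Mahalanobis distance; returns negative squared distances as logits.
    text_protos: [C, D], visual_feats: [N, D], cov: [D, D] (assumed shapes)."""
    eye = torch.eye(cov.size(0), device=cov.device)
    prec = torch.linalg.inv(cov + 1e-4 * eye)        # regularized precision matrix
    diff = visual_feats[:, None, :] - text_protos[None, :, :]   # [N, C, D]
    d2 = torch.einsum("ncd,de,nce->nc", diff, prec, diff)       # squared distances
    return -d2
```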
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:40:43 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Dong",
"Songlin",
""
],
[
"Zhou",
"Zhengdong",
""
],
[
"Ding",
"Chenhao",
""
],
[
"Gao",
"Xinyuan",
""
],
[
"Kot",
"Alex",
""
],
[
"Gong",
"Yihong",
""
]
] |
TITLE: Diversity Covariance-Aware Prompt Learning for Vision-Language Models
ABSTRACT: Prompt tuning can further enhance the performance of visual-language models
across various downstream tasks (e.g., few-shot learning), enabling them to
better adapt to specific applications and needs. In this paper, we present a
Diversity Covariance-Aware framework that learns distributional information
from the data to enhance the few-shot ability of the prompt model. First, we
propose a covariance-aware method that models the covariance relationships
between visual features and uses anisotropic Mahalanobis distance, instead of
the suboptimal cosine distance, to measure the similarity between two
modalities. We rigorously derive and prove the validity of this modeling
process. Then, we propose the diversity-aware method, which learns multiple
diverse soft prompts to capture different attributes of categories and aligns
them independently with visual modalities. This method achieves multi-centered
covariance modeling, leading to more diverse decision boundaries. Extensive
experiments on 11 datasets in various tasks demonstrate the effectiveness of
our method.
|
no_new_dataset
| 0.948585 |
2503.01542
|
Yizhuo Ding
|
Yizhuo Ding, Xinwei Sun, Yanwei Fu, Guosheng Hu
|
Revisiting Large Language Model Pruning using Neuron Semantic
Attribution
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Model pruning techniques are vital for accelerating large language models by
reducing their size and computational requirements. However, the
generalizability of existing pruning methods across diverse datasets and tasks
remains unclear. Thus, we conduct extensive evaluations on 24 datasets and 4
tasks using popular pruning methods. Based on these evaluations, we find, and
then investigate, that the calibration set greatly affects the performance of
pruning methods. In addition, we surprisingly find a significant performance drop of
existing pruning methods in sentiment classification tasks. To understand the
link between performance drop and pruned neurons, we propose Neuron Semantic
Attribution, which learns to associate each neuron with specific semantics.
This method first makes the unpruned neurons of LLMs explainable.
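The abstract leaves the attribution mechanics open. One generic way to associate a neuron with semantics is to list the corpus tokens that activate it most strongly, sketched below; the [tokens x neurons] activation layout and the HuggingFace-style tokenizer.decode call are assumptions, not the paper's exact method.

```python
import torch

def top_tokens_per_neuron(acts, token_ids, tokenizer, k=10):
    """For each neuron, list the k corpus tokens that activate it most
    strongly. acts: [T, N] (tokens x neurons) recorded activations."""
    topk = torch.topk(acts, k, dim=0).indices                 # [k, N]
    return {n: [tokenizer.decode([int(token_ids[i])]) for i in topk[:, n].tolist()]
            for n in range(acts.size(1))}
```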
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:52:17 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Ding",
"Yizhuo",
""
],
[
"Sun",
"Xinwei",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Hu",
"Guosheng",
""
]
] |
TITLE: Revisiting Large Language Model Pruning using Neuron Semantic
Attribution
ABSTRACT: Model pruning techniques are vital for accelerating large language models by
reducing their size and computational requirements. However, the
generalizability of existing pruning methods across diverse datasets and tasks
remains unclear. Thus, we conduct extensive evaluations on 24 datasets and 4
tasks using popular pruning methods. Based on these evaluations, we find, and
then investigate, that the calibration set greatly affects the performance of
pruning methods. In addition, we surprisingly find a significant performance drop of
existing pruning methods in sentiment classification tasks. To understand the
link between performance drop and pruned neurons, we propose Neuron Semantic
Attribution, which learns to associate each neuron with specific semantics.
This method first makes the unpruned neurons of LLMs explainable.
|
no_new_dataset
| 0.950227 |
2503.01547
|
Arash NasrEsfahani
|
Arash Nasr Esfahani, Hamed Hosseini, Mehdi Tale Masouleh, Ahmad
Kalhor, Hedieh Sajedi
|
AI-Driven Relocation Tracking in Dynamic Kitchen Environments
|
Conference: 2024 14th International Conference on Computer and
Knowledge Engineering (ICCKE) Publisher: IEEE
| null |
10.1109/ICCKE65377.2024.10874520
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As smart homes become more prevalent in daily life, the ability to understand
dynamic environments is essential, a capability that increasingly depends on AI
systems. This study focuses on developing an intelligent algorithm which can
navigate a robot through a kitchen, recognizing objects, and tracking their
relocation. The kitchen was chosen as the testing ground due to its dynamic
nature as objects are frequently moved, rearranged and replaced. Various
techniques, such as SLAM feature-based tracking and deep learning-based object
detection (e.g., Faster R-CNN), are commonly used for object tracking.
Additionally, methods such as optical flow analysis and 3D reconstruction have
also been used to track the relocation of objects. These approaches often face
challenges when it comes to problems such as lighting variations and partial
occlusions, where parts of the object are hidden in some frames but visible in
others. The proposed method in this study leverages the YOLOv5 architecture,
initialized with pre-trained weights and subsequently fine-tuned on a custom
dataset. A novel method was developed, introducing a frame-scoring algorithm
which calculates a score for each object based on its location and features
within all frames. This scoring approach helps to identify changes by
determining the best-associated frame for each object and comparing the results
in each scene, overcoming limitations seen in other methods while maintaining
simplicity in design. The experimental results demonstrate an accuracy of
97.72%, a precision of 95.83% and a recall of 96.84% for this algorithm, which
highlights the efficacy of the model in detecting spatial changes.
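A minimal sketch of a frame-scoring rule in the spirit described (score each object per frame from its detection, keep the best-associated frame, compare across scenes) follows; the confidence-times-area weighting is a hypothetical stand-in for the paper's formula.

```python
def best_frame_per_object(detections):
    """detections: dict mapping object_id -> list of (frame_idx, confidence,
    bbox) tuples with bbox = (x1, y1, x2, y2). Scores each observation by
    confidence times box area and returns the best-associated frame index
    per object, which can then be compared between scenes."""
    def score(conf, bbox):
        x1, y1, x2, y2 = bbox
        return conf * max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return {obj: max(obs, key=lambda o: score(o[1], o[2]))[0]
            for obj, obs in detections.items()}
```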
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:53:46 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Esfahani",
"Arash Nasr",
""
],
[
"Hosseini",
"Hamed",
""
],
[
"Masouleh",
"Mehdi Tale",
""
],
[
"Kalhor",
"Ahmad",
""
],
[
"Sajedi",
"Hedieh",
""
]
] |
TITLE: AI-Driven Relocation Tracking in Dynamic Kitchen Environments
ABSTRACT: As smart homes become more prevalent in daily life, the ability to understand
dynamic environments is essential, a capability that increasingly depends on AI
systems. This study focuses on developing an intelligent algorithm which can
navigate a robot through a kitchen, recognizing objects, and tracking their
relocation. The kitchen was chosen as the testing ground due to its dynamic
nature as objects are frequently moved, rearranged and replaced. Various
techniques, such as SLAM feature-based tracking and deep learning-based object
detection (e.g., Faster R-CNN), are commonly used for object tracking.
Additionally, methods such as optical flow analysis and 3D reconstruction have
also been used to track the relocation of objects. These approaches often face
challenges when it comes to problems such as lighting variations and partial
occlusions, where parts of the object are hidden in some frames but visible in
others. The proposed method in this study leverages the YOLOv5 architecture,
initialized with pre-trained weights and subsequently fine-tuned on a custom
dataset. A novel method was developed, introducing a frame-scoring algorithm
which calculates a score for each object based on its location and features
within all frames. This scoring approach helps to identify changes by
determining the best-associated frame for each object and comparing the results
in each scene, overcoming limitations seen in other methods while maintaining
simplicity in design. The experimental results demonstrate an accuracy of
97.72%, a precision of 95.83% and a recall of 96.84% for this algorithm, which
highlights the efficacy of the model in detecting spatial changes.
|
no_new_dataset
| 0.946843 |
2503.01548
|
Brady Moon
|
Narek Harutyunyan, Brady Moon, Seungchan Kim, Cherie Ho, Adam Hung,
Sebastian Scherer
|
MapExRL: Human-Inspired Indoor Exploration with Predicted Environment
Context and Reinforcement Learning
|
8 pages, 6 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Path planning for robotic exploration is challenging, requiring reasoning
over unknown spaces and anticipating future observations. Efficient exploration
requires selecting budget-constrained paths that maximize information gain.
Despite advances in autonomous exploration, existing algorithms still fall
short of human performance, particularly in structured environments where
predictive cues exist but are underutilized. Guided by insights from our user
study, we introduce MapExRL, which improves robot exploration efficiency in
structured indoor environments by enabling longer-horizon planning through
reinforcement learning (RL) and global map predictions. Unlike many RL-based
exploration methods that use motion primitives as the action space, our
approach leverages frontiers for more efficient model learning and longer
horizon reasoning. Our framework generates global map predictions from the
observed map, which our policy utilizes, along with the prediction uncertainty,
estimated sensor coverage, frontier distance, and remaining distance budget, to
assess the strategic long-term value of frontiers. By leveraging multiple
frontier scoring methods and additional context, our policy makes more informed
decisions at each stage of the exploration. We evaluate our framework on a
real-world indoor map dataset, achieving up to an 18.8% improvement over the
strongest state-of-the-art baseline, with even greater gains compared to
conventional frontier-based algorithms.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:54:56 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Harutyunyan",
"Narek",
""
],
[
"Moon",
"Brady",
""
],
[
"Kim",
"Seungchan",
""
],
[
"Ho",
"Cherie",
""
],
[
"Hung",
"Adam",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
TITLE: MapExRL: Human-Inspired Indoor Exploration with Predicted Environment
Context and Reinforcement Learning
ABSTRACT: Path planning for robotic exploration is challenging, requiring reasoning
over unknown spaces and anticipating future observations. Efficient exploration
requires selecting budget-constrained paths that maximize information gain.
Despite advances in autonomous exploration, existing algorithms still fall
short of human performance, particularly in structured environments where
predictive cues exist but are underutilized. Guided by insights from our user
study, we introduce MapExRL, which improves robot exploration efficiency in
structured indoor environments by enabling longer-horizon planning through
reinforcement learning (RL) and global map predictions. Unlike many RL-based
exploration methods that use motion primitives as the action space, our
approach leverages frontiers for more efficient model learning and longer
horizon reasoning. Our framework generates global map predictions from the
observed map, which our policy utilizes, along with the prediction uncertainty,
estimated sensor coverage, frontier distance, and remaining distance budget, to
assess the strategic long-term value of frontiers. By leveraging multiple
frontier scoring methods and additional context, our policy makes more informed
decisions at each stage of the exploration. We evaluate our framework on a
real-world indoor map dataset, achieving up to an 18.8% improvement over the
strongest state-of-the-art baseline, with even greater gains compared to
conventional frontier-based algorithms.
|
no_new_dataset
| 0.949856 |
2503.01556
|
Yao Zou
|
Yao Zou and Dawei Cheng
|
Effective High-order Graph Representation Learning for Credit Card Fraud
Detection
|
9 pages, 5 figures, accepted at IJCAI 2024
|
Proceedings of the Thirty-Third International Joint Conference on
Artificial Intelligence (IJCAI 2024), pages 7581-7589
|
10.24963/ijcai.2024/839
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Credit card fraud imposes significant costs on both cardholders and issuing
banks. Fraudsters often disguise their crimes, such as using legitimate
transactions through several benign users to bypass anti-fraud detection.
Existing graph neural network (GNN) models struggle with learning features of
camouflaged, indirect multi-hop transactions due to their inherent
over-smoothing issues in deep multi-layer aggregation, presenting a major
challenge in detecting disguised relationships. Therefore, in this paper, we
propose a novel High-order Graph Representation Learning model (HOGRL) to avoid
incorporating excessive noise during the multi-layer aggregation process. In
particular, HOGRL learns different orders of \emph{pure} representations
directly from high-order transaction graphs. We realize this goal by
effectively constructing high-order transaction graphs first and then learning
the \emph{pure} representations of each order so that the model could identify
fraudsters' multi-hop indirect transactions via multi-layer \emph{pure} feature
learning. In addition, we introduce a mixture-of-expert attention mechanism to
automatically determine the importance of different orders for jointly
optimizing fraud detection performance. We conduct extensive experiments on
both open-source and real-world datasets; the results demonstrate the
significant improvements of our proposed HOGRL compared with state-of-the-art
fraud detection baselines. HOGRL's superior performance also proves its
effectiveness in addressing high-order fraud camouflage criminals.
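As a simplified illustration of constructing high-order transaction graphs, the sketch below builds, for each order k, a boolean adjacency containing only node pairs first reachable at hop k, so that each order can feed its own "pure" aggregation layer; the paper's exact construction may differ.

```python
import numpy as np

def high_order_adjacencies(A, max_order=3):
    """Return [A_1, ..., A_max_order], where A_k links node pairs that are
    first reachable at hop k (lower-order and self pairs masked out)."""
    A1 = np.asarray(A) > 0                       # boolean 1-hop adjacency
    seen = np.eye(A1.shape[0], dtype=bool) | A1  # self plus 1-hop pairs
    orders, reach = [A1], A1
    for _ in range(2, max_order + 1):
        # pairs connected by some walk of length k (boolean matrix product)
        reach = (reach.astype(np.int32) @ A1.astype(np.int32)) > 0
        orders.append(reach & ~seen)             # exactly-k-hop pairs only
        seen |= reach
    return orders
```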
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:59:46 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Zou",
"Yao",
""
],
[
"Cheng",
"Dawei",
""
]
] |
TITLE: Effective High-order Graph Representation Learning for Credit Card Fraud
Detection
ABSTRACT: Credit card fraud imposes significant costs on both cardholders and issuing
banks. Fraudsters often disguise their crimes, such as using legitimate
transactions through several benign users to bypass anti-fraud detection.
Existing graph neural network (GNN) models struggle with learning features of
camouflaged, indirect multi-hop transactions due to their inherent
over-smoothing issues in deep multi-layer aggregation, presenting a major
challenge in detecting disguised relationships. Therefore, in this paper, we
propose a novel High-order Graph Representation Learning model (HOGRL) to avoid
incorporating excessive noise during the multi-layer aggregation process. In
particular, HOGRL learns different orders of \emph{pure} representations
directly from high-order transaction graphs. We realize this goal by
effectively constructing high-order transaction graphs first and then learning
the \emph{pure} representations of each order so that the model could identify
fraudsters' multi-hop indirect transactions via multi-layer \emph{pure} feature
learning. In addition, we introduce a mixture-of-expert attention mechanism to
automatically determine the importance of different orders for jointly
optimizing fraud detection performance. We conduct extensive experiments on
both open-source and real-world datasets; the results demonstrate the
significant improvements of our proposed HOGRL compared with state-of-the-art
fraud detection baselines. HOGRL's superior performance also proves its
effectiveness in addressing high-order fraud camouflage criminals.
|
no_new_dataset
| 0.948298 |
2503.01557
|
Kai Fang
|
Kai Fang, Jiangtao Deng, Chengzu Dong, Usman Naseem, Tongcun Liu,
Hailin Feng, Wei Wang
|
MoCFL: Mobile Cluster Federated Learning Framework for Highly Dynamic
Network
|
10 pages, 7 figures, conference
| null |
10.1145/3696410.3714515
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Frequent fluctuations of client nodes in highly dynamic mobile clusters can
lead to significant changes in feature space distribution and data drift,
posing substantial challenges to the robustness of existing federated learning
(FL) strategies. To address these issues, we propose a mobile cluster
federated learning framework (MoCFL). MoCFL enhances feature aggregation by
introducing an affinity matrix that quantifies the similarity between local
feature extractors from different clients, addressing dynamic data distribution
changes caused by frequent client churn and topology changes. Additionally,
MoCFL integrates historical and current feature information when training the
global classifier, effectively mitigating the catastrophic forgetting problem
frequently encountered in mobile scenarios. This synergistic combination
ensures that MoCFL maintains high performance and stability in dynamically
changing mobile environments. Experimental results on the UNSW-NB15 dataset
show that MoCFL excels in dynamic environments, demonstrating superior
robustness and accuracy while maintaining reasonable training costs.
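One plausible form of the affinity matrix, pairwise cosine similarity between the flattened parameters of each client's local feature extractor, is sketched below; comparing raw weights rather than extracted features is an assumption.

```python
import torch
import torch.nn.functional as F

def extractor_affinity(client_extractors):
    """Pairwise cosine similarity between the flattened parameter vectors of
    each client's feature extractor (all extractors must share one
    architecture). Returns a [K, K] affinity matrix."""
    vecs = torch.stack([
        torch.cat([p.detach().flatten() for p in m.parameters()])
        for m in client_extractors
    ])
    vecs = F.normalize(vecs, dim=1)   # unit-norm rows
    return vecs @ vecs.t()
```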
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:59:47 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Fang",
"Kai",
""
],
[
"Deng",
"Jiangtao",
""
],
[
"Dong",
"Chengzu",
""
],
[
"Naseem",
"Usman",
""
],
[
"Liu",
"Tongcun",
""
],
[
"Feng",
"Hailin",
""
],
[
"Wang",
"Wei",
""
]
] |
TITLE: MoCFL: Mobile Cluster Federated Learning Framework for Highly Dynamic
Network
ABSTRACT: Frequent fluctuations of client nodes in highly dynamic mobile clusters can
lead to significant changes in feature space distribution and data drift,
posing substantial challenges to the robustness of existing federated learning
(FL) strategies. To address these issues, we propose a mobile cluster
federated learning framework (MoCFL). MoCFL enhances feature aggregation by
introducing an affinity matrix that quantifies the similarity between local
feature extractors from different clients, addressing dynamic data distribution
changes caused by frequent client churn and topology changes. Additionally,
MoCFL integrates historical and current feature information when training the
global classifier, effectively mitigating the catastrophic forgetting problem
frequently encountered in mobile scenarios. This synergistic combination
ensures that MoCFL maintains high performance and stability in dynamically
changing mobile environments. Experimental results on the UNSW-NB15 dataset
show that MoCFL excels in dynamic environments, demonstrating superior
robustness and accuracy while maintaining reasonable training costs.
|
no_new_dataset
| 0.95418 |
2503.01562
|
Biao Xiong Dr.
|
Biao Xiong, Longjun Zhang, Ruiqi Huang, Junwei Zhou, Bojian Wu,
Fashuai Li
|
VF-Plan: Bridging the Art Gallery Problem and Static LiDAR Scanning with
Visibility Field Optimization
| null | null | null | null |
cs.RO cs.CG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Viewpoint planning is crucial for 3D data collection and autonomous
navigation, yet existing methods often miss key optimization objectives for
static LiDAR, resulting in suboptimal network designs. The Viewpoint Planning
Problem (VPP), which builds upon the Art Gallery Problem (AGP), requires not
only full coverage but also robust registrability and connectivity under
limited sensor views. We introduce a greedy optimization algorithm that tackles
these VPP and AGP challenges through a novel Visibility Field (VF) approach.
The VF captures visibility characteristics unique to static LiDAR, enabling a
reduction from 2D to 1D by focusing on medial axis and joints. This leads to a
minimal, fully connected viewpoint network with comprehensive coverage and
minimal redundancy. Experiments across diverse environments show that our
method achieves high efficiency and scalability, matching or surpassing expert
designs. Compared to state-of-the-art methods, our approach achieves comparable
viewpoint counts (VC) while reducing Weighted Average Path Length (WAPL) by
approximately 95%, indicating a much more compact and connected network.
Dataset and source code will be released upon acceptance.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:07:20 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Xionga",
"Biao",
""
],
[
"Zhanga",
"Longjun",
""
],
[
"Huanga",
"Ruiqi",
""
],
[
"Zhoua",
"Junwei",
""
],
[
"Wub",
"Bojian",
""
],
[
"Lic",
"Fashuai",
""
]
] |
TITLE: VF-Plan: Bridging the Art Gallery Problem and Static LiDAR Scanning with
Visibility Field Optimization
ABSTRACT: Viewpoint planning is crucial for 3D data collection and autonomous
navigation, yet existing methods often miss key optimization objectives for
static LiDAR, resulting in suboptimal network designs. The Viewpoint Planning
Problem (VPP), which builds upon the Art Gallery Problem (AGP), requires not
only full coverage but also robust registrability and connectivity under
limited sensor views. We introduce a greedy optimization algorithm that tackles
these VPP and AGP challenges through a novel Visibility Field (VF) approach.
The VF captures visibility characteristics unique to static LiDAR, enabling a
reduction from 2D to 1D by focusing on medial axis and joints. This leads to a
minimal, fully connected viewpoint network with comprehensive coverage and
minimal redundancy. Experiments across diverse environments show that our
method achieves high efficiency and scalability, matching or surpassing expert
designs. Compared to state-of-the-art methods, our approach achieves comparable
viewpoint counts (VC) while reducing Weighted Average Path Length (WAPL) by
approximately 95%, indicating a much more compact and connected network.
Dataset and source code will be released upon acceptance.
|
no_new_dataset
| 0.949623 |
2503.01569
|
Muhammad Aqeel
|
Muhammad Aqeel, Shakiba Sharifi, Marco Cristani and Francesco Setti
|
Meta Learning-Driven Iterative Refinement for Robust Anomaly Detection
in Industrial Inspection
|
Accepted in the VISION workshop at ECCV 2024
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This study investigates the performance of robust anomaly detection models in
industrial inspection, focusing particularly on their ability to handle noisy
data. We propose to leverage the adaptation ability of meta learning approaches
to identify and reject noisy training data to improve the learning process. In
our model, we employ Model Agnostic Meta Learning (MAML) and an iterative
refinement process through an Inter-Quartile Range rejection scheme to enhance
their adaptability and robustness. This approach significantly improves the
model's capability to distinguish between normal and defective conditions. Our
results of experiments conducted on well known MVTec and KSDD2 datasets
demonstrate that the proposed method not only excels in environments with
substantial noise but can also contribute in the case of a clean training set,
isolating those samples that are relatively out of distribution, thus offering
significant improvements over traditional models.
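The Inter-Quartile Range rejection step is concrete enough to sketch: flag samples whose loss falls outside the Tukey fences and drop them before the next refinement round. The factor=1.5 fence is the conventional choice, assumed rather than taken from the paper.

```python
import numpy as np

def iqr_reject(losses, factor=1.5):
    """Return a boolean keep-mask: True for samples whose loss lies inside
    [Q1 - factor*IQR, Q3 + factor*IQR]; everything outside is treated as
    likely-noisy and dropped before the next refinement round."""
    losses = np.asarray(losses)
    q1, q3 = np.percentile(losses, [25, 75])
    iqr = q3 - q1
    return (losses >= q1 - factor * iqr) & (losses <= q3 + factor * iqr)
```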
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:11:41 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Aqeel",
"Muhammad",
""
],
[
"Sharifi",
"Shakiba",
""
],
[
"Cristani",
"Marco",
""
],
[
"Setti",
"Francesco",
""
]
] |
TITLE: Meta Learning-Driven Iterative Refinement for Robust Anomaly Detection
in Industrial Inspection
ABSTRACT: This study investigates the performance of robust anomaly detection models in
industrial inspection, focusing particularly on their ability to handle noisy
data. We propose to leverage the adaptation ability of meta learning approaches
to identify and reject noisy training data to improve the learning process. In
our model, we employ Model Agnostic Meta Learning (MAML) and an iterative
refinement process through an Inter-Quartile Range rejection scheme to enhance
their adaptability and robustness. This approach significantly improves the
model's capability to distinguish between normal and defective conditions. Our
results of experiments conducted on well known MVTec and KSDD2 datasets
demonstrate that the proposed method not only excels in environments with
substantial noise but can also contribute in the case of a clean training set,
isolating those samples that are relatively out of distribution, thus offering
significant improvements over traditional models.
|
no_new_dataset
| 0.948106 |
2503.01571
|
Haoyuan Li
|
Chao Ye, Haoyuan Li, Weiyang Lin, Xianqiang Yang
|
MLINE-VINS: Robust Monocular Visual-Inertial SLAM With Flow Manhattan
and Line Features
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce MLINE-VINS, a novel monocular visual-inertial
odometry (VIO) system that leverages line features and the Manhattan World
assumption. Specifically, for the line matching process, we propose a novel
geometric line optical flow algorithm that efficiently tracks line features
with varying lengths and does not require detections and descriptors in
every frame. To address the instability of Manhattan estimation from line
features, we propose a tracking-by-detection module that consistently tracks
and optimizes Manhattan frames in consecutive images. By aligning the Manhattan
World with the VIO world frame, the tracking could restart using the latest
pose from back-end, simplifying the coordinate transformations within the
system. Furthermore, we implement a mechanism to validate Manhattan frames and
a novel global structural constraints back-end optimization. Extensive
experimental results on various datasets, including benchmark and self-collected
datasets, show that the proposed approach outperforms existing methods in terms
of accuracy and long-range robustness. The source code of our method is
available at: https://github.com/LiHaoy-ux/MLINE-VINS.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:12:47 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Ye",
"Chao",
""
],
[
"Li",
"Haoyuan",
""
],
[
"Lin",
"Weiyang",
""
],
[
"Yang",
"Xianqiang",
""
]
] |
TITLE: MLINE-VINS: Robust Monocular Visual-Inertial SLAM With Flow Manhattan
and Line Features
ABSTRACT: In this paper, we introduce MLINE-VINS, a novel monocular visual-inertial
odometry (VIO) system that leverages line features and the Manhattan World
assumption. Specifically, for the line matching process, we propose a novel
geometric line optical flow algorithm that efficiently tracks line features
with varying lengths and does not require detections and descriptors in
every frame. To address the instability of Manhattan estimation from line
features, we propose a tracking-by-detection module that consistently tracks
and optimizes Manhattan frames in consecutive images. By aligning the Manhattan
World with the VIO world frame, the tracking could restart using the latest
pose from back-end, simplifying the coordinate transformations within the
system. Furthermore, we implement a mechanism to validate Manhattan frames and
a novel global structural constraints back-end optimization. Extensive
experimental results on various datasets, including benchmark and self-collected
datasets, show that the proposed approach outperforms existing methods in terms
of accuracy and long-range robustness. The source code of our method is
available at: https://github.com/LiHaoy-ux/MLINE-VINS.
|
new_dataset
| 0.936749 |
2503.01576
|
Mojtaba Safari
|
Mojtaba Safari, Shansong Wang, Zach Eidex, Qiang Li, Erik H.
Middlebrooks, David S. Yu, and Xiaofeng Yang
|
MRI super-resolution reconstruction using efficient diffusion
probabilistic model with residual shifting
| null | null | null | null |
cs.CV physics.med-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Objective: This study introduces a residual error-shifting mechanism that
drastically reduces sampling steps while preserving critical anatomical
details, thus accelerating MRI reconstruction. Approach: We propose a novel
diffusion-based SR framework called Res-SRDiff, which integrates residual error
shifting into the forward diffusion process. This enables efficient HR image
reconstruction by aligning the degraded HR and LR distributions. We evaluated
Res-SRDiff on ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate
images, comparing it with Bicubic, Pix2pix, CycleGAN, and a conventional
denoising diffusion probabilistic model with vision transformer backbone
(TM-DDPM), using quantitative metrics such as peak signal-to-noise ratio
(PSNR), structural similarity index (SSIM), gradient magnitude similarity
deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Main
results: Res-SRDiff significantly outperformed all comparative methods in terms
of PSNR, SSIM, and GMSD across both datasets, with statistically significant
improvements (p-values<<0.05). The model achieved high-fidelity image
restoration with only four sampling steps, drastically reducing computational
time to under one second per slice, which is substantially faster than
conventional TM-DDPM with around 20 seconds per slice. Qualitative analyses
further demonstrated that Res-SRDiff effectively preserved fine anatomical
details and lesion morphology in both brain and pelvic MRI images.
Significance: Our findings show that Res-SRDiff is an efficient and accurate
MRI SR method, markedly improving computational efficiency and image quality.
Integrating residual error shifting into the diffusion process allows for rapid
and robust HR image reconstruction, enhancing clinical MRI workflows and
advancing medical imaging research. The source code is available
at: https://github.com/mosaf/Res-SRDiff
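A schematic sketch of a residual-shifting forward step, moving x_t along the residual between the low- and high-resolution images instead of toward pure noise so that a few sampling steps suffice, is given below; eta_t, kappa, and the exact parameterization are assumptions rather than the authors' formulation.

```python
import torch

def residual_shift_forward(x_hr, x_lr, eta_t, kappa=1.0):
    """Sample x_t from a residual-shifting forward process: x_t drifts along
    the residual e0 = x_lr - x_hr by schedule eta_t in [0, 1] and picks up
    kappa-scaled Gaussian noise, bridging the HR and LR distributions."""
    e0 = x_lr - x_hr
    noise = torch.randn_like(x_hr)
    return x_hr + eta_t * e0 + kappa * (eta_t ** 0.5) * noise
```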
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:15:08 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Safari",
"Mojtaba",
""
],
[
"Wang",
"Shansong",
""
],
[
"Eidex",
"Zach",
""
],
[
"Li",
"Qiang",
""
],
[
"Middlebrooks",
"Erik H.",
""
],
[
"Yu",
"David S.",
""
],
[
"Yang",
"Xiaofeng",
""
]
] |
TITLE: MRI super-resolution reconstruction using efficient diffusion
probabilistic model with residual shifting
ABSTRACT: Objective: This study introduces a residual error-shifting mechanism that
drastically reduces sampling steps while preserving critical anatomical
details, thus accelerating MRI reconstruction. Approach: We propose a novel
diffusion-based SR framework called Res-SRDiff, which integrates residual error
shifting into the forward diffusion process. This enables efficient HR image
reconstruction by aligning the degraded HR and LR distributions. We evaluated
Res-SRDiff on ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate
images, comparing it with Bicubic, Pix2pix, CycleGAN, and a conventional
denoising diffusion probabilistic model with vision transformer backbone
(TM-DDPM), using quantitative metrics such as peak signal-to-noise ratio
(PSNR), structural similarity index (SSIM), gradient magnitude similarity
deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Main
results: Res-SRDiff significantly outperformed all comparative methods in terms
of PSNR, SSIM, and GMSD across both datasets, with statistically significant
improvements (p-values<<0.05). The model achieved high-fidelity image
restoration with only four sampling steps, drastically reducing computational
time to under one second per slice, which is substantially faster than
conventional TM-DDPM with around 20 seconds per slice. Qualitative analyses
further demonstrated that Res-SRDiff effectively preserved fine anatomical
details and lesion morphology in both brain and pelvic MRI images.
Significance: Our findings show that Res-SRDiff is an efficient and accurate
MRI SR method, markedly improving computational efficiency and image quality.
Integrating residual error shifting into the diffusion process allows for rapid
and robust HR image reconstruction, enhancing clinical MRI workflows and
advancing medical imaging research. The source code is available
at: https://github.com/mosaf/Res-SRDiff
|
no_new_dataset
| 0.95452 |
2503.01580
|
Hanmo Liu
|
Hanmo Liu, Shimin Di, Haoyang Li, Xun Jian, Yue Wang, Lei Chen
|
A Selective Learning Method for Temporal Graph Continual Learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Node classification is a key task in temporal graph learning (TGL). Real-life
temporal graphs often introduce new node classes over time, but existing TGL
methods assume a fixed set of classes. This assumption brings limitations, as
updating models with full data is costly, while focusing only on new classes
results in forgetting old ones. Graph continual learning (GCL) methods mitigate
forgetting using old-class subsets but fail to account for their evolution. We
define this novel problem as temporal graph continual learning (TGCL), which
focuses on efficiently maintaining up-to-date knowledge of old classes. To
tackle TGCL, we propose a selective learning framework, Learning Towards the
Future (LTF), that replaces the old-class data with selected subsets. We derive
an upper bound on the error caused by such replacement and transform it into
objectives for selecting and learning subsets that minimize classification
error while preserving the distribution of the full old-class data. Experiments
on three real-world datasets validate the effectiveness of LTF on TGCL.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:22:20 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Liu",
"Hanmo",
""
],
[
"Di",
"Shimin",
""
],
[
"Li",
"Haoyang",
""
],
[
"Jian",
"Xun",
""
],
[
"Wang",
"Yue",
""
],
[
"Chen",
"Lei",
""
]
] |
TITLE: A Selective Learning Method for Temporal Graph Continual Learning
ABSTRACT: Node classification is a key task in temporal graph learning (TGL). Real-life
temporal graphs often introduce new node classes over time, but existing TGL
methods assume a fixed set of classes. This assumption brings limitations, as
updating models with full data is costly, while focusing only on new classes
results in forgetting old ones. Graph continual learning (GCL) methods mitigate
forgetting using old-class subsets but fail to account for their evolution. We
define this novel problem as temporal graph continual learning (TGCL), which
focuses on efficiently maintaining up-to-date knowledge of old classes. To
tackle TGCL, we propose a selective learning framework, Learning Towards the
Future (LTF), that replaces the old-class data with selected subsets. We derive
an upper bound on the error caused by such replacement and transform it into
objectives for selecting and learning subsets that minimize classification
error while preserving the distribution of the full old-class data. Experiments
on three real-world datasets validate the effectiveness of LTF on TGCL.
|
no_new_dataset
| 0.948728 |
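LTF's replacement of old-class data with distribution-preserving subsets can
be illustrated with a generic k-center greedy selection over node embeddings;
this is only a stand-in, since the paper's actual selection objectives are
derived from its error bound.

    # Hedged sketch: pick a small subset whose embeddings cover the full
    # old-class distribution (generic k-center greedy, not LTF's objective).
    import numpy as np

    def k_center_greedy(embeddings, k, seed=0):
        rng = np.random.default_rng(seed)
        chosen = [int(rng.integers(embeddings.shape[0]))]   # arbitrary start
        d = np.linalg.norm(embeddings - embeddings[chosen[0]], axis=1)
        for _ in range(k - 1):
            nxt = int(d.argmax())                           # farthest point
            chosen.append(nxt)
            d = np.minimum(d, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
        return chosen

    old_class_emb = np.random.rand(1000, 32)   # toy temporal-node embeddings
    replay_ids = k_center_greedy(old_class_emb, k=50)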
2503.01601
|
Muhammad Musab Ansari
|
Muhammad Musab Ansari
|
Evaluating Stenosis Detection with Grounding DINO, YOLO, and DINO-DETR
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting stenosis in coronary angiography is vital for diagnosing and
managing cardiovascular diseases. This study evaluates the performance of
state-of-the-art object detection models on the ARCADE dataset using the
MMDetection framework. The models are assessed using COCO evaluation metrics,
including Intersection over Union (IoU), Average Precision (AP), and Average
Recall (AR). Results indicate variations in detection accuracy across different
models, attributed to differences in algorithmic design, such as
transformer-based vs. convolutional architectures. Additionally, several
challenges were encountered
during implementation, such as compatibility issues between PyTorch, CUDA, and
MMDetection, as well as dataset inconsistencies in ARCADE. The findings provide
insights into model selection for stenosis detection and highlight areas for
further improvement in deep learning-based coronary artery disease diagnosis.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:38:54 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Ansari",
"Muhammad Musab",
""
]
] |
TITLE: Evaluating Stenosis Detection with Grounding DINO, YOLO, and DINO-DETR
ABSTRACT: Detecting stenosis in coronary angiography is vital for diagnosing and
managing cardiovascular diseases. This study evaluates the performance of
state-of-the-art object detection models on the ARCADE dataset using the
MMDetection framework. The models are assessed using COCO evaluation metrics,
including Intersection over Union (IoU), Average Precision (AP), and Average
Recall (AR). Results indicate variations in detection accuracy across different
models, attributed to differences in algorithmic design, such as
transformer-based vs. convolutional architectures. Additionally, several
challenges were encountered
during implementation, such as compatibility issues between PyTorch, CUDA, and
MMDetection, as well as dataset inconsistencies in ARCADE. The findings provide
insights into model selection for stenosis detection and highlight areas for
further improvement in deep learning-based coronary artery disease diagnosis.
|
no_new_dataset
| 0.946597 |
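The COCO metrics cited in this evaluation are all built on box IoU; a minimal
sketch of that core computation (toy coordinates, not ARCADE annotations):

    # IoU of two boxes given as [x1, y1, x2, y2] pixel coordinates.
    def box_iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    print(box_iou([30, 40, 120, 160], [35, 50, 110, 150]))
    # a detection counts as a match at the common COCO threshold of IoU >= 0.5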
2503.01605
|
Thiago Henrique Segreto Silva
|
Thiago H. Segreto, Juliano Negri, Paulo H. Polegato, Jo\~ao Manoel
Herrera Pinheiro, Ricardo Godoy, and Marcelo Becker
|
A Leaf-Level Dataset for Soybean-Cotton Detection and Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Soybean and cotton are major drivers of many countries' agricultural sectors,
offering substantial economic returns but also facing persistent challenges
from volunteer plants and weeds that hamper sustainable management. Effectively
controlling volunteer plants and weeds demands advanced recognition strategies
that can identify these amidst complex crop canopies. While deep learning
methods have demonstrated promising results for leaf-level detection and
segmentation, existing datasets often fail to capture the complexity of
real-world agricultural fields. To address this, we collected 640
high-resolution images from a commercial farm spanning multiple growth stages,
weed pressures, and lighting variations. Each image is annotated at the
leaf-instance level, with 7,221 soybean and 5,190 cotton leaves labeled via
bounding boxes and segmentation masks, capturing overlapping foliage, small
leaf size, and morphological similarities. We validate this dataset using
YOLOv11, demonstrating state-of-the-art performance in accurately identifying
and segmenting overlapping foliage. Our publicly available dataset supports
advanced applications such as selective herbicide spraying and pest monitoring
and can foster more robust, data-driven strategies for soybean-cotton
management.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:41:06 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Segreto",
"Thiago H.",
""
],
[
"Negri",
"Juliano",
""
],
[
"Polegato",
"Paulo H.",
""
],
[
"Pinheiro",
"João Manoel Herrera",
""
],
[
"Godoy",
"Ricardo",
""
],
[
"Becker",
"Marcelo",
""
]
] |
TITLE: A Leaf-Level Dataset for Soybean-Cotton Detection and Segmentation
ABSTRACT: Soybean and cotton are major drivers of many countries' agricultural sectors,
offering substantial economic returns but also facing persistent challenges
from volunteer plants and weeds that hamper sustainable management. Effectively
controlling volunteer plants and weeds demands advanced recognition strategies
that can identify these amidst complex crop canopies. While deep learning
methods have demonstrated promising results for leaf-level detection and
segmentation, existing datasets often fail to capture the complexity of
real-world agricultural fields. To address this, we collected 640
high-resolution images from a commercial farm spanning multiple growth stages,
weed pressures, and lighting variations. Each image is annotated at the
leaf-instance level, with 7,221 soybean and 5,190 cotton leaves labeled via
bounding boxes and segmentation masks, capturing overlapping foliage, small
leaf size, and morphological similarities. We validate this dataset using
YOLOv11, demonstrating state-of-the-art performance in accurately identifying
and segmenting overlapping foliage. Our publicly available dataset supports
advanced applications such as selective herbicide spraying and pest monitoring
and can foster more robust, data-driven strategies for soybean-cotton
management.
|
new_dataset
| 0.968231 |
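A sketch of the YOLOv11 validation step is given below using the ultralytics
package; the weight file and the soybean_cotton.yaml dataset config are
assumed placeholder names, not artifacts shipped with the paper.

    # Hedged sketch: training/validating a YOLO11 segmentation model on a
    # YOLO-format dataset description (assumes ultralytics >= 8.3).
    from ultralytics import YOLO

    model = YOLO("yolo11n-seg.pt")             # small instance-seg variant
    model.train(data="soybean_cotton.yaml",    # image paths + two leaf classes
                epochs=100, imgsz=640)
    metrics = model.val()                      # box/mask mAP on validation split
    results = model.predict("field_image.jpg") # per-leaf boxes and masks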
2503.01610
|
Chen Guo
|
Chen Guo, Junxuan Li, Yash Kant, Yaser Sheikh, Shunsuke Saito, Chen
Cao
|
Vid2Avatar-Pro: Authentic Avatar from Videos in the Wild via Universal
Prior
|
Project page: https://moygcc.github.io/vid2avatar-pro/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Vid2Avatar-Pro, a method to create photorealistic and animatable
3D human avatars from monocular in-the-wild videos. Building a high-quality
avatar that supports animation with diverse poses from a monocular video is
challenging because the observation of pose diversity and viewpoints is
inherently limited. The lack of pose variations typically leads to poor
generalization to novel poses, and avatars can easily overfit to limited input
viewpoints, producing artifacts and distortions from other views. In this
work, we address these limitations by leveraging a universal prior model (UPM)
learned from a large corpus of multi-view clothed human performance capture
data. We build our representation on top of expressive 3D Gaussians with
canonical front and back maps shared across identities. Once the UPM is learned
to accurately reproduce the large-scale multi-view human images, we fine-tune
the model with an in-the-wild video via inverse rendering to obtain a
personalized photorealistic human avatar that can be faithfully animated to
novel human motions and rendered from novel views. The experiments show that
our approach based on the learned universal prior sets a new state-of-the-art
in monocular avatar reconstruction by substantially outperforming existing
approaches relying only on heuristic regularization or a shape prior of
minimally clothed bodies (e.g., SMPL) on publicly available datasets.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:45:35 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Guo",
"Chen",
""
],
[
"Li",
"Junxuan",
""
],
[
"Kant",
"Yash",
""
],
[
"Sheikh",
"Yaser",
""
],
[
"Saito",
"Shunsuke",
""
],
[
"Cao",
"Chen",
""
]
] |
TITLE: Vid2Avatar-Pro: Authentic Avatar from Videos in the Wild via Universal
Prior
ABSTRACT: We present Vid2Avatar-Pro, a method to create photorealistic and animatable
3D human avatars from monocular in-the-wild videos. Building a high-quality
avatar that supports animation with diverse poses from a monocular video is
challenging because the observation of pose diversity and viewpoints is
inherently limited. The lack of pose variations typically leads to poor
generalization to novel poses, and avatars can easily overfit to limited input
viewpoints, producing artifacts and distortions from other views. In this
work, we address these limitations by leveraging a universal prior model (UPM)
learned from a large corpus of multi-view clothed human performance capture
data. We build our representation on top of expressive 3D Gaussians with
canonical front and back maps shared across identities. Once the UPM is learned
to accurately reproduce the large-scale multi-view human images, we fine-tune
the model with an in-the-wild video via inverse rendering to obtain a
personalized photorealistic human avatar that can be faithfully animated to
novel human motions and rendered from novel views. The experiments show that
our approach based on the learned universal prior sets a new state-of-the-art
in monocular avatar reconstruction by substantially outperforming existing
approaches relying only on heuristic regularization or a shape prior of
minimally clothed bodies (e.g., SMPL) on publicly available datasets.
|
no_new_dataset
| 0.948585 |
2503.01612
|
Kaveen Perera
|
Kaveen Perera, Fouad Khelifi, Ammar Belatreche
|
Robust Palm-Vein Recognition Using the MMD Filter: Improving SIFT-Based
Feature Matching
|
Our previous work, presented at the 2022 International Conference on
Digital Image Computing: Techniques and Applications (DICTA) and published in
IEEE Xplore. The code for the MMD filter is available at
https://github.com/kaveenperera/MMD_filter under Mozilla Public License
Version 2.0
| null |
10.1109/DICTA56598.2022.10034589
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A major challenge with palm vein images is that slight movements of the
fingers and thumb, or variations in hand posture, can stretch the skin in
different areas and alter the vein patterns. This can result in an infinite
number of variations in palm vein images for a given individual. This paper
introduces a novel filtering technique for SIFT-based feature matching, known
as the Mean and Median Distance (MMD) Filter. This method evaluates the
differences in keypoint coordinates and computes the mean and median in each
direction to eliminate incorrect matches. Experiments conducted on the 850nm
subset of the CASIA dataset indicate that the proposed MMD filter effectively
preserves correct points while reducing false positives detected by other
filtering methods. A comparison with existing SIFT-based palm vein recognition
systems demonstrates that the proposed MMD filter delivers outstanding
performance, achieving lower Equal Error Rate (EER) values. This article
presents an extended author's version based on our previous work, A Keypoint
Filtering Method for SIFT based Palm-Vein Recognition.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:48:06 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Perera",
"Kaveen",
""
],
[
"Khelifi",
"Fouad",
""
],
[
"Belatreche",
"Ammar",
""
]
] |
TITLE: Robust Palm-Vein Recognition Using the MMD Filter: Improving SIFT-Based
Feature Matching
ABSTRACT: A major challenge with palm vein images is that slight movements of the
fingers and thumb, or variations in hand posture, can stretch the skin in
different areas and alter the vein patterns. This can result in an infinite
number of variations in palm vein images for a given individual. This paper
introduces a novel filtering technique for SIFT-based feature matching, known
as the Mean and Median Distance (MMD) Filter. This method evaluates the
differences in keypoint coordinates and computes the mean and median in each
direction to eliminate incorrect matches. Experiments conducted on the 850nm
subset of the CASIA dataset indicate that the proposed MMD filter effectively
preserves correct points while reducing false positives detected by other
filtering methods. A comparison with existing SIFT-based palm vein recognition
systems demonstrates that the proposed MMD filter delivers outstanding
performance, achieving lower Equal Error Rate (EER) values. This article
presents an extended author's version based on our previous work, A Keypoint
Filtering Method for SIFT based Palm-Vein Recognition.
|
no_new_dataset
| 0.949995 |
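The MMD filter's core idea, comparing each match's coordinate shift against
both the mean and the median shift in each direction, can be sketched as
follows; the tolerance tau is an assumed parameter, and the authors'
repository holds the exact rule.

    # Hedged sketch of mean-and-median distance filtering of SIFT matches.
    import numpy as np

    def mmd_filter(pts_a, pts_b, tau=15.0):
        d = pts_b - pts_a                      # (N, 2) coordinate differences
        dev_mean = np.abs(d - d.mean(axis=0))
        dev_median = np.abs(d - np.median(d, axis=0))
        return ((dev_mean < tau) & (dev_median < tau)).all(axis=1)

    pts_a = np.random.rand(100, 2) * 200       # toy matched keypoint locations
    pts_b = pts_a + np.array([5.0, -3.0]) + np.random.randn(100, 2)
    pts_b[::10] += 80                          # inject some gross mismatches
    print(mmd_filter(pts_a, pts_b).sum(), "matches kept")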
2503.01619
|
Yashu Liu
|
Tong Ge, Yashu Liu, Jieping Ye, Tianyi Li, Chao Wang
|
Advancing vision-language models in front-end development via data
synthesis
| null | null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Modern front-end (FE) development, especially when leveraging the unique
features of frameworks like React and Vue, presents distinctive challenges.
These include managing modular architectures, ensuring synchronization between
data and visual outputs for declarative rendering, and adapting reusable
components to various scenarios. Such complexities make it particularly
difficult for state-of-the-art large vision-language models (VLMs) to generate
accurate and functional code directly from design images. To address these
challenges, we propose a reflective agentic workflow that synthesizes
high-quality image-text data to capture the diverse characteristics of FE
development. This workflow automates the extraction of
self-contained\footnote{A \textbf{self-contained} code snippet is one that
encapsulates all necessary logic, styling, and dependencies, ensuring it
functions independently without requiring external imports or context.} code
snippets from real-world projects, renders the corresponding visual outputs,
and generates detailed descriptions that link design elements to functional
code. To further expand the scope and utility of the synthesis, we introduce
three data synthesis strategies: Evolution-based synthesis, which enables
scalable and diverse dataset expansion; Waterfall-Model-based synthesis, which
generates logically coherent code derived from system requirements; and
Additive Development synthesis, which iteratively increases the complexity of
human-authored components. We build a large vision-language model, Flame,
trained on the synthesized datasets and demonstrate its effectiveness in
generating React code via the $\text{pass}@k$ metric. Our results suggest that
a code VLM trained to interpret images before code generation may achieve
better performance.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:54:01 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Ge",
"Tong",
""
],
[
"Liu",
"Yashu",
""
],
[
"Ye",
"Jieping",
""
],
[
"Li",
"Tianyi",
""
],
[
"Wang",
"Chao",
""
]
] |
TITLE: Advancing vision-language models in front-end development via data
synthesis
ABSTRACT: Modern front-end (FE) development, especially when leveraging the unique
features of frameworks like React and Vue, presents distinctive challenges.
These include managing modular architectures, ensuring synchronization between
data and visual outputs for declarative rendering, and adapting reusable
components to various scenarios. Such complexities make it particularly
difficult for state-of-the-art large vision-language models (VLMs) to generate
accurate and functional code directly from design images. To address these
challenges, we propose a reflective agentic workflow that synthesizes
high-quality image-text data to capture the diverse characteristics of FE
development. This workflow automates the extraction of
self-contained\footnote{A \textbf{self-contained} code snippet is one that
encapsulates all necessary logic, styling, and dependencies, ensuring it
functions independently without requiring external imports or context.} code
snippets from real-world projects, renders the corresponding visual outputs,
and generates detailed descriptions that link design elements to functional
code. To further expand the scope and utility of the synthesis, we introduce
three data synthesis strategies: Evolution-based synthesis, which enables
scalable and diverse dataset expansion; Waterfall-Model-based synthesis, which
generates logically coherent code derived from system requirements; and
Additive Development synthesis, which iteratively increases the complexity of
human-authored components. We build a large vision-language model, Flame,
trained on the synthesized datasets and demonstrate its effectiveness in
generating React code via the $\text{pass}@k$ metric. Our results suggest that
a code VLM trained to interpret images before code generation may achieve
better performance.
|
no_new_dataset
| 0.952574 |
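The pass@k metric used to evaluate Flame is conventionally computed with the
unbiased estimator of Chen et al. (2021); a sketch, assuming n generated
samples of which c pass the tests:

    import math

    def pass_at_k(n, c, k):
        """P(at least one of k draws from n samples, c correct, passes)."""
        if n - c < k:
            return 1.0
        return 1.0 - math.comb(n - c, k) / math.comb(n, k)

    print(pass_at_k(n=20, c=5, k=1))   # 0.25
    print(pass_at_k(n=20, c=5, k=5))   # chance of a hit among 5 tries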
2503.01623
|
David Hartmann
|
David Hartmann, Amin Oueslati, Dimitri Staufer, Lena Pohlmann, Simon
Munzert, Hendrik Heuer
|
Lost in Moderation: How Commercial Content Moderation APIs Over- and
Under-Moderate Group-Targeted Hate Speech and Linguistic Variations
|
This is the author's version of the paper accepted at CHI Conference
on Human Factors in Computing Systems (CHI '25), April 26-May 1, 2025,
Yokohama, Japan
| null |
10.1145/3706598.3713998
| null |
cs.HC cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Commercial content moderation APIs are marketed as scalable solutions to
combat online hate speech. However, the reliance on these APIs risks both
silencing legitimate speech, called over-moderation, and failing to protect
online platforms from harmful speech, known as under-moderation. To assess such
risks, this paper introduces a framework for auditing black-box NLP systems.
Using the framework, we systematically evaluate five widely used commercial
content moderation APIs. Analyzing five million queries based on four datasets,
we find that APIs frequently rely on group identity terms, such as ``black'',
to predict hate speech. While OpenAI's and Amazon's services perform slightly
better, all providers under-moderate implicit hate speech, which uses codified
messages, especially against LGBTQIA+ individuals. Simultaneously, they
over-moderate counter-speech, reclaimed slurs and content related to Black,
LGBTQIA+, Jewish, and Muslim people. We recommend that API providers offer
better guidance on API implementation and threshold setting and more
transparency on their APIs' limitations.
Warning: This paper contains offensive and hateful terms and concepts. We
have chosen to reproduce these terms for reasons of transparency.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:56:47 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Hartmann",
"David",
""
],
[
"Oueslati",
"Amin",
""
],
[
"Staufer",
"Dimitri",
""
],
[
"Pohlmann",
"Lena",
""
],
[
"Munzert",
"Simon",
""
],
[
"Heuer",
"Hendrik",
""
]
] |
TITLE: Lost in Moderation: How Commercial Content Moderation APIs Over- and
Under-Moderate Group-Targeted Hate Speech and Linguistic Variations
ABSTRACT: Commercial content moderation APIs are marketed as scalable solutions to
combat online hate speech. However, the reliance on these APIs risks both
silencing legitimate speech, called over-moderation, and failing to protect
online platforms from harmful speech, known as under-moderation. To assess such
risks, this paper introduces a framework for auditing black-box NLP systems.
Using the framework, we systematically evaluate five widely used commercial
content moderation APIs. Analyzing five million queries based on four datasets,
we find that APIs frequently rely on group identity terms, such as ``black'',
to predict hate speech. While OpenAI's and Amazon's services perform slightly
better, all providers under-moderate implicit hate speech, which uses codified
messages, especially against LGBTQIA+ individuals. Simultaneously, they
over-moderate counter-speech, reclaimed slurs and content related to Black,
LGBTQIA+, Jewish, and Muslim people. We recommend that API providers offer
better guidance on API implementation and threshold setting and more
transparency on their APIs' limitations.
Warning: This paper contains offensive and hateful terms and concepts. We
have chosen to reproduce these terms for reasons of transparency.
|
no_new_dataset
| 0.949106 |
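The black-box audit pattern the framework formalizes can be sketched as a loop
of templated probes whose group identity terms vary; `moderate` below is a
placeholder for any provider client, and no real API, endpoint, or threshold
is assumed.

    # Hedged sketch of auditing a moderation API for over-moderation.
    from collections import defaultdict

    def moderate(text: str) -> bool:
        """Placeholder: True if the provider flags `text` as hateful."""
        raise NotImplementedError("wire up a real moderation client here")

    TEMPLATES = ["I am {g}.", "{g} people are welcome here."]  # benign probes
    GROUPS = ["black", "jewish", "muslim", "lgbtqia+"]

    def audit():
        flags = defaultdict(list)
        for g in GROUPS:
            for t in TEMPLATES:
                flags[g].append(moderate(t.format(g=g)))
        # high flag rates on benign identity mentions indicate over-moderation
        return {g: sum(v) / len(v) for g, v in flags.items()}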
2503.01628
|
William Laprade
|
William Michael Laprade, Jesper Cairo Westergaard, Svend Christensen,
Mads Nielsen, Anders Bjorholm Dahl
|
A General Purpose Spectral Foundational Model for Both Proximal and
Remote Sensing Spectral Imaging
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Spectral imaging data acquired via multispectral and hyperspectral cameras
can have hundreds of channels, where each channel records the reflectance at a
specific wavelength and bandwidth. Time and resource constraints limit our
ability to collect large spectral datasets, making it difficult to build and
train predictive models from scratch. In the RGB domain, we can often alleviate
some of the limitations of smaller datasets by using pretrained foundational
models as a starting point. However, most existing foundation models are
pretrained on large datasets of 3-channel RGB images, severely limiting their
effectiveness when used with spectral imaging data. The few spectral foundation
models that do exist usually have one of two limitations: (1) they are built
and trained only on remote sensing data, limiting their application in proximal
spectral imaging, or (2) they utilize the more widely available multispectral
imaging datasets with fewer than 15 channels, restricting their use with
hundred-channel hyperspectral images. To alleviate these issues, we propose a
large-scale foundational model and dataset built upon the masked autoencoder
architecture that takes advantage of spectral channel encoding,
spatial-spectral masking and ImageNet pretraining for an adaptable and robust
model for downstream spectral imaging tasks.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:04:00 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Laprade",
"William Michael",
""
],
[
"Westergaard",
"Jesper Cairo",
""
],
[
"Christensen",
"Svend",
""
],
[
"Nielsen",
"Mads",
""
],
[
"Dahl",
"Anders Bjorholm",
""
]
] |
TITLE: A General Purpose Spectral Foundational Model for Both Proximal and
Remote Sensing Spectral Imaging
ABSTRACT: Spectral imaging data acquired via multispectral and hyperspectral cameras
can have hundreds of channels, where each channel records the reflectance at a
specific wavelength and bandwidth. Time and resource constraints limit our
ability to collect large spectral datasets, making it difficult to build and
train predictive models from scratch. In the RGB domain, we can often alleviate
some of the limitations of smaller datasets by using pretrained foundational
models as a starting point. However, most existing foundation models are
pretrained on large datasets of 3-channel RGB images, severely limiting their
effectiveness when used with spectral imaging data. The few spectral foundation
models that do exist usually have one of two limitations: (1) they are built
and trained only on remote sensing data, limiting their application in proximal
spectral imaging, or (2) they utilize the more widely available multispectral
imaging datasets with fewer than 15 channels, restricting their use with
hundred-channel hyperspectral images. To alleviate these issues, we propose a
large-scale foundational model and dataset built upon the masked autoencoder
architecture that takes advantage of spectral channel encoding,
spatial-spectral masking and ImageNet pretraining for an adaptable and robust
model for downstream spectral imaging tasks.
|
no_new_dataset
| 0.951729 |
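Spatial-spectral masking, one of the ingredients listed above, can be sketched
by drawing the mask jointly over spatial patches and spectral channel groups
so the autoencoder must reconstruct across both space and wavelength; the 75%
ratio is illustrative, not the paper's setting.

    import torch

    def spatial_spectral_mask(n_patches, n_channel_groups, ratio=0.75, seed=0):
        g = torch.Generator().manual_seed(seed)
        scores = torch.rand(n_patches, n_channel_groups, generator=g)
        k = int(ratio * n_patches * n_channel_groups)
        thresh = scores.flatten().kthvalue(k).values
        return scores <= thresh            # True where a token is hidden

    mask = spatial_spectral_mask(n_patches=196, n_channel_groups=16)
    print(mask.float().mean())             # ~0.75 of tokens masked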
2503.01633
|
Luyi Qiu
|
Luyi Qiu, Tristan Till, Xiaobao Guo, Adams Wai-Kin Kong
|
SparseMamba-PCL: Scribble-Supervised Medical Image Segmentation via
SAM-Guided Progressive Collaborative Learning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Scribble annotations significantly reduce the cost and labor required for
dense labeling in large medical datasets with complex anatomical structures.
However, current scribble-supervised learning methods are limited in their
ability to effectively propagate sparse annotation labels to dense segmentation
masks and accurately segment object boundaries. To address these issues, we
propose a Progressive Collaborative Learning framework that leverages novel
algorithms and the Med-SAM foundation model to enhance information quality
during training. (1) We enrich ground truth scribble segmentation labels
through a new algorithm, propagating scribbles to estimate object boundaries.
(2) We enhance feature representation by optimizing Med-SAM-guided training
through the fusion of feature embeddings from Med-SAM and our proposed Sparse
Mamba network. This enriched representation also facilitates the fine-tuning of
the Med-SAM decoder with enriched scribbles. (3) For inference, we introduce a
Sparse Mamba network, which is highly capable of capturing local and global
dependencies by replacing the traditional sequential patch processing method
with a skip-sampling procedure. Experiments on the ACDC, CHAOS, and MSCMRSeg
datasets validate the effectiveness of our framework, outperforming nine
state-of-the-art methods. Our code is available at
\href{https://github.com/QLYCode/SparseMamba-PCL}{SparseMamba-PCL.git}.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:09:04 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Qiu",
"Luyi",
""
],
[
"Till",
"Tristan",
""
],
[
"Guo",
"Xiaobao",
""
],
[
"Kong",
"Adams Wai-Kin",
""
]
] |
TITLE: SparseMamba-PCL: Scribble-Supervised Medical Image Segmentation via
SAM-Guided Progressive Collaborative Learning
ABSTRACT: Scribble annotations significantly reduce the cost and labor required for
dense labeling in large medical datasets with complex anatomical structures.
However, current scribble-supervised learning methods are limited in their
ability to effectively propagate sparse annotation labels to dense segmentation
masks and accurately segment object boundaries. To address these issues, we
propose a Progressive Collaborative Learning framework that leverages novel
algorithms and the Med-SAM foundation model to enhance information quality
during training. (1) We enrich ground truth scribble segmentation labels
through a new algorithm, propagating scribbles to estimate object boundaries.
(2) We enhance feature representation by optimizing Med-SAM-guided training
through the fusion of feature embeddings from Med-SAM and our proposed Sparse
Mamba network. This enriched representation also facilitates the fine-tuning of
the Med-SAM decoder with enriched scribbles. (3) For inference, we introduce a
Sparse Mamba network, which is highly capable of capturing local and global
dependencies by replacing the traditional sequential patch processing method
with a skip-sampling procedure. Experiments on the ACDC, CHAOS, and MSCMRSeg
datasets validate the effectiveness of our framework, outperforming nine
state-of-the-art methods. Our code is available at
\href{https://github.com/QLYCode/SparseMamba-PCL}{SparseMamba-PCL.git}.
|
no_new_dataset
| 0.948298 |
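The skip-sampling procedure that replaces sequential patch processing can be
sketched as an interleaved patch ordering, so spatially distant patches become
near neighbours in the 1-D sequence a state-space block consumes; the stride
is an illustrative choice.

    def skip_sampling_order(n_patches: int, stride: int = 4):
        order = []
        for offset in range(stride):
            order.extend(range(offset, n_patches, stride))
        return order

    print(skip_sampling_order(16, stride=4))
    # [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]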
2503.01634
|
Arnesh Batra
|
Arnesh Batra, Arush Gumber, Anushk Kumar
|
M-SCAN: A Multistage Framework for Lumbar Spinal Canal Stenosis Grading
Using Multi-View Cross Attention
| null | null | null | null |
eess.IV cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing prevalence of lumbar spinal canal stenosis has resulted in a
surge of MRI (Magnetic Resonance Imaging) scans, leading to labor-intensive
interpretation and significant inter-reader variability, even among expert
radiologists. This paper introduces a novel and efficient deep-learning
framework that fully automates the grading of lumbar spinal canal stenosis. We
demonstrate state-of-the-art performance in grading spinal canal stenosis on a
dataset of 1,975 unique studies, each containing three distinct types of 3D
cross-sectional spine images: Axial T2, Sagittal T1, and Sagittal T2/STIR.
Employing a distinctive training strategy, our proposed multistage approach
effectively integrates sagittal and axial images. This strategy employs a
multi-view model with a sequence-based architecture, optimizing feature
extraction and cross-view alignment to achieve an AUROC (Area Under the
Receiver Operating Characteristic Curve) of 0.971 in spinal canal stenosis
grading, surpassing other state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:10:40 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Batra",
"Arnesh",
""
],
[
"Gumber",
"Arush",
""
],
[
"Kumar",
"Anushk",
""
]
] |
TITLE: M-SCAN: A Multistage Framework for Lumbar Spinal Canal Stenosis Grading
Using Multi-View Cross Attention
ABSTRACT: The increasing prevalence of lumbar spinal canal stenosis has resulted in a
surge of MRI (Magnetic Resonance Imaging) scans, leading to labor-intensive
interpretation and significant inter-reader variability, even among expert
radiologists. This paper introduces a novel and efficient deep-learning
framework that fully automates the grading of lumbar spinal canal stenosis. We
demonstrate state-of-the-art performance in grading spinal canal stenosis on a
dataset of 1,975 unique studies, each containing three distinct types of 3D
cross-sectional spine images: Axial T2, Sagittal T1, and Sagittal T2/STIR.
Employing a distinctive training strategy, our proposed multistage approach
effectively integrates sagittal and axial images. This strategy employs a
multi-view model with a sequence-based architecture, optimizing feature
extraction and cross-view alignment to achieve an AUROC (Area Under the
Receiver Operating Characteristic Curve) of 0.971 in spinal canal stenosis
grading, surpassing other state-of-the-art methods.
|
no_new_dataset
| 0.942348 |
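The multi-view cross attention at the heart of this framework can be sketched
with axial-slice tokens querying sagittal-slice tokens; dimensions below are
toy values, not the paper's architecture.

    import torch
    import torch.nn as nn

    dim, n_axial, n_sagittal, batch = 128, 32, 48, 2
    axial = torch.randn(batch, n_axial, dim)        # axial T2 slice tokens
    sagittal = torch.randn(batch, n_sagittal, dim)  # sagittal series tokens

    cross = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
    fused, attn = cross(query=axial, key=sagittal, value=sagittal)
    print(fused.shape)   # (2, 32, 128): axial tokens with sagittal context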
2503.01646
|
Dianyi Yang
|
Dianyi Yang, Yu Gao, Xihan Wang, Yufeng Yue, Yi Yang, Mengyin Fu
|
OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for
Object-Level Scene Understanding
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in 3D Gaussian Splatting have significantly improved the
efficiency and quality of dense semantic SLAM. However, previous methods are
generally constrained by limited-category pre-trained classifiers and implicit
semantic representation, which hinder their performance in open-set scenarios
and restrict 3D object-level scene understanding. To address these issues, we
propose OpenGS-SLAM, an innovative framework that utilizes 3D Gaussian
representation to perform dense semantic SLAM in open-set environments. Our
system integrates explicit semantic labels derived from 2D foundational models
into the 3D Gaussian framework, facilitating robust 3D object-level scene
understanding. We introduce Gaussian Voting Splatting to enable fast 2D label
map rendering and scene updating. Additionally, we propose a Confidence-based
2D Label Consensus method to ensure consistent labeling across multiple views.
Furthermore, we employ a Segmentation Counter Pruning strategy to improve the
accuracy of semantic scene representation. Extensive experiments on both
synthetic and real-world datasets demonstrate the effectiveness of our method
in scene understanding, tracking, and mapping, achieving 10 times faster
semantic rendering and 2 times lower storage costs compared to existing
methods. Project page: https://young-bit.github.io/opengs-github.github.io/.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:23:21 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Yang",
"Dianyi",
""
],
[
"Gao",
"Yu",
""
],
[
"Wang",
"Xihan",
""
],
[
"Yue",
"Yufeng",
""
],
[
"Yang",
"Yi",
""
],
[
"Fu",
"Mengyin",
""
]
] |
TITLE: OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for
Object-Level Scene Understanding
ABSTRACT: Recent advancements in 3D Gaussian Splatting have significantly improved the
efficiency and quality of dense semantic SLAM. However, previous methods are
generally constrained by limited-category pre-trained classifiers and implicit
semantic representation, which hinder their performance in open-set scenarios
and restrict 3D object-level scene understanding. To address these issues, we
propose OpenGS-SLAM, an innovative framework that utilizes 3D Gaussian
representation to perform dense semantic SLAM in open-set environments. Our
system integrates explicit semantic labels derived from 2D foundational models
into the 3D Gaussian framework, facilitating robust 3D object-level scene
understanding. We introduce Gaussian Voting Splatting to enable fast 2D label
map rendering and scene updating. Additionally, we propose a Confidence-based
2D Label Consensus method to ensure consistent labeling across multiple views.
Furthermore, we employ a Segmentation Counter Pruning strategy to improve the
accuracy of semantic scene representation. Extensive experiments on both
synthetic and real-world datasets demonstrate the effectiveness of our method
in scene understanding, tracking, and mapping, achieving 10 times faster
semantic rendering and 2 times lower storage costs compared to existing
methods. Project page: https://young-bit.github.io/opengs-github.github.io/.
|
no_new_dataset
| 0.950549 |
2503.01650
|
Hamidreza Mirkhani
|
Hamidreza Mirkhani, Behzad Khamidehi, Ehsan Ahmadi, Fazel Arasteh,
Mohammed Elmahgiubi, Weize Zhang, Umar Rajguru, Kasra Rezaee
|
CAPS: Context-Aware Priority Sampling for Enhanced Imitation Learning in
Autonomous Driving
| null | null | null | null |
cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce CAPS (Context-Aware Priority Sampling), a novel
method designed to enhance data efficiency in learning-based autonomous driving
systems. CAPS addresses the challenge of imbalanced training datasets in
imitation learning by leveraging Vector Quantized Variational Autoencoders
(VQ-VAEs). The use of VQ-VAE provides a structured and interpretable data
representation, which helps reveal meaningful patterns in the data. These
patterns are used to group the data into clusters, with each sample being
assigned a cluster ID. The cluster IDs are then used to re-balance the dataset,
ensuring that rare yet valuable samples receive higher priority during
training. By ensuring a more diverse and informative training set, CAPS
improves the generalization of the trained planner across a wide range of
driving scenarios. We evaluate our method through closed-loop simulations in
the CARLA environment. The results on Bench2Drive scenarios demonstrate that
our framework outperforms state-of-the-art methods, leading to notable
improvements in model performance.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:27:11 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Mirkhani",
"Hamidreza",
""
],
[
"Khamidehi",
"Behzad",
""
],
[
"Ahmadi",
"Ehsan",
""
],
[
"Arasteh",
"Fazel",
""
],
[
"Elmahgiubi",
"Mohammed",
""
],
[
"Zhang",
"Weize",
""
],
[
"Rajguru",
"Umar",
""
],
[
"Rezaee",
"Kasra",
""
]
] |
TITLE: CAPS: Context-Aware Priority Sampling for Enhanced Imitation Learning in
Autonomous Driving
ABSTRACT: In this paper, we introduce CAPS (Context-Aware Priority Sampling), a novel
method designed to enhance data efficiency in learning-based autonomous driving
systems. CAPS addresses the challenge of imbalanced training datasets in
imitation learning by leveraging Vector Quantized Variational Autoencoders
(VQ-VAEs). The use of VQ-VAE provides a structured and interpretable data
representation, which helps reveal meaningful patterns in the data. These
patterns are used to group the data into clusters, with each sample being
assigned a cluster ID. The cluster IDs are then used to re-balance the dataset,
ensuring that rare yet valuable samples receive higher priority during
training. By ensuring a more diverse and informative training set, CAPS
improves the generalization of the trained planner across a wide range of
driving scenarios. We evaluate our method through closed-loop simulations in
the CARLA environment. The results on Bench2Drive scenarios demonstrate that
our framework outperforms state-of-the-art methods, leading to notable
improvements in model performance.
|
no_new_dataset
| 0.947866 |
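Once CAPS has assigned each sample a cluster ID, the re-balancing step can be
sketched with inverse-frequency sampling weights; the exact weighting scheme
here is an illustrative assumption.

    import numpy as np
    import torch
    from torch.utils.data import WeightedRandomSampler

    cluster_ids = np.random.choice(4, size=1000, p=[0.7, 0.2, 0.07, 0.03])
    counts = np.bincount(cluster_ids)
    weights = 1.0 / counts[cluster_ids]        # rare clusters -> larger weight
    sampler = WeightedRandomSampler(
        torch.as_tensor(weights, dtype=torch.double),
        num_samples=len(cluster_ids), replacement=True)
    # pass `sampler` to a DataLoader so batches over-represent rare scenarios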
2503.01655
|
Zy Wang
|
Ziyu Wang (1), Tao Xue (1), Yanbin Wang (1), Jingyuan Li (1), Haibin
Zhang (1), Zhiqiang Xu (2), Gaofei Xu (3) ((1) Xidian University, (2) Jiangxi
University of Science and Technology, (3) Institute of Deep-sea Science and
Engineering)
|
Enhancing Object Detection Accuracy in Underwater Sonar Images through
Deep Learning-based Denoising
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sonar image object detection is crucial for underwater robotics and other
applications. However, various types of noise in sonar images can affect the
accuracy of object detection. Denoising, as a critical preprocessing step, aims
to remove noise while retaining useful information to improve detection
accuracy. Although deep learning-based denoising algorithms perform well on
optical images, their application to underwater sonar images remains
underexplored. This paper systematically evaluates the effectiveness of several
deep learning-based denoising algorithms, originally designed for optical
images, in the context of underwater sonar image object detection. We apply
nine trained denoising models, each handling a different type of noise, to
images from five open-source sonar datasets. We then test the denoised images
using four object detection algorithms. The results show that different
denoising models have varying effects on detection performance. By combining
the strengths of multiple denoising models, the detection results can be
optimized, thus more effectively suppressing noise. Additionally, we adopt a
multi-frame denoising technique, using different outputs generated by multiple
denoising models as multiple frames of the same scene for further processing to
enhance detection accuracy. This method, originally designed for optical
images, leverages complementary noise-reduction effects. Experimental results
show that denoised sonar images improve the performance of object detection
algorithms compared to the original sonar images.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:30:39 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Wang",
"Ziyu",
""
],
[
"Xue",
"Tao",
""
],
[
"Wang",
"Yanbin",
""
],
[
"Li",
"Jingyuan",
""
],
[
"Zhang",
"Haibin",
""
],
[
"Xu",
"Zhiqiang",
""
],
[
"Xu",
"Gaofei",
""
]
] |
TITLE: Enhancing Object Detection Accuracy in Underwater Sonar Images through
Deep Learning-based Denoising
ABSTRACT: Sonar image object detection is crucial for underwater robotics and other
applications. However, various types of noise in sonar images can affect the
accuracy of object detection. Denoising, as a critical preprocessing step, aims
to remove noise while retaining useful information to improve detection
accuracy. Although deep learning-based denoising algorithms perform well on
optical images, their application to underwater sonar images remains
underexplored. This paper systematically evaluates the effectiveness of several
deep learning-based denoising algorithms, originally designed for optical
images, in the context of underwater sonar image object detection. We apply
nine trained denoising models, each handling a different type of noise, to
images from five open-source sonar datasets. We then test the denoised images
using four object detection algorithms. The results show that different
denoising models have varying effects on detection performance. By combining
the strengths of multiple denoising models, the detection results can be
optimized, thus more effectively suppressing noise. Additionally, we adopt a
multi-frame denoising technique, using different outputs generated by multiple
denoising models as multiple frames of the same scene for further processing to
enhance detection accuracy. This method, originally designed for optical
images, leverages complementary noise-reduction effects. Experimental results
show that denoised sonar images improve the performance of object detection
algorithms compared to the original sonar images.
|
no_new_dataset
| 0.950365 |
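The multi-frame step, treating the outputs of several denoisers as frames of
one scene, can be sketched with a pixel-wise median fusion; the median is one
simple rule for exploiting complementary noise, not necessarily the paper's
exact operator.

    import numpy as np

    def fuse_denoised(outputs):
        """outputs: list of (H, W) arrays from different denoising models."""
        return np.median(np.stack(outputs, axis=0), axis=0)

    sonar = np.random.rand(128, 128)
    frames = [sonar + 0.05 * np.random.randn(128, 128) for _ in range(9)]
    fused = fuse_denoised(frames)   # feed `fused` to the object detector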
2503.01667
|
Linhao Huang
|
Linhao Huang, Jing Yu
|
ToLo: A Two-Stage, Training-Free Layout-To-Image Generation Framework
For High-Overlap Layouts
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent training-free layout-to-image diffusion models have demonstrated
remarkable performance in generating high-quality images with controllable
layouts. These models follow a one-stage framework: Encouraging the model to
focus the attention map of each concept on its corresponding region by defining
attention map-based losses. However, these models still struggle to accurately
follow layouts with significant overlap, often leading to issues like attribute
leakage and missing entities. In this paper, we propose ToLo, a two-stage,
training-free layout-to-image generation framework for high-overlap layouts.
Our framework consists of two stages: the aggregation stage and the separation
stage, each with its own loss function based on the attention map. To provide a
more effective evaluation, we partition the HRS dataset based on the
Intersection over Union (IoU) of the input layouts, creating a new dataset for
layout-to-image generation with varying levels of overlap. Through extensive
experiments on this dataset, we demonstrate that ToLo significantly enhances
the performance of existing methods when dealing with high-overlap layouts. Our
code and dataset are available here: https://github.com/misaka12435/ToLo.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:41:51 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Huang",
"Linhao",
""
],
[
"Yu",
"Jing",
""
]
] |
TITLE: ToLo: A Two-Stage, Training-Free Layout-To-Image Generation Framework
For High-Overlap Layouts
ABSTRACT: Recent training-free layout-to-image diffusion models have demonstrated
remarkable performance in generating high-quality images with controllable
layouts. These models follow a one-stage framework: encouraging the model to
focus the attention map of each concept on its corresponding region by defining
attention map-based losses. However, these models still struggle to accurately
follow layouts with significant overlap, often leading to issues like attribute
leakage and missing entities. In this paper, we propose ToLo, a two-stage,
training-free layout-to-image generation framework for high-overlap layouts.
Our framework consists of two stages: the aggregation stage and the separation
stage, each with its own loss function based on the attention map. To provide a
more effective evaluation, we partition the HRS dataset based on the
Intersection over Union (IoU) of the input layouts, creating a new dataset for
layout-to-image generation with varying levels of overlap. Through extensive
experiments on this dataset, we demonstrate that ToLo significantly enhances
the performance of existing methods when dealing with high-overlap layouts. Our
code and dataset are available here: https://github.com/misaka12435/ToLo.
|
new_dataset
| 0.953794 |
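The IoU-based partition of HRS can be sketched by scoring each layout with the
maximum pairwise IoU of its boxes and bucketing by thresholds; the cut points
below are illustrative, not the paper's exact split.

    from itertools import combinations

    def iou(a, b):
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter)
        return inter / union if union else 0.0

    def overlap_level(boxes, cuts=(0.1, 0.3, 0.5)):
        m = max((iou(a, b) for a, b in combinations(boxes, 2)), default=0.0)
        return sum(m >= c for c in cuts)   # 0 = low overlap ... 3 = high

    print(overlap_level([[0, 0, 60, 60], [40, 40, 100, 100], [200, 0, 260, 60]]))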
2503.01672
|
Xiao Liu
|
Xiao Liu, Zirui Wu, Jiayi Li, Zhicheng Shao, Xun Pang, Yansong Feng
|
Automated Annotation of Evolving Corpora for Augmenting Longitudinal
Network Data: A Framework Integrating Large Language Models and Expert
Knowledge
|
Work in progress, presented at the 2025 Asian PolMeth Conference
| null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Longitudinal network data are essential for analyzing political, economic,
and social systems and processes. In political science, these datasets are
often generated through human annotation or supervised machine learning applied
to evolving corpora. However, as semantic contexts shift over time, inferring
dynamic interaction types on emerging issues among a diverse set of entities
poses significant challenges, particularly in maintaining timely and consistent
annotations. This paper presents the Expert-Augmented LLM Annotation (EALA)
approach, which leverages Large Language Models (LLMs) in combination with
historically annotated data and expert-constructed codebooks to extrapolate and
extend datasets into future periods. We evaluate the performance and
reliability of EALA using a dataset of climate negotiations. Our findings
demonstrate that EALA effectively predicts nuanced interactions between
negotiation parties and captures the evolution of topics over time. At the same
time, we identify several limitations inherent to LLM-based annotation,
highlighting areas for further improvement. Given the wide availability of
codebooks and annotated datasets, EALA holds substantial promise for advancing
research in political science and beyond.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 15:46:01 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Liu",
"Xiao",
""
],
[
"Wu",
"Zirui",
""
],
[
"Li",
"Jiayi",
""
],
[
"Shao",
"Zhicheng",
""
],
[
"Pang",
"Xun",
""
],
[
"Feng",
"Yansong",
""
]
] |
TITLE: Automated Annotation of Evolving Corpora for Augmenting Longitudinal
Network Data: A Framework Integrating Large Language Models and Expert
Knowledge
ABSTRACT: Longitudinal network data are essential for analyzing political, economic,
and social systems and processes. In political science, these datasets are
often generated through human annotation or supervised machine learning applied
to evolving corpora. However, as semantic contexts shift over time, inferring
dynamic interaction types on emerging issues among a diverse set of entities
poses significant challenges, particularly in maintaining timely and consistent
annotations. This paper presents the Expert-Augmented LLM Annotation (EALA)
approach, which leverages Large Language Models (LLMs) in combination with
historically annotated data and expert-constructed codebooks to extrapolate and
extend datasets into future periods. We evaluate the performance and
reliability of EALA using a dataset of climate negotiations. Our findings
demonstrate that EALA effectively predicts nuanced interactions between
negotiation parties and captures the evolution of topics over time. At the same
time, we identify several limitations inherent to LLM-based annotation,
highlighting areas for further improvement. Given the wide availability of
codebooks and annotated datasets, EALA holds substantial promise for advancing
research in political science and beyond.
|
no_new_dataset
| 0.874881 |
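The EALA pattern, prompting an LLM with the expert codebook plus historically
annotated examples, can be sketched as below; `llm` is a placeholder for any
chat-completion client, and the prompt layout is an assumption rather than the
paper's template.

    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in an LLM client here")

    def annotate(codebook: str, examples: list, text: str) -> str:
        shots = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in examples)
        prompt = (f"Codebook:\n{codebook}\n\n"
                  f"Annotated examples:\n{shots}\n\n"
                  f"Text: {text}\nLabel:")
        return llm(prompt).strip()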
2503.01691
|
Yuyan Chen
|
Yuyan Chen, Nico Lang, B. Christian Schmidt, Aditya Jain, Yves Basset,
Sara Beery, Maxim Larriv\'ee, David Rolnick
|
Open-Set Recognition of Novel Species in Biodiversity Monitoring
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning is increasingly being applied to facilitate long-term,
large-scale biodiversity monitoring. With most species on Earth still
undiscovered or poorly documented, species-recognition models are expected to
encounter new species during deployment. We introduce Open-Insects, a
fine-grained image recognition benchmark dataset for open-set recognition and
out-of-distribution detection in biodiversity monitoring. Open-Insects makes it
possible to evaluate algorithms for new species detection on several
geographical open-set splits with varying difficulty. Furthermore, we present a
test set recently collected in the wild with 59 species that are likely new to
science. We evaluate a variety of open-set recognition algorithms, including
post-hoc methods, training-time regularization, and training with auxiliary
data, finding that the simple post-hoc approach of utilizing softmax scores
remains a strong baseline. We also demonstrate how to leverage auxiliary data
to improve the detection performance when the training dataset is limited. Our
results provide timely insights to guide the development of computer vision
methods for biodiversity monitoring and species discovery.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:04:46 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Chen",
"Yuyan",
""
],
[
"Lang",
"Nico",
""
],
[
"Schmidt",
"B. Christian",
""
],
[
"Jain",
"Aditya",
""
],
[
"Basset",
"Yves",
""
],
[
"Beery",
"Sara",
""
],
[
"Larrivée",
"Maxim",
""
],
[
"Rolnick",
"David",
""
]
] |
TITLE: Open-Set Recognition of Novel Species in Biodiversity Monitoring
ABSTRACT: Machine learning is increasingly being applied to facilitate long-term,
large-scale biodiversity monitoring. With most species on Earth still
undiscovered or poorly documented, species-recognition models are expected to
encounter new species during deployment. We introduce Open-Insects, a
fine-grained image recognition benchmark dataset for open-set recognition and
out-of-distribution detection in biodiversity monitoring. Open-Insects makes it
possible to evaluate algorithms for new species detection on several
geographical open-set splits with varying difficulty. Furthermore, we present a
test set recently collected in the wild with 59 species that are likely new to
science. We evaluate a variety of open-set recognition algorithms, including
post-hoc methods, training-time regularization, and training with auxiliary
data, finding that the simple post-hoc approach of utilizing softmax scores
remains a strong baseline. We also demonstrate how to leverage auxiliary data
to improve the detection performance when the training dataset is limited. Our
results provide timely insights to guide the development of computer vision
methods for biodiversity monitoring and species discovery.
|
new_dataset
| 0.95803 |
2503.01695
|
Kun Li
|
Kun Li, Tianhua Zhang, Yunxiang Li, Hongyin Luo, Abdalla Moustafa,
Xixin Wu, James Glass, Helen Meng
|
Generate, Discriminate, Evolve: Enhancing Context Faithfulness via
Fine-Grained Sentence-Level Self-Evolution
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Improving context faithfulness in large language models is essential for
developing trustworthy retrieval augmented generation systems and mitigating
hallucinations, especially in long-form question answering (LFQA) tasks or
scenarios involving knowledge conflicts. Existing methods either intervene in LLMs
only at inference without addressing their inherent limitations or overlook the
potential for self-improvement. In this paper, we introduce GenDiE (Generate,
Discriminate, Evolve), a novel self-evolving framework that enhances context
faithfulness through fine-grained sentence-level optimization. GenDiE combines
both generative and discriminative training, equipping LLMs with
self-generation and self-scoring capabilities to facilitate iterative
self-evolution. This supports both data construction for model alignment and
score-guided search during inference. Furthermore, by treating each sentence in
a response as an independent optimization unit, GenDiE effectively addresses
the limitations of previous approaches that optimize at the holistic answer
level, which may miss unfaithful details. Experiments on ASQA (in-domain LFQA)
and ConFiQA (out-of-domain counterfactual QA) datasets demonstrate that GenDiE
surpasses various baselines in both faithfulness and correctness, and exhibits
robust performance for domain adaptation.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:08:33 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Li",
"Kun",
""
],
[
"Zhang",
"Tianhua",
""
],
[
"Li",
"Yunxiang",
""
],
[
"Luo",
"Hongyin",
""
],
[
"Moustafa",
"Abdalla",
""
],
[
"Wu",
"Xixin",
""
],
[
"Glass",
"James",
""
],
[
"Meng",
"Helen",
""
]
] |
TITLE: Generate, Discriminate, Evolve: Enhancing Context Faithfulness via
Fine-Grained Sentence-Level Self-Evolution
ABSTRACT: Improving context faithfulness in large language models is essential for
developing trustworthy retrieval augmented generation systems and mitigating
hallucinations, especially in long-form question answering (LFQA) tasks or
scenarios involving knowledge conflicts. Existing methods either intervene in LLMs
only at inference without addressing their inherent limitations or overlook the
potential for self-improvement. In this paper, we introduce GenDiE (Generate,
Discriminate, Evolve), a novel self-evolving framework that enhances context
faithfulness through fine-grained sentence-level optimization. GenDiE combines
both generative and discriminative training, equipping LLMs with
self-generation and self-scoring capabilities to facilitate iterative
self-evolution. This supports both data construction for model alignment and
score-guided search during inference. Furthermore, by treating each sentence in
a response as an independent optimization unit, GenDiE effectively addresses
the limitations of previous approaches that optimize at the holistic answer
level, which may miss unfaithful details. Experiments on ASQA (in-domain LFQA)
and ConFiQA (out-of-domain counterfactual QA) datasets demonstrate that GenDiE
surpasses various baselines in both faithfulness and correctness, and exhibits
robust performance for domain adaptation.
|
no_new_dataset
| 0.94743 |
2503.01699
|
Tang Jiankai
|
Jiankai Tang, Xin Liu, Daniel McDuff, Zhang Jiang, Hongming Hu, Luxi
Zhou, Nodoka Nagao, Haruta Suzuki, Yuki Nagahama, Wei Li, Linhong Ji,
Yuanchun Shi, Izumi Nishidate, and Yuntao Wang
|
Camera Measurement of Blood Oxygen Saturation
| null | null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
Blood oxygen saturation (SpO2) is a crucial vital sign routinely monitored in
medical settings. Traditional methods require dedicated contact sensors,
limiting accessibility and comfort. This study presents a deep learning
framework for contactless SpO2 measurement using an off-the-shelf camera,
addressing challenges related to lighting variations and skin tone diversity.
We conducted two large-scale studies with diverse participants and evaluated
our method against traditional signal processing approaches in intra- and
inter-dataset scenarios. Our approach demonstrated consistent accuracy across
demographic groups, highlighting the feasibility of camera-based SpO2
monitoring as a scalable and non-invasive tool for remote health assessment.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:12:20 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Tang",
"Jiankai",
""
],
[
"Liu",
"Xin",
""
],
[
"McDuff",
"Daniel",
""
],
[
"Jiang",
"Zhang",
""
],
[
"Hu",
"Hongming",
""
],
[
"Zhou",
"Luxi",
""
],
[
"Nagao",
"Nodoka",
""
],
[
"Suzuki",
"Haruta",
""
],
[
"Nagahama",
"Yuki",
""
],
[
"Li",
"Wei",
""
],
[
"Ji",
"Linhong",
""
],
[
"Shi",
"Yuanchun",
""
],
[
"Nishidate",
"Izumi",
""
],
[
"Wang",
"Yuntao",
""
]
] |
TITLE: Camera Measurement of Blood Oxygen Saturation
ABSTRACT: Blood oxygen saturation (SpO2) is a crucial vital sign routinely monitored in
medical settings. Traditional methods require dedicated contact sensors,
limiting accessibility and comfort. This study presents a deep learning
framework for contactless SpO2 measurement using an off-the-shelf camera,
addressing challenges related to lighting variations and skin tone diversity.
We conducted two large-scale studies with diverse participants and evaluated
our method against traditional signal processing approaches in intra- and
inter-dataset scenarios. Our approach demonstrated consistent accuracy across
demographic groups, highlighting the feasibility of camera-based SpO2
monitoring as a scalable and non-invasive tool for remote health assessment.
|
no_new_dataset
| 0.942981 |
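For context on the "traditional signal processing approaches" the SpO2 study compares against, a common camera-based baseline is the ratio-of-ratios method sketched below. The calibration constants a and b are illustrative assumptions, not values from the paper.

import numpy as np

def ratio_of_ratios_spo2(red, blue, a=100.0, b=5.0):
    # SpO2 is estimated from the pulsatile (AC, ~std) to steady (DC, ~mean)
    # ratio of two color channels; a and b are device-specific calibration
    # constants (the values here are illustrative assumptions).
    r = (red.std() / red.mean()) / (blue.std() / blue.mean())
    return a - b * r

# Toy usage with synthetic per-frame channel intensities at a ~1.2 Hz pulse:
t = np.linspace(0, 10, 300)
red = 120 + 2.0 * np.sin(2 * np.pi * 1.2 * t)
blue = 90 + 1.5 * np.sin(2 * np.pi * 1.2 * t)
print(f"estimated SpO2: {ratio_of_ratios_spo2(red, blue):.1f}%")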
2503.01703
|
Vatsal Srivastava
|
Vatsal Srivastava
|
On the Development of Binary Classification Algorithm Based on
Principles of Geometry and Statistical Inference
|
20 pages; some figures may trigger overfull warnings, but the document
compiles successfully and renders correctly, so the warnings can be ignored
| null | null | null |
cs.LG math.AG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of this paper is to build a binary classification algorithm using
principles of geometry such as vectors, planes, and vector algebra. The basic
idea behind the proposed algorithm is that a hyperplane can completely
separate a given set of data points mapped to n-dimensional space, provided
the points are linearly separable in those n dimensions. Since points are the
foundational elements of any geometrical construct, manipulating the positions
of the points used to construct a given hyperplane manipulates the position of
the hyperplane itself. The paper includes tests against other classifiers on a
variety of standard machine learning datasets, with a focus on support vector
machines (SVMs): they and our proposed classifier use the same geometrical
construct of a hyperplane, and the versatility of SVMs makes them a good
benchmark for comparison. Since the algorithm focuses on moving points through
the hyperspace to which the dataset has been mapped, it has been dubbed the
moving points algorithm.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:16:28 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Srivastava",
"Vatsal",
""
]
] |
TITLE: On the Development of Binary Classification Algorithm Based on
Principles of Geometry and Statistical Inference
ABSTRACT: The aim of this paper is to build a binary classification algorithm
using principles of geometry such as vectors, planes, and vector algebra. The
basic idea behind the proposed algorithm is that a hyperplane can completely
separate a given set of data points mapped to n-dimensional space, provided
the points are linearly separable in those n dimensions. Since points are the
foundational elements of any geometrical construct, manipulating the positions
of the points used to construct a given hyperplane manipulates the position of
the hyperplane itself. The paper includes tests against other classifiers on a
variety of standard machine learning datasets, with a focus on support vector
machines (SVMs): they and our proposed classifier use the same geometrical
construct of a hyperplane, and the versatility of SVMs makes them a good
benchmark for comparison. Since the algorithm focuses on moving points through
the hyperspace to which the dataset has been mapped, it has been dubbed the
moving points algorithm.
|
no_new_dataset
| 0.953665 |
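The abstract's core idea, repositioning a hyperplane by manipulating points, can be illustrated with a generic perceptron-style update. This is only a sketch of the shared geometric intuition, not the authors' moving points algorithm.

import numpy as np

def train_hyperplane(X, y, epochs=50, lr=0.1):
    # Nudge the hyperplane w.x + b = 0 toward misclassified points
    # (perceptron-style updates); labels y are in {-1, +1}.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified
                w += lr * yi * xi
                b += lr * yi
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_hyperplane(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))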
2503.01704
|
Minoo Hosseinzadeh
|
Minoo Hosseinzadeh, Hana Khamfroush
|
DILEMMA: Joint LLM Quantization and Distributed LLM Inference Over Edge
Computing Systems
| null | null | null | null |
cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
With the recent trend of using Large Language Models (LLMs) for different
applications within smart cities, there is a need to push these models toward
the edge of the network while still preserving their performance. Edge
Computing (EC), being physically closer to end users, can help reduce the
communication delay of serving end users' tasks in LLM-dependent services.
However, EC servers have limited communication, computation, and storage
capacity. This paper introduces DILEMMA, a novel framework that addresses the
challenges of deploying LLMs in EC systems by jointly optimizing layer
placement and layer quantization. DILEMMA formulates an Integer Linear
Programming problem to minimize total inference delay while ensuring
acceptable LLM performance levels, leveraging layer-wise quantization and
knowledge distillation for LLM performance control. Experimental evaluations
on the OPT-350 model using the SQuAD dataset demonstrate that DILEMMA achieves
a quantization ratio of up to 12.75% while preserving model loss, highlighting
its effectiveness in resource-constrained environments.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:16:33 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Hosseinzadeh",
"Minoo",
""
],
[
"Khamfroush",
"Hana",
""
]
] |
TITLE: DILEMMA: Joint LLM Quantization and Distributed LLM Inference Over Edge
Computing Systems
ABSTRACT: With the recent trend of using Large Language Models (LLMs) for
different applications within smart cities, there is a need to push these
models toward the edge of the network while still preserving their
performance. Edge Computing (EC), being physically closer to end users, can
help reduce the communication delay of serving end users' tasks in
LLM-dependent services. However, EC servers have limited communication,
computation, and storage capacity. This paper introduces DILEMMA, a novel
framework that addresses the challenges of deploying LLMs in EC systems by
jointly optimizing layer placement and layer quantization. DILEMMA formulates
an Integer Linear Programming problem to minimize total inference delay while
ensuring acceptable LLM performance levels, leveraging layer-wise quantization
and knowledge distillation for LLM performance control. Experimental
evaluations on the OPT-350 model using the SQuAD dataset demonstrate that
DILEMMA achieves a quantization ratio of up to 12.75% while preserving model
loss, highlighting its effectiveness in resource-constrained environments.
|
no_new_dataset
| 0.941223 |
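A toy integer linear program in the spirit of DILEMMA's formulation, written with PuLP. It only models layer placement under per-server memory limits and per-layer bit-width choices; the paper's actual ILP additionally models communication delay and LLM performance constraints (without which the solver trivially prefers the lowest bit-width). All numbers are illustrative assumptions.

import pulp

layers, servers, bits = range(4), range(2), [4, 8, 16]
# Illustrative per-layer compute delay and memory need as functions of
# bit-width; server 1 is assumed faster but smaller.
delay = {(l, s, b): (l + 1) * (b / 8) / (s + 1)
         for l in layers for s in servers for b in bits}
mem = {(l, b): (l + 1) * b for l in layers for b in bits}
cap = {0: 60, 1: 40}

prob = pulp.LpProblem("placement_and_quantization", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (layers, servers, bits), cat="Binary")

prob += pulp.lpSum(delay[l, s, b] * x[l][s][b]
                   for l in layers for s in servers for b in bits)
for l in layers:                     # each layer placed once, one bit-width
    prob += pulp.lpSum(x[l][s][b] for s in servers for b in bits) == 1
for s in servers:                    # per-server memory capacity
    prob += pulp.lpSum(mem[l, b] * x[l][s][b]
                       for l in layers for b in bits) <= cap[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for l in layers:
    for s in servers:
        for b in bits:
            if x[l][s][b].value() == 1:
                print(f"layer {l} -> server {s} at {b}-bit")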
2503.01710
|
Xinsheng Wang
|
Xinsheng Wang, Mingqi Jiang, Ziyang Ma, Ziyu Zhang, Songxiang Liu,
Linqin Li, Zheng Liang, Qixi Zheng, Rui Wang, Xiaoqin Feng, Weizhen Bian,
Zhen Ye, Sitong Cheng, Ruibin Yuan, Zhixian Zhao, Xinfa Zhu, Jiahao Pan,
Liumeng Xue, Pengcheng Zhu, Yunlin Chen, Zhifei Li, Xie Chen, Lei Xie, Yike
Guo, Wei Xue
|
Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with
Single-Stream Decoupled Speech Tokens
|
Submitted to ACL 2025
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advancements in large language models (LLMs) have driven significant
progress in zero-shot text-to-speech (TTS) synthesis. However, existing
foundation models rely on multi-stage processing or complex architectures for
predicting multiple codebooks, limiting efficiency and integration flexibility.
To overcome these challenges, we introduce Spark-TTS, a novel system powered by
BiCodec, a single-stream speech codec that decomposes speech into two
complementary token types: low-bitrate semantic tokens for linguistic content
and fixed-length global tokens for speaker attributes. This disentangled
representation, combined with the Qwen2.5 LLM and a chain-of-thought (CoT)
generation approach, enables both coarse-grained control (e.g., gender,
speaking style) and fine-grained adjustments (e.g., precise pitch values,
speaking rate). To facilitate research in controllable TTS, we introduce
VoxBox, a meticulously curated 100,000-hour dataset with comprehensive
attribute annotations. Extensive experiments demonstrate that Spark-TTS not
only achieves state-of-the-art zero-shot voice cloning but also generates
highly customizable voices that surpass the limitations of reference-based
synthesis. Source code, pre-trained models, and audio samples are available at
https://github.com/SparkAudio/Spark-TTS.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:23:10 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Wang",
"Xinsheng",
""
],
[
"Jiang",
"Mingqi",
""
],
[
"Ma",
"Ziyang",
""
],
[
"Zhang",
"Ziyu",
""
],
[
"Liu",
"Songxiang",
""
],
[
"Li",
"Linqin",
""
],
[
"Liang",
"Zheng",
""
],
[
"Zheng",
"Qixi",
""
],
[
"Wang",
"Rui",
""
],
[
"Feng",
"Xiaoqin",
""
],
[
"Bian",
"Weizhen",
""
],
[
"Ye",
"Zhen",
""
],
[
"Cheng",
"Sitong",
""
],
[
"Yuan",
"Ruibin",
""
],
[
"Zhao",
"Zhixian",
""
],
[
"Zhu",
"Xinfa",
""
],
[
"Pan",
"Jiahao",
""
],
[
"Xue",
"Liumeng",
""
],
[
"Zhu",
"Pengcheng",
""
],
[
"Chen",
"Yunlin",
""
],
[
"Li",
"Zhifei",
""
],
[
"Chen",
"Xie",
""
],
[
"Xie",
"Lei",
""
],
[
"Guo",
"Yike",
""
],
[
"Xue",
"Wei",
""
]
] |
TITLE: Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with
Single-Stream Decoupled Speech Tokens
ABSTRACT: Recent advancements in large language models (LLMs) have driven significant
progress in zero-shot text-to-speech (TTS) synthesis. However, existing
foundation models rely on multi-stage processing or complex architectures for
predicting multiple codebooks, limiting efficiency and integration flexibility.
To overcome these challenges, we introduce Spark-TTS, a novel system powered by
BiCodec, a single-stream speech codec that decomposes speech into two
complementary token types: low-bitrate semantic tokens for linguistic content
and fixed-length global tokens for speaker attributes. This disentangled
representation, combined with the Qwen2.5 LLM and a chain-of-thought (CoT)
generation approach, enables both coarse-grained control (e.g., gender,
speaking style) and fine-grained adjustments (e.g., precise pitch values,
speaking rate). To facilitate research in controllable TTS, we introduce
VoxBox, a meticulously curated 100,000-hour dataset with comprehensive
attribute annotations. Extensive experiments demonstrate that Spark-TTS not
only achieves state-of-the-art zero-shot voice cloning but also generates
highly customizable voices that surpass the limitations of reference-based
synthesis. Source code, pre-trained models, and audio samples are available at
https://github.com/SparkAudio/Spark-TTS.
|
new_dataset
| 0.957952 |
2503.01727
|
Jos\'e Medina
|
Jos\'e Medina, Amnir Hadachi, Paul Honeine, and Abdelaziz Bensrhair
|
Mamba base PKD for efficient knowledge compression
|
A preliminary version of this work was presented as a short poster
titled "Mamba-PKD: A Framework for Efficient and Scalable Model Compression
in Image Classification" at The 40th ACM/SIGAPP Symposium on Applied
Computing https://doi.org/10.1145/3672608.3707887
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks (DNNs) have remarkably succeeded in various image
processing tasks. However, their large size and computational complexity
present significant challenges for deploying them in resource-constrained
environments. This paper presents an innovative approach for integrating Mamba
Architecture within a Progressive Knowledge Distillation (PKD) process to
address the challenge of reducing model complexity while maintaining accuracy
in image classification tasks. The proposed framework distills a large teacher
model into progressively smaller student models, designed using Mamba blocks.
Each student model is trained using Selective-State-Space Models (S-SSM) within
the Mamba blocks, focusing on important input aspects while reducing
computational complexity. The work's preliminary experiments use MNIST and
CIFAR-10 as datasets to demonstrate the effectiveness of this approach. For
MNIST, the teacher model achieves 98% accuracy. A set of seven student models
as a group retained 63% of the teacher's FLOPs, approximating the teacher's
performance with 98% accuracy. The weak student used only 1% of the teacher's
FLOPs and maintained 72% accuracy. Similarly, for CIFAR-10, the students
achieved 1% less accuracy compared to the teacher, with the small student
retaining 5% of the teacher's FLOPs to achieve 50% accuracy. These results
confirm the flexibility and scalability of Mamba Architecture, which can be
integrated into PKD, succeeding in the process of finding students as weak
learners. The framework provides a solution for deploying complex neural
networks in real-time applications with a reduction in computational cost.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:44:23 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Medina",
"José",
""
],
[
"Hadachi",
"Amnir",
""
],
[
"Honeine",
"Paul",
""
],
[
"Bensrhair",
"Abdelaziz",
""
]
] |
TITLE: Mamba base PKD for efficient knowledge compression
ABSTRACT: Deep neural networks (DNNs) have remarkably succeeded in various image
processing tasks. However, their large size and computational complexity
present significant challenges for deploying them in resource-constrained
environments. This paper presents an innovative approach for integrating Mamba
Architecture within a Progressive Knowledge Distillation (PKD) process to
address the challenge of reducing model complexity while maintaining accuracy
in image classification tasks. The proposed framework distills a large teacher
model into progressively smaller student models, designed using Mamba blocks.
Each student model is trained using Selective-State-Space Models (S-SSM) within
the Mamba blocks, focusing on important input aspects while reducing
computational complexity. The work's preliminary experiments use MNIST and
CIFAR-10 as datasets to demonstrate the effectiveness of this approach. For
MNIST, the teacher model achieves 98% accuracy. A set of seven student models
as a group retained 63% of the teacher's FLOPs, approximating the teacher's
performance with 98% accuracy. The weak student used only 1% of the teacher's
FLOPs and maintained 72% accuracy. Similarly, for CIFAR-10, the students
achieved 1% less accuracy compared to the teacher, with the small student
retaining 5% of the teacher's FLOPs to achieve 50% accuracy. These results
confirm the flexibility and scalability of Mamba Architecture, which can be
integrated into PKD, succeeding in the process of finding students as weak
learners. The framework provides a solution for deploying complex neural
networks in real-time applications with a reduction in computational cost.
|
no_new_dataset
| 0.950273 |
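The distillation step inside a progressive KD pipeline like the one above typically combines a softened teacher-matching term with the ordinary label loss. A minimal PyTorch sketch, not the paper's exact training code:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Student matches the teacher's softened logits (KL term) while also
    # fitting the hard labels (CE term).
    soft_t = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_s = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_s, soft_t, log_target=True,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage:
s = torch.randn(8, 10)            # student logits (e.g., a small Mamba model)
t = torch.randn(8, 10)            # teacher logits
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))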
2503.01729
|
Alberta Longhini
|
Santiago Bou Betran, Alberta Longhini, Miguel Vasco, Yuchong Zhang and
Danica Kragic
|
FLAME: A Federated Learning Benchmark for Robotic Manipulation
|
Under Review
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent progress in robotic manipulation has been fueled by large-scale
datasets collected across diverse environments. Training robotic manipulation
policies on these datasets is traditionally performed in a centralized manner,
raising concerns regarding scalability, adaptability, and data privacy. While
federated learning enables decentralized, privacy-preserving training, its
application to robotic manipulation remains largely unexplored. We introduce
FLAME (Federated Learning Across Manipulation Environments), the first
benchmark designed for federated learning in robotic manipulation. FLAME
consists of: (i) a set of large-scale datasets of over 160,000 expert
demonstrations of multiple manipulation tasks, collected across a wide range of
simulated environments; (ii) a training and evaluation framework for robotic
policy learning in a federated setting. We evaluate standard federated learning
algorithms in FLAME, showing their potential for distributed policy learning
and highlighting key challenges. Our benchmark establishes a foundation for
scalable, adaptive, and privacy-aware robotic learning.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:49:15 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Betran",
"Santiago Bou",
""
],
[
"Longhini",
"Alberta",
""
],
[
"Vasco",
"Miguel",
""
],
[
"Zhang",
"Yuchong",
""
],
[
"Kragic",
"Danica",
""
]
] |
TITLE: FLAME: A Federated Learning Benchmark for Robotic Manipulation
ABSTRACT: Recent progress in robotic manipulation has been fueled by large-scale
datasets collected across diverse environments. Training robotic manipulation
policies on these datasets is traditionally performed in a centralized manner,
raising concerns regarding scalability, adaptability, and data privacy. While
federated learning enables decentralized, privacy-preserving training, its
application to robotic manipulation remains largely unexplored. We introduce
FLAME (Federated Learning Across Manipulation Environments), the first
benchmark designed for federated learning in robotic manipulation. FLAME
consists of: (i) a set of large-scale datasets of over 160,000 expert
demonstrations of multiple manipulation tasks, collected across a wide range of
simulated environments; (ii) a training and evaluation framework for robotic
policy learning in a federated setting. We evaluate standard federated learning
algorithms in FLAME, showing their potential for distributed policy learning
and highlighting key challenges. Our benchmark establishes a foundation for
scalable, adaptive, and privacy-aware robotic learning.
|
new_dataset
| 0.931836 |
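The "standard federated learning algorithms" a benchmark like FLAME evaluates center on the federated averaging (FedAvg) server step sketched below: each client trains locally and the server averages parameters, weighted by local dataset size. Assumes identical model architectures across clients.

import torch

def fedavg(client_states, client_sizes):
    # Server step: average client parameters, weighted by local dataset size.
    total = sum(client_sizes)
    return {k: sum(st[k] * (n / total)
                   for st, n in zip(client_states, client_sizes))
            for k in client_states[0]}

net = torch.nn.Linear(4, 2)                    # stand-in for a policy network
c1 = {k: v + 0.1 for k, v in net.state_dict().items()}  # client 1 update
c2 = {k: v - 0.1 for k, v in net.state_dict().items()}  # client 2 update
net.load_state_dict(fedavg([c1, c2], client_sizes=[100, 300]))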
2503.01733
|
Alexander Karpekov
|
Alexander Karpekov, Sonia Chernova, Thomas Pl\"otz
|
DISCOVER: Data-driven Identification of Sub-activities via Clustering
and Visualization for Enhanced Activity Recognition in Smart Homes
|
v1: Initial submission. Under review at IMWUT
| null | null | null |
cs.HC cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human Activity Recognition (HAR) using ambient sensors has great potential
for practical applications, particularly in elder care and independent living.
However, deploying HAR systems in real-world settings remains challenging due
to the high cost of labeled data, the need for pre-segmented sensor streams,
and the lack of flexibility in activity granularity. To address these
limitations, we introduce DISCOVER, a method designed to discover fine-grained
human sub-activities from unlabeled sensor data without relying on
pre-segmentation. DISCOVER combines unsupervised feature extraction and
clustering with a user-friendly visualization tool to streamline the labeling
process. DISCOVER enables domain experts to efficiently annotate only a minimal
set of representative cluster centroids, reducing the annotation workload to a
small number of samples (0.05% of our dataset). We demonstrate DISCOVER's
effectiveness through a re-annotation exercise on widely used HAR datasets,
showing that it uncovers finer-grained activities and produces more nuanced
annotations than traditional coarse labels. DISCOVER represents a step toward
practical, deployable HAR systems that adapt to diverse real environments.
|
[
{
"version": "v1",
"created": "Tue, 11 Feb 2025 20:02:24 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Karpekov",
"Alexander",
""
],
[
"Chernova",
"Sonia",
""
],
[
"Plötz",
"Thomas",
""
]
] |
TITLE: DISCOVER: Data-driven Identification of Sub-activities via Clustering
and Visualization for Enhanced Activity Recognition in Smart Homes
ABSTRACT: Human Activity Recognition (HAR) using ambient sensors has great potential
for practical applications, particularly in elder care and independent living.
However, deploying HAR systems in real-world settings remains challenging due
to the high cost of labeled data, the need for pre-segmented sensor streams,
and the lack of flexibility in activity granularity. To address these
limitations, we introduce DISCOVER, a method designed to discover fine-grained
human sub-activities from unlabeled sensor data without relying on
pre-segmentation. DISCOVER combines unsupervised feature extraction and
clustering with a user-friendly visualization tool to streamline the labeling
process. DISCOVER enables domain experts to efficiently annotate only a minimal
set of representative cluster centroids, reducing the annotation workload to a
small number of samples (0.05% of our dataset). We demonstrate DISCOVER's
effectiveness through a re-annotation exercise on widely used HAR datasets,
showing that it uncovers finer-grained activities and produces more nuanced
annotations than traditional coarse labels. DISCOVER represents a step toward
practical, deployable HAR systems that adapt to diverse real environments.
|
no_new_dataset
| 0.529081 |
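A sketch of DISCOVER's labeling economy: cluster unlabeled feature vectors, have an expert annotate only the sample nearest each centroid, and propagate that label to the whole cluster. The random features stand in for the paper's learned sensor representations.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 16))     # unlabeled sensor-window features

k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)

# Expert annotates one representative per cluster (k labels for 2000 samples).
representatives = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
                   for c in km.cluster_centers_]
expert_labels = {i: f"sub_activity_{j}" for j, i in enumerate(representatives)}

# Propagate centroid labels to every member of the cluster.
propagated = [expert_labels[representatives[c]] for c in km.labels_]
print(f"annotated {k} of {len(features)} samples "
      f"({k / len(features):.2%} of the dataset)")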
2503.01737
|
Mohammad Rafid Ul Islam
|
Mohammad Rafid Ul Islam, Prasad Tadepalli, Alan Fern
|
Self-attention-based Diffusion Model for Time-series Imputation in
Partial Blackout Scenarios
|
7 pages, 2 figures, 3 tables, Accepted in AAAI 2025 Main Track
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Missing values in multivariate time series data can harm machine learning
performance and introduce bias. These gaps arise from sensor malfunctions,
blackouts, and human error and are typically addressed by data imputation.
Previous work has tackled the imputation of missing data in random, complete
blackouts and forecasting scenarios. The current paper addresses a more general
missing pattern, which we call "partial blackout," where a subset of features
is missing for consecutive time steps. We introduce a two-stage imputation
process using self-attention and diffusion processes to model feature and
temporal correlations. Notably, our model effectively handles missing data
during training, enhancing adaptability and ensuring reliable imputation and
performance, even with incomplete datasets. Our experiments on benchmark and
two real-world time series datasets demonstrate that our model outperforms the
state-of-the-art in partial blackout scenarios and shows better scalability.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:58:15 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Islam",
"Mohammad Rafid Ul",
""
],
[
"Tadepalli",
"Prasad",
""
],
[
"Fern",
"Alan",
""
]
] |
TITLE: Self-attention-based Diffusion Model for Time-series Imputation in
Partial Blackout Scenarios
ABSTRACT: Missing values in multivariate time series data can harm machine learning
performance and introduce bias. These gaps arise from sensor malfunctions,
blackouts, and human error and are typically addressed by data imputation.
Previous work has tackled the imputation of missing data in random, complete
blackouts and forecasting scenarios. The current paper addresses a more general
missing pattern, which we call "partial blackout," where a subset of features
is missing for consecutive time steps. We introduce a two-stage imputation
process using self-attention and diffusion processes to model feature and
temporal correlations. Notably, our model effectively handles missing data
during training, enhancing adaptability and ensuring reliable imputation and
performance, even with incomplete datasets. Our experiments on benchmark and
two real-world time series datasets demonstrate that our model outperforms the
state-of-the-art in partial blackout scenarios and shows better scalability.
|
no_new_dataset
| 0.948632 |
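The "partial blackout" missingness pattern the paper targets is easy to simulate: a random subset of features goes missing for a run of consecutive time steps. Shapes and rates below are illustrative assumptions.

import numpy as np

def partial_blackout_mask(T, F, n_features, span, rng):
    # Boolean mask of shape (T, F): True = observed, False = missing.
    mask = np.ones((T, F), dtype=bool)
    feats = rng.choice(F, size=n_features, replace=False)
    start = rng.integers(0, T - span)
    mask[start:start + span, feats] = False
    return mask

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 8))                 # multivariate time series
mask = partial_blackout_mask(100, 8, n_features=3, span=20, rng=rng)
observed = np.where(mask, data, np.nan)          # what the imputer sees
print(f"missing rate: {1 - mask.mean():.2%}")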
2503.01739
|
Wenhao Wang
|
Wenhao Wang, Yi Yang
|
VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video
Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-video generative models convert textual prompts into dynamic visual
content, offering wide-ranging applications in film production, gaming, and
education. However, their real-world performance often falls short of user
expectations. One key reason is that these models have not been trained on
videos related to some topics users want to create. In this paper, we propose
VideoUFO, the first Video dataset specifically curated to align with Users'
FOcus in real-world scenarios. Beyond this, our VideoUFO also features: (1)
minimal ($0.29\%$) overlap with existing video datasets, and (2) videos
searched exclusively via YouTube's official API under the Creative Commons
license. These two attributes provide future researchers with greater freedom
to broaden their training sources. The VideoUFO comprises over $1.09$ million
video clips, each paired with both a brief and a detailed caption
(description). Specifically, through clustering, we first identify $1,291$
user-focused topics from the million-scale real text-to-video prompt dataset,
VidProM. Then, we use these topics to retrieve videos from YouTube, split the
retrieved videos into clips, and generate both brief and detailed captions for
each clip. After verifying the clips with specified topics, we are left with
about $1.09$ million video clips. Our experiments reveal that (1) $16$ current
text-to-video models do not achieve consistent performance across all
user-focused topics; and (2) a simple model trained on VideoUFO outperforms
others on worst-performing topics. The dataset is publicly available at
https://huggingface.co/datasets/WenhaoWang/VideoUFO under the CC BY 4.0
License.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 17:00:36 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Wang",
"Wenhao",
""
],
[
"Yang",
"Yi",
""
]
] |
TITLE: VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video
Generation
ABSTRACT: Text-to-video generative models convert textual prompts into dynamic visual
content, offering wide-ranging applications in film production, gaming, and
education. However, their real-world performance often falls short of user
expectations. One key reason is that these models have not been trained on
videos related to some topics users want to create. In this paper, we propose
VideoUFO, the first Video dataset specifically curated to align with Users'
FOcus in real-world scenarios. Beyond this, our VideoUFO also features: (1)
minimal ($0.29\%$) overlap with existing video datasets, and (2) videos
searched exclusively via YouTube's official API under the Creative Commons
license. These two attributes provide future researchers with greater freedom
to broaden their training sources. The VideoUFO comprises over $1.09$ million
video clips, each paired with both a brief and a detailed caption
(description). Specifically, through clustering, we first identify $1,291$
user-focused topics from the million-scale real text-to-video prompt dataset,
VidProM. Then, we use these topics to retrieve videos from YouTube, split the
retrieved videos into clips, and generate both brief and detailed captions for
each clip. After verifying the clips with specified topics, we are left with
about $1.09$ million video clips. Our experiments reveal that (1) $16$ current
text-to-video models do not achieve consistent performance across all
user-focused topics; and (2) a simple model trained on VideoUFO outperforms
others on worst-performing topics. The dataset is publicly available at
https://huggingface.co/datasets/WenhaoWang/VideoUFO under the CC BY 4.0
License.
|
no_new_dataset
| 0.92157 |
2503.01750
|
Arash Mohammadi
|
Nastaran Mansourian, Arash Mohammadi, M. Omair Ahmad, M.N.S. Swamy
|
ECG-EmotionNet: Nested Mixture of Expert (NMoE) Adaptation of
ECG-Foundation Model for Driver Emotion Recognition
| null | null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Driver emotion recognition plays a crucial role in driver monitoring systems,
enhancing human-autonomy interactions and the trustworthiness of Autonomous
Driving (AD). Various physiological and behavioural modalities have been
explored for this purpose, with Electrocardiogram (ECG) emerging as a standout
choice for real-time emotion monitoring, particularly in dynamic and
unpredictable driving conditions. Existing methods, however, often rely on
multi-channel ECG signals recorded under static conditions, limiting their
applicability in real-world dynamic driving scenarios. To address this
limitation, the paper introduces ECG-EmotionNet, a novel architecture designed
specifically for emotion recognition in dynamic driving environments.
ECG-EmotionNet is constructed by adapting a recently introduced ECG Foundation
Model (FM) and uniquely employs single-channel ECG signals, ensuring both
robust generalizability and computational efficiency. Unlike conventional
adaptation methods such as full fine-tuning, linear probing, or low-rank
adaptation, we propose an intuitively pleasing alternative, referred to as the
nested Mixture of Experts (MoE) adaptation. More precisely, each transformer
layer of the underlying FM is treated as a separate expert, with embeddings
extracted from these experts fused using trainable weights within a gating
mechanism. This approach enhances the representation of both global and local
ECG features, leading to a 6% improvement in accuracy and a 7% increase in the
F1 score, all while maintaining computational efficiency. The effectiveness of
the proposed ECG-EmotionNet architecture is evaluated using a recently
introduced and challenging driver emotion monitoring dataset.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 17:19:45 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Mansourian",
"Nastaran",
""
],
[
"Mohammadi",
"Arash",
""
],
[
"Ahmad",
"M. Omair",
""
],
[
"Swamy",
"M. N. S.",
""
]
] |
TITLE: ECG-EmotionNet: Nested Mixture of Expert (NMoE) Adaptation of
ECG-Foundation Model for Driver Emotion Recognition
ABSTRACT: Driver emotion recognition plays a crucial role in driver monitoring systems,
enhancing human-autonomy interactions and the trustworthiness of Autonomous
Driving (AD). Various physiological and behavioural modalities have been
explored for this purpose, with Electrocardiogram (ECG) emerging as a standout
choice for real-time emotion monitoring, particularly in dynamic and
unpredictable driving conditions. Existing methods, however, often rely on
multi-channel ECG signals recorded under static conditions, limiting their
applicability in real-world dynamic driving scenarios. To address this
limitation, the paper introduces ECG-EmotionNet, a novel architecture designed
specifically for emotion recognition in dynamic driving environments.
ECG-EmotionNet is constructed by adapting a recently introduced ECG Foundation
Model (FM) and uniquely employs single-channel ECG signals, ensuring both
robust generalizability and computational efficiency. Unlike conventional
adaptation methods such as full fine-tuning, linear probing, or low-rank
adaptation, we propose an intuitively pleasing alternative, referred to as the
nested Mixture of Experts (MoE) adaptation. More precisely, each transformer
layer of the underlying FM is treated as a separate expert, with embeddings
extracted from these experts fused using trainable weights within a gating
mechanism. This approach enhances the representation of both global and local
ECG features, leading to a 6% improvement in accuracy and a 7% increase in the
F1 score, all while maintaining computational efficiency. The effectiveness of
the proposed ECG-EmotionNet architecture is evaluated using a recently
introduced and challenging driver emotion monitoring dataset.
|
no_new_dataset
| 0.948822 |
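A sketch of the nested MoE idea: treat each transformer layer of the frozen ECG foundation model as an "expert" and fuse their embeddings with trainable gating weights. Dimensions are illustrative, and the real gating mechanism may differ in detail.

import torch
import torch.nn as nn

class NestedMoEFusion(nn.Module):
    def __init__(self, n_layers, dim, n_classes):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(n_layers))  # one weight per expert
        self.head = nn.Linear(dim, n_classes)

    def forward(self, layer_embeddings):
        # layer_embeddings: (batch, n_layers, dim), one embedding per FM layer.
        w = torch.softmax(self.gate, dim=0)          # trainable fusion weights
        fused = (w[None, :, None] * layer_embeddings).sum(dim=1)
        return self.head(fused)

# Toy usage: 12 frozen FM layers, 768-dim embeddings, 4 emotion classes.
fusion = NestedMoEFusion(n_layers=12, dim=768, n_classes=4)
emb = torch.randn(2, 12, 768)
print(fusion(emb).shape)  # torch.Size([2, 4])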
2503.01753
|
Quan Mai
|
Quan Mai, Susan Gauch, Douglas Adams
|
Boolean-aware Attention for Dense Retrieval
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Boolean-aware attention, a novel attention mechanism that
dynamically adjusts token focus based on Boolean operators (e.g., and, or,
not). Our model employs specialized Boolean experts, each tailored to amplify
or suppress attention for operator-specific contexts. A predefined gating
mechanism activates the corresponding experts based on the detected Boolean
type. Experiments on Boolean retrieval datasets demonstrate that integrating
BoolAttn with BERT greatly enhances the model's capability to process Boolean
queries.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 17:23:08 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Mai",
"Quan",
""
],
[
"Gauch",
"Susan",
""
],
[
"Adams",
"Douglas",
""
]
] |
TITLE: Boolean-aware Attention for Dense Retrieval
ABSTRACT: We present Boolean-aware attention, a novel attention mechanism that
dynamically adjusts token focus based on Boolean operators (e.g., and, or,
not). Our model employs specialized Boolean experts, each tailored to amplify
or suppress attention for operator-specific contexts. A predefined gating
mechanism activates the corresponding experts based on the detected Boolean
type. Experiments on Boolean retrieval datasets demonstrate that integrating
BoolAttn with BERT greatly enhances the model's capability to process Boolean
queries.
|
no_new_dataset
| 0.951097 |
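A toy rendering of the Boolean-aware idea: per-operator "experts" contribute additive attention biases that amplify or suppress attention within the scope of and/or/not, selected by the detected operator type. This is a sketch, not the paper's BoolAttn module.

import torch
import torch.nn as nn

class BooleanBias(nn.Module):
    OPS = {"and": 0, "or": 1, "not": 2}

    def __init__(self, n_heads):
        super().__init__()
        # One learnable scalar per (operator, head): >0 amplifies attention
        # in the operator's scope, <0 suppresses it.
        self.expert_bias = nn.Parameter(torch.zeros(len(self.OPS), n_heads))

    def forward(self, scores, op, scope_mask):
        # scores: (batch, heads, q, k); scope_mask: (batch, q, k) marking the
        # tokens governed by the detected Boolean operator.
        bias = self.expert_bias[self.OPS[op]]            # (heads,)
        return scores + bias[None, :, None, None] * scope_mask[:, None]

scores = torch.randn(1, 8, 6, 6)
scope = torch.zeros(1, 6, 6)
scope[:, :, 3:] = 1.0                     # e.g., tokens in the scope of "not"
print(BooleanBias(n_heads=8)(scores, "not", scope).shape)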
2503.01763
|
Zhengliang Shi
|
Zhengliang Shi, Yuhan Wang, Lingyong Yan, Pengjie Ren, Shuaiqiang
Wang, Dawei Yin, Zhaochun Ren
|
Retrieval Models Aren't Tool-Savvy: Benchmarking Tool Retrieval for
Large Language Models
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tool learning aims to augment large language models (LLMs) with diverse
tools, enabling them to act as agents for solving practical tasks. Due to the
limited context length of tool-using LLMs, adopting information retrieval (IR)
models to select useful tools from large toolsets is a critical initial step.
However, the performance of IR models in tool retrieval tasks remains
underexplored and unclear. Most tool-use benchmarks simplify this step by
manually pre-annotating a small set of relevant tools for each task, which is
far from real-world scenarios. In this paper, we propose ToolRet, a
heterogeneous tool retrieval benchmark comprising 7.6k diverse retrieval
tasks and a corpus of 43k tools, collected from existing datasets. We
benchmark six types of models on ToolRet. Surprisingly, even models with
strong performance on conventional IR benchmarks exhibit poor performance on
ToolRet. This low retrieval quality degrades the task pass rate of tool-use
LLMs. As a further step, we contribute a large-scale training dataset with
over 200k instances, which substantially optimizes the tool retrieval ability
of IR models.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 17:37:16 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Shi",
"Zhengliang",
""
],
[
"Wang",
"Yuhan",
""
],
[
"Yan",
"Lingyong",
""
],
[
"Ren",
"Pengjie",
""
],
[
"Wang",
"Shuaiqiang",
""
],
[
"Yin",
"Dawei",
""
],
[
"Ren",
"Zhaochun",
""
]
] |
TITLE: Retrieval Models Aren't Tool-Savvy: Benchmarking Tool Retrieval for
Large Language Models
ABSTRACT: Tool learning aims to augment large language models (LLMs) with
diverse tools, enabling them to act as agents for solving practical tasks.
Due to the limited context length of tool-using LLMs, adopting information
retrieval (IR) models to select useful tools from large toolsets is a critical
initial step. However, the performance of IR models in tool retrieval tasks
remains underexplored and unclear. Most tool-use benchmarks simplify this step
by manually pre-annotating a small set of relevant tools for each task, which
is far from real-world scenarios. In this paper, we propose ToolRet, a
heterogeneous tool retrieval benchmark comprising 7.6k diverse retrieval
tasks and a corpus of 43k tools, collected from existing datasets. We
benchmark six types of models on ToolRet. Surprisingly, even models with
strong performance on conventional IR benchmarks exhibit poor performance on
ToolRet. This low retrieval quality degrades the task pass rate of tool-use
LLMs. As a further step, we contribute a large-scale training dataset with
over 200k instances, which substantially optimizes the tool retrieval ability
of IR models.
|
new_dataset
| 0.969527 |
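The evaluation loop a tool-retrieval benchmark like ToolRet implies is standard dense retrieval scoring: embed queries and tool descriptions, rank tools by similarity, and report Recall@k. The random embeddings below stand in for any IR model under test.

import numpy as np

def recall_at_k(query_emb, tool_emb, relevant, k=10):
    # query_emb: (Q, d); tool_emb: (N, d); relevant: list of sets of tool ids.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    t = tool_emb / np.linalg.norm(tool_emb, axis=1, keepdims=True)
    topk = np.argsort(-(q @ t.T), axis=1)[:, :k]
    hits = [len(set(row) & rel) / max(len(rel), 1)
            for row, rel in zip(topk, relevant)]
    return float(np.mean(hits))

rng = np.random.default_rng(0)
queries, tools = rng.normal(size=(5, 64)), rng.normal(size=(100, 64))
gold = [{i} for i in range(5)]                 # one relevant tool per task
print(f"Recall@10: {recall_at_k(queries, tools, gold):.3f}")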
2503.01768
|
Heming Fu
|
Heming Fu, Hongkai Chen, Shan Lin, Guoliang Xing
|
SHADE-AD: An LLM-Based Framework for Synthesizing Activity Data of
Alzheimer's Patients
|
7 pages, 6 figures, ACM SenSys'25
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Alzheimer's Disease (AD) has become an increasingly critical global health
concern, which necessitates effective monitoring solutions in smart health
applications. However, the development of such solutions is significantly
hindered by the scarcity of AD-specific activity datasets. To address this
challenge, we propose SHADE-AD, a Large Language Model (LLM) framework for
Synthesizing Human Activity Datasets Embedded with AD features. Leveraging both
public datasets and our own collected data from 99 AD patients, SHADE-AD
synthesizes human activity videos that specifically represent AD-related
behaviors. By employing a three-stage training mechanism, it broadens the range
of activities beyond those collected from limited deployment settings. We
conducted comprehensive evaluations of the generated dataset, demonstrating
significant improvements in downstream tasks such as Human Activity Recognition
(HAR) detection, with enhancements of up to 79.69%. Detailed motion metrics
between real and synthetic data show strong alignment, validating the realism
and utility of the synthesized dataset. These results underscore SHADE-AD's
potential to advance smart health applications by providing a cost-effective,
privacy-preserving solution for AD monitoring.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 17:48:18 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Fu",
"Heming",
""
],
[
"Chen",
"Hongkai",
""
],
[
"Lin",
"Shan",
""
],
[
"Xing",
"Guoliang",
""
]
] |
TITLE: SHADE-AD: An LLM-Based Framework for Synthesizing Activity Data of
Alzheimer's Patients
ABSTRACT: Alzheimer's Disease (AD) has become an increasingly critical global health
concern, which necessitates effective monitoring solutions in smart health
applications. However, the development of such solutions is significantly
hindered by the scarcity of AD-specific activity datasets. To address this
challenge, we propose SHADE-AD, a Large Language Model (LLM) framework for
Synthesizing Human Activity Datasets Embedded with AD features. Leveraging both
public datasets and our own collected data from 99 AD patients, SHADE-AD
synthesizes human activity videos that specifically represent AD-related
behaviors. By employing a three-stage training mechanism, it broadens the range
of activities beyond those collected from limited deployment settings. We
conducted comprehensive evaluations of the generated dataset, demonstrating
significant improvements in downstream tasks such as Human Activity Recognition
(HAR) detection, with enhancements of up to 79.69%. Detailed motion metrics
between real and synthetic data show strong alignment, validating the realism
and utility of the synthesized dataset. These results underscore SHADE-AD's
potential to advance smart health applications by providing a cost-effective,
privacy-preserving solution for AD monitoring.
|
new_dataset
| 0.964921 |
2503.01781
|
Prapti Trivedi
|
Meghana Rajeev, Rajkumar Ramamurthy, Prapti Trivedi, Vikas Yadav,
Oluwanifemi Bamgbose, Sathwik Tejaswi Madhusudan, James Zou, Nazneen Rajani
|
Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for
Reasoning Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We investigate the robustness of reasoning models trained for step-by-step
problem solving by introducing query-agnostic adversarial triggers - short,
irrelevant text that, when appended to math problems, systematically misleads
models into outputting incorrect answers without altering the problem's
semantics. We propose CatAttack, an automated iterative attack pipeline for
generating triggers on a weaker, less expensive proxy model (DeepSeek V3) and
successfully transfer them to more advanced reasoning target models like
DeepSeek R1 and DeepSeek R1-distilled-Qwen-32B, resulting in a greater than
300% increase in the likelihood of the target model generating an incorrect
answer. For example, appending "Interesting fact: cats sleep most of their
lives" to any math problem more than doubles the chances of a model getting
the answer wrong. Our findings highlight critical vulnerabilities in reasoning
models, revealing that even state-of-the-art models remain susceptible to
subtle adversarial inputs, raising security and reliability concerns. The
CatAttack triggers dataset with model responses is available at
https://huggingface.co/datasets/collinear-ai/cat-attack-adversarial-triggers.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:10:54 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Rajeev",
"Meghana",
""
],
[
"Ramamurthy",
"Rajkumar",
""
],
[
"Trivedi",
"Prapti",
""
],
[
"Yadav",
"Vikas",
""
],
[
"Bamgbose",
"Oluwanifemi",
""
],
[
"Madhusudan",
"Sathwik Tejaswi",
""
],
[
"Zou",
"James",
""
],
[
"Rajani",
"Nazneen",
""
]
] |
TITLE: Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for
Reasoning Models
ABSTRACT: We investigate the robustness of reasoning models trained for
step-by-step problem solving by introducing query-agnostic adversarial
triggers - short, irrelevant text that, when appended to math problems,
systematically misleads models into outputting incorrect answers without
altering the problem's semantics. We propose CatAttack, an automated iterative
attack pipeline for generating triggers on a weaker, less expensive proxy
model (DeepSeek V3) and successfully transfer them to more advanced reasoning
target models like DeepSeek R1 and DeepSeek R1-distilled-Qwen-32B, resulting
in a greater than 300% increase in the likelihood of the target model
generating an incorrect answer. For example, appending "Interesting fact: cats
sleep most of their lives" to any math problem more than doubles the chances
of a model getting the answer wrong. Our findings highlight critical
vulnerabilities in reasoning models, revealing that even state-of-the-art
models remain susceptible to subtle adversarial inputs, raising security and
reliability concerns. The CatAttack triggers dataset with model responses is
available at
https://huggingface.co/datasets/collinear-ai/cat-attack-adversarial-triggers.
|
new_dataset
| 0.828315 |
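Measuring a query-agnostic trigger's effect reduces to comparing error rates with and without the appended text. In this sketch, solve is a hypothetical stand-in for querying the target reasoning model; the trigger string is quoted from the abstract.

TRIGGER = "Interesting fact: cats sleep most of their lives."

def solve(problem):
    # Hypothetical stand-in for querying the target reasoning model;
    # in practice this would be an API call returning the model's answer.
    raise NotImplementedError

def error_rate_ratio(problems, answers):
    # Ratio > 1 means the appended trigger raises the model's error rate.
    base = sum(solve(p) != a for p, a in zip(problems, answers))
    attacked = sum(solve(f"{p} {TRIGGER}") != a
                   for p, a in zip(problems, answers))
    return attacked / max(base, 1)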
2503.01783
|
Ali Tourani
|
Ali Tourani, Saad Ejaz, Hriday Bavle, David Morilla-Cabello, Jose Luis
Sanchez-Lopez, Holger Voos
|
vS-Graphs: Integrating Visual SLAM and Situational Graphs through
Multi-level Scene Understanding
|
13 pages, 8 figures, 2 tables
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Current Visual Simultaneous Localization and Mapping (VSLAM) systems often
struggle to create maps that are both semantically rich and easily
interpretable. While incorporating semantic scene knowledge aids in building
richer maps with contextual associations among mapped objects, representing
those maps in structured formats like scene graphs has not been widely
addressed, and existing attempts suffer from complex map comprehension and
limited scalability. This paper introduces visual S-Graphs (vS-Graphs), a
novel real-time VSLAM framework that integrates vision-based scene
understanding with map reconstruction and a comprehensible graph-based
representation. The framework infers structural elements (i.e., rooms and
corridors) from detected building components (i.e., walls and ground
surfaces) and incorporates them into optimizable 3D scene graphs. This
solution enhances the reconstructed map's semantic richness,
comprehensibility, and localization accuracy. Extensive experiments on
standard benchmarks and real-world datasets demonstrate that vS-Graphs
outperforms state-of-the-art VSLAM methods, reducing trajectory error by an
average of 3.38% and up to 9.58% on real-world data. Furthermore, the proposed
framework achieves environment-driven semantic entity detection accuracy
comparable to precise LiDAR-based frameworks using only visual features. A web
page containing more media and evaluation outcomes is available at
https://snt-arg.github.io/vsgraphs-results/.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:15:11 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Tourani",
"Ali",
""
],
[
"Ejaz",
"Saad",
""
],
[
"Bavle",
"Hriday",
""
],
[
"Morilla-Cabello",
"David",
""
],
[
"Sanchez-Lopez",
"Jose Luis",
""
],
[
"Voos",
"Holger",
""
]
] |
TITLE: vS-Graphs: Integrating Visual SLAM and Situational Graphs through
Multi-level Scene Understanding
ABSTRACT: Current Visual Simultaneous Localization and Mapping (VSLAM)
systems often struggle to create maps that are both semantically rich and
easily interpretable. While incorporating semantic scene knowledge aids in
building richer maps with contextual associations among mapped objects,
representing those maps in structured formats like scene graphs has not been
widely addressed, and existing attempts suffer from complex map comprehension
and limited scalability. This paper introduces visual S-Graphs (vS-Graphs), a
novel real-time VSLAM framework that integrates vision-based scene
understanding with map reconstruction and a comprehensible graph-based
representation. The framework infers structural elements (i.e., rooms and
corridors) from detected building components (i.e., walls and ground
surfaces) and incorporates them into optimizable 3D scene graphs. This
solution enhances the reconstructed map's semantic richness,
comprehensibility, and localization accuracy. Extensive experiments on
standard benchmarks and real-world datasets demonstrate that vS-Graphs
outperforms state-of-the-art VSLAM methods, reducing trajectory error by an
average of 3.38% and up to 9.58% on real-world data. Furthermore, the proposed
framework achieves environment-driven semantic entity detection accuracy
comparable to precise LiDAR-based frameworks using only visual features. A web
page containing more media and evaluation outcomes is available at
https://snt-arg.github.io/vsgraphs-results/.
|
no_new_dataset
| 0.952486 |
2503.01789
|
Hao Li
|
Chengyi Xing, Hao Li, Yi-Lin Wei, Tian-Ao Ren, Tianyu Tu, Yuhao Lin,
Elizabeth Schumann, Wei-Shi Zheng, Mark R. Cutkosky
|
TacCap: A Wearable FBG-Based Tactile Sensor for Seamless Human-to-Robot
Skill Transfer
|
7 pages, 8 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tactile sensing is essential for dexterous manipulation, yet large-scale
human demonstration datasets lack tactile feedback, limiting their
effectiveness in skill transfer to robots. To address this, we introduce
TacCap, a wearable Fiber Bragg Grating (FBG)-based tactile sensor designed for
seamless human-to-robot transfer. TacCap is lightweight, durable, and immune to
electromagnetic interference, making it ideal for real-world data collection.
We detail its design and fabrication, evaluate its sensitivity, repeatability,
and cross-sensor consistency, and assess its effectiveness through grasp
stability prediction and ablation studies. Our results demonstrate that TacCap
enables transferable tactile data collection, bridging the gap between human
demonstrations and robotic execution. To support further research and
development, we open-source our hardware design and software.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:21:26 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Xing",
"Chengyi",
""
],
[
"Li",
"Hao",
""
],
[
"Wei",
"Yi-Lin",
""
],
[
"Ren",
"Tian-Ao",
""
],
[
"Tu",
"Tianyu",
""
],
[
"Lin",
"Yuhao",
""
],
[
"Schumann",
"Elizabeth",
""
],
[
"Zheng",
"Wei-Shi",
""
],
[
"Cutkosky",
"Mark R.",
""
]
] |
TITLE: TacCap: A Wearable FBG-Based Tactile Sensor for Seamless Human-to-Robot
Skill Transfer
ABSTRACT: Tactile sensing is essential for dexterous manipulation, yet large-scale
human demonstration datasets lack tactile feedback, limiting their
effectiveness in skill transfer to robots. To address this, we introduce
TacCap, a wearable Fiber Bragg Grating (FBG)-based tactile sensor designed for
seamless human-to-robot transfer. TacCap is lightweight, durable, and immune to
electromagnetic interference, making it ideal for real-world data collection.
We detail its design and fabrication, evaluate its sensitivity, repeatability,
and cross-sensor consistency, and assess its effectiveness through grasp
stability prediction and ablation studies. Our results demonstrate that TacCap
enables transferable tactile data collection, bridging the gap between human
demonstrations and robotic execution. To support further research and
development, we open-source our hardware design and software.
|
no_new_dataset
| 0.811974 |
2503.01799
|
Md Farhan Shahriyar
|
Md. Farhan Shahriyar, Gazi Tanbhir, Abdullah Md Raihan Chy, Mohammed
Abdul Al Arafat Tanzin, Md. Jisan Mashrafi
|
PhishVQC: Optimizing Phishing URL Detection with Correlation Based
Feature Selection and Variational Quantum Classifier
|
This paper has been accepted and presented at the 3rd International
Conference on Intelligent Systems Advanced Computing and Communication (ISACC
2025)
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phishing URL detection is crucial in cybersecurity as malicious websites
disguise themselves to steal sensitive information. Traditional machine
learning techniques struggle to perform well in complex real-world scenarios
due to large datasets and intricate patterns. Motivated by quantum computing,
this paper proposes using Variational Quantum Classifiers (VQC) to enhance
phishing URL detection. We present PhishVQC, a quantum model that combines
quantum feature maps and variational ansatzes such as RealAmplitude and
EfficientSU2. The model is evaluated across two experimental setups with
varying dataset sizes and feature map repetitions. PhishVQC achieves a maximum
macro average F1-score of 0.89, showing a 22% improvement over prior studies.
This highlights the potential of quantum machine learning to improve phishing
detection accuracy. The study also notes computational challenges, with
execution wall times increasing as dataset size grows.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:28:01 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Shahriyar",
"Md. Farhan",
""
],
[
"Tanbhir",
"Gazi",
""
],
[
"Chy",
"Abdullah Md Raihan",
""
],
[
"Tanzin",
"Mohammed Abdul Al Arafat",
""
],
[
"Mashrafi",
"Md. Jisan",
""
]
] |
TITLE: PhishVQC: Optimizing Phishing URL Detection with Correlation Based
Feature Selection and Variational Quantum Classifier
ABSTRACT: Phishing URL detection is crucial in cybersecurity as malicious
websites disguise themselves to steal sensitive information. Traditional
machine learning techniques struggle to perform well in complex real-world
scenarios due to large datasets and intricate patterns. Motivated by quantum
computing, this paper proposes using Variational Quantum Classifiers (VQC) to
enhance phishing URL detection. We present PhishVQC, a quantum model that
combines quantum feature maps and variational ansatzes such as RealAmplitude
and EfficientSU2. The model is evaluated across two experimental setups with
varying dataset sizes and feature map repetitions. PhishVQC achieves a maximum
macro average F1-score of 0.89, showing a 22% improvement over prior studies.
This highlights the potential of quantum machine learning to improve phishing
detection accuracy. The study also notes computational challenges, with
execution wall times increasing as dataset size grows.
|
no_new_dataset
| 0.946941 |
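A sketch of the correlation-based feature selection half of the PhishVQC pipeline: keep features that correlate strongly with the label while dropping any feature highly correlated with an already-selected one. The thresholds and toy features are illustrative assumptions; the selected features would then feed the quantum classifier.

import numpy as np
import pandas as pd

def cfs(df, label, rel_min=0.2, red_max=0.8):
    # rel_min: minimum |corr(feature, label)|; red_max: maximum allowed
    # |corr| with any already-selected feature (redundancy cutoff).
    corr = df.corr(numeric_only=True)
    ranked = corr[label].drop(label).abs().sort_values(ascending=False)
    selected = []
    for feat, rel in ranked.items():
        if rel < rel_min:
            break
        if all(abs(corr.loc[feat, s]) < red_max for s in selected):
            selected.append(feat)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
df = pd.DataFrame({
    "url_len": y * 2 + rng.normal(size=500),
    "n_dots": y * 2 + rng.normal(scale=0.1, size=500),  # redundant with url_len
    "noise": rng.normal(size=500),
    "label": y,
})
print(cfs(df, "label"))  # keeps one of the two correlated features, not both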
2503.01807
|
Hamish Ivison
|
Hamish Ivison and Muru Zhang and Faeze Brahman and Pang Wei Koh and
Pradeep Dasigi
|
Large-Scale Data Selection for Instruction Tuning
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Selecting high-quality training data from a larger pool is a crucial step
when instruction-tuning language models, as carefully curated datasets often
produce models that outperform those trained on much larger, noisier datasets.
Automated data selection approaches for instruction-tuning are typically tested
by selecting small datasets (roughly 10k samples) from small pools (100-200k
samples). However, popular deployed instruction-tuned models often train on
hundreds of thousands to millions of samples, subsampled from even larger data
pools. We present a systematic study of how well data selection methods scale
to these settings, selecting up to 2.5M samples from pools of up to 5.8M
samples and evaluating across 7 diverse tasks. We show that many recently
proposed methods fall short of random selection in this setting (while using
more compute), and even decline in performance when given access to larger
pools of data to select over. However, we find that a variant of
representation-based data selection (RDS+), which uses weighted mean pooling of
pretrained LM hidden states, consistently outperforms more complex methods
across all settings tested -- all whilst being more compute-efficient. Our
findings highlight that the scaling properties of proposed automated selection
methods should be more closely examined. We release our code, data, and models
at https://github.com/hamishivi/automated-instruction-selection.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:37:26 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Ivison",
"Hamish",
""
],
[
"Zhang",
"Muru",
""
],
[
"Brahman",
"Faeze",
""
],
[
"Koh",
"Pang Wei",
""
],
[
"Dasigi",
"Pradeep",
""
]
] |
TITLE: Large-Scale Data Selection for Instruction Tuning
ABSTRACT: Selecting high-quality training data from a larger pool is a crucial step
when instruction-tuning language models, as carefully curated datasets often
produce models that outperform those trained on much larger, noisier datasets.
Automated data selection approaches for instruction-tuning are typically tested
by selecting small datasets (roughly 10k samples) from small pools (100-200k
samples). However, popular deployed instruction-tuned models often train on
hundreds of thousands to millions of samples, subsampled from even larger data
pools. We present a systematic study of how well data selection methods scale
to these settings, selecting up to 2.5M samples from pools of up to 5.8M
samples and evaluating across 7 diverse tasks. We show that many recently
proposed methods fall short of random selection in this setting (while using
more compute), and even decline in performance when given access to larger
pools of data to select over. However, we find that a variant of
representation-based data selection (RDS+), which uses weighted mean pooling of
pretrained LM hidden states, consistently outperforms more complex methods
across all settings tested -- all whilst being more compute-efficient. Our
findings highlight that the scaling properties of proposed automated selection
methods should be more closely examined. We release our code, data, and models
at https://github.com/hamishivi/automated-instruction-selection.
|
no_new_dataset
| 0.943764 |
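A sketch of representation-based selection in the spirit of RDS+: pool each candidate's LM hidden states with a weighted mean, score it by its best cosine similarity to target-task examples, and keep the top-k. The hidden states and weights below are random placeholders.

import numpy as np

def weighted_mean_pool(hidden, weights):
    # hidden: (seq_len, dim) LM hidden states; weights: (seq_len,).
    w = weights / weights.sum()
    return (w[:, None] * hidden).sum(axis=0)

def select_top_k(pool_emb, target_emb, k):
    p = pool_emb / np.linalg.norm(pool_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    scores = (p @ t.T).max(axis=1)  # best cosine match to any target example
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
states = rng.normal(size=(12, 256))          # one candidate's hidden states
emb = weighted_mean_pool(states, np.linspace(0.5, 1.0, 12))
pool = rng.normal(size=(10000, 256))         # pooled candidate embeddings
target = rng.normal(size=(32, 256))          # target-task example embeddings
print(select_top_k(pool, target, k=100)[:5])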
2503.01808
|
Samuel Wolf
|
Johann Hartleb, Marie Schmidt, Samuel Wolf, Alexander Wolff
|
Visualization of Event Graphs for Train Schedules
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Software that is used to compute or adjust train schedules is based on
so-called event graphs. The vertices of such a graph correspond to events; each
event is associated with a point in time, a location, and a train. A train line
corresponds to a sequence of events (ordered by time) that are associated with
the same train. The event graph has a directed edge from an earlier to a later
event if they are consecutive along a train line. Events that occur at the same
location do not occur at the same time. In this paper, we present a way to
visualize such graphs, namely time-space diagrams. A time-space diagram is a
straight-line drawing of the event graph with the additional constraint that
all vertices that belong to the same location lie on the same horizontal line
and that the x-coordinate of each vertex is given by its point in time. Hence,
it remains to determine the y-coordinates of the locations. A good drawing of a
time-space diagram supports users (or software developers) when creating
(software for computing) train schedules. To enhance readability, we aim to
minimize the number of turns in time-space diagrams. To this end, we establish
a connection between this problem and Maximum Betweenness. Then we develop
exact reduction rules to reduce the instance size. We also propose a
parameterized algorithm and devise a heuristic that we evaluate experimentally
on a real-world dataset.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:37:48 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Hartleb",
"Johann",
""
],
[
"Schmidt",
"Marie",
""
],
[
"Wolf",
"Samuel",
""
],
[
"Wolff",
"Alexander",
""
]
] |
TITLE: Visualization of Event Graphs for Train Schedules
ABSTRACT: Software that is used to compute or adjust train schedules is based on
so-called event graphs. The vertices of such a graph correspond to events; each
event is associated with a point in time, a location, and a train. A train line
corresponds to a sequence of events (ordered by time) that are associated with
the same train. The event graph has a directed edge from an earlier to a later
event if they are consecutive along a train line. Events that occur at the same
location do not occur at the same time. In this paper, we present a way to
visualize such graphs, namely time-space diagrams. A time-space diagram is a
straight-line drawing of the event graph with the additional constraint that
all vertices that belong to the same location lie on the same horizontal line
and that the x-coordinate of each vertex is given by its point in time. Hence,
it remains to determine the y-coordinates of the locations. A good drawing of a
time-space diagram supports users (or software developers) when creating
(software for computing) train schedules. To enhance readability, we aim to
minimize the number of turns in time-space diagrams. To this end, we establish
a connection between this problem and Maximum Betweenness. Then we develop
exact reduction rules to reduce the instance size. We also propose a
parameterized algorithm and devise a heuristic that we evaluate experimentally
on a real-world dataset.
|
no_new_dataset
| 0.947721 |
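Aside on the event-graph record above: a minimal sketch of the turn-counting objective. Given a candidate y-order of locations, a train line turns wherever the middle of three consecutive stops does not lie between its neighbours, which is exactly the betweenness condition that links the problem to Maximum Betweenness. Names are illustrative, and consecutive events at the same location are assumed merged.

```python
# Hedged sketch: count "turns" of train lines in a time-space diagram for a
# candidate y-order of locations. A turn occurs where the middle stop of
# three consecutive events does not lie between its neighbours in the
# y-order -- the betweenness condition behind the Maximum Betweenness link.

def count_turns(train_lines, y_order):
    y = {loc: i for i, loc in enumerate(y_order)}   # location -> y-coordinate
    turns = 0
    for line in train_lines:                        # line: locations by time
        for a, b, c in zip(line, line[1:], line[2:]):
            if not (min(y[a], y[c]) < y[b] < max(y[a], y[c])):
                turns += 1                          # b is not "between" a, c
    return turns

# One line visiting A, B, C, then back to B: exactly one turn (at C).
print(count_turns([["A", "B", "C", "B"]], ["A", "B", "C"]))  # -> 1
```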
2503.01814
|
Weizhi Zhang
|
Weizhi Zhang, Liangwei Yang, Wooseong Yang, Henry Peng Zou, Yuqing
Liu, Ke Xu, Sourav Medya, Philip S. Yu
|
LLMInit: A Free Lunch from Large Language Models for Selective
Initialization of Recommendation
| null | null | null | null |
cs.IR cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Collaborative filtering (CF) models, particularly graph-based approaches, have
demonstrated strong performance in capturing user-item interactions for
recommendation systems. However, they continue to struggle in cold-start and
data-sparse scenarios. The emergence of large language models (LLMs) like GPT
and LLaMA presents new possibilities for enhancing recommendation performance,
especially in cold-start settings. Despite their promise, LLMs pose challenges
related to scalability and efficiency due to their high computational demands
and limited ability to model complex user-item relationships effectively. In
this work, we introduce a novel perspective on leveraging LLMs for CF model
initialization. Through experiments, we uncover an embedding collapse issue
when scaling CF models to larger embedding dimensions. To effectively harness
large-scale LLM embeddings, we propose innovative selective initialization
strategies utilizing random, uniform, and variance-based index sampling. Our
comprehensive evaluation on multiple real-world datasets demonstrates
significant performance gains across various CF models while maintaining a
lower computational cost compared to existing LLM-based recommendation
approaches.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:41:59 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Zhang",
"Weizhi",
""
],
[
"Yang",
"Liangwei",
""
],
[
"Yang",
"Wooseong",
""
],
[
"Zou",
"Henry Peng",
""
],
[
"Liu",
"Yuqing",
""
],
[
"Xu",
"Ke",
""
],
[
"Medya",
"Sourav",
""
],
[
"Yu",
"Philip S.",
""
]
] |
TITLE: LLMInit: A Free Lunch from Large Language Models for Selective
Initialization of Recommendation
ABSTRACT: Collaborative filtering (CF) models, particularly graph-based approaches, have
demonstrated strong performance in capturing user-item interactions for
recommendation systems. However, they continue to struggle in cold-start and
data-sparse scenarios. The emergence of large language models (LLMs) like GPT
and LLaMA presents new possibilities for enhancing recommendation performance,
especially in cold-start settings. Despite their promise, LLMs pose challenges
related to scalability and efficiency due to their high computational demands
and limited ability to model complex user-item relationships effectively. In
this work, we introduce a novel perspective on leveraging LLMs for CF model
initialization. Through experiments, we uncover an embedding collapse issue
when scaling CF models to larger embedding dimensions. To effectively harness
large-scale LLM embeddings, we propose innovative selective initialization
strategies utilizing random, uniform, and variance-based index sampling. Our
comprehensive evaluation on multiple real-world datasets demonstrates
significant performance gains across various CF models while maintaining a
lower computational cost compared to existing LLM-based recommendation
approaches.
|
no_new_dataset
| 0.945551 |
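Aside on the LLMInit record above: a minimal sketch of the selective-initialization step, slicing d of the LLM embedding's D dimensions via random, uniform-stride, or variance-based index sampling to initialize a CF embedding table. Sizes and the variance criterion are assumptions, not the paper's exact procedure.

```python
# Hedged sketch of selective initialization: pick d of the D LLM-embedding
# dimensions by random, uniform-stride, or variance-based index sampling and
# use the sliced matrix to initialize a CF embedding table.
import numpy as np

def select_dims(llm_emb, d, strategy="variance", seed=0):
    D = llm_emb.shape[1]
    if strategy == "random":
        idx = np.random.default_rng(seed).choice(D, size=d, replace=False)
    elif strategy == "uniform":
        idx = np.linspace(0, D - 1, num=d, dtype=int)   # evenly spaced dims
    elif strategy == "variance":
        idx = np.argsort(llm_emb.var(axis=0))[-d:]      # most-varying dims
    else:
        raise ValueError(strategy)
    return llm_emb[:, np.sort(idx)]

llm_emb = np.random.randn(10_000, 4096).astype(np.float32)  # items x LLM dim
cf_init = select_dims(llm_emb, d=64)    # 64-dim CF embedding initialization
```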
2503.01819
|
Mansi Gupta
|
Adesh Gupta, Abhinav Kumar, Mansi Gupta, Paras Chopra
|
Do GFlowNets Transfer? Case Study on the Game of 24/42
| null | null | null | null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Generating diverse solutions is key to human-like reasoning, yet
autoregressive language models focus on single accurate responses, limiting
creativity. GFlowNets optimize solution generation as a flow network, promising
greater diversity. In our case study, we demonstrate their limited zero-shot
transferability by fine-tuning small and medium-sized large language models on
the Game of 24 and testing them on the Game of 42 datasets. Results revealed
that GFlowNets
struggle to maintain solution diversity and accuracy, highlighting key
limitations in their cross-task generalization and the need for future research
in improved transfer learning capabilities.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:43:25 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Gupta",
"Adesh",
""
],
[
"Kumar",
"Abhinav",
""
],
[
"Gupta",
"Mansi",
""
],
[
"Chopra",
"Paras",
""
]
] |
TITLE: Do GFlowNets Transfer? Case Study on the Game of 24/42
ABSTRACT: Generating diverse solutions is key to human-like reasoning, yet
autoregressive language models focus on single accurate responses, limiting
creativity. GFlowNets optimize solution generation as a flow network, promising
greater diversity. In our case study, we demonstrate their limited zero-shot
transferability by fine-tuning small and medium-sized large language models on
the Game of 24 and testing them on the Game of 42 datasets. Results revealed
that GFlowNets
struggle to maintain solution diversity and accuracy, highlighting key
limitations in their cross-task generalization and the need for future research
in improved transfer learning capabilities.
|
no_new_dataset
| 0.94868 |
2503.01820
|
Yi-Lin Sung
|
Yi-Lin Sung, Prateek Yadav, Jialu Li, Jaehong Yoon, Mohit Bansal
|
RSQ: Learning from Important Tokens Leads to Better Quantized LLMs
|
Our code is available at https://github.com/ylsung/rsq
| null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Layer-wise quantization is a key technique for efficiently compressing large
models without expensive retraining. Previous methods typically quantize the
weights of each layer by "uniformly" optimizing the layer reconstruction loss
across all output tokens. However, in this paper, we demonstrate that
better-quantized models can be obtained by prioritizing learning from important
tokens (e.g., those with large attention scores). Building on this finding, we
propose RSQ (Rotate, Scale, then Quantize), which (1) applies rotations
(orthogonal transformation) to the model to mitigate outliers (those with
exceptionally large magnitude), (2) scales the token feature based on its
importance, and (3) quantizes the model using the GPTQ framework with the
second-order statistics computed by scaled tokens. To compute token importance,
we explore both heuristic and dynamic strategies. Based on a thorough analysis
of all approaches, we adopt attention concentration, which uses attention
scores of each token as its importance, as the best approach. We demonstrate
that RSQ consistently outperforms baseline methods across multiple downstream
tasks and three model families: LLaMA3, Mistral, and Qwen2.5. Additionally,
models quantized with RSQ achieve superior performance on long-context tasks,
further highlighting its effectiveness. Lastly, RSQ demonstrates
generalizability across various setups, including different model sizes,
calibration datasets, bit precisions, and quantization methods.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:46:33 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Sung",
"Yi-Lin",
""
],
[
"Yadav",
"Prateek",
""
],
[
"Li",
"Jialu",
""
],
[
"Yoon",
"Jaehong",
""
],
[
"Bansal",
"Mohit",
""
]
] |
TITLE: RSQ: Learning from Important Tokens Leads to Better Quantized LLMs
ABSTRACT: Layer-wise quantization is a key technique for efficiently compressing large
models without expensive retraining. Previous methods typically quantize the
weights of each layer by "uniformly" optimizing the layer reconstruction loss
across all output tokens. However, in this paper, we demonstrate that
better-quantized models can be obtained by prioritizing learning from important
tokens (e.g., those with large attention scores). Building on this finding, we
propose RSQ (Rotate, Scale, then Quantize), which (1) applies rotations
(orthogonal transformation) to the model to mitigate outliers (those with
exceptionally large magnitude), (2) scales the token feature based on its
importance, and (3) quantizes the model using the GPTQ framework with the
second-order statistics computed by scaled tokens. To compute token importance,
we explore both heuristic and dynamic strategies. Based on a thorough analysis
of all approaches, we adopt attention concentration, which uses attention
scores of each token as its importance, as the best approach. We demonstrate
that RSQ consistently outperforms baseline methods across multiple downstream
tasks and three model families: LLaMA3, Mistral, and Qwen2.5. Additionally,
models quantized with RSQ achieve superior performance on long-context tasks,
further highlighting its effectiveness. Lastly, RSQ demonstrates
generalizability across various setups, including different model sizes,
calibration datasets, bit precisions, and quantization methods.
|
no_new_dataset
| 0.947914 |
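Aside on the RSQ record above: a minimal sketch of the scaling step, weighting each token by the attention mass it receives before accumulating the second-order statistics a GPTQ-style solver consumes. The importance definition and sqrt-weighting are assumptions.

```python
# Hedged sketch of RSQ's scaling step: weight each token by the attention
# mass it receives (one possible "attention concentration"), then build the
# weighted second-order statistics H = Xw^T Xw for a GPTQ-style solver.
import torch

def scaled_hessian(X, attn):
    # X: (T, D) layer inputs; attn: (H, T, T) attention probabilities
    w = attn.mean(0).sum(0)             # attention mass received per token
    w = w / w.sum()
    Xw = X * w.sqrt().unsqueeze(-1)     # scale features by token importance
    return Xw.T @ Xw                    # (D, D) weighted second-order stats
```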
2503.01822
|
Sai Sumedh R. Hindupur
|
Sai Sumedh R. Hindupur, Ekdeep Singh Lubana, Thomas Fel, Demba Ba
|
Projecting Assumptions: The Duality Between Sparse Autoencoders and
Concept Geometry
|
Preprint
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sparse Autoencoders (SAEs) are widely used to interpret neural networks by
identifying meaningful concepts from their representations. However, do SAEs
truly uncover all concepts a model relies on, or are they inherently biased
toward certain kinds of concepts? We introduce a unified framework that recasts
SAEs as solutions to a bilevel optimization problem, revealing a fundamental
challenge: each SAE imposes structural assumptions about how concepts are
encoded in model representations, which in turn shapes what it can and cannot
detect. This means different SAEs are not interchangeable -- switching
architectures can expose entirely new concepts or obscure existing ones. To
systematically probe this effect, we evaluate SAEs across a spectrum of
settings: from controlled toy models that isolate key variables, to
semi-synthetic experiments on real model activations and finally to
large-scale, naturalistic datasets. Across this progression, we examine two
fundamental properties that real-world concepts often exhibit: heterogeneity in
intrinsic dimensionality (some concepts are inherently low-dimensional, others
are not) and nonlinear separability. We show that SAEs fail to recover concepts
when these properties are ignored, and we design a new SAE that explicitly
incorporates both, enabling the discovery of previously hidden concepts and
reinforcing our theoretical insights. Our findings challenge the idea of a
universal SAE and underscore the need for architecture-specific choices in
model interpretability. Overall, we argue that an SAE does not just reveal concepts
-- it determines what can be seen at all.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:47:40 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Hindupur",
"Sai Sumedh R.",
""
],
[
"Lubana",
"Ekdeep Singh",
""
],
[
"Fel",
"Thomas",
""
],
[
"Ba",
"Demba",
""
]
] |
TITLE: Projecting Assumptions: The Duality Between Sparse Autoencoders and
Concept Geometry
ABSTRACT: Sparse Autoencoders (SAEs) are widely used to interpret neural networks by
identifying meaningful concepts from their representations. However, do SAEs
truly uncover all concepts a model relies on, or are they inherently biased
toward certain kinds of concepts? We introduce a unified framework that recasts
SAEs as solutions to a bilevel optimization problem, revealing a fundamental
challenge: each SAE imposes structural assumptions about how concepts are
encoded in model representations, which in turn shapes what it can and cannot
detect. This means different SAEs are not interchangeable -- switching
architectures can expose entirely new concepts or obscure existing ones. To
systematically probe this effect, we evaluate SAEs across a spectrum of
settings: from controlled toy models that isolate key variables, to
semi-synthetic experiments on real model activations and finally to
large-scale, naturalistic datasets. Across this progression, we examine two
fundamental properties that real-world concepts often exhibit: heterogeneity in
intrinsic dimensionality (some concepts are inherently low-dimensional, others
are not) and nonlinear separability. We show that SAEs fail to recover concepts
when these properties are ignored, and we design a new SAE that explicitly
incorporates both, enabling the discovery of previously hidden concepts and
reinforcing our theoretical insights. Our findings challenge the idea of a
universal SAE and underscore the need for architecture-specific choices in
model interpretability. Overall, we argue that an SAE does not just reveal concepts
-- it determines what can be seen at all.
|
no_new_dataset
| 0.941385 |
2503.01823
|
Vasilis Mageirakos
|
Vasilis Mageirakos, Bowen Wu, Gustavo Alonso
|
Cracking Vector Search Indexes
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Retrieval Augmented Generation (RAG) uses vector databases to expand the
expertise of an LLM without having to retrain it. This idea can be
applied over data lakes, leading to the notion of embeddings data lakes, i.e.,
a pool of vector databases ready to be used by RAGs. The key components in these
systems are the indexes enabling Approximate Nearest Neighbor Search (ANNS).
However, in data lakes, one cannot realistically expect to build indexes for
every possible dataset. In this paper, we propose an adaptive, partition-based
index, CrackIVF, that performs much better than up-front index building.
CrackIVF starts answering queries by near brute force search and only expands
as it sees enough queries. It does so by progressively adapting the index to
the query workload. That way, queries can be answered right away without having
to build a full index first. After seeing enough queries, CrackIVF will produce
an index comparable to the best of those built using conventional techniques.
As the experimental evaluation shows, CrackIVF can often answer more than 1
million queries before other approaches have even built the index and can start
answering queries immediately, achieving 10-1000x faster initialization times.
This makes it ideal when working with cold data or infrequently used data or as
a way to bootstrap access to unseen datasets.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:49:57 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Mageirakos",
"Vasilis",
""
],
[
"Wu",
"Bowen",
""
],
[
"Alonso",
"Gustavo",
""
]
] |
TITLE: Cracking Vector Search Indexes
ABSTRACT: Retrieval Augmented Generation (RAG) uses vector databases to expand the
expertise of an LLM without having to retrain it. This idea can be
applied over data lakes, leading to the notion of embeddings data lakes, i.e.,
a pool of vector databases ready to be used by RAGs. The key components in these
systems are the indexes enabling Approximate Nearest Neighbor Search (ANNS).
However, in data lakes, one cannot realistically expect to build indexes for
every possible dataset. In this paper, we propose an adaptive, partition-based
index, CrackIVF, that performs much better than up-front index building.
CrackIVF starts answering queries by near brute force search and only expands
as it sees enough queries. It does so by progressively adapting the index to
the query workload. That way, queries can be answered right away without having
to build a full index first. After seeing enough queries, CrackIVF will produce
an index comparable to the best of those built using conventional techniques.
As the experimental evaluation shows, CrackIVF can often answer more than 1
million queries before other approaches have even built the index and can start
answering queries immediately, achieving 10-1000x faster initialization times.
This makes it ideal when working with cold data or infrequently used data or as
a way to bootstrap access to unseen datasets.
|
no_new_dataset
| 0.933854 |
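Aside on the CrackIVF record above: a toy version of an adaptive, partition-based index that answers early queries by near brute force and only builds IVF partitions once enough queries have arrived. The query threshold, k-means partitioning, and probe count are assumptions, not CrackIVF's actual policy.

```python
# Hedged toy of an adaptive, partition-based index: brute force first,
# k-means partitions once self.seen crosses build_after, then probe the
# nearest lists only.
import numpy as np
from sklearn.cluster import KMeans

class AdaptiveIVF:
    def __init__(self, data, build_after=1000, n_lists=64, n_probe=4):
        self.data, self.seen, self.km = data, 0, None
        self.build_after, self.n_lists, self.n_probe = build_after, n_lists, n_probe

    def search(self, q, k=10):
        self.seen += 1
        if self.km is None and self.seen >= self.build_after:
            self.km = KMeans(n_clusters=self.n_lists, n_init=4).fit(self.data)
        if self.km is None:                  # cold start: near brute force
            cand, ids = self.data, np.arange(len(self.data))
        else:                                # probe the nearest partitions
            d2c = np.linalg.norm(self.km.cluster_centers_ - q, axis=1)
            lists = np.argsort(d2c)[: self.n_probe]
            mask = np.isin(self.km.labels_, lists)
            cand, ids = self.data[mask], np.nonzero(mask)[0]
        return ids[np.argsort(np.linalg.norm(cand - q, axis=1))[:k]]
```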
2503.01827
|
Anders Sildnes
|
Anders Sildnes, Nikita Shvetsov, Masoud Tafavvoghi, Vi Ngoc-Nha Tran,
Kajsa M{\o}llersen, Lill-Tove Rasmussen Busund, Thomas K. Kilv{\ae}r, Lars
Ailo Bongo
|
Open-source framework for detecting bias and overfitting for large
pathology images
| null | null | null | null |
cs.LG cs.SE eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Even foundational models that are trained on datasets with billions of data
samples may develop shortcuts that lead to overfitting and bias. Shortcuts are
non-relevant patterns in data, such as the background color or color intensity.
So, to ensure the robustness of deep learning applications, there is a need for
methods to detect and remove such shortcuts. Today's model debugging methods
are time-consuming since they often require customization to fit a given
model architecture in a specific domain. We propose a generalized,
model-agnostic framework to debug deep learning models. We focus on the domain
of histopathology, which has very large images that require large models - and
therefore large computation resources. It can be run on a workstation with a
commodity GPU. We demonstrate that our framework can replicate non-image
shortcuts that have been found in previous work for self-supervised learning
models, and we also identify possible shortcuts in a foundation model. Our
easy-to-use tests contribute to the development of more reliable, accurate, and
generalizable models for WSI analysis. Our framework is available as an
open-source tool on GitHub.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:52:53 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Sildnes",
"Anders",
""
],
[
"Shvetsov",
"Nikita",
""
],
[
"Tafavvoghi",
"Masoud",
""
],
[
"Tran",
"Vi Ngoc-Nha",
""
],
[
"Møllersen",
"Kajsa",
""
],
[
"Busund",
"Lill-Tove Rasmussen",
""
],
[
"Kilvær",
"Thomas K.",
""
],
[
"Bongo",
"Lars Ailo",
""
]
] |
TITLE: Open-source framework for detecting bias and overfitting for large
pathology images
ABSTRACT: Even foundational models that are trained on datasets with billions of data
samples may develop shortcuts that lead to overfitting and bias. Shortcuts are
non-relevant patterns in data, such as the background color or color intensity.
So, to ensure the robustness of deep learning applications, there is a need for
methods to detect and remove such shortcuts. Today's model debugging methods
are time consuming since they often require customization to fit for a given
model architecture in a specific domain. We propose a generalized,
model-agnostic framework to debug deep learning models. We focus on the domain
of histopathology, which has very large images that require large models - and
therefore large computation resources. It can be run on a workstation with a
commodity GPU. We demonstrate that our framework can replicate non-image
shortcuts that have been found in previous work for self-supervised learning
models, and we also identify possible shortcuts in a foundation model. Our
easy-to-use tests contribute to the development of more reliable, accurate, and
generalizable models for WSI analysis. Our framework is available as an
open-source tool on GitHub.
|
no_new_dataset
| 0.9434 |
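Aside on the debugging-framework record above: one generic, model-agnostic shortcut test in its spirit, checking whether a simple probe can predict a non-relevant attribute from frozen features. The probe, attribute, and shapes here are assumptions, not the framework's exact tests.

```python
# Hedged sketch of one shortcut test: train a linear probe to predict a
# nuisance attribute from frozen features; accuracy far above chance
# suggests the model encodes that shortcut.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def shortcut_score(features, nuisance_labels):
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, features, nuisance_labels, cv=5).mean()

feats = np.random.randn(500, 256)             # stand-in for WSI tile features
nuisance = np.random.randint(0, 2, size=500)  # e.g., background-color bucket
print(shortcut_score(feats, nuisance))        # ~0.5 here: no shortcut signal
```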
2503.01835
|
Tassilo Wald
|
Tassilo Wald, Saikat Roy, Fabian Isensee, Constantin Ulrich, Sebastian
Ziegler, Dasha Trofimova, Raphael Stock, Michael Baumgartner, Gregor
K\"ohler, Klaus Maier-Hein
|
Primus: Enforcing Attention Usage for 3D Medical Image Segmentation
|
Preprint
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have achieved remarkable success across multiple fields, yet
their impact on 3D medical image segmentation remains limited with
convolutional networks still dominating major benchmarks. In this work, we a)
analyze current Transformer-based segmentation models and identify critical
shortcomings, particularly their over-reliance on convolutional blocks.
Further, we demonstrate that in some architectures, performance is unaffected
by the absence of the Transformer, thereby revealing their limited
effectiveness. To address these challenges, we move away from hybrid
architectures and b) introduce a fully Transformer-based segmentation
architecture, termed Primus. Primus leverages high-resolution tokens, combined
with advances in positional embeddings and block design, to maximally leverage
its Transformer blocks. Through these adaptations, Primus surpasses current
Transformer-based methods and competes with state-of-the-art convolutional
models on multiple public datasets. By doing so, we create the first pure
Transformer architecture and take a significant step towards making
Transformers state-of-the-art for 3D medical image segmentation.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:56:29 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Wald",
"Tassilo",
""
],
[
"Roy",
"Saikat",
""
],
[
"Isensee",
"Fabian",
""
],
[
"Ulrich",
"Constantin",
""
],
[
"Ziegler",
"Sebastian",
""
],
[
"Trofimova",
"Dasha",
""
],
[
"Stock",
"Raphael",
""
],
[
"Baumgartner",
"Michael",
""
],
[
"Köhler",
"Gregor",
""
],
[
"Maier-Hein",
"Klaus",
""
]
] |
TITLE: Primus: Enforcing Attention Usage for 3D Medical Image Segmentation
ABSTRACT: Transformers have achieved remarkable success across multiple fields, yet
their impact on 3D medical image segmentation remains limited with
convolutional networks still dominating major benchmarks. In this work, we a)
analyze current Transformer-based segmentation models and identify critical
shortcomings, particularly their over-reliance on convolutional blocks.
Further, we demonstrate that in some architectures, performance is unaffected
by the absence of the Transformer, thereby revealing their limited
effectiveness. To address these challenges, we move away from hybrid
architectures and b) introduce a fully Transformer-based segmentation
architecture, termed Primus. Primus leverages high-resolution tokens, combined
with advances in positional embeddings and block design, to maximally leverage
its Transformer blocks. Through these adaptations, Primus surpasses current
Transformer-based methods and competes with state-of-the-art convolutional
models on multiple public datasets. By doing so, we create the first pure
Transformer architecture and take a significant step towards making
Transformers state-of-the-art for 3D medical image segmentation.
|
no_new_dataset
| 0.948251 |
2503.01838
|
Dimitar I. Dimitrov
|
Maria Drencheva, Ivo Petrov, Maximilian Baader, Dimitar I. Dimitrov,
Martin Vechev
|
GRAIN: Exact Graph Reconstruction from Gradients
|
Published at The Thirteenth International Conference on Learning
Representations (ICLR) 2025
| null | null | null |
cs.LG cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Federated learning claims to enable collaborative model training among
multiple clients with data privacy by transmitting gradient updates instead of
the actual client data. However, recent studies have shown that client privacy
is still at risk due to so-called gradient inversion attacks, which can
precisely reconstruct clients' text and image data from the shared gradient
updates. While these attacks demonstrate severe privacy risks for certain
domains and architectures, the vulnerability of other commonly used data types,
such as graph-structured data, remains under-explored. To bridge this gap, we
present GRAIN, the first exact gradient inversion attack on graph data in the
honest-but-curious setting that recovers both the structure of the graph and
the associated node features. Concretely, we focus on Graph Convolutional
Networks (GCN) and Graph Attention Networks (GAT) -- two of the most widely
used frameworks for learning on graphs. Our method first utilizes the low-rank
structure of GNN gradients to efficiently reconstruct and filter the client
subgraphs which are then joined to complete the input graph. We evaluate our
approach on molecular, citation, and social network datasets using our novel
metric. We show that GRAIN reconstructs up to 80% of all graphs exactly,
significantly outperforming the baseline, which achieves up to 20% correctly
positioned nodes.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:58:12 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Drencheva",
"Maria",
""
],
[
"Petrov",
"Ivo",
""
],
[
"Baader",
"Maximilian",
""
],
[
"Dimitrov",
"Dimitar I.",
""
],
[
"Vechev",
"Martin",
""
]
] |
TITLE: GRAIN: Exact Graph Reconstruction from Gradients
ABSTRACT: Federated learning claims to enable collaborative model training among
multiple clients with data privacy by transmitting gradient updates instead of
the actual client data. However, recent studies have shown that client privacy
is still at risk due to so-called gradient inversion attacks, which can
precisely reconstruct clients' text and image data from the shared gradient
updates. While these attacks demonstrate severe privacy risks for certain
domains and architectures, the vulnerability of other commonly used data types,
such as graph-structured data, remains under-explored. To bridge this gap, we
present GRAIN, the first exact gradient inversion attack on graph data in the
honest-but-curious setting that recovers both the structure of the graph and
the associated node features. Concretely, we focus on Graph Convolutional
Networks (GCN) and Graph Attention Networks (GAT) -- two of the most widely
used frameworks for learning on graphs. Our method first utilizes the low-rank
structure of GNN gradients to efficiently reconstruct and filter the client
subgraphs which are then joined to complete the input graph. We evaluate our
approach on molecular, citation, and social network datasets using our novel
metric. We show that GRAIN reconstructs up to 80% of all graphs exactly,
significantly outperforming the baseline, which achieves up to 20% correctly
positioned nodes.
|
no_new_dataset
| 0.943452 |
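Aside on the GRAIN record above: the low-rank gradient structure it exploits can be checked in a few lines. For a GCN layer Z = A_hat @ X @ W, the weight gradient (A_hat @ X)^T @ dL/dZ has rank at most the number of client nodes, regardless of feature widths. The tiny random graph below is purely illustrative.

```python
# Hedged sketch of the low-rank structure GRAIN exploits in GCN gradients.
import torch

n, d_in, d_out = 5, 16, 8                    # a 5-node client subgraph
A_hat = torch.rand(n, n)                     # normalized adjacency stand-in
X = torch.randn(n, d_in)
W = torch.randn(d_in, d_out, requires_grad=True)
(A_hat @ X @ W).relu().sum().backward()      # any scalar loss works here
print(torch.linalg.matrix_rank(W.grad))      # <= n = 5, although W is 16x8
```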
2503.01839
|
Zhengyuan Jiang
|
Zhengyuan Jiang, Yuepeng Hu, Yuchen Yang, Yinzhi Cao, Neil Zhenqiang
Gong
|
Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
| null | null | null | null |
cs.CR cs.AI cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-Image models may generate harmful content, such as pornographic
images, particularly when unsafe prompts are submitted. To address this issue,
safety filters are often added on top of text-to-image models, or the models
themselves are aligned to reduce harmful outputs. However, these defenses
remain vulnerable when an attacker strategically designs adversarial prompts to
bypass these safety guardrails. In this work, we propose PromptTune, a method
to jailbreak text-to-image models with safety guardrails using a fine-tuned
large language model. Unlike other query-based jailbreak attacks that require
repeated queries to the target model, our attack generates adversarial prompts
efficiently after fine-tuning our AttackLLM. We evaluate our method on three
datasets of unsafe prompts and against five safety guardrails. Our results
demonstrate that our approach effectively bypasses safety guardrails,
outperforms existing no-box attacks, and also facilitates other query-based
attacks.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2025 18:58:46 GMT"
}
] | 2025-03-04T00:00:00 |
[
[
"Jiang",
"Zhengyuan",
""
],
[
"Hu",
"Yuepeng",
""
],
[
"Yang",
"Yuchen",
""
],
[
"Cao",
"Yinzhi",
""
],
[
"Gong",
"Neil Zhenqiang",
""
]
] |
TITLE: Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
ABSTRACT: Text-to-Image models may generate harmful content, such as pornographic
images, particularly when unsafe prompts are submitted. To address this issue,
safety filters are often added on top of text-to-image models, or the models
themselves are aligned to reduce harmful outputs. However, these defenses
remain vulnerable when an attacker strategically designs adversarial prompts to
bypass these safety guardrails. In this work, we propose PromptTune, a method
to jailbreak text-to-image models with safety guardrails using a fine-tuned
large language model. Unlike other query-based jailbreak attacks that require
repeated queries to the target model, our attack generates adversarial prompts
efficiently after fine-tuning our AttackLLM. We evaluate our method on three
datasets of unsafe prompts and against five safety guardrails. Our results
demonstrate that our approach effectively bypasses safety guardrails,
outperforms existing no-box attacks, and also facilitates other query-based
attacks.
|
no_new_dataset
| 0.947381 |
2302.06504
|
Hengyuan Ma
|
Hengyuan Ma, Xiatian Zhu, Jianfeng Feng, Li Zhang
|
Preconditioned Score-based Generative Models
|
IJCV 2025
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Score-based generative models (SGMs) have recently emerged as a promising
class of generative models. However, a fundamental limitation is that their
sampling process is slow due to a need for many (e.g., 2000) iterations of
sequential computations. An intuitive acceleration method is to reduce the
sampling iterations, which, however, causes severe performance degradation. We
attribute this problem to the ill-conditioned issues of the Langevin dynamics and
reverse diffusion in the sampling process. Under this insight, we propose a
novel preconditioned diffusion sampling (PDS) method that leverages matrix
preconditioning to alleviate the aforementioned problem. PDS alters the
sampling process of a vanilla SGM at marginal extra computation cost and
without model retraining. Theoretically, we prove that PDS preserves the output
distribution of the SGM, with no risk of inducing systematic bias to the
original sampling process. We further theoretically reveal a relation between
the parameter of PDS and the sampling iterations, easing the parameter
estimation under varying sampling iterations. Extensive experiments on various
image datasets with a variety of resolutions and diversity validate that our
PDS consistently accelerates off-the-shelf SGMs whilst maintaining the
synthesis quality. In particular, PDS can accelerate by up to 28x on more
challenging high-resolution (1024x1024) image generation. Compared with the
latest generative models (e.g., CLD-SGM and Analytic-DDIM), PDS can achieve the
best sampling quality on CIFAR-10 at an FID score of 1.99. Our code is publicly
available to foster any further research https://github.com/fudan-zvg/PDS.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 16:30:53 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Dec 2023 15:46:20 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Feb 2025 15:14:40 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Feb 2025 07:35:11 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Ma",
"Hengyuan",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Feng",
"Jianfeng",
""
],
[
"Zhang",
"Li",
""
]
] |
TITLE: Preconditioned Score-based Generative Models
ABSTRACT: Score-based generative models (SGMs) have recently emerged as a promising
class of generative models. However, a fundamental limitation is that their
sampling process is slow due to a need for many (e.g., 2000) iterations of
sequential computations. An intuitive acceleration method is to reduce the
sampling iterations, which, however, causes severe performance degradation. We
attribute this problem to the ill-conditioned issues of the Langevin dynamics and
reverse diffusion in the sampling process. Under this insight, we propose a
novel preconditioned diffusion sampling (PDS) method that leverages matrix
preconditioning to alleviate the aforementioned problem. PDS alters the
sampling process of a vanilla SGM at marginal extra computation cost and
without model retraining. Theoretically, we prove that PDS preserves the output
distribution of the SGM, with no risk of inducing systematic bias to the
original sampling process. We further theoretically reveal a relation between
the parameter of PDS and the sampling iterations, easing the parameter
estimation under varying sampling iterations. Extensive experiments on various
image datasets with a variety of resolutions and diversity validate that our
PDS consistently accelerates off-the-shelf SGMs whilst maintaining the
synthesis quality. In particular, PDS can accelerate by up to 28x on more
challenging high-resolution (1024x1024) image generation. Compared with the
latest generative models (e.g., CLD-SGM and Analytic-DDIM), PDS can achieve the
best sampling quality on CIFAR-10 at an FID score of 1.99. Our code is publicly
available to foster any further research https://github.com/fudan-zvg/PDS.
|
no_new_dataset
| 0.944485 |
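Aside on the PDS record above: the preconditioning idea can be written as a one-line Langevin update. For a symmetric positive-definite M, x <- x + (eps/2)*M*score(x) + sqrt(eps)*M^{1/2}*z leaves the target distribution invariant as eps -> 0. The diagonal M below is an illustration-only assumption; PDS constructs its preconditioner differently.

```python
# Hedged sketch of a preconditioned Langevin step with a diagonal M.
import torch

def preconditioned_langevin_step(x, score_fn, M_diag, eps):
    z = torch.randn_like(x)                  # fresh Gaussian noise
    return (x + 0.5 * eps * M_diag * score_fn(x)
              + (eps ** 0.5) * M_diag.sqrt() * z)
```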
2303.17703
|
Finlay Hudson
|
Finlay G. C. Hudson and William A. P. Smith
|
If At First You Don't Succeed: Test Time Re-ranking for Zero-shot,
Cross-domain Retrieval
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a novel method for zero-shot, cross-domain image
retrieval. Our key contribution is a test-time Iterative Cluster-free
Re-ranking process that leverages gallery-gallery feature information to
establish semantic links between query and gallery images. This enables the
retrieval of relevant images even when they do not exhibit similar visual
features but share underlying semantic concepts. This can be combined with any
pre-existing cross-domain feature extraction backbone to improve retrieval
performance. However, when combined with a carefully chosen Vision Transformer
backbone and a combination of zero-shot retrieval losses, our approach yields
state-of-the-art results on the Sketchy, TU-Berlin and QuickDraw sketch-based
retrieval benchmarks. We show that our re-ranking also improves performance
with other backbones and outperforms other re-ranking methods applied with our
backbone. Importantly, unlike many previous methods, none of the components in
our approach are engineered specifically towards the sketch-based image
retrieval task - it can be generally applied to any cross-domain, zero-shot
retrieval task. We therefore also present new results on zero-shot
cartoon-to-photo and art-to-product retrieval using the Office-Home dataset.
Project page: finlay-hudson.github.io/icfrr, code available at:
github.com/finlay-hudson/ICFRR
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 20:52:08 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 09:02:21 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Hudson",
"Finlay G. C.",
""
],
[
"Smith",
"William A. P.",
""
]
] |
TITLE: If At First You Don't Succeed: Test Time Re-ranking for Zero-shot,
Cross-domain Retrieval
ABSTRACT: In this paper, we introduce a novel method for zero-shot, cross-domain image
retrieval. Our key contribution is a test-time Iterative Cluster-free
Re-ranking process that leverages gallery-gallery feature information to
establish semantic links between query and gallery images. This enables the
retrieval of relevant images even when they do not exhibit similar visual
features but share underlying semantic concepts. This can be combined with any
pre-existing cross-domain feature extraction backbone to improve retrieval
performance. However, when combined with a carefully chosen Vision Transformer
backbone and a combination of zero-shot retrieval losses, our approach yields
state-of-the-art results on the Sketchy, TU-Berlin and QuickDraw sketch-based
retrieval benchmarks. We show that our re-ranking also improves performance
with other backbones and outperforms other re-ranking methods applied with our
backbone. Importantly, unlike many previous methods, none of the components in
our approach are engineered specifically towards the sketch-based image
retrieval task - it can be generally applied to any cross-domain, zero-shot
retrieval task. We therefore also present new results on zero-shot
cartoon-to-photo and art-to-product retrieval using the Office-Home dataset.
Project page: finlay-hudson.github.io/icfrr, code available at:
github.com/finlay-hudson/ICFRR
|
no_new_dataset
| 0.947575 |
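Aside on the re-ranking record above: a minimal sketch of test-time re-ranking with gallery-gallery information, diffusing query-gallery similarity through each gallery item's nearest gallery neighbours. The update rule, alpha, knn, and iteration count are assumptions, not the exact ICFRR procedure.

```python
# Hedged sketch of gallery-gallery re-ranking (self-matches kept for
# simplicity); iterating diffuses similarity beyond direct visual matches.
import torch
import torch.nn.functional as F

def rerank(q_feat, g_feats, iters=2, knn=10, alpha=0.5):
    q = F.normalize(q_feat, dim=-1)          # (D,)
    g = F.normalize(g_feats, dim=-1)         # (N, D)
    s = g @ q                                # (N,) direct similarities
    topk = (g @ g.T).topk(knn, dim=-1)       # gallery-gallery neighbours
    for _ in range(iters):
        neigh = s[topk.indices] * torch.softmax(topk.values, dim=-1)
        s = (1 - alpha) * s + alpha * neigh.sum(-1)
    return s.argsort(descending=True)        # re-ranked gallery indices
```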
2305.07152
|
Aneeq Zia
|
Aneeq Zia, Max Berniker, Rogerio Garcia Nespolo, Conor Perreault,
Kiran Bhattacharyya, Xi Liu, Ziheng Wang, Satoshi Kondo, Satoshi Kasai,
Kousuke Hirasawa, Bo Liu, David Austin, Yiheng Wang, Michal Futrega,
Jean-Francois Puget, Zhenqiang Li, Yoichi Sato, Ryo Fujii, Ryo Hachiuma, Mana
Masuda, Hideo Saito, An Wang, Mengya Xu, Mobarakol Islam, Long Bai, Winnie
Pang, Hongliang Ren, Chinedu Nwoye, Luca Sestini, Nicolas Padoy, Maximilian
Nielsen, Samuel Sch\"uttler, Thilo Sentker, H\"umeyra Husseini, Ivo
Baltruschat, R\"udiger Schmitz, Ren\'e Werner, Aleksandr Matsun, Mugariya
Farooq, Numan Saaed, Jose Renato Restom Viera, Mohammad Yaqub, Neil Getty,
Fangfang Xia, Zixuan Zhao, Xiaotian Duan, Xing Yao, Ange Lou, Hao Yang,
Jintong Han, Jack Noble, Jie Ying Wu, Tamer Abdulbaki Alshirbaji, Nour Aldeen
Jalal, Herag Arabian, Ning Ding, Knut Moeller, Weiliang Chen, Quan He,
Muhammad Bilal, Taofeek Akinosho, Adnan Qayyum, Massimo Caputo, Hunaid Vohra,
Michael Loizou, Anuoluwapo Ajayi, Ilhem Berrou, Faatihah Niyi-Odumosu,
Charlie Budd, Oluwatosin Alabi, Tom Vercauteren, Ruoxi Zhao, Ayberk Acar,
John Han, Jumanh Atoum, Yinhong Qin, Jie Ying Wu, Surong Hua, Lu Ping,
Wenming Wu, Rongfeng Wei, Jinlin Wu, You Pang, Zhen Chen, Tim Jaspers, Amine
Yamlahi, Piotr Kalinowski, Dominik Michael, Tim R\"adsch, Marco H\"ubner,
Danail Stoyanov, Stefanie Speidel, Lena Maier-Hein, Anthony Jarc
|
Intuitive Surgical SurgToolLoc Challenge Results: 2022-2023
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Robotic-assisted (RA) surgery promises to transform surgical intervention.
Intuitive Surgical is committed to fostering these changes and the machine
learning models and algorithms that will enable them. With these goals in mind,
we have invited the surgical data science community to participate in a yearly
competition hosted through the Medical Image Computing and Computer Assisted
Intervention (MICCAI) conference. With varying changes from year to year, we
have challenged the community to solve difficult machine learning problems in
the context of advanced RA applications. Here we document the results of these
challenges, focusing on surgical tool localization (SurgToolLoc). The publicly
released dataset that accompanies these challenges is detailed in a separate
paper arXiv:2501.09209 [1].
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 21:44:39 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 17:17:21 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 14:42:27 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Zia",
"Aneeq",
""
],
[
"Berniker",
"Max",
""
],
[
"Nespolo",
"Rogerio Garcia",
""
],
[
"Perreault",
"Conor",
""
],
[
"Bhattacharyya",
"Kiran",
""
],
[
"Liu",
"Xi",
""
],
[
"Wang",
"Ziheng",
""
],
[
"Kondo",
"Satoshi",
""
],
[
"Kasai",
"Satoshi",
""
],
[
"Hirasawa",
"Kousuke",
""
],
[
"Liu",
"Bo",
""
],
[
"Austin",
"David",
""
],
[
"Wang",
"Yiheng",
""
],
[
"Futrega",
"Michal",
""
],
[
"Puget",
"Jean-Francois",
""
],
[
"Li",
"Zhenqiang",
""
],
[
"Sato",
"Yoichi",
""
],
[
"Fujii",
"Ryo",
""
],
[
"Hachiuma",
"Ryo",
""
],
[
"Masuda",
"Mana",
""
],
[
"Saito",
"Hideo",
""
],
[
"Wang",
"An",
""
],
[
"Xu",
"Mengya",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Bai",
"Long",
""
],
[
"Pang",
"Winnie",
""
],
[
"Ren",
"Hongliang",
""
],
[
"Nwoye",
"Chinedu",
""
],
[
"Sestini",
"Luca",
""
],
[
"Padoy",
"Nicolas",
""
],
[
"Nielsen",
"Maximilian",
""
],
[
"Schüttler",
"Samuel",
""
],
[
"Sentker",
"Thilo",
""
],
[
"Husseini",
"Hümeyra",
""
],
[
"Baltruschat",
"Ivo",
""
],
[
"Schmitz",
"Rüdiger",
""
],
[
"Werner",
"René",
""
],
[
"Matsun",
"Aleksandr",
""
],
[
"Farooq",
"Mugariya",
""
],
[
"Saaed",
"Numan",
""
],
[
"Viera",
"Jose Renato Restom",
""
],
[
"Yaqub",
"Mohammad",
""
],
[
"Getty",
"Neil",
""
],
[
"Xia",
"Fangfang",
""
],
[
"Zhao",
"Zixuan",
""
],
[
"Duan",
"Xiaotian",
""
],
[
"Yao",
"Xing",
""
],
[
"Lou",
"Ange",
""
],
[
"Yang",
"Hao",
""
],
[
"Han",
"Jintong",
""
],
[
"Noble",
"Jack",
""
],
[
"Wu",
"Jie Ying",
""
],
[
"Alshirbaji",
"Tamer Abdulbaki",
""
],
[
"Jalal",
"Nour Aldeen",
""
],
[
"Arabian",
"Herag",
""
],
[
"Ding",
"Ning",
""
],
[
"Moeller",
"Knut",
""
],
[
"Chen",
"Weiliang",
""
],
[
"He",
"Quan",
""
],
[
"Bilal",
"Muhammad",
""
],
[
"Akinosho",
"Taofeek",
""
],
[
"Qayyum",
"Adnan",
""
],
[
"Caputo",
"Massimo",
""
],
[
"Vohra",
"Hunaid",
""
],
[
"Loizou",
"Michael",
""
],
[
"Ajayi",
"Anuoluwapo",
""
],
[
"Berrou",
"Ilhem",
""
],
[
"Niyi-Odumosu",
"Faatihah",
""
],
[
"Budd",
"Charlie",
""
],
[
"Alabi",
"Oluwatosin",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Zhao",
"Ruoxi",
""
],
[
"Acar",
"Ayberk",
""
],
[
"Han",
"John",
""
],
[
"Atoum",
"Jumanh",
""
],
[
"Qin",
"Yinhong",
""
],
[
"Wu",
"Jie Ying",
""
],
[
"Hua",
"Surong",
""
],
[
"Ping",
"Lu",
""
],
[
"Wu",
"Wenming",
""
],
[
"Wei",
"Rongfeng",
""
],
[
"Wu",
"Jinlin",
""
],
[
"Pang",
"You",
""
],
[
"Chen",
"Zhen",
""
],
[
"Jaspers",
"Tim",
""
],
[
"Yamlahi",
"Amine",
""
],
[
"Kalinowski",
"Piotr",
""
],
[
"Michael",
"Dominik",
""
],
[
"dsch",
"Tim Rä",
""
],
[
"Hübner",
"Marco",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Speidel",
"Stefanie",
""
],
[
"Maier-Hein",
"Lena",
""
],
[
"Jarc",
"Anthony",
""
]
] |
TITLE: Intuitive Surgical SurgToolLoc Challenge Results: 2022-2023
ABSTRACT: Robotic-assisted (RA) surgery promises to transform surgical intervention.
Intuitive Surgical is committed to fostering these changes and the machine
learning models and algorithms that will enable them. With these goals in mind,
we have invited the surgical data science community to participate in a yearly
competition hosted through the Medical Image Computing and Computer Assisted
Intervention (MICCAI) conference. With varying changes from year to year, we
have challenged the community to solve difficult machine learning problems in
the context of advanced RA applications. Here we document the results of these
challenges, focusing on surgical tool localization (SurgToolLoc). The publicly
released dataset that accompanies these challenges is detailed in a separate
paper arXiv:2501.09209 [1].
|
new_dataset
| 0.959001 |
2308.01251
|
Yiming Zhou
|
Yiming Zhou, Yuexing Peng, Junchuan Yu, Daqing Ge, Wei Xiang
|
A Multi-Source Data Fusion-based Semantic Segmentation Model for Relic
Landslide Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As natural disasters, landslides often bring tremendous losses to human
lives, so reliable detection of landslide risks is urgently demanded. When
detecting relic landslides, which present important information for landslide
risk warning, problems such as visual blur and small-sized datasets cause great
challenges when using remote sensing images. To extract accurate semantic
features, a hyper-pixel-wise contrastive learning augmented segmentation
network (HPCL-Net) is proposed, which augments the local salient feature
extraction from boundaries of landslides through HPCL and fuses heterogeneous
information in the semantic space from high-resolution remote sensing images
and digital elevation model data. For full utilization of precious samples, a
global hyper-pixel-wise sample pair queues-based contrastive learning method is
developed, which includes the construction of global queues that store
hyper-pixel-wise samples and the updating scheme of a momentum encoder,
reliably enhancing the extraction ability of semantic features. The proposed
HPCL-Net is evaluated on the Loess Plateau relic landslide dataset and
experimental results verify that the proposed HPCL-Net greatly outperforms
existing models, where the mIoU is increased from 0.620 to 0.651, the Landslide
IoU is improved from 0.334 to 0.394, and the F1 score is enhanced from 0.501 to
0.565.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 16:11:51 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Oct 2023 04:15:56 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Dec 2024 01:52:57 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Feb 2025 00:51:20 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Zhou",
"Yiming",
""
],
[
"Peng",
"Yuexing",
""
],
[
"Yu",
"Junchuan",
""
],
[
"Ge",
"Daqing",
""
],
[
"Xiang",
"Wei",
""
]
] |
TITLE: A Multi-Source Data Fusion-based Semantic Segmentation Model for Relic
Landslide Detection
ABSTRACT: As natural disasters, landslides often bring tremendous losses to human
lives, so reliable detection of landslide risks is urgently demanded. When
detecting relic landslides, which present important information for landslide
risk warning, problems such as visual blur and small-sized datasets cause great
challenges when using remote sensing images. To extract accurate semantic
features, a hyper-pixel-wise contrastive learning augmented segmentation
network (HPCL-Net) is proposed, which augments the local salient feature
extraction from boundaries of landslides through HPCL and fuses heterogeneous
information in the semantic space from high-resolution remote sensing images
and digital elevation model data. For full utilization of precious samples, a
global hyper-pixel-wise sample pair queues-based contrastive learning method is
developed, which includes the construction of global queues that store
hyper-pixel-wise samples and the updating scheme of a momentum encoder,
reliably enhancing the extraction ability of semantic features. The proposed
HPCL-Net is evaluated on the Loess Plateau relic landslide dataset and
experimental results verify that the proposed HPCL-Net greatly outperforms
existing models, where the mIoU is increased from 0.620 to 0.651, the Landslide
IoU is improved from 0.334 to 0.394, and the F1 score is enhanced from 0.501 to
0.565.
|
no_new_dataset
| 0.948442 |
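Aside on the HPCL-Net record above: a minimal MoCo-style sketch of global sample queues with a momentum encoder, one way to realize hyper-pixel-wise contrastive learning. Queue size, momentum m, and temperature tau are assumptions; the paper's exact sample-pair construction differs.

```python
# Hedged MoCo-style sketch of queue-based contrastive learning with a
# momentum (key) encoder over hyper-pixel features.
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.mul_(m).add_(pq, alpha=1 - m)     # slow-moving key encoder

def hpcl_loss(q, k_pos, queue, tau=0.07):
    # q, k_pos: (B, D) hyper-pixel features; queue: (K, D) stored negatives
    q, k_pos = F.normalize(q, dim=-1), F.normalize(k_pos, dim=-1)
    l_pos = (q * k_pos).sum(-1, keepdim=True)            # (B, 1)
    l_neg = q @ F.normalize(queue, dim=-1).T             # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)   # positives sit at index 0
```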
2310.02557
|
Florentin Guth
|
Zahra Kadkhodaie, Florentin Guth, Eero P. Simoncelli, St\'ephane
Mallat
|
Generalization in diffusion models arises from geometry-adaptive
harmonic representations
|
Accepted for oral presentation at ICLR, Vienna, May 2024
|
Int'l Conf on Learning Representations (ICLR), vol.12, Vienna, May
2024. Outstanding Paper award
| null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks (DNNs) trained for image denoising are able to generate
high-quality samples with score-based reverse diffusion algorithms. These
impressive capabilities seem to imply an escape from the curse of
dimensionality, but recent reports of memorization of the training set raise
the question of whether these networks are learning the "true" continuous
density of the data. Here, we show that two DNNs trained on non-overlapping
subsets of a dataset learn nearly the same score function, and thus the same
density, when the number of training images is large enough. In this regime of
strong generalization, diffusion-generated images are distinct from the
training set, and are of high visual quality, suggesting that the inductive
biases of the DNNs are well-aligned with the data density. We analyze the
learned denoising functions and show that the inductive biases give rise to a
shrinkage operation in a basis adapted to the underlying image. Examination of
these bases reveals oscillating harmonic structures along contours and in
homogeneous regions. We demonstrate that trained denoisers are inductively
biased towards these geometry-adaptive harmonic bases since they arise not only
when the network is trained on photographic images, but also when it is trained
on image classes supported on low-dimensional manifolds for which the harmonic
basis is suboptimal. Finally, we show that when trained on regular image
classes for which the optimal basis is known to be geometry-adaptive and
harmonic, the denoising performance of the networks is near-optimal.
|
[
{
"version": "v1",
"created": "Wed, 4 Oct 2023 03:30:32 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Mar 2024 18:21:48 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Apr 2024 15:48:47 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Kadkhodaie",
"Zahra",
""
],
[
"Guth",
"Florentin",
""
],
[
"Simoncelli",
"Eero P.",
""
],
[
"Mallat",
"Stéphane",
""
]
] |
TITLE: Generalization in diffusion models arises from geometry-adaptive
harmonic representations
ABSTRACT: Deep neural networks (DNNs) trained for image denoising are able to generate
high-quality samples with score-based reverse diffusion algorithms. These
impressive capabilities seem to imply an escape from the curse of
dimensionality, but recent reports of memorization of the training set raise
the question of whether these networks are learning the "true" continuous
density of the data. Here, we show that two DNNs trained on non-overlapping
subsets of a dataset learn nearly the same score function, and thus the same
density, when the number of training images is large enough. In this regime of
strong generalization, diffusion-generated images are distinct from the
training set, and are of high visual quality, suggesting that the inductive
biases of the DNNs are well-aligned with the data density. We analyze the
learned denoising functions and show that the inductive biases give rise to a
shrinkage operation in a basis adapted to the underlying image. Examination of
these bases reveals oscillating harmonic structures along contours and in
homogeneous regions. We demonstrate that trained denoisers are inductively
biased towards these geometry-adaptive harmonic bases since they arise not only
when the network is trained on photographic images, but also when it is trained
on image classes supported on low-dimensional manifolds for which the harmonic
basis is suboptimal. Finally, we show that when trained on regular image
classes for which the optimal basis is known to be geometry-adaptive and
harmonic, the denoising performance of the networks is near-optimal.
|
no_new_dataset
| 0.95222 |
2310.16152
|
Md Rafi Ur Rashid
|
Md Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana,
Shagufta Mehnaz
|
FLTrojan: Privacy Leakage Attacks against Federated Language Models
Through Selective Weight Tampering
|
20 pages (including bibliography and Appendix), Submitted to ACM CCS
'24
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Federated learning (FL) has become a key component in various language
modeling applications such as machine translation, next-word prediction, and
medical record analysis. These applications are trained on datasets from many
FL participants that often include privacy-sensitive data, such as healthcare
records, phone/credit card numbers, login credentials, etc. Although FL enables
computation without requiring clients to share their raw data, determining
the extent of privacy leakage in federated language models is challenging and
not straightforward. Moreover, existing attacks aim to extract data regardless
of how sensitive or naive it is. To fill this research gap, we introduce two
novel findings with regard to leaking privacy-sensitive user data from
federated large language models. Firstly, we make a key observation that model
snapshots from the intermediate rounds in FL can cause greater privacy leakage
than the final trained model. Secondly, we identify that privacy leakage can be
aggravated by tampering with a model's selective weights that are specifically
responsible for memorizing the sensitive training data. We show how a malicious
client can leak the privacy-sensitive data of some other users in FL even
without any cooperation from the server. Our best-performing method improves
the membership inference recall by 29% and achieves up to 71% private data
reconstruction, evidently outperforming existing attacks with stronger
assumptions of adversary capabilities.
|
[
{
"version": "v1",
"created": "Tue, 24 Oct 2023 19:50:01 GMT"
},
{
"version": "v2",
"created": "Sun, 26 May 2024 03:44:52 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 03:09:51 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Rashid",
"Md Rafi Ur",
""
],
[
"Dasu",
"Vishnu Asutosh",
""
],
[
"Gu",
"Kang",
""
],
[
"Sultana",
"Najrin",
""
],
[
"Mehnaz",
"Shagufta",
""
]
] |
TITLE: FLTrojan: Privacy Leakage Attacks against Federated Language Models
Through Selective Weight Tampering
ABSTRACT: Federated learning (FL) has become a key component in various language
modeling applications such as machine translation, next-word prediction, and
medical record analysis. These applications are trained on datasets from many
FL participants that often include privacy-sensitive data, such as healthcare
records, phone/credit card numbers, login credentials, etc. Although FL enables
computation without requiring clients to share their raw data, determining
the extent of privacy leakage in federated language models is challenging and
not straightforward. Moreover, existing attacks aim to extract data regardless
of how sensitive or naive it is. To fill this research gap, we introduce two
novel findings with regard to leaking privacy-sensitive user data from
federated large language models. Firstly, we make a key observation that model
snapshots from the intermediate rounds in FL can cause greater privacy leakage
than the final trained model. Secondly, we identify that privacy leakage can be
aggravated by tampering with a model's selective weights that are specifically
responsible for memorizing the sensitive training data. We show how a malicious
client can leak the privacy-sensitive data of some other users in FL even
without any cooperation from the server. Our best-performing method improves
the membership inference recall by 29% and achieves up to 71% private data
reconstruction, evidently outperforming existing attacks with stronger
assumptions of adversary capabilities.
|
no_new_dataset
| 0.944382 |
2401.07576
|
Wannita Takerngsaksiri
|
Wannita Takerngsaksiri, Rujikorn Charakorn, Chakkrit Tantithamthavorn,
Yuan-Fang Li
|
PyTester: Deep Reinforcement Learning for Text-to-Testcase Generation
|
17 pages, 5 figures
|
Journal of Systems and Software, Volume 224, 2025, 112381
|
10.1016/j.jss.2025.112381
| null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Test-driven development (TDD) is a widely employed software development
practice that mandates writing test cases based on requirements before writing
the actual code. While writing test cases is the centerpiece of TDD, it is
time-consuming, expensive, and often shunned by developers. To address these
issues associated with TDD, automated test case generation approaches have
recently been investigated. Such approaches take source code as input, but not
the requirements. Therefore, existing work does not fully support true TDD, as
actual code is required to generate test cases. In addition, current deep
learning-based test case generation approaches are trained with one learning
objective, i.e., to generate test cases that are exactly matched with the
ground-truth test cases. However, such approaches may limit the model's ability
to generate different yet correct test cases. In this paper, we introduce
PyTester, a Text-to-Testcase generation approach that can automatically
generate syntactically correct, executable, complete, and effective test cases
while being aligned with a given natural language requirement. We evaluate
PyTester on the public APPS benchmark dataset, and the results show that our
Deep RL approach enables PyTester, a small language model, to outperform much
larger language models like GPT3.5, StarCoder, and InCoder. Our findings
suggest that future research could consider improving small over large LMs for
better resource efficiency by integrating the SE domain knowledge into the
design of reinforcement learning architecture.
|
[
{
"version": "v1",
"created": "Mon, 15 Jan 2024 10:21:58 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Nov 2024 06:42:56 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Takerngsaksiri",
"Wannita",
""
],
[
"Charakorn",
"Rujikorn",
""
],
[
"Tantithamthavorn",
"Chakkrit",
""
],
[
"Li",
"Yuan-Fang",
""
]
] |
TITLE: PyTester: Deep Reinforcement Learning for Text-to-Testcase Generation
ABSTRACT: Test-driven development (TDD) is a widely employed software development
practice that mandates writing test cases based on requirements before writing
the actual code. While writing test cases is the centerpiece of TDD, it is
time-consuming, expensive, and often shunned by developers. To address these
issues associated with TDD, automated test case generation approaches have
recently been investigated. Such approaches take source code as input, but not
the requirements. Therefore, existing work does not fully support true TDD, as
actual code is required to generate test cases. In addition, current deep
learning-based test case generation approaches are trained with one learning
objective, i.e., to generate test cases that exactly match the ground-truth
test cases. However, such approaches may limit the model's ability
to generate different yet correct test cases. In this paper, we introduce
PyTester, a Text-to-Testcase generation approach that can automatically
generate syntactically correct, executable, complete, and effective test cases
while being aligned with a given natural language requirement. We evaluate
PyTester on the public APPS benchmark dataset, and the results show that our
Deep RL approach enables PyTester, a small language model, to outperform much
larger language models like GPT3.5, StarCoder, and InCoder. Our findings
suggest that future research could consider improving small LMs over large ones
for better resource efficiency by integrating SE domain knowledge into the
design of reinforcement learning architecture.
|
no_new_dataset
| 0.938067 |
2402.00234
|
Shiwali Mohan
|
Shreya Rajagopal, Jae Ho Sohn, Hari Subramonyam, Shiwali Mohan
|
Can Generative AI Support Patients' & Caregivers' Informational Needs?
Towards Task-Centric Evaluation Of AI Systems
| null |
Joint Proceedings of the ACM IUI Workshops 2025, March 24-27,
2025, Cagliari, Italy
| null | null |
cs.HC cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Generative AI systems such as ChatGPT and Claude are built upon language
models that are typically evaluated for accuracy on curated benchmark datasets.
Such evaluation paradigms measure predictive and reasoning capabilities of
language models but do not assess if they can provide information that is
useful to people. In this paper, we take some initial steps in developing an
evaluation paradigm that centers human understanding and decision-making. We
study the utility of generative AI systems in supporting people in a concrete
task - making sense of clinical reports and imagery in order to make a clinical
decision. We conducted a formative need-finding study in which participants
discussed chest computed tomography (CT) scans and associated radiology reports
of a fictitious close relative with a cardiothoracic radiologist. Using
thematic analysis of the conversation between participants and medical experts,
we identified commonly occurring themes across interactions, including
clarifying medical terminology, locating the problems mentioned in the report
in the scanned image, understanding disease prognosis, discussing the next
diagnostic steps, and comparing treatment options. Based on these themes, we
evaluated two state-of-the-art generative AI systems against the radiologist's
responses. Our results reveal variability in the quality of responses generated
by the models across various themes. We highlight the importance of
patient-facing generative AI systems to accommodate a diverse range of
conversational themes, catering to the real-world informational needs of
patients.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2024 23:24:37 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 05:46:53 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Rajagopal",
"Shreya",
""
],
[
"Sohn",
"Jae Ho",
""
],
[
"Subramonyam",
"Hari",
""
],
[
"Mohan",
"Shiwali",
""
]
] |
TITLE: Can Generative AI Support Patients' & Caregivers' Informational Needs?
Towards Task-Centric Evaluation Of AI Systems
ABSTRACT: Generative AI systems such as ChatGPT and Claude are built upon language
models that are typically evaluated for accuracy on curated benchmark datasets.
Such evaluation paradigms measure predictive and reasoning capabilities of
language models but do not assess if they can provide information that is
useful to people. In this paper, we take some initial steps in developing an
evaluation paradigm that centers human understanding and decision-making. We
study the utility of generative AI systems in supporting people in a concrete
task - making sense of clinical reports and imagery in order to make a clinical
decision. We conducted a formative need-finding study in which participants
discussed chest computed tomography (CT) scans and associated radiology reports
of a fictitious close relative with a cardiothoracic radiologist. Using
thematic analysis of the conversation between participants and medical experts,
we identified commonly occurring themes across interactions, including
clarifying medical terminology, locating the problems mentioned in the report
in the scanned image, understanding disease prognosis, discussing the next
diagnostic steps, and comparing treatment options. Based on these themes, we
evaluated two state-of-the-art generative AI systems against the radiologist's
responses. Our results reveal variability in the quality of responses generated
by the models across various themes. We highlight the importance of
patient-facing generative AI systems to accommodate a diverse range of
conversational themes, catering to the real-world informational needs of
patients.
|
no_new_dataset
| 0.925162 |
2403.00946
|
Jianyu Zhang
|
Jianyu Zhang, L\'eon Bottou
|
Fine-tuning with Very Large Dropout
|
Fine-tuning with very large dropout outperforms weight-averaging and
ensemble on ResNet and large vision transformer
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
It is impossible today to pretend that the practice of machine learning is
always compatible with the idea that training and testing data follow the same
distribution. Several authors have recently used ensemble techniques to show
how scenarios involving multiple data distributions are best served by
representations that are both richer than those obtained by regularizing for
the best in-distribution performance, and richer than those obtained under the
influence of the implicit sparsity bias of common stochastic gradient
procedures.
This contribution investigates the use of very high dropout rates instead of
ensembles to obtain such rich representations. Although training a deep network
from scratch using such dropout rates is virtually impossible, fine-tuning a
large pre-trained model under such conditions is not only possible but also
achieves out-of-distribution performances that exceed those of both ensembles
and weight averaging methods such as model soups.
This result has practical significance because the importance of the
fine-tuning scenario has considerably grown in recent years. This result also
provides interesting insights on the nature of rich representations and on the
intrinsically linear nature of fine-tuning a large network using a
comparatively small dataset.
|
[
{
"version": "v1",
"created": "Fri, 1 Mar 2024 19:50:22 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Oct 2024 20:01:45 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Feb 2025 22:15:53 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Zhang",
"Jianyu",
""
],
[
"Bottou",
"Léon",
""
]
] |
TITLE: Fine-tuning with Very Large Dropout
ABSTRACT: It is impossible today to pretend that the practice of machine learning is
always compatible with the idea that training and testing data follow the same
distribution. Several authors have recently used ensemble techniques to show
how scenarios involving multiple data distributions are best served by
representations that are both richer than those obtained by regularizing for
the best in-distribution performance, and richer than those obtained under the
influence of the implicit sparsity bias of common stochastic gradient
procedures.
This contribution investigates the use of very high dropout rates instead of
ensembles to obtain such rich representations. Although training a deep network
from scratch using such dropout rates is virtually impossible, fine-tuning a
large pre-trained model under such conditions is not only possible but also
achieves out-of-distribution performances that exceed those of both ensembles
and weight averaging methods such as model soups.
This result has practical significance because the importance of the
fine-tuning scenario has considerably grown in recent years. This result also
provides interesting insights on the nature of rich representations and on the
intrinsically linear nature of fine-tuning a large network using a
comparatively small dataset.
|
no_new_dataset
| 0.945045 |
2403.01570
|
Jiahuan Yan
|
Jiahuan Yan, Jintai Chen, Chaowen Hu, Bo Zheng, Yaojun Hu, Jimeng Sun,
Jian Wu
|
Small Models are LLM Knowledge Triggers on Medical Tabular Prediction
|
Accepted to ICLR 2025. Codes will be available at
https://github.com/jyansir/sersal
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments in large language models (LLMs) have demonstrated
impressive domain proficiency on unstructured textual or multi-modal tasks.
However, despite their intrinsic world knowledge, their application to
structured tabular data prediction still lags behind, primarily due to
numerical insensitivity and a modality discrepancy that create a gap between LLM
reasoning and statistical tabular learning. Unlike textual or vision data
(e.g., electronic clinical notes or medical imaging data), tabular data is
often presented in heterogeneous numerical values (e.g., CBC reports). This
ubiquitous data format requires intensive expert annotation, and its numerical
nature limits LLMs' capability to effectively transfer untapped domain
expertise. In this paper, we propose SERSAL, a general self-prompting method by
synergy learning with small models to enhance LLM tabular prediction in an
unsupervised manner. Specifically, SERSAL utilizes the LLM's prior outcomes as
original soft noisy annotations, which are dynamically leveraged to teach a
better small student model. Conversely, the outcomes from the trained small
model are used to teach the LLM to further refine its real capability. This
process can be repeatedly applied to gradually distill refined knowledge for
continuous progress. Comprehensive experiments on widely used medical domain
tabular datasets show that, without access to gold labels, applying SERSAL to
the OpenAI GPT reasoning process attains substantial improvement compared to
linguistic prompting methods, serving as an orthogonal direction for tabular
LLMs; an increasing prompting bonus is observed as more powerful LLMs appear.
|
[
{
"version": "v1",
"created": "Sun, 3 Mar 2024 17:35:52 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Mar 2024 04:07:01 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 09:23:04 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Yan",
"Jiahuan",
""
],
[
"Chen",
"Jintai",
""
],
[
"Hu",
"Chaowen",
""
],
[
"Zheng",
"Bo",
""
],
[
"Hu",
"Yaojun",
""
],
[
"Sun",
"Jimeng",
""
],
[
"Wu",
"Jian",
""
]
] |
TITLE: Small Models are LLM Knowledge Triggers on Medical Tabular Prediction
ABSTRACT: Recent developments in large language models (LLMs) have demonstrated
impressive domain proficiency on unstructured textual or multi-modal tasks.
However, despite their intrinsic world knowledge, their application to
structured tabular data prediction still lags behind, primarily due to
numerical insensitivity and a modality discrepancy that create a gap between LLM
reasoning and statistical tabular learning. Unlike textual or vision data
(e.g., electronic clinical notes or medical imaging data), tabular data is
often presented in heterogeneous numerical values (e.g., CBC reports). This
ubiquitous data format requires intensive expert annotation, and its numerical
nature limits LLMs' capability to effectively transfer untapped domain
expertise. In this paper, we propose SERSAL, a general self-prompting method by
synergy learning with small models to enhance LLM tabular prediction in an
unsupervised manner. Specifically, SERSAL utilizes the LLM's prior outcomes as
original soft noisy annotations, which are dynamically leveraged to teach a
better small student model. Conversely, the outcomes from the trained small
model are used to teach the LLM to further refine its real capability. This
process can be repeatedly applied to gradually distill refined knowledge for
continuous progress. Comprehensive experiments on widely used medical domain
tabular datasets show that, without access to gold labels, applying SERSAL to
the OpenAI GPT reasoning process attains substantial improvement compared to
linguistic prompting methods, serving as an orthogonal direction for tabular
LLMs; an increasing prompting bonus is observed as more powerful LLMs appear.
|
no_new_dataset
| 0.951729 |
2403.20145
|
Nilesh Kumar Sahu
|
Manjeet Yadav, Nilesh Kumar Sahu, Mudita Chaturvedi, Snehil Gupta,
Haroon R Lone
|
Fine-tuning Large Language Models for Automated Diagnostic Screening
Summaries
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Improving mental health support in developing countries is a pressing need.
One potential solution is the development of scalable, automated systems to
conduct diagnostic screenings, which could help alleviate the burden on mental
health professionals. In this work, we evaluate several state-of-the-art Large
Language Models (LLMs), with and without fine-tuning, on our custom dataset for
generating concise summaries from mental state examinations. We rigorously
evaluate four different models for summary generation using established ROUGE
metrics and input from human evaluators. The results highlight that our
top-performing fine-tuned model outperforms existing models, achieving ROUGE-1
and ROUGE-L values of 0.810 and 0.764, respectively. Furthermore, we assessed
the fine-tuned model's generalizability on a publicly available D4 dataset, and
the outcomes were promising, indicating its potential applicability beyond our
custom dataset.
|
[
{
"version": "v1",
"created": "Fri, 29 Mar 2024 12:25:37 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Apr 2024 10:36:48 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Yadav",
"Manjeet",
""
],
[
"Sahu",
"Nilesh Kumar",
""
],
[
"Chaturvedi",
"Mudita",
""
],
[
"Gupta",
"Snehil",
""
],
[
"Lone",
"Haroon R",
""
]
] |
TITLE: Fine-tuning Large Language Models for Automated Diagnostic Screening
Summaries
ABSTRACT: Improving mental health support in developing countries is a pressing need.
One potential solution is the development of scalable, automated systems to
conduct diagnostic screenings, which could help alleviate the burden on mental
health professionals. In this work, we evaluate several state-of-the-art Large
Language Models (LLMs), with and without fine-tuning, on our custom dataset for
generating concise summaries from mental state examinations. We rigorously
evaluate four different models for summary generation using established ROUGE
metrics and input from human evaluators. The results highlight that our
top-performing fine-tuned model outperforms existing models, achieving ROUGE-1
and ROUGE-L values of 0.810 and 0.764, respectively. Furthermore, we assessed
the fine-tuned model's generalizability on a publicly available D4 dataset, and
the outcomes were promising, indicating its potential applicability beyond our
custom dataset.
|
new_dataset
| 0.958538 |
2405.05061
|
Ayano Okoso
|
Ayano Okoso, Keisuke Otaki, Satoshi Koide, Yukino Baba
|
Impact of Tone-Aware Explanations in Recommender Systems
| null |
Transactions on Recommender Systems 2025
|
10.1145/3718101
| null |
cs.HC cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recommender systems, the presentation of explanations plays a crucial role
in supporting users' decision-making processes. Although numerous existing
studies have focused on the effects (transparency or persuasiveness) of
explanation content, explanation expression is largely overlooked. Tone, such
as formal and humorous, is directly linked to expressiveness and is an
important element in human communication. However, studies on the impact of
tone on explanations within the context of recommender systems are
insufficient. Therefore, this study investigates the effect of explanation
tones through an online user study from three aspects: perceived effects,
domain differences, and user attributes. We create a dataset using a large
language model to generate fictional items and explanations with various tones
in the domains of movies, hotels, and home products. Analysis of the collected
data reveals different perceived effects of tones depending on the domain.
Moreover, user attributes such as age and personality traits are found to
influence the impact of tone. This research underscores the critical role of
tones in explanations within recommender systems, suggesting that attention to
tone can enhance user experience.
|
[
{
"version": "v1",
"created": "Wed, 8 May 2024 13:55:52 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Okoso",
"Ayano",
""
],
[
"Otaki",
"Keisuke",
""
],
[
"Koide",
"Satoshi",
""
],
[
"Baba",
"Yukino",
""
]
] |
TITLE: Impact of Tone-Aware Explanations in Recommender Systems
ABSTRACT: In recommender systems, the presentation of explanations plays a crucial role
in supporting users' decision-making processes. Although numerous existing
studies have focused on the effects (transparency or persuasiveness) of
explanation content, explanation expression is largely overlooked. Tone, such
as formal and humorous, is directly linked to expressiveness and is an
important element in human communication. However, studies on the impact of
tone on explanations within the context of recommender systems are
insufficient. Therefore, this study investigates the effect of explanation
tones through an online user study from three aspects: perceived effects,
domain differences, and user attributes. We create a dataset using a large
language model to generate fictional items and explanations with various tones
in the domains of movies, hotels, and home products. Analysis of the collected
data reveals different perceived effects of tones depending on the domain.
Moreover, user attributes such as age and personality traits are found to
influence the impact of tone. This research underscores the critical role of
tones in explanations within recommender systems, suggesting that attention to
tone can enhance user experience.
|
new_dataset
| 0.959269 |
2405.15392
|
Yuandou Wang
|
Yuandou Wang, Sheejan Tripathi, Siamak Farshidi, and Zhiming Zhao
|
D-VRE: From a Jupyter-enabled Private Research Environment to
Decentralized Collaborative Research Ecosystem
|
We revised the manuscript draft and submitted the revised manuscript
to the journal Blockchain: Research and Applications
| null |
10.1016/j.bcra.2024.100244
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Today, scientific research is increasingly data-centric and
compute-intensive, relying on data and models across distributed sources.
However, it still faces challenges in the traditional cooperation mode, due to
the high storage and computing cost, geo-location barriers, and local
confidentiality regulations. The Jupyter environment has recently emerged and
evolved as a vital virtual research environment for scientific computing, which
researchers can use to scale computational analyses up to larger datasets and
high-performance computing resources. Nevertheless, existing approaches lack
robust support for a decentralized cooperation mode to unlock the full potential
of decentralized collaborative scientific research, e.g., seamlessly secure
data sharing. In this work, we change the basic structure and legacy norms of
current research environments via the seamless integration of Jupyter with
Ethereum blockchain capabilities. As such, it creates a Decentralized Virtual
Research Environment (D-VRE) from private computational notebooks to a
decentralized collaborative research ecosystem. We propose a novel architecture
for the D-VRE and prototype some essential D-VRE elements for enabling secure
data sharing with decentralized identity, user-centric agreement-making,
membership, and research asset management. To validate our method, we conducted
an experimental study to test all functionalities of D-VRE smart contracts and
their gas consumption. In addition, we deployed the D-VRE prototype on a test
net of the Ethereum blockchain for demonstration. The feedback from the studies
showcases the current prototype's usability, ease of use, and potential and
suggests further improvements.
|
[
{
"version": "v1",
"created": "Fri, 24 May 2024 09:46:17 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jun 2024 16:55:23 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Wang",
"Yuandou",
""
],
[
"Tripathi",
"Sheejan",
""
],
[
"Farshidi",
"Siamak",
""
],
[
"Zhao",
"Zhiming",
""
]
] |
TITLE: D-VRE: From a Jupyter-enabled Private Research Environment to
Decentralized Collaborative Research Ecosystem
ABSTRACT: Today, scientific research is increasingly data-centric and
compute-intensive, relying on data and models across distributed sources.
However, it still faces challenges in the traditional cooperation mode, due to
the high storage and computing cost, geo-location barriers, and local
confidentiality regulations. The Jupyter environment has recently emerged and
evolved as a vital virtual research environment for scientific computing, which
researchers can use to scale computational analyses up to larger datasets and
high-performance computing resources. Nevertheless, existing approaches lack
robust support of a decentralized cooperation mode to unlock the full potential
of decentralized collaborative scientific research, e.g., seamlessly secure
data sharing. In this work, we change the basic structure and legacy norms of
current research environments via the seamless integration of Jupyter with
Ethereum blockchain capabilities. As such, it creates a Decentralized Virtual
Research Environment (D-VRE) from private computational notebooks to a
decentralized collaborative research ecosystem. We propose a novel architecture
for the D-VRE and prototype some essential D-VRE elements for enabling secure
data sharing with decentralized identity, user-centric agreement-making,
membership, and research asset management. To validate our method, we conducted
an experimental study to test all functionalities of D-VRE smart contracts and
their gas consumption. In addition, we deployed the D-VRE prototype on a test
net of the Ethereum blockchain for demonstration. The feedback from the studies
showcases the current prototype's usability, ease of use, and potential and
suggests further improvements.
|
no_new_dataset
| 0.943971 |
2405.17571
|
Dorian Christoph Quelle
|
Dorian Quelle, Alexandre Bovet
|
Bluesky: Network Topology, Polarization, and Algorithmic Curation
| null |
PLOS ONE 20(2): e0318034 (2025)
|
10.1371/journal.pone.0318034
| null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Bluesky is a nascent Twitter-like and decentralized social media network with
novel features and unprecedented data access. This paper provides a
characterization of its interaction network, studying the political leaning,
polarization, network structure, and algorithmic curation mechanisms of five
million users. The dataset spans from the website's first release in February
of 2023 to May of 2024. We investigate the replies, likes, reposts, and follows
layers of the Bluesky network. We find that all networks are characterized by
heavy-tailed distributions, high clustering, and short connection paths,
similar to other larger social networks. Bluesky introduced feeds: algorithmic
content recommenders created for and by users. We analyze all feeds and find
that while a large number of custom feeds have been created, users' uptake of
them appears to be limited. We analyze the hyperlinks shared by Bluesky's users
and find no evidence of polarization in terms of the political leaning of the
news sources they share. They share predominantly left-center news sources and
little to no links associated with questionable news sources. In contrast to
the homogeneous political ideology, we find significant issues-based divergence
by studying opinions related to the Israel-Palestine conflict. Two clear
homophilic clusters emerge: Pro-Palestinian voices outnumber pro-Israeli users,
and the proportion has increased. We conclude by claiming that Bluesky, for all
its novel features, is very similar in its network structure to existing and
larger social media sites and provides unprecedented research opportunities for
social scientists, network scientists, and political scientists alike.
|
[
{
"version": "v1",
"created": "Mon, 27 May 2024 18:10:55 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Aug 2024 20:56:22 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 16:23:23 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Quelle",
"Dorian",
""
],
[
"Bovet",
"Alexandre",
""
]
] |
TITLE: Bluesky: Network Topology, Polarization, and Algorithmic Curation
ABSTRACT: Bluesky is a nascent Twitter-like and decentralized social media network with
novel features and unprecedented data access. This paper provides a
characterization of its interaction network, studying the political leaning,
polarization, network structure, and algorithmic curation mechanisms of five
million users. The dataset spans from the website's first release in February
of 2023 to May of 2024. We investigate the replies, likes, reposts, and follows
layers of the Bluesky network. We find that all networks are characterized by
heavy-tailed distributions, high clustering, and short connection paths,
similar to other larger social networks. Bluesky introduced feeds: algorithmic
content recommenders created for and by users. We analyze all feeds and find
that while a large number of custom feeds have been created, users' uptake of
them appears to be limited. We analyze the hyperlinks shared by Bluesky's users
and find no evidence of polarization in terms of the political leaning of the
news sources they share. They share predominantly left-center news sources and
little to no links associated with questionable news sources. In contrast to
the homogeneous political ideology, we find significant issues-based divergence
by studying opinions related to the Israel-Palestine conflict. Two clear
homophilic clusters emerge: Pro-Palestinian voices outnumber pro-Israeli users,
and the proportion has increased. We conclude by claiming that Bluesky, for all
its novel features, is very similar in its network structure to existing and
larger social media sites and provides unprecedented research opportunities for
social scientists, network scientists, and political scientists alike.
|
no_new_dataset
| 0.936285 |
2405.18540
|
Seanie Lee
|
Seanie Lee, Minsu Kim, Lynn Cherif, David Dobre, Juho Lee, Sung Ju
Hwang, Kenji Kawaguchi, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Moksh
Jain
|
Learning diverse attacks on large language models for robust red-teaming
and safety tuning
|
ICLR 2025
| null | null | null |
cs.CL cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Red-teaming, or identifying prompts that elicit harmful responses, is a
critical step in ensuring the safe and responsible deployment of large language
models (LLMs). Developing effective protection against many modes of attack
prompts requires discovering diverse attacks. Automated red-teaming typically
uses reinforcement learning to fine-tune an attacker language model to generate
prompts that elicit undesirable responses from a target LLM, as measured, for
example, by an auxiliary toxicity classifier. We show that even with explicit
regularization to favor novelty and diversity, existing approaches suffer from
mode collapse or fail to generate effective attacks. As a flexible and
probabilistically principled alternative, we propose to use GFlowNet
fine-tuning, followed by a secondary smoothing phase, to train the attacker
model to generate diverse and effective attack prompts. We find that the
attacks generated by our method are effective against a wide range of target
LLMs, both with and without safety tuning, and transfer well between target
LLMs. Finally, we demonstrate that models safety-tuned using a dataset of
red-teaming prompts generated by our method are robust to attacks from other
RL-based red-teaming approaches.
|
[
{
"version": "v1",
"created": "Tue, 28 May 2024 19:16:17 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 14:49:25 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Lee",
"Seanie",
""
],
[
"Kim",
"Minsu",
""
],
[
"Cherif",
"Lynn",
""
],
[
"Dobre",
"David",
""
],
[
"Lee",
"Juho",
""
],
[
"Hwang",
"Sung Ju",
""
],
[
"Kawaguchi",
"Kenji",
""
],
[
"Gidel",
"Gauthier",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Malkin",
"Nikolay",
""
],
[
"Jain",
"Moksh",
""
]
] |
TITLE: Learning diverse attacks on large language models for robust red-teaming
and safety tuning
ABSTRACT: Red-teaming, or identifying prompts that elicit harmful responses, is a
critical step in ensuring the safe and responsible deployment of large language
models (LLMs). Developing effective protection against many modes of attack
prompts requires discovering diverse attacks. Automated red-teaming typically
uses reinforcement learning to fine-tune an attacker language model to generate
prompts that elicit undesirable responses from a target LLM, as measured, for
example, by an auxiliary toxicity classifier. We show that even with explicit
regularization to favor novelty and diversity, existing approaches suffer from
mode collapse or fail to generate effective attacks. As a flexible and
probabilistically principled alternative, we propose to use GFlowNet
fine-tuning, followed by a secondary smoothing phase, to train the attacker
model to generate diverse and effective attack prompts. We find that the
attacks generated by our method are effective against a wide range of target
LLMs, both with and without safety tuning, and transfer well between target
LLMs. Finally, we demonstrate that models safety-tuned using a dataset of
red-teaming prompts generated by our method are robust to attacks from other
RL-based red-teaming approaches.
|
no_new_dataset
| 0.935346 |
2405.20681
|
Xiaojin Zhang
|
Xiaojin Zhang, Yahao Pang, Yan Kang, Wei Chen, Lixin Fan, Hai Jin,
Qiang Yang
|
No Free Lunch Theorem for Privacy-Preserving LLM Inference
| null | null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Individuals and businesses have benefited significantly from Large
Language Models (LLMs) including PaLM, Gemini and ChatGPT in various ways. For
example, LLMs enhance productivity, reduce costs, and enable us to focus on
more valuable tasks. Furthermore, LLMs possess the capacity to sift through
extensive datasets, uncover underlying patterns, and furnish critical insights
that propel the frontiers of technology and science. However, LLMs also pose
privacy concerns. Users' interactions with LLMs may expose their sensitive
personal or company information. A lack of robust privacy safeguards and legal
frameworks could permit the unwarranted intrusion or improper handling of
individual data, thereby risking infringements of privacy and the theft of
personal identities. To ensure privacy, it is essential to minimize the
dependency between shared prompts and private information. Various
randomization approaches have been proposed to protect prompts' privacy, but
they may incur a utility loss compared to unprotected LLM prompting. Therefore,
it is essential to evaluate the balance between the risk of privacy leakage and
loss of utility when conducting effective protection mechanisms. The current
study develops a framework for inferring privacy-protected Large Language
Models (LLMs) and lays down a solid theoretical basis for examining the
interplay between privacy preservation and utility. The core insight is
encapsulated in a theorem called the No-Free-Lunch (NFL) Theorem.
|
[
{
"version": "v1",
"created": "Fri, 31 May 2024 08:22:53 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 01:55:21 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 02:38:26 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Zhang",
"Xiaojin",
""
],
[
"Pang",
"Yahao",
""
],
[
"Kang",
"Yan",
""
],
[
"Chen",
"Wei",
""
],
[
"Fan",
"Lixin",
""
],
[
"Jin",
"Hai",
""
],
[
"Yang",
"Qiang",
""
]
] |
TITLE: No Free Lunch Theorem for Privacy-Preserving LLM Inference
ABSTRACT: Individuals and businesses have benefited significantly from Large
Language Models (LLMs) including PaLM, Gemini and ChatGPT in various ways. For
example, LLMs enhance productivity, reduce costs, and enable us to focus on
more valuable tasks. Furthermore, LLMs possess the capacity to sift through
extensive datasets, uncover underlying patterns, and furnish critical insights
that propel the frontiers of technology and science. However, LLMs also pose
privacy concerns. Users' interactions with LLMs may expose their sensitive
personal or company information. A lack of robust privacy safeguards and legal
frameworks could permit the unwarranted intrusion or improper handling of
individual data, thereby risking infringements of privacy and the theft of
personal identities. To ensure privacy, it is essential to minimize the
dependency between shared prompts and private information. Various
randomization approaches have been proposed to protect prompts' privacy, but
they may incur a utility loss compared to unprotected LLM prompting. Therefore,
it is essential to evaluate the balance between the risk of privacy leakage and
loss of utility when conducting effective protection mechanisms. The current
study develops a framework for inferring privacy-protected Large Language
Models (LLMs) and lays down a solid theoretical basis for examining the
interplay between privacy preservation and utility. The core insight is
encapsulated in a theorem called the No-Free-Lunch (NFL) Theorem.
|
no_new_dataset
| 0.941547 |
2406.00216
|
Michail Mamalakis Dr
|
Michail Mamalakis, H\'elo\"ise de Vareilles, Graham Murray, Pietro
Lio, John Suckling
|
The Explanation Necessity for Healthcare AI
|
accepted paper in IEEE CITREx 2025 : IEEE Symposium on Explainable,
Responsible, and Trustworthy Computational Intelligence
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Explainability is a critical factor in enhancing the trustworthiness and
acceptance of artificial intelligence (AI) in healthcare, where decisions
directly impact patient outcomes. Despite advancements in AI interpretability,
clear guidelines on when and to what extent explanations are required in
medical applications remain lacking. We propose a novel categorization system
comprising four classes of explanation necessity (self-explainable,
semi-explainable, non-explainable, and new-patterns discovery), guiding the
required level of explanation: whether local (patient or sample level), global
(cohort or dataset level), or both. To support this system, we introduce a
mathematical formulation that incorporates three key factors: (i) robustness of
the evaluation protocol, (ii) variability of expert observations, and (iii)
representation dimensionality of the application. This framework provides a
practical tool for researchers to determine the appropriate depth of
explainability needed, addressing the critical question: When does an AI
medical application need to be explained, and at what level of detail?
|
[
{
"version": "v1",
"created": "Fri, 31 May 2024 22:20:10 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 14:16:47 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Mamalakis",
"Michail",
""
],
[
"de Vareilles",
"Héloïse",
""
],
[
"Murray",
"Graham",
""
],
[
"Lio",
"Pietro",
""
],
[
"Suckling",
"John",
""
]
] |
TITLE: The Explanation Necessity for Healthcare AI
ABSTRACT: Explainability is a critical factor in enhancing the trustworthiness and
acceptance of artificial intelligence (AI) in healthcare, where decisions
directly impact patient outcomes. Despite advancements in AI interpretability,
clear guidelines on when and to what extent explanations are required in
medical applications remain lacking. We propose a novel categorization system
comprising four classes of explanation necessity (self-explainable,
semi-explainable, non-explainable, and new-patterns discovery), guiding the
required level of explanation: whether local (patient or sample level), global
(cohort or dataset level), or both. To support this system, we introduce a
mathematical formulation that incorporates three key factors: (i) robustness of
the evaluation protocol, (ii) variability of expert observations, and (iii)
representation dimensionality of the application. This framework provides a
practical tool for researchers to determine the appropriate depth of
explainability needed, addressing the critical question: When does an AI
medical application need to be explained, and at what level of detail?
|
no_new_dataset
| 0.95511 |
2406.02720
|
Jinyang Liu
|
Haolin Li, Jinyang Liu, Mario Sznaier, Octavia Camps
|
3D-HGS: 3D Half-Gaussian Splatting
|
8 pages, 9 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Photo-realistic image rendering from scene 3D reconstruction is a fundamental
problem in 3D computer vision. This domain has seen considerable advancements
owing to the advent of recent neural rendering techniques. These techniques
predominantly focus on learning volumetric representations of 3D scenes
and refining these representations via loss functions derived from their
rendering. Among these, 3D Gaussian Splatting (3D-GS) has emerged as a
preferred method, surpassing Neural Radiance Fields' (NeRFs) quality and
rendering speed. 3D-GS uses parameterized 3D Gaussians to model both spatial
locations and color information, combined with a tile-based fast rendering
technique. Despite its superior performance, using 3D Gaussian kernels has
inherent limitations in accurately representing discontinuous functions,
notably at edges and corners corresponding to shape discontinuities, and across
varying textures due to color discontinuities. In this paper, we introduce 3D
Half-Gaussian (\textbf{3D-HGS}) kernels, which can be used as a plug-and-play
kernel, to address this issue. Our experiments demonstrate their capability to
improve the performance of current 3D-GS related methods and achieve
state-of-the-art rendering quality performance on various datasets without
compromising their rendering speed.
|
[
{
"version": "v1",
"created": "Tue, 4 Jun 2024 19:04:29 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jun 2024 18:49:59 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Feb 2025 20:52:28 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Li",
"Haolin",
""
],
[
"Liu",
"Jinyang",
""
],
[
"Sznaier",
"Mario",
""
],
[
"Camps",
"Octavia",
""
]
] |
TITLE: 3D-HGS: 3D Half-Gaussian Splatting
ABSTRACT: Photo-realistic image rendering from scene 3D reconstruction is a fundamental
problem in 3D computer vision. This domain has seen considerable advancements
owing to the advent of recent neural rendering techniques. These techniques
predominantly focus on learning volumetric representations of 3D scenes
and refining these representations via loss functions derived from their
rendering. Among these, 3D Gaussian Splatting (3D-GS) has emerged as a
preferred method, surpassing Neural Radiance Fields' (NeRFs) quality and
rendering speed. 3D-GS uses parameterized 3D Gaussians to model both spatial
locations and color information, combined with a tile-based fast rendering
technique. Despite its superior performance, using 3D Gaussian kernels has
inherent limitations in accurately representing discontinuous functions,
notably at edges and corners corresponding to shape discontinuities, and across
varying textures due to color discontinuities. In this paper, we introduce 3D
Half-Gaussian (\textbf{3D-HGS}) kernels, which can be used as a plug-and-play
kernel, to address this issue. Our experiments demonstrate their capability to
improve the performance of current 3D-GS related methods and achieve
state-of-the-art rendering quality performance on various datasets without
compromising their rendering speed.
|
no_new_dataset
| 0.949342 |
2406.03807
|
Yanming Liu
|
Yanming Liu, Xinyue Peng, Jiannan Cao, Shi Bo, Yuwei Zhang, Xuhong
Zhang, Sheng Cheng, Xun Wang, Jianwei Yin, Tianyu Du
|
Tool-Planner: Task Planning with Clusters across Multiple Tools
|
ICLR 2025 Camera Ready version
| null | null | null |
cs.AI cs.CL cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Large language models (LLMs) have demonstrated exceptional reasoning
capabilities, enabling them to solve various complex problems. Recently, this
ability has been applied to the paradigm of tool learning. Tool learning
involves providing examples of tool usage and their corresponding functions,
allowing LLMs to formulate plans and demonstrate the process of invoking and
executing each tool. LLMs can address tasks that they cannot complete
independently, thereby enhancing their potential across different tasks.
However, this approach faces two key challenges. First, redundant error
correction leads to unstable planning and long execution time. Additionally,
designing a correct plan among multiple tools is also a challenge in tool
learning. To address these issues, we propose Tool-Planner, a task-processing
framework based on toolkits. Tool-Planner groups tools whose API functions
share the same functionality into a toolkit and allows LLMs to implement
planning across the various toolkits. When a tool error occurs, the language
model can reselect and adjust tools based on the toolkit. Experiments show that
our approach demonstrates a high pass and win rate across different datasets
and optimizes the planning scheme for tool learning in models such as GPT-4 and
Claude 3, showcasing the potential of our method. Our code is public at
https://github.com/OceannTwT/Tool-Planner
|
[
{
"version": "v1",
"created": "Thu, 6 Jun 2024 07:30:14 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Oct 2024 16:00:39 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 07:12:21 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Liu",
"Yanming",
""
],
[
"Peng",
"Xinyue",
""
],
[
"Cao",
"Jiannan",
""
],
[
"Bo",
"Shi",
""
],
[
"Zhang",
"Yuwei",
""
],
[
"Zhang",
"Xuhong",
""
],
[
"Cheng",
"Sheng",
""
],
[
"Wang",
"Xun",
""
],
[
"Yin",
"Jianwei",
""
],
[
"Du",
"Tianyu",
""
]
] |
TITLE: Tool-Planner: Task Planning with Clusters across Multiple Tools
ABSTRACT: Large language models (LLMs) have demonstrated exceptional reasoning
capabilities, enabling them to solve various complex problems. Recently, this
ability has been applied to the paradigm of tool learning. Tool learning
involves providing examples of tool usage and their corresponding functions,
allowing LLMs to formulate plans and demonstrate the process of invoking and
executing each tool. LLMs can address tasks that they cannot complete
independently, thereby enhancing their potential across different tasks.
However, this approach faces two key challenges. First, redundant error
correction leads to unstable planning and long execution time. Additionally,
designing a correct plan among multiple tools is also a challenge in tool
learning. To address these issues, we propose Tool-Planner, a task-processing
framework based on toolkits. Tool-Planner groups tools whose API functions
share the same functionality into a toolkit and allows LLMs to implement
planning across the various toolkits. When a tool error occurs, the language
model can reselect and adjust tools based on the toolkit. Experiments show that
our approach demonstrates a high pass and win rate across different datasets
and optimizes the planning scheme for tool learning in models such as GPT-4 and
Claude 3, showcasing the potential of our method. Our code is public at
https://github.com/OceannTwT/Tool-Planner
|
no_new_dataset
| 0.941761 |
2406.04138
|
Drew Linsley
|
Drew Linsley, Peisen Zhou, Alekh Karkada Ashok, Akash Nagaraj, Gaurav
Gaonkar, Francis E Lewis, Zygmunt Pizlo, Thomas Serre
|
The 3D-PC: a benchmark for visual perspective taking in humans and
machines
|
Published in ICLR 2025
| null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Visual perspective taking (VPT) is the ability to perceive and reason about
the perspectives of others. It is an essential feature of human intelligence,
which develops over the first decade of life and requires an ability to process
the 3D structure of visual scenes. A growing number of reports have indicated
that deep neural networks (DNNs) become capable of analyzing 3D scenes after
training on large image datasets. We investigated if this emergent ability for
3D analysis in DNNs is sufficient for VPT with the 3D perception challenge
(3D-PC): a novel benchmark for 3D perception in humans and DNNs. The 3D-PC
comprises three 3D-analysis tasks posed within natural scene images: 1. a
simple test of object depth order, 2. a basic VPT task (VPT-basic), and 3.
another version of VPT (VPT-Strategy) designed to limit the effectiveness of
"shortcut" visual strategies. We tested human participants (N=33) and linearly
probed or text-prompted over 300 DNNs on the challenge and found that nearly
all of the DNNs approached or exceeded human accuracy in analyzing object depth
order. Surprisingly, DNN accuracy on this task correlated with their object
recognition performance. In contrast, there was an extraordinary gap between
DNNs and humans on VPT-basic. Humans were nearly perfect, whereas most DNNs
were near chance. Fine-tuning DNNs on VPT-basic brought them close to human
performance, but they, unlike humans, dropped back to chance when tested on
VPT-Strategy. Our challenge demonstrates that the training routines and
architectures of today's DNNs are well-suited for learning basic 3D properties
of scenes and objects but are ill-suited for reasoning about these properties
as humans do. We release our 3D-PC datasets and code to help bridge this gap in
3D perception between humans and machines.
|
[
{
"version": "v1",
"created": "Thu, 6 Jun 2024 14:59:39 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 14:49:44 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Linsley",
"Drew",
""
],
[
"Zhou",
"Peisen",
""
],
[
"Ashok",
"Alekh Karkada",
""
],
[
"Nagaraj",
"Akash",
""
],
[
"Gaonkar",
"Gaurav",
""
],
[
"Lewis",
"Francis E",
""
],
[
"Pizlo",
"Zygmunt",
""
],
[
"Serre",
"Thomas",
""
]
] |
TITLE: The 3D-PC: a benchmark for visual perspective taking in humans and
machines
ABSTRACT: Visual perspective taking (VPT) is the ability to perceive and reason about
the perspectives of others. It is an essential feature of human intelligence,
which develops over the first decade of life and requires an ability to process
the 3D structure of visual scenes. A growing number of reports have indicated
that deep neural networks (DNNs) become capable of analyzing 3D scenes after
training on large image datasets. We investigated if this emergent ability for
3D analysis in DNNs is sufficient for VPT with the 3D perception challenge
(3D-PC): a novel benchmark for 3D perception in humans and DNNs. The 3D-PC
comprises three 3D-analysis tasks posed within natural scene images: 1. a
simple test of object depth order, 2. a basic VPT task (VPT-basic), and 3.
another version of VPT (VPT-Strategy) designed to limit the effectiveness of
"shortcut" visual strategies. We tested human participants (N=33) and linearly
probed or text-prompted over 300 DNNs on the challenge and found that nearly
all of the DNNs approached or exceeded human accuracy in analyzing object depth
order. Surprisingly, DNN accuracy on this task correlated with their object
recognition performance. In contrast, there was an extraordinary gap between
DNNs and humans on VPT-basic. Humans were nearly perfect, whereas most DNNs
were near chance. Fine-tuning DNNs on VPT-basic brought them close to human
performance, but they, unlike humans, dropped back to chance when tested on
VPT-Strategy. Our challenge demonstrates that the training routines and
architectures of today's DNNs are well-suited for learning basic 3D properties
of scenes and objects but are ill-suited for reasoning about these properties
as humans do. We release our 3D-PC datasets and code to help bridge this gap in
3D perception between humans and machines.
|
no_new_dataset
| 0.935641 |
2406.06967
|
Kailas Dayanandan
|
Kailas Dayanandan, Nikhil Kumar, Anand Sinha, Brejesh Lall
|
Dual Thinking and Logical Processing -- Are Multi-modal Large Language
Models Closing the Gap with Human Vision ?
| null | null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The dual thinking framework considers fast, intuitive, and slower logical
processing. The perception of dual thinking in vision requires images where
inferences from intuitive and logical processing differ, and the latter is
under-explored in current studies. We introduce a novel adversarial dataset to
provide evidence for the dual thinking framework in human vision, which also
facilitates the study of the qualitative behavior of deep learning models. Our
psychophysical studies show the presence of multiple inferences in rapid
succession, and analysis of errors shows that the early stopping of visual
processing can result in missing relevant information. MLLMs (Multi-modal Large
Language Models) and VLMs (Vision Language Models) have made significant
progress in correcting errors in intuitive processing in human vision and
showed enhanced performance on images requiring logical processing. However,
their improvements in logical processing have not kept pace with their
advancements in intuitive processing. In contrast, segmentation models exhibit
errors similar to those seen in intuitive human processing and lack
understanding of sub-structures, as indicated by errors related to
sub-components in identified instances. As AI (Artificial Intelligence)-based
systems find increasing applications in safety-critical domains like autonomous
driving, the integration of logical processing capabilities becomes essential.
This not only enhances performance but also addresses the limitations of
scaling-based approaches while ensuring robustness and reliability in
real-world environments.
|
[
{
"version": "v1",
"created": "Tue, 11 Jun 2024 05:50:34 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jan 2025 14:37:55 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 17:28:36 GMT"
}
] | 2025-03-03T00:00:00 |
[
[
"Dayanandan",
"Kailas",
""
],
[
"Kumar",
"Nikhil",
""
],
[
"Sinha",
"Anand",
""
],
[
"Lall",
"Brejesh",
""
]
] |
TITLE: Dual Thinking and Logical Processing -- Are Multi-modal Large Language
Models Closing the Gap with Human Vision?
ABSTRACT: The dual thinking framework considers fast, intuitive, and slower logical
processing. The perception of dual thinking in vision requires images where
inferences from intuitive and logical processing differ, and the latter is
under-explored in current studies. We introduce a novel adversarial dataset to
provide evidence for the dual thinking framework in human vision, which also
facilitates the study of the qualitative behavior of deep learning models. Our
psychophysical studies show the presence of multiple inferences in rapid
succession, and analysis of errors shows that the early stopping of visual
processing can result in missing relevant information. MLLMs (Multi-modal Large
Language Models) and VLMs (Vision Language Models) have made significant
progress in correcting errors in intuitive processing in human vision and
showed enhanced performance on images requiring logical processing. However,
their improvements in logical processing have not kept pace with their
advancements in intuitive processing. In contrast, segmentation models exhibit
errors similar to those seen in intuitive human processing and lack
understanding of sub-structures, as indicated by errors related to
sub-components in identified instances. As AI (Artificial Intelligence)-based
systems find increasing applications in safety-critical domains like autonomous
driving, the integration of logical processing capabilities becomes essential.
This not only enhances performance but also addresses the limitations of
scaling-based approaches while ensuring robustness and reliability in
real-world environments.
|
new_dataset
| 0.964623 |