---
license: mit
pipeline_tag: image-to-image
---
<h1 align="center">[ACMMM 2025] DenseSR: Image Shadow Removal as Dense Prediction</h1>
<p align="center">Yu-Fan Lin<sup>1</sup>, Chia-ming Lee<sup>1</sup>, Chih-Chung Hsu<sup>2</sup></p>
<p align="center"><sup>1</sup>National Cheng Kung University <sup>2</sup>National Yang Ming Chiao Tung University</p>
<div align="center">
[arXiv](https://www.arxiv.org/abs/2507.16472)
</div>
<details>
<summary>Abstract</summary>
Shadows are a common factor degrading image quality. Single-image shadow removal (SR), particularly under challenging indirect illumination, is hampered by non-uniform content degradation and inherent ambiguity. Consequently, traditional methods often fail to simultaneously recover intra-shadow details and maintain sharp boundaries, resulting in inconsistent restoration and blurring that negatively affect both downstream applications and the overall viewing experience. To overcome these limitations, we propose DenseSR, which approaches the problem from a dense prediction perspective to emphasize restoration quality. This framework uniquely synergizes two key strategies: (1) deep scene understanding guided by geometric-semantic priors to resolve ambiguity and implicitly localize shadows, and (2) high-fidelity restoration via a novel Dense Fusion Block (DFB) in the decoder. The DFB employs adaptive component processing, using an Adaptive Content Smoothing Module (ACSM) for consistent appearance and a Texture-Boundary Recuperation Module (TBRM) for fine textures and sharp boundaries, thereby directly tackling the inconsistent restoration and blurring issues. These purposefully processed components are effectively fused, yielding an optimized feature representation preserving both consistency and fidelity. Extensive experimental results demonstrate the merits of our approach over existing methods.
</details>
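At a high level, the Dense Fusion Block fuses a smoothed-content component with a recovered texture-boundary component. The sketch below is only a conceptual illustration of that two-branch, gated-fusion idea: the module names mirror the paper (ACSM, TBRM), but every layer choice and shape is a hypothetical stand-in, not the authors' implementation.
```python
import torch
import torch.nn as nn

class DenseFusionBlockSketch(nn.Module):
    """Conceptual two-branch fusion: a smoothing path for consistent
    appearance (ACSM stand-in) and a detail path for textures and
    boundaries (TBRM stand-in), blended by learned per-pixel gates."""

    def __init__(self, channels: int):
        super().__init__()
        # stand-in for the Adaptive Content Smoothing Module (ACSM)
        self.smooth = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=5, padding=2, groups=channels),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.GELU(),
        )
        # stand-in for the Texture-Boundary Recuperation Module (TBRM)
        self.detail = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # predicts per-pixel fusion weights from both branches
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s, d = self.smooth(x), self.detail(x)
        w = torch.sigmoid(self.gate(torch.cat([s, d], dim=1)))
        return w * s + (1.0 - w) * d  # consistency where w is high, fidelity elsewhere

feats = torch.randn(1, 64, 32, 32)
print(DenseFusionBlockSketch(64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```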
## ✏️ Citation
If you find this project useful, please consider citing us and giving us a star.
```bibtex
@misc{lin2025densesrimageshadowremoval,
  title={DenseSR: Image Shadow Removal as Dense Prediction},
  author={Yu-Fan Lin and Chia-Ming Lee and Chih-Chung Hsu},
  year={2025},
  eprint={2507.16472},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.16472},
}
```
## 🌱 Environments
```bash
conda create -n ntire_shadow python=3.9 -y
conda activate ntire_shadow
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```
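Optionally, verify that the CUDA 11.8 build of PyTorch is active before moving on:
```python
# Quick sanity check for the environment created above
import torch

print(torch.__version__)          # expected: 2.0.1+cu118
print(torch.cuda.is_available())  # should be True on a machine with a working CUDA driver
```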
## 📁 Folder Structure
You can download the WSRD dataset from [here](https://github.com/fvasluianu97/WSRD-DNSR).
```bash
test_dir
├── origin          <- Put the shadow-affected images in this folder
│   ├── 0000.png
│   ├── 0001.png
│   └── ...
├── depth
└── normal
output_dir
├── 0000.png
├── 0001.png
└── ...
```
## ✨ How to test?
1. Clone [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2.git)
```bash
git clone https://github.com/DepthAnything/Depth-Anything-V2.git
```
2. Download the [pretrained model of Depth Anything V2](https://huggingface.co/depth-anything/Depth-Anything-V2-Large/resolve/main/depth_anything_v2_vitl.pth?download=true)
3. Run ```get_depth_normap.py``` to create the depth and normal maps (a rough sketch of this step follows the command).
```bash
python get_depth_normap.py
```
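For intuition, here is a minimal, hypothetical sketch of what this step produces: depth is predicted with Depth Anything V2, and a surface-normal map is then derived from the depth gradients. The file paths and the ViT-L configuration are assumptions; ```get_depth_normap.py``` in this repository is the authoritative version.
```python
# Hypothetical sketch of the depth/normal step (see get_depth_normap.py for the real code)
import cv2
import numpy as np
import torch
from depth_anything_v2.dpt import DepthAnythingV2

# ViT-L configuration matching the downloaded checkpoint (assumed)
model = DepthAnythingV2(encoder='vitl', features=256, out_channels=[256, 512, 1024, 1024])
model.load_state_dict(torch.load('depth_anything_v2_vitl.pth', map_location='cpu'))
model.eval()

img = cv2.imread('test_dir/origin/0000.png')
depth = model.infer_image(img)  # (H, W) float32 relative depth

# Surface normals from depth gradients: n ∝ (-dz/dx, -dz/dy, 1), normalized per pixel
dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
normal = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
normal /= np.linalg.norm(normal, axis=2, keepdims=True)

np.save('test_dir/depth/0000.npy', depth)
np.save('test_dir/normal/0000.npy', normal)
```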
After this step, the folder structure will be:
```bash
test_dir
├── origin
│   ├── 0000.png
│   ├── 0001.png
│   └── ...
├── depth
│   ├── 0000.npy
│   ├── 0001.npy
│   └── ...
└── normal
    ├── 0000.npy
    ├── 0001.npy
    └── ...
output_dir
├── 0000.png
├── 0001.png
└── ...
```
4. Clone [DINOv2](https://github.com/facebookresearch/dinov2.git)
```bash
git clone https://github.com/facebookresearch/dinov2.git
```
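DINOv2 presumably supplies the semantic half of the geometric-semantic priors mentioned in the abstract; how it is wired into inference is defined by ```run_test.sh``` and this repository's code. The snippet below only illustrates, under that assumption, how DINOv2 patch features can be extracted through the official ```torch.hub``` entry point, with an assumed input size:
```python
# Illustration only: extracting DINOv2 patch features as a semantic prior
import torch

backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
backbone.eval()

img = torch.randn(1, 3, 518, 518)   # H and W must be multiples of the 14-px patch size
with torch.no_grad():
    out = backbone.forward_features(img)
tokens = out['x_norm_patchtokens']  # (1, 37*37, 1024) patch-level semantic tokens
print(tokens.shape)
```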
5. Download the [shadow removal weights](https://drive.google.com/file/d/1of3KLSVhaXlsX3jasuwdPKBwb4O4hGZD/view?usp=drive_link)
```bash
gdown 1of3KLSVhaXlsX3jasuwdPKBwb4O4hGZD
```
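If ```gdown``` is not already installed, add it with ```pip install gdown```.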
6. Run ```run_test.sh``` to get inference results.
```bash
bash run_test.sh
```
## 📰 News
- ✔ 2025/08/11 Release WSRD pretrained model
- ✔ 2025/08/11 Release inference code
- ✔ 2025/07/05 Paper accepted by ACMMM'25
## 🛠️ TODO
- ◻ Release training code
- ◻ Release other pretrained models
## 📜 License
This code repository is released under the [MIT License](https://github.com/VanLinLin/NTIRE25_Shadow_Removal?tab=MIT-1-ov-file#readme).