<div align="center">
  <a href='http://demo.fitdit.byjiang.com/' style="margin: 0 2px;">
    <img src='https://img.shields.io/badge/Demo-Gradio-gold?style=flat&logo=Gradio&logoColor=red' alt='Demo'>
  </a>
  <a href='https://huggingface.co/BoyuanJiang/FitDiT' style="margin: 0 2px;">
    <img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'>
  </a>
  <a href='https://byjiang.com/FitDiT/' style="margin: 0 2px;">
    <img src='https://img.shields.io/badge/Webpage-Project-silver?style=flat&logo=&logoColor=orange' alt='webpage'>
  </a>
  <a href="https://raw.githubusercontent.com/BoyuanJiang/FitDiT/refs/heads/main/LICENSE" style="margin: 0 2px;">
    <img src='https://img.shields.io/badge/License-CC BY--NC--SA--4.0-lightgreen?style=flat&logo=Lisence' alt='License'>
  </a>
</div>

<p align="center">
Join our <a href="resource/img/QQ_group.jpg" target="_blank">QQ Chat Group</a>
</p>

**FitDiT** is designed for high-fidelity virtual try-on using Diffusion Transformers (DiT).
<div align="center">
  <img src="resource/img/teaser.jpg" width="100%" height="100%"/>
</div>

## Updates
- **`2025/1/16`**: We provide the [ComfyUI version of FitDiT](https://github.com/BoyuanJiang/FitDiT/tree/FitDiT-ComfyUI); you can now use FitDiT in ComfyUI.
- **`2025/1/9`**: We provide a [**Hugging Face Space**](https://huggingface.co/spaces/BoyuanJiang/FitDiT) for FitDiT; thanks to the Hugging Face community GPU grant for providing the GPU resources.
- **`2024/12/20`**: The FitDiT [**model weights**](https://huggingface.co/BoyuanJiang/FitDiT) are available.
- **`2024/12/17`**: Inference code is released.
- **`2024/12/4`**: Our [**Online Demo**](http://demo.fitdit.byjiang.com/) is released.
- **`2024/11/25`**: Our [**Complex Virtual Dressing Dataset (CVDD)**](https://huggingface.co/datasets/BoyuanJiang/CVDD) is released.
- **`2024/11/15`**: Our [**FitDiT paper**](https://arxiv.org/abs/2411.10499) is available.

## Gradio Demo
Our algorithm is divided into two steps: the first generates a mask of the try-on area, and the second performs the try-on within the masked area.
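The two-step flow above can be sketched as follows. Note that `generate_mask` and `try_on` are hypothetical stand-ins for FitDiT's actual model calls, stubbed here with simple array operations so that only the data flow between the two steps is visible:

```python
# Illustrative sketch of the two-step try-on flow. Both functions are
# hypothetical stubs, NOT the real FitDiT model calls.
import numpy as np

def generate_mask(model_img: np.ndarray) -> np.ndarray:
    """Step 1: predict a binary mask of the try-on area (stub: middle band)."""
    h = model_img.shape[0]
    mask = np.zeros(model_img.shape[:2], dtype=np.uint8)
    mask[h // 4: 3 * h // 4, :] = 1
    return mask

def try_on(model_img: np.ndarray, garment_img: np.ndarray,
           mask: np.ndarray) -> np.ndarray:
    """Step 2: fill the masked area from the garment image (stub: direct copy)."""
    result = model_img.copy()
    result[mask == 1] = garment_img[mask == 1]
    return result

model_img = np.full((8, 6, 3), 200, dtype=np.uint8)   # stand-in person image
garment_img = np.full((8, 6, 3), 50, dtype=np.uint8)  # stand-in garment image
mask = generate_mask(model_img)                        # step 1
result = try_on(model_img, garment_img, mask)          # step 2
```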

### Step1: Run Mask
You can generate the try-on mask by clicking **Step1: Run Mask** on the right side of the Gradio demo. If the automatically generated mask does not fully cover the area you want to try on, you can adjust it in either of two ways:

1. Drag the *mask offset top*, *mask offset bottom*, *mask offset left*, or *mask offset right* slider and then click the **Step1: Run Mask** button; this will regenerate the mask.



2. Use the brush or eraser tool to edit the automatically generated mask.



### Step2: Run Try-on
After generating a suitable mask, you can get the try-on result by clicking **Step2: Run Try-on**. In the *Try-on resolution* drop-down box, you can select a suitable processing resolution. In our online demo the default resolution is 1152x1536, which means the input model image and garment image are padded and resized to this resolution before being fed into the model.
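The pad-and-resize preprocessing described above can be sketched with Pillow. The letterbox strategy (centering the image and padding the shorter sides) and the white fill color are assumptions; the function name is illustrative:

```python
# Hypothetical sketch of padding an image to the 1152x1536 target
# resolution's 3:4 aspect ratio before resizing. Details are assumptions.
from PIL import Image

def pad_and_resize(img: Image.Image, target_w: int = 1152,
                   target_h: int = 1536, fill=(255, 255, 255)) -> Image.Image:
    """Pad an image to the target aspect ratio, then resize it."""
    w, h = img.size
    target_ratio = target_w / target_h
    if w / h > target_ratio:          # too wide: pad top and bottom
        new_h = int(w / target_ratio)
        canvas = Image.new("RGB", (w, new_h), fill)
        canvas.paste(img, (0, (new_h - h) // 2))
    else:                             # too tall: pad left and right
        new_w = int(h * target_ratio)
        canvas = Image.new("RGB", (new_w, h), fill)
        canvas.paste(img, ((new_w - w) // 2, 0))
    return canvas.resize((target_w, target_h), Image.LANCZOS)
```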

## Local Demo
First apply for access to the FitDiT [model weights](https://huggingface.co/BoyuanJiang/FitDiT), then clone the model to *local_model_dir*.

### Environment
We tested our model with the following environment:
```
torch==2.4.0
torchvision==0.19.0
diffusers==0.31.0
transformers==4.39.3
gradio==5.8.0
onnxruntime-gpu==1.20.1
```

### Run gradio locally
```
# Run the model in bf16 without any offload: fastest inference, highest memory usage
python gradio_sd3.py --model_path local_model_dir

# Run the model in fp16
python gradio_sd3.py --model_path local_model_dir --fp16

# Run the model in fp16 with CPU offload: moderate inference speed, moderate memory usage
python gradio_sd3.py --model_path local_model_dir --fp16 --offload

# Run the model in fp16 with aggressive CPU offload: slowest inference, lowest memory usage
python gradio_sd3.py --model_path local_model_dir --fp16 --aggressive_offload
```

## Third-Party Creations
We have found several third-party applications and tutorials based on FitDiT. Many thanks for their contributions to the community!
If you have related work that you would like to see listed here, please submit it in an [issue](https://github.com/BoyuanJiang/FitDiT/issues/new).
These projects have not been verified by us. If you have any questions, please seek help from the original project authors.

### Tutorial
- A tutorial on using the ComfyUI version of FitDiT, from `T8star-Aix`, on [YouTube](https://www.youtube.com/watch?v=qBQtYYa-bvs) or [Bilibili](https://www.bilibili.com/video/BV1U4wpe6EkD/)

### Applications
- A local one-click integration package of FitDiT, which can be found at the [deepface forum](https://deepface.cc/thread-517-1-1.html)

## Star History
[](https://star-history.com/#BoyuanJiang/FitDiT&Date)

## Contact
This model can be used **for non-commercial purposes only**. For commercial use, please visit [Tencent Cloud](https://cloud.tencent.com/document/product/1668/108532) for support.

## Citation
If you find our work helpful for your research, please consider citing our work.
```
@misc{jiang2024fitditadvancingauthenticgarment,
      title={FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on},
      author={Boyuan Jiang and Xiaobin Hu and Donghao Luo and Qingdong He and Chengming Xu and Jinlong Peng and Jiangning Zhang and Chengjie Wang and Yunsheng Wu and Yanwei Fu},
      year={2024},
      eprint={2411.10499},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.10499},
}
```

---
title: FitTon
emoji: π
colorFrom: gray
colorTo: yellow
sdk: gradio
sdk_version: 5.33.0
app_file: gradio_sd3.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference