SahilCarterr committed on
Commit 4d7910a (verified)
Parent(s): f056744

Update README.md

Files changed (1):
  1. README.md (+13, −68)
README.md CHANGED
@@ -1,68 +1,13 @@
- # ReFlex: Text-Guided Editing of Real Images in Rectified Flow via Mid-Step Feature Extraction and Attention Adaptation
-
- ### [ICCV 2025] Official PyTorch implementation of the paper: "ReFlex: Text-Guided Editing of Real Images in Rectified Flow via Mid-Step Feature Extraction and Attention Adaptation"
- by Jimyeon Kim, Jungwon Park, Yeji Song, Nojun Kwak, Wonjong Rhee†.
-
- Seoul National University
-
- [Arxiv](https://arxiv.org/abs/2507.01496)
-
- [Project Page](https://wlaud1001.github.io/ReFlex/)
-
-
- ![main](./images/main_figure.png)
-
- ## Setup
- ```
- git clone https://github.com/wlaud1001/ReFlex.git
- cd ReFlex
-
- conda create -n reflex python=3.10
- conda activate reflex
- pip install -r requirements.txt
- ```
-
- ## Run
-
- ### Run example
- ```
- python img_edit.py \
-     --gpu {gpu} \
-     --seed {seed} \
-     --img_path {source_img_path} \
-     --source_prompt {source_prompt} \
-     --target_prompt {target_prompt} \
-     --results_dir {results_dir} \
-     --feature_steps {feature_steps} \
-     --attn_topk {attn_topk}
- ```
- ### Arguments
- - --gpu: Index of the GPU to use.
- - --seed: Random seed.
- - --img_path: Path to the input real image to be edited.
- - --mask_path (optional): Path to a ground-truth mask for local editing.
-   - If provided, this mask is used directly.
-   - If omitted, the editing mask is automatically generated from attention maps.
- - --source_prompt (optional): Text prompt describing the content of the input image.
-   - If provided, mask generation and latent blending will be applied.
-   - If omitted, editing proceeds without latent blending.
- - --target_prompt: Text prompt describing the desired edited image.
- - --blend_word (optional): Word in --source_prompt to guide mask generation via its I2T-CA map.
-   - If omitted, the blend word is automatically inferred by comparing source_prompt and target_prompt.
- - --results_dir: Directory to save the output images.
-
- ### Scripts
- We also provide several example scripts in the ./scripts directory for some use cases and reproducible experiments.
- #### Script Categories
- - scripts/wo_ca/: Cases where the source prompt is not given. I2T-CA adaptation and latent blending are not applied.
- - scripts/w_ca/: Cases where the source prompt is given, and the editing mask for latent blending is automatically generated from the attention map.
- - scripts/w_mask/: Cases where a ground-truth mask for local editing is provided and directly used for latent blending.
-
- You can run a script as follows:
- ```
- ./scripts/wo_ca/run_bear.sh
- ./scripts/w_ca/run_bird.sh
- ./scripts/w_mask/run_cat_hat.sh
- ```

+ title: ReFlex
+ emoji: 📚
+ colorFrom: red
+ colorTo: yellow
+ sdk: gradio
+ sdk_version: 5.38.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ short_description: Text-Guided Editing of Real Images
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
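
The lines this commit adds are the Hugging Face Spaces configuration front matter: simple `key: value` pairs that tell the Hub which SDK (`gradio`), version, and entry file (`app.py`) to launch. As a rough illustration only (not the Hub's actual parser), a minimal stdlib-Python sketch of reading such flat front matter — the `FRONT_MATTER` string is copied from the diff, and the parser is a simplification that handles only one-line `key: value` entries:

```python
# Flat Spaces-style front matter, copied from the lines added in this commit.
FRONT_MATTER = """\
title: ReFlex
emoji: 📚
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: 5.38.0
app_file: app.py
pinned: false
license: mit
short_description: Text-Guided Editing of Real Images
"""

def parse_front_matter(text: str) -> dict:
    """Split each 'key: value' line; a simplification of full YAML parsing."""
    config = {}
    for line in text.splitlines():
        key, _, value = line.partition(": ")
        if key:
            config[key] = value
    return config

config = parse_front_matter(FRONT_MATTER)
print(config["sdk"], config["app_file"])  # gradio app.py
```

With this config, the Space runtime installs Gradio 5.38.0 and runs `app.py` as the app entry point; the full set of supported keys is in the configuration reference linked above.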