Commit da48dbe
Parent(s): init

This view is limited to 50 files because it contains too many changes.
- .gitignore +18 -0
- LICENSE +53 -0
- README.md +165 -0
- apps/IFGeo.py +186 -0
- apps/Normal.py +220 -0
- apps/avatarizer.py +47 -0
- apps/infer.py +528 -0
- apps/multi_render.py +25 -0
- configs/econ.yaml +35 -0
- docs/installation.md +77 -0
- environment.yaml +20 -0
- examples/304e9c4798a8c3967de7c74c24ef2e38.jpg +0 -0
- examples/cloth/0a64d9c7ac4a86aa0c29195bc6f55246.jpg +0 -0
- examples/cloth/1f7c9214b80a02071edfadd5be908d8e.jpg +0 -0
- examples/cloth/2095f20b1390e14d9312913c61c4b621.png +0 -0
- examples/cloth/267cffcff3809e0df9eff44c443f07b0.jpg +0 -0
- examples/cloth/26d2e846349647ff04c536816e0e8ca1.jpg +0 -0
- examples/cloth/351f52b9d1ddebb70241a092af34c2f3.jpg +0 -0
- examples/cloth/55cc162cc4fcda1df2236847a52db93a.jpg +0 -0
- examples/cloth/6465c18fc13b862341c33922c79ab490.jpg +0 -0
- examples/cloth/663dcd6db19490de0b790da430bd5681.jpg +0 -0
- examples/cloth/6c0a5def2287d9bfa4a42ee0ce9cb7f9.jpg +0 -0
- examples/cloth/7c6bb9410ea8debe3aca92e299fe2333.jpg +0 -0
- examples/cloth/930c782d63e180208e0a55754d607f34.jpg +0 -0
- examples/cloth/baf3c945fa6b4349c59953a97740e70f.jpg +0 -0
- examples/cloth/c7ca6894119f235caba568a7e01684af.jpg +0 -0
- examples/cloth/da135ecd046333c0dc437a383325c90b.jpg +0 -0
- examples/cloth/df90cff51a84dd602024ac3aa03ad182.jpg +0 -0
- examples/cloth/e80b36c782ce869caef9abb55b37d464.jpg +0 -0
- examples/multi/1.png +0 -0
- examples/multi/2.jpg +0 -0
- examples/multi/3.jpg +0 -0
- examples/multi/4.jpg +0 -0
- examples/pose/02986d0998ce01aa0aa67a99fbd1e09a.jpg +0 -0
- examples/pose/105545f93dcaecd13f2e3f01db92331c.jpg +0 -0
- examples/pose/1af2662b5026ef82ed0e8b08b6698017.png +0 -0
- examples/pose/3745ee0a7f31fafc3dfd3d8bf246f3b8.jpg +0 -0
- examples/pose/4ac9ca7a3e34a365c073317f98525add.jpg +0 -0
- examples/pose/4d1ed606c3c0a346c8a75507fc81abff.jpg +0 -0
- examples/pose/5617dc56d25918217b81f27c98011ea5.jpg +0 -0
- examples/pose/5ef3bc939cf82dbd0c541eba41b517c2.jpg +0 -0
- examples/pose/68757076df6c98e9d6ba6ed00870daef.jpg +0 -0
- examples/pose/6f0029a9592a11530267b3a51413ae74.jpg +0 -0
- examples/pose/7530ae51e811b1878fae23ea243a3a30.jpg +0 -0
- examples/pose/780047b55ee80b0dc2468ad16cab2278.jpg +0 -0
- examples/pose/ab2192beaefb58e872ce55099cbed8fe.jpg +0 -0
- examples/pose/d5241b4de0cebd2722b05d855a5a9ca6.jpg +0 -0
- examples/pose/d6fcd37df9983973af08af3f9267cd1e.jpg +0 -0
- examples/pose/d7e876bc5f9e8277d58e30bd83c7452f.jpg +0 -0
- fetch_data.sh +60 -0
.gitignore
ADDED
@@ -0,0 +1,18 @@
__pycache__
debug/
log/
results/*
results
.vscode
!.gitignore
.idea
cluster/
cluster
*.zip
data/
data
wandb
build
dist
*egg-info
*.so
LICENSE
ADDED
@@ -0,0 +1,53 @@
License

Software Copyright License for non-commercial scientific research purposes
Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the ECON model, data and software, (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License

Ownership / Licensees
The Software and the associated materials has been developed at the Max Planck Institute for Intelligent Systems (hereinafter "MPI"). Any copyright or patent right is owned by and proprietary material of the Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (hereinafter “MPG”; MPI and MPG hereinafter collectively “Max-Planck”) hereinafter the “Licensor”.

License Grant
Licensor grants you (Licensee) personally a single-user, non-exclusive, non-transferable, free of charge right:

• To install the Model & Software on computers owned, leased or otherwise controlled by you and/or your organization;
• To use the Model & Software for the sole purpose of performing peaceful non-commercial scientific research, non-commercial education, or non-commercial artistic projects;
• To modify, adapt, translate or create derivative works based upon the Model & Software.

Any other use, in particular any use for commercial, pornographic, military, or surveillance, purposes is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artifacts for commercial purposes. The Data & Software may not be used to create fake, libelous, misleading, or defamatory content of any kind excluding analyses in peer-reviewed scientific research. The Data & Software may not be reproduced, modified and/or made available in any form to any third party without Max-Planck’s prior written permission.

The Data & Software may not be used for pornographic purposes or to generate pornographic material whether commercial or not. This license also prohibits the use of the Software to train methods/algorithms/neural networks/etc. for commercial, pornographic, military, surveillance, or defamatory use of any kind. By downloading the Data & Software, you agree not to reverse engineer it.

No Distribution
The Data & Software and the license herein granted shall not be copied, shared, distributed, re-sold, offered for re-sale, transferred or sub-licensed in whole or in part except that you may make one copy for archive purposes only.

Disclaimer of Representations and Warranties
You expressly acknowledge and agree that the Data & Software results from basic research, is provided “AS IS”, may contain errors, and that any use of the Data & Software is at your sole risk. LICENSOR MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE DATA & SOFTWARE, NEITHER EXPRESS NOR IMPLIED, AND THE ABSENCE OF ANY LEGAL OR ACTUAL DEFECTS, WHETHER DISCOVERABLE OR NOT. Specifically, and not to limit the foregoing, licensor makes no representations or warranties (i) regarding the merchantability or fitness for a particular purpose of the Data & Software, (ii) that the use of the Data & Software will not infringe any patents, copyrights or other intellectual property rights of a third party, and (iii) that the use of the Data & Software will not cause any damage of any kind to you or a third party.

Limitation of Liability
Because this Data & Software License Agreement qualifies as a donation, according to Section 521 of the German Civil Code (Bürgerliches Gesetzbuch – BGB) Licensor as a donor is liable for intent and gross negligence only. If the Licensor fraudulently conceals a legal or material defect, they are obliged to compensate the Licensee for the resulting damage.
Licensor shall be liable for loss of data only up to the amount of typical recovery costs which would have arisen had proper and regular data backup measures been taken. For the avoidance of doubt Licensor shall be liable in accordance with the German Product Liability Act in the event of product liability. The foregoing applies also to Licensor’s legal representatives or assistants in performance. Any further liability shall be excluded.
Patent claims generated through the usage of the Data & Software cannot be directed towards the copyright holders.
The Data & Software is provided in the state of development the licensor defines. If modified or extended by Licensee, the Licensor makes no claims about the fitness of the Data & Software and is not responsible for any problems such modifications cause.

No Maintenance Services
You understand and agree that Licensor is under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Data & Software. Licensor nevertheless reserves the right to update, modify, or discontinue the Data & Software at any time.

Defects of the Data & Software must be notified in writing to the Licensor with a comprehensible description of the error symptoms. The notification of the defect should enable the reproduction of the error. The Licensee is encouraged to communicate any use, results, modification or publication.

Publications using the Model & Software
You acknowledge that the Data & Software is a valuable scientific resource and agree to appropriately reference the following paper in any publication making use of the Data & Software.

Citation:

@inproceedings{xiu2022econ,
  title={{ECON}: {E}xplicit {C}lothed humans {O}btained from {N}ormals},
  author={Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
  booktitle=arXiv,
  month = Dec,
  year={2022}
}

Commercial licensing opportunities
For commercial uses of the Model & Software, please send email to [email protected]

This Agreement shall be governed by the laws of the Federal Republic of Germany except for the UN Sales Convention.
README.md
ADDED
@@ -0,0 +1,165 @@
<!-- PROJECT LOGO -->

<p align="center">

  <h1 align="center">ECON: Explicit Clothed humans Obtained from Normals</h1>
  <p align="center">
    <a href="https://ps.is.tuebingen.mpg.de/person/yxiu"><strong>Yuliang Xiu</strong></a>
    ·
    <a href="https://ps.is.tuebingen.mpg.de/person/jyang"><strong>Jinlong Yang</strong></a>
    ·
    <a href="https://hoshino042.github.io/homepage/"><strong>Xu Cao</strong></a>
    ·
    <a href="https://ps.is.mpg.de/~dtzionas"><strong>Dimitrios Tzionas</strong></a>
    ·
    <a href="https://ps.is.tuebingen.mpg.de/person/black"><strong>Michael J. Black</strong></a>
  </p>
  <h2 align="center">arXiv 2022</h2>
  <div align="center">
    <img src="./assets/teaser.gif" alt="Logo" width="100%">
  </div>

  <p align="center">
    <br>
    <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
    <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
    <br></br>
    <a href=''>
      <img src='https://img.shields.io/badge/Paper-PDF-green?style=for-the-badge&logo=arXiv&logoColor=green' alt='Paper PDF'>
    </a>
    <a href="https://discord.gg/Vqa7KBGRyk"><img src="https://img.shields.io/discord/940240966844035082?color=7289DA&labelColor=4a64bd&logo=discord&logoColor=white&style=for-the-badge"></a>
    <a href="https://youtu.be/j5hw4tsWpoY"><img alt="youtube views" title="Subscribe to my YouTube channel" src="https://img.shields.io/youtube/views/j5hw4tsWpoY?logo=youtube&labelColor=ce4630&style=for-the-badge"/></a>
  </p>
</p>

<br/>

ECON is designed for **"Human digitization from a color image"**, which combines the best properties of implicit and explicit representations, to infer high-fidelity 3D clothed humans from in-the-wild images, even with **loose clothing** or in **challenging poses**. ECON also supports batch reconstruction from **multi-person** photos.
<br/>
<br/>

## News :triangular_flag_on_post:
- [2022/03/05] <a href="">arXiv</a> and <a href="#demo">demo</a> are available.

## TODO
- [ ] Blender add-on for FBX export
- [ ] Full RGB texture generation

<br>

<!-- TABLE OF CONTENTS -->
<details open="open" style='padding: 10px; border-radius:5px 30px 30px 5px; border-style: solid; border-width: 1px;'>
  <summary>Table of Contents</summary>
  <ol>
    <li>
      <a href="#instructions">Instructions</a>
    </li>
    <li>
      <a href="#demo">Demo</a>
    </li>
    <li>
      <a href="#tricks">Tricks</a>
    </li>
    <li>
      <a href="#citation">Citation</a>
    </li>
  </ol>
</details>

<br/>

## Instructions

- See [docs/installation.md](docs/installation.md) to install all the required packages and setup the models


## Demo

```bash
# For image-based reconstruction
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results

# For video rendering
python -m apps.multi_render -n {filename}
```

## Tricks
### Some adjustable parameters in *configs/econ.yaml*
- `use_ifnet`
  - True: use IF-Nets+ for mesh completion ( $\text{ECON}_\text{IF}$ - Better quality)
  - False: use SMPL-X for mesh completion ( $\text{ECON}_\text{EX}$ - Faster speed)
- `use_smpl`
  - [ ]: don't use either hands or face parts from SMPL-X
  - ["hand"]: only use the **visible** hands from SMPL-X
  - ["hand", "face"]: use both **visible** hands and face from SMPL-X
- `thickness` (default 2cm)
  - could be increased accordingly in case **xx_full.obj** looks flat
- `hps_type`
  - "pixie": more accurate for face and hands
  - "pymafx": more robust for challenging poses

<br/>

## More Qualitative Results

||
| :----------------------: |
|_Challenging Poses_|
||
|_Loose Clothes_|
||
|_ECON Results on [SHHQ Dataset](https://github.com/stylegan-human/StyleGAN-Human)_|
||
|_ECON Results on Multi-Person Image_|


<br/>
<br/>

## Citation

```bibtex
@inproceedings{xiu2022econ,
  title = {{ECON}: {E}xplicit {C}lothed humans {O}btained from {N}ormals},
  author = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = arXiv,
  month = {Dec},
  year = {2022},
}
```
<br/>

## Acknowledgments

We thank [Lea Hering](https://is.mpg.de/person/lhering) and [Radek Daněček](https://is.mpg.de/person/rdanecek) for proofreading, [Yao Feng](https://ps.is.mpg.de/person/yfeng), [Haven Feng](https://is.mpg.de/person/hfeng), and [Weiyang Liu](https://wyliu.com/) for their feedback and discussions, [Tsvetelina Alexiadis](https://ps.is.mpg.de/person/talexiadis) for her help with the AMT perceptual study.

Here are some great resources we benefit from:

- [ICON](https://github.com/YuliangXiu/ICON) for Body Fitting
- [MonoPortDataset](https://github.com/Project-Splinter/MonoPortDataset) for Data Processing
- [rembg](https://github.com/danielgatis/rembg) for Human Segmentation
- [smplx](https://github.com/vchoutas/smplx), [PyMAF-X](https://www.liuyebin.com/pymaf-x/), [PIXIE](https://github.com/YadiraF/PIXIE) for Human Pose & Shape Estimation
- [CAPE](https://github.com/qianlim/CAPE) and [THuman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) for Dataset
- [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differential Rendering

Some images used in the qualitative examples come from [pinterest.com](https://www.pinterest.com/).

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 ([CLIPE Project](https://www.clipe-itn.eu)).

--------------

<br>

## License

This code and model are available for non-commercial scientific research purposes as defined in the [LICENSE](LICENSE) file. By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE).

## Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH.

## Contact

For technical questions, please contact [email protected]

For commercial licensing, please contact [email protected] and [email protected]
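For reference, the `use_ifnet`, `use_smpl`, `thickness`, and `hps_type` switches described in the Tricks section above are read by `apps/infer.py` from the `bni` block of `configs/econ.yaml` (e.g. `cfg.bni.use_ifnet`). That config file is not among the files shown in this 50-file view, so the snippet below is only an illustrative sketch of how those keys might be set, not the committed config:

```yaml
# Illustrative sketch only: configs/econ.yaml is not shown in this diff view.
# Key names follow how apps/infer.py reads them (cfg.bni.*); values are examples.
bni:
  use_ifnet: true              # true: IF-Nets+ completion (ECON_IF); false: SMPL-X completion (ECON_EX)
  use_smpl: ["hand", "face"]   # visible SMPL-X parts to keep; [] keeps neither hands nor face
  thickness: 0.02              # d-BiNI surface thickness (README default: 2 cm)
  hps_type: "pixie"            # "pixie" (better face/hands) or "pymafx" (more robust poses)
```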
apps/IFGeo.py
ADDED
@@ -0,0 +1,186 @@
# -*- coding: utf-8 -*-

# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is
# holder of all proprietary rights on this computer program.
# You can only use this computer program if you have closed
# a license agreement with MPG or you get the right to use the computer
# program from someone who is authorized to grant you that right.
# Any use of the computer program without a valid license is prohibited and
# liable to prosecution.
#
# Copyright©2019 Max-Planck-Gesellschaft zur Förderung
# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute
# for Intelligent Systems. All rights reserved.
#
# Contact: [email protected]

from lib.common.seg3d_lossless import Seg3dLossless
from lib.common.train_util import *
import torch
import numpy as np
import pytorch_lightning as pl

torch.backends.cudnn.benchmark = True


class IFGeo(pl.LightningModule):

    def __init__(self, cfg):
        super(IFGeo, self).__init__()

        self.cfg = cfg
        self.batch_size = self.cfg.batch_size
        self.lr_G = self.cfg.lr_G

        self.use_sdf = cfg.sdf
        self.mcube_res = cfg.mcube_res
        self.clean_mesh_flag = cfg.clean_mesh
        self.overfit = cfg.overfit

        if cfg.dataset.prior_type == "SMPL":
            from lib.net.IFGeoNet import IFGeoNet
            self.netG = IFGeoNet(cfg)
        else:
            from lib.net.IFGeoNet_nobody import IFGeoNet
            self.netG = IFGeoNet(cfg)

        self.resolutions = (np.logspace(
            start=5,
            stop=np.log2(self.mcube_res),
            base=2,
            num=int(np.log2(self.mcube_res) - 4),
            endpoint=True,
        ) + 1.0)

        self.resolutions = self.resolutions.astype(np.int16).tolist()

        self.reconEngine = Seg3dLossless(
            query_func=query_func_IF,
            b_min=[[-1.0, 1.0, -1.0]],
            b_max=[[1.0, -1.0, 1.0]],
            resolutions=self.resolutions,
            align_corners=True,
            balance_value=0.50,
            visualize=False,
            debug=False,
            use_cuda_impl=False,
            faster=True,
        )

        self.export_dir = None
        self.result_eval = {}

    # Training related
    def configure_optimizers(self):

        # set optimizer
        weight_decay = self.cfg.weight_decay
        momentum = self.cfg.momentum

        optim_params_G = [{"params": self.netG.parameters(), "lr": self.lr_G}]

        if self.cfg.optim == "Adadelta":

            optimizer_G = torch.optim.Adadelta(optim_params_G,
                                               lr=self.lr_G,
                                               weight_decay=weight_decay)

        elif self.cfg.optim == "Adam":

            optimizer_G = torch.optim.Adam(optim_params_G, lr=self.lr_G, weight_decay=weight_decay)

        elif self.cfg.optim == "RMSprop":

            optimizer_G = torch.optim.RMSprop(
                optim_params_G,
                lr=self.lr_G,
                weight_decay=weight_decay,
                momentum=momentum,
            )

        else:
            raise NotImplementedError

        # set scheduler
        scheduler_G = torch.optim.lr_scheduler.MultiStepLR(optimizer_G,
                                                           milestones=self.cfg.schedule,
                                                           gamma=self.cfg.gamma)

        return [optimizer_G], [scheduler_G]

    def training_step(self, batch, batch_idx):

        # cfg log
        if self.cfg.devices == 1:
            if not self.cfg.fast_dev and self.global_step == 0:
                export_cfg(self.logger, osp.join(self.cfg.results_path, self.cfg.name), self.cfg)
                self.logger.experiment.config.update(convert_to_dict(self.cfg))

        self.netG.train()

        preds_G = self.netG(batch)
        error_G = self.netG.compute_loss(preds_G, batch["labels_geo"])

        # metrics processing
        metrics_log = {
            "loss": error_G,
        }

        self.log_dict(metrics_log,
                      prog_bar=True,
                      logger=True,
                      on_step=True,
                      on_epoch=False,
                      sync_dist=True)

        return metrics_log

    def training_epoch_end(self, outputs):

        # metrics processing
        metrics_log = {
            "train/avgloss": batch_mean(outputs, "loss"),
        }

        self.log_dict(metrics_log,
                      prog_bar=False,
                      logger=True,
                      on_step=False,
                      on_epoch=True,
                      rank_zero_only=True)

    def validation_step(self, batch, batch_idx):

        self.netG.eval()
        self.netG.training = False

        preds_G = self.netG(batch)
        error_G = self.netG.compute_loss(preds_G, batch["labels_geo"])

        metrics_log = {
            "val/loss": error_G,
        }

        self.log_dict(metrics_log,
                      prog_bar=True,
                      logger=False,
                      on_step=True,
                      on_epoch=False,
                      sync_dist=True)

        return metrics_log

    def validation_epoch_end(self, outputs):

        # metrics processing
        metrics_log = {
            "val/avgloss": batch_mean(outputs, "val/loss"),
        }

        self.log_dict(metrics_log,
                      prog_bar=False,
                      logger=True,
                      on_step=False,
                      on_epoch=True,
                      rank_zero_only=True)
apps/Normal.py
ADDED
@@ -0,0 +1,220 @@
from lib.net import NormalNet
from lib.common.train_util import convert_to_dict, export_cfg, batch_mean
import torch
import numpy as np
import os.path as osp
from skimage.transform import resize
import pytorch_lightning as pl


class Normal(pl.LightningModule):

    def __init__(self, cfg):
        super(Normal, self).__init__()
        self.cfg = cfg
        self.batch_size = self.cfg.batch_size
        self.lr_F = self.cfg.lr_netF
        self.lr_B = self.cfg.lr_netB
        self.lr_D = self.cfg.lr_netD
        self.overfit = cfg.overfit

        self.F_losses = [item[0] for item in self.cfg.net.front_losses]
        self.B_losses = [item[0] for item in self.cfg.net.back_losses]
        self.ALL_losses = self.F_losses + self.B_losses

        self.automatic_optimization = False

        self.schedulers = []

        self.netG = NormalNet(self.cfg)

        self.in_nml = [item[0] for item in cfg.net.in_nml]

    # Training related
    def configure_optimizers(self):

        optim_params_N_D = None
        optimizer_N_D = None
        scheduler_N_D = None

        # set optimizer
        optim_params_N_F = [{"params": self.netG.netF.parameters(), "lr": self.lr_F}]
        optim_params_N_B = [{"params": self.netG.netB.parameters(), "lr": self.lr_B}]

        optimizer_N_F = torch.optim.Adam(optim_params_N_F, lr=self.lr_F, betas=(0.5, 0.999))
        optimizer_N_B = torch.optim.Adam(optim_params_N_B, lr=self.lr_B, betas=(0.5, 0.999))

        scheduler_N_F = torch.optim.lr_scheduler.MultiStepLR(optimizer_N_F,
                                                             milestones=self.cfg.schedule,
                                                             gamma=self.cfg.gamma)

        scheduler_N_B = torch.optim.lr_scheduler.MultiStepLR(optimizer_N_B,
                                                             milestones=self.cfg.schedule,
                                                             gamma=self.cfg.gamma)
        if 'gan' in self.ALL_losses:
            optim_params_N_D = [{"params": self.netG.netD.parameters(), "lr": self.lr_D}]
            optimizer_N_D = torch.optim.Adam(optim_params_N_D, lr=self.lr_D, betas=(0.5, 0.999))
            scheduler_N_D = torch.optim.lr_scheduler.MultiStepLR(optimizer_N_D,
                                                                 milestones=self.cfg.schedule,
                                                                 gamma=self.cfg.gamma)
            self.schedulers = [scheduler_N_F, scheduler_N_B, scheduler_N_D]
            optims = [optimizer_N_F, optimizer_N_B, optimizer_N_D]

        else:
            self.schedulers = [scheduler_N_F, scheduler_N_B]
            optims = [optimizer_N_F, optimizer_N_B]

        return optims, self.schedulers

    def render_func(self, render_tensor, dataset, idx):

        height = render_tensor["image"].shape[2]
        result_list = []

        for name in render_tensor.keys():
            result_list.append(
                resize(
                    ((render_tensor[name].cpu().numpy()[0] + 1.0) / 2.0).transpose(1, 2, 0),
                    (height, height),
                    anti_aliasing=True,
                ))

        self.logger.log_image(key=f"Normal/{dataset}/{idx if not self.overfit else 1}",
                              images=[(np.concatenate(result_list, axis=1) * 255.0).astype(np.uint8)])

    def training_step(self, batch, batch_idx):

        # cfg log
        if not self.cfg.fast_dev and self.global_step == 0 and self.cfg.devices == 1:
            export_cfg(self.logger, osp.join(self.cfg.results_path, self.cfg.name), self.cfg)
            self.logger.experiment.config.update(convert_to_dict(self.cfg))

        self.netG.train()

        # retrieve the data
        in_tensor = {}
        for name in self.in_nml:
            in_tensor[name] = batch[name]

        FB_tensor = {"normal_F": batch["normal_F"], "normal_B": batch["normal_B"]}

        in_tensor.update(FB_tensor)

        preds_F, preds_B = self.netG(in_tensor)
        error_dict = self.netG.get_norm_error(preds_F, preds_B, FB_tensor)

        if 'gan' in self.ALL_losses:
            (opt_F, opt_B, opt_D) = self.optimizers()
            opt_F.zero_grad()
            self.manual_backward(error_dict["netF"])
            opt_B.zero_grad()
            self.manual_backward(error_dict["netB"], retain_graph=True)
            opt_D.zero_grad()
            self.manual_backward(error_dict["netD"])
            opt_F.step()
            opt_B.step()
            opt_D.step()
        else:
            (opt_F, opt_B) = self.optimizers()
            opt_F.zero_grad()
            self.manual_backward(error_dict["netF"])
            opt_B.zero_grad()
            self.manual_backward(error_dict["netB"])
            opt_F.step()
            opt_B.step()

        if batch_idx > 0 and batch_idx % int(
                self.cfg.freq_show_train) == 0 and self.cfg.devices == 1:

            self.netG.eval()
            with torch.no_grad():
                nmlF, nmlB = self.netG(in_tensor)
                in_tensor.update({"nmlF": nmlF, "nmlB": nmlB})
                self.render_func(in_tensor, "train", self.global_step)

        # metrics processing
        metrics_log = {"loss": error_dict["netF"] + error_dict["netB"]}

        if "gan" in self.ALL_losses:
            metrics_log["loss"] += error_dict["netD"]

        for key in error_dict.keys():
            metrics_log["train/loss_" + key] = error_dict[key].item()

        self.log_dict(metrics_log,
                      prog_bar=True,
                      logger=True,
                      on_step=True,
                      on_epoch=False,
                      sync_dist=True)

        return metrics_log

    def training_epoch_end(self, outputs):

        # metrics processing
        metrics_log = {}
        for key in outputs[0].keys():
            if "/" in key:
                [stage, loss_name] = key.split("/")
            else:
                stage = "train"
                loss_name = key
            metrics_log[f"{stage}/avg-{loss_name}"] = batch_mean(outputs, key)

        self.log_dict(metrics_log,
                      prog_bar=False,
                      logger=True,
                      on_step=False,
                      on_epoch=True,
                      rank_zero_only=True)

    def validation_step(self, batch, batch_idx):

        self.netG.eval()
        self.netG.training = False

        # retrieve the data
        in_tensor = {}
        for name in self.in_nml:
            in_tensor[name] = batch[name]

        FB_tensor = {"normal_F": batch["normal_F"], "normal_B": batch["normal_B"]}
        in_tensor.update(FB_tensor)

        preds_F, preds_B = self.netG(in_tensor)
        error_dict = self.netG.get_norm_error(preds_F, preds_B, FB_tensor)

        if batch_idx % int(self.cfg.freq_show_train) == 0 and self.cfg.devices == 1:

            with torch.no_grad():
                nmlF, nmlB = self.netG(in_tensor)
                in_tensor.update({"nmlF": nmlF, "nmlB": nmlB})
                self.render_func(in_tensor, "val", batch_idx)

        # metrics processing
        metrics_log = {"val/loss": error_dict["netF"] + error_dict["netB"]}

        if "gan" in self.ALL_losses:
            metrics_log["val/loss"] += error_dict["netD"]

        for key in error_dict.keys():
            metrics_log["val/" + key] = error_dict[key].item()

        return metrics_log

    def validation_epoch_end(self, outputs):

        # metrics processing
        metrics_log = {}
        for key in outputs[0].keys():
            [stage, loss_name] = key.split("/")
            metrics_log[f"{stage}/avg-{loss_name}"] = batch_mean(outputs, key)

        self.log_dict(metrics_log,
                      prog_bar=False,
                      logger=True,
                      on_step=False,
                      on_epoch=True,
                      rank_zero_only=True)
apps/avatarizer.py
ADDED
@@ -0,0 +1,47 @@
import numpy as np
import trimesh
import torch
import os.path as osp
import lib.smplx as smplx
from lib.dataset.mesh_util import SMPLX

smplx_container = SMPLX()

smpl_npy = "./results/github/econ/obj/304e9c4798a8c3967de7c74c24ef2e38_smpl_00.npy"
smplx_param = np.load(smpl_npy, allow_pickle=True).item()

for key in smplx_param.keys():
    smplx_param[key] = smplx_param[key].cpu().view(1, -1)
    # print(key, smplx_param[key].device, smplx_param[key].shape)

smpl_model = smplx.create(
    smplx_container.model_dir,
    model_type="smplx",
    gender="neutral",
    age="adult",
    use_face_contour=False,
    use_pca=False,
    num_betas=200,
    num_expression_coeffs=50,
    ext='pkl')

smpl_out = smpl_model(
    body_pose=smplx_param["body_pose"],
    global_orient=smplx_param["global_orient"],
    # transl=smplx_param["transl"],
    betas=smplx_param["betas"],
    expression=smplx_param["expression"],
    jaw_pose=smplx_param["jaw_pose"],
    left_hand_pose=smplx_param["left_hand_pose"],
    right_hand_pose=smplx_param["right_hand_pose"],
    return_verts=True,
    return_joint_transformation=True,
    return_vertex_transformation=True)

smpl_verts = smpl_out.vertices.detach()[0]
inv_mat = torch.inverse(smpl_out.vertex_transformation.detach()[0])
homo_coord = torch.ones_like(smpl_verts)[..., :1]
smpl_verts = inv_mat @ torch.cat([smpl_verts, homo_coord], dim=1).unsqueeze(-1)
smpl_verts = smpl_verts[:, :3, 0].cpu()

trimesh.Trimesh(smpl_verts, smpl_model.faces).show()
apps/infer.py
ADDED
@@ -0,0 +1,528 @@
# -*- coding: utf-8 -*-

# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is
# holder of all proprietary rights on this computer program.
# You can only use this computer program if you have closed
# a license agreement with MPG or you get the right to use the computer
# program from someone who is authorized to grant you that right.
# Any use of the computer program without a valid license is prohibited and
# liable to prosecution.
#
# Copyright©2019 Max-Planck-Gesellschaft zur Förderung
# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute
# for Intelligent Systems. All rights reserved.
#
# Contact: [email protected]

import warnings
import logging

warnings.filterwarnings("ignore")
logging.getLogger("lightning").setLevel(logging.ERROR)
logging.getLogger("trimesh").setLevel(logging.ERROR)

import torch, torchvision
import trimesh
import numpy as np
import argparse
import os

from termcolor import colored
from tqdm.auto import tqdm
from apps.Normal import Normal
from apps.IFGeo import IFGeo
from lib.common.config import cfg
from lib.common.train_util import init_loss, load_normal_networks, load_networks
from lib.common.BNI import BNI
from lib.common.BNI_utils import save_normal_tensor
from lib.dataset.TestDataset import TestDataset
from lib.net.geometry import rot6d_to_rotmat, rotation_matrix_to_angle_axis
from lib.dataset.mesh_util import *
from lib.common.voxelize import VoxelGrid

torch.backends.cudnn.benchmark = True

if __name__ == "__main__":

    # loading cfg file
    parser = argparse.ArgumentParser()

    parser.add_argument("-gpu", "--gpu_device", type=int, default=0)
    parser.add_argument("-loop_smpl", "--loop_smpl", type=int, default=50)
    parser.add_argument("-patience", "--patience", type=int, default=5)
    parser.add_argument("-vis_freq", "--vis_freq", type=int, default=1000)
    parser.add_argument("-multi", action="store_false")
    parser.add_argument("-in_dir", "--in_dir", type=str, default="./examples")
    parser.add_argument("-out_dir", "--out_dir", type=str, default="./results")
    parser.add_argument("-seg_dir", "--seg_dir", type=str, default=None)
    parser.add_argument("-cfg", "--config", type=str, default="./configs/econ.yaml")

    args = parser.parse_args()

    # cfg read and merge
    cfg.merge_from_file(args.config)
    cfg.merge_from_file("./lib/pymafx/configs/pymafx_config.yaml")
    device = torch.device(f"cuda:{args.gpu_device}")

    # setting for testing on in-the-wild images
    cfg_show_list = ["test_gpus", [args.gpu_device], "mcube_res", 512, "clean_mesh", True, "test_mode", True, "batch_size", 1]

    cfg.merge_from_list(cfg_show_list)
    cfg.freeze()

    # load model
    normal_model = Normal(cfg).to(device)
    load_normal_networks(normal_model, cfg.normal_path)
    normal_model.netG.eval()

    # load IFGeo model
    ifnet_model = IFGeo(cfg).to(device)
    load_networks(ifnet_model, mlp_path=cfg.ifnet_path)
    ifnet_model.netG.eval()

    # SMPLX object
    SMPLX_object = SMPLX()

    dataset_param = {
        "image_dir": args.in_dir,
        "seg_dir": args.seg_dir,
        "use_seg": True,    # w/ or w/o segmentation
        "hps_type": cfg.bni.hps_type,    # pymafx/pixie
        "vol_res": cfg.vol_res,
        "single": args.multi,
    }

    dataset = TestDataset(dataset_param, device)

    print(colored(f"Dataset Size: {len(dataset)}", "green"))

    pbar = tqdm(dataset)

    for data in pbar:

        losses = init_loss()

        pbar.set_description(f"{data['name']}")

        # final results rendered as image (PNG)
        # 1. Render the final fitted SMPL (xxx_smpl.png)
        # 2. Render the final reconstructed clothed human (xxx_cloth.png)
        # 3. Blend the original image with predicted cloth normal (xxx_overlap.png)
        # 4. Blend the cropped image with predicted cloth normal (xxx_crop.png)

        os.makedirs(osp.join(args.out_dir, cfg.name, "png"), exist_ok=True)

        # final reconstruction meshes (OBJ)
        # 1. SMPL mesh (xxx_smpl_xx.obj)
        # 2. SMPL params (xxx_smpl.npy)
        # 3. d-BiNI surfaces (xxx_BNI.obj)
        # 4. separate face/hand mesh (xxx_hand/face.obj)
        # 5. full shape inpainted by IF-Nets+, and remeshed shape (xxx_IF_(remesh).obj)
        # 6. sided or occluded parts (xxx_side.obj)
        # 7. final reconstructed clothed human (xxx_full.obj)

        os.makedirs(osp.join(args.out_dir, cfg.name, "obj"), exist_ok=True)

        in_tensor = {
            "smpl_faces": data["smpl_faces"],
            "image": data["img_icon"].to(device),
            "mask": data["img_mask"].to(device)
        }

        # The optimizer and variables
        optimed_pose = data["body_pose"].requires_grad_(True)
        optimed_trans = data["trans"].requires_grad_(True)
        optimed_betas = data["betas"].requires_grad_(True)
        optimed_orient = data["global_orient"].requires_grad_(True)

        optimizer_smpl = torch.optim.Adam([optimed_pose, optimed_trans, optimed_betas, optimed_orient], lr=1e-2, amsgrad=True)
        scheduler_smpl = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer_smpl,
            mode="min",
            factor=0.5,
            verbose=0,
            min_lr=1e-5,
            patience=args.patience,
        )

        # [result_loop_1, result_loop_2, ...]
        per_data_lst = []

        N_body, N_pose = optimed_pose.shape[:2]

        smpl_path = osp.join(args.out_dir, "econ", f"png/{data['name']}_smpl.png")

        if osp.exists(smpl_path):

            smpl_verts_lst = []
            smpl_faces_lst = []
            for idx in range(N_body):

                smpl_obj = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_smpl_{idx:02d}.obj"
                smpl_mesh = trimesh.load(smpl_obj)
                smpl_verts = torch.tensor(smpl_mesh.vertices).to(device).float()
                smpl_faces = torch.tensor(smpl_mesh.faces).to(device).long()
                smpl_verts_lst.append(smpl_verts)
                smpl_faces_lst.append(smpl_faces)

            batch_smpl_verts = torch.stack(smpl_verts_lst)
            batch_smpl_faces = torch.stack(smpl_faces_lst)

            # render optimized mesh as normal [-1,1]
            in_tensor["T_normal_F"], in_tensor["T_normal_B"] = dataset.render_normal(batch_smpl_verts, batch_smpl_faces)

            with torch.no_grad():
                in_tensor["normal_F"], in_tensor["normal_B"] = normal_model.netG(in_tensor)

            in_tensor["smpl_verts"] = batch_smpl_verts * torch.tensor([1., -1., 1.]).to(device)
            in_tensor["smpl_faces"] = batch_smpl_faces[:, :, [0, 2, 1]]
        else:
            # smpl optimization
            loop_smpl = tqdm(range(args.loop_smpl))

            for i in loop_smpl:

                per_loop_lst = []

                optimizer_smpl.zero_grad()

                N_body, N_pose = optimed_pose.shape[:2]

                # 6d_rot to rot_mat
                optimed_orient_mat = rot6d_to_rotmat(optimed_orient.view(-1, 6)).view(N_body, 1, 3, 3)
                optimed_pose_mat = rot6d_to_rotmat(optimed_pose.view(-1, 6)).view(N_body, N_pose, 3, 3)

                smpl_verts, smpl_landmarks, smpl_joints = dataset.smpl_model(
                    shape_params=optimed_betas,
                    expression_params=tensor2variable(data["exp"], device),
                    body_pose=optimed_pose_mat,
                    global_pose=optimed_orient_mat,
                    jaw_pose=tensor2variable(data["jaw_pose"], device),
                    left_hand_pose=tensor2variable(data["left_hand_pose"], device),
                    right_hand_pose=tensor2variable(data["right_hand_pose"], device),
                )

                smpl_verts = (smpl_verts + optimed_trans) * data["scale"]
                smpl_joints = (smpl_joints + optimed_trans) * data["scale"] * torch.tensor([1.0, 1.0, -1.0]).to(device)

                # landmark errors
                smpl_joints_3d = (smpl_joints[:, dataset.smpl_data.smpl_joint_ids_45_pixie, :] + 1.0) * 0.5
                in_tensor["smpl_joint"] = smpl_joints[:, dataset.smpl_data.smpl_joint_ids_24_pixie, :]

                ghum_lmks = data["landmark"][:, SMPLX_object.ghum_smpl_pairs[:, 0], :2].to(device)
                ghum_conf = data["landmark"][:, SMPLX_object.ghum_smpl_pairs[:, 0], -1].to(device)
                smpl_lmks = smpl_joints_3d[:, SMPLX_object.ghum_smpl_pairs[:, 1], :2]

                # render optimized mesh as normal [-1,1]
                in_tensor["T_normal_F"], in_tensor["T_normal_B"] = dataset.render_normal(
                    smpl_verts * torch.tensor([1.0, -1.0, -1.0]).to(device),
                    in_tensor["smpl_faces"],
                )

                T_mask_F, T_mask_B = dataset.render.get_image(type="mask")

                with torch.no_grad():
                    in_tensor["normal_F"], in_tensor["normal_B"] = normal_model.netG(in_tensor)

                diff_F_smpl = torch.abs(in_tensor["T_normal_F"] - in_tensor["normal_F"])
                diff_B_smpl = torch.abs(in_tensor["T_normal_B"] - in_tensor["normal_B"])

                # silhouette loss
                smpl_arr = torch.cat([T_mask_F, T_mask_B], dim=-1)
                gt_arr = in_tensor["mask"].repeat(1, 1, 2)
                diff_S = torch.abs(smpl_arr - gt_arr)
                losses["silhouette"]["value"] = diff_S.mean()

                # large cloth_overlap --> big difference between body and cloth mask
                # for loose clothing, rely more on landmarks instead of silhouette+normal loss
                cloth_overlap = diff_S.sum(dim=[1, 2]) / gt_arr.sum(dim=[1, 2])
                cloth_overlap_flag = cloth_overlap > cfg.cloth_overlap_thres
                losses["joint"]["weight"] = [50.0 if flag else 5.0 for flag in cloth_overlap_flag]

                # small body_overlap --> large occlusion or out-of-frame
                # for highly occluded body, rely only on high-confidence landmarks, no silhouette+normal loss

                # BUG: PyTorch3D silhouette renderer generates dilated mask
                bg_value = in_tensor["T_normal_F"][0, 0, 0, 0]
                smpl_arr_fake = torch.cat(
                    [in_tensor["T_normal_F"][:, 0].ne(bg_value).float(), in_tensor["T_normal_B"][:, 0].ne(bg_value).float()],
                    dim=-1)

                body_overlap = (gt_arr * smpl_arr_fake.gt(0.0)).sum(dim=[1, 2]) / smpl_arr_fake.gt(0.0).sum(dim=[1, 2])
                body_overlap_mask = (gt_arr * smpl_arr_fake).unsqueeze(1)
                body_overlap_flag = body_overlap < cfg.body_overlap_thres

                losses["normal"]["value"] = (diff_F_smpl * body_overlap_mask[..., :512] +
                                             diff_B_smpl * body_overlap_mask[..., 512:]).mean() / 2.0

                losses["silhouette"]["weight"] = [0 if flag else 1.0 for flag in body_overlap_flag]
                occluded_idx = torch.where(body_overlap_flag)[0]
                ghum_conf[occluded_idx] *= ghum_conf[occluded_idx] > 0.95
                losses["joint"]["value"] = (torch.norm(ghum_lmks - smpl_lmks, dim=2) * ghum_conf).mean(dim=1)

                # Weighted sum of the losses
                smpl_loss = 0.0
                pbar_desc = "Body Fitting --- "
                for k in ["normal", "silhouette", "joint"]:
                    per_loop_loss = (losses[k]["value"] * torch.tensor(losses[k]["weight"]).to(device)).mean()
                    pbar_desc += f"{k}: {per_loop_loss:.3f} | "
                    smpl_loss += per_loop_loss
                pbar_desc += f"Total: {smpl_loss:.3f}"
                loose_str = ''.join([str(j) for j in cloth_overlap_flag.int().tolist()])
                occlude_str = ''.join([str(j) for j in body_overlap_flag.int().tolist()])
                pbar_desc += colored(f"| loose:{loose_str}, occluded:{occlude_str}", "yellow")
                loop_smpl.set_description(pbar_desc)

                # save intermediate results / vis_freq and final_step
                if (i % args.vis_freq == 0) or (i == args.loop_smpl - 1):

                    per_loop_lst.extend([
                        in_tensor["image"],
                        in_tensor["T_normal_F"],
                        in_tensor["normal_F"],
                        diff_S[:, :, :512].unsqueeze(1).repeat(1, 3, 1, 1),
                    ])
                    per_loop_lst.extend([
                        in_tensor["image"],
                        in_tensor["T_normal_B"],
                        in_tensor["normal_B"],
                        diff_S[:, :, 512:].unsqueeze(1).repeat(1, 3, 1, 1),
                    ])
                    per_data_lst.append(get_optim_grid_image(per_loop_lst, None, nrow=N_body * 2, type="smpl"))

                smpl_loss.backward()
                optimizer_smpl.step()
                scheduler_smpl.step(smpl_loss)

            in_tensor["smpl_verts"] = smpl_verts * torch.tensor([1.0, 1.0, -1.0]).to(device)
            in_tensor["smpl_faces"] = in_tensor["smpl_faces"][:, :, [0, 2, 1]]
            per_data_lst[-1].save(osp.join(args.out_dir, cfg.name, f"png/{data['name']}_smpl.png"))

        img_crop_path = osp.join(args.out_dir, cfg.name, "png", f"{data['name']}_crop.png")
        torchvision.utils.save_image(
            torch.cat([
                data["img_crop"][:, :3], (in_tensor['normal_F'].detach().cpu() + 1.0) * 0.5,
                (in_tensor['normal_B'].detach().cpu() + 1.0) * 0.5
            ],
                      dim=3), img_crop_path)

        rgb_norm_F = blend_rgb_norm(in_tensor["normal_F"], data)
        rgb_norm_B = blend_rgb_norm(in_tensor["normal_B"], data)

        img_overlap_path = osp.join(args.out_dir, cfg.name, f"png/{data['name']}_overlap.png")
        torchvision.utils.save_image(
            torch.Tensor([data["img_raw"], rgb_norm_F, rgb_norm_B]).permute(0, 3, 1, 2) / 255., img_overlap_path)

        smpl_obj_lst = []

        for idx in range(N_body):

            smpl_obj = trimesh.Trimesh(
                in_tensor["smpl_verts"].detach().cpu()[idx] * torch.tensor([1.0, -1.0, 1.0]),
                in_tensor["smpl_faces"].detach().cpu()[0][:, [0, 2, 1]],
                process=False,
                maintains_order=True,
            )

            smpl_obj_path = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_smpl_{idx:02d}.obj"

            if not osp.exists(smpl_obj_path):
                smpl_obj.export(smpl_obj_path)
                smpl_info = {
                    "betas": optimed_betas[idx].detach().cpu().unsqueeze(0),
                    "body_pose": rotation_matrix_to_angle_axis(optimed_pose_mat[idx].detach()).cpu().unsqueeze(0),
                    "global_orient": rotation_matrix_to_angle_axis(optimed_orient_mat[idx].detach()).cpu().unsqueeze(0),
                    "transl": optimed_trans[idx].detach().cpu(),
                    "expression": data["exp"][idx].cpu().unsqueeze(0),
                    "jaw_pose": rotation_matrix_to_angle_axis(data["jaw_pose"][idx]).cpu().unsqueeze(0),
                    "left_hand_pose": rotation_matrix_to_angle_axis(data["left_hand_pose"][idx]).cpu().unsqueeze(0),
                    "right_hand_pose": rotation_matrix_to_angle_axis(data["right_hand_pose"][idx]).cpu().unsqueeze(0),
                    "scale": data["scale"][idx].cpu(),
                }
                np.save(
                    smpl_obj_path.replace(".obj", ".npy"),
                    smpl_info,
                    allow_pickle=True,
                )
            smpl_obj_lst.append(smpl_obj)

        del optimizer_smpl
        del optimed_betas
        del optimed_orient
        del optimed_pose
        del optimed_trans

        torch.cuda.empty_cache()

        # ------------------------------------------------------------------------------------------------------------------
        # clothing refinement

        per_data_lst = []

        batch_smpl_verts = in_tensor["smpl_verts"].detach() * torch.tensor([1.0, -1.0, 1.0], device=device)
        batch_smpl_faces = in_tensor["smpl_faces"].detach()[:, :, [0, 2, 1]]

        in_tensor["depth_F"], in_tensor["depth_B"] = dataset.render_depth(batch_smpl_verts, batch_smpl_faces)

        per_loop_lst = []

        in_tensor["BNI_verts"] = []
        in_tensor["BNI_faces"] = []
        in_tensor["body_verts"] = []
        in_tensor["body_faces"] = []

        for idx in range(N_body):

            final_path = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_full.obj"

            side_mesh = smpl_obj_lst[idx].copy()
            face_mesh = smpl_obj_lst[idx].copy()
            hand_mesh = smpl_obj_lst[idx].copy()

            # save normals, depths and masks
            BNI_dict = save_normal_tensor(
                in_tensor,
                idx,
                osp.join(args.out_dir, cfg.name, f"BNI/{data['name']}_{idx}"),
                cfg.bni.thickness,
            )

            # BNI process
            BNI_object = BNI(
                dir_path=osp.join(args.out_dir, cfg.name, "BNI"),
                name=data["name"],
                BNI_dict=BNI_dict,
                cfg=cfg.bni,
                device=device)

            BNI_object.extract_surface(False)

            in_tensor["body_verts"].append(torch.tensor(smpl_obj_lst[idx].vertices).float())
            in_tensor["body_faces"].append(torch.tensor(smpl_obj_lst[idx].faces).long())

            # requires shape completion when low overlap
            # replace SMPL by completed mesh as side_mesh

            if cfg.bni.use_ifnet:
                print(colored("Use IF-Nets+ for completion\n", "green"))

                side_mesh_path = f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_IF.obj"

                side_mesh = apply_face_mask(side_mesh, ~SMPLX_object.smplx_eyeball_fid_mask)

                # mesh completion via IF-net
                in_tensor.update(
                    dataset.depth_to_voxel({
                        "depth_F": BNI_object.F_depth.unsqueeze(0),
                        "depth_B": BNI_object.B_depth.unsqueeze(0)
                    }))

                occupancies = VoxelGrid.from_mesh(
                    side_mesh, cfg.vol_res, loc=[
                        0,
                    ] * 3, scale=2.0).data.transpose(2, 1, 0)
                occupancies = np.flip(occupancies, axis=1)

                in_tensor["body_voxels"] = torch.tensor(occupancies.copy()).float().unsqueeze(0).to(device)

                with torch.no_grad():
                    sdf = ifnet_model.reconEngine(netG=ifnet_model.netG, batch=in_tensor)
                    verts_IF, faces_IF = ifnet_model.reconEngine.export_mesh(sdf)

                if ifnet_model.clean_mesh_flag:
                    verts_IF, faces_IF = clean_mesh(verts_IF, faces_IF)

                side_mesh = trimesh.Trimesh(verts_IF, faces_IF)
                side_mesh = remesh(side_mesh, side_mesh_path)

            else:
                print(colored("Use SMPL-X body for completion\n", "green"))
                side_mesh = apply_vertex_mask(
                    side_mesh,
                    (SMPLX_object.front_flame_vertex_mask + SMPLX_object.mano_vertex_mask +
                     SMPLX_object.eyeball_vertex_mask).eq(0).float(),
                )

            side_verts = torch.tensor(side_mesh.vertices).float().to(device)
            side_faces = torch.tensor(side_mesh.faces).long().to(device)

            # Poisson Fusion between SMPLX and BNI
            # 1. keep the faces invisible to front+back cameras
            # 2. keep the front-FLAME+MANO faces
            # 3. remove eyeball faces

            # export intermediate meshes
            BNI_object.F_B_trimesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_BNI.obj")

            full_lst = []

            if "face" in cfg.bni.use_smpl:

                # only face
                face_mesh = apply_vertex_mask(face_mesh, SMPLX_object.front_flame_vertex_mask)
                face_mesh.vertices = face_mesh.vertices - np.array([0, 0, cfg.bni.thickness])

                # remove face neighbor triangles
                BNI_object.F_B_trimesh = part_removal(
                    BNI_object.F_B_trimesh, None, face_mesh, cfg.bni.face_thres, device, camera_ray=True)
                side_mesh = part_removal(
                    side_mesh, torch.zeros_like(side_verts[:, 0:1]), face_mesh, cfg.bni.face_thres, device, camera_ray=True)
                face_mesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_face.obj")
                full_lst += [face_mesh]

            if "hand" in cfg.bni.use_smpl and (True in data['hands_visibility'][idx]):

                hand_mask = torch.zeros(SMPLX_object.smplx_verts.shape[0],)
                if data['hands_visibility'][idx][0]:
                    hand_mask.index_fill_(0, torch.tensor(SMPLX_object.smplx_mano_vid_dict["left_hand"]), 1.0)
                if data['hands_visibility'][idx][1]:
                    hand_mask.index_fill_(0, torch.tensor(SMPLX_object.smplx_mano_vid_dict["right_hand"]), 1.0)

                # only hands
                hand_mesh = apply_vertex_mask(hand_mesh, hand_mask)
                # remove hand neighbor triangles
                BNI_object.F_B_trimesh = part_removal(
                    BNI_object.F_B_trimesh, None, hand_mesh, cfg.bni.hand_thres, device, camera_ray=True)
                side_mesh = part_removal(
                    side_mesh, torch.zeros_like(side_verts[:, 0:1]), hand_mesh, cfg.bni.hand_thres, device, camera_ray=True)
                hand_mesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_hand.obj")
                full_lst += [hand_mesh]

            full_lst += [BNI_object.F_B_trimesh]

            # initial side_mesh could be SMPLX or IF-net
            side_mesh = part_removal(side_mesh, torch.zeros_like(side_verts[:, 0:1]), sum(full_lst), 2e-2, device, clean=False)

            full_lst += [side_mesh]

            # # export intermediate meshes
            BNI_object.F_B_trimesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_BNI.obj")
            side_mesh.export(f"{args.out_dir}/{cfg.name}/obj/{data['name']}_{idx}_side.obj")

            if cfg.bni.use_poisson:
                final_mesh = poisson(
                    sum(full_lst),
                    final_path,
                    cfg.bni.poisson_depth,
                )
            else:
                final_mesh = sum(full_lst)
                final_mesh.export(final_path)

            dataset.render.load_meshes(final_mesh.vertices, final_mesh.faces)
|
513 |
+
rotate_recon_lst = dataset.render.get_image(cam_type="four")
|
514 |
+
per_loop_lst.extend([in_tensor['image'][idx:idx + 1]] + rotate_recon_lst)
|
515 |
+
|
516 |
+
# for video rendering
|
517 |
+
in_tensor["BNI_verts"].append(torch.tensor(final_mesh.vertices).float())
|
518 |
+
in_tensor["BNI_faces"].append(torch.tensor(final_mesh.faces).long())
|
519 |
+
|
520 |
+
if len(per_loop_lst) > 0:
|
521 |
+
|
522 |
+
per_data_lst.append(get_optim_grid_image(per_loop_lst, None, nrow=5, type="cloth"))
|
523 |
+
per_data_lst[-1].save(osp.join(args.out_dir, cfg.name, f"png/{data['name']}_cloth.png"))
|
524 |
+
|
525 |
+
os.makedirs(osp.join(args.out_dir, cfg.name, "vid"), exist_ok=True)
|
526 |
+
in_tensor["uncrop_param"] = data["uncrop_param"]
|
527 |
+
in_tensor["img_raw"] = data["img_raw"]
|
528 |
+
torch.save(in_tensor, osp.join(args.out_dir, cfg.name, f"vid/{data['name']}_in_tensor.pt"))
|
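The loop above ends apps/infer.py: it fuses the BNI surfaces with the (optionally IF-Nets+-completed) side mesh, exports one `*_full.obj` per detected person, and saves a `*_in_tensor.pt` bundle for video rendering. As a quick sanity check on those outputs (an illustration only, not part of this commit; the file name below is a placeholder), the fused mesh can be inspected with trimesh:

```python
# Illustrative sketch (not repo code): inspect a reconstruction written by apps/infer.py.
import trimesh

mesh = trimesh.load("./results/econ/obj/subject_0_full.obj", process=False)  # placeholder path
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
# with cfg.bni.use_poisson=True, the screened Poisson step should return one closed surface
print("watertight:", mesh.is_watertight)
```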
apps/multi_render.py
ADDED
@@ -0,0 +1,25 @@
from lib.common.render import Render
import torch
import argparse

root = "./results/econ/vid"

# parse command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("-n", "--name", type=str, default="")
parser.add_argument("-g", "--gpu", type=int, default=0)
args = parser.parse_args()

in_tensor = torch.load(f"{root}/{args.name}_in_tensor.pt")

render = Render(size=512, device=torch.device(f"cuda:{args.gpu}"))

# visualize the final results in self-rotation mode
verts_lst = in_tensor["body_verts"] + in_tensor["BNI_verts"]
faces_lst = in_tensor["body_faces"] + in_tensor["BNI_faces"]

# self-rotated video
render.load_meshes(verts_lst, faces_lst)
render.get_rendered_video_multi(
    in_tensor,
    f"{root}/{args.name}_cloth.mp4")
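apps/multi_render.py consumes the `*_in_tensor.pt` file that apps/infer.py saves at the end of its loop. A minimal sketch (mine, not part of the commit; placeholder file name) of the keys the script relies on:

```python
# Illustrative sketch: the payload saved by apps/infer.py and read by apps/multi_render.py.
import torch

in_tensor = torch.load("./results/econ/vid/subject_in_tensor.pt")  # placeholder name
for key in ("body_verts", "body_faces", "BNI_verts", "BNI_faces", "uncrop_param", "img_raw"):
    assert key in in_tensor, f"missing key: {key}"
print(f"{len(in_tensor['BNI_verts'])} reconstructed subject(s) ready for rendering")
```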
configs/econ.yaml
ADDED
@@ -0,0 +1,35 @@
name: econ
ckpt_dir: "./data/ckpt/"
normal_path: "./data/ckpt/normal.ckpt"
ifnet_path: "./data/ckpt/ifnet.ckpt"
results_path: "./results"

net:
  in_nml: (('image',3), ('T_normal_F',3), ('T_normal_B',3))
  in_geo: (('normal_F',3), ('normal_B',3))

test_mode: True
batch_size: 1

dataset:
  prior_type: "SMPL"

# user defined
vol_res: 256 # IF-Net volume resolution
mcube_res: 256
clean_mesh: True # if True, will remove floating pieces
cloth_overlap_thres: 0.50
body_overlap_thres: 0.98

bni:
  k: 2
  lambda1: 1e-4
  boundary_consist: 1e-6
  poisson_depth: 10
  use_smpl: ["hand", "face"]
  use_ifnet: True
  use_poisson: True
  hand_thres: 4e-2
  face_thres: 6e-2
  thickness: 0.02
  hps_type: "pixie"
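The `bni` block above holds the switches referenced throughout apps/infer.py: `use_ifnet`, `use_poisson`, `use_smpl`, the hand/face distance thresholds, and the BNI thickness. ECON loads this file through its own config module, which is outside this excerpt; purely as an illustration, the same values can be inspected with PyYAML:

```python
# Illustration only (not repo code): peek at the flags that drive the reconstruction loop.
import yaml

with open("configs/econ.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["bni"]["use_ifnet"])    # True -> complete occluded regions with IF-Nets+
print(cfg["bni"]["use_poisson"])  # True -> fuse the partial meshes via Poisson reconstruction
print(cfg["bni"]["use_smpl"])     # ['hand', 'face'] -> paste SMPL-X hands and face back in
print(cfg["vol_res"])             # 256 -> voxel resolution fed to IF-Net
```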
docs/installation.md
ADDED
@@ -0,0 +1,77 @@
## Getting started

Start by cloning the repo:

```bash
git clone --depth 1 git@github.com:YuliangXiu/ECON.git
cd ECON
```

## Environment

- Ubuntu 20 / 18
- **CUDA=11.4, GPU Memory > 12GB**
- Python = 3.8
- PyTorch >= 1.13.0 (official [Get Started](https://pytorch.org/get-started/locally/))
- CuPy >= 11.3.0 (official [Installation](https://docs.cupy.dev/en/stable/install.html#installing-cupy-from-pypi))
- PyTorch3D (official [INSTALL.md](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md), recommend [install-from-local-clone](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md#2-install-from-a-local-clone))

```bash
sudo apt-get install libeigen3-dev ffmpeg

# install required packages
cd ECON
conda env create -f environment.yaml
conda activate econ
pip install -r requirements.txt

# install libmesh & libvoxelize
cd lib/common/libmesh
python setup.py build_ext --inplace
cd ../libvoxelize
python setup.py build_ext --inplace
```
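Before moving on, it is worth confirming that the freshly created `econ` environment actually exposes the GPU stack listed above. This quick check is an editorial suggestion, not part of the original docs:

```python
# Hedged sanity check (not in the original docs): verify the heavyweight dependencies import.
import torch
import cupy
import pytorch3d

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("cupy", cupy.__version__)
print("pytorch3d", pytorch3d.__version__)
```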

## Register at [ICON's website](https://icon.is.tue.mpg.de/)



Required:

- [SMPL](http://smpl.is.tue.mpg.de/): SMPL Model (Male, Female)
- [SMPL-X](http://smpl-x.is.tue.mpg.de/): SMPL-X Model, used for training
- [SMPLIFY](http://smplify.is.tue.mpg.de/): SMPL Model (Neutral)
- [PIXIE](https://icon.is.tue.mpg.de/user.php): PIXIE SMPL-X estimator

:warning: Click **Register now** on all dependencies, then you can download them all with **ONE** account.

## Downloading required models and extra data

```bash
cd ECON
bash fetch_data.sh # requires username and password
```

## Citation

:+1: Please consider citing these awesome HPS approaches: PyMAF-X, PIXIE

```
@article{pymafx2022,
  title={PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images},
  author={Zhang, Hongwen and Tian, Yating and Zhang, Yuxiang and Li, Mengcheng and An, Liang and Sun, Zhenan and Liu, Yebin},
  journal={arXiv preprint arXiv:2207.06400},
  year={2022}
}

@inproceedings{PIXIE:2021,
  title={Collaborative Regression of Expressive Bodies using Moderation},
  author={Yao Feng and Vasileios Choutas and Timo Bolkart and Dimitrios Tzionas and Michael J. Black},
  booktitle={International Conference on 3D Vision (3DV)},
  year={2021}
}
```
environment.yaml
ADDED
@@ -0,0 +1,20 @@
name: econ
channels:
  - pytorch
  - nvidia
  - conda-forge
  - fvcore
  - iopath
  - bottler
  - defaults
dependencies:
  - python=3.8
  - pytorch
  - torchvision
  - fvcore
  - iopath
  - nvidiacub
  - pyembree
  - cupy
  - cython
  - pip
examples/304e9c4798a8c3967de7c74c24ef2e38.jpg
ADDED
examples/cloth/0a64d9c7ac4a86aa0c29195bc6f55246.jpg
ADDED
examples/cloth/1f7c9214b80a02071edfadd5be908d8e.jpg
ADDED
examples/cloth/2095f20b1390e14d9312913c61c4b621.png
ADDED
examples/cloth/267cffcff3809e0df9eff44c443f07b0.jpg
ADDED
examples/cloth/26d2e846349647ff04c536816e0e8ca1.jpg
ADDED
examples/cloth/351f52b9d1ddebb70241a092af34c2f3.jpg
ADDED
examples/cloth/55cc162cc4fcda1df2236847a52db93a.jpg
ADDED
examples/cloth/6465c18fc13b862341c33922c79ab490.jpg
ADDED
examples/cloth/663dcd6db19490de0b790da430bd5681.jpg
ADDED
examples/cloth/6c0a5def2287d9bfa4a42ee0ce9cb7f9.jpg
ADDED
examples/cloth/7c6bb9410ea8debe3aca92e299fe2333.jpg
ADDED
examples/cloth/930c782d63e180208e0a55754d607f34.jpg
ADDED
examples/cloth/baf3c945fa6b4349c59953a97740e70f.jpg
ADDED
examples/cloth/c7ca6894119f235caba568a7e01684af.jpg
ADDED
examples/cloth/da135ecd046333c0dc437a383325c90b.jpg
ADDED
examples/cloth/df90cff51a84dd602024ac3aa03ad182.jpg
ADDED
examples/cloth/e80b36c782ce869caef9abb55b37d464.jpg
ADDED
examples/multi/1.png
ADDED
examples/multi/2.jpg
ADDED
examples/multi/3.jpg
ADDED
examples/multi/4.jpg
ADDED
examples/pose/02986d0998ce01aa0aa67a99fbd1e09a.jpg
ADDED
examples/pose/105545f93dcaecd13f2e3f01db92331c.jpg
ADDED
examples/pose/1af2662b5026ef82ed0e8b08b6698017.png
ADDED
examples/pose/3745ee0a7f31fafc3dfd3d8bf246f3b8.jpg
ADDED
examples/pose/4ac9ca7a3e34a365c073317f98525add.jpg
ADDED
examples/pose/4d1ed606c3c0a346c8a75507fc81abff.jpg
ADDED
examples/pose/5617dc56d25918217b81f27c98011ea5.jpg
ADDED
examples/pose/5ef3bc939cf82dbd0c541eba41b517c2.jpg
ADDED
examples/pose/68757076df6c98e9d6ba6ed00870daef.jpg
ADDED
examples/pose/6f0029a9592a11530267b3a51413ae74.jpg
ADDED
examples/pose/7530ae51e811b1878fae23ea243a3a30.jpg
ADDED
examples/pose/780047b55ee80b0dc2468ad16cab2278.jpg
ADDED
examples/pose/ab2192beaefb58e872ce55099cbed8fe.jpg
ADDED
examples/pose/d5241b4de0cebd2722b05d855a5a9ca6.jpg
ADDED
examples/pose/d6fcd37df9983973af08af3f9267cd1e.jpg
ADDED
examples/pose/d7e876bc5f9e8277d58e30bd83c7452f.jpg
ADDED
fetch_data.sh
ADDED
@@ -0,0 +1,60 @@
#!/bin/bash
urle () { [[ "${1}" ]] || return 1; local LANG=C i x; for (( i = 0; i < ${#1}; i++ )); do x="${1:i:1}"; [[ "${x}" == [a-zA-Z0-9.~-] ]] && echo -n "${x}" || printf '%%%02X' "'${x}"; done; echo; }

mkdir -p data/smpl_related/models

# username and password input
echo -e "\nYou need to register at https://icon.is.tue.mpg.de/, according to Installation Instruction."
read -p "Username (ICON):" username
read -p "Password (ICON):" password
username=$(urle $username)
password=$(urle $password)

# SMPL (Male, Female)
echo -e "\nDownloading SMPL..."
wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=smpl&sfile=SMPL_python_v.1.0.0.zip&resume=1' -O './data/smpl_related/models/SMPL_python_v.1.0.0.zip' --no-check-certificate --continue
unzip data/smpl_related/models/SMPL_python_v.1.0.0.zip -d data/smpl_related/models
mv data/smpl_related/models/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl data/smpl_related/models/smpl/SMPL_FEMALE.pkl
mv data/smpl_related/models/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl data/smpl_related/models/smpl/SMPL_MALE.pkl
cd data/smpl_related/models
rm -rf *.zip __MACOSX smpl/models smpl/smpl_webuser
cd ../../..

# SMPL (Neutral, from SMPLIFY)
echo -e "\nDownloading SMPLify..."
wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=smplify&sfile=mpips_smplify_public_v2.zip&resume=1' -O './data/smpl_related/models/mpips_smplify_public_v2.zip' --no-check-certificate --continue
unzip data/smpl_related/models/mpips_smplify_public_v2.zip -d data/smpl_related/models
mv data/smpl_related/models/smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl data/smpl_related/models/smpl/SMPL_NEUTRAL.pkl
cd data/smpl_related/models
rm -rf *.zip smplify_public
cd ../../..

# SMPL-X
echo -e "\nDownloading SMPL-X..."
wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=smplx&sfile=models_smplx_v1_1.zip&resume=1' -O './data/smpl_related/models/models_smplx_v1_1.zip' --no-check-certificate --continue
unzip data/smpl_related/models/models_smplx_v1_1.zip -d data/smpl_related
rm -f data/smpl_related/models/models_smplx_v1_1.zip

# ECON
echo -e "\nDownloading ECON..."
wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=icon&sfile=econ_data.zip&resume=1' -O './data/econ_data.zip' --no-check-certificate --continue
cd data && unzip econ_data.zip
mv smpl_data smpl_related/
rm -f econ_data.zip
cd ..

mkdir -p data/HPS

# PIXIE
echo -e "\nDownloading PIXIE..."
wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=icon&sfile=HPS/pixie_data.zip&resume=1' -O './data/HPS/pixie_data.zip' --no-check-certificate --continue
cd data/HPS && unzip pixie_data.zip
rm -f pixie_data.zip
cd ../..

# PyMAF-X
echo -e "\nDownloading PyMAF-X..."
wget --post-data "username=$username&password=$password" 'https://download.is.tue.mpg.de/download.php?domain=icon&sfile=HPS/pymafx_data.zip&resume=1' -O './data/HPS/pymafx_data.zip' --no-check-certificate --continue
cd data/HPS && unzip pymafx_data.zip
rm -f pymafx_data.zip
cd ../..
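A note on the `urle` helper at the top of the script: it percent-encodes the username and password so they can be embedded safely in the `--post-data` string sent to download.is.tue.mpg.de. A rough Python equivalent, for illustration only:

```python
# Illustration only: roughly what fetch_data.sh's urle() does to the credentials.
from urllib.parse import quote

def urle(s: str) -> str:
    # keep unreserved characters, percent-encode the rest (e.g. '@' -> '%40')
    return quote(s, safe="")

print(urle("user@example.com"))  # -> user%40example.com
```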