elungky committed
Commit f2ae3ad · 1 Parent(s): 8c961a6

Updated start.sh for the A100 Large 80GB Space with a single-GPU command and model offloading

Files changed (1)
start.sh +27 -0
start.sh ADDED
@@ -0,0 +1,27 @@
+ #!/bin/bash
+
+ # Set environment variables for a single GPU on Hugging Face Spaces
+ export CUDA_VISIBLE_DEVICES="0"    # Hugging Face maps the visible GPU to device 0
+ export CUDA_HOME="/usr/local/cuda" # Common path for the CUDA toolkit in Docker images
+ export PYTHONPATH="/app"           # Assuming /app is the WORKDIR in the base image where your code lives
+
+ echo "Starting GEN3C application on A100 Large 80GB GPU..."
+
+ python cosmos_predict1/diffusion/inference/gen3c_single_image.py \
+     --checkpoint_dir checkpoints \
+     --input_image_path assets/diffusion/000000.png \
+     --video_save_name test_single_image \
+     --guidance 1 \
+     --foreground_masking \
+     --offload_diffusion_transformer \
+     --offload_tokenizer \
+     --offload_text_encoder_model \
+     --offload_prompt_upsampler \
+     --offload_guardrail_models \
+     --disable_guardrail \
+     --disable_prompt_encoder
+
+ # IMPORTANT: If your Python script (gen3c_single_image.py) is designed to run and then exit,
+ # your Hugging Face Space will stop after it finishes.
+ # If your application is meant to be a continuous service (like a Gradio/Streamlit app),
+ # ensure that the Python script itself keeps running (e.g., by starting a web server).
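One hedged option for the case the closing comment describes: if the Space should stay up after the one-shot render finishes, the shell script can simply block after the python command. This is a sketch, not part of this commit; the echoed output path is an assumption based on --video_save_name, and for a real interactive Space you would replace the final line with an actual web server (e.g. a Gradio app).

# Sketch (assumption): lines that could be appended to start.sh after the python command
# so the container keeps running instead of stopping when inference exits.
echo "Inference finished; output expected under the current directory (e.g. test_single_image.mp4)."
ls -lh . 2>/dev/null || true   # show whatever the run produced, ignore errors
tail -f /dev/null              # block forever so the Space is not marked as stopped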