Commit 9741963: initial release

Files changed:
- readme.md +142 -0
- requirements.txt +4 -0
- scraper_run.py +252 -0
- scrapertool.zip +0 -0
- start.bat +21 -0
readme.md
ADDED
@@ -0,0 +1,142 @@
# ComfyUI LoRA Metadata Scraper

This is a Python tool that scans a ComfyUI installation and extracts metadata from `.safetensors` files, including LoRAs, checkpoints, VAEs, and ControlNets. It supports both local metadata extraction and optional scraping of CivitAI metadata, including preview images and videos.

What it does:
- Extracts embedded metadata from `.safetensors` files
- Optionally scrapes CivitAI metadata (tags, descriptions, trained words)
- Downloads preview images and videos
- Smart deduplication: skips files that already have metadata or previews
- Lets you scan only the LoRA folder or your entire ComfyUI setup
- Lets you choose how many preview images/videos to download (or none)
- Saves previews in a subdirectory or alongside your model files
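
For each model, results are saved to a `<model name>.metadata.json` file next to the model itself. As a rough sketch of the layout (field names taken from scraper_run.py; actual values depend on what CivitAI returns):

{
    "local_metadata": { ... keys embedded in the .safetensors file ... },
    "civitai_metadata": {
        "civitai_model_id": ...,
        "civitai_model_version_id": ...,
        "civitai_name": ...,
        "description": ...,
        "tags": [ ... ],
        "trainedWords": [ ... ],
        "images": [ ... ]
    }
}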

--------------------

## Folder Setup

Place everything inside:

ComfyUI/utils/Scrapertool/

Example structure:

ComfyUI/
└── utils/
    └── Scrapertool/
        ├── scraper_run.py
        ├── requirements.txt
        └── start.bat (optional launcher)

--------------------

## Usage

### Option 1: Interactive Mode

Run:

python scraper_run.py --interactive

You'll be prompted:

A) Scrape CivitAI? (Y/N)
B) Use default delay (0.5s), no delay (0), or custom delay?
C) Force re-scrape if metadata already exists? (Y/N)
D) Scan only the LoRA folder? (Y/N)
E) Save previews in a subdirectory? (Y/N)
F) How many preview images/videos to download? (A=All, N=None, or a number)

### Option 2: Command Line Arguments

Example:

python scraper_run.py --scrape-civitai --delay 0.5 --force --loras-only --previews-subdir --max-media 5
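
Both modes share the same deduplication behavior: a model whose `<model name>.metadata.json` already exists is skipped unless force re-scraping is enabled, and previews already on disk are skipped individually (the corresponding log lines in scraper_run.py are "Skipping (metadata exists): ..." and "Preview already exists: ... (skipping)").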

--------------------

## Command-Line Argument Reference

--interactive
Launch the interactive menu.

--scrape-civitai
Enable scraping of metadata and previews from CivitAI.

--delay [seconds]
Set the delay between API calls and image/video downloads (default: 0.5 seconds).

--force
Force re-scraping even if a metadata JSON already exists.

--loras-only
Scan only the ComfyUI/models/loras folder. If omitted, scans all subdirectories of the ComfyUI root.

--previews-subdir
Save previews inside a dedicated subdirectory named after the model (e.g., <model name>_previews/).

--no-previews-subdir
Save previews directly next to the model file.

--max-media [number]
Set how many preview images/videos to download. Use 0 to skip all. (Default: unlimited)
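
For example, to fetch metadata for every model in the installation but skip all preview downloads:

python scraper_run.py --scrape-civitai --max-media 0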

--------------------

## How It Finds CivitAI Metadata

1. First, it checks whether the embedded `.safetensors` metadata already carries a CivitAI model ID (newly fixed!).
   - If found (as `ss_civitai_model_id`, or parsed out of `ss_civitai_url`), it queries the CivitAI models API directly with that ID.
2. If not found, it falls back to looking up the file's SHA256 hash using the CivitAI API.

This ensures maximum coverage even if metadata is incomplete.
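
A condensed sketch of that lookup order, reusing the helper names from scraper_run.py (`path` stands in for the model file's path):

# 1) Try the model ID embedded in the file's own metadata.
metadata = extract_local_metadata(path) or {}
model_id = metadata.get('ss_civitai_model_id')
if not model_id and 'ss_civitai_url' in metadata:
    # e.g. "https://civitai.com/models/12345" -> "12345"
    model_id = next((p for p in metadata['ss_civitai_url'].split('/') if p.isdigit()), None)

if model_id:
    data = fetch_civitai_metadata_by_model_id(model_id)  # GET /api/v1/models/{id}
else:
    # 2) Fall back to hashing the file and asking CivitAI by hash.
    data = fetch_civitai_metadata_by_hash(compute_sha256(path))  # GET /api/v1/model-versions/by-hash/{hash}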

--------------------

## Dependencies

This script requires the following Python packages:

safetensors
torch
requests
tqdm

(torch is needed because the script opens files with safe_open(..., framework="pt").)

--------------------

## How to Set Up

1. Create a virtual environment (recommended):

python -m venv scrapervenv

2. Activate the virtual environment:

On Windows:
scrapervenv\Scripts\activate

On macOS/Linux:
source scrapervenv/bin/activate

3. Install dependencies:

pip install -r requirements.txt
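
Alternatively, on Windows you can run the bundled start.bat, which performs all three steps for you: it creates the scrapervenv virtual environment if it doesn't exist, installs requirements.txt into it, and launches the scraper in interactive mode.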

--------------------

## requirements.txt

safetensors
torch
requests
tqdm

--------------------

## License

Creative Commons Attribution 4.0 International (CC BY 4.0)

This tool is free to use, share, and adapt for any purpose, including commercial use, as long as you provide attribution to the original creator.

--------------------
requirements.txt
ADDED
@@ -0,0 +1,4 @@
safetensors
requests
tqdm
torch
scraper_run.py
ADDED
@@ -0,0 +1,252 @@
import os
import json
import argparse
import hashlib
import requests
import time
from safetensors import safe_open
from tqdm import tqdm

MAX_RETRIES = 3
BACKOFF_BASE = 2

# GET with retries; HTTP 429 (rate limit) and transient errors are retried
# with exponential backoff before giving up.
def retry_request(url, timeout=10, desc=""):
    attempt = 0
    while attempt < MAX_RETRIES:
        try:
            response = requests.get(url, timeout=timeout)
            if response.status_code == 200:
                return response
            elif response.status_code == 429:
                tqdm.write(f"{desc} - Rate limited (HTTP 429). Retrying ({attempt + 1}/{MAX_RETRIES})...")
            else:
                tqdm.write(f"{desc} - HTTP {response.status_code}. Retrying ({attempt + 1}/{MAX_RETRIES})...")
        except requests.exceptions.RequestException as e:
            tqdm.write(f"{desc} - Error: {e}. Retrying ({attempt + 1}/{MAX_RETRIES})...")

        attempt += 1
        time.sleep(BACKOFF_BASE * (2 ** (attempt - 1)))
    tqdm.write(f"{desc} - Failed after {MAX_RETRIES} attempts.")
    return None

# Hash the file in 4 KB chunks so large model files never have to fit in memory.
def compute_sha256(file_path):
    sha256_hash = hashlib.sha256()
    with open(file_path, "rb") as f:
        for byte_block in iter(lambda: f.read(4096), b""):
            sha256_hash.update(byte_block)
    return sha256_hash.hexdigest()

# Read the metadata header of a .safetensors file without loading any tensors.
def extract_local_metadata(safetensors_file):
    try:
        with safe_open(safetensors_file, framework="pt") as f:
            metadata = f.metadata()
            return metadata
    except Exception as e:
        tqdm.write(f"Error reading {safetensors_file}: {e}")
        return None

def fetch_civitai_metadata_by_model_id(model_id):
    url = f"https://civitai.com/api/v1/models/{model_id}"
    response = retry_request(url, desc=f"Fetching metadata for model ID {model_id}")
    if response and response.status_code == 200:
        return response.json()
    return None

def fetch_civitai_metadata_by_hash(file_hash):
    url = f"https://civitai.com/api/v1/model-versions/by-hash/{file_hash}"
    response = retry_request(url, desc=f"Fetching metadata by hash {file_hash[:10]}...")
    if response and response.status_code == 200:
        return response.json()
    return None

def download_preview_images(images_list, save_dir, base_filename, delay=0.5, image_pbar=None, use_subdir=True):
    if use_subdir:
        subdir = os.path.join(save_dir, f"{base_filename}_previews")
        os.makedirs(subdir, exist_ok=True)
    else:
        subdir = save_dir  # save in same folder

    for idx, img_data in enumerate(images_list):
        url = img_data.get('url')
        if not url:
            continue
        ext = os.path.splitext(url)[1].split('?')[0]
        img_name = f"{base_filename}_preview_{idx+1}{ext}"

        # Paths to check (both flat + subdir)
        flat_img_path = os.path.join(save_dir, img_name)
        subdir_img_path = os.path.join(subdir, img_name)

        if os.path.exists(flat_img_path) or os.path.exists(subdir_img_path):
            tqdm.write(f"Preview already exists: {img_name} (skipping)")
            if image_pbar:
                image_pbar.update(1)
            continue

        desc = f"Downloading media {idx + 1}/{len(images_list)}"
        response = retry_request(url, desc=desc)
        if response and response.status_code == 200:
            img_path = subdir_img_path if use_subdir else flat_img_path
            with open(img_path, 'wb') as img_file:
                img_file.write(response.content)
            tqdm.write(f"Saved preview: {img_path}")
        if image_pbar:
            image_pbar.update(1)
        time.sleep(delay)

# Walk root_dir for .safetensors files and write a <name>.metadata.json next to
# each one, optionally enriched with CivitAI metadata and preview downloads.
def process_directory(root_dir, force=False, scrape_civitai=False, delay=0.5, previews_subdir=True, max_media=None):
    safetensors_files = []
    for dirpath, dirnames, filenames in os.walk(root_dir):
        for filename in filenames:
            if filename.endswith(".safetensors"):
                safetensors_files.append(os.path.join(dirpath, filename))

    print(f"\nFound {len(safetensors_files)} .safetensors files.\n")

    with tqdm(total=len(safetensors_files), desc="Total Progress", unit="file") as total_pbar:
        for safetensors_path in safetensors_files:
            dirpath = os.path.dirname(safetensors_path)
            filename = os.path.basename(safetensors_path)
            base_filename = os.path.splitext(filename)[0]
            json_filename = f"{base_filename}.metadata.json"
            json_path = os.path.join(dirpath, json_filename)

            if os.path.exists(json_path) and not force:
                tqdm.write(f"Skipping (metadata exists): {safetensors_path}")
                total_pbar.update(1)
                continue

            tqdm.write(f"\nProcessing: {safetensors_path}")
            metadata = extract_local_metadata(safetensors_path)
            combined_metadata = {'local_metadata': metadata if metadata else {}}

            civitai_data = None

            if scrape_civitai:
                # Prefer a model ID embedded in the file's own metadata...
                civitai_model_id = None
                if metadata:
                    if 'ss_civitai_model_id' in metadata:
                        civitai_model_id = metadata['ss_civitai_model_id']
                    elif 'ss_civitai_url' in metadata:
                        parts = metadata['ss_civitai_url'].split('/')
                        civitai_model_id = next((part for part in parts if part.isdigit()), None)

                if civitai_model_id:
                    tqdm.write(f"Found model ID in metadata: {civitai_model_id}")
                    civitai_data = fetch_civitai_metadata_by_model_id(civitai_model_id)
                    time.sleep(delay)
                else:
                    # ...otherwise fall back to a SHA256 hash lookup.
                    tqdm.write("No CivitAI model ID found in metadata. Trying hash lookup...")
                    file_hash = compute_sha256(safetensors_path)
                    civitai_data = fetch_civitai_metadata_by_hash(file_hash)
                    time.sleep(delay)

                if civitai_data:
                    civitai_meta = {
                        'civitai_model_id': civitai_data.get('modelId') or civitai_data.get('id'),
                        'civitai_model_version_id': civitai_data.get('id'),
                        'civitai_name': civitai_data.get('name'),
                        'description': civitai_data.get('description'),
                        'tags': civitai_data.get('tags'),
                        'trainedWords': civitai_data.get('trainedWords'),
                        'images': civitai_data.get('images')
                    }
                    combined_metadata['civitai_metadata'] = civitai_meta

                    images_list = civitai_meta.get('images', [])
                    if images_list:
                        # Apply max_media logic
                        if max_media == 0:
                            tqdm.write("Skipping download of preview images/videos (user selected 0).")
                        else:
                            if max_media is not None:
                                images_list = images_list[:max_media]
                            with tqdm(total=len(images_list), desc="Image/Video Progress", leave=False) as image_pbar:
                                download_preview_images(
                                    images_list,
                                    dirpath,
                                    base_filename,
                                    delay=delay,
                                    image_pbar=image_pbar,
                                    use_subdir=previews_subdir
                                )
                    else:
                        tqdm.write("No preview images/videos found.")
                else:
                    tqdm.write("No CivitAI data found (model ID or hash lookup failed).")

            with open(json_path, "w", encoding="utf-8") as f:
                json.dump(combined_metadata, f, indent=4, ensure_ascii=False)
            tqdm.write(f"Saved metadata to: {json_path}")
            total_pbar.update(1)

def interactive_menu():
    print("\n=== LoRA Metadata Scraper Config ===\n")
    scrape_civitai = input("A) Scrape CivitAI? (Y/N) [Default: N]: ").strip().lower() == 'y'
    delay_choice = input("B) Use default delay (0.5s), no delay (0), or custom? (D/N/C) [Default: D]: ").strip().lower()
    if delay_choice == 'n':
        delay = 0.0
    elif delay_choice == 'c':
        delay = float(input("Enter delay in seconds (e.g., 0.5): ").strip())
    else:
        delay = 0.5
    force = input("C) Force re-scrape if metadata exists? (Y/N) [Default: N]: ").strip().lower() == 'y'
    loras_only = input("D) Scan only the LoRAs folder? (Y/N) [Default: Y]: ").strip().lower() != 'n'
    previews_subdir = input("E) Save preview images in a subdirectory? (Y/N) [Default: Y]: ").strip().lower() != 'n'
    media_choice = input("F) How many preview images/videos to download? (A=All [default], N=None, X=Number): ").strip().lower()
    if media_choice == 'n':
        max_media = 0
    elif media_choice == 'a' or media_choice == '':
        max_media = None
    else:
        try:
            max_media = int(media_choice)
        except ValueError:
            max_media = None  # fallback to all
    print("\n=== Starting with your selected options ===\n")
    return force, scrape_civitai, delay, loras_only, previews_subdir, max_media

if __name__ == "__main__":
    print(">>> Script started")

    parser = argparse.ArgumentParser(description="Scrape and save metadata for .safetensors files.")
    parser.add_argument("--force", action="store_true", help="Force re-scrape even if metadata file exists.")
    parser.add_argument("--scrape-civitai", action="store_true", help="Enable scraping CivitAI metadata + images.")
    parser.add_argument("--delay", type=float, default=0.5, help="Delay time (seconds) between API/image steps (default: 0.5s).")
    parser.add_argument("--interactive", action="store_true", help="Run in interactive mode.")
    parser.add_argument("--loras-only", action="store_true", help="Scan only the LoRAs folder (models/loras).")
    parser.add_argument("--previews-subdir", dest="previews_subdir", action="store_true", help="Save preview images in a subdirectory.")
    parser.add_argument("--no-previews-subdir", dest="previews_subdir", action="store_false", help="Save preview images in the same folder.")
    parser.add_argument("--max-media", type=int, default=None, help="Max number of preview images/videos to download (0 = none).")
    parser.set_defaults(previews_subdir=True)

    args = parser.parse_args()

    if args.interactive:
        force, scrape_civitai, delay, loras_only, previews_subdir, max_media = interactive_menu()
    else:
        force, scrape_civitai, delay, loras_only, previews_subdir, max_media = (
            args.force,
            args.scrape_civitai,
            args.delay,
            args.loras_only,
            args.previews_subdir,
            args.max_media
        )

    # The script lives in ComfyUI/utils/Scrapertool, so two levels up is the ComfyUI root.
    script_dir = os.path.dirname(os.path.abspath(__file__))
    if loras_only:
        comfyui_dir = os.path.abspath(os.path.join(script_dir, "..", "..", "models", "loras"))
    else:
        comfyui_dir = os.path.abspath(os.path.join(script_dir, "..", ".."))

    tqdm.write(f"Scanning directory: {comfyui_dir}")

    process_directory(
        comfyui_dir,
        force=force,
        scrape_civitai=scrape_civitai,
        delay=delay,
        previews_subdir=previews_subdir,
        max_media=max_media
    )
scrapertool.zip
ADDED
Binary file (5.37 kB).
start.bat
ADDED
@@ -0,0 +1,21 @@
@echo off
SET VENV_DIR=scrapervenv

echo [INFO] Checking virtual environment...
if not exist %VENV_DIR% (
    echo [INFO] Creating virtual environment...
    python -m venv %VENV_DIR%
)

echo [INFO] Activating virtual environment...
call %VENV_DIR%\Scripts\activate

echo [INFO] Installing dependencies...
pip install -r requirements.txt

echo [INFO] Running scraper (interactive mode)...

python scraper_run.py --interactive

echo [INFO] Done.
pause