Spaces: Running on L4
🚩 CUDA error
CUDA error: device-side assert triggered. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
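The error itself says how to get a trustworthy stack trace: set CUDA_LAUNCH_BLOCKING=1 so kernel launches run synchronously and the traceback points at the op that actually failed. A minimal sketch, assuming you can edit the Space's entry script (the variables must be set before torch is imported):

```python
import os

# Set these BEFORE importing torch. CUDA_LAUNCH_BLOCKING=1 makes every
# kernel launch synchronous, so the Python traceback points at the real
# failing op instead of some later API call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
# TORCH_USE_CUDA_DSA enables device-side assertions, but only takes
# effect if torch was compiled with that support.
os.environ["TORCH_USE_CUDA_DSA"] = "1"

# import torch  # import torch only after the variables are set

print(os.environ["CUDA_LAUNCH_BLOCKING"])
```

On Spaces you can also set these as environment variables in the Space settings instead of in code, which avoids any import-order issues.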
It's happening again 🥀
Error
CUDA out of memory. Tried to allocate 66.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 58.12 MiB is free. Including non-PyTorch memory, this process has 21.98 GiB memory in use. Of the allocated memory 4.38 GiB is allocated by PyTorch, with 66.00 MiB allocated in private pools (e.g., CUDA Graphs), and 33.94 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
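Note that the log says only ~4.4 GiB of the 22 GiB is allocated by PyTorch, so most of the GPU is held by non-PyTorch memory or another process. Still, the allocator hint in the message is worth trying; a minimal sketch of applying it (again, it must be set before the first CUDA allocation, ideally before importing torch):

```python
import os

# The allocator reads this once at startup, so set it before importing
# torch / before any CUDA allocation. expandable_segments reduces
# fragmentation from "reserved but unallocated" memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# import torch  # only after the variable is set

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

If the non-PyTorch 17+ GiB is the real culprit, this setting alone won't fix it; checking what else is resident on GPU 0 (e.g. a second model or a leaked process in the Space) would be the next step.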
Yeah, every time I test it.