Refine GPU Resource Allocation for YOLOv11 Inference
Summary: This pull request optimizes GPU resource allocation in the SAHI + YOLOv11 demo by removing the duration parameter from the @spaces.GPU decorators and explicitly setting the device to cuda:0 in the load_yolo_model function.
@atalaydenknalbant I want to use the automated device selection feature of sahi so that the Space also works in non-GPU environments.
Why is the duration parameter bad? I thought it's there to limit execution time so that queues stay shorter during peak times.
Hello @fcakyon, you're right about automated device selection. I recall that having @spaces.GPU in CPU environments used to cause an error, but I duplicated the setup, tested it in a CPU environment, and it didn't. So they probably fixed it.
Second, I also tested the duration parameter. Since inference didn't take more than 60 seconds, we should keep it as is. (I had removed that parameter while trying to implement video inference.)
But spaces.GPU() is used in two places, which causes unnecessary queueing. It doesn't need to be called in load_yolo_model, since it's already called in the sahi_yolo_inference function. So I updated my PR accordingly.
Thanks a lot! Merged!