Refine GPU Resource Allocation for YOLOv11 Inference

#6

Summary: This pull request optimizes GPU resource allocation in the SAHI + YOLOv11 demo by removing the duration parameter from the @spaces.GPU decorators and explicitly setting the device to cuda:0 in the load_yolo_model function.

@fcakyon Evening Fatih, I made some adjustments and it is working on my duplicated space.

@atalaydenknalbant I want to use the automated device selection feature of SAHI so that the Space also works in non-GPU environments.

Why is the duration parameter bad? I thought it was there to cap execution time so that queues don't back up during peak times.

Hello @fcakyon, you're right about automated device selection. I recall that having @spaces.GPU in CPU environments used to cause an error, but I duplicated the setup, tested it in a CPU environment, and it didn't. So they probably fixed it.
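As a rough illustration of what automated device selection does (a minimal sketch, not SAHI's actual implementation — SAHI's model classes pick a CUDA device when one is available and fall back to CPU otherwise; the helper name here is hypothetical):

```python
def select_device(cuda_available: bool) -> str:
    """Sketch of automatic device selection: prefer the first CUDA
    device when present, otherwise fall back to CPU. In practice the
    availability check would come from torch.cuda.is_available()."""
    return "cuda:0" if cuda_available else "cpu"

print(select_device(True))   # on a GPU Space
print(select_device(False))  # on a CPU Space
```

This is why hard-coding `device="cuda:0"` breaks the duplicated Space on CPU hardware, while letting the library choose keeps it working everywhere.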

Second, I also tested the duration parameter. Since inference didn't take more than 60 seconds, we should keep it as is. (I had removed that parameter while trying to implement video inference.)

However, spaces.GPU() was used in two places, which causes unnecessary queueing. It doesn't need to be called on load_yolo_model, since it is already applied to the sahi_yolo_inference function. So I updated my PR accordingly.
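The double-queueing point can be sketched with a stand-in decorator (the `gpu` decorator below is a hypothetical mock of `@spaces.GPU`, used only to show that each decorated call waits for its own GPU grant; the function names mirror the ones in this PR):

```python
calls = []  # records each time a GPU grant is acquired

def gpu(func):
    """Stand-in for @spaces.GPU: every call to a decorated function
    queues for (and is granted) the ZeroGPU hardware."""
    def wrapper(*args, **kwargs):
        calls.append(func.__name__)  # one queued acquisition per call
        return func(*args, **kwargs)
    return wrapper

def load_yolo_model(name):
    # No decorator: loading weights does not need its own GPU grant.
    return f"model:{name}"

@gpu
def sahi_yolo_inference(image):
    # A single grant covers model loading and inference together.
    model = load_yolo_model("yolov11")
    return f"pred({model}, {image})"

sahi_yolo_inference("img.png")
print(calls)  # only one acquisition instead of two
```

With the decorator on both functions, one user request would queue twice; keeping it only on the inference entry point halves the waiting.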

Thanks a lot! Merged 👍🏻

fcakyon changed pull request status to merged
