This is a quantized version of distil-whisper/distil-medium.en, converted with CTranslate2 to 8-bit integer (int8) weights for faster inference with minimal accuracy loss. It is ideal for speech-to-text tasks where inference speed is critical.
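To illustrate the idea behind int8 weight quantization, here is a minimal sketch of symmetric per-tensor quantization in NumPy. This is a simplified illustration, not CTranslate2's exact quantization scheme: each float weight is scaled so the largest magnitude maps to 127, rounded to an 8-bit integer, and later dequantized by multiplying the scale back in.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric int8 quantization: map floats into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half the scale, so accuracy is largely preserved
print(np.max(np.abs(weights - restored)))
```

Storing weights as int8 quarters the memory footprint relative to float32 and lets the runtime use faster integer kernels, which is where the speedup comes from.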
Base model: distil-whisper/distil-medium.en