Fedir Zadniprovskyi committed · commit 9f56267 · 1 parent: 8ad3023

docs: add examples, roadmap, etc.
Changed files:
- Dockerfile.cpu +5 -2
- Dockerfile.cuda +4 -2
- README.md +40 -14
- compose.yaml +0 -4
- flake.nix +1 -0
- speaches/config.py +23 -11
- speaches/main.py +2 -1
Dockerfile.cpu CHANGED

```diff
@@ -12,5 +12,8 @@ RUN poetry install
 COPY ./speaches ./speaches
 ENTRYPOINT ["poetry", "run"]
 CMD ["uvicorn", "speaches.main:app"]
-ENV
-ENV
+ENV WHISPER_MODEL=distil-small.en
+ENV WHISPER_INFERENCE_DEVICE=cpu
+ENV WHISPER_COMPUTE_TYPE=int8
+ENV UVICORN_HOST=0.0.0.0
+ENV UVICORN_PORT=8000
```
Dockerfile.cuda CHANGED

```diff
@@ -12,5 +12,7 @@ RUN poetry install
 COPY ./speaches ./speaches
 ENTRYPOINT ["poetry", "run"]
 CMD ["uvicorn", "speaches.main:app"]
-ENV
-ENV
+ENV WHISPER_MODEL=distil-medium.en
+ENV WHISPER_INFERENCE_DEVICE=cuda
+ENV UVICORN_HOST=0.0.0.0
+ENV UVICORN_PORT=8000
```
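The `ENV` defaults baked into both images are read at startup by the pydantic-settings-based `Config` in [config.py](./speaches/config.py) (see the config.py diff below), while the `UVICORN_*` variables are picked up by uvicorn's own CLI, which reads `UVICORN_`-prefixed environment variables. A minimal sketch of the mapping, assuming pydantic-settings v2, with field types simplified to `str` (the real models use enums):

```python
# Sketch of how the Dockerfile ENV values reach the application config.
# Assumes pydantic-settings v2; types simplified from speaches/config.py.
import os
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict

class WhisperConfig(BaseModel):
    model: str = "distil-small.en"
    inference_device: str = "auto"
    compute_type: str = "default"

class Config(BaseSettings):
    # The "_" delimiter lets WHISPER_MODEL populate the nested whisper.model field.
    model_config = SettingsConfigDict(env_nested_delimiter="_")
    log_level: str = "info"
    whisper: WhisperConfig = WhisperConfig()

os.environ["WHISPER_MODEL"] = "distil-medium.en"  # as set by Dockerfile.cuda
config = Config()
print(config.whisper.model)  # distil-medium.en
```

Because these are plain `ENV` defaults, any of them can be overridden at container start, e.g. `docker run --env WHISPER_MODEL=... fedirz/speaches:cuda`.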
README.md CHANGED

````diff
@@ -1,25 +1,51 @@
-# WARN: WIP(code is ugly, may have bugs, test files aren't included, etc.)
+# WARN: WIP (code is ugly, bad documentation, may have bugs, test files aren't included, CPU inference was barely tested, etc.)
 # Intro
-
-- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) is used as the backend. Both GPU and CPU inference
-- LocalAgreement2([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf)|[original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for real-time transcription.
-- Can be deployed using Docker (Compose configuration can be found in
+:peach:`speaches` is a web server that supports real-time transcription using WebSockets.
+- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) is used as the backend. Both GPU and CPU inference are supported.
+- The LocalAgreement2 ([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf) | [original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for real-time transcription.
+- Can be deployed using Docker (Compose configuration can be found in [compose.yaml](./compose.yaml)).
 - All configuration is done through environment variables. See [config.py](./speaches/config.py).
 - NOTE: only transcription of single channel, 16000 sample rate, raw, 16-bit little-endian audio is supported.
-- NOTE: this isn't really meant to be used as a standalone tool but rather to add transcription features to other applications
+- NOTE: this isn't really meant to be used as a standalone tool but rather to add transcription features to other applications.
 Please create an issue if you find a bug, have a question, or a feature suggestion.
 # Quick Start
-
-Spinning up a `speaches` web-server
+Spinning up a `speaches` web server:
 ```bash
-docker run --
+docker run --gpus=all --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cuda
 # or
-docker run --
+docker run --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cpu
 ```
-
+Streaming audio data from a microphone ([websocat](https://github.com/vi/websocat?tab=readme-ov-file#installation) must be installed):
 ```bash
-
+ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
 # or
-
+arecord -f S16_LE -c1 -r 16000 -t raw -D default 2>/dev/null | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
 ```
-
+Streaming audio data from a file:
+```bash
+ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - > output.raw
+# send all data at once
+cat output.raw | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
+# Output: {"text":"One,"}{"text":"One, two, three, four, five."}{"text":"One, two, three, four, five."}
+# stream 16000 samples per second; each sample is 2 bytes
+cat output.raw | pv -qL 32000 | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
+# Output: {"text":"One,"}{"text":"One, two,"}{"text":"One, two, three,"}{"text":"One, two, three, four, five."}{"text":"One, two, three, four, five. one."}
+```
+Transcribing a file:
+```bash
+# convert the file if it has a different format
+ffmpeg -i output.wav -ac 1 -ar 16000 -f s16le output.raw
+curl -X POST -F "file=@output.raw" http://0.0.0.0:8000/v1/audio/transcriptions
+# Output: "{\"text\":\"One, two, three, four, five.\"}"
+```
+# Roadmap
+- [ ] Support file transcription (non-streaming) of multiple formats.
+- [ ] CLI client.
+- [ ] Separate the web-server code from the "core", and publish the "core" as a package.
+- [ ] Additional documentation and code comments.
+- [ ] Write benchmarks for measuring streaming transcription performance. Possible metrics:
+  - Latency (time between when audio is received and when its transcription is sent)
+  - Accuracy (already measured during testing, but the process can be improved)
+  - Total seconds of audio transcribed / audio duration (since each audio chunk is processed at least twice)
+- [ ] Get the API response closer to the format used by OpenAI.
+- [ ] Integrations...
````
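For readers who would rather script the client than pipe through websocat, here is a hypothetical Python equivalent of the new file-streaming example. It assumes the third-party `websockets` package; the endpoint and audio format (raw s16le, mono, 16 kHz) are as documented in the README:

```python
# Hypothetical client mirroring `cat output.raw | pv -qL 32000 | websocat ...`.
# Assumes: pip install websockets
import asyncio
import websockets

URL = "ws://0.0.0.0:8000/v1/audio/transcriptions"
CHUNK_BYTES = 32000  # one second of 16-bit mono audio at 16000 Hz

async def send_audio(ws, path: str) -> None:
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_BYTES):
            await ws.send(chunk)
            await asyncio.sleep(1)  # pace the stream in real time, like `pv -qL 32000`

async def main(path: str) -> None:
    async with websockets.connect(URL) as ws:
        sender = asyncio.create_task(send_audio(ws, path))
        # Print transcription messages as the server emits them; the loop ends
        # when the server closes the connection (e.g. after max_no_data_seconds).
        async for message in ws:
            print(message)
        await sender

asyncio.run(main("output.raw"))
```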
compose.yaml CHANGED

```diff
@@ -11,8 +11,6 @@ services:
     restart: unless-stopped
     ports:
       - 8000:8000
-    environment:
-      - INFERENCE_DEVICE=cuda
     deploy:
       resources:
         reservations:
@@ -30,5 +28,3 @@ services:
     restart: unless-stopped
     ports:
       - 8000:8000
-    environment:
-      - INFERENCE_DEVICE=cpu
```
flake.nix CHANGED

```diff
@@ -23,6 +23,7 @@
             lsyncd
             poetry
             pre-commit
+            pv
             pyright
             python311
             websocat
```
speaches/config.py CHANGED

```diff
@@ -37,6 +37,7 @@ class Device(enum.StrEnum):


 # https://github.com/OpenNMT/CTranslate2/blob/master/docs/quantization.md
+# NOTE: `Precision` might be a better name
 class Quantization(enum.StrEnum):
     INT8 = "int8"
     INT8_FLOAT16 = "int8_float16"
@@ -153,24 +154,35 @@ class Language(enum.StrEnum):


 class WhisperConfig(BaseModel):
-    model: Model = Field(default=Model.DISTIL_SMALL_EN)
-    inference_device: Device = Field(
-
+    model: Model = Field(default=Model.DISTIL_SMALL_EN)  # ENV: WHISPER_MODEL
+    inference_device: Device = Field(
+        default=Device.AUTO
+    )  # ENV: WHISPER_INFERENCE_DEVICE
+    compute_type: Quantization = Field(
+        default=Quantization.DEFAULT
+    )  # ENV: WHISPER_COMPUTE_TYPE


 class Config(BaseSettings):
     model_config = SettingsConfigDict(env_nested_delimiter="_")

-    log_level: str = "info"
-    whisper: WhisperConfig = WhisperConfig()
+    log_level: str = "info"  # ENV: LOG_LEVEL
+    whisper: WhisperConfig = WhisperConfig()  # ENV: WHISPER_*
     """
-    Max duration to for the next audio chunk before
+    Max duration to wait for the next audio chunk before the transcription is finalized and the connection is closed.
     """
-    max_no_data_seconds: float = 1.0
-    min_duration: float = 1.0
-    word_timestamp_error_margin: float = 0.2
-
-
+    max_no_data_seconds: float = 1.0  # ENV: MAX_NO_DATA_SECONDS
+    min_duration: float = 1.0  # ENV: MIN_DURATION
+    word_timestamp_error_margin: float = 0.2  # ENV: WORD_TIMESTAMP_ERROR_MARGIN
+    """
+    Max allowed audio duration without any speech being detected before the transcription is finalized and the connection is closed.
+    """
+    max_inactivity_seconds: float = 2.0  # ENV: MAX_INACTIVITY_SECONDS
+    """
+    Controls how many of the latest seconds of audio are passed through VAD.
+    Should be greater than `max_inactivity_seconds`.
+    """
+    inactivity_window_seconds: float = 3.0  # ENV: INACTIVITY_WINDOW_SECONDS


 config = Config()
```
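As an aside, the duration-based settings above relate directly to the raw audio format the server accepts (16-bit mono at 16000 Hz, i.e. 32000 bytes per second, which is also where the README's `pv -qL 32000` rate comes from). A purely illustrative helper, not part of the codebase:

```python
# Illustrative only: relating the config's second-based settings to raw s16le audio.
SAMPLE_RATE = 16000   # samples per second, as required by the server
BYTES_PER_SAMPLE = 2  # 16-bit little-endian mono

def buffered_seconds(num_bytes: int) -> float:
    """Duration of audio represented by `num_bytes` of raw s16le data."""
    return num_bytes / (SAMPLE_RATE * BYTES_PER_SAMPLE)

# inactivity_window_seconds = 3.0 means VAD only ever sees the last 3 s of
# audio (96000 bytes); max_inactivity_seconds = 2.0 means two speechless
# seconds inside that window end the session.
assert buffered_seconds(96000) == 3.0
```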
speaches/main.py CHANGED

```diff
@@ -90,6 +90,8 @@ async def audio_receiver(ws: WebSocket, audio_stream: AudioStream) -> None:
         audio_stream.duration - config.inactivity_window_seconds
     )
     vad_opts = VadOptions(min_silence_duration_ms=500, speech_pad_ms=0)
+    # NOTE: This is a synchronous operation that runs every time new data is received.
+    # This shouldn't be an issue unless data is being received in tiny chunks or the user's machine is a potato.
     timestamps = get_speech_timestamps(audio.data, vad_opts)
     if len(timestamps) == 0:
         logger.info(
@@ -143,7 +145,6 @@ async def transcribe_stream(
         tg.create_task(audio_receiver(ws, audio_stream))
         async for transcription in audio_transcriber(asr, audio_stream):
             logger.debug(f"Sending transcription: {transcription.text}")
-            # Or should it be
             if ws.client_state == WebSocketState.DISCONNECTED:
                 break
             await ws.send_text(format_transcription(transcription, response_format))
```
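For context on what `audio_transcriber` is emitting here: per the README, confirmation follows the LocalAgreement2 idea, under which text is only sent once consecutive decoding passes agree on it. A toy sketch of that idea, not the project's actual implementation:

```python
# Toy sketch of local agreement: only the prefix on which two consecutive
# hypotheses agree is "confirmed"; the unstable tail is withheld until a
# later pass over more audio confirms it.
def confirmed_prefix(prev_hypothesis: list[str], curr_hypothesis: list[str]) -> list[str]:
    confirmed: list[str] = []
    for prev_word, curr_word in zip(prev_hypothesis, curr_hypothesis):
        if prev_word.lower() != curr_word.lower():
            break
        confirmed.append(curr_word)
    return confirmed

# A later pass revised "for" to "four", so only "One, two, three" is stable
# enough to send to the client (cf. the incremental outputs in the README).
print(confirmed_prefix(
    "One, two, three, for".split(),
    "One, two, three, four, five.".split(),
))  # ['One,', 'two,', 'three,']
```

As for the synchronous `get_speech_timestamps` call flagged in the new comment: if it ever did become a bottleneck, one standard escape hatch would be `await asyncio.to_thread(get_speech_timestamps, audio.data, vad_opts)`.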