Quentin Fuxa committed
Commit: 85cf486
Parent(s): 5fe0e27
Update README.md

README.md CHANGED
@@ -89,23 +89,30 @@ Then open your browser at `http://localhost:8000` (or your specified host and port):
 
 ```python
 from whisperlivekit import WhisperLiveKit
+from whisperlivekit.audio_processor import AudioProcessor
 from fastapi import FastAPI, WebSocket
 
-
-kit = WhisperLiveKit(
-    model="tiny.en",
-    diarization=True,
-)
-
-# Create a FastAPI application
-app = FastAPI()
+kit = WhisperLiveKit(model="medium", diarization=True)
+app = FastAPI()  # Create a FastAPI application
 
 @app.get("/")
 async def get():
-    # Use the built-in web interface
-
-
-
+    return HTMLResponse(kit.web_interface())  # Use the built-in web interface
+
+async def handle_websocket_results(websocket, results_generator):  # Sends results to frontend
+    async for response in results_generator:
+        await websocket.send_json(response)
+
+@app.websocket("/asr")
+async def websocket_endpoint(websocket: WebSocket):
+    audio_processor = AudioProcessor()
+    await websocket.accept()
+    results_generator = await audio_processor.create_tasks()
+    websocket_task = asyncio.create_task(handle_websocket_results(websocket, results_generator))
+
+    while True:
+        message = await websocket.receive_bytes()
+        await audio_processor.process_audio(message)
 ```
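Note that, as committed, the new snippet calls `asyncio.create_task(...)` and returns an `HTMLResponse` without importing either. A preamble with the imports the example actually needs:

```python
import asyncio  # used by asyncio.create_task(...) in websocket_endpoint

from fastapi import FastAPI, WebSocket
from fastapi.responses import HTMLResponse  # returned by get()

from whisperlivekit import WhisperLiveKit
from whisperlivekit.audio_processor import AudioProcessor
```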
For a complete audio processing example, check [whisper_fastapi_online_server.py](https://github.com/QuentinFuxa/WhisperLiveKit/blob/main/whisper_fastapi_online_server.py)
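For a quick feel for the wire protocol itself: the `/asr` endpoint consumes raw audio bytes and pushes JSON results back over the same socket. Below is a rough client sketch using the third-party `websockets` package. The file name, chunk size, and pacing are arbitrary illustrations, and whether the server can decode a given file depends on its audio pipeline, which is not shown here.

```python
import asyncio
import websockets  # third-party: pip install websockets

async def stream_file(path: str, url: str = "ws://localhost:8000/asr"):
    async with websockets.connect(url) as ws:

        async def print_results():
            async for message in ws:  # server pushes JSON results as they arrive
                print(message)

        printer = asyncio.create_task(print_results())

        with open(path, "rb") as f:
            while chunk := f.read(16000):  # chunk size chosen arbitrarily
                await ws.send(chunk)       # the /asr endpoint reads raw bytes
                await asyncio.sleep(0.5)   # crude stand-in for real-time pacing

        await asyncio.sleep(5)  # give trailing results time to arrive
        printer.cancel()

asyncio.run(stream_file("speech.webm"))  # hypothetical input file
```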
@@ -115,27 +122,27 @@ For a complete audio processing example, check [whisper_fastapi_online_server.py](https://github.com/QuentinFuxa/WhisperLiveKit/blob/main/whisper_fastapi_online_server.py)
 
 The following parameters are supported when initializing `WhisperLiveKit`:
 
-
-
-
-
-
-
-
-
-
-
+- `--host` and `--port` let you specify the server's IP/port.
+- `--min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value aligns with the chunk size selected in the frontend. If not aligned, the system will still work but may unnecessarily over-process audio data.
+- `--transcription`: Enable/disable transcription (default: True)
+- `--diarization`: Enable/disable speaker diarization (default: False)
+- `--confidence-validation`: Use confidence scores for faster validation. Transcription will be faster but punctuation might be less accurate (default: True)
+- `--warmup-file`: The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast:
+  - If not set, uses https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav.
+  - If False, no warmup is performed.
+- `--min-chunk-size` Minimum audio chunk size in seconds. The server waits up to this long before processing; if processing finishes sooner, it waits, otherwise it processes the whole segment received by that time.
+- `--model` {_tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo_}
 Name/size of the Whisper model to use (default: tiny). The model is automatically downloaded from the model hub if not present in the model cache dir.
-
-
-
-
-
-
-
-
-
-
+- `--model_cache_dir` Overrides the default model cache dir where models downloaded from the hub are saved.
+- `--model_dir` Dir where the Whisper model.bin and other files are saved. This option overrides the --model and --model_cache_dir parameters.
+- `--lan`, `--language` Source language code, e.g. en, de, cs, or 'auto' for language detection.
+- `--task` {_transcribe, translate_} Transcribe or translate. If translate is set, we recommend avoiding the _large-v3-turbo_ backend, as it [performs significantly worse](https://github.com/QuentinFuxa/whisper_streaming_web/issues/40#issuecomment-2652816533) than other models for translation.
+- `--backend` {_faster-whisper, whisper_timestamped, openai-api, mlx-whisper_} Load only this backend for Whisper processing.
+- `--vac` Use VAC = voice activity controller. Requires torch.
+- `--vac-chunk-size` VAC sample size in seconds.
+- `--vad` Use VAD = voice activity detection, with the default parameters.
+- `--buffer_trimming` {_sentence, segment_} Buffer trimming strategy: trim completed sentences marked with punctuation and detected by a sentence segmenter, or the completed segments returned by Whisper. A sentence segmenter must be installed for the "sentence" option.
+- `--buffer_trimming_sec` Buffer trimming length threshold in seconds. If the buffer is longer, trimming is triggered.
 
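The flags above are written CLI-style; the committed example confirms only `model` and `diarization` as constructor keywords. Assuming, hypothetically, that the remaining flags map to underscore-style keyword arguments, initialization could look like this sketch:

```python
from whisperlivekit import WhisperLiveKit

kit = WhisperLiveKit(
    model="small",              # confirmed by the committed example
    diarization=False,          # confirmed by the committed example
    language="en",              # assumed to mirror --lan/--language
    min_chunk_size=1.0,         # assumed to mirror --min-chunk-size
    buffer_trimming="segment",  # assumed to mirror --buffer_trimming
)
```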
5. **Open the Provided HTML**:
@@ -164,4 +171,4 @@ No additional front-end libraries or frameworks are required. The WebSocket logic
 
 ## Acknowledgments
 
-This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.
+This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.
|