Quentin Fuxa committed on
Commit dc3273d · 1 Parent(s): 9c2f982

Update README.md

Files changed (1)
README.md +36 -6
README.md CHANGED
@@ -1,4 +1,6 @@
- # Real-time, Fully Local Speech-to-Text and Speaker Diarization
+ <h1 align="center">WhisperLiveKit</h1>
+ <p align="center"><b>Real-time, Fully Local Whisper's Speech-to-Text and Speaker Diarization</b></p>
+

  This project is based on [Whisper Streaming](https://github.com/ufal/whisper_streaming) and lets you transcribe audio directly from your browser. Simply launch the local server and grant microphone access. Everything runs locally on your machine ✨
 
@@ -37,7 +39,20 @@ This project is based on [Whisper Streaming](https://github.com/ufal/whisper_str
  1. **Dependencies**:
- - Install required dependences :
+ - Install system dependencies:
+ ```bash
+ # Install FFmpeg on your system (required for audio processing)
+ # For Ubuntu/Debian:
+ sudo apt install ffmpeg
+
+ # For macOS:
+ brew install ffmpeg
+
+ # For Windows:
+ # Download from https://ffmpeg.org/download.html and add to PATH
+ ```
+
+ - Install required Python dependencies:

  ```bash
  # Whisper streaming required dependencies
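
Before moving on to the Python dependencies, it can be worth confirming that FFmpeg is actually visible on your PATH; this check uses only the standard FFmpeg CLI and nothing project-specific:

```bash
# Print the FFmpeg version; a "command not found" error means the
# install step above did not put ffmpeg on the PATH
ffmpeg -version
```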
@@ -83,13 +98,29 @@ This project is based on [Whisper Streaming](https://github.com/ufal/whisper_str
  **Parameters**
- All [Whisper Streaming](https://github.com/ufal/whisper_streaming) parameters are supported.
- Additional parameters:
- - `--host` and `--port` let you specify the servers IP/port.
+ The following parameters are supported:
+
+ - `--host` and `--port` let you specify the server's IP/port.
- `--min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value aligns with the chunk size selected in the frontend. If not aligned, the system will work but may unnecessarily over-process audio data.
  - `--transcription`: Enable/disable transcription (default: True)
  - `--diarization`: Enable/disable speaker diarization (default: False)
- `--confidence-validation`: Use confidence scores for faster validation. Transcription will be faster but punctuation might be less accurate (default: True)
+ - `--warmup-file`: The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast:
+   - If not set, uses https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav.
+   - If False, no warmup is performed.
+ - `--min-chunk-size`: Minimum audio chunk size in seconds. It waits up to this time to do processing. If the processing takes less time, it waits; otherwise it processes the whole segment that was received by this time.
+ - `--model` {_tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo_}: Name/size of the Whisper model to use (default: tiny). The model is automatically downloaded from the model hub if not present in the model cache directory.
+ - `--model_cache_dir`: Overrides the default model cache directory where models downloaded from the hub are saved.
+ - `--model_dir`: Directory where the Whisper model.bin and other files are saved. This option overrides the --model and --model_cache_dir parameters.
+ - `--lan`, `--language`: Source language code, e.g. en, de, cs, or 'auto' for language detection.
+ - `--task` {_transcribe, translate_}: Transcribe or translate. If translate is set, we recommend avoiding the _large-v3-turbo_ backend, as it [performs significantly worse](https://github.com/QuentinFuxa/whisper_streaming_web/issues/40#issuecomment-2652816533) than other models for translation.
+ - `--backend` {_faster-whisper, whisper_timestamped, openai-api, mlx-whisper_}: Load only this backend for Whisper processing.
+ - `--vac`: Use VAC = voice activity controller. Requires torch.
+ - `--vac-chunk-size`: VAC sample size in seconds.
+ - `--vad`: Use VAD = voice activity detection, with the default parameters.
+ - `--buffer_trimming` {_sentence, segment_}: Buffer trimming strategy -- trim completed sentences marked with punctuation and detected by a sentence segmenter, or completed segments returned by Whisper. The sentence segmenter must be installed for the "sentence" option.
+ - `--buffer_trimming_sec`: Buffer trimming length threshold in seconds. If the buffer is longer, sentence/segment trimming is triggered.

  5. **Open the Provided HTML**:
 
@@ -118,4 +149,3 @@ No additional front-end libraries or frameworks are required. The WebSocket logi
  ## Acknowledgments
  This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.
-
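
For context, a typical launch combining several of the parameters documented in this diff might look like the sketch below; the entry-point script name here is an assumption, since the diff does not show how the server is started, and may differ in the repository:

```bash
# Hypothetical launch command -- adjust the script name to match the repository.
# Serves on all interfaces at port 8000 with the default tiny model,
# speaker diarization enabled, and 1-second minimum audio chunks.
python whisper_fastapi_online_server.py \
  --host 0.0.0.0 \
  --port 8000 \
  --model tiny \
  --diarization \
  --min-chunk-size 1
```

Whatever values are chosen, the `--min-chunk-size` setting should match the chunk size configured in the provided HTML frontend, as the parameter list above notes.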
 