Quentin Fuxa committed
Commit 0bd51b3 · 1 Parent(s): 85cf486

Update README.md

Files changed (1):
1. README.md (+18 -20)
README.md CHANGED
@@ -34,13 +34,11 @@ pip install whisperlivekit
 
 ### From source
 
-1. **Clone the Repository**:
-
-   ```bash
-   git clone https://github.com/QuentinFuxa/WhisperLiveKit
-   cd WhisperLiveKit
-   pip install -e .
-   ```
+```bash
+git clone https://github.com/QuentinFuxa/WhisperLiveKit
+cd WhisperLiveKit
+pip install -e .
+```
 
 ### System Dependencies
 
@@ -71,7 +69,17 @@ pip install tokenize_uk # If you work with Ukrainian text
 pip install diart
 ```
 
-Diart uses [pyannote.audio](https://github.com/pyannote/pyannote-audio) models from the _huggingface hub_. To use them, please follow the steps described [here](https://github.com/juanmc2005/diart?tab=readme-ov-file#get-access-to--pyannote-models).
+### Get access to 🎹 pyannote models
+
+By default, diart is based on [pyannote.audio](https://github.com/pyannote/pyannote-audio) models from the [huggingface](https://huggingface.co/) hub.
+In order to use them, please follow these steps:
+
+1) [Accept user conditions](https://huggingface.co/pyannote/segmentation) for the `pyannote/segmentation` model
+2) [Accept user conditions](https://huggingface.co/pyannote/segmentation-3.0) for the newest `pyannote/segmentation-3.0` model
+3) [Accept user conditions](https://huggingface.co/pyannote/embedding) for the `pyannote/embedding` model
+4) Install [huggingface-cli](https://huggingface.co/docs/huggingface_hub/quick-start#install-the-hub-library) and [log in](https://huggingface.co/docs/huggingface_hub/quick-start#login) with your user access token (or provide it manually in diart CLI or API).
+
+
 
 ## Usage
 
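
For step 4 of the list above, a minimal sketch of the token setup. This assumes a recent `huggingface_hub` release; check its docs if the CLI flags have changed:

```bash
# Install the Hugging Face Hub CLI (the [cli] extra is assumed to be available)
pip install -U "huggingface_hub[cli]"

# Log in interactively with the user access token created on huggingface.co;
# the token is cached locally so the gated pyannote models can be downloaded
huggingface-cli login
```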
@@ -144,20 +152,12 @@ The following parameters are supported when initializing `WhisperLiveKit`:
 - `--buffer_trimming` {_sentence, segment_} Buffer trimming strategy -- trim completed sentences marked with punctuation mark and detected by sentence segmenter, or the completed segments returned by Whisper. Sentence segmenter must be installed for "sentence" option.
 - `--buffer_trimming_sec` Buffer trimming length threshold in seconds. If buffer length is longer, trimming sentence/segment is triggered.
 
-5. **Open the Provided HTML**:
-
-   - By default, the server root endpoint `/` serves a simple `live_transcription.html` page.
-   - Open your browser at `http://localhost:8000` (or replace `localhost` and `8000` with whatever you specified).
-   - The page uses vanilla JavaScript and the WebSocket API to capture your microphone and stream audio to the server in real time.
-
-
 ## How the Live Interface Works
 
 - Once you **allow microphone access**, the page records small chunks of audio using the **MediaRecorder** API in **webm/opus** format.
 - These chunks are sent over a **WebSocket** to the FastAPI endpoint at `/asr`.
 - The Python server decodes `.webm` chunks on the fly using **FFmpeg** and streams them into the **whisper streaming** implementation for transcription.
 - **Partial transcription** appears as soon as enough audio is processed. The "unvalidated" text is shown in **lighter or grey color** (i.e., an 'aperçu') to indicate it's still buffered partial output. Once Whisper finalizes that segment, it's displayed in normal text.
-- You can watch the transcription update in near real time, ideal for demos, prototyping, or quick debugging.
 
 ### Deploying to a Remote Server
 
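
The buffer-trimming options in the hunk above can be combined into a launch command. A minimal sketch only: the `whisperlivekit-server` entry point and the `--host`/`--port` flags are assumptions not confirmed by this excerpt, while the two trimming flags are documented above:

```bash
# Hypothetical invocation: entry-point name and --host/--port are assumed;
# --buffer_trimming and --buffer_trimming_sec are the documented flags
whisperlivekit-server --host localhost --port 8000 \
  --buffer_trimming sentence \
  --buffer_trimming_sec 15
```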
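
To check that the `/asr` WebSocket endpoint is reachable, a tool like `websocat` can serve as a crude probe. This is purely an assumption for illustration; the real client is the browser page streaming binary webm/opus chunks:

```bash
# Requires websocat; -b sends stdin as binary WebSocket messages.
# This only exercises connectivity -- it is not a substitute for the browser client.
websocat -b ws://localhost:8000/asr < sample.webm
```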
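
The FFmpeg decoding step can be pictured with a stand-alone command. This illustrates the kind of invocation involved, not the server's actual code:

```bash
# Decode webm/opus from stdin into 16 kHz mono 16-bit PCM on stdout,
# the typical input format for Whisper-family models (illustrative only)
ffmpeg -loglevel error -i pipe:0 -f s16le -acodec pcm_s16le -ac 1 -ar 16000 pipe:1 \
  < chunk.webm > chunk.pcm
```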
@@ -165,10 +165,8 @@ If you want to **deploy** this setup:
 
 1. **Host the FastAPI app** behind a production-grade HTTP(S) server (like **Uvicorn + Nginx** or Docker). If you use HTTPS, use "wss" instead of "ws" in WebSocket URL.
 2. The **HTML/JS page** can be served by the same FastAPI app or a separate static host.
-3. Users open the page in **Chrome/Firefox** (any modern browser that supports MediaRecorder + WebSocket).
-
-No additional front-end libraries or frameworks are required. The WebSocket logic in `live_transcription.html` is minimal enough to adapt for your own custom UI or embed in other pages.
+3. Users open the page in **Chrome/Firefox** (any modern browser that supports MediaRecorder + WebSocket). No additional front-end libraries or frameworks are required.
 
 ## Acknowledgments
 
-This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.
+This project builds upon the foundational work of the Whisper Streaming and Diart projects. We extend our gratitude to the original authors for their contributions.
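
For step 1 of the deployment list above, one way to get HTTPS (and therefore `wss://`) directly from Uvicorn. The ASGI module path and certificate paths below are placeholders, not the project's documented entry point:

```bash
# Placeholder module:app -- substitute the ASGI app your installation exposes.
# --ssl-keyfile/--ssl-certfile make Uvicorn serve HTTPS so the page can use wss://
uvicorn your_app_module:app --host 0.0.0.0 --port 8000 \
  --ssl-keyfile /etc/ssl/private/your.key \
  --ssl-certfile /etc/ssl/certs/your.crt
```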