qfuxa committed
Commit 1cea20a · 1 Parent(s): 50bbd26

Rename /ws to /asr to distinguish the endpoint path from the ws:// protocol

README.md CHANGED
@@ -86,7 +86,7 @@ This project reuses and extends code from the original Whisper Streaming reposit
  ### How the Live Interface Works

  - Once you **allow microphone access**, the page records small chunks of audio using the **MediaRecorder** API in **webm/opus** format.
- - These chunks are sent over a **WebSocket** to the FastAPI endpoint at `/ws`.
+ - These chunks are sent over a **WebSocket** to the FastAPI endpoint at `/asr`.
  - The Python server decodes `.webm` chunks on the fly using **FFmpeg** and streams them into the **whisper streaming** implementation for transcription.
  - **Partial transcription** appears as soon as enough audio is processed. The “unvalidated” text is shown in **lighter or grey color** (i.e., an ‘aperçu’) to indicate it’s still buffered partial output. Once Whisper finalizes that segment, it’s displayed in normal text.
  - You can watch the transcription update in near real time, ideal for demos, prototyping, or quick debugging.
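For reference, the renamed endpoint can also be exercised without the browser page. The sketch below is a minimal, hypothetical Python client (not part of this commit) that streams a pre-recorded `sample.webm` file to `ws://localhost:8000/asr` in roughly one-second chunks and prints whatever the server sends back. The `websockets` dependency, the file name, the chunk size, and the assumption that transcription results arrive as text frames on the same socket are all illustrative, not taken from the repository.

```python
# Hypothetical test client for the renamed /asr endpoint (not part of this
# commit). Assumptions: `pip install websockets`, the server runs locally on
# port 8000, sample.webm exists, and transcription results arrive as text
# frames on the same socket.
import asyncio

import websockets

WS_URL = "ws://localhost:8000/asr"  # new endpoint path introduced by this commit
CHUNK_BYTES = 32_000                # rough stand-in for ~1 s of MediaRecorder output
CHUNK_INTERVAL = 1.0                # seconds between sends, mimicking chunkDuration

async def stream_file(path: str) -> None:
    async with websockets.connect(WS_URL) as ws:

        async def print_responses() -> None:
            # Print partial/final transcripts as the server pushes them.
            async for message in ws:
                print("server:", message)

        reader = asyncio.create_task(print_responses())
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_BYTES):
                await ws.send(chunk)  # binary frame, like the browser recorder
                await asyncio.sleep(CHUNK_INTERVAL)
        await asyncio.sleep(5)  # give the server time to flush remaining output
        reader.cancel()

if __name__ == "__main__":
    asyncio.run(stream_file("sample.webm"))
```

Chunk boundaries are arbitrary here: the server-side decoder consumes the socket payloads as one continuous byte stream, so the slices do not need to align with WebM cluster boundaries.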
src/live_transcription.html CHANGED
@@ -92,7 +92,7 @@
  </div>
  <div>
  <label for="websocketInput">WebSocket URL:</label>
- <input id="websocketInput" type="text" value="ws://localhost:8000/ws" />
+ <input id="websocketInput" type="text" value="ws://localhost:8000/asr" />
  </div>
  </div>
  </div>
@@ -105,7 +105,7 @@
  websocket,
  recorder,
  chunkDuration = 1000,
- websocketUrl = "ws://localhost:8000/ws";
+ websocketUrl = "ws://localhost:8000/asr";

  // Tracks whether the user voluntarily closed the WebSocket
  let userClosing = false;
whisper_fastapi_online_server.py CHANGED
@@ -57,7 +57,7 @@ async def start_ffmpeg_decoder():
      )
      return process

- @app.websocket("/ws")
+ @app.websocket("/asr")
  async def websocket_endpoint(websocket: WebSocket):
      await websocket.accept()
      print("WebSocket connection opened.")