qfuxa committed
Commit f94a527 · 1 Parent(s): dc789f0

Solve #95 and #96

Files changed (2):
  1. README.md +17 -17
  2. whisperlivekit/core.py +20 -16
README.md CHANGED
@@ -136,26 +136,26 @@ For a complete audio processing example, check [whisper_fastapi_online_server.py
 The following parameters are supported when initializing `WhisperLiveKit`:
 
 - `--host` and `--port` let you specify the server's IP/port.
-- `-min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value aligns with the chunk size selected in the frontend. If not aligned, the system will work but may unnecessarily over-process audio data.
-- `--transcription`: Enable/disable transcription (default: True)
-- `--diarization`: Enable/disable speaker diarization (default: False)
-- `--confidence-validation`: Use confidence scores for faster validation. Transcription will be faster but punctuation might be less accurate (default: True)
-- `--warmup-file`: The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast. :
+- `--min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value aligns with the chunk size selected in the frontend. If not aligned, the system will work but may unnecessarily over-process audio data.
+- `--no-transcription`: Disable transcription (enabled by default)
+- `--diarization`: Enable speaker diarization (disabled by default)
+- `--confidence-validation`: Use confidence scores for faster validation. Transcription will be faster but punctuation might be less accurate (disabled by default)
+- `--warmup-file`: The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast:
     - If not set, uses https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav.
     - If False, no warmup is performed.
 - `--min-chunk-size` Minimum audio chunk size in seconds. It waits up to this time to do processing. If the processing takes shorter time, it waits, otherwise it processes the whole segment that was received by this time.
-- `--model` {_tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo_}
-Name size of the Whisper model to use (default: tiny). The model is automatically downloaded from the model hub if not present in model cache dir.
-- `--model_cache_dir` Overriding the default model cache dir where models downloaded from the hub are saved
-- `--model_dir` Dir where Whisper model.bin and other files are saved. This option overrides --model and --model_cache_dir parameter.
-- `--lan`, --language Source language code, e.g. en,de,cs, or 'auto' for language detection.
-- `--task` {_transcribe, translate_} Transcribe or translate. If translate is set, we recommend avoiding the _large-v3-turbo_ backend, as it [performs significantly worse](https://github.com/QuentinFuxa/whisper_streaming_web/issues/40#issuecomment-2652816533) than other models for translation.
-- `--backend` {_faster-whisper, whisper_timestamped, openai-api, mlx-whisper_} Load only this backend for Whisper processing.
-- `--vac` Use VAC = voice activity controller. Requires torch.
-- `--vac-chunk-size` VAC sample size in seconds.
-- `--vad` Use VAD = voice activity detection, with the default parameters.
-- `--buffer_trimming` {_sentence, segment_} Buffer trimming strategy -- trim completed sentences marked with punctuation mark and detected by sentence segmenter, or the completed segments returned by Whisper. Sentence segmenter must be installed for "sentence" option.
-- `--buffer_trimming_sec` Buffer trimming length threshold in seconds. If buffer length is longer, trimming sentence/segment is triggered.
+- `--model`: Name size of the Whisper model to use (default: tiny). Suggested values: tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo. The model is automatically downloaded from the model hub if not present in model cache dir.
+- `--model_cache_dir`: Overriding the default model cache dir where models downloaded from the hub are saved
+- `--model_dir`: Dir where Whisper model.bin and other files are saved. This option overrides --model and --model_cache_dir parameter.
+- `--lan`, `--language`: Source language code, e.g. en,de,cs, or 'auto' for language detection.
+- `--task` {_transcribe, translate_}: Transcribe or translate. If translate is set, we recommend avoiding the _large-v3-turbo_ backend, as it [performs significantly worse](https://github.com/QuentinFuxa/whisper_streaming_web/issues/40#issuecomment-2652816533) than other models for translation.
+- `--backend` {_faster-whisper, whisper_timestamped, openai-api, mlx-whisper_}: Load only this backend for Whisper processing.
+- `--vac`: Use VAC = voice activity controller. Requires torch. (disabled by default)
+- `--vac-chunk-size`: VAC sample size in seconds.
+- `--no-vad`: Disable VAD (voice activity detection), which is enabled by default.
+- `--buffer_trimming` {_sentence, segment_}: Buffer trimming strategy -- trim completed sentences marked with punctuation mark and detected by sentence segmenter, or the completed segments returned by Whisper. Sentence segmenter must be installed for "sentence" option.
+- `--buffer_trimming_sec`: Buffer trimming length threshold in seconds. If buffer length is longer, trimming sentence/segment is triggered.
+
 
 ## How the Live Interface Works
 
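The switch above from value-taking boolean options (`--transcription True/False`) to presence-only enable/disable flags lines up with a well-known argparse behavior: `type=bool` simply calls `bool()` on the raw string, and any non-empty string, including "False", is truthy, so such an option can never actually be turned off from the command line. The snippet below is a standalone illustration of that pitfall, not code from this repository:

```python
# Standalone illustration of the argparse `type=bool` pitfall (not project code).
# bool() applied to any non-empty string returns True, so "False" cannot disable the option.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--transcription", type=bool, default=True)

args = parser.parse_args(["--transcription", "False"])
print(args.transcription)        # True -- bool("False") is True
print(bool(""), bool("False"))   # False True
```

With `action="store_true"` the mere presence of a flag sets it, which is why the documentation now exposes `--no-transcription` and `--no-vad` rather than asking users to pass a boolean value.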
whisperlivekit/core.py CHANGED
@@ -29,23 +29,21 @@ def parse_args():
 
     parser.add_argument(
         "--confidence-validation",
-        type=bool,
-        default=False,
+        action="store_true",
         help="Accelerates validation of tokens using confidence scores. Transcription will be faster but punctuation might be less accurate.",
     )
 
     parser.add_argument(
         "--diarization",
-        type=bool,
-        default=True,
-        help="Whether to enable speaker diarization.",
+        action="store_true",
+        default=False,
+        help="Enable speaker diarization.",
     )
 
     parser.add_argument(
-        "--transcription",
-        type=bool,
-        default=True,
-        help="To disable to only see live diarization results.",
+        "--no-transcription",
+        action="store_true",
+        help="Disable transcription to only see live diarization results.",
     )
 
     parser.add_argument(
@@ -54,15 +52,14 @@ def parse_args():
         default=0.5,
         help="Minimum audio chunk size in seconds. It waits up to this time to do processing. If the processing takes shorter time, it waits, otherwise it processes the whole segment that was received by this time.",
     )
+
     parser.add_argument(
         "--model",
         type=str,
         default="tiny",
-        choices="tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large,large-v3-turbo".split(
-            ","
-        ),
-        help="Name size of the Whisper model to use (default: large-v2). The model is automatically downloaded from the model hub if not present in model cache dir.",
+        help="Name size of the Whisper model to use (default: tiny). Suggested values: tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large,large-v3-turbo. The model is automatically downloaded from the model hub if not present in model cache dir.",
     )
+
     parser.add_argument(
         "--model_cache_dir",
         type=str,
@@ -105,12 +102,13 @@ def parse_args():
     parser.add_argument(
         "--vac-chunk-size", type=float, default=0.04, help="VAC sample size in seconds."
     )
+
     parser.add_argument(
-        "--vad",
+        "--no-vad",
         action="store_true",
-        default=True,
-        help="Use VAD = voice activity detection, with the default parameters.",
+        help="Disable VAD (voice activity detection).",
     )
+
     parser.add_argument(
         "--buffer_trimming",
         type=str,
@@ -134,6 +132,12 @@ def parse_args():
     )
 
     args = parser.parse_args()
+
+    args.transcription = not args.no_transcription
+    args.vad = not args.no_vad
+    delattr(args, 'no_transcription')
+    delattr(args, 'no_vad')
+
     return args
 
 class WhisperLiveKit:
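Taken together, the parser changes drop `type=bool` in favor of `store_true` flags and then restore the positive attribute names after parsing, presumably so downstream code that reads `args.transcription` and `args.vad` keeps working. Below is a condensed, self-contained sketch of that pattern, limited to the affected options and simplified from the patched `parse_args`:

```python
# Condensed sketch of the flag scheme introduced by this commit
# (only the affected options; simplified from whisperlivekit/core.py).
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--confidence-validation", action="store_true",
                        help="Accelerate token validation using confidence scores.")
    parser.add_argument("--diarization", action="store_true", default=False,
                        help="Enable speaker diarization.")
    parser.add_argument("--no-transcription", action="store_true",
                        help="Disable transcription.")
    parser.add_argument("--no-vad", action="store_true",
                        help="Disable voice activity detection.")
    args = parser.parse_args(argv)

    # Map the negative flags back to the positive attributes the rest of the
    # code expects, then drop the temporary no_* attributes.
    args.transcription = not args.no_transcription
    args.vad = not args.no_vad
    delattr(args, "no_transcription")
    delattr(args, "no_vad")
    return args

if __name__ == "__main__":
    # Defaults: transcription and VAD on, diarization and confidence validation off.
    print(parse_args([]))                # Namespace(confidence_validation=False, diarization=False, transcription=True, vad=True)
    print(parse_args(["--no-vad"]).vad)  # False
```

Deleting the temporary `no_*` attributes keeps the resulting namespace the same shape downstream consumers saw before the change, apart from the corrected defaults.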