Dominik Macháček committed on
Commit 9310b4f · 1 Parent(s): 88dc796

readme parameter update

Files changed (1)
  1. README.md +13 -5
README.md CHANGED
@@ -19,24 +19,32 @@ The backend is loaded only when chosen. The unused one does not have to be insta
 ## Usage
 
 ```
-(p3) $ python3 whisper_online.py -h
-usage: whisper_online.py [-h] [--min-chunk-size MIN_CHUNK_SIZE] [--model MODEL] [--model_dir MODEL_DIR] [--lan LAN] [--start_at START_AT] [--backend {faster-whisper,whisper_timestamped}] audio_path
+usage: whisper_online.py [-h] [--min-chunk-size MIN_CHUNK_SIZE] [--model {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large}] [--model_cache_dir MODEL_CACHE_DIR] [--model_dir MODEL_DIR] [--lan LAN] [--task {transcribe,translate}]
+                        [--start_at START_AT] [--backend {faster-whisper,whisper_timestamped}] [--offline] [--vad]
+                        audio_path
 
 positional arguments:
-  audio_path
+  audio_path            Filename of 16kHz mono channel wav, on which live streaming is simulated.
 
 options:
   -h, --help            show this help message and exit
   --min-chunk-size MIN_CHUNK_SIZE
                         Minimum audio chunk size in seconds. It waits up to this time to do processing. If the processing takes shorter time, it waits, otherwise it processes the whole segment that was received by this time.
-  --model MODEL         name of the Whisper model to use (default: large-v2, options: {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large}
+  --model {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large}
+                        Name size of the Whisper model to use (default: large-v2). The model is automatically downloaded from the model hub if not present in model cache dir.
+  --model_cache_dir MODEL_CACHE_DIR
+                        Overriding the default model cache dir where models downloaded from the hub are saved
   --model_dir MODEL_DIR
-                        the path where Whisper models are saved (or downloaded to). Default: ./disk-cache-dir
+                        Dir where Whisper model.bin and other files are saved. This option overrides --model and --model_cache_dir parameter.
   --lan LAN, --language LAN
                         Language code for transcription, e.g. en,de,cs.
+  --task {transcribe,translate}
+                        Transcribe or translate.
   --start_at START_AT   Start processing audio at this time.
   --backend {faster-whisper,whisper_timestamped}
                         Load only this backend for Whisper processing.
+  --offline             Offline mode.
+  --vad                 Use VAD = voice activity detection, with the default parameters.
 ```
 
 Example:
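For orientation, the updated help text corresponds to an `argparse` configuration roughly like the sketch below. This is reconstructed from the README diff, not taken from `whisper_online.py` itself; the defaults marked as assumptions (for `--min-chunk-size`, `--lan`, and `--backend`) are illustrative guesses, not documented values.

```python
import argparse

# Sketch of a parser matching the new help text above.
# Reconstructed from the README diff; the real whisper_online.py may differ.
parser = argparse.ArgumentParser(prog="whisper_online.py")
parser.add_argument("audio_path",
                    help="Filename of 16kHz mono channel wav, on which live streaming is simulated.")
parser.add_argument("--min-chunk-size", type=float, default=1.0,  # default is an assumption
                    help="Minimum audio chunk size in seconds.")
parser.add_argument("--model", default="large-v2",
                    choices="tiny.en tiny base.en base small.en small medium.en medium large-v1 large-v2 large".split(),
                    help="Name size of the Whisper model to use (default: large-v2).")
parser.add_argument("--model_cache_dir",
                    help="Overrides the default model cache dir where downloaded models are saved.")
parser.add_argument("--model_dir",
                    help="Dir with model.bin etc.; overrides --model and --model_cache_dir.")
parser.add_argument("--lan", "--language", default="en",  # default is an assumption
                    help="Language code for transcription, e.g. en,de,cs.")
parser.add_argument("--task", choices=["transcribe", "translate"], default="transcribe",
                    help="Transcribe or translate.")
parser.add_argument("--start_at", type=float, default=0.0,
                    help="Start processing audio at this time.")
parser.add_argument("--backend", choices=["faster-whisper", "whisper_timestamped"],
                    default="faster-whisper",  # default is an assumption
                    help="Load only this backend for Whisper processing.")
parser.add_argument("--offline", action="store_true", help="Offline mode.")
parser.add_argument("--vad", action="store_true",
                    help="Use VAD (voice activity detection) with the default parameters.")

# Parse a hypothetical command line instead of sys.argv, for illustration.
args = parser.parse_args(["--model", "small", "--vad", "sample.wav"])
print(args.model, args.vad, args.audio_path)  # → small True sample.wav
```

Note how `--model_dir` and `--model`/`--model_cache_dir` coexist as separate options: the precedence described in the help (`--model_dir` overriding the others) would be enforced in the loading code, not by `argparse`.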