Hervé BREDIN committed on
Commit 49b252b · 1 Parent(s): de1360a

feat: update with better checkpoint

Files changed (7):
  1. README.md +19 -19
  2. config.yaml +0 -93
  3. hparams.yaml +0 -15
  4. overrides.yaml +0 -22
  5. pytorch_model.bin +2 -2
  6. tfevents.bin +0 -3
  7. train.log +0 -18
README.md CHANGED
@@ -90,25 +90,25 @@ segmentation = inference("audio.wav")
  ## Reproducible research
 
  In order to reproduce the results of the paper ["End-to-end speaker segmentation for overlap-aware resegmentation
- "](https://arxiv.org/abs/2104.04045), use the following hyper-parameters:
-
- Voice activity detection | `onset` | `offset` | `min_duration_on` | `min_duration_off`
- ----------------|---------|----------|-------------------|-------------------
- AMI Mix-Headset | 0.684 | 0.577 | 0.181 | 0.037
- DIHARD3 | 0.767 | 0.377 | 0.136 | 0.067
- VoxConverse | 0.767 | 0.713 | 0.182 | 0.501
-
- Overlapped speech detection | `onset` | `offset` | `min_duration_on` | `min_duration_off`
- ----------------|---------|----------|-------------------|-------------------
- AMI Mix-Headset | 0.448 | 0.362 | 0.116 | 0.187
- DIHARD3 | 0.430 | 0.320 | 0.091 | 0.144
- VoxConverse | 0.587 | 0.426 | 0.337 | 0.112
-
- Resegmentation of VBx | `onset` | `offset` | `min_duration_on` | `min_duration_off`
- ----------------|---------|----------|-------------------|-------------------
- AMI Mix-Headset | 0.542 | 0.527 | 0.044 | 0.705
- DIHARD3 | 0.592 | 0.489 | 0.163 | 0.182
- VoxConverse | 0.537 | 0.724 | 0.410 | 0.563
+ "](https://arxiv.org/abs/2104.04045), use `pyannote/segmentation@Interspeech2021` with the following hyper-parameters:
+
+ | Voice activity detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
+ | ------------------------ | ------- | -------- | ----------------- | ------------------ |
+ | AMI Mix-Headset | 0.684 | 0.577 | 0.181 | 0.037 |
+ | DIHARD3 | 0.767 | 0.377 | 0.136 | 0.067 |
+ | VoxConverse | 0.767 | 0.713 | 0.182 | 0.501 |
+
+ | Overlapped speech detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
+ | --------------------------- | ------- | -------- | ----------------- | ------------------ |
+ | AMI Mix-Headset | 0.448 | 0.362 | 0.116 | 0.187 |
+ | DIHARD3 | 0.430 | 0.320 | 0.091 | 0.144 |
+ | VoxConverse | 0.587 | 0.426 | 0.337 | 0.112 |
+
+ | Resegmentation of VBx | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
+ | --------------------- | ------- | -------- | ----------------- | ------------------ |
+ | AMI Mix-Headset | 0.542 | 0.527 | 0.044 | 0.705 |
+ | DIHARD3 | 0.592 | 0.489 | 0.163 | 0.182 |
+ | VoxConverse | 0.537 | 0.724 | 0.410 | 0.563 |
 
  Expected outputs (and VBx baseline) are also provided in the `/reproducible_research` sub-directories.
 
 
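The four hyper-parameters in these tables are the post-processing knobs of the detection pipelines built on top of this segmentation model. As an illustrative aside (not part of the diff above), here is a minimal sketch of how the DIHARD3 row of the voice activity detection table would typically be applied, assuming the pyannote.audio 2.x pipeline API:

```python
# Hypothetical usage sketch (not part of this commit): plugging the DIHARD3
# voice activity detection hyper-parameters from the table above into a
# pyannote.audio 2.x pipeline. "audio.wav" is a placeholder file name.
from pyannote.audio.pipelines import VoiceActivityDetection

pipeline = VoiceActivityDetection(segmentation="pyannote/segmentation@Interspeech2021")
pipeline.instantiate({
    "onset": 0.767,             # activation threshold to start a speech region
    "offset": 0.377,            # activation threshold to end a speech region
    "min_duration_on": 0.136,   # drop speech regions shorter than this (seconds)
    "min_duration_off": 0.067,  # fill non-speech gaps shorter than this (seconds)
})
speech = pipeline("audio.wav")  # returns a pyannote.core.Annotation
```

The `OverlappedSpeechDetection` pipeline takes the same four hyper-parameters, so the second table should apply in the same way; depending on the library version, loading the model may additionally require a Hugging Face authentication token.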
config.yaml DELETED
@@ -1,93 +0,0 @@
- protocol: X.SpeakerDiarization.Custom
- patience: 20
- task:
-   _target_: pyannote.audio.tasks.Segmentation
-   duration: 5.0
-   warm_up: 0.0
-   balance: null
-   overlap:
-     probability: 0.5
-     snr_min: 0.0
-     snr_max: 10.0
-   weight: null
-   batch_size: 32
-   num_workers: 10
-   pin_memory: false
-   loss: bce
-   vad_loss: bce
- model:
-   _target_: pyannote.audio.models.segmentation.PyanNet
-   sincnet:
-     stride: 10
-   lstm:
-     num_layers: 4
-     monolithic: true
-     dropout: 0.5
-   linear:
-     num_layers: 2
- optimizer:
-   _target_: torch.optim.Adam
-   lr: 0.001
-   betas:
-   - 0.9
-   - 0.999
-   eps: 1.0e-08
-   weight_decay: 0
-   amsgrad: false
- trainer:
-   _target_: pytorch_lightning.Trainer
-   accelerator: ddp
-   accumulate_grad_batches: 1
-   amp_backend: native
-   amp_level: O2
-   auto_lr_find: false
-   auto_scale_batch_size: false
-   auto_select_gpus: true
-   benchmark: true
-   check_val_every_n_epoch: 1
-   checkpoint_callback: true
-   deterministic: false
-   fast_dev_run: false
-   flush_logs_every_n_steps: 100
-   gpus: -1
-   gradient_clip_val: 0.5
-   limit_test_batches: 1.0
-   limit_train_batches: 1.0
-   limit_val_batches: 1.0
-   log_every_n_steps: 50
-   log_gpu_memory: null
-   max_epochs: 1000
-   max_steps: null
-   min_epochs: 1
-   min_steps: null
-   num_nodes: 1
-   num_processes: 1
-   num_sanity_val_steps: 2
-   overfit_batches: 0.0
-   precision: 32
-   prepare_data_per_node: true
-   process_position: 0
-   profiler: null
-   progress_bar_refresh_rate: 1
-   reload_dataloaders_every_epoch: false
-   replace_sampler_ddp: true
-   sync_batchnorm: false
-   terminate_on_nan: false
-   tpu_cores: null
-   track_grad_norm: -1
-   truncated_bptt_steps: null
-   val_check_interval: 1.0
-   weights_save_path: null
-   weights_summary: top
- augmentation:
-   transform: Compose
-   params:
-     shuffle: false
-     transforms:
-     - transform: AddBackgroundNoise
-       params:
-         background_paths: /gpfswork/rech/eie/commun/data/background/musan
-         min_snr_in_db: 5.0
-         max_snr_in_db: 15.0
-         mode: per_example
-         p: 0.9
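For readers unfamiliar with the deleted file's format: it is a Hydra-composed configuration, where each `_target_` entry names the class that the training script builds. A minimal sketch of that convention, assuming a locally saved copy of the file and the library versions of that era:

```python
# Hypothetical sketch (not part of this commit): how a Hydra-style config like
# the deleted config.yaml maps onto Python objects via its "_target_" entries.
from hydra.utils import instantiate
from omegaconf import OmegaConf

cfg = OmegaConf.load("config.yaml")  # assumes a local copy of the deleted file

# Builds pyannote.audio.models.segmentation.PyanNet with the sincnet/lstm/linear kwargs.
model = instantiate(cfg.model)

# Builds torch.optim.Adam; the model parameters are passed as an extra keyword argument.
optimizer = instantiate(cfg.optimizer, params=model.parameters())
```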
hparams.yaml DELETED
@@ -1,15 +0,0 @@
- linear:
-   hidden_size: 128
-   num_layers: 2
- lstm:
-   batch_first: true
-   bidirectional: true
-   dropout: 0.5
-   hidden_size: 128
-   monolithic: true
-   num_layers: 4
- num_channels: 1
- sample_rate: 16000
- sincnet:
-   sample_rate: 16000
-   stride: 10
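Side note (not part of the diff): these values mirror the hyper-parameters that PyTorch Lightning stores inside the checkpoint itself, which is presumably why the standalone file could be removed without losing information. A minimal sketch for inspecting them after loading the model, assuming the pyannote.audio 2.x API:

```python
# Hypothetical sketch (not part of this commit): the architecture hyper-parameters
# recorded in the deleted hparams.yaml are also stored in the checkpoint and can
# be inspected after loading the model.
from pyannote.audio import Model

model = Model.from_pretrained("pyannote/segmentation")
print(model.hparams)  # expected keys: sincnet, lstm, linear, sample_rate, num_channels
```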
overrides.yaml DELETED
@@ -1,22 +0,0 @@
- - protocol=X.SpeakerDiarization.Custom
- - task=Segmentation
- - task.batch_size=32
- - task.num_workers=10
- - task.duration=5.
- - task.warm_up=0.
- - task.loss=bce
- - task.vad_loss=bce
- - patience=20
- - model=PyanNet
- - +model.sincnet.stride=10
- - +model.lstm.num_layers=4
- - +model.lstm.monolithic=True
- - +model.lstm.dropout=0.5
- - +model.linear.num_layers=2
- - optimizer=Adam
- - optimizer.lr=0.001
- - trainer.benchmark=True
- - trainer.gradient_clip_val=0.5
- - trainer.gpus=-1
- - trainer.accelerator=ddp
- - +augmentation=background
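Side note: these are the Hydra command-line overrides that produced the `config.yaml` shown above. A minimal sketch of how a few of them could be replayed with Hydra's compose API; the `conf/` directory and `config` base name are assumed placeholders, not paths from this repository:

```python
# Hypothetical sketch (not part of this commit): replaying a few of the deleted
# overrides with Hydra's compose API. "conf" and "config" are assumed names for
# a local config directory and base config file. On older Hydra versions these
# helpers live under hydra.experimental instead.
from hydra import compose, initialize

with initialize(config_path="conf"):
    cfg = compose(
        config_name="config",
        overrides=[
            "protocol=X.SpeakerDiarization.Custom",
            "task=Segmentation",
            "task.batch_size=32",
            "model=PyanNet",
            "+model.sincnet.stride=10",  # "+" adds a key absent from the defaults
        ],
    )

print(cfg.task.batch_size)  # 32
```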
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c7d2e72ce20167e5eb05ce163b7af9762e92ef5fec7313435b676b74b8498afe
- size 17739960
+ oid sha256:0b5b3216d60a2d32fc086b47ea8c67589aaeb26b7e07fcbe620d6d0b83e209ea
+ size 17719103
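Side note: `pytorch_model.bin` is tracked with Git LFS, so the versioned content is only a pointer whose `oid` is the SHA-256 of the actual checkpoint. A minimal sketch for verifying a downloaded checkpoint against the new pointer:

```python
# Hypothetical sketch (not part of this commit): checking that a downloaded
# pytorch_model.bin matches the SHA-256 recorded in the new Git LFS pointer.
import hashlib

EXPECTED = "0b5b3216d60a2d32fc086b47ea8c67589aaeb26b7e07fcbe620d6d0b83e209ea"

sha256 = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha256.update(chunk)

assert sha256.hexdigest() == EXPECTED, "checkpoint does not match the LFS pointer"
```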
tfevents.bin DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c2b33b3855ecc446b1913916d8369ede8597b66491541a6c67e5ceafc15bcdb3
- size 13357699
train.log DELETED
@@ -1,18 +0,0 @@
- [2021-03-19 18:29:57,529][lightning][INFO] - GPU available: True, used: True
- [2021-03-19 18:29:57,531][lightning][INFO] - TPU available: None, using: 0 TPU cores
- [2021-03-19 18:29:57,531][lightning][INFO] - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
- [2021-03-19 18:30:08,622][lightning][INFO] - initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/4
- [2021-03-19 18:32:58,993][lightning][INFO] - Set SLURM handle signals.
- [2021-03-19 18:32:59,068][lightning][INFO] -
-   | Name       | Type       | Params | In sizes       | Out sizes
- ------------------------------------------------------------------------------------------------------------
- 0 | sincnet    | SincNet    | 42.6 K | [32, 1, 80000] | [32, 60, 293]
- 1 | lstm       | LSTM       | 1.4 M  | [32, 293, 60]  | [[32, 293, 256], [[8, 32, 128], [8, 32, 128]]]
- 2 | linear     | ModuleList | 49.4 K | ?              | ?
- 3 | classifier | Linear     | 516    | [32, 293, 128] | [32, 293, 4]
- 4 | activation | Sigmoid    | 0      | [32, 293, 4]   | [32, 293, 4]
- ------------------------------------------------------------------------------------------------------------
- 1.5 M     Trainable params
- 0         Non-trainable params
- 1.5 M     Total params
- [2021-03-23 02:26:47,615][lightning][INFO] - bypassing sigterm
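Side note: the deleted log's model summary reports roughly 1.5 M trainable parameters. A minimal sketch for cross-checking that figure from the checkpoint itself, assuming it is a standard PyTorch Lightning checkpoint with a `state_dict` entry:

```python
# Hypothetical sketch (not part of this commit): cross-checking the ~1.5 M
# parameter count reported in the deleted train.log against the checkpoint.
import torch

checkpoint = torch.load("pytorch_model.bin", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # Lightning checkpoints nest weights here

total = sum(tensor.numel() for tensor in state_dict.values())
print(f"{total / 1e6:.1f} M parameters")  # expected: about 1.5 M (buffers may add a little)
```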