alibabasglab committed
Commit de31c35 · verified · 1 Parent(s): 4d997b9

Upload 6 files
checkpoints/.DS_Store ADDED
Binary file (6.15 kB).
 
checkpoints/log_YGD_gesture_seg_2spk/config.yaml ADDED
@@ -0,0 +1,40 @@
+ ## Config file
+
+ # Log
+ seed: 777
+ use_cuda: 1 # 1 for True, 0 for False
+
+ # dataset
+ speaker_no: 2
+ mix_lst_path: ./data/YGD/mixture_data_list_2mix.csv
+ audio_direc: /mnt/nas_sg/wulanchabu/zexu.pan/datasets/gesture_TED/audio_clean/
+ reference_direc: /mnt/nas_sg/wulanchabu/zexu.pan/datasets/gesture_TED/visual/gesture_embedding/
+ audio_sr: 16000
+ ref_sr: 15
+
+ # dataloader
+ num_workers: 4
+ batch_size: 8
+ accu_grad: 1
+ effec_batch_size: 16 # per GPU; only used if accu_grad is set to 1; must be a multiple of batch_size
+ max_length: 10 # truncate utterances in the dataloader, in seconds
+
+ # network settings
+ init_from: None # 'None' or a log name 'log_2024-07-22(18:12:13)'
+ causal: 0 # 1 for True, 0 for False
+ network_reference:
+ cue: gesture # lip or speech or gesture or EEG
+ network_audio:
+ backbone: seg
+ N: 256
+ L: 40
+ B: 64
+ H: 128
+ K: 100
+ R: 6
+
+ # optimizer
+ loss_type: sisdr # "snr", "sisdr", "hybrid"
+ init_learning_rate: 0.0005
+ max_epoch: 200
+ clip_grad_norm: 5
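
The dataloader block above implies two-step gradient accumulation (effec_batch_size / batch_size = 16 / 8). A minimal sketch of reading this file and deriving that factor with PyYAML; the repository itself loads configs through yamlargparse, so this is illustrative only:

```python
# Illustrative only: the project parses this YAML via yamlargparse; here we
# just read it with PyYAML and compute the gradient-accumulation factor
# implied by effec_batch_size / batch_size.
import yaml

with open("checkpoints/log_YGD_gesture_seg_2spk/config.yaml") as f:
    cfg = yaml.safe_load(f)

accu_steps = cfg["effec_batch_size"] // cfg["batch_size"] if cfg["accu_grad"] else 1
print(cfg["batch_size"], cfg["effec_batch_size"], accu_steps)  # 8 16 2
```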
checkpoints/log_YGD_gesture_seg_2spk/last_best_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78b2d7d6bc6b97496b85c6db9247055538c9ed3fe33bf0a9818af0f340c8aae3
+ size 53037174
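
Both .pt files are stored through Git LFS, so the diff shows only a pointer; the ~53 MB weights arrive after `git lfs pull`. A hedged sketch of inspecting the checkpoint once it is materialised (the dictionary layout is an assumption, not something this commit shows):

```python
# Hedged sketch: inspect the LFS-backed checkpoint after `git lfs pull`.
# Only the file path comes from this commit; the key names are guesses.
import torch

ckpt = torch.load(
    "checkpoints/log_YGD_gesture_seg_2spk/last_best_checkpoint.pt",
    map_location="cpu",
)
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model / optimizer / epoch (unverified)
```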
checkpoints/log_YGD_gesture_seg_2spk/last_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:865196d60f5123d77d71bf4f516a197abacefcae7327f4328bc61d4e0eec1123
+ size 53037174
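
The pointer files themselves follow the Git LFS spec (version, oid, size). A small standard-library sketch that parses a pointer and verifies a downloaded blob against its recorded sha256 and size; the helper functions are hypothetical, not part of this repository:

```python
# Hypothetical helpers: parse a Git LFS pointer and verify a downloaded blob.
import hashlib
from pathlib import Path

def parse_lfs_pointer(path: str) -> dict:
    fields = dict(line.split(" ", 1) for line in Path(path).read_text().splitlines() if line)
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),  # Python 3.9+
        "size": int(fields["size"]),
    }

def verify_blob(blob_path: str, pointer: dict) -> bool:
    data = Path(blob_path).read_bytes()
    return len(data) == pointer["size"] and hashlib.sha256(data).hexdigest() == pointer["oid"]

ptr = parse_lfs_pointer("checkpoints/log_YGD_gesture_seg_2spk/last_checkpoint.pt")
print(ptr["size"])  # 53037174 in this commit
```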
checkpoints/log_YGD_gesture_seg_2spk/log_2024-09-26(16:20:37).txt ADDED
@@ -0,0 +1,304 @@
+ ## Config file
+
+ # Log
+ seed: 777
+ use_cuda: 1 # 1 for True, 0 for False
+
+ # dataset
+ speaker_no: 2
+ mix_lst_path: ./data/YGD/mixture_data_list_2mix.csv
+ audio_direc: /mnt/nas_sg/mit_sg/zexu.pan/datasets/gesture_TED/audio_clean/
+ reference_direc: /mnt/nas_sg/mit_sg/zexu.pan/datasets/gesture_TED/visual/gesture_embedding/
+ audio_sr: 16000
+ visual_sr: 15
+
+ # dataloader
+ num_workers: 4
+ batch_size: 8
+ accu_grad: 1
+ effec_batch_size: 16 # per GPU; only used if accu_grad is set to 1; must be a multiple of batch_size
+ max_length: 10 # truncate utterances in the dataloader, in seconds
+
+ # network settings
+ init_from: None # 'None' or a log name 'log_2024-07-22(18:12:13)'
+ causal: 0 # 1 for True, 0 for False
+ network_reference:
+ cue: gesture # lip or speech or gesture or EEG
+ network_audio:
+ backbone: seg
+ N: 256
+ L: 40
+ B: 64
+ H: 128
+ K: 100
+ R: 6
+
+ # optimizer
+ loss_type: sisdr # "snr", "sisdr", "hybrid"
+ init_learning_rate: 0.0005
+ max_epoch: 200
+ clip_grad_norm: 5
+ W0926 16:20:40.020566 139890934855488 torch/distributed/run.py:757]
+ W0926 16:20:40.020566 139890934855488 torch/distributed/run.py:757] *****************************************
+ W0926 16:20:40.020566 139890934855488 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0926 16:20:40.020566 139890934855488 torch/distributed/run.py:757] *****************************************
+ started on checkpoints/log_2024-09-26(16:20:37)
+
+ namespace(seed=777, use_cuda=1, config=[<yamlargparse.Path object at 0x7f4635886e80>], checkpoint_dir='checkpoints/log_2024-09-26(16:20:37)', train_from_last_checkpoint=0, loss_type='sisdr', init_learning_rate=0.0005, max_epoch=200, clip_grad_norm=5.0, batch_size=8, accu_grad=1, effec_batch_size=16, max_length=10, num_workers=4, causal=0, network_reference=namespace(cue='gesture'), network_audio=namespace(backbone='seg', N=256, L=40, B=64, H=128, K=100, R=6), init_from='None', mix_lst_path='./data/YGD/mixture_data_list_2mix.csv', audio_direc='/mnt/nas_sg/mit_sg/zexu.pan/datasets/gesture_TED/audio_clean/', reference_direc='/mnt/nas_sg/mit_sg/zexu.pan/datasets/gesture_TED/visual/gesture_embedding/', speaker_no=2, audio_sr=16000, visual_sr=15, local_rank=0, distributed=True, world_size=2, device=device(type='cuda'))
+ network_wrapper(
+   (sep_network): seg(
+     (encoder): Encoder(
+       (conv1d_U): Conv1d(1, 256, kernel_size=(40,), stride=(20,), bias=False)
+     )
+     (separator): rnn(
+       (layer_norm): GroupNorm(1, 256, eps=1e-08, affine=True)
+       (bottleneck_conv1x1): Conv1d(256, 64, kernel_size=(1,), stride=(1,), bias=False)
+       (dual_rnn): ModuleList(
+         (0-5): 6 x Dual_RNN_Block(
+           (intra_rnn): LSTM(64, 128, batch_first=True, bidirectional=True)
+           (inter_rnn): LSTM(64, 128, batch_first=True, bidirectional=True)
+           (intra_norm): GroupNorm(1, 64, eps=1e-08, affine=True)
+           (inter_norm): GroupNorm(1, 64, eps=1e-08, affine=True)
+           (intra_linear): Linear(in_features=256, out_features=64, bias=True)
+           (inter_linear): Linear(in_features=256, out_features=64, bias=True)
+         )
+       )
+       (prelu): PReLU(num_parameters=1)
+       (mask_conv1x1): Conv1d(64, 256, kernel_size=(1,), stride=(1,), bias=False)
+       (visual_net): LSTM(30, 128, num_layers=5, batch_first=True, dropout=0.3, bidirectional=True)
+       (av_conv): Conv1d(320, 64, kernel_size=(1,), stride=(1,), bias=False)
+     )
+     (decoder): Decoder(
+       (basis_signals): Linear(in_features=256, out_features=40, bias=False)
+     )
+   )
+ )
+
+ Total number of parameters: 4401921
+
+
+ Total number of trainable parameters: 4401921
+
+ Start new training from scratch
+ [rank1]:[W reducer.cpp:1389] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
+ [rank0]:[W reducer.cpp:1389] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
+ Train Summary | End of Epoch 1 | Time 5588.81s | Train Loss -1.305
+ Valid Summary | End of Epoch 1 | Time 102.37s | Valid Loss -2.629
+ Test Summary | End of Epoch 1 | Time 79.44s | Test Loss -2.629
+ Found new best model, dict saved
+ Train Summary | End of Epoch 2 | Time 5433.42s | Train Loss -3.538
+ Valid Summary | End of Epoch 2 | Time 34.91s | Valid Loss -4.547
+ Test Summary | End of Epoch 2 | Time 22.02s | Test Loss -4.547
+ Found new best model, dict saved
+ Train Summary | End of Epoch 3 | Time 5432.23s | Train Loss -5.161
+ Valid Summary | End of Epoch 3 | Time 33.92s | Valid Loss -5.681
+ Test Summary | End of Epoch 3 | Time 24.24s | Test Loss -5.681
+ Found new best model, dict saved
+ Train Summary | End of Epoch 4 | Time 5428.65s | Train Loss -6.261
+ Valid Summary | End of Epoch 4 | Time 39.14s | Valid Loss -6.456
+ Test Summary | End of Epoch 4 | Time 21.84s | Test Loss -6.456
+ Found new best model, dict saved
+ Train Summary | End of Epoch 5 | Time 5438.49s | Train Loss -7.072
+ Valid Summary | End of Epoch 5 | Time 35.85s | Valid Loss -7.183
+ Test Summary | End of Epoch 5 | Time 21.63s | Test Loss -7.183
+ Found new best model, dict saved
+ Train Summary | End of Epoch 6 | Time 5437.45s | Train Loss -7.711
+ Valid Summary | End of Epoch 6 | Time 37.30s | Valid Loss -7.557
+ Test Summary | End of Epoch 6 | Time 21.09s | Test Loss -7.557
+ Found new best model, dict saved
+ Train Summary | End of Epoch 7 | Time 5444.24s | Train Loss -8.252
+ Valid Summary | End of Epoch 7 | Time 39.37s | Valid Loss -7.818
+ Test Summary | End of Epoch 7 | Time 22.48s | Test Loss -7.818
+ Found new best model, dict saved
+ Train Summary | End of Epoch 8 | Time 5445.25s | Train Loss -8.708
+ Valid Summary | End of Epoch 8 | Time 38.85s | Valid Loss -8.094
+ Test Summary | End of Epoch 8 | Time 21.59s | Test Loss -8.094
+ Found new best model, dict saved
+ Train Summary | End of Epoch 9 | Time 5440.56s | Train Loss -9.104
+ Valid Summary | End of Epoch 9 | Time 39.05s | Valid Loss -8.193
+ Test Summary | End of Epoch 9 | Time 21.58s | Test Loss -8.193
+ Found new best model, dict saved
+ Train Summary | End of Epoch 10 | Time 5441.83s | Train Loss -9.437
+ Valid Summary | End of Epoch 10 | Time 40.08s | Valid Loss -8.662
+ Test Summary | End of Epoch 10 | Time 21.44s | Test Loss -8.662
+ Found new best model, dict saved
+ Train Summary | End of Epoch 11 | Time 5441.18s | Train Loss -9.732
+ Valid Summary | End of Epoch 11 | Time 39.35s | Valid Loss -8.394
+ Test Summary | End of Epoch 11 | Time 21.36s | Test Loss -8.394
+ Train Summary | End of Epoch 12 | Time 5444.51s | Train Loss -9.903
+ Valid Summary | End of Epoch 12 | Time 40.10s | Valid Loss -8.676
+ Test Summary | End of Epoch 12 | Time 21.38s | Test Loss -8.676
+ Found new best model, dict saved
+ Train Summary | End of Epoch 13 | Time 5441.35s | Train Loss -10.155
+ Valid Summary | End of Epoch 13 | Time 39.96s | Valid Loss -8.511
+ Test Summary | End of Epoch 13 | Time 21.63s | Test Loss -8.511
+ Train Summary | End of Epoch 14 | Time 5455.11s | Train Loss -10.352
+ Valid Summary | End of Epoch 14 | Time 41.14s | Valid Loss -8.700
+ Test Summary | End of Epoch 14 | Time 22.14s | Test Loss -8.700
+ Found new best model, dict saved
+ Train Summary | End of Epoch 15 | Time 5440.99s | Train Loss -10.585
+ Valid Summary | End of Epoch 15 | Time 43.36s | Valid Loss -8.711
+ Test Summary | End of Epoch 15 | Time 21.50s | Test Loss -8.711
+ Found new best model, dict saved
+ Train Summary | End of Epoch 16 | Time 5456.88s | Train Loss -10.789
+ Valid Summary | End of Epoch 16 | Time 43.19s | Valid Loss -8.695
+ Test Summary | End of Epoch 16 | Time 21.59s | Test Loss -8.695
+ Train Summary | End of Epoch 17 | Time 5443.70s | Train Loss -10.967
+ Valid Summary | End of Epoch 17 | Time 43.61s | Valid Loss -9.081
+ Test Summary | End of Epoch 17 | Time 21.51s | Test Loss -9.081
+ Found new best model, dict saved
+ Train Summary | End of Epoch 18 | Time 5459.34s | Train Loss -11.180
+ Valid Summary | End of Epoch 18 | Time 43.24s | Valid Loss -8.954
+ Test Summary | End of Epoch 18 | Time 21.62s | Test Loss -8.954
+ Train Summary | End of Epoch 19 | Time 5447.97s | Train Loss -11.329
+ Valid Summary | End of Epoch 19 | Time 44.06s | Valid Loss -8.925
+ Test Summary | End of Epoch 19 | Time 21.67s | Test Loss -8.925
+ Train Summary | End of Epoch 20 | Time 5458.93s | Train Loss -11.507
+ Valid Summary | End of Epoch 20 | Time 41.99s | Valid Loss -9.317
+ Test Summary | End of Epoch 20 | Time 22.73s | Test Loss -9.317
+ Found new best model, dict saved
+ Train Summary | End of Epoch 21 | Time 5447.55s | Train Loss -11.624
+ Valid Summary | End of Epoch 21 | Time 43.85s | Valid Loss -9.092
+ Test Summary | End of Epoch 21 | Time 21.68s | Test Loss -9.092
+ Train Summary | End of Epoch 22 | Time 5461.35s | Train Loss -11.787
+ Valid Summary | End of Epoch 22 | Time 42.36s | Valid Loss -9.352
+ Test Summary | End of Epoch 22 | Time 22.75s | Test Loss -9.352
+ Found new best model, dict saved
+ Train Summary | End of Epoch 23 | Time 5449.89s | Train Loss -11.941
+ Valid Summary | End of Epoch 23 | Time 42.75s | Valid Loss -9.264
+ Test Summary | End of Epoch 23 | Time 21.85s | Test Loss -9.264
+ Train Summary | End of Epoch 24 | Time 5453.22s | Train Loss -12.123
+ Valid Summary | End of Epoch 24 | Time 41.88s | Valid Loss -9.722
+ Test Summary | End of Epoch 24 | Time 21.71s | Test Loss -9.722
+ Found new best model, dict saved
+ Train Summary | End of Epoch 25 | Time 5431.43s | Train Loss -12.248
+ Valid Summary | End of Epoch 25 | Time 42.21s | Valid Loss -9.276
+ Test Summary | End of Epoch 25 | Time 21.50s | Test Loss -9.276
+ Train Summary | End of Epoch 26 | Time 5433.43s | Train Loss -12.358
+ Valid Summary | End of Epoch 26 | Time 41.90s | Valid Loss -9.327
+ Test Summary | End of Epoch 26 | Time 21.22s | Test Loss -9.327
+ Train Summary | End of Epoch 27 | Time 5432.86s | Train Loss -12.511
+ Valid Summary | End of Epoch 27 | Time 43.27s | Valid Loss -9.592
+ Test Summary | End of Epoch 27 | Time 21.57s | Test Loss -9.592
+ Train Summary | End of Epoch 28 | Time 5433.50s | Train Loss -12.606
+ Valid Summary | End of Epoch 28 | Time 42.16s | Valid Loss -9.127
+ Test Summary | End of Epoch 28 | Time 21.75s | Test Loss -9.127
+ Train Summary | End of Epoch 29 | Time 5432.61s | Train Loss -12.745
+ Valid Summary | End of Epoch 29 | Time 42.15s | Valid Loss -9.458
+ Test Summary | End of Epoch 29 | Time 21.61s | Test Loss -9.458
+ reload weights and optimizer from last best checkpoint
+ Learning rate adjusted to: 0.000250
+ Train Summary | End of Epoch 30 | Time 5436.75s | Train Loss -13.005
+ Valid Summary | End of Epoch 30 | Time 40.19s | Valid Loss -9.828
+ Test Summary | End of Epoch 30 | Time 21.68s | Test Loss -9.828
+ Found new best model, dict saved
+ Train Summary | End of Epoch 31 | Time 5432.80s | Train Loss -13.319
+ Valid Summary | End of Epoch 31 | Time 40.07s | Valid Loss -9.674
+ Test Summary | End of Epoch 31 | Time 21.69s | Test Loss -9.674
+ Train Summary | End of Epoch 32 | Time 5432.97s | Train Loss -13.489
+ Valid Summary | End of Epoch 32 | Time 39.89s | Valid Loss -9.631
+ Test Summary | End of Epoch 32 | Time 21.98s | Test Loss -9.631
+ Train Summary | End of Epoch 33 | Time 5428.97s | Train Loss -13.637
+ Valid Summary | End of Epoch 33 | Time 41.71s | Valid Loss -9.938
+ Test Summary | End of Epoch 33 | Time 21.12s | Test Loss -9.938
+ Found new best model, dict saved
+ Train Summary | End of Epoch 34 | Time 5431.64s | Train Loss -13.739
+ Valid Summary | End of Epoch 34 | Time 39.77s | Valid Loss -9.937
+ Test Summary | End of Epoch 34 | Time 21.39s | Test Loss -9.937
+ Train Summary | End of Epoch 35 | Time 5429.93s | Train Loss -13.865
+ Valid Summary | End of Epoch 35 | Time 39.59s | Valid Loss -9.834
+ Test Summary | End of Epoch 35 | Time 21.54s | Test Loss -9.834
+ Train Summary | End of Epoch 36 | Time 5428.18s | Train Loss -13.960
+ Valid Summary | End of Epoch 36 | Time 39.94s | Valid Loss -9.921
+ Test Summary | End of Epoch 36 | Time 21.84s | Test Loss -9.921
+ Train Summary | End of Epoch 37 | Time 5428.69s | Train Loss -14.041
+ Valid Summary | End of Epoch 37 | Time 39.93s | Valid Loss -9.942
+ Test Summary | End of Epoch 37 | Time 21.24s | Test Loss -9.942
+ Found new best model, dict saved
+ Train Summary | End of Epoch 38 | Time 5422.98s | Train Loss -14.130
+ Valid Summary | End of Epoch 38 | Time 42.71s | Valid Loss -10.054
+ Test Summary | End of Epoch 38 | Time 21.17s | Test Loss -10.054
+ Found new best model, dict saved
+ Train Summary | End of Epoch 39 | Time 5423.27s | Train Loss -14.214
+ Valid Summary | End of Epoch 39 | Time 42.28s | Valid Loss -9.860
+ Test Summary | End of Epoch 39 | Time 21.14s | Test Loss -9.860
+ Train Summary | End of Epoch 40 | Time 5422.16s | Train Loss -14.298
+ Valid Summary | End of Epoch 40 | Time 42.51s | Valid Loss -10.045
+ Test Summary | End of Epoch 40 | Time 21.07s | Test Loss -10.045
+ Train Summary | End of Epoch 41 | Time 5421.24s | Train Loss -14.363
+ Valid Summary | End of Epoch 41 | Time 43.04s | Valid Loss -9.873
+ Test Summary | End of Epoch 41 | Time 21.63s | Test Loss -9.873
+ Train Summary | End of Epoch 42 | Time 5421.36s | Train Loss -14.421
+ Valid Summary | End of Epoch 42 | Time 42.79s | Valid Loss -9.767
+ Test Summary | End of Epoch 42 | Time 21.35s | Test Loss -9.767
+ Train Summary | End of Epoch 43 | Time 5422.27s | Train Loss -14.489
+ Valid Summary | End of Epoch 43 | Time 43.02s | Valid Loss -9.945
+ Test Summary | End of Epoch 43 | Time 22.01s | Test Loss -9.945
+ reload weights and optimizer from last best checkpoint
+ Learning rate adjusted to: 0.000125
+ Train Summary | End of Epoch 44 | Time 5422.77s | Train Loss -14.518
+ Valid Summary | End of Epoch 44 | Time 44.66s | Valid Loss -9.955
+ Test Summary | End of Epoch 44 | Time 21.15s | Test Loss -9.955
+ Train Summary | End of Epoch 45 | Time 5422.81s | Train Loss -14.641
+ Valid Summary | End of Epoch 45 | Time 42.37s | Valid Loss -10.015
+ Test Summary | End of Epoch 45 | Time 21.51s | Test Loss -10.015
+ Train Summary | End of Epoch 46 | Time 5424.07s | Train Loss -14.713
+ Valid Summary | End of Epoch 46 | Time 42.38s | Valid Loss -10.161
+ Test Summary | End of Epoch 46 | Time 21.26s | Test Loss -10.161
+ Found new best model, dict saved
+ Train Summary | End of Epoch 47 | Time 5421.37s | Train Loss -14.775
+ Valid Summary | End of Epoch 47 | Time 44.70s | Valid Loss -10.079
+ Test Summary | End of Epoch 47 | Time 21.16s | Test Loss -10.079
+ Train Summary | End of Epoch 48 | Time 5421.69s | Train Loss -14.832
+ Valid Summary | End of Epoch 48 | Time 43.73s | Valid Loss -9.943
+ Test Summary | End of Epoch 48 | Time 21.23s | Test Loss -9.943
+ Train Summary | End of Epoch 49 | Time 5421.44s | Train Loss -14.864
+ Valid Summary | End of Epoch 49 | Time 44.25s | Valid Loss -10.162
+ Test Summary | End of Epoch 49 | Time 21.19s | Test Loss -10.162
+ Found new best model, dict saved
+ Train Summary | End of Epoch 50 | Time 5423.12s | Train Loss -14.920
+ Valid Summary | End of Epoch 50 | Time 43.14s | Valid Loss -9.900
+ Test Summary | End of Epoch 50 | Time 21.28s | Test Loss -9.900
+ Train Summary | End of Epoch 51 | Time 5419.93s | Train Loss -14.957
+ Valid Summary | End of Epoch 51 | Time 45.30s | Valid Loss -9.932
+ Test Summary | End of Epoch 51 | Time 21.22s | Test Loss -9.932
+ Train Summary | End of Epoch 52 | Time 5422.40s | Train Loss -14.995
+ Valid Summary | End of Epoch 52 | Time 43.00s | Valid Loss -10.064
+ Test Summary | End of Epoch 52 | Time 21.27s | Test Loss -10.064
+ Train Summary | End of Epoch 53 | Time 5422.21s | Train Loss -15.036
+ Valid Summary | End of Epoch 53 | Time 43.88s | Valid Loss -10.209
+ Test Summary | End of Epoch 53 | Time 20.97s | Test Loss -10.209
+ Found new best model, dict saved
+ Train Summary | End of Epoch 54 | Time 5419.59s | Train Loss -15.065
+ Valid Summary | End of Epoch 54 | Time 45.94s | Valid Loss -10.001
+ Test Summary | End of Epoch 54 | Time 21.08s | Test Loss -10.001
+ Train Summary | End of Epoch 55 | Time 5418.47s | Train Loss -15.095
+ Valid Summary | End of Epoch 55 | Time 45.19s | Valid Loss -10.026
+ Test Summary | End of Epoch 55 | Time 21.20s | Test Loss -10.026
+ Train Summary | End of Epoch 56 | Time 5417.08s | Train Loss -15.141
+ Valid Summary | End of Epoch 56 | Time 45.64s | Valid Loss -9.933
+ Test Summary | End of Epoch 56 | Time 21.08s | Test Loss -9.933
+ Train Summary | End of Epoch 57 | Time 5417.95s | Train Loss -15.165
+ Valid Summary | End of Epoch 57 | Time 45.28s | Valid Loss -9.704
+ Test Summary | End of Epoch 57 | Time 21.29s | Test Loss -9.704
+ Train Summary | End of Epoch 58 | Time 5418.70s | Train Loss -15.196
+ Valid Summary | End of Epoch 58 | Time 44.81s | Valid Loss -10.054
+ Test Summary | End of Epoch 58 | Time 21.03s | Test Loss -10.054
+ reload weights and optimizer from last best checkpoint
+ Learning rate adjusted to: 0.000063
+ Train Summary | End of Epoch 59 | Time 5420.00s | Train Loss -15.200
+ Valid Summary | End of Epoch 59 | Time 46.30s | Valid Loss -9.865
+ Test Summary | End of Epoch 59 | Time 21.27s | Test Loss -9.865
+ Train Summary | End of Epoch 60 | Time 5420.50s | Train Loss -15.248
+ Valid Summary | End of Epoch 60 | Time 46.35s | Valid Loss -10.050
+ Test Summary | End of Epoch 60 | Time 21.29s | Test Loss -10.050
+ Train Summary | End of Epoch 61 | Time 5420.45s | Train Loss -15.277
+ Valid Summary | End of Epoch 61 | Time 47.57s | Valid Loss -10.149
+ Test Summary | End of Epoch 61 | Time 21.66s | Test Loss -10.149
+ Train Summary | End of Epoch 62 | Time 5419.90s | Train Loss -15.303
+ Valid Summary | End of Epoch 62 | Time 46.20s | Valid Loss -10.096
+ Test Summary | End of Epoch 62 | Time 21.31s | Test Loss -10.096
+ Start evaluation
+ Avg SISNRi: tensor([9.4806], device='cuda:0')
+ Avg SNRi: 10.412415402130565
+ Avg STOIi: 0.11529820576481561
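
The model is trained with loss_type: sisdr and evaluated with SISNRi, SNRi and STOIi above. For reference, a sketch of the textbook scale-invariant SDR those numbers build on; it is not the repository's own implementation:

```python
# Textbook SI-SDR (a.k.a. SI-SNR) in dB for (batch, time) tensors; not the
# repository's implementation. SISNRi in the log is this value for the
# separated signal minus the same value for the unprocessed mixture.
import torch

def si_sdr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    scale = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps)
    target = scale * ref  # projection of the estimate onto the reference
    noise = est - target
    return 10 * torch.log10(target.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))
```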
checkpoints/log_YGD_gesture_seg_2spk/tensorboard/events.out.tfevents.1727338847.bach-gpu011017044238.na61.87202.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22936579a6cea616d3d368d296f82fdd01d5b418f83a23cc873106efb7f7ad49
+ size 9264
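
The tensorboard events file is again an LFS pointer (about 9 kB of event data once pulled). It can be read back with TensorBoard's EventAccumulator; the scalar tag names are not visible in this commit, so the sketch simply lists whatever was logged:

```python
# Read the pulled tfevents file with TensorBoard's EventAccumulator and list
# the logged scalar tags (unknown from this commit alone).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("checkpoints/log_YGD_gesture_seg_2spk/tensorboard")
ea.Reload()
for tag in ea.Tags()["scalars"]:
    events = ea.Scalars(tag)
    print(tag, len(events), events[-1].value)
```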