st178168
If someone faces a similar issue, the problem may be caused by cudaStreamSynchronize() when transferring the minibatch generated by the dataloader from CPU to GPU. Since the tensor transfer runs on the default CUDA stream, it forces an additional synchronization in every iteration. The issue can be solved by putting the tensor transfer on a separate CUDA stream.
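A minimal sketch of the fix described above (not the original poster's code): the host-to-device copy of each minibatch is issued on a dedicated side stream, assuming the DataLoader was created with pin_memory=True and that model, criterion and dataloader already exist.

import torch

copy_stream = torch.cuda.Stream()  # side stream dedicated to host-to-device copies

for cpu_batch, cpu_target in dataloader:      # dataloader assumed to use pin_memory=True
    with torch.cuda.stream(copy_stream):
        batch = cpu_batch.cuda(non_blocking=True)
        target = cpu_target.cuda(non_blocking=True)
    # Make the compute (default) stream wait until the copies have finished,
    # and tell the caching allocator the tensors are now used on that stream.
    torch.cuda.current_stream().wait_stream(copy_stream)
    batch.record_stream(torch.cuda.current_stream())
    target.record_stream(torch.cuda.current_stream())

    output = model(batch)
    loss = criterion(output, target)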
st178169
@yzz, I am facing a similar issue. I wonder if you could show the code for how you solved this problem. Thanks
st178170
Hi Team, As part of distributed training, we are trying out Nvidia Apex library and we took care of Set OMP_NUM_THREADS in torch.distributed.launch issue. We are running standard EN-DE (English to German) NMT example given on this documentation. We have noticed that without Apex library we can run the distributed training for EN-DE (English to German) NMT example but with Apex library we could not and surprising we haven’t had any error log there. Training starts and ends in a few seconds. NCCL debug level logs we are capturing but could not see any error trace there. On the master node, we are getting the following logs 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 1): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 6): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 3): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 5): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 0): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 7): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 4): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 7 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | distributed init (rank 2): env:// 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 4 2020-08-12 13:52:16 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 2 2020-08-12 13:52:17 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 1 2020-08-12 13:52:17 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 6 2020-08-12 13:52:17 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 3 2020-08-12 13:52:17 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 5 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-170.cactuslabs.io as rank 0 10-7-6-170:2171:2171 [0] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2171:2171 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-170:2171:2171 [0] NCCL INFO NET/IB : No device found. 10-7-6-170:2171:2171 [0] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> NCCL version 2.4.8+cuda9.2 10-7-6-170:2178:2178 [7] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2175:2175 [4] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2173:2173 [2] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2172:2172 [1] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2174:2174 [3] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2177:2177 [6] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2176:2176 [5] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2178:2178 [7] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-170:2175:2175 [4] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-170:2173:2173 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-170:2172:2172 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-170:2174:2174 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 
10-7-6-170:2177:2177 [6] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-170:2176:2176 [5] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-170:2172:2172 [1] NCCL INFO NET/IB : No device found. 10-7-6-170:2174:2174 [3] NCCL INFO NET/IB : No device found. 10-7-6-170:2173:2173 [2] NCCL INFO NET/IB : No device found. 10-7-6-170:2176:2176 [5] NCCL INFO NET/IB : No device found. 10-7-6-170:2178:2178 [7] NCCL INFO NET/IB : No device found. 10-7-6-170:2175:2175 [4] NCCL INFO NET/IB : No device found. 10-7-6-170:2177:2177 [6] NCCL INFO NET/IB : No device found. 10-7-6-170:2174:2174 [3] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2172:2172 [1] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2173:2173 [2] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2177:2177 [6] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2176:2176 [5] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2178:2178 [7] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2175:2175 [4] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.170<0> 10-7-6-170:2171:2230 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff 10-7-6-170:2174:2231 [3] NCCL INFO Setting affinity for GPU 3 to ffffffff 10-7-6-170:2178:2235 [7] NCCL INFO Setting affinity for GPU 7 to ffffffff 10-7-6-170:2173:2233 [2] NCCL INFO Setting affinity for GPU 2 to ffffffff 10-7-6-170:2175:2237 [4] NCCL INFO Setting affinity for GPU 4 to ffffffff 10-7-6-170:2177:2236 [6] NCCL INFO Setting affinity for GPU 6 to ffffffff 10-7-6-170:2172:2232 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff 10-7-6-170:2176:2234 [5] NCCL INFO Setting affinity for GPU 5 to ffffffff 10-7-6-170:2171:2230 [0] NCCL INFO CUDA Dev 0[0], Socket NIC distance : PHB 10-7-6-170:2172:2232 [1] NCCL INFO CUDA Dev 1[1], Socket NIC distance : PHB 10-7-6-170:2173:2233 [2] NCCL INFO CUDA Dev 2[2], Socket NIC distance : PHB 10-7-6-170:2174:2231 [3] NCCL INFO CUDA Dev 3[3], Socket NIC distance : PHB 10-7-6-170:2175:2237 [4] NCCL INFO CUDA Dev 4[4], Socket NIC distance : PHB 10-7-6-170:2176:2234 [5] NCCL INFO CUDA Dev 5[5], Socket NIC distance : PHB 10-7-6-170:2177:2236 [6] NCCL INFO CUDA Dev 6[6], Socket NIC distance : PHB 10-7-6-170:2178:2235 [7] NCCL INFO CUDA Dev 7[7], Socket NIC distance : PHB 10-7-6-170:2171:2230 [0] NCCL INFO Channel 00 : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 10-7-6-170:2171:2230 [0] NCCL INFO Channel 01 : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 10-7-6-170:2176:2234 [5] NCCL INFO Ring 00 : 5[5] -> 6[6] via P2P/IPC 10-7-6-170:2174:2231 [3] NCCL INFO Ring 00 : 3[3] -> 4[4] via P2P/IPC 10-7-6-170:2177:2236 [6] NCCL INFO Ring 00 : 6[6] -> 7[7] via P2P/IPC 10-7-6-170:2175:2237 [4] NCCL INFO Ring 00 : 4[4] -> 5[5] via P2P/IPC 10-7-6-170:2172:2232 [1] NCCL INFO Ring 00 : 1[1] -> 2[2] via P2P/IPC 10-7-6-170:2173:2233 [2] NCCL INFO Ring 00 : 2[2] -> 3[3] via P2P/IPC 10-7-6-170:2171:2230 [0] NCCL INFO Ring 00 : 15 -> 0 [receive] via NET/Socket/0 10-7-6-170:2171:2230 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-170:2171:2230 [0] NCCL INFO Ring 00 : 0[0] -> 1[1] via P2P/IPC 10-7-6-170:2178:2235 [7] NCCL INFO Ring 00 : 7 -> 8 [send] via NET/Socket/0 10-7-6-170:2176:2234 [5] NCCL INFO Ring 00 : 5[5] -> 4[4] via P2P/IPC 10-7-6-170:2174:2231 [3] NCCL INFO Ring 00 : 3[3] -> 2[2] via P2P/IPC 10-7-6-170:2177:2236 [6] NCCL INFO Ring 00 : 6[6] -> 5[5] via P2P/IPC 10-7-6-170:2175:2237 [4] NCCL INFO Ring 00 : 4[4] -> 3[3] via P2P/IPC 10-7-6-170:2172:2232 [1] NCCL INFO Ring 00 : 
1[1] -> 0[0] via P2P/IPC 10-7-6-170:2178:2235 [7] NCCL INFO Ring 00 : 7[7] -> 6[6] via P2P/IPC 10-7-6-170:2173:2233 [2] NCCL INFO Ring 00 : 2[2] -> 1[1] via P2P/IPC 10-7-6-170:2171:2230 [0] NCCL INFO Ring 00 : 8 -> 0 [receive] via NET/Socket/0 10-7-6-170:2171:2230 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-170:2177:2236 [6] NCCL INFO Ring 01 : 6[6] -> 7[7] via P2P/IPC 10-7-6-170:2176:2234 [5] NCCL INFO Ring 01 : 5[5] -> 6[6] via P2P/IPC 10-7-6-170:2174:2231 [3] NCCL INFO Ring 01 : 3[3] -> 4[4] via P2P/IPC 10-7-6-170:2175:2237 [4] NCCL INFO Ring 01 : 4[4] -> 5[5] via P2P/IPC 10-7-6-170:2172:2232 [1] NCCL INFO Ring 01 : 1[1] -> 2[2] via P2P/IPC 10-7-6-170:2173:2233 [2] NCCL INFO Ring 01 : 2[2] -> 3[3] via P2P/IPC 10-7-6-170:2178:2235 [7] NCCL INFO Ring 01 : 7 -> 8 [send] via NET/Socket/0 10-7-6-170:2171:2230 [0] NCCL INFO Ring 00 : 0 -> 8 [send] via NET/Socket/0 10-7-6-170:2177:2236 [6] NCCL INFO Ring 01 : 6[6] -> 5[5] via P2P/IPC 10-7-6-170:2176:2234 [5] NCCL INFO Ring 01 : 5[5] -> 4[4] via P2P/IPC 10-7-6-170:2174:2231 [3] NCCL INFO Ring 01 : 3[3] -> 2[2] via P2P/IPC 10-7-6-170:2175:2237 [4] NCCL INFO Ring 01 : 4[4] -> 3[3] via P2P/IPC 10-7-6-170:2173:2233 [2] NCCL INFO Ring 01 : 2[2] -> 1[1] via P2P/IPC 10-7-6-170:2176:2234 [5] NCCL INFO Trees [0] 4->5->6/-1/-1 [1] 4->5->6/-1/-1 10-7-6-170:2174:2231 [3] NCCL INFO Trees [0] 2->3->4/-1/-1 [1] 2->3->4/-1/-1 10-7-6-170:2175:2237 [4] NCCL INFO Trees [0] 3->4->5/-1/-1 [1] 3->4->5/-1/-1 10-7-6-170:2176:2234 [5] NCCL INFO comm 0x7fdcf4002540 rank 5 nranks 16 cudaDev 5 nvmlDev 5 - Init COMPLETE 10-7-6-170:2174:2231 [3] NCCL INFO comm 0x7f8280002540 rank 3 nranks 16 cudaDev 3 nvmlDev 3 - Init COMPLETE 10-7-6-170:2175:2237 [4] NCCL INFO comm 0x7f432c002540 rank 4 nranks 16 cudaDev 4 nvmlDev 4 - Init COMPLETE 10-7-6-170:2171:2230 [0] NCCL INFO Ring 01 : 15 -> 0 [receive] via NET/Socket/0 10-7-6-170:2171:2230 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-170:2171:2230 [0] NCCL INFO Ring 01 : 0[0] -> 1[1] via P2P/IPC 10-7-6-170:2172:2232 [1] NCCL INFO Ring 01 : 1[1] -> 0[0] via P2P/IPC 10-7-6-170:2178:2235 [7] NCCL INFO Ring 01 : 7[7] -> 6[6] via P2P/IPC 10-7-6-170:2177:2236 [6] NCCL INFO Trees [0] 5->6->7/-1/-1 [1] 5->6->7/-1/-1 10-7-6-170:2173:2233 [2] NCCL INFO Trees [0] 1->2->3/-1/-1 [1] 1->2->3/-1/-1 10-7-6-170:2178:2235 [7] NCCL INFO Trees [0] 6->7->-1/-1/-1 [1] 6->7->-1/-1/-1 10-7-6-170:2172:2232 [1] NCCL INFO Trees [0] 0->1->2/-1/-1 [1] 0->1->2/-1/-1 10-7-6-170:2171:2230 [0] NCCL INFO Ring 01 : 0 -> 8 [send] via NET/Socket/0 10-7-6-170:2177:2236 [6] NCCL INFO comm 0x7f166c002540 rank 6 nranks 16 cudaDev 6 nvmlDev 6 - Init COMPLETE 10-7-6-170:2173:2233 [2] NCCL INFO comm 0x7f934c002540 rank 2 nranks 16 cudaDev 2 nvmlDev 2 - Init COMPLETE 10-7-6-170:2178:2235 [7] NCCL INFO comm 0x7f7abc002540 rank 7 nranks 16 cudaDev 7 nvmlDev 7 - Init COMPLETE 10-7-6-170:2172:2232 [1] NCCL INFO comm 0x7f9d88002540 rank 1 nranks 16 cudaDev 1 nvmlDev 1 - Init COMPLETE 10-7-6-170:2171:2230 [0] NCCL INFO Ring 01 : 8 -> 0 [receive] via NET/Socket/0 10-7-6-170:2171:2230 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-170:2171:2230 [0] NCCL INFO Trees [0] -1->0->1/8/-1 [1] 8->0->1/-1/-1 10-7-6-170:2171:2230 [0] NCCL INFO Using 128 threads, Min Comp Cap 3, Trees enabled up to size 469999 10-7-6-170:2171:2230 [0] NCCL INFO comm 0x7fd6d8002540 rank 0 nranks 16 cudaDev 0 nvmlDev 0 - Init COMPLETE 10-7-6-170:2171:2171 [0] NCCL INFO Launch mode Parallel 2020-08-12 13:52:23 | INFO | 
fairseq_cli.train | Namespace(activation_dropout=0.0, activation_fn='relu', adam_betas='(0.9, 0.98)', adam_eps=1e-08, adaptive_input=False, adaptive_softmax_cutoff=None, adaptive_softmax_dropout=0, all_gather_list_size=16384, arch='transformer_iwslt_de_en', attention_dropout=0.0, best_checkpoint_metric='loss', bf16=False, bpe=None, broadcast_buffers=False, bucket_cap_mb=25, checkpoint_suffix='', clip_norm=0.0, cpu=False, criterion='label_smoothed_cross_entropy', cross_self_attention=False, curriculum=0, data='data-bin/iwslt14.tokenized.de-en', data_buffer_size=10, dataset_impl=None, ddp_backend='c10d', decoder_attention_heads=4, decoder_embed_dim=512, decoder_embed_path=None, decoder_ffn_embed_dim=1024, decoder_input_dim=512, decoder_layerdrop=0, decoder_layers=6, decoder_layers_to_keep=None, decoder_learned_pos=False, decoder_normalize_before=False, decoder_output_dim=512, device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method='env://', distributed_no_spawn=True, distributed_port=-1, distributed_rank=0, distributed_world_size=16, distributed_wrapper='DDP', dropout=0.3, empty_cache_freq=0, encoder_attention_heads=4, encoder_embed_dim=512, encoder_embed_path=None, encoder_ffn_embed_dim=1024, encoder_layerdrop=0, encoder_layers=6, encoder_layers_to_keep=None, encoder_learned_pos=False, encoder_normalize_before=False, eval_bleu=False, eval_bleu_args=None, eval_bleu_detok='space', eval_bleu_detok_args=None, eval_bleu_print_samples=False, eval_bleu_remove_bpe=None, eval_tokenized_bleu=False, fast_stat_sync=False, find_unused_parameters=False, fix_batches_to_gpus=False, fixed_validation_seed=None, fp16=False, fp16_init_scale=128, fp16_no_flatten_grads=False, fp16_scale_tolerance=0.0, fp16_scale_window=None, keep_best_checkpoints=-1, keep_interval_updates=-1, keep_last_epochs=-1, label_smoothing=0.1, layernorm_embedding=False, left_pad_source='True', left_pad_target='False', load_alignments=False, localsgd_frequency=3, log_format=None, log_interval=100, lr=[0.0005], lr_scheduler='inverse_sqrt', max_epoch=0, max_sentences=None, max_sentences_valid=None, max_source_positions=1024, max_target_positions=1024, max_tokens=8000, max_tokens_valid=8000, max_update=0, maximize_best_checkpoint_metric=False, memory_efficient_bf16=False, memory_efficient_fp16=False, min_loss_scale=0.0001, min_lr=1e-09, model_parallel_size=1, no_cross_attention=False, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=False, no_save=False, no_save_optimizer_state=False, no_scale_embedding=False, no_seed_provided=True, no_token_positional_embeddings=False, nprocs_per_node=8, num_batch_buckets=0, num_workers=1, optimizer='adam', optimizer_overrides='{}', patience=-1, profile=False, quant_noise_pq=0, quant_noise_pq_block_size=8, quant_noise_scalar=0, quantization_config_path=None, required_batch_size_multiple=8, reset_dataloader=False, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', save_dir='checkpoints', save_interval=1, save_interval_updates=0, scoring='bleu', seed=1, sentence_avg=False, share_all_embeddings=False, share_decoder_input_output_embed=False, skip_invalid_size_inputs_valid_test=False, slowmo_algorithm='LocalSGD', slowmo_momentum=None, source_lang=None, stop_time_hours=0, target_lang=None, task='translation', tensorboard_logdir='', threshold_loss_scale=None, tie_adaptive_weights=False, tokenizer=None, tpu=False, train_subset='train', truncate_source=False, update_freq=[1], upsample_primary=1, 
use_bmuf=False, use_old_adam=False, user_dir=None, valid_subset='valid', validate_after_updates=0, validate_interval=1, validate_interval_updates=0, warmup_init_lr=-1, warmup_updates=4000, weight_decay=0.0) 2020-08-12 13:52:23 | INFO | fairseq.tasks.translation | [de] dictionary: 8848 types 2020-08-12 13:52:23 | INFO | fairseq.tasks.translation | [en] dictionary: 6632 types 2020-08-12 13:52:23 | INFO | fairseq.data.data_utils | loaded 7283 examples from: data-bin/iwslt14.tokenized.de-en/valid.de-en.de 2020-08-12 13:52:23 | INFO | fairseq.data.data_utils | loaded 7283 examples from: data-bin/iwslt14.tokenized.de-en/valid.de-en.en 2020-08-12 13:52:23 | INFO | fairseq.tasks.translation | data-bin/iwslt14.tokenized.de-en valid de-en 7283 examples 2020-08-12 13:52:24 | INFO | fairseq_cli.train | model transformer_iwslt_de_en, criterion LabelSmoothedCrossEntropyCriterion 2020-08-12 13:52:24 | INFO | fairseq_cli.train | num. model params: 42864640 (num. trained: 42864640) 2020-08-12 13:52:24 | INFO | fairseq.utils | ***********************CUDA enviroments for all 16 workers*********************** 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 0: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 1: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 2: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 3: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 4: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 5: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 6: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 7: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 8: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 9: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 10: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 11: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 12: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 13: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 14: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | rank 15: capabilities = 3.7 ; total memory = 11.173 GB ; name = Tesla K80 2020-08-12 13:52:24 | INFO | fairseq.utils | ***********************CUDA enviroments for all 16 workers*********************** 2020-08-12 13:52:24 | INFO | fairseq_cli.train | training on 16 devices (GPUs/TPUs) 2020-08-12 13:52:24 | INFO | fairseq_cli.train | max tokens per GPU = 8000 and max sentences per GPU = None 2020-08-12 13:52:24 | INFO | fairseq.trainer | no existing checkpoint found checkpoints/checkpoint_last.pt 2020-08-12 13:52:24 | INFO | fairseq.trainer | loading train data for epoch 1 2020-08-12 13:52:24 | INFO | 
fairseq.data.data_utils | loaded 160239 examples from: data-bin/iwslt14.tokenized.de-en/train.de-en.de 2020-08-12 13:52:24 | INFO | fairseq.data.data_utils | loaded 160239 examples from: data-bin/iwslt14.tokenized.de-en/train.de-en.en 2020-08-12 13:52:24 | INFO | fairseq.tasks.translation | data-bin/iwslt14.tokenized.de-en train de-en 160239 examples 2020-08-12 13:52:25 | INFO | fairseq.optim.adam | using FusedAdam 2020-08-12 13:52:25 | INFO | fairseq_cli.train | done training in 0.0 seconds ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** On the slave node, we are getting the following logs 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 9): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 9 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 15): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 15 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 14): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 8): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 14 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 13): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 8 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 10): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 13 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 12): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 10 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | distributed init (rank 11): env:// 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 12 2020-08-12 13:52:20 | INFO | fairseq.distributed_utils | initialized host 10-7-6-166.cactuslabs.io as rank 11 10-7-6-166:2407:2407 [4] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2407:2407 [4] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2407:2407 [4] NCCL INFO NET/IB : No device found. 10-7-6-166:2407:2407 [4] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2404:2404 [1] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2404:2404 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2409:2409 [6] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2409:2409 [6] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2409:2409 [6] NCCL INFO NET/IB : No device found. 10-7-6-166:2404:2404 [1] NCCL INFO NET/IB : No device found. 10-7-6-166:2409:2409 [6] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2404:2404 [1] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2410:2410 [7] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2410:2410 [7] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2410:2410 [7] NCCL INFO NET/IB : No device found. 
10-7-6-166:2410:2410 [7] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2406:2406 [3] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2406:2406 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2406:2406 [3] NCCL INFO NET/IB : No device found. 10-7-6-166:2406:2406 [3] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2405:2405 [2] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2405:2405 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2405:2405 [2] NCCL INFO NET/IB : No device found. 10-7-6-166:2405:2405 [2] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2408:2408 [5] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2408:2408 [5] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2408:2408 [5] NCCL INFO NET/IB : No device found. 10-7-6-166:2408:2408 [5] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2403:2403 [0] NCCL INFO Bootstrap : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2403:2403 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). 10-7-6-166:2403:2403 [0] NCCL INFO NET/IB : No device found. 10-7-6-166:2403:2403 [0] NCCL INFO NET/Socket : Using [0]ens3:10.7.6.166<0> 10-7-6-166:2404:2463 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff 10-7-6-166:2410:2465 [7] NCCL INFO Setting affinity for GPU 7 to ffffffff 10-7-6-166:2407:2462 [4] NCCL INFO Setting affinity for GPU 4 to ffffffff 10-7-6-166:2409:2464 [6] NCCL INFO Setting affinity for GPU 6 to ffffffff 10-7-6-166:2406:2466 [3] NCCL INFO Setting affinity for GPU 3 to ffffffff 10-7-6-166:2405:2467 [2] NCCL INFO Setting affinity for GPU 2 to ffffffff 10-7-6-166:2408:2468 [5] NCCL INFO Setting affinity for GPU 5 to ffffffff 10-7-6-166:2403:2469 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff 10-7-6-166:2410:2465 [7] NCCL INFO CUDA Dev 7[7], Socket NIC distance : PHB 10-7-6-166:2403:2469 [0] NCCL INFO CUDA Dev 0[0], Socket NIC distance : PHB 10-7-6-166:2404:2463 [1] NCCL INFO CUDA Dev 1[1], Socket NIC distance : PHB 10-7-6-166:2405:2467 [2] NCCL INFO CUDA Dev 2[2], Socket NIC distance : PHB 10-7-6-166:2408:2468 [5] NCCL INFO CUDA Dev 5[5], Socket NIC distance : PHB 10-7-6-166:2406:2466 [3] NCCL INFO CUDA Dev 3[3], Socket NIC distance : PHB 10-7-6-166:2407:2462 [4] NCCL INFO CUDA Dev 4[4], Socket NIC distance : PHB 10-7-6-166:2409:2464 [6] NCCL INFO CUDA Dev 6[6], Socket NIC distance : PHB 10-7-6-166:2408:2468 [5] NCCL INFO Ring 00 : 13[5] -> 14[6] via P2P/IPC 10-7-6-166:2407:2462 [4] NCCL INFO Ring 00 : 12[4] -> 13[5] via P2P/IPC 10-7-6-166:2406:2466 [3] NCCL INFO Ring 00 : 11[3] -> 12[4] via P2P/IPC 10-7-6-166:2405:2467 [2] NCCL INFO Ring 00 : 10[2] -> 11[3] via P2P/IPC 10-7-6-166:2409:2464 [6] NCCL INFO Ring 00 : 14[6] -> 15[7] via P2P/IPC 10-7-6-166:2404:2463 [1] NCCL INFO Ring 00 : 9[1] -> 10[2] via P2P/IPC 10-7-6-166:2403:2469 [0] NCCL INFO Ring 00 : 7 -> 8 [receive] via NET/Socket/0 10-7-6-166:2403:2469 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-166:2410:2465 [7] NCCL INFO Ring 00 : 15 -> 0 [send] via NET/Socket/0 10-7-6-166:2403:2469 [0] NCCL INFO Ring 00 : 8[0] -> 9[1] via P2P/IPC 10-7-6-166:2407:2462 [4] NCCL INFO Ring 00 : 12[4] -> 11[3] via P2P/IPC 10-7-6-166:2410:2465 [7] NCCL INFO Ring 00 : 15[7] -> 14[6] via P2P/IPC 10-7-6-166:2408:2468 [5] NCCL INFO Ring 00 : 13[5] -> 12[4] via P2P/IPC 10-7-6-166:2406:2466 [3] NCCL INFO Ring 00 : 11[3] -> 10[2] via P2P/IPC 10-7-6-166:2405:2467 [2] NCCL INFO Ring 00 : 10[2] -> 9[1] via 
P2P/IPC 10-7-6-166:2409:2464 [6] NCCL INFO Ring 00 : 14[6] -> 13[5] via P2P/IPC 10-7-6-166:2404:2463 [1] NCCL INFO Ring 00 : 9[1] -> 8[0] via P2P/IPC 10-7-6-166:2403:2469 [0] NCCL INFO Ring 00 : 8 -> 0 [send] via NET/Socket/0 10-7-6-166:2407:2462 [4] NCCL INFO Ring 01 : 12[4] -> 13[5] via P2P/IPC 10-7-6-166:2408:2468 [5] NCCL INFO Ring 01 : 13[5] -> 14[6] via P2P/IPC 10-7-6-166:2406:2466 [3] NCCL INFO Ring 01 : 11[3] -> 12[4] via P2P/IPC 10-7-6-166:2405:2467 [2] NCCL INFO Ring 01 : 10[2] -> 11[3] via P2P/IPC 10-7-6-166:2409:2464 [6] NCCL INFO Ring 01 : 14[6] -> 15[7] via P2P/IPC 10-7-6-166:2404:2463 [1] NCCL INFO Ring 01 : 9[1] -> 10[2] via P2P/IPC 10-7-6-166:2410:2465 [7] NCCL INFO Ring 01 : 15 -> 0 [send] via NET/Socket/0 10-7-6-166:2407:2462 [4] NCCL INFO Ring 01 : 12[4] -> 11[3] via P2P/IPC 10-7-6-166:2406:2466 [3] NCCL INFO Ring 01 : 11[3] -> 10[2] via P2P/IPC 10-7-6-166:2405:2467 [2] NCCL INFO Ring 01 : 10[2] -> 9[1] via P2P/IPC 10-7-6-166:2408:2468 [5] NCCL INFO Ring 01 : 13[5] -> 12[4] via P2P/IPC 10-7-6-166:2407:2462 [4] NCCL INFO Trees [0] 11->12->13/-1/-1 [1] 11->12->13/-1/-1 10-7-6-166:2403:2469 [0] NCCL INFO Ring 00 : 0 -> 8 [receive] via NET/Socket/0 10-7-6-166:2409:2464 [6] NCCL INFO Ring 01 : 14[6] -> 13[5] via P2P/IPC 10-7-6-166:2403:2469 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-166:2406:2466 [3] NCCL INFO Trees [0] 10->11->12/-1/-1 [1] 10->11->12/-1/-1 10-7-6-166:2408:2468 [5] NCCL INFO Trees [0] 12->13->14/-1/-1 [1] 12->13->14/-1/-1 10-7-6-166:2407:2462 [4] NCCL INFO comm 0x7f0ab4002540 rank 12 nranks 16 cudaDev 4 nvmlDev 4 - Init COMPLETE 10-7-6-166:2406:2466 [3] NCCL INFO comm 0x7f8e80002540 rank 11 nranks 16 cudaDev 3 nvmlDev 3 - Init COMPLETE 10-7-6-166:2408:2468 [5] NCCL INFO comm 0x7f09e8002540 rank 13 nranks 16 cudaDev 5 nvmlDev 5 - Init COMPLETE 10-7-6-166:2403:2469 [0] NCCL INFO Ring 01 : 7 -> 8 [receive] via NET/Socket/0 10-7-6-166:2403:2469 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-166:2403:2469 [0] NCCL INFO Ring 01 : 8[0] -> 9[1] via P2P/IPC 10-7-6-166:2404:2463 [1] NCCL INFO Ring 01 : 9[1] -> 8[0] via P2P/IPC 10-7-6-166:2405:2467 [2] NCCL INFO Trees [0] 9->10->11/-1/-1 [1] 9->10->11/-1/-1 10-7-6-166:2410:2465 [7] NCCL INFO Ring 01 : 15[7] -> 14[6] via P2P/IPC 10-7-6-166:2409:2464 [6] NCCL INFO Trees [0] 13->14->15/-1/-1 [1] 13->14->15/-1/-1 10-7-6-166:2410:2465 [7] NCCL INFO Trees [0] 14->15->-1/-1/-1 [1] 14->15->-1/-1/-1 10-7-6-166:2405:2467 [2] NCCL INFO comm 0x7fbd7c002540 rank 10 nranks 16 cudaDev 2 nvmlDev 2 - Init COMPLETE 10-7-6-166:2409:2464 [6] NCCL INFO comm 0x7f4290002540 rank 14 nranks 16 cudaDev 6 nvmlDev 6 - Init COMPLETE 10-7-6-166:2410:2465 [7] NCCL INFO comm 0x7ff674002540 rank 15 nranks 16 cudaDev 7 nvmlDev 7 - Init COMPLETE 10-7-6-166:2404:2463 [1] NCCL INFO Trees [0] 8->9->10/-1/-1 [1] 8->9->10/-1/-1 10-7-6-166:2403:2469 [0] NCCL INFO Ring 01 : 0 -> 8 [receive] via NET/Socket/0 10-7-6-166:2403:2469 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread 10-7-6-166:2404:2463 [1] NCCL INFO comm 0x7fc5b4002540 rank 9 nranks 16 cudaDev 1 nvmlDev 1 - Init COMPLETE 10-7-6-166:2403:2469 [0] NCCL INFO Ring 01 : 8 -> 0 [send] via NET/Socket/0 10-7-6-166:2403:2469 [0] NCCL INFO Trees [0] 0->8->9/-1/-1 [1] -1->8->9/0/-1 10-7-6-166:2403:2469 [0] NCCL INFO comm 0x7f19d4002540 rank 8 nranks 16 cudaDev 0 nvmlDev 0 - Init COMPLETE ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being 
overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** Environment config: 2 nodes with 8 GPUs (K80) each, Fairseq 0.9, PyTorch 1.5, CUDA 9.2.88, cuDNN 7.6.4, NCCL 2.4.8. Note: we are able to run training with Apex on a single instance with multiple GPUs. Is there anything we are missing? Any information and help will be useful. Thanks in advance.
st178171
Were the training jobs with and without Apex done on the same machines? It seems like GPU-GPU communication is working based on the Ring/Trees logs that you pasted. It might make sense to direct this issue to the Apex GitHub repo since the training is working with vanilla DDP (we’ve also been working on bridging the gap between vanilla DDP and Apex through new features like dynamic bucketing, so the performance difference may not be as much as before).
st178172
Thank you so much @osalpekar for your reply and for sharing the valuable information. Were the training jobs with and without Apex done on the same machines? - No, I used cloud instances to run the experiments with and without Apex. I'll redirect this issue to the Nvidia Apex GitHub repo and paste the link to that issue here, so you and the PyTorch team can refer to it. Once again, thank you.
st178173
I want to train my model using 2 nodes of an HPC system. Each node contains 4 Nvidia V100 GPUs. The system requires MPI if more than 1 node is used. However, I don't have enough expertise in MPI, and I have found very little information about the PyTorch MPI backend with GPU support. While preparing my model, I intended to train it on a single machine with 8 GPUs, but unfortunately I don't have access to that sort of machine; the HPC mentioned above is the only option for me. I have already gone through the Open MPI documentation and successfully compiled PyTorch 1.5.1 from source with CUDA 10.1 and CUDA-aware Open MPI 3.0.4. I would highly appreciate it if someone could provide a snippet of code showing the specific changes to my source code so that I can train the model on the HPC. Thank you.
st178174
Did you hit any error when using the CUDA-aware MPI backend? Based on past discussion, you might need to synchronize CUDA streams in the application code when using CUDA-aware MPI. BTW, is MPI the only option for you, or would the Gloo backend work?
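Not from the thread, but as a rough sketch of the kind of change being asked about: initialize the process group from whatever the launcher provides and wrap the model in DistributedDataParallel. The use_mpi switch, the OMPI_COMM_WORLD_LOCAL_RANK / LOCAL_RANK variables, and the mpirun line are assumptions about the cluster setup, not something from the original post.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

use_mpi = True  # hypothetical switch for this sketch

if use_mpi:
    # PyTorch must be built with (CUDA-aware) MPI; rank/world size come from mpirun.
    dist.init_process_group(backend="mpi")
    # Open MPI exposes the per-node rank through this launcher-specific variable.
    local_rank = int(os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK", 0))
else:
    # Gloo over TCP: the job script must export MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE.
    dist.init_process_group(backend="gloo", init_method="env://")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))

torch.cuda.set_device(local_rank)
model = torch.nn.Linear(256, 128).cuda()
ddp_model = DDP(model, device_ids=[local_rank])

# e.g. launched as: mpirun -np 8 python train.py   (2 nodes, 4 GPUs per node)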
st178175
I have a model that I am Testing, this model has undergone training. I have a sy.Virtualworker that I send a tensor to. I send the model to this virtual worker. I get the error IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) I have looked at other errors with this similar error, however, this might have an issue with what i’m doing with the pysyft material. the goal is to get a prediction from the model based on the input. I believed the input to be of similar dimensions to what the model would work with. I either have an issue with how im dealing with the pysyft materia. As I have worked with this code successfully when doing a non-pysyft version and have seen the intended results. full error list: Traceback (most recent call last): File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\tensors\interpreters\native.py", line 333, in handle_func_command cmd, args_, kwargs_, return_args_type=True File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\frameworks\hook\hook_args.py", line 157, in unwrap_args_from_function new_args = hook_args(args_) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\frameworks\hook\hook_args.py", line 356, in <lambda> return lambda x: f(lambdas, x) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\frameworks\hook\hook_args.py", line 535, in three_fold lambdas[1](args_[1], **kwargs), File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\frameworks\hook\hook_args.py", line 331, in <lambda> else lambda i: forward_func[type(i)](i) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\hook\hook_args.py", line 30, in <lambda> else (_ for _ in ()).throw(PureFrameworkTensorFoundError), File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\hook\hook_args.py", line 30, in <genexpr> else (_ for _ in ()).throw(PureFrameworkTensorFoundError), syft.exceptions.PureFrameworkTensorFoundError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<input>", line 1, in <module> File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "B:/projects/GRA/FederatedLearningAnalysis/anomaly-detection/PyTroch_singleworker_combined_train_test.py", line 266, in <module> app.run(main) File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\absl\app.py", line 299, in run _run_main(main, args) File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\absl\app.py", line 250, in _run_main sys.exit(main(argv)) File "B:/projects/GRA/FederatedLearningAnalysis/anomaly-detection/PyTroch_singleworker_combined_train_test.py", line 261, in main tr=tr, df_malicious=load_mal_data(), features=features) File "B:/projects/GRA/FederatedLearningAnalysis/anomaly-detection/PyTroch_singleworker_combined_train_test.py", line 131, in test_with_data Y_pred = model.predict(torch.from_numpy(X_test_scaled).float()) File 
"B:/projects/GRA/FederatedLearningAnalysis/anomaly-detection/PyTroch_singleworker_combined_train_test.py", line 204, in predict x_pred = self.model(x) File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "B:/projects/GRA/FederatedLearningAnalysis/anomaly-detection/PyTroch_singleworker_combined_train_test.py", line 183, in forward x = torch.tanh(self.fc1(x)) File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\frameworks\hook\hook.py", line 336, in overloaded_func response = handle_func_command(command) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\tensors\interpreters\native.py", line 343, in handle_func_command response = new_type.handle_func_command(new_command) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\pointers\object_pointer.py", line 213, in handle_func_command response = owner.send_command(location, cmd_name=cmd, args_=args_, kwargs_=kwargs_) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\workers\base.py", line 626, in send_command ret_val = self.send_msg(message, location=recipient) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\workers\base.py", line 274, in send_msg bin_response = self._send_msg(bin_message, location) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\workers\virtual.py", line 16, in _send_msg return location._recv_msg(message) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\workers\virtual.py", line 20, in _recv_msg return self.recv_msg(message) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\workers\base.py", line 310, in recv_msg response = self._message_router[type(msg)](msg) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\workers\base.py", line 451, in execute_tensor_command return self.execute_computation_action(cmd.action) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\workers\base.py", line 514, in execute_computation_action response = command(*args_, **kwargs_) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\frameworks\hook\hook.py", line 336, in overloaded_func response = handle_func_command(command) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\tensors\interpreters\native.py", line 367, in handle_func_command response = cls._get_response(cmd, args_, kwargs_) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\tensors\interpreters\native.py", line 401, in _get_response response = command_method(*args_, **kwargs_) File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\functional.py", line 1370, in linear ret = torch.addmm(bias, input, weight.t()) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\generic\frameworks\hook\hook.py", line 336, in overloaded_func response = handle_func_command(command) 
File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\tensors\interpreters\native.py", line 367, in handle_func_command response = cls._get_response(cmd, args_, kwargs_) File "C:\Users\OMEGA-Money\AppData\Roaming\Python\Python36\site-packages\syft\frameworks\torch\tensors\interpreters\native.py", line 401, in _get_response response = command_method(*args_, **kwargs_) IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) CODE bits: def predict(self, x): x = x.send('testing') self.model.send(x.location) x_pred = self.model(x) mse = np.mean(np.power(x.get().data.numpy() - x_pred.get().data.numpy(), 2), axis=1) y_pred = mse > self.threshold return y_pred.astype(int) CODE 2: X_test = df.drop(columns=["malicious"]).values X_test_scaled = scaler.transform(X_test) Y_test = df["malicious"] Y_pred = model.predict(torch.from_numpy(X_test_scaled).float())
st178176
cc @ptrblck do you know who would be the best person to answer PySyft questions?
st178177
@RavikantSingh is a member of OpenMined and might be able to help. I cannot find the user account of Andrew Trask (and I am unsure if he's active on this board).
st178178
Hi, I am using automatic mixed precision with DataParallel in a single process. I read the example https://pytorch.org/docs/stable/notes/amp_examples.html#dataparallel-in-a-single-process and it says @autocast() should be added to MyModel's forward. My question is: should I also add @autocast() in the subModel? For example:

MyModel(nn.Module)
    …
    self.conv1 = subModel(…)

    @autocast()
    def forward()
    …
st178179
Solved by mcarilli in post #3.
st178180
Anything that runs under autocast in a particular thread will have autocast enabled. MyModel.forward is what DP runs in a side thread. If MyModel.forward is decorated with @autocast(), that takes care of enabling autocast for the side thread. If subModel.forward runs within MyModel’s forward, you don’t need to additionally decorate subModel.forward.
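As a concrete sketch of the above (class names and layer sizes are made up, not from the post): only the top-level forward that DataParallel runs in its side thread carries the decorator; the submodule's forward inherits autocast automatically.

import torch
import torch.nn as nn
from torch.cuda.amp import autocast

class SubModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)

    # No @autocast() needed here: it runs inside MyModel.forward's autocast region.
    def forward(self, x):
        return torch.relu(self.conv(x))

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = SubModel()
        self.head = nn.Linear(16, 10)

    @autocast()  # enables autocast in the side thread that DataParallel spawns
    def forward(self, x):
        x = self.sub(x)            # runs under autocast automatically
        x = x.mean(dim=(2, 3))     # global average pool -> (N, 16)
        return self.head(x)

model = nn.DataParallel(MyModel().cuda())
out = model(torch.randn(8, 3, 32, 32).cuda())  # mixed float16/float32 execution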
st178181
Hi everyone. I am working on training models across multiple machines. Following the instructions from the documentation, I wrote the following code:

On machine 1:

import torch
torch.distributed.init_process_group(backend='nccl', world_size=2, rank=0, init_method='tcp://172.16.0.246:3456')
net = torch.nn.Linear(256, 128).cuda()
net = torch.nn.parallel.DistributedDataParallel(net, [0], 0)

On machine 2:

import torch
torch.distributed.init_process_group(backend='nccl', world_size=2, rank=1, init_method='tcp://172.16.0.246:3456')
net = torch.nn.Linear(256, 128).cuda()
net = torch.nn.parallel.DistributedDataParallel(net, [0], 0)

Here 172.16.0.246 is the IP of machine 1. However, the code hangs unexpectedly when calling _distributed_broadcast_coalesced during the initialization of DistributedDataParallel. Does anyone know what I did wrong?
st178182
Solved by mrshenli in post #6.
st178183
Hey @IcarusWizard, what error did you see? Does the gloo backend work for you? Can you run the following command to check whether the hostname resolves to the expected IP on both machines?

getent hosts `hostname`

If the resolved IP is wrong, you can set the NCCL_SOCKET_IFNAME env var to point to the right NIC (e.g., eth0).
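For illustration, a small sketch (not from the thread) of pinning both backends to a specific network interface before rendezvous; the interface name eth0 and the RANK fallback are assumptions, and the address is the one used earlier in this thread.

import os
import torch.distributed as dist

# Must be set before init_process_group so NCCL/Gloo bind to the intended NIC.
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"   # example interface name
os.environ["GLOO_SOCKET_IFNAME"] = "eth0"

dist.init_process_group(backend="nccl", init_method="tcp://172.16.0.246:3456",
                        world_size=2, rank=int(os.environ.get("RANK", 0)))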
st178184
Hey Shen! The strange thing is that there is actually no error. The code gets stuck at _distributed_broadcast_coalesced and cannot be terminated by Ctrl+C. I have tried gloo, and it works smoothly, which may suggest it is not a firewall issue. I have also set GLOO_SOCKET_IFNAME and NCCL_SOCKET_IFNAME to the correct interface on both machines. The command you suggested returns

127.0.1.1 icarus-Polixir

on machine 1, and

fe80::b62e:99ff:fe72:d1a1 polixir-G291-Z20-00
fe80::98a3:19ff:fe05:3c61 polixir-G291-Z20-00
fe80::42:adff:fe62:bb24 polixir-G291-Z20-00
fe80::d01c:dff:fe28:8b6f polixir-G291-Z20-00
fe80::1c3d:c8ff:fe62:76cc polixir-G291-Z20-00

on machine 2. I don't know if this is related to the issue.
st178185
If NCCL_SOCKET_IFNAME points to the correct interface, it should be fine even if the hostname resolves to the wrong address, as the latter is a fallback for the former. And since it has already reached the broadcast op in DDP, I would assume the rendezvous in init_process_group was successful. Could you please confirm this by adding the following code right after init_process_group and see if it also hangs at this allreduce?

print("rendezvous done")
tmp = torch.ones(2, 2)
torch.distributed.all_reduce(tmp)
print(tmp)

Another thing we could try is to set the following env vars and see if any NCCL logs stand out.

export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=ALL
st178186
It stuck on the all_reduce operation too. I get some new information from NCCL logs. On machine 1: rendezvous done icarus-Polixir:637574:637574 [0] NCCL INFO Bootstrap : Using [0]wlp0s20f3:172.16.0.246<0> icarus-Polixir:637574:637574 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). icarus-Polixir:637574:637574 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] icarus-Polixir:637574:637574 [0] NCCL INFO NET/Socket : Using [0]wlp0s20f3:172.16.0.246<0> NCCL version 2.4.8+cuda10.2 icarus-Polixir:637574:637587 [0] NCCL INFO Setting affinity for GPU 0 to ff icarus-Polixir:637574:637587 [0] NCCL INFO CUDA Dev 0[0], Socket NIC distance : PHB icarus-Polixir:637574:637587 [0] NCCL INFO Channel 00 : 0 1 icarus-Polixir:637574:637587 [0] NCCL INFO NET/Socket : GPU Direct RDMA Disabled for GPU 0[0] / HCA 0 (distance 2 >= 2) icarus-Polixir:637574:637587 [0] NCCL INFO Ring 00 : 1 -> 0 [receive] via NET/Socket/0 icarus-Polixir:637574:637587 [0] NCCL INFO NET/Socket: Using 1 threads and 1 sockets per thread icarus-Polixir:637574:637587 [0] NCCL INFO Ring 00 : 0 -> 1 [send] via NET/Socket/0 On machine 2: rendezvous done polixir-G291-Z20-00:2672:2672 [0] NCCL INFO Bootstrap : Using [0]enp129s0f1:172.16.16.122<0> polixir-G291-Z20-00:2672:2672 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). polixir-G291-Z20-00:2672:2672 [0] NCCL INFO NET/IB : Using [0]mlx5_1:1/RoCE ; OOB enp129s0f1:172.16.16.122<0> polixir-G291-Z20-00:2672:2763 [0] NCCL INFO Setting affinity for GPU 0 to ffff0000,00000000,ffff0000,00000000 polixir-G291-Z20-00:2672:2763 [0] NCCL INFO CUDA Dev 0[4], IB NIC distance : SYS polixir-G291-Z20-00:2672:2763 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for GPU 0[4] / HCA 0 (distance 4 >= 2) polixir-G291-Z20-00:2672:2763 [0] NCCL INFO Ring 00 : 0 -> 1 [receive] via NET/IB/0 polixir-G291-Z20-00:2672:2763 [0] NCCL INFO Ring 00 : 1 -> 0 [send] via NET/IB/0 polixir-G291-Z20-00:2672:2763 [0] NCCL INFO NET/IB: Dev 0 Port 1 qpn 2358 mtu 3 GID 0 (80FE/A1D172FEFF992EB6) First thing I noticed is that it trying to find libnccl-net.so. However I cannot find this file in official release of NCCL. I have tried to run this two scripts both on machine 1, and it works just fine. Thus I think it may not be the source of problem. Any other thoughts?
st178187
Did you install NCCL yourself, or are you using the one bundled with PyTorch? Not sure if I interpreted the logs correctly, but it looks like machine 1 is trying to use TCP while machine 2 is trying to use IB. What if you set NCCL_IB_DISABLE on both machines? https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-ib-disable
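A minimal sketch of that suggestion (the init_method address is the one from this thread; everything else is illustrative):

import os
import torch.distributed as dist

# Force NCCL to fall back to sockets instead of InfiniBand/RoCE on this host.
os.environ["NCCL_IB_DISABLE"] = "1"

dist.init_process_group(backend="nccl", init_method="tcp://172.16.0.246:3456",
                        world_size=2, rank=1)  # rank=1 on machine 2, rank=0 on machine 1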
st178188
IT WORKS after disabling IB. It seems IB is a hardware-related feature, and the network interface on machine 1 simply doesn't support it. I am using the NCCL bundled with PyTorch. I have also tried installing NCCL myself, but it makes no difference. Thanks a lot for your help!
st178189
Hi, I'm new to distributed training. When I train with DistributedDataParallel, do I get the functionality of DataParallel? That is, can I assume that on a single node with more than one GPU, all GPUs on that node will be utilized? Thanks, Zlapp
st178190
Solved by mrshenli in post #2.
st178191
Yep, DistributedDataParallel (DDP) can utilize multiple GPUs on the same node, but it works differently from DataParallel (DP). DDP uses multiple processes, one process per GPU, while DP is single-process multi-thread. See this page for a comparison between the two: https://pytorch.org/tutorials/beginner/dist_overview.html#data-parallel-training and this to get started with DDP: https://pytorch.org/docs/stable/notes/ddp.html
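For reference, a compact single-node DDP sketch (not from the post) that spawns one process per visible GPU; the toy linear model and random batch are placeholders.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(32, 4).cuda(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    out = model(torch.randn(16, 32).cuda(rank))  # each rank would get its own data shard
    out.sum().backward()                         # gradients are all-reduced across ranks
    opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)

Compared to DataParallel, each process here owns one GPU and its own Python interpreter, so there is no GIL contention and gradient synchronization happens via NCCL all-reduce.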
st178192
Hi, I am trying to use DistributedDataParallel for my job. I wrote two following codes but none of them is working properly. It would be great kind if someone helps me to find the problem. class Model(nn.Module): # Our model def __init__(self): super(Model, self).__init__() self.fc1 = nn.Conv2d(1,10,3) self.bn1 = nn.BatchNorm2d(10) self.fc2= nn.Conv2d(10,20,3) self.bn2 = nn.BatchNorm2d(20) self.fc3= nn.Linear(11520,10) def forward(self,x): print(f'inout_size: {x.size()}') x = F.relu(self.fc1(x)) x = self.bn1(x) x = F.relu(self.fc2(x)) x = self.bn2(x) x = x.view(x.size(0),-1) x = self.fc3(x) print(f'output_size: {x.size()}') return(x) ######################################## def train(args): ######################################## rank =args.gpui dist.init_process_group(backed = 'nccl', init_method = 'env://', world_size= args.world_size, rank=rank) torch.manual_seed(0) model = Model() torch.cuda.set_device(args.gpui) model= model.to(device) optimizer = optim.Adam(model.parameters(),lr=0.1) lr_sch = lr_scheduler.StepLR(optimizer,step_size=2,gamma=0.1) criterion = nn.CrossEntropyLoss().to(device) ###################################### model = nn.DistributedDataParallel(model, device_ids = [args.gpui]) ##################################### mnist =torchvision.datasets.MNIST('./data',train= True,download=True, transform =transforms.ToTensor()) #################################### train_sampler = torch.utils.data.distributed.DistributedSampler(mnist, num_replicas=args.world_size, rank = rank) ################################### dataloader = DataLoader(mnist,batch_size=32,num_workers =4,pin_memory=True, sampler = train_sampler) ##################################### for epoch in range(num_epochs): total_loss =0 for X,y in dataloader: X= X.to(device) y = y.long().to(device) pred = model(X) loss = criterion(pred,y) t_loss+= loss.item() optimizer.zero_grad() loss.backward() optimizer.step() print(f'Loss: {t_loss/len(dataloader)}') if __name__=='__main__': parser = argparse.ArgumentParser() parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N') parser.add_argument('-g', '--gpus', default=1, type=int, help='number of gpus per node') parser.add_argument('-gi', '--gpui', default=3, type=int, help='the index of gpu') parser.add_argument('-nr', '--nr', default=0, type=int, help='ranking within the nodes') parser.add_argument('--epochs', default=2, type=int, metavar='N', help='number of total epochs to run') args = parser.parse_args() ######################################################### args.world_size = args.gpus * args.nodes # it is equal to the total number of gpus, because we use each gpu per node os.environ['MASTER_ADDR'] = '172.20.24.55' # it tells which IP address it should look for process 0 os.environ['MASTER_PORT'] = '8890' # mp.spawn(train,args=(args,),nprocs=args.world_size) # I got the following error, --> 125 mp.spawn(train,args=(args,),nprocs=args.world_size) # 126 ######################################################### 127 ~/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method) 198 ' torch.multiprocessing.start_process(...)' % start_method) 199 warnings.warn(msg) --> 200 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') ~/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method) 147 daemon=daemon, 148 ) --> 149 process.start() 150 error_queues.append(error_queue) 151 processes.append(process) 
~/anaconda3/lib/python3.7/multiprocessing/process.py in start(self) 110 'daemonic processes are not allowed to have children' 111 _cleanup() --> 112 self._popen = self._Popen(self) 113 self._sentinel = self._popen.sentinel 114 # Avoid a refcycle if the target function holds an indirect ~/anaconda3/lib/python3.7/multiprocessing/context.py in _Popen(process_obj) 282 def _Popen(process_obj): 283 from .popen_spawn_posix import Popen --> 284 return Popen(process_obj) 285 286 class ForkServerProcess(process.BaseProcess): ~/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py in __init__(self, process_obj) 30 def __init__(self, process_obj): 31 self._fds = [] ---> 32 super().__init__(process_obj) 33 34 def duplicate_for_child(self, fd): ~/anaconda3/lib/python3.7/multiprocessing/popen_fork.py in __init__(self, process_obj) 18 self.returncode = None 19 self.finalizer = None ---> 20 self._launch(process_obj) 21 22 def duplicate_for_child(self, fd): ~/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py in _launch(self, process_obj) 40 tracker_fd = semaphore_tracker.getfd() 41 self._fds.append(tracker_fd) ---> 42 prep_data = spawn.get_preparation_data(process_obj._name) 43 fp = io.BytesIO() 44 set_spawning_popen(self) ~/anaconda3/lib/python3.7/multiprocessing/spawn.py in get_preparation_data(name) 170 # or through direct execution (or to leave it alone entirely) 171 main_module = sys.modules['__main__'] --> 172 main_mod_name = getattr(main_module.__spec__, "name", None) 173 if main_mod_name is not None: 174 d['init_main_from_name'] = main_mod_name AttributeError: module '__main__' has no attribute '__spec__'
st178193
One error I noticed is that, when using spawn, it will pass the rank as the first argument to the target function, followed by the args you provided. So the signature of the train function should be train(rank, args). But the above does not seem to be the cause of the logged error. That error does not seem to be PyTorch related, see this discussion: https://stackoverflow.com/questions/45720153/python-multiprocessing-error-attributeerror-module-main-has-no-attribute
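To make the signature point concrete, a tiny sketch (names are illustrative, not the poster's code):

import torch.multiprocessing as mp

def train(rank, args):          # rank is injected by mp.spawn as the first argument
    print(f"worker {rank} got args: {args}")

if __name__ == "__main__":
    args = {"epochs": 2}        # placeholder for the parsed argparse namespace
    mp.spawn(train, args=(args,), nprocs=4)   # each process calls train(rank, args)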
st178194
Thank you for your answer. What about the following code? I tried it in another way. And I found the below error. class Model(nn.Module): # Our model def __init__(self): super(Model, self).__init__() self.fc1 = nn.Conv2d(1,10,3) self.bn1 = nn.BatchNorm2d(10) self.fc2= nn.Conv2d(10,20,3) self.bn2 = nn.BatchNorm2d(20) self.fc3= nn.Linear(11520,10) def forward(self,x): print(f'inout_size: {x.size()}') x = F.relu(self.fc1(x)) x = self.bn1(x) x = F.relu(self.fc2(x)) x = self.bn2(x) x = x.view(x.size(0),-1) x = self.fc3(x) print(f'output_size: {x.size()}') return(x) ######################################## def train(gpu): rank = gpu dist.init_process_group(backed = 'nccl', init_method = 'env://', world_size= 4, rank=rank) torch.manual_seed(0) model = Model() torch.cuda.set_device(gpu) model= model.to(device) optimizer = optim.Adam(model.parameters(),lr=0.1) lr_sch = lr_scheduler.StepLR(optimizer,step_size=2,gamma=0.1) criterion = nn.CrossEntropyLoss().to(device) ###################################### model = nn.DistributedDataParallel(model, device_ids = [gpu]) ##################################### mnist =torchvision.datasets.MNIST('./data',train= True,download=True, transform =transforms.ToTensor()) #################################### train_sampler = torch.utils.data.distributed.DistributedSampler(mnist, num_replicas=4, rank = rank) ################################### dataloader = DataLoader(mnist,batch_size=32,num_workers =4,pin_memory=True, sampler = train_sampler) ##################################### for epoch in range(10): total_loss =0 for X,y in dataloader: X= X.to(device) y = y.long().to(device) pred = model(X) loss = criterion(pred,y) t_loss+= loss.item() optimizer.zero_grad() loss.backward() optimizer.step() print(f'Loss: {t_loss/len(dataloader)}') def main(): os.environ['MASTER_ADDR'] = '172.20.24.55' ### the IP of vm_gpu02 os.environ['MASTER_PORT'] = '9000' mp.spawn(train,nprocs=4) if __name__=='__main__': main() process 3 terminated with exit code 1
st178195
887574002: torch.cuda.set_device(gpu) model = model.to(device)

I might be missing something, but it looks like the device variable is undefined. Did you mean gpu instead?
st178196
Oh, yes. Sorry, I was running several codes today simultaneously. that’s why I didn’t notice this mistake. Hey, I corrected the mistake, it wasn’t the problem though. Here you can see the traceback error. Actually I want to run the model on 4 gpus and I declared it as nproc=4, but I do not know if I should add something else to my code or not? At the moment it only reads one gpu. class Model(nn.Module): # Our model def __init__(self): super(Model, self).__init__() self.fc1 = nn.Conv2d(1,10,3) self.bn1 = nn.BatchNorm2d(10) self.fc2= nn.Conv2d(10,20,3) self.bn2 = nn.BatchNorm2d(20) self.fc3= nn.Linear(11520,10) def forward(self,x): print(f'inout_size: {x.size()}') x = F.relu(self.fc1(x)) x = self.bn1(x) x = F.relu(self.fc2(x)) x = self.bn2(x) x = x.view(x.size(0),-1) x = self.fc3(x) print(f'output_size: {x.size()}') return(x) ######################################## def train(gpu): rank = gpu dist.init_process_group(backed = 'nccl', init_method = 'env://', world_size= 4, rank=rank) torch.manual_seed(0) model = Model() torch.cuda.set_device(gpu) model= model.to(gpu) optimizer = optim.Adam(model.parameters(),lr=0.1) lr_sch = lr_scheduler.StepLR(optimizer,step_size=2,gamma=0.1) criterion = nn.CrossEntropyLoss().to(gpu) ###################################### model = nn.DistributedDataParallel(model, device_ids = [gpu]) ##################################### mnist =torchvision.datasets.MNIST('./data',train= True,download=True, transform =transforms.ToTensor()) #################################### train_sampler = torch.utils.data.distributed.DistributedSampler(mnist, num_replicas=4, rank = rank) ################################### dataloader = DataLoader(mnist,batch_size=32,num_workers =4,pin_memory=True, sampler = train_sampler) ##################################### for epoch in range(10): total_loss =0 for X,y in dataloader: X= X.to(gpu) y = y.long().to(gpu) pred = model(X) loss = criterion(pred,y) t_loss+= loss.item() optimizer.zero_grad() loss.backward() optimizer.step() print(f'Loss: {t_loss/len(dataloader)}') def main(): os.environ['MASTER_ADDR'] = '172.20.24.55' ### the IP of vm_gpu02 os.environ['MASTER_PORT'] = '9000' mp.spawn(train,nprocs=4) if __name__=='__main__': main() Exception Traceback (most recent call last) <ipython-input-10-e18ebd33df91> in <module> 1 if __name__=='__main__': 2 ----> 3 main() <ipython-input-9-331de420a7b8> in main() 5 os.environ['MASTER_PORT'] = '9000' 6 ----> 7 mp.spawn(train,nprocs=4) ~/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method) 198 ' torch.multiprocessing.start_process(...)' % start_method) 199 warnings.warn(msg) --> 200 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') ~/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method) 156 157 # Loop on join until it returns True or raises an exception. --> 158 while not context.join(): 159 pass 160 ~/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in join(self, timeout) 111 raise Exception( 112 "process %d terminated with exit code %d" % --> 113 (error_index, exitcode) 114 ) 115 Exception: process 2 terminated with exit code 1
st178197
Found a few errors when debugging this locally:

- in the init_process_group call, the arg name is backend instead of backed
- DDP is from the torch.nn.parallel package instead of torch.nn
- t_loss is used before definition

The following code works for me. I tried it on 2 GPUs, as I only have 2 in my dev env. Some general suggestions for debugging: 1) it helps to locate which line threw the error, and 2) it is easier to debug if you start from a simpler version and gradually add complexity to the code.

import torch
import torch.nn as nn
import torch.nn.functional as F
import os
import torch.multiprocessing as mp
import torch.distributed as dist
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader


class Model(nn.Module):  # Our model
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Conv2d(1, 10, 3)
        self.bn1 = nn.BatchNorm2d(10)
        self.fc2 = nn.Conv2d(10, 20, 3)
        self.bn2 = nn.BatchNorm2d(20)
        self.fc3 = nn.Linear(11520, 10)

    def forward(self, x):
        print(f'inout_size: {x.size()}')
        x = F.relu(self.fc1(x))
        x = self.bn1(x)
        x = F.relu(self.fc2(x))
        x = self.bn2(x)
        x = x.view(x.size(0), -1)
        x = self.fc3(x)
        print(f'output_size: {x.size()}')
        return(x)

########################################
def train(gpu):
    print("1111")
    rank = gpu
    dist.init_process_group(backend='nccl', init_method='env://', world_size=2, rank=rank)
    print("2222")
    torch.manual_seed(0)
    model = Model()
    torch.cuda.set_device(gpu)
    model = model.to(gpu)
    optimizer = optim.Adam(model.parameters(), lr=0.1)
    lr_sch = lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)
    criterion = nn.CrossEntropyLoss().to(gpu)
    ######################################
    model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
    #####################################
    mnist = torchvision.datasets.MNIST('./data', train=True, download=True, transform=transforms.ToTensor())
    ####################################
    train_sampler = torch.utils.data.distributed.DistributedSampler(mnist, num_replicas=4, rank=rank)
    ###################################
    dataloader = DataLoader(mnist, batch_size=32, num_workers=4, pin_memory=True, sampler=train_sampler)
    #####################################
    t_loss = None
    for epoch in range(2):
        total_loss = 0
        for X, y in dataloader:
            X = X.to(gpu)
            y = y.long().to(gpu)
            pred = model(X)
            loss = criterion(pred, y)
            t_loss = loss.item() if t_loss is None else t_loss + loss.item()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f'Loss: {t_loss/len(dataloader)}')

def main():
    os.environ['MASTER_ADDR'] = 'localhost'  ### the IP of vm_gpu02
    os.environ['MASTER_PORT'] = '9000'
    mp.spawn(train, nprocs=2)

if __name__ == '__main__':
    main()
st178198
Hi, thank you for your answer. Regarding your code, when we run it, isn't it needed to give the GPU index? How does the code understand which GPUs the processes should run on? And my second question is about num_replicas: shouldn't it be equal to num_process?
st178199
887574002:

    Regarding your code, when we run it, isn't it needed to give the GPU index? How does the code understand which GPUs the processes should run on?

A process can access any visible GPU. The one-process-per-GPU requirement comes from DDP, to avoid NCCL comm hangs. Ideally, we should set CUDA_VISIBLE_DEVICES for each process accordingly, so that each process only sees one GPU and cuda:0 on each process points to a different GPU. But if you are confident that no code would accidentally access a different GPU, directly doing .to(gpu) would be sufficient. We are using the id of the process provided by mp.spawn as the GPU id.

887574002:

    And my second question is about num_replicas: shouldn't it be equal to num_process?

Yep, you are right.
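To make the per-process CUDA_VISIBLE_DEVICES idea concrete, here is a minimal sketch (not part of the original answer). It assumes 4 GPUs on one machine and that the env var is set before the spawned process makes any CUDA call, so cuda:0 in each worker maps to a different physical GPU:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def train(rank):
    # Hide all GPUs except one; must happen before any CUDA call in this process.
    os.environ['CUDA_VISIBLE_DEVICES'] = str(rank)
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '9000'
    dist.init_process_group(backend='nccl', init_method='env://', world_size=4, rank=rank)
    model = torch.nn.Linear(10, 10).to('cuda:0')  # the only visible GPU in this process
    ddp = torch.nn.parallel.DistributedDataParallel(model, device_ids=[0])
    # ... training loop ...

if __name__ == '__main__':
    mp.spawn(train, nprocs=4)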
st178200
Hi, when I want to get data from a dataloader and pass it to a subprocess, my model and the subprocess block. But if I create the dataloader and get the data inside the subprocess, the model works normally.

Following code will block:

def func(net, d):
    out = net(d)

if __name__ == '__main__':
    net = Net(input_w=28*28, width=64, n_layer=3, output_w=10)  # dense network
    trainloader = get_data()  # get trainloader
    data, label = iter(trainloader).next()
    data = data.view(data.size(0), -1)
    with Pool() as pool:
        pool.starmap(func, [(net, data)])

Following code works normally:

def func(net):
    trainloader = get_data()  # get trainloader
    data, label = iter(trainloader).next()
    data = data.view(data.size(0), -1)
    out = net(data)

if __name__ == '__main__':
    net = Net(input_w=28*28, width=64, n_layer=3, output_w=10)  # dense network
    with Pool() as pool:
        pool.starmap(func, [(net, )])

I don't know what caused this problem. Thanks everyone.
st178201
Does it work if the data tensor is not from a DataLoader? Say, what if you create the tensor using torch.zeros? Does it still hang?
st178202
Not working. If I create torch.zeros inside func(), it works. But if I create it under if __name__ == '__main__': and then pass it to func(), it hangs.
st178203
hmm, the following code works for me locally. Can you try this in your dev env?

import torch
from torch.multiprocessing import Pool

def func(net, d):
    out = net(d)
    print(out)

if __name__ == '__main__':
    net = torch.nn.Linear(2, 2)
    data = torch.zeros(2, 2)
    with Pool() as pool:
        pool.starmap(func, [(net, data)])
st178204
I tried your code and found a strange problem!! If I pass torch.zeros(32, 28*28), it works. But if I pass (64, 28*28), it hangs. This problem doesn't happen on my MacBook, but it happens on my Linux PC.
st178205
Could it be the machine/container has been configured to use a very small shm size?
st178206
Your suggestion is helpful for me! I'll try to adjust the shm size. If it succeeds, I'll reply. Thank you very much.
st178207
I typed df -h in the terminal and it showed the following:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.9G   30M  7.9G   1% /dev/shm

/dev/shm has 7.9G, which should be big enough.
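If shared memory turns out not to be the limit, one possible workaround to try (my suggestion, not something verified in this thread) is switching PyTorch's CPU tensor sharing strategy from file descriptors to the file system, which changes how tensors are shared between processes:

import torch.multiprocessing as mp

if __name__ == '__main__':
    # Must be called before creating pools/queues that share tensors.
    # 'file_system' shares via files instead of open file descriptors; see the
    # torch.multiprocessing docs for the trade-offs (it can leak shm if a process dies).
    mp.set_sharing_strategy('file_system')
    print(mp.get_all_sharing_strategies())  # available strategies on this platform
    print(mp.get_sharing_strategy())        # currently active strategy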
st178208
I want to use the PyTorch DDP module for distributed training, and I use OpenBLAS as the BLAS. When I execute the following benchmark

import timeit
runtimes = []
threads = [1] + [t for t in range(2, 49, 2)]
for t in threads:
    torch.set_num_threads(t)
    r = timeit.timeit(setup="import torch; x = torch.randn(1024, 1024); y = torch.randn(1024, 1024)", stmt="torch.mm(x, y)", number=100)
    runtimes.append(r)

I found that different threads were running on different cores. However, when I execute my training script, I found that all threads are bound to the same core.

script:

export GLOO_SOCKET_IFNAME=ib0
export NUM_CORES=64
export OMP_NUM_THREADS=$NUM_CORES

NPROC_PER_NODE=1
COMMAND="$HOME/deepnet_mpi/CosmoFlow.py --epochs=120 --backend=gloo --workers=0 --batch-size=1 --print-freq=50 --data=$HOME/Nbody/datasets/v6"

python3 -m torch.distributed.launch \
    --nproc_per_node=$NPROC_PER_NODE \
    $COMMAND

What is the reason for this problem? And this is my environment:

Collecting environment information...
PyTorch version: 1.6.0a0+b31f58d
Is debug build: No
CUDA used to build PyTorch: None

OS: CentOS Linux release 7.6.1810 (AltArch)
GCC version: (GCC) 9.2.0
CMake version: version 3.16.5

Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA

Versions of relevant libraries:
[pip3] numpy==1.19.1
[pip3] torch==1.6.0a0+b31f58d
[conda] Could not collect

I also found that when multiple processes are set up, different processes use the same CPU core.
st178209
hmm, I am not aware of any DDP code that would change the threading behavior. cc @VitalyFedyunin do you know what might lead to this behavior?
st178210
Thanks for your reply. I have now solved this problem. The reason is that I did not set an OpenMP environment variable:

export GOMP_CPU_AFFINITY=0-127
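For reference, a sketch of how the affinity-related variables could sit together in a launch script like the one above. The 0-127 core range is specific to that 128-core machine, and train.py is a placeholder script name:

# Pin the OpenMP threads of the training process to distinct cores.
export OMP_NUM_THREADS=64          # threads per process
export GOMP_CPU_AFFINITY=0-127     # GCC OpenMP: allowed core list (machine-specific)
export OMP_PROC_BIND=true          # keep threads on their assigned cores

python3 -m torch.distributed.launch --nproc_per_node=1 train.py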
st178211
Hi, I’m wondering how to deal with occasional OOM error happend during DDP backward. For forward, oom can be captured simply by a try-catch statement. For backward, however, loss.backward() performs gradient calculation and the registered hooks perform gradient reduction at the same time. Is it possible to hang due to oom errors during backward in several process so that the other successful processes keep waiting for them? If so, is there a nice way to recover from this problem?
st178212
st178213
T_Qri:

    Is it possible to hang due to oom errors during backward in several process so that the other successful processes keep waiting for them?

Yes, it is. If one process hit OOM and skipped/reran the backward pass, it would cause de-synchronization across processes in the same group, which would lead to a hang or crash.

    If so, is there a nice way to recover from this problem?

Yep, TorchElastic is built to solve this issue. cc @Kiuk_Chung
st178214
Have a look here at:

- https://pytorch.org/elastic/0.2.0/train_script.html - for instructions on how to write a "torchelastic compliant" train script
- https://pytorch.org/elastic/0.2.0/quickstart.html - for a quickstart on launching your script with torchelastic
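To make the pointer concrete, here is a rough sketch of what launching with torchelastic 0.2.0 looks like. The etcd endpoint, job id, worker counts and script name are placeholders, and the exact flags are best checked against the linked quickstart:

# Elastic launch with 1 to 4 nodes and 8 workers per node.
# Requires an etcd server reachable at ETCD_HOST:ETCD_PORT for rendezvous.
python -m torchelastic.distributed.launch \
    --nnodes=1:4 \
    --nproc_per_node=8 \
    --rdzv_id=my_job_id \
    --rdzv_backend=etcd \
    --rdzv_endpoint=ETCD_HOST:ETCD_PORT \
    train_script.py --arg1 --arg2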
st178215
Hi all, apologies in advacne it is my first post. i am trying an experiment with new data and i getting an error i was wondering if somebody could please assist. import math import cvxpy as cp import matplotlib.pyplot as plt import numpy as np import pandas as pd import torch from cvxpylayers.torch import CvxpyLayer import latexify latexify.latexify() torch.set_default_tensor_type(torch.DoubleTensor) %matplotlib inline N_train=1000 test=yf.download("SPY", start="2012-01-01", end="2017-04-30")['Close'].to_numpy() outputs=torch.from_numpy(test) inputs=np.linspace(1,100,len(outputs)) inputs=torch.from_numpy(inputs) X_train = inputs[:N_train] Y_train = outputs[:N_train] X_val = inputs[N_train:] Y_val = outputs[N_train:] len(X_val) len(Y_val) def create_layer(): y_cp = cp.Variable(n) x_minus_y = cp.Variable(n) x_param = cp.Parameter(n) theta_param = cp.Parameter((n, n)) lambda_param = cp.Parameter(pos=True) objective = ( cp.sum_squares(theta_param @ x_minus_y) + lambda_param*cp.sum_squares(cp.diff(y_cp)) ) constraints = [ x_minus_y == x_param - y_cp ] problem = cp.Problem(cp.Minimize(objective), constraints) layer = CvxpyLayer( problem, parameters=[x_param, theta_param, lambda_param], variables=[y_cp]) return layer layer = create_layer() import torch from torch.utils.data import TensorDataset, DataLoader import numpy as np from cvxpylayers.torch import CvxpyLayer torch.set_default_dtype(torch.double) from tqdm.notebook import tqdm def fit(loss, params, X, Y, Xval, Yval, batch_size=128, lr=1e-3, epochs=100, verbose=False, print_every=1, callback=None): """ Arguments: loss: given x and y in batched form, evaluates loss. params: list of parameters to optimize. X: input data, torch tensor. Y: output data, torch tensor. Xval: input validation data, torch tensor. Yval: output validation data, torch tensor. 
""" train_dset = TensorDataset(X, Y) train_loader = DataLoader(train_dset, batch_size=batch_size, shuffle=True) opt = torch.optim.Adam(params, lr=lr) train_losses = [] val_losses = [] for epoch in tqdm(range(epochs)): if callback is not None: callback() with torch.no_grad(): val_losses.append(loss(Xval, Yval).item()) if verbose and epoch % print_every == 0: print("val loss %03d | %3.5f" % (epoch + 1, val_losses[-1])) batch = 1 train_losses.append([]) for Xbatch, Ybatch in train_loader: opt.zero_grad() l = loss(Xbatch, Ybatch) l.backward() opt.step() train_losses[-1].append(l.item()) if verbose and epoch % print_every == 0: print("batch %03d / %03d | %3.5f" % (batch, len(train_loader), np.mean(train_losses[-1]))) batch += 1 return val_losses, train_losses theta_tch = torch.eye(n, requires_grad=True) lambda_tch = torch.tensor(0.5, requires_grad=True) params = [theta_tch, lambda_tch] def loss_fn(X, actual): preds = layer(X, theta_tch, lambda_tch)[0] mse_per_example = (preds - actual).pow(2).mean(axis=1) return mse_per_example.mean() val_losses, train_losses = fit( loss_fn, params, X_train, Y_train, X_val, Y_val, lr=1e-2, batch_size=8, epochs=15, verbose=True, print_every=1) The above is the code taken from - https://github.com/cvxgrp/cvxpylayers/blob/master/examples/torch/signal_denoising.ipynb 1 and the error i am getting --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-58-0b0fb50d4406> in <module> ----> 1 val_losses, train_losses = fit( 2 loss_fn, params, X_train, Y_train, X_val, Y_val, lr=1e-2, batch_size=8, 3 epochs=15, verbose=True, print_every=1) <ipython-input-56-f19c59cb9b44> in fit(loss, params, X, Y, Xval, Yval, batch_size, lr, epochs, verbose, print_every, callback) 32 33 with torch.no_grad(): ---> 34 val_losses.append(loss(Xval, Yval).item()) 35 if verbose and epoch % print_every == 0: 36 print("val loss %03d | %3.5f" % (epoch + 1, val_losses[-1])) <ipython-input-57-0aead751c22d> in loss_fn(X, actual) 4 5 def loss_fn(X, actual): ----> 6 preds = layer(X, theta_tch, lambda_tch)[0] 7 mse_per_example = (preds - actual).pow(2).mean(axis=1) 8 return mse_per_example.mean() ~/miniconda3/envs/myenv1/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/cvxpylayers/cvxpylayers/torch/cvxpylayer.py in forward(self, solver_args, *params) 150 info=info, 151 ) --> 152 sol = f(*params) 153 self.info = info 154 return sol ~/cvxpylayers/cvxpylayers/torch/cvxpylayer.py in forward(ctx, *params) 224 p_shape = p.shape if batch_size == 0 else p.shape[1:] 225 if not np.all(p_shape == param_order[i].shape): --> 226 raise ValueError( 227 "Inconsistent parameter shapes passed in. " 228 "Expected parameter {} to have non-batched shape of " ValueError: Inconsistent parameter shapes passed in. Expected parameter 0 to have non-batched shape of (100,) but got torch.Size([339]).
st178216
Tensor shape incorrect, evidently: to have non-batched shape of (100,) but got torch.Size([339])
st178217
How do I reshape the data to provide the code with the correct format? @ptrblck @smth I was wondering if you would be able to assist. Thank you in advance. Andrew
st178218
I’m unfortunately not familiar with cvxpylayers, but as @iffiX mentioned, the input shape of Xval and/or Yval seems to be wrong. Based on your code it seems tou are trying to pass these tensors directly to the loss function, i.e. without a DataLoader. Could the batch size be missing? If not, I would recommend to check the shapes and make sure which shapes are expected.
st178219
Ah, thank you @ptrblck!! That was very helpful!! I'm still getting used to PyTorch :), I just assumed I could pass the dataset straight in. The example below works, but I am a little unsure about how to fit it with new data.

github.com cvxgrp/cvxpylayers/blob/master/examples/torch/signal_denoising.ipynb

Do you have any thoughts about how I could fit new data with the current example?

Kind regards,
Andrew
st178220
So my next plan was to try a few different ways to format the data correctly. I have not tried this but please observe the below.

import yfinance as yf

data = yf.download("SPY", start="2008-01-01", end="2017-04-30")['Close']
dd = data.to_numpy()

s = int(np.ceil(len(dd)/100))
s

def strided_app(a, L, S):  # Window len = L, Stride len/stepsize = S
    nrows = ((a.size-L)//S)+1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S*n, n))

ddd = strided_app(dd, 100, s)

def f(x):
    # return math.sqrt(x)
    return torch.from_numpy(np.array(x))

ffxx = list(map(f, ddd))
torch.stack(ffxx)
st178221
Based on the previous post it seems that your code is already working with another dataset and the error is raised, if you are trying to use a new dataset? If that’s the case, could you print the shape of the input tensor as well as some intermediate tensors? Since your code is not executable, I can just speculate what might be wrong.
st178222
Goal: distributed training with dynamic machine location, where a worker's device location can change. E.g., in a 4-worker parameter-server setting, two workers run on Machine 1 for the first 2 epochs, but after 2 epochs they are supposed to run on Machine 2.

I am assuming that, since the workers' machine changes after 2 epochs, dist.init_process_group() needs to be re-initialized. However, re-initializing raises this error:

RuntimeError: trying to initialize the default process group twice

What's the correct way to update the process group?

Solution idea: is there any way to delete the initialized process group? Then, before re-initialization using dist.init_process_group(), I could delete the prior process group and avoid the issue.
st178223
Hey @adarsh-kr,

There is a destroy_process_group API to clear the default ProcessGroup instance. If you would like to create multiple ProcessGroup instances, you can do so using the new_group API.

https://github.com/pytorch/pytorch/blob/05f00532f52883d29a08d96b2961042cc41573ab/torch/distributed/distributed_c10d.py#L530-L577
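A minimal sketch of the tear-down/re-init pattern described above (the backend, address, port and world size are placeholders, and all ranks would need to run these steps together):

import os
import torch.distributed as dist

def reinit_process_group(rank, world_size, master_addr, master_port, backend='gloo'):
    # Tear down the existing default group, if any, then create a fresh one.
    if dist.is_initialized():
        dist.destroy_process_group()
    os.environ['MASTER_ADDR'] = master_addr
    os.environ['MASTER_PORT'] = str(master_port)
    dist.init_process_group(backend=backend, init_method='env://',
                            world_size=world_size, rank=rank)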
st178224
Hi, does anyone know how to stop / terminate an all-reduce call properly when it doesn't get a reply from the other processes? Just to explain my question, please see the sample code below. I have 4 processes divided into two sub-groups (group1 and group2), and a shared Queue with 5 elements. Each process tries to get one element from the queue in the while loop until the queue becomes empty. Inside the while loop, each process does all_reduce with its "neighbor" in the same sub-group. The problem is that when one of the processes gets the last element, the shared queue is now empty and its "neighbor" process has already exited the while loop, so it hangs and waits forever for the all_reduce reply. Is there any way to set a timeout for the all_reduce call? Or some other way to solve this situation? Thanks. Please see the code attached below.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(rank, a, q):
    dist_init_method = 'tcp://{master_ip}:{master_port}'.format(
        master_ip='127.0.0.1', master_port='12346')
    world_size = 4
    torch.distributed.init_process_group(backend="nccl",
                                         init_method=dist_init_method,
                                         world_size=world_size,
                                         rank=rank)
    group1 = dist.new_group([0, 1])
    group2 = dist.new_group([2, 3])

    tensor = torch.ones(1)
    device = torch.device('cuda', rank)
    tensor = tensor.to(device)

    while not q.empty():
        current_index = q.get()
        print(f'Process {rank} current index is: {current_index}')
        if rank == 0 or rank == 1:
            dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group1)
        else:
            dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group2)
        print('Rank ', rank, ' has data ', tensor[0])

if __name__ == "__main__":
    a = 1
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    for index in range(5):
        q.put(index)
    mp.spawn(run, args=(a, q), nprocs=4)
st178225
Hey @Yi_Zhang, you can set a timeout in init_process_group. For the NCCL backend, it also requires setting the NCCL_BLOCKING_WAIT env var to 1. More explanation can be found here: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group (search for NCCL_BLOCKING_WAIT).
st178226
Hi @mrshenli, thanks for your reply. I tried with "gloo"; it does terminate the process and throws exceptions. But with "nccl" it doesn't work, even though I added the following line in the bash script that runs the .py file:

export NCCL_BLOCKING_WAIT=1

I also tried adding the following line in the .py file itself, but that doesn't work either:

os.environ["NCCL_BLOCKING_WAIT"] = "1"
st178227
Hey @Yi_Zhang, did you set the env var within each spawned process (i.e., in run function) and before calling init_process_group?
st178228
Hi @mrshenli, yes, I did that for each process. I don't understand where the mistake is. Here is the sample code:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import time
import datetime

def run(rank, a, q):
    os.environ["NCCL_BLOCKING_WAIT"] = "1"
    print('Rank ', rank, 'NCCL_BLOCKING_WAIT is: ', os.environ["NCCL_BLOCKING_WAIT"])
    dist_init_method = 'tcp://{master_ip}:{master_port}'.format(
        master_ip='127.0.0.1', master_port='12346')
    torch.distributed.init_process_group(backend="nccl",
                                         init_method=dist_init_method,
                                         timeout=datetime.timedelta(seconds=5),
                                         world_size=4,
                                         rank=rank)
    group1 = dist.new_group([0, 1])
    group2 = dist.new_group([2, 3])

    tensor = torch.ones(1)
    device = torch.device('cuda', rank)
    tensor = tensor.to(device)

    while not q.empty():
        print('Rank ', rank, ' in the loop ')
        current_index = q.get()
        print(f'Process {rank} current index is: {current_index}')
        try:
            if rank == 0 or rank == 1:
                dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group1)
                print(f'Process {rank} all_reduce tensor is: {tensor}')
            else:
                dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group2)
                print(f'Process {rank} all_reduce tensor is: {tensor}')
        except Exception:
            pass
        print('Rank ', rank, ' has data ', tensor[0])

if __name__ == "__main__":
    a = 1
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    flag = ctx.Queue()
    for index in range(5):
        q.put(index)
    mp.spawn(run, args=(a, q), nprocs=4)

An update: I tried "gloo" without setting the timeout, and it can terminate properly. I'm wondering whether "gloo" takes care of this situation by itself, so it doesn't have anything to do with the timeout?
st178229
I confirm that I can reproduce this locally. Hey @osalpekar, do you know if we missed anything here? Would I be correct to assume the following code is expected to abort the op in this case?

https://github.com/pytorch/pytorch/blob/ecb88c5d11895a68e5f20917d27a0debbc0f0697/torch/lib/c10d/ProcessGroupNCCL.cpp#L301-L335
st178230
@mrshenli - Yes, that is the code block that should abort the op if it times out.

@Yi_Zhang - There is a workaround. The all_reduce call actually returns an async work handle. You can capture that handle and wait on it as such:

work = dist.all_reduce(..., async_op=True)
work.wait(SOME_TIMEOUT)

If the all_reduce call times out, then the wait call will throw an exception. In the meantime, let me try to repro from your most recent code snippet.
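A sketch of how that workaround could look inside the loop above. SOME_TIMEOUT is assumed to be a datetime.timedelta here, and whether wait() accepts a timeout argument depends on the PyTorch version, so treat this as something to verify rather than a guaranteed recipe:

import datetime
import torch.distributed as dist

def safe_all_reduce(tensor, group, timeout=datetime.timedelta(seconds=5)):
    # Launch the collective asynchronously and bound the wait time,
    # so a missing peer raises instead of hanging the process forever.
    work = dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group, async_op=True)
    try:
        work.wait(timeout)
        return True
    except Exception as e:
        print(f'all_reduce timed out or failed: {e}')
        return False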
st178231
@mrshenli @osalpekar, thanks for your replies. I found another way to avoid this situation without using a timeout: I just added some checks to make sure the pairs of processes terminate after the same number of rounds. But I'm still curious whether you have an answer for the timeout issue. Thanks.
st178232
Background: I’m doing a distributed PPO (basically gathering data from several worker and training on one learner) Issue: Data collection works fine but when I train the network with lines below self.critic.optimizer.zero_grad() batch_states_values = self.critic.forward(batch_states) print('crtitic batch_states_values done') critic_loss = F.mse_loss(batch_states_values, batch_REFs) print('crtitic critic_loss done') critic_loss.backward() print('crtitic loss backward done') self.critic.optimizer.step() print('crtitic step done') And the output shows: crtitic batch_states_values done crtitic critic_loss done crtitic loss backward done So it appears to be that the program hangs after the loss backward. What could be the cause? It works fine on my windows workstations but hangs when I run it on a linux machine
st178233
Hey @Lewis_Liu, which part of the program is distributed? Since torch.distributed does not support Windows yet, I assume the working version of the program on Windows does not use distributed training?
st178234
The training isn’t distributed and torch.distributed isn’t used. By distributed I mean the workers used to collect data are distributed and the network params are send from trainer to these workers through mp.queue. Once the data are collected and trainer starts to train, the workers stop working so I suppose there’s no interaction between the workers and the trainer. So what appears really strange to me is that the backward is done but step is not. I’m using the standard optim.Adam
st178235
Are you using any CUDA ops? If so, could you please add a torch.cuda.synchronize() before every print to make sure that the preceding ops are indeed done instead of still pending in the CUDA stream?
st178236
Hi Li, I just added the line and the prints are the same. FYI, after it hangs there, I killed the program and it showed this. I'm not sure if this is helpful:

Traceback (most recent call last):
  File "test.py", line 23, in <module>
    p.join()
Process Process-2:
  File "/apps/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/multiprocessing/process.py", line 140, in join
Process Process-1:
    res = self._popen.wait(timeout)
  File "/apps/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/multiprocessing/popen_fork.py", line 48, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/apps/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
st178237
That’s weird, is there a way that we can reproduce this locally, so that we can help debug?
st178238
Hi Li, thanks for the help. I'm afraid it wouldn't be easy to do. I'll try to find a way to convert it so it can be shared. Meanwhile, what would you say might be the cause? Any chance this could be the use of mp.Queue or mp.Value in the Linux environment? If that is likely, I can try to avoid them or alter the way I use them.
st178239
Are you using torch.multiprocessing.SimpleQueue? If yes, when sharing CPU tensors, does the program guarantee that the owner of the shared data object is still alive when the receiving process uses it? And are you using spawn to create processes? For CUDA tensors:

    Unlike CPU tensors, the sending process is required to keep the original tensor as long as the receiving process retains a copy of the tensor. The refcounting is implemented under the hood but requires users to follow the next best practices.
st178240
I used spawn by adding the line

mp.set_start_method("spawn", force=True)

I used torch.multiprocessing.Queue instead of SimpleQueue. The owners are always alive.
st178241
mrshenli:

    Unlike CPU tensors, the sending process is required to keep the original tensor as long as the receiving process retains a copy of the tensor. The refcounting is implemented under the hood but requires users to follow the next best practices.

Interesting fact: on the Linux cluster, it works if I change the device to CPU instead of using a GPU. But on Windows, both devices work.
st178242
Have you solved this problem? I have met the same problem when running with multiple machines.
st178243
Not completely solved, but I was able to find what the issue was and found a way around it. The issue is that the network was somehow shared with other processes. So my practical suggestion would be to check everything that might lead to your network being shared or accessed elsewhere, e.g. a mistake in using copy.copy or deepcopy to send the state_dict.
st178244
Suppose that I have a big model that cannot fit into one GPU, so I have to split the model across different GPUs. I'm wondering whether there's a way to make PyTorch treat multiple GPUs as a single one, so that we don't have to split the model manually.
st178245
st178246
I have heard of nn.DataParallel for using two or more GPUs. Taking multiple GPUs and using them as one is a great question; I am also looking for an answer to this.
st178247
I’m wondering whether there’s a way to make PyTorch take multiple gpu as a single one, so that we don’t have to split model manually. Currently, this is not available, we are working on adding a model partitioning feature. Manually splitting model shouldn’t be too hard with today’s PyTorch API, you just need to append .to(device) to certain layers and outputs. See this tutorial: https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html 1
st178248
My code runs much faster on my single GTX 1060 than on the cluster, which has 2 GTX 1080 Ti. This is strange because a few parts of the code run faster on the cluster, although most parts run slower. For comparison, enumerating through the dataloader takes ~9 seconds on my local machine while it takes 550 seconds on the cluster. Also, calculating loss + backprop takes ~1 second on my local machine, while it takes 10 seconds on the cluster. The code is from this paper: https://github.com/Philip-Bachman/amdim-public
st178249
How did you measure the timing in both applications? If the data loading is 55 times slower on the “cluster”, I would recommend to narrow down this issue first. E.g. are you storing the data on a network drive or on a local SSD in both cases?
st178250
I just called time.time() at different sections and computed their differences for timing. Can you clarify how to narrow down the issue? The data is stored on my local SSD, and I’m not exactly sure where on the network it’s stored. The original code used ImageFolder to load the data, and I tried changing it to a standard dataset, but this did not help
st178251
If you are timing CUDA operations, you would have to synchronize the code via torch.cuda.synchronize() before starting and stopping the timer, due to the asynchronous execution of CUDA kernels.

mhong94:

    The data is stored on my local SSD, and I'm not exactly sure where on the network it's stored.

I don't understand this explanation. Is the data stored on the SSD in your workstation or on a server in your network (or both)? In the latter case you would introduce the network latency into the training, so I would recommend to store the data on a local SSD.

mhong94:

    The original code used ImageFolder to load the data, and I tried changing it to a standard dataset, but this did not help

What do you mean by "standard Dataset"? Did you write a custom Dataset?
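For reference, a minimal timing pattern that accounts for the asynchronous CUDA execution (a sketch; model, criterion and the input tensors are placeholders):

import time
import torch

def timed_forward_backward(model, x, target, criterion):
    torch.cuda.synchronize()           # make sure previously queued kernels are done
    start = time.time()
    out = model(x)
    loss = criterion(out, target)
    loss.backward()
    torch.cuda.synchronize()           # wait for the backward kernels to finish
    return time.time() - start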
st178252
When I run on my local machine, I’m accessing the data on my local SSD. I copy all code/data over to a storage on my kubernetes pod, then run the code there, using the cluster’s GPUs. I think that in both cases, they are accessing the data from their local SSDs.
st178253
When I run a model with DDP with 4 spawned processes, each process taking one GPU, I notice that nvidia-smi lists a lot of spawned processes on the GPU used by the main process. Why are so many additional processes spawned? I am also using torch==1.4.0 and CUDA 10.1.243, which is installed through my conda environment.

[screenshot: gpu_ddp, 642×642, nvidia-smi output showing the extra processes]
st178254
See the discussion in `torch.distributed.barrier` used in multi-node distributed data-parallel training. For more detailed explanations, cc @mrshenli
st178255
iffiX:

    For more detailed explanations, cc @mrshenli

I took a look at that, but I am still a little confused. Is it a back-end issue, or is it normal behaviour?
st178256
So I played around a little more by breaking my pipeline to a very simple training loop, just to ensure I wasn’t doing anything wrong. I then cloned my conda environment and updated the torch and cuda to the versions listed on the Getting Started page (the most up-to-date) and it seems to fix the issue. So I’m not sure if it was a cuda, nccl (in the cudatoolkit), or the updated torch, but updating does fix it.
st178257
Those (~500MB CUDA memory consumption) look like CUDA context. It seems all processes somehow used CUDA:0 somewhere. It could be caused by 3rd-party libraries or calls like torch.cuda.empty_cache(). If you don’t want to debug the root cause of it, you can avoid it by setting CUDA_VISIBLE_DEVICES env var to make sure that each process only sees one GPU.
st178258
I updated both my torch and cudatoolkit (with conda) to the newest versions which seemed to fix the problem. I’m not sure if it was cuda, torch or how torch interacted with cuda, but updating seems to have fixed the bug. I’m not sure if this improves anything though, but the extra processes don’t show up anymore.
st178259
Hello all, can anyone tell me how to install NumPy with a Python distribution? I have already downloaded the Anaconda Python distribution, but I still don't know what the next step is. Can anyone suggest a step-by-step process?
st178260
I think this question is not really relevant here, but anyway: you can use Anaconda to download all the packages required for data science at once. Go to https://www.anaconda.com/products/individual, then download and install. You can find a lot of tutorials and videos about installing Anaconda.
st178261
I had to install Anaconda and got stuck in a few places, but most of my requirements were resolved with the help of this post. Thanks for the reference, which was useful to me.
st178262
Each EC2 p3.2xlarge instance has 8 CPUs and 1 GPU. I'm allowed a maximum of 16 CPUs on AWS at any given time, so I've been doing distributed training over 2 GPUs. However, I've just noticed that each p2.xlarge instance has 4 CPUs and 1 GPU. This means that I could have 4 of these, which means training over 4 GPUs. Would this make training faster? What factors should be taken into consideration? 4 instances do take more time to set up than 2. Cost is not an issue. I'm doing mixed-precision distributed training with apex. Thanks
st178263
qap:

    This means that I could have 4 of these, which means training over 4 GPUs. Would this make training faster?

This is possible. Hope Figure 9 in this paper can offer some insight: https://arxiv.org/pdf/2006.15704.pdf

    What factors should be taken into consideration?

If the GPUs are the same, the network bandwidth is one of the dominating factors of training speed.

    4 instances do take more time to set up than 2.

This is one-time setup overhead, instead of per-iteration overhead, right? If so, it should be fine.
st178264
Hi, I’m trying build pytorch from source and need to make use of NCCL 2.7.6 or higher. The current pytorch is built using 2.7.3. Is there a way to do this? Thank you
st178265
Solved by mrshenli in post #4 Hey @Purvak-L, you can cd to the NCCL submodule in the third party folder in https://github.com/pytorch/pytorch/tree/master/third_party/nccl and manually update the nccl module there. Another option is to install 2.7.6 locally and set USE_SYSTEM_NCCL=1 when building PyTorch. See this issue: https:/…
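A rough sketch of the second option (install a newer NCCL system-wide, then point the PyTorch build at it). The install prefix is a placeholder, and USE_SYSTEM_NCCL=1 is the flag named in the answer above; the NCCL_* path variables are an assumption based on how the build's NCCL detection usually works, so double-check them against the build scripts:

# Assumes NCCL >= 2.7.6 headers and libs are already installed, e.g. under /usr/local/nccl
export USE_SYSTEM_NCCL=1
export NCCL_ROOT_DIR=/usr/local/nccl        # hypothetical install prefix
export NCCL_INCLUDE_DIR=$NCCL_ROOT_DIR/include
export NCCL_LIB_DIR=$NCCL_ROOT_DIR/lib

cd pytorch
python setup.py install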
st178266
You can potentially clone your own PyTorch repo and upgrade NCCL in there only; after that you can recompile from source. Would that work?
st178267
I did clone the PyTorch repo and built from it. Can you point me to where to make that update? I did check cmake/modules/FindNCCL.cmake.