st177968
Can you add some profiling to see which part is the bottleneck? One reason might be that PyTorch ops already launch many threads, so it is possible that a single process already saturates the CPU.
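For example, a minimal profiling sketch (model and batch here are placeholders for your own training step):

import torch
from torch.autograd import profiler

print("intra-op threads:", torch.get_num_threads())  # how many threads each process already uses

# warm-up so one-time costs (kernel selection, caching allocations) don't skew the numbers
for _ in range(3):
    model(batch)

with profiler.profile(use_cuda=torch.cuda.is_available()) as prof:
    loss = model(batch).sum()
    loss.backward()

print(prof.key_averages().table(sort_by="cpu_time_total"))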
st177969
When I use the code "net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)" to replace BN with SyncBatchNorm, the code deadlocks (screenshot of the hang omitted). It seems to be a problem with the dataloader; the relevant code was attached as screenshots. Could anyone help? Thanks.
st177970
The difference between BatchNorm and SyncBatchNorm is that SyncBatchNorm uses torch.distributed.all_reduce in the backward pass. Two questions: What args and env vars did you pass to init_process_group? In your program, is there any other code that launches communication ops?
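For reference, a minimal sketch of the kind of setup being asked about (address, port, and backend are hypothetical values; yours may differ):

import os
import torch
import torch.distributed as dist

def setup(rank, world_size):
    # these values (or the equivalent init_method/env vars) must match on every rank
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # hypothetical address
    os.environ["MASTER_PORT"] = "29500"      # hypothetical free port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

# after init_process_group, convert BN layers and wrap the model:
# net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)
# net = torch.nn.parallel.DistributedDataParallel(net.cuda(rank), device_ids=[rank])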
st177971
Hi All, I am a new user of torch. I would like to develop a dataloader module to feed data to torch C++ API. My focus is the C++ API. My dataset is distributed among multiple compute nodes. Each node owns a disjoint portion of data WRT any other nodes. Let’s say I have 5 nodes. They own 200,200,200,300,100 examples of data respectively. Is there a way to achieve the same training result from training on a single node (the node owns all 1000 rows of data) or on a 3-node cluster (the data distribution is different from 5-node cluster case)? Is there a way to control/know how torch generates the indices when calling customDataset::get(index) in training and in scoring? Are the indices sequential (1,2,3,…) or random? If I deploy multiple workers to load data in a node, is there a way to know the caller’s thread id from customDataset::get()? auto data_loader = torch::data::make_data_loader(std::move(dataset), torch::data::DataLoaderOptions().batch_size(kBatchSize).workers(2)); Thank you very much.
st177972
I am quite a pytorch newby, I hope this is the right place to post my issue. I am trying to train a transformer model with model parallelism following closely the megatron example from fairseq (just using complete transformer model instead gpt, same options including --fp16). GitHub pytorch/fairseq 4 Facebook AI Research Sequence-to-Sequence Toolkit written in Python. - pytorch/fairseq My setup is: two nodes with 6 GPUs (Titan RTX) each. Pytorch 1.6 Cuda 10.1.243 Ubuntu 18.04 LTS The model trains, however only with 4 GPUs per node. When switching to 6 GPUs per node (plus tweaking the model/dictionary to ensure divisibility by number of GPUs) I get the following error on the second node (right when training should start): terminate called after throwing an instance of ‘c10::Error’ what(): CUDA error: misaligned address Exception raised from create_event_internal at /pytorch/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fc06b9c91e2 in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libc10.so) frame [#1]: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7fc06bc17f92 in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fc06b9b79cd in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libc10.so) frame [#3]: std::vector<at::Tensor, std::allocatorat::Tensor >::~vector() + 0x5c (0x7fc0b3262d1c in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame [#4]: torch::autograd::Engine::evaluate_function(std::shared_ptrtorch::autograd::GraphTask&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptrtorch::autograd::ReadyQueue const&) + 0x16b2 (0x7fc0a5d8f6b2 in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so) frame [#5]: torch::autograd::Engine::thread_main(std::shared_ptrtorch::autograd::GraphTask const&) + 0x451 (0x7fc0a5d8ffa1 in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so) frame [#6]: torch::autograd::Engine::thread_init(int, std::shared_ptrtorch::autograd::ReadyQueue const&, bool) + 0x89 (0x7fc0a5d88119 in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so) frame [#7]: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptrtorch::autograd::ReadyQueue const&, bool) + 0x4a (0x7fc0b352834a in /secondary/thies/.virtualenvs/pytorch-1.6/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame [#8]: + 0xbd6ef (0x7fc0b46826ef in /usr/lib/x86_64-linux-gnu/libstdc++.so.6) frame [#9]: + 0x76db (0x7fc0b85746db in /lib/x86_64-linux-gnu/libpthread.so.0) frame [#10]: clone + 0x3f (0x7fc0b88ad88f in /lib/x86_64-linux-gnu/libc.so.6) It seems there is problem with the c10 library, however I get exactly the same error when adding the --ddp-backend=no_c10d option. When removing the fp16 option the model trains fine on the c10 backend.
st177973
Hey @Thies1006, as developers watching this channel have limited knowledge about fairseq models, it’s hard to tell what went wrong. Have you tried posting this as an issue in fairseq repo?
st177974
Hi! Sorry for the delay. Yes, I posted this as well in the fairseq repo. It seems that when compiling pytorch from sources, this error disappears.
st177975
Hi, I am using the code from the IMPLEMENTING A PARAMETER SERVER USING DISTRIBUTED RPC FRAMEWORK tutorial, which should be straightforward to implement. However, I am receiving a TypeError (screenshot omitted) while executing dist_autograd.backward(cid, [list]) and cannot get rid of it after trying a lot. Is this because the tensors need to be scalar? Your help will be much appreciated.
st177976
Hey @Khairul_Mottakin, it looks like you are using PyTorch v1.4? The cid arg was added in v1.5 IIRC. Could you please try upgrading to the latest release, v1.6? cc @rvarm1
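For reference, on v1.5+ the backward call takes the context id, roughly like this (a sketch, not the full tutorial code; model, loss_fn, and opt are placeholders):

import torch.distributed.autograd as dist_autograd
from torch.distributed.optim import DistributedOptimizer

with dist_autograd.context() as context_id:
    output = model(inputs)            # forward pass through the remote/RRef modules
    loss = loss_fn(output, target)
    dist_autograd.backward(context_id, [loss])  # context id plus a list of loss tensors
    opt.step(context_id)              # opt being a DistributedOptimizer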
st177977
Yah, it is working locally after updating the PyTorch version. Many thanks @mrshenli and @rvarm1 for your contribution and for helping us implement distributed systems in our own ways. When I start training from a remote worker, it says "RuntimeError: […/third_party/gloo/gloo/transport/tcp/pair.cc:769] connect [127.0.1.1]:14769: Connection refused". The same issue had been raised by @Oleg_Ivanov here; I am not sure whether it has been solved. Should I use os.environ['GLOO_SOCKET_IFNAME'] = 'nonexist'? Can you suggest any tutorial for building such a small cluster (2-3 remote workers with 1 master PS) to implement the parameter server using PyTorch RPC? Thank you very much once again.
st177978
Hey @Khairul_Mottakin, can you try printing out GLOO_SOCKET_IFNAME, MASTER_ADDR, and MASTER_PORT immediately before where init_rpc is called on all processes? And what args did you pass to init_rpc?
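A minimal sketch of that check (worker name, rank, and world size below are placeholders):

import os
import torch.distributed.rpc as rpc

for key in ("GLOO_SOCKET_IFNAME", "MASTER_ADDR", "MASTER_PORT"):
    print(key, "=", os.environ.get(key))

rpc.init_rpc("worker0", rank=0, world_size=2)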
st177979
Hi all, I have spent the past day trying to figure out how to use multiple GPUs. In theory, parallelizing models across multiple GPUs is supposed to be as as easy as simply wrapping models with nn.DataParallel. However, I have found that this does not work for me. To use the most simple and canonical thing I could find for proof of this, I ran the code in the Data Parallelism tutorial 7, line for line. The output is as follows - it is the same output that I get every time I try to run Pytorch with multiple GPUs: --------------------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) <ipython-input-3-0f0d83e9ef13> in <module> 1 for data in rand_loader: 2 input = data.to(device) ----> 3 output = model(input) 4 print("Outside: input size", input.size(), 5 "output_size", output.size()) /usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) /usr/local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs) 141 return self.module(*inputs[0], **kwargs[0]) 142 replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) --> 143 outputs = self.parallel_apply(replicas, inputs, kwargs) 144 return self.gather(outputs, self.output_device) 145 /usr/local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs) 151 152 def parallel_apply(self, replicas, inputs, kwargs): --> 153 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) 154 155 def gather(self, outputs, output_device): /usr/local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices) 73 thread.start() 74 for thread in threads: ---> 75 thread.join() 76 else: 77 _worker(0, modules[0], inputs[0], kwargs_tup[0], devices[0]) /usr/local/lib/python3.6/threading.py in join(self, timeout) 1054 1055 if timeout is None: -> 1056 self._wait_for_tstate_lock() 1057 else: 1058 # the behavior of a negative timeout isn't documented, but /usr/local/lib/python3.6/threading.py in _wait_for_tstate_lock(self, block, timeout) 1070 if lock is None: # already determined that the C code is done 1071 assert self._is_stopped -> 1072 elif lock.acquire(block, timeout): 1073 lock.release() 1074 self._stop() KeyboardInterrupt: Note that it hangs - I have to keyboard interrupt to stop. And the error is the same every time - some sort of deadlock is entered into, although I do not understand how or why. Some information about my system: Operating System: Ubuntu 16.04 GPUS: 4 1080tis Pytorch version: 1.01 CUDA version: 10.0 NVIDIA Driver: 415 I have tried everything from only having a specific permutation of my GPUs be visible to CUDA to reinstalling everything related to CUDA but can’t figure out why I cannot run with multiple GPUs. If anyone could point me in the right direction, it would be greatly appreciated.
st177980
@ptrblck was this issue ever solved? I have seen a lot of threads on the PyTorch forums regarding NCCL deadlocks, but I didn't find any solution.
st177981
I was not able to solve this issue, and my rig is currently disassembled and across the country so no way I can be of much help unfortunately.
st177982
It seems this issue was not solved and we don't have a code snippet to reproduce it. I would generally recommend using the latest stable version (and trying out the nightly, if possible) with the latest CUDA, NCCL, etc. versions. If the error is still observable, an executable code snippet to reproduce the issue would be very helpful.
st177983
I tried the pytorch-nightly build, which uses NCCL 2.7.6, and I have not faced the deadlock again yet. Thanks @ptrblck. @Jeffrey_Wang you might want to try that out. @ptrblck what could be the issue with the previous version?
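For reference, one way to check which NCCL version a CUDA build of PyTorch ships with (output format may vary by release):

import torch
print(torch.__version__)
print(torch.cuda.nccl.version())  # NCCL version bundled with this binary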
st177984
iamshnik: what could be the issue with the previous version?
Nothing we are aware of, i.e. we haven't seen deadlocks in NCCL 2.4 before.
st177985
Hi, I come across a problem when I try to train my model in a distributed way. In summary, the problem is how to solve the memory leak in rank 0. When I run the code in single gpu, it works well and occupies 10.3G GPU memory. My GPU is 2080ti 11GB. But when I run the code in the distributed way, OOM occurred. I build a class named Trainer, then initiate dataset and model inside. The rough code is showed below. The process contains several backward operations. class Trainer(): def __init__(self): self.data_initial() self.model_initial() self.train() def data_initial(self): source_data = DataSet(.....) source_sampler = data.distributed.DistributedSampler(source_data, seed=1234) source_dataloader = data.DataLoader(source_data, batch_size=..., num_workers=4, pin_memory=False, sampler=source_sampler) self.source_data = enumerate(source_dataloader) target_data = DataSet(.....) target_sampler = data.distributed.DistributedSampler(target_data, seed=1234) target_dataloader = data.DataLoader(source_data, batch_size=..., num_workers=4, pin_memory=False, sampler=target_sampler) self.target_data = enumerate(target_dataloader) def model_initial(self): # build backbone rank = dist.get_rank() self.backbone = ResNet().cuda(rank) self.backbone = DDP(self.backbone, device_ids=[rank]) # restore_part the parameter succeeds restore. I have checked it. self.backbone.train() # classifier self.classifier = Classifier(...).cuda(rank) self.classifier = DDP(self.classifier, device_ids=[rank]) self.classifier.train() # optimizer self.backbone_optimizer = optim.SGD(self.backbone.parameters(), ...) self.backbone_optimizer.zero_grad() self.classifier_optimizer = optim.Adam(self.classifier.parameters(), ...) self.classifier_optimizer.zero_grad() def train(self): # pytorch prework rank = dist.get_rank() self.criterion = torch.nn.BCEWithLogitsLoss().cuda(rank) for i in range(1, self.config.num_steps): # get data c, batch = self.source_data.__next__() image, label, _, _ = batch _, batch = self.target_data.__next__() image_t, label_t, _, _ = batch self.step(i, image, label, image_t, label_t, loss_dic) gc.collect() def step(self, i, image, label, image_t, label_t, loss_dic): rank = dist.get_rank() self.backbone_optimizer.zero_grad() self.classifier_optimizer.zero_grad() # supervised learning for source image = Variable(image).cuda(rank) x = self.backbone(image) y1, _ = self.classifier(x) loss = self.criterion(y, label.long().cuda(rank)) loss.backward() for para in self.classifier.parameters(): para.requires_grad = False image_t = Variable(image_t).cuda(rank) x = self.backbone(image_t) _, y2 = self.classifier(x) label2 = Variable(...).cuda(rank) loss = self.criterion(y2, label2) loss.backward() ### # optimize the parameter self.backbone_optimizer.step() self.classifier_optimizer.step() ### # recycle variable #delete some variable in intermedia process del x ..... torch.cuda.empty_cache() My main function to call distributed training is showed below. def main(): world_size = torch.cuda.device_count() mp.spawn(sub_process, args=(world_size), nprocs=world_size, join=True) def sub_process(rank, world_size): set_up(rank, world_size) trainer = Trainer() cleanup() def set_up(rank, world_size): os.environ['MASTER_ADDR'] = '127.0.0.113' os.environ['MASTER_PORT'] = '12355' dist.init_process_group("nccl", rank=rank, world_size=world_size) def cleanup(): dist.destroy_process_group() So in process 0(rank 0), the first step training is ok, but in the second step. OOM occurred. 
-- Process 0 terminated with the following error: Traceback (most recent call last): File "/home/xx/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap fn(i, *args) ....... File "/home/xx/code/xxx.py", line 225, in step loss_aux = self.seg_criterion(out_aux, label.long().cuda(rank)) File "/home/xx/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/xx/code/multitask/utils/utils.py", line 44, in forward predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c) RuntimeError: CUDA out of memory. Tried to allocate 536.00 MiB (GPU 0; 10.73 GiB total capacity; 8.30 GiB already allocated; 198.56 MiB free; 9.01 GiB reserved in total by PyTorch) I know that maybe the memory leak occurred. But I have no experience about distributed one. How should I solve this problem
st177986
Does anyone know how to solve it? It's a little urgent. In summary, how do I solve the memory leak on rank 0?
st177987
Hey @JIE_LIU, which version of PyTorch are you using? If it is v1.6, I suspect it is due to the DDP comm bucket reconstruction algorithm temporarily boosting memory consumption and then hitting the OOM problem. cc @Yanli_Zhao
st177988
I don’t think there is a decent way to get rid of it in v1.6. One hacky solution might be introducing a tiny unused parameter in the model (e.g., self.unused = nn.Linear(1, 1)), and then set find_unused_parameters=True in DDP ctor, which would disable bucket rebuilt. github.com pytorch/pytorch/blob/v1.6.0/torch/csrc/distributed/c10d/reducer.cpp#L381 1 // Rebuild bucket only if 1) it is the first time to rebuild bucket 2) // unused_parameters_ is empty, currently it does not support when there are // unused parameters 3) this backward pass needs to run allreduce. Here, we // just dump tensors and their parameter indices into rebuilt_params_ and // rebuilt_param_indices_ based on gradient arriving order, and then at the // end of finalize_backward(), buckets will be rebuilt based on // rebuilt_params_ and rebuilt_param_indices_, and then will be broadcasted // and intialized. Also we only need to dump tensors and parameter indcies of // one replica. if (!has_rebuilt_bucket_ && unused_parameters_.empty() && index.replica_index == 0) { rebuilt_params_.push_back( replicas_[index.replica_index][index.variable_index]); rebuilt_param_indices_.push_back(index.variable_index); } // If there are model parameters that went unused when computing the model // output, they won't be part of the autograd graph, and won't receive // gradients. These parameters are discovered in the `prepare_for_backward` // function and their indexes stored in the `unused_parameters_` vector. This looks like a regression to me. @Yanli_Zhao has a some recent work to reduce DDP memory footprint and hopefully that can help. BTW, if possible, can you try the same script with PyTorch v1.5, it will help to confirm if bucket reconstruction is indeed the culprit.
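A sketch of the unused-parameter workaround mentioned above (layer sizes are placeholders; the extra layer is deliberately never used in forward):

import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(128, 10)  # stand-in for the real layers
        self.unused = nn.Linear(1, 1)       # tiny parameter that never receives gradients

    def forward(self, x):
        return self.backbone(x)             # self.unused is intentionally not called

def build_ddp_model(rank):
    model = MyModel().cuda(rank)
    # find_unused_parameters=True keeps DDP from rebuilding buckets, per the code above
    return DDP(model, device_ids=[rank], find_unused_parameters=True)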
st177989
I implemented my code based on https://github.com/pytorch/examples/blob/master/imagenet/main.py, and now I have 2 questions: 1. Is this code now the best way to do distributed training? 2. How should I write logs? In my opinion, we should find the main process and write from there, but can mp.spawn do this? Many thanks for your replies!
st177990
1. Is this code now the best way to do distributed training?
This depends on the application requirements. For the available tools, see this: https://pytorch.org/tutorials/beginner/dist_overview.html
2. How should I write logs? In my opinion, we should find the main process and write from there, but can mp.spawn do this?
You can use the rank to control which process does the logging, e.g.:

import torch.distributed as dist
if dist.get_rank() == 0:
    # do log

If you are using RPC, the counterpart API is get_worker_info.
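A slightly fuller sketch in the mp.spawn style (the init address, log file name, and world size are placeholders):

import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size,
                            init_method="tcp://127.0.0.1:23456")  # placeholder address
    # ... build model / DDP / training loop ...
    if rank == 0:
        with open("train.log", "a") as f:  # only the main process writes logs
            f.write("epoch finished\n")

if __name__ == "__main__":
    mp.spawn(worker, args=(4,), nprocs=4, join=True)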
st177991
I'm trying to figure out DistributedDataParallel (on a single machine; single GPU / process mode). I've got a few questions:
Will all launched processes do CUDA init? Is it safe? Should we explicitly set CUDA_VISIBLE_DEVICES per launched process to ensure that it can see only one device? (think fork/spawn global state issues with CUDA / OpenMP / pthreads etc.) Would it prevent RPC from using NCCL?
Is it sensible to do a forced parameter sync once in a while? Inherent GPU parallelism non-determinism can cause replica parameter divergence for some sensitive models (even if gradients are synchronized). How does one do it?
When does it make sense to do NUMA node pinning? CPU affinity pinning?
Thank you!
st177992
vadimkantorov: Will all launched processes do CUDA init? CUDA is lazily initialized. So if one process is not touching a specific device, corresponding CUDA context shouldn’t be created on that device. Is it safe? Even if multiple processes create context on the same device, it won’t crash, but each context consumes about 500MB CUDA memory, which is not desired. Should we explicitly set CUDA_VISIBLE_DEVICES per launched process to ensure that it can see only one device? This is the recommended way, as it also rules out the possibility that some third-party library accidentally create tensors on other devices. Would it prevent it from rpc using nccl? Which rpc are you referring to? Are you using DDP in conjunction with torch.distributed.rpc? Is it sensible to do a forced parameter sync once in a while? Inherent GPU parallelism non-determinism can cause parameter divergence that could cause replica parameter divergence for some sensitive models (even if gradients are sychronized)? If there are drifts, then, yes, manually sync once a while would help. How does one do it? You can use broadcast, see the code linked below. If you would like to calculate average, you can also use all_reduce and then divide by world_size. github.com pytorch/pytorch/blob/3806c939bda0df4c0f38b9d356c004f384535ac1/torch/nn/parallel/distributed.py#L404 2 self.bucket_bytes_cap = int(bucket_cap_mb * 1024 * 1024) # Sync params and buffers self._sync_params_and_buffers(authoritative_rank=0) self._ddp_init_helper() def _sync_params_and_buffers(self, authoritative_rank=0): module_states = list(self.module.state_dict().values()) if len(module_states) > 0: self._distributed_broadcast_coalesced( module_states, self.broadcast_bucket_size, authoritative_rank) def _ddp_init_helper(self): """ Initialization helper function that does the following: (1) replicating the module from device[0] to the other devices (2) bucketing the parameters for reductions When does it make sense to do NUMA node pinning? CPU affinity pinning? Hey @ptrblck, do you know what’s the best practice here?
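Going back to the manual sync question above, a sketch of such a re-sync (broadcast rank 0's parameters and buffers so all replicas match exactly; the step interval is arbitrary):

import torch.distributed as dist

def resync_params(module, src_rank=0):
    for tensor in module.state_dict().values():
        dist.broadcast(tensor, src=src_rank)

# inside the training loop, e.g.:
# if step % 1000 == 0:
#     resync_params(ddp_model.module)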
st177993
Thanks a lot @mrshenli for these responses! Look like it should be in some official guide @mrshenli I’m having troubles launching the most basic two-node distributed configuration (I checked, TCP connection with nc works ok). It doesn’t seem to respect the passed port. If you could take a look, it would be awesome! UPD: I created an issue to discuss this: https://github.com/pytorch/pytorch/issues/44544 9 import os import torch import argparse import torch.distributed as dist if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--backend', default='gloo') parser.add_argument('--rank', type=int, default=0) parser.add_argument('--world-size', type=int, default=1) args = parser.parse_args() dist.init_process_group(args.backend, init_method="env://", rank=args.rank, world_size=args.world_size) print(f"Master node {os.environ['MASTER_ADDR']}:{os.environ['MASTER_PORT']}. Rank {args.rank}. World size: {args.world_size}") test_tensor = torch.tensor(args.rank+1) if args.backend == 'nccl': test_tensor = test_tensor.cuda() dist.all_reduce(test_tensor, op=dist.ReduceOp.SUM) print(f"Test value: {test_tensor.item()}, expected: {sum(range(args.world_size+1))}") Test standalone Node 1 Input: GLOO_SOCKET_IFNAME=team0 MASTER_ADDR=10.81.13.54 MASTER_PORT=12345 python distributed_example.py Output: Master node 10.81.13.54:12345. Rank 0. World size: 1 Test value: 1, expected: 1 Node 2 Input: GLOO_SOCKET_IFNAME=team0 MASTER_ADDR=10.81.13.51 MASTER_PORT=12345 python distributed_example.py Output: Master node 10.81.13.51:12345. Rank 0. World size: 1 Test value: 1, expected: 1 Test disctibuted Gloo Node 1 Input: GLOO_SOCKET_IFNAME=team0 MASTER_ADDR=10.81.13.54 MASTER_PORT=12345 python distributed_example.py --rank 0 --world-size 2 Output: Traceback (most recent call last): File "distributed_example.py", line 11, in <module> dist.init_process_group('gloo', init_method="env://", rank=args.rank, world_size=args.world_size) File "/miniconda3/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 425, in init_process_group _default_pg = _new_process_group_helper( File "/miniconda3/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 499, in _new_process_group_helper pg = ProcessGroupGloo( RuntimeError: [/opt/conda/conda-bld/pytorch_1595629411241/work/third_party/gloo/gloo/transport/tcp/pair.cc:769] connect [10.81.13.51]:11169: No route to host Node 2 Input: GLOO_SOCKET_IFNAME=team0 MASTER_ADDR=10.81.13.54 MASTER_PORT=12345 python distributed_example.py --rank 1 --world-size 2 Output: Test distributed NCCL Node 1 Input: NCCL_SOCKET_IFNAME=team0 NCCL_DEBUG=INFO MASTER_ADDR=10.81.13.54 MASTER_PORT=12345 python distributed_example.py --rank 0 --world-size 2 --backend nccl Output: Master node 10.81.13.54:12345. Rank 0. World size: 2 srs-ds11:64570:64570 [0] NCCL INFO Bootstrap : Using [0]team0:10.81.13.54<0> srs-ds11:64570:64570 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). srs-ds11:64570:64570 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] srs-ds11:64570:64570 [0] NCCL INFO NET/Socket : Using [0]team0:10.81.13.54<0> NCCL version 2.4.8+cuda10.1 srs-ds11:64570:65266 [0] NCCL INFO Setting affinity for GPU 0 to 55,55555555 Node 2 Input: NCCL_SOCKET_IFNAME=team0 NCCL_DEBUG=INFO MASTER_ADDR=10.81.13.54 MASTER_PORT=12345 python distributed_example.py --rank 1 --world-size 2 --backend nccl Output: Master node 10.81.13.54:12345. Rank 1. 
World size: 2 srs-ds8:192240:192240 [0] NCCL INFO Bootstrap : Using [0]team0:10.81.13.51<0> srs-ds8:192240:192240 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). srs-ds8:192240:192240 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] srs-ds8:192240:192240 [0] NCCL INFO NET/Socket : Using [0]team0:10.81.13.51<0> srs-ds8:192240:192316 [0] NCCL INFO Setting affinity for GPU 0 to 55,55555555 srs-ds8:192240:192316 [0] include/socket.h:390 NCCL WARN Connect to 10.81.13.54<34419> failed : No route to host srs-ds8:192240:192316 [0] NCCL INFO bootstrap.cc:100 -> 2 srs-ds8:192240:192316 [0] NCCL INFO bootstrap.cc:326 -> 2 srs-ds8:192240:192316 [0] NCCL INFO init.cc:695 -> 2 srs-ds8:192240:192316 [0] NCCL INFO init.cc:951 -> 2 srs-ds8:192240:192316 [0] NCCL INFO misc/group.cc:69 -> 2 [Async thread] Traceback (most recent call last): File "distributed_example.py", line 17, in <module> dist.all_reduce(test_tensor, op=dist.ReduceOp.SUM) File "/miniconda3/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 936, in all_reduce work = _default_pg.allreduce([tensor], opts) RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1595629411241/work/torch/lib/c10d/ProcessGroupNCCL.cpp:518, unhandled system error, NCCL version 2.4.8
st177994
For the gloo case, I noticed that the master addresses for the two nodes are different. Is this a typo? node 1: MASTER_ADDR=10.81.13.54, node 2: MASTER_ADDR=10.81.13.51
st177995
No, it is not a typo. The addresses are different because that is the standalone test (one node / one worker) for a sanity check. There are three cases: a single-node test, a multi-node test with the gloo backend, and one with the nccl backend.
st177996
Hello, I am facing problems with torch distributed training in a multi-node setup. Apart from the specified master port, it looks like PyTorch tries to open random ports for inter-node communication. In my setup I only get a limited number of specified open ports. Is there some way I can force PyTorch to use only the given ports for inter-node communication? Thanks!
st177997
Our team is planning to use CPUs from multiple computers to do network prediction and data production, and then use a single GPU server to do network training. Is there any method in the torch.distributed package that can help us with this situation?
st177998
torch.distributed.rpc should be able to help. Here is a list of tutorials. The use case looks similar to the following two examples: https://pytorch.org/tutorials/intermediate/rpc_tutorial.html#distributed-reinforcement-learning-using-rpc-and-rref https://pytorch.org/tutorials/intermediate/rpc_async_execution.html#batch-processing-cartpole-solver 1
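A very rough sketch of that shape (worker names, sizes, and the produce function are all hypothetical):

import torch
import torch.distributed.rpc as rpc

def produce_batch():
    # runs on a CPU-only node: do the expensive prediction / data production here
    return torch.randn(32, 128)

def run_trainer(world_size):
    rpc.init_rpc("trainer", rank=0, world_size=world_size)
    futs = [rpc.rpc_async("producer%d" % i, produce_batch) for i in range(1, world_size)]
    batches = [f.wait() for f in futs]  # gather CPU-produced data
    data = torch.cat(batches).cuda()    # train on the single GPU server
    # ... forward / backward / optimizer step ...
    rpc.shutdown()

def run_producer(rank, world_size):
    rpc.init_rpc("producer%d" % rank, rank=rank, world_size=world_size)
    rpc.shutdown()  # producers just block here and serve RPCs until shutdown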
st177999
What is the implementation and performance differences between torch.distributed.launch and torch.multiprocessing.spawn?
st178000
torch.distributed.launch uses subprocess.Popen. The perf differences between these two are the typical multiprocessing vs subprocess differences. Besides that, torch.distributed.launch also configures several env vars and passes command line arguments to the distributed training script, e.g., RANK, LOCAL_RANK, WORLD_SIZE, etc. On the other hand, torch.multiprocessing.spawn is general multi-processing, not specifically tailored for torch.distributed. If you need multi-server distributed data parallel training, it might be more convenient to use torch.distributed.launch as it automatically calculates ranks for you through --nnodes, --node_rank, and --nproc_per_node. If you need single-server multi-GPU data parallel training, both should work the same.
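For reference, a side-by-side sketch (script name, address, and node counts are hypothetical):

import torch.distributed as dist
import torch.multiprocessing as mp

# Option A: torch.distributed.launch, run from the shell, e.g.
#   python -m torch.distributed.launch --nnodes=2 --node_rank=0 --nproc_per_node=4 train.py
# It computes RANK/WORLD_SIZE and passes --local_rank (or sets env vars with --use_env),
# so the script only needs:
def main_with_launch():
    dist.init_process_group("nccl", init_method="env://")

# Option B: torch.multiprocessing.spawn, where you compute the ranks yourself
def worker(local_rank, node_rank, nproc_per_node, world_size):
    rank = node_rank * nproc_per_node + local_rank
    dist.init_process_group("nccl", rank=rank, world_size=world_size,
                            init_method="tcp://10.0.0.1:23456")  # placeholder address

if __name__ == "__main__":
    # node 0 of a hypothetical 2-node, 4-GPU-per-node job (world_size = 8)
    mp.spawn(worker, args=(0, 4, 8), nprocs=4, join=True)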
st178001
Could PyTorch effectively use a server with different GPUs, for example 2x 2080 Ti + 2x 3080 Ti, for one distributed training process? Or would some drawbacks occur?
st178002
We had similar posts in this forum, which you could search for, and the main drawback would be that the slower GPUs would potentially create the bottleneck in your setup.
st178003
Hello, I have a machine with a 2080ti and 1080ti. is it possible to do distributed data-parallel on two different types of GPUs?
st178004
Sure, PyTorch doesn't care about that. But please take care with the GPU memory allocation.
st178005
Hi, I have a question about NVIDIA apex. I know the apex package creates one process per GPU (screenshot omitted), so each process is referred to by the local_rank variable in my code. I want to save the best accuracy from each process, and my code looks like the following when using 2 GPUs:

for epochs in range(0, args.epoch):
    train()
    test()
    ...
    save_best()

def save_best():
    # 0'th gpu
    if args.local_rank == 0:
        is_best = test_acc > best_acc
        best_acc = max(test_acc, best_acc)
        if is_best:
            torch.save(...)
    # 1'th gpu
    if args.local_rank == 1:
        is_best = test_acc > best_acc
        best_acc = max(test_acc, best_acc)
        if is_best:
            torch.save(...)

After 1 epoch I can verify each accuracy: the 0'th GPU's accuracy is 19.906 and it is saved to the 0'th weight file; the 1'th GPU's accuracy is 19.269 and it is saved to the 1'th weight file. But when I load a weight file and apply it to the network, the test accuracy is not equal to either result: I get 19.572 (0'th file) and 19.561 (1'th file). Surprisingly, when I use 1 GPU for training, the situation mentioned above does not happen (the test accuracy while training equals the accuracy obtained from the loaded weight file). I can't understand why this happens. Can anybody help?
st178006
Solved by ptrblck in post #2 We recommend to use the native mixed-precision training utility via torch.cuda.amp instead of apex, as it should cover more tested use cases. More information can be found here.
st178007
We recommend using the native mixed-precision training utility via torch.cuda.amp instead of apex, as it should cover more tested use cases. More information can be found here.
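A minimal native-amp sketch (assuming an existing model, optimizer, criterion, and loader):

import torch
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
for inputs, targets in loader:
    optimizer.zero_grad()
    with autocast():
        outputs = model(inputs.cuda())
        loss = criterion(outputs, targets.cuda())
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()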
st178008
I am using nn.DataParallel and I have an error inside the embedding layer that said “RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:403”. My network architecture is the following class WordEmbeddingNetwork(nn.Module): def __init__(self, word_embeddings_path, word2id, pad_token, unk_token, freeze=False): super(WordEmbeddingNetwork, self).__init__() self.pad_token = pad_token self.unk_token = unk_token self.word2id = word2id self.embedding_file = word_embeddings_path.split('/')[-1] self.load_embeddings_from_file(word_embeddings_path) embedding_weights = self.get_embeddings_weights(OOV_corrections) num_embeddings, self.embedding_dim = embedding_weights.shape self.embedding_layer = nn.Embedding(num_embeddings, self.embedding_dim) self.embedding_layer.load_state_dict({'weight': embedding_weights}) if freeze: for p in self.embedding_layer.parameters(): p.requires_grad = False def forward(self, batch): print(batch.device) print(self.embedding_layer.weight.device) emb = self.embedding_layer(batch) return emb class MyNet(nn.Module): _HIDDEN_SIZE = 300 def __init__(self, word_embeddings_path, word2id, pad_token, unk_token, seed, device='cpu'): torch.manual_seed(seed) super(MyNet, self).__init__() self.device = device self.word_embeddings_layer = WordEmbeddingNetwork(word_embeddings_path=word_embeddings_path, word2id=word2id, pad_token=pad_token, unk_token=unk_token) def __init__(self, utterances, ...): self.word_embedding_layer(utterances) .... I don’t understand why embedding layer and the given input are on different gpus. Can you help me?
st178009
Solved by Seo in post #3 Finally I have solved. nn.DataParallel moves to the correct gpu only tensors, if you have list of tensors as input of your model forward() method, you need to move one by one tensors in the list on the correct gpu. The correct gpu can be retrieved by accessing the .device attribute of a tensor autom…
st178010
I tried to remove the embeddings and put them on cpu. Now I have the same error on LSTM, it seems to me that nn.DataParallel moves things wrongly from one gpu to another RuntimeError: Input and parameter tensors are not at the same device, found input tensor at cuda:0 and parameter tensor at cuda:1
st178011
Finally I have solved it. nn.DataParallel only moves tensors to the correct GPU; if you have a list of tensors as input to your model's forward() method, you need to move the tensors in the list to the correct GPU one by one. The correct GPU can be retrieved by accessing the .device attribute of a tensor that nn.DataParallel has already moved automatically. Never force a .to(device) with the wrong device!
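One way to apply that inside forward (a sketch with placeholder sizes; here the replica's own parameters provide the right device, which amounts to the same thing):

import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1000, 300)  # placeholder sizes

    def forward(self, batch_list):
        # inside a DataParallel replica the parameters already live on that replica's GPU,
        # so use their device instead of a hard-coded one
        device = self.embedding.weight.device
        return [self.embedding(t.to(device)) for t in batch_list]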
st178012
Hey @Seo, IIUC, the DataParallel should be able to automatically scatter tensors in the input list to the correct device along the batch dimension. It uses the following code. Is your use case different from this assumption? github.com pytorch/pytorch/blob/10dd25dcd18457a53e69eb319f48749a49a48430/torch/nn/parallel/scatter_gather.py#L5-L31 5 def scatter(inputs, target_gpus, dim=0): r""" Slices tensors into approximately equal chunks and distributes them across given GPUs. Duplicates references to objects that are not tensors. """ def scatter_map(obj): if isinstance(obj, torch.Tensor): return Scatter.apply(target_gpus, None, dim, obj) if isinstance(obj, tuple) and len(obj) > 0: return list(zip(*map(scatter_map, obj))) if isinstance(obj, list) and len(obj) > 0: return list(map(list, zip(*map(scatter_map, obj)))) if isinstance(obj, dict) and len(obj) > 0: return list(map(type(obj), zip(*map(scatter_map, obj.items())))) return [obj for targets in target_gpus] # After scatter_map is called, a scatter_map cell will exist. This cell # has a reference to the actual function scatter_map, which has references # to a closure that has a reference to the scatter_map cell (because the This file has been truncated. show original
st178013
Hi @mrshenli, thank you for your response. Actually my case is different, I have a list of tensors and I want to chunk the list along its length. I have solved by implementing my own scatter method like this: def scatter(inputs, target_gpus, dim=0): r""" Slices tensors into approximately equal chunks and distributes them across given GPUs. Duplicates references to objects that are not tensors. """ def scatter_map(obj): if isinstance(obj, torch.Tensor): return Scatter.apply(target_gpus, None, dim, obj) if isinstance(obj, tuple) and len(obj) > 0: return list(zip(*map(scatter_map, obj))) if isinstance(obj, list) and len(obj) > 0: #on the last gpu the torch scatter always put the remaining samples to fit the batch # (e.g., batch=256, n_gpus=3 ==> chunks=[86, 86, 84]) size = math.ceil(len(obj)/len(target_gpus)) chunk = [obj[i * size:(i + 1) * size] for i in range(len(target_gpus)-1)] diff = len(obj) - size*(len(target_gpus)-1) chunk.append(obj[-diff:]) return chunk if isinstance(obj, dict) and len(obj) > 0: return list(map(type(obj), zip(*map(scatter_map, obj.items())))) return [obj for targets in target_gpus] # After scatter_map is called, a scatter_map cell will exist. This cell # has a reference to the actual function scatter_map, which has references # to a closure that has a reference to the scatter_map cell (because the # fn is recursive). To avoid this reference cycle, we set the function to # None, clearing the cell try: return scatter_map(inputs) finally: scatter_map = None
st178014
Problem description: I compile the pytorch source code in arm machine.And I want to use DDP interface for distributed training.However, I found that pytorch could only find one physical CPU, which means that my CPU usage cannot exceed 50%.(The machine has two sockets) My machine contains two physical Cpus, each with 64 cores. I use OpenBLAS as the BLAS and I compile it with openmp.In the script, I set the environment variable export OMP_NUM_THREADS=128 export GOMP_CPU_AFFINITY=0-127 export OMP_DISPLAY_ENV=true Then when I execute my script python3 -m torch.distributed.launch \ --nproc_per_node=$NPROC_PER_NODE \ script.py 2>&1 Then I found out that all the threads were running on the same CPU core and it ouput: OPENMP DISPLAY ENVIRONMENT BEGIN _OPENMP = '201511' OMP_DYNAMIC = 'FALSE' OMP_NESTED = 'FALSE' OMP_NUM_THREADS = '64' OMP_SCHEDULE = 'DYNAMIC' OMP_PROC_BIND = 'TRUE' OMP_PLACES = '{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19},{20},{21},{22},{23},{24},{25},{26},{27},{28},{29},{30},{31},{32},{33},{34},{35},{36},{37},{38},{39},{40},{41},{42},{43},{44},{45},{46},{47},{48},{49},{50},{51},{52},{53},{54},{55},{56},{57},{58},{59},{60},{61},{62},{63},{64},{65},{66},{67},{68},{69},{70},{71},{72},{73},{74},{75},{76},{77},{78},{79},{80},{81},{82},{83},{84},{85},{86},{87},{88},{89},{90},{91},{92},{93},{94},{95},{96},{97},{98},{99},{100},{101},{102},{103},{104},{105},{106},{107},{108},{109},{110},{111},{112},{113},{114},{115},{116},{117},{118},{119},{120},{121},{122},{123},{124},{125},{126},{127}' OMP_STACKSIZE = '0' OMP_WAIT_POLICY = 'PASSIVE' OMP_THREAD_LIMIT = '4294967295' OMP_MAX_ACTIVE_LEVELS = '2147483647' OMP_CANCELLATION = 'FALSE' OMP_DEFAULT_DEVICE = '0' OMP_MAX_TASK_PRIORITY = '0' OMP_DISPLAY_AFFINITY = 'FALSE' OMP_AFFINITY_FORMAT = 'level %L thread %i affinity %A' OPENMP DISPLAY ENVIRONMENT END OPENMP DISPLAY ENVIRONMENT BEGIN _OPENMP = '201511' OMP_DYNAMIC = 'FALSE' OMP_NESTED = 'FALSE' OMP_NUM_THREADS = '64' OMP_SCHEDULE = 'DYNAMIC' OMP_PROC_BIND = 'TRUE' OMP_PLACES = '{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19},{20},{21},{22},{23},{24},{25},{26},{27},{28},{29},{30},{31},{32},{33},{34},{35},{36},{37},{38},{39},{40},{41},{42},{43},{44},{45},{46},{47},{48},{49},{50},{51},{52},{53},{54},{55},{56},{57},{58},{59},{60},{61},{62},{63}' OMP_STACKSIZE = '0' OMP_WAIT_POLICY = 'PASSIVE' OMP_THREAD_LIMIT = '4294967295' OMP_MAX_ACTIVE_LEVELS = '2147483647' OMP_CANCELLATION = 'FALSE' OMP_DEFAULT_DEVICE = '0' OMP_MAX_TASK_PRIORITY = '0' OMP_DISPLAY_AFFINITY = 'FALSE' OMP_AFFINITY_FORMAT = 'level %L thread %i affinity %A' OPENMP DISPLAY ENVIRONMENT END Could someone tell me what is the reason? Thanks! environment Collecting environment information... PyTorch version: 1.6.0a0+b31f58d Is debug build: No CUDA used to build PyTorch: None OS: CentOS Linux release 7.6.1810 (AltArch) GCC version: (GCC) 9.2.0 CMake version: version 3.16.5 Python version: 3.7 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip3] numpy==1.19.1 [pip3] torch==1.6.0a0+b31f58d [conda] Could not collect
st178015
From the DDP Docs 5, you must do the following when initializing DDP: For multi-device modules and CPU modules, device_ids must be None or an empty list, and input data for the forward pass must be placed on the correct device. While you cannot specify which cores to run each process on from PyTorch, you should still be able to specify CPU affinity in general.
st178016
Thanks for your reply! Yeah, I did what they said: model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[], output_device=[]) I also think I should be able to specify CPU affinity; however, I failed.
st178017
@VitalyFedyunin @ptrblck is it possible to specify CPU affinity when using PyTorch?
st178018
It should be possible to set the CPU affinity using NVML 11 and Tesla GPUs for DDP. You could probably use pynvml as a convenient Python API to create the affinity list and set it via os.sched_setaffinity. However, I haven’t played around with it a lot.
st178019
Thanks for your reply, I have tried it but it did nothing. I implement it just like this: #set cpu affinity pid = 0 affinity_mask = {i for i in range(128)} os.sched_setaffinity(0, affinity_mask) print("Number of CPUs:", os.cpu_count()) affinity = os.sched_getaffinity(pid) real_pid = os.getpid() print("Now, process {} is eligibl to run on:{}".format(real_pid,affinity)) In fact, It did print the process is eligible to run on CPU:0~127 Now, process 34094 is eligibl to run on:{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127} However,when I use ps -eLF to check the threads of this process, it still uses the half of the CPU cores. I set this environment export OMP_DISPLAY_ENV=true and then the script print the OpenMP message twice.Do you know what are the OpenMP calls in there two places? As shown above,the first time OMP_PLACES is 0-127,but the second time it becomes 0-63. If I do not use DDP and just execute the py script,then I found there is just one OpenMP message.
st178020
@khalil Could you describe your DDP setup? Are you running DDP on a single machine with multiple processes, if so how many processes per host? Or are you running DDP across multiple machines here?
st178021
I am sorry for taking so long to reply. I found that the problem is not caused by DDP; it is caused by the __init__.py file in the torch directory. I avoided loading this __init__.py and the problem disappeared. I think there is some library problem on my machine. Thanks for your reply.
st178022
The training hangs without printing any logs. Observations/configurations: 4 nodes. 4 GPU/node. distributed training with each process taking 1 GPU. pytorch version = 1.1; cuda version = 9.0; gpu driver version: 410.78 use the code base of facebook/maskrcnn-benchmark, but i thought it is just normal pytorch code. GPU utility is close to 100%, but there is no more log. it has finished 278K iterations, and then hangs there without any progress (no more snapshot, no more logs) gdb attached to one of the process (sudo gdb -p process_id) and it seems like it hangs at cuMemcpyHtoDAsync_v2. (gdb) where #0 0x00007ffe309e1b6d in clock_gettime () #1 0x00007f8cc536f876 in __GI___clock_gettime (clock_id=4, tp=0x7ffe30898660) at ../sysdeps/unix/clock_gettime.c:115 #2 0x00007f8c6c7ecc4e in ?? () from /usr/local/nvidia/lib64/libcuda.so.1 #3 0x00007f8c6c87b8d3 in ?? () from /usr/local/nvidia/lib64/libcuda.so.1 #4 0x00007f8c6c89b81f in ?? () from /usr/local/nvidia/lib64/libcuda.so.1 #5 0x00007f8c6c7c8737 in ?? () from /usr/local/nvidia/lib64/libcuda.so.1 #6 0x00007f8c6c6d9e4e in ?? () from /usr/local/nvidia/lib64/libcuda.so.1 #7 0x00007f8c6c6dbfc3 in ?? () from /usr/local/nvidia/lib64/libcuda.so.1 #8 0x00007f8c6c829c82 in cuMemcpyHtoDAsync_v2 () from /usr/local/nvidia/lib64/libcuda.so.1 #9 0x00007f8cbe7ad49c in ?? () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcudart-f7fdd8d7.so.9.0 #10 0x00007f8cbe78a573 in ?? () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcudart-f7fdd8d7.so.9.0 #11 0x00007f8cbe7c3d86 in cudaMemcpyAsync () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcudart-f7fdd8d7.so.9.0 #12 0x00007f8c836a9f4b in (anonymous namespace)::copy_from_cpu(at::Tensor&, at::Tensor const&) () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so #13 0x00007f8c8374a875 in void (anonymous namespace)::_copy__cuda<float>(at::Tensor&, at::Tensor const&, bool) () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so #14 0x00007f8c836aafb8 in at::native::_s_copy__cuda(at::Tensor&, at::Tensor const&, bool) () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so #15 0x00007f8c826d47ef in at::CUDAType::s_copy_(at::Tensor&, at::Tensor const&, bool) const () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so #16 0x00007f8c7764033d in at::native::copy_(at::Tensor&, at::Tensor const&, bool) () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2.so #17 0x00007f8cbf546dc9 in torch::autograd::VariableType::copy_(at::Tensor&, at::Tensor const&, bool) const () from /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch.so.1 #18 0x00007f8c777829cc in at::native::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2.so #19 0x00007f8c77a01857 in at::TypeDefault::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) const () from /opt/conda/lib/python3.6/site-packages/torch/lib/libcaffe2.so #20 0x00007f8cbf31cb52 in torch::autograd::VariableType::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) const () from /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch.so.1 #21 0x00007f8cc0bb8eb3 in torch::autograd::dispatch_to(at::Tensor const&, c10::Device, bool, bool) () from /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so #22 0x00007f8cc0bb9598 in torch::autograd::THPVariable_to(_object*, _object*, _object*) () from /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so #23 
0x0000556d518096a6 in PyCFunction_Call () at /tmp/build/80754af9/python_1546130271559/work/Objects/methodobject.c:98 #24 0x0000556d518b74ad in do_call_core (kwdict=0x7f8b9b42b168, callargs=0x7f8bb1deec18, func=0x7f8b9b42b510) at /tmp/build/80754af9/python_1546130271559/work/Python/ceval.c:5116 #25 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1546130271559/work/Python/ceval.c:3404
st178023
Found the issue: pytorch/pytorch issue "distributed all_reduce deadlocks in v1.1" (opened by prafullasd on 2019-05-17, closed by facebook-github-bot on 2019-05-26): "I'm doing multi-node training (8 nodes, 8 gpu's each, NCCL backend) and am using DistributedDataParallel for syncing grads and distributed.all_reduce()..."
st178024
Distributed data parallel freezes without error message Env: Ubuntu 18.04 Pytorch 1.6.0 CUDA 10.1 Actually, I am using Docker image gemfield/pytorch:1.6.0-devel which stated in https://github.com/DeepVAC/deepvac (same with above env), and use PyTorch DDP (by use the class DeepvacDDP in https://github.com/DeepVAC/deepvac/blob/master/deepvac/syszux_deepvac.py) to train my model, which the code worked perfect yesterday. But today when I launch the train program again, the DDP is stucked in loss.backward(), with cpu 100% and GPU 100%。 There has no co…
st178025
Hi, I'm working on modifying my model (including my custom data loader) to fit the structure of DDP. I haven't given my code a try but I'd like to know more about the synchronization process. According to the many great threads on this forum, DDP takes care of the synchronization during loss.backward(). But what if the number of samples in each data loader leads to different for-loop counts, would the processes with n+1 loops be blocked because the processes with n loops never reach that point? Say I have 401 images, distributed to 4 data loaders with 101, 100, 100, 100 images respectively. The batch size is 4, so process 0 gets 26 iterations while the others get 25. Would my process group get stuck at the 26th iteration? Here is a simplified version of part of my code:

# ......(some init process including moving self.model to DDP)......
for phase in ['train', 'eval']:
    dist.barrier()
    if phase == 'train':
        self.model.train()
        self.data_loader.train()
    else:
        self.model.eval()
        self.data_loader.eval()
    running_loss = 0
    for inputs, labels in self.data_loader:
        self.optimizer.zero_grad()
        with torch.set_grad_enabled(phase == 'train'):
            outputs = self.model(inputs)
            loss = self.loss(outputs, labels)
            if phase == 'train':
                loss.backward()
                ### Could this or the following line get stuck during the extra loop by process 0?
                self.optimizer.step()
        running_loss += loss.item() * inputs.shape[0]
        torch.cuda.empty_cache()
    epoch_loss = running_loss / len(self.data_loader)

Thanks for any helpful hint!
st178026
annisat: According to the many great threads on this forum, DDP takes care of the synchronization during loss.backward(). But what if the number of data in each data loader leads to different for-loop counts, would the processes with n+1 loops be blocked because the processes with n loops never reach the point? Yep, the one with n+1 loops will block when using <= PyTorch v1.6. There are ways to get around in user code, e.g. by collecting a signal in each iteration to see if any process has already exited. If yes, break. @rvarm1 is working on a much better solution, which will be included in v1.7. With that solution, the process that exits early will use dummy comm ops to unblock remaining active ones. Please see the following issue and PR. github.com/pytorch/pytorch [RFC] Join-based API to support uneven inputs in DDP 3 opened May 9, 2020 rohan-varma 🚀 Feature with @pritamdamania87 @mrshenli @zhaojuanmao This RFC is to summarize the current proposal for supporting uneven inputs across different DDP processes. Related... feature module: distributed triaged github.com/pytorch/pytorch Join-based API to support DDP uneven inputs 2 pytorch:gh/rohan-varma/152/base ← pytorch:gh/rohan-varma/152/head opened Aug 5, 2020 rohan-varma +894 -59
st178027
Thousand thanks for the explanation! I modified my code following your suggestion and I provide my provisional solution here for comments. running_loss = 0 running_len = 0 for inputs, labels in self.data_loader: self.optimizer.zero_grad() with torch.set_grad_enabled(phase=='train'): outputs = self.model(inputs) loss = self.loss(outputs, labels) if phase == 'train': loss.backward() self.optimizer.step() iteration_count+=1 running_loss += loss.item() running_len += inputs.shape[0] torch.cuda.empty_cache() ########## is_next = torch.Tensor([self.data_loader.peek()]) # is_next==True if the iterator has not reached the end, i.e., next loop is expected dist.all_reduce_multigpu(is_next, op=dist.ReduceOp.BAND) if not is_next: break ##########
st178028
Hey @annisat, that looks good to me. One way to speed it up a bit is to run the dist.all_reduce at the beginning of the loop and set async_op=True. Then only wait for it when you need the result. In this way, the comm and the forward/backward/opt.step computation can overlap. Please see the code in the following thread: Multiprocessing - Barrier Blocks all Processes? The self-contained code below works for me. import torch import torch.distributed as dist import torch.multiprocessing as mp import torch.nn as nn import torch.optim as optim from torch.nn.parallel import DistributedDataParallel as DDP def example(rank, world_size): # create default process group dist.init_process_group("gloo", rank=rank, world_size=world_size) # create local model model = nn.Linear(10, 10).to(rank) # construct DDP model ddp_model = DDP(model, device_i…
st178029
Thanks for the tips! It took me some while to understand and implement async_op. I would like to point out a problem when I ran my own code above. I changed my code to is_next = torch.Tensor([self.data_loader.peek()]).cuda(self.gpu) col_handle = dist.all_reduce(is_next, op=dist.ReduceOp.BAND, async_op) ... col_handle.wait() if not is_next: break and tried it with SPSG with 2 processes. The final value of is_next is [2] rather than [True] or [1]. It seems that dist.ReduceOp.BAND adds up input tensors rather than doing a regular AND. Therefore I changed the first line into: is_next = torch.Tensor([self.data_loader.peek()]).bool().cuda(self.gpu) The Error Message says all_reduce does not support this Tensor type for now. In order to achieve my goal, I use dist.ReduceOp.MIN instead. Here’s my final code that actually runs smoothly without imbalanced for-loop counts blocking the synchornization process. for inputs, labels in self.data_loader: is_next = torch.Tensor([self.data_loader.peek()]).cuda(self.gpu) col_handle = dist.all_reduce(is_next, op=dist.ReduceOp.MIN, async_op=True) # forward and backward and step and stuff col_handle.wait() if not is_next: break
st178030
Hi I’m rather new to the DDP, and I found a rather bizarre behavior of my training with DDP. Say I have 4 GPUs in total. If I run my code on the GPU:0 and GPU:1 and leave the remaining two unoccupied. Then during training the percentage of both GPU would be at maximum 50%. Now the GPU occupation is: gpu0: process1(50% or lower) gpu1: process1(50% or lower) gpu2: empty gpu3: empty But when I run another training code on the remaining two GPUs, all 4 cards would hit 100% usage and both of the processes run faster than the previous situation. Now the gpu occupation is: gpu0: process1 (99%) gpu1: process1 (99%) gpu2: process2 (99%) gpu3: process2 (99%) I experience this on multiple servers and it’s really confusing me. Can anyone help with this?
st178031
How about the per-iteration latency? If you feed the same batch size to each DDP instance (i.e., a different global batch size), is using 4 GPUs still faster? Please use the elapsed_time API to measure that. As DDP uses all_reduce to communicate gradients, GPU utilization cannot faithfully represent how busy a GPU is: CUDA would report 100% GPU utilization even if one GPU is blocked waiting for another peer to join and doing nothing.
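A sketch of per-iteration timing with CUDA events (train_step and batch are placeholders for your own loop body):

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
train_step(batch)         # forward + backward + optimizer step
end.record()
torch.cuda.synchronize()  # wait for all queued kernels before reading the timer
print("iteration time: %.1f ms" % start.elapsed_time(end))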
st178032
Thanks for your reply! I didn't use elapsed_time to measure how long one iteration takes, but I did use time.time() to calculate the iteration latency. As long as there is one GPU that remains unused, the process runs slower (sometimes 50% slower and sometimes 100% slower). The weird thing is that it seems my DDP program is influenced by other programs (not necessarily DDP) running on other GPUs, in a counterintuitive way: normally we expect programs to compete for computational resources, but here more programs bring better performance. (This didn't happen to my colleague's non-DDP program, so I guess it must have something to do with DDP.)
st178033
I am using an A100 server from GCP with the latest NGC container from NVIDIA. However, for DCNv2 support I have to downgrade my PyTorch version to 1.4.0. Whenever I initialise a tensor on the GPU, like torch.randn(3).cuda(), the interpreter gets stuck and never finishes that command. Any help?
st178034
hey @ptrblck, do you know if anyone can answer questions regarding using PyTorch cuda features on GCP/NGC? cc @ngimel
st178035
Solved! After close to 10 minutes the tensor gets initialised on the GPU and from then on there is no problem.
st178036
The long startup time is most likely created by a JIT compilation of the CUDA code, if your installed PyTorch version wasn't built for compute capability 8.0 (A100). This would be the case if you've installed the 1.4 binary instead of building it from source. We are working towards building the PyTorch nightly binaries with the latest library stack and cc 8.0. For now, you could either build from source or let the JIT compiler run in the first CUDA call.
st178037
Thanks for the reply. But I wonder: is it possible to build an old PyTorch version (for example 1.4) from source with CUDA 11? Or is there any plan to support old PyTorch versions on the A100?
st178038
Shenggan: Or is there any plan to support old version PyTorch for A100? There is no plan on changing older PyTorch versions to enable CUDA11 and thus new GPU architectures, so you would have to use the latest PyTorch version. Shenggan: is it possible to build from source for old version PyTorch (for example version 1.4) with cuda11? You could try to cherry-pick all commits mentioning CUDA11 in an older version and try to build it. However, while it might work, what’s your use case that you need to use an old PyTorch version?
st178039
Thanks for the reply. I think porting to the latest version PyTorch is the best choice.
st178040
I think this is an elementary question about programming with GPUs. First, I tried to use time.time() from the Python standard library to measure the operation time of some modules in NNs, such as:

def forward(self, x):
    end = time.time()
    output1 = self.layer1(x)
    time_output1 = time.time()
    output2 = self.layer2(output1)
    time_output2 = time.time()
    print(time_output1 - end, time_output2 - end)

However, I found that the timing information is inaccurate, and I had to use the approach from this thread: How to measure time in PyTorch, which asks: "I have seen lots of ways to measure time in PyTorch. But what is the most proper way to do it now (both for cpu and cuda)? Should I clear the memory cache if I use timeit? And is it possible to get accurate results if I'm computing on a cluster? And is there a way to make these results reproducible? And what is better: timeit or profiler?" Although I measure the correct time with the methods in that thread, I want to know why profiling with time.time() gives inaccurate results. An example: https://github.com/facebookresearch/moco/issues/66 Thank you!
st178041
Solved by tom in post #8: So the immediate takeaways from the above discussion are: replace time.time() with time.perf_counter(); have a torch.cuda.synchronize() before taking the start_time; maybe don’t take the first batch (of a given size).
st178042
It isn’t. While there are more refined measures, there isn’t anything wrong with plain timing. Apparently there are first lots of people doing it wrong (both beginners and people with considerable experience), and then inaccurate representations of what exactly is wrong (“you can’t use time.time”; edit: it is actually true that you should use time.perf_counter() instead!). The main things to get right are warm-up and synchronization. If you use the GPU, unless you call torch.cuda.synchronize() before taking the time (for both start and finish), you don’t know what has been executed before and after the time taking. I invariably use the following pattern:

def do_stuff():
    for _ in range(100):  # or 1000 or whatever, depending on how long it takes
        do_my_computation()
    torch.cuda.synchronize()

do_stuff()
%timeit do_stuff()

Of course, you need to divide the time by the size of the loop; I usually aim for something in the msec range or so.
What this does: it runs the operator (do_my_computation) multiple times between syncs, which reduces the influence of the synchronization itself (which takes time) on the measurement. Calling do_stuff() once before the timing does two things:
- warm-up (e.g. some things compile kernels on the fly when called for the first time),
- synchronization before the timing starts.
Timing do_stuff() ensures that synchronization happens after each run (and thus implicitly before the next).
You can do essentially the same thing with time.perf_counter() (not time.time()) before and after what %timeit measures here, except that %timeit will actually call do_stuff() several times and do some stats to help you along. There is also the timeit module, which is similar, but you need to adjust the number of runs manually to the duration of your computation.
That said, the profiler gives you more detailed information with very little effort.
Best regards
Thomas
st178043
Hello Tom. First of all, thank you for the fast and detailed reply! After reading it, I tried to use time.time() and torch.cuda.synchronize() to profile the elapsed times. However, the results are still inaccurate. For the example of https://github.com/facebookresearch/moco/issues/66:
i) time.time() w/o torch.cuda.synchronize()
shuffle time: 0.5993 s, inf time: 0.1185 s
ii) torch.cuda.Event & torch.cuda.synchronize()
shuffle time: 2.72 ms, inf time: 59.88 ms
iii) time.time() w/ torch.cuda.synchronize()
shuffle time: 0.0649 s, inf time: 0.0587 s
Although I use torch.cuda.synchronize(), the shuffle time is still over-estimated.
st178044
Ah, wait. The other thing you should do is use time.perf_counter() instead of time.time(). time.time() isn’t guaranteed to give valid differences; you need a monotonic clock for that.
st178045
Thanks, Tom. I checked both time.perf_counter() and time.process_time() with torch.cuda.synchronize(), and got similar results to time.time():
iv) time.perf_counter() w/ torch.cuda.synchronize()
shuffle time: 0.0650 s, inf time: 0.0587 s
v) time.process_time() w/ torch.cuda.synchronize()
shuffle time: 0.0879 s, inf time: 0.0584 s
Comparing all the results, the inference time is consistent, but the shuffle time varies with the profiling method. The shuffle time is the time of the shuffleBN step in MoCo (https://github.com/facebookresearch/moco/blob/78b69cafae80bc74cd1a89ac3fb365dc20d157d3/moco/builder.py#L133):

# compute query features
q = self.encoder_q(im_q)  # queries: NxC
q = nn.functional.normalize(q, dim=1)
# compute key features
with torch.no_grad():  # no gradient to keys
    self._momentum_update_key_encoder()  # update the key encoder
    # shuffle for making use of BN
    im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k)
    k = self.encoder_k(im_k)  # keys: NxC
    k = nn.functional.normalize(k, dim=1)
    # undo shuffle
    k = self._batch_unshuffle_ddp(k, idx_unshuffle)
# compute logits
# Einstein sum is more intuitive
# positive logits: Nx1

It gathers all samples across GPUs, shuffles the indices, and redistributes the mini-batch to each GPU (https://github.com/facebookresearch/moco/blob/78b69cafae80bc74cd1a89ac3fb365dc20d157d3/moco/builder.py#L69-L94):

def _batch_shuffle_ddp(self, x):
    """
    Batch shuffle, for making use of BatchNorm.
    *** Only support DistributedDataParallel (DDP) model. ***
    """
    # gather from all gpus
    batch_size_this = x.shape[0]
    x_gather = concat_all_gather(x)
    batch_size_all = x_gather.shape[0]

    num_gpus = batch_size_all // batch_size_this

    # random shuffle index
    idx_shuffle = torch.randperm(batch_size_all).cuda()

    # broadcast to all gpus
    torch.distributed.broadcast(idx_shuffle, src=0)

    # index for restoring
    idx_unshuffle = torch.argsort(idx_shuffle)

I cannot see a reason why only the recorded time of this shuffle operation varies with the measuring method.
st178046
That might become a lot clearer if you specify exactly what you want to measure, time those bits in isolation, and break the entire thing down into the parts you want to measure (i.e. do the parts reconcile to the total? If not, where are the overlaps or gaps?). The links don’t seem to show the actual measurement you inserted.
st178047
In the forward pass of MoCo, I am measuring the shuffling time, i.e.:

def forward(self, im_q, im_k):
    (....)
    # compute key features
    with torch.no_grad():  # no gradient to keys
        self._momentum_update_key_encoder()  # update the key encoder
        start_time = time.time()
        # shuffle for making use of BN
        im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k)
        torch.cuda.synchronize()
        end_time = time.time()
        shuffle_time = end_time - start_time
    (...)
st178048
So the immediate takeaways from the above discussion are:
- replace time.time() with time.perf_counter()
- have a torch.cuda.synchronize() before taking the start_time
- maybe don’t take the first batch (of a given size)
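A minimal sketch putting those points together (model and inp in the usage comment are placeholders for whatever is being measured):

import time
import torch

def timed(fn, warmup=3, iters=10):
    # warm-up: kernel compilation, cudnn autotuning, caching allocator, ...
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()            # make sure nothing is still pending
    start = time.perf_counter()         # monotonic clock instead of time.time()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()            # wait for the measured work to finish
    return (time.perf_counter() - start) / iters

# usage, assuming `model` and `inp` already live on the GPU:
# print(timed(lambda: model(inp)))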
st178049
time.time()
i) time.time() w/o torch.cuda.synchronize()
shuffle time: 0.5993 s, inf time: 0.1185 s
ii) time.time() w/ torch.cuda.synchronize() both before and after an operation
shuffle time: 0.0018 s, inf time: 0.031 s

time.perf_counter()
i) time.perf_counter() w/ torch.cuda.synchronize()
shuffle time: 0.0650 s, inf time: 0.0587 s
ii) time.perf_counter() w/ torch.cuda.synchronize() both before and after an operation
shuffle time: 0.0021 s, inf time: 0.0309 s

time.process_time()
i) time.process_time() w/ torch.cuda.synchronize()
shuffle time: 0.0879 s, inf time: 0.0584 s
ii) time.process_time() w/ torch.cuda.synchronize() both before and after an operation
shuffle time: 0.001879 s, inf time: 0.03107 s

torch.cuda.Event
i) torch.cuda.Event & torch.cuda.synchronize()
shuffle time: 2.72 ms, inf time: 59.88 ms

Conclusions:
When profiling time, we can use either the time module in Python or torch.cuda.Event. What we must remember is to call torch.cuda.synchronize() right before and after the operations to be profiled. The measurements from the time module and cuda.Event are somewhat different, but the ratios are consistent.
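For reference, the torch.cuda.Event pattern measured above looks roughly like this minimal sketch (the matmul is just a stand-in for the profiled operation, e.g. the shuffle):

import torch

x = torch.randn(1024, 1024, device="cuda")
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

torch.cuda.synchronize()
start.record()
out = x @ x               # stand-in for the operation being profiled
end.record()
torch.cuda.synchronize()  # wait until both events have completed
print(start.elapsed_time(end), "ms")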
st178050
Lastly, can I ask why the time module and cuda.Event measure different elapsed times? I am using 4x V100 and DistributedDataParallel.
st178051
I must admit that I don’t know. One thing to exclude would be stochastic variation (e.g. %timeit gives you a standard deviation, so you can imagine error bars for the measurement), but I would not expect a 30 ms -> 60 ms difference from that. The other part is that you need a really stable environment to get reliable benchmarking; maybe something we did here changed the interaction with other things going on. (But maybe @ptrblck knows something.) Fun anecdote: a long time ago, I briefly enabled remote access to the GPU I used for benchmarking for one of my fellow PyTorch devs, because I somehow had a much more stable timing environment than them.
Best regards
Thomas
st178052
#%% LSTM architecture class LSTM(nn.Module): def __init__(self, input_dim, hidden_dim, batch_size,num_layers,output_dim): super(LSTM, self).__init__() self.input_dim = input_dim self.hidden_dim = hidden_dim self.batch_size = batch_size self.num_layers = num_layers self.output_dim=output_dim # Define the LSTM layer self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, self.num_layers) # Define the output layer self.linear = nn.Linear(self.hidden_dim, output_dim) def init_hidden(self): return (torch.zeros(self.num_layers, self.batch_size, self.hidden_dim), torch.zeros(self.num_layers, self.batch_size, self.hidden_dim)) def forward(self, input): lstm_out, self.hidden = self.lstm(input.view(len(input), self.batch_size, -1)) # Only take the output from the final timestep y_pred = self.linear(lstm_out[-1].view(self.batch_size, -1)) return y_pred.view(-1) #%% Train the Model loss_epoch_train = [] loss_epoch_val = [] net = net.double() for epoch in range(num_epochs): loss_seq_train = [] loss_seq_val = [] # train loop for seq, labels in train_loader: seq, labels = seq.to(device), labels.to(device) # init hidden cell net.hidden = net.init_hidden() # Clear stored gradient optimizer.zero_grad() y_pred_train = net(seq.double()) # loss computation and backpropagation seq_loss = loss_function(y_pred_train, labels) loss_seq_train.append(seq_loss.data.cpu().numpy()) seq_loss.backward() optimizer.step() print('Epoch: ' + str(epoch+1) + ', Loss: ' + str(seq_loss.item())) # val loop for seq, labels in val_loader: seq, labels = seq.to(device), labels.to(device) # current model prediction y_pred_val = net(seq.double()) # loss computation seq_loss = loss_function(y_pred_val, labels) loss_seq_val.append(seq_loss.data.cpu().numpy()) loss_epoch_train.append(np.mean(loss_seq_train)) loss_epoch_val.append(np.mean(loss_seq_val)) # print loss of validation and training data for each epoch print('Epoch '+str(epoch)+'/'+str(num_epochs)+': Train-Loss: '+str(np.round(loss_epoch_train[-1],4))+'; Val-Loss: '+str(np.round(loss_epoch_val[-1],4)))
st178053
Hey @john, is any part of this model using DataParallel, DistributedDataParallel, or torch.distributed.rpc? Any reason for tagging this question with “distributed-rpc”? The format of the code looks distorted, and will be hard to debug. Could you please share a properly-formatted self-contained example?
st178054
Please, can I share my complete code with you? I really want to get it done correctly.
st178055
Hi, I want to have two parallel processes on one GPU, one for training (computing) and the other for communicating parameter updates with other GPUs. Both processes should be able to modify a shared variable (like a buffer storing the most recently updated parameters). Does anyone know whether this is possible? I checked this documentation: https://pytorch.org/docs/stable/notes/multiprocessing.html, and it mentions multiprocessing.Queue, but I'm not sure whether it is suitable in my case. Are there any good examples?
st178056
This should be doable, see the example code in this post: `Exception: process 0 terminated with exit code 1` error when using `torch.multiprocessing.spawn` to parallelize over multiple GPUs — You can use share_memory_() and torch.multiprocessing.SimpleQueue to implement IPC. E.g.:

import numpy as np
import torch
import torch.multiprocessing as mp

def func(rank, x, p2c, c2p):
    x_power = x.to(rank) ** rank
    c2p.put(x_power)
    # citing multiprocessing doc: Unlike CPU tensors, the
    # sending process is required to keep the original tensor
    # as long as the receiving process retains a copy of
    # the tensor. The refcounting is implemented under the
    # hood but re…
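For illustration, here is a minimal sketch of one training process and one communication process sharing a parameter buffer; the buffer size, update logic, and function names are made up for the example, and a real communicator would run collectives instead of printing:

import torch
import torch.multiprocessing as mp

def trainer(shared_buf, queue):
    # pretend "training": update the shared parameter buffer in place
    for step in range(3):
        with torch.no_grad():
            shared_buf.add_(1.0)   # in-place change is visible to the other process
        queue.put(step)            # tell the communicator something changed

def communicator(shared_buf, queue):
    for _ in range(3):
        step = queue.get()
        # a real implementation would broadcast / all_reduce shared_buf here
        print(f"step {step}, buffer mean = {shared_buf.mean().item():.1f}")

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    buf = torch.zeros(10)
    buf.share_memory_()            # shared CPU tensor; CUDA tensors are shared via CUDA IPC
    q = mp.SimpleQueue()
    p1 = mp.Process(target=trainer, args=(buf, q))
    p2 = mp.Process(target=communicator, args=(buf, q))
    p1.start(); p2.start()
    p1.join(); p2.join()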
st178057
I am trying to set up a training workflow with PyTorch DistributedDataParallel (DDP). Generally when I train, I pass a logger through to track outputs and record useful information. However, I am having trouble using my existing logger with DDP. Right now my code is as follows:

import torch
import torch.multiprocessing as mp

class BaseModel:
    def __init__(self, *args, **kwargs):
        ...
        "does things"

    def fit(self, *args, **kwargs):
        ...
        'set up stuff'
        mp.spawn(self.distributed_training, nprocs=self.num_gpus,
                 args=(self.params, training_input, self.logger))

    def distributed_training(params, training_input, logger):
        ...
        for e in epochs:
            'trains for an epoch'
            logger.info(print_line)

I know I am supposed to use the QueueHandler and QueueListener tools from the logging module, but I have been scouring the internet and still do not have a clear understanding of how. Any help would be greatly appreciated.
st178058
What sort of issues do you encounter when running this code? You could also consider creating a per-process logger inside each spawned process instead of passing the same logger into all of them. This would also allow you to configure logging on a per-DDP-process basis, for example writing the logs to different files depending on the process.
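A minimal sketch of what such a per-process logger could look like inside the spawned function (the file names, format, and signature are just examples; mp.spawn passes the process rank as the first argument):

import logging

def setup_logger(rank):
    logger = logging.getLogger(f"ddp_rank_{rank}")
    logger.setLevel(logging.INFO)
    # one log file per DDP process, e.g. train_rank0.log, train_rank1.log, ...
    handler = logging.FileHandler(f"train_rank{rank}.log")
    handler.setFormatter(logging.Formatter("%(asctime)s [%(name)s] %(message)s"))
    logger.addHandler(handler)
    return logger

def distributed_training(rank, params, training_input):
    logger = setup_logger(rank)
    logger.info("starting training on rank %d", rank)
    # ... training loop; call logger.info(...) per epoch ...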
st178059
With the way the code is set up, things passed to the logger in the spawned process just don’t go through (i.e., they won’t print or save). I’m not sure how a spawned-process logger would help, as I need to capture things during training. Mainly, I’m just trying to record the loss and metrics per epoch; nothing I would consider special for a training process. I think the issue is that the logger is initiated in the main process, so sending it to another process causes issues (whether its own process or the distributed process).
st178060
I have a training script which I launch using torch.distributed.launch on multiple GPUs. I would like to set a time limit, so that my training will early stop without surpassing this limit. Something like this: # for storing the running times of the last 3 epochs epoch_time_queue = deque(maxlen=3) start_time = time.time() for epoch in range(start_epoch, args.epochs): start_epoch_time = time.time() # training train_epoch(...) # validation eval_epoch(...) # epoch time in minutes epoch_time = (time.time() - start_epoch_time)/60 # average duration of the last 3 epochs epoch_time_queue.append(epoch_time) avg_time = sum(epoch_time_queue)/len(epoch_time_queue) # if the next epoch will likely surpass the time limit, then stop here estimated_next_total_time = (time.time() - start_time)/60 + avg_time if args.time_limit > 0 and estimated_next_total_time > args.time_limit: break The issue is that the elapsed time may be different between processes. For example, at the end of the 5th epoch, the process on GPU1 may think that it will surpass the time limit at the next (6th) epoch by a few seconds, so it stops; while GPU2 thinks that it will be able to finish the 6th epoch a few seconds before the limit, so it will continue, which is not good. I would like to know if there is a way for the processes to communicate about this. Ideally, a process should wait for all the other processes to finish the current epoch to decide whether to go for the next epoch or not.
st178061
Solved by mrshenli in post #2: @f10w yep, this is possible, you can use the all_gather API to let every process collect the elapsed time from all processes.
st178062
@f10w yep, this is possible: you can use the all_gather API to let every process collect the elapsed time from all processes.
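A minimal sketch of that idea, assuming the default process group is already initialized and each rank uses its own GPU (the function and variable names are just placeholders):

import torch
import torch.distributed as dist

def all_ranks_should_continue(elapsed_minutes, time_limit, device):
    # every rank contributes its own elapsed time
    mine = torch.tensor([elapsed_minutes], device=device)
    gathered = [torch.zeros_like(mine) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, mine)     # also synchronizes the ranks
    # base the decision on the slowest rank so every process decides the same way
    slowest = max(t.item() for t in gathered)
    return slowest < time_limit

Since every rank sees the same gathered values, they all break out of the training loop in the same epoch.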
st178063
Thanks for the reply! When we do torch.distributed.all_gather(), does it create some kind of “barrier” between the processes?
st178064
Yes, it does. All collective communications (e.g., broadcast, all_reduce, all_gather, etc.) can be considered a barrier.
st178065
A distributed training job occasionally crashes with the errors below; normally it works well. Any idea how to resolve it? PyTorch 1.5, sync-BN is used, and each GPU’s input has different dimensions.
File "/opt/conda/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/_initialize.py", line 197, in new_fwd
**applier(kwargs, input_caster))
File "/tmp/code/quickdetection/src/FCOS/fcos_core/modeling/detector/generalized_rcnn.py", line 49, in forward
features = self.backbone(images.tensors)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/tmp/code/quickdetection/src/qd/layers/efficient_det.py", line 1221, in forward
_, p3, p4, p5 = self.backbone_net(inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/tmp/code/quickdetection/src/qd/layers/efficient_det.py", line 1067, in forward
x = self.model._bn0(x)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 472, in forward
self.eps, exponential_average_factor, process_group, world_size)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/_functions.py", line 46, in forward
count_all.view(-1).long().tolist()
RuntimeError: CUDA error: an illegal instruction was encountered
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal instruction was encountered (insert_events at /opt/conda/conda-bld/pytorch_1591914742272/work/c10/cuda/CUDACachingAllocator.cpp:771)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7fe430441b5e in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x6d0 (0x7fe430686e30 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fe43042f6ed in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x51e58a (0x7fe45da9358a in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #31: __libc_start_main + 0xf0 (0x7fe4783c6830 in /lib/x86_64-linux-gnu/libc.so.6)
st178066
Could you update to PyTorch 1.5.1, as 1.5.0 had a bug where internal assert statements were ignored? This should hopefully yield a better error message than the illegal memory access.
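As a general debugging aid for asynchronous CUDA errors, launching kernels synchronously usually makes the reported stack trace point closer to the failing op; the environment variable has to be set before any CUDA work, e.g.:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"   # must be set before the first CUDA call
import torch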
st178067
With PyTorch 1.6 / CUDA 10.2 / cuDNN 7, I get the following error occasionally:
Traceback (most recent call last):
File "train.py", line 212, in <module>
train(None)
File "/gemfield/hostpv/gemfield/deepvac/lib/syszux_deepvac.py", line 335, in __call__
self.process()
File "train.py", line 163, in process
self.processTrain()
File "/gemfield/hostpv/gemfield/deepvac/lib/syszux_deepvac.py", line 294, in processTrain
self.doBackward()
File "train.py", line 139, in doBackward
self.loss.backward()
File "/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
allow_unreachable=True)  # allow_unreachable flag
RuntimeError: transform: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1595629403081/work/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7fb3e291677d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xb5d (0x7fb3e2b66d9d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fb3e2902b1d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x53f0ea (0x7fb41c1990ea in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #17: __libc_start_main + 0xe7 (0x7fb442bdfb97 in /lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)
I don’t know whether it is a hardware issue, a driver issue, or a PyTorch issue.