st178068
Could you post a minimal code snippet to reproduce this issue as well as your currently installed NVIDIA driver and the GPU you are using?
st178069
Could you post a minimal code snippet as given in my previous post, so that we could have a look at this issue?
st178070
I also encountered similar issues with PyTorch 1.6 on Ubuntu 18 or Ubuntu 16, with CUDA 10.1 or CUDA 10.2. It works fine with PyTorch 1.5.1, but this issue occurs occasionally with PyTorch 1.6.
st178071
Could you rerun the code with CUDA_LAUNCH_BLOCKING=1 python script.py args and post the stack trace here?
st178072
Thanks for your reply. However, this is random; roughly 10% of the time it will happen. Recently I found that PyTorch 1.5.1 also has this issue. Note that in the following trace, CUDA_LAUNCH_BLOCKING is not set to 1. I paste it here and hopefully it contains some useful information. [screenshot of stack trace, 3084×938]
st178073
Another case. It seems like the error message is also random. [screenshot of stack trace, 3312×1816]
st178074
Could you post the stack traces by wrapping them into three backticks ``` please? If you don’t set CUDA_LAUNCH_BLOCKING=1, the stack trace might point to random lines of code.
st178075
I am trying to load a batch from a replay buffer with PyTorch asynchronously while optimizing the model parameters, and thereby hide the batch-loading latency. The program I run is as follows:

```python
for _ in range(100):
    begin = time.time()
    batch = sample_batch()
    batch_load += time.time() - begin

    begin = time.time()
    optimize(batch)
    optimize_time += time.time() - begin
```

When running this script, batch_load takes about 0.001 seconds and optimize_time about 0.009 seconds. To hide the latency of the batch_load (although it doesn't take long in this program, it takes more time in another program which I would actually like to optimize), I thought I could use Python's concurrent.futures module to acquire a future from sample_batch and load it while optimize is running. This program instead looks as follows:

```python
with concurrent.futures.ProcessPoolExecutor(max_workers=12) as executor:
    for _ in range(100):
        begin = time.time()
        future = executor.submit(sample_batch)
        batch_load += time.time() - begin

        begin = time.time()
        optimize(batch)
        optimize_time += time.time() - begin

        batch = future.result()
```

This turned out to be a pretty bad idea. The data loading time increases to 0.085 seconds and the optimization time increases to 0.13 seconds. Can somebody kindly educate me on why the second program is so much slower than the first? Furthermore, does somebody have any ideas on how to hide data loading latency? I appreciate any answers and suggestions very much!
st178076
Solved by mrshenli in post #2 As batch_load measures the latency of executor.submit, I assume that’s the overhead of ProcessPoolExecutor? But it is still weird that the optimize() also increased a lot. Does optimize() run ops on GPU? If yes, you will need to either torch.cuda.synchronize() on that GPU, or use elapsed_time to m…
st178077
As batch_load measures the latency of executor.submit, I assume that's the overhead of ProcessPoolExecutor? But it is still weird that optimize() also increased a lot. Does optimize() run ops on the GPU? If yes, you will need to either call torch.cuda.synchronize() on that GPU or use elapsed_time to measure the latency, because CUDA ops return when the op is inserted into the stream rather than when the op is done.
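A minimal sketch of the two timing approaches mentioned above (explicit synchronization vs. CUDA events); here optimize is just a stand-in for a real training step that launches CUDA kernels:

```python
import time
import torch

def optimize(batch):
    # stand-in for a real training step that launches CUDA kernels
    return (batch @ batch).sum()

batch = torch.randn(1024, 1024, device="cuda")

# Approach 1: synchronize before reading the wall clock, so that queued
# kernels have actually finished when the timer stops.
torch.cuda.synchronize()
begin = time.time()
optimize(batch)
torch.cuda.synchronize()
print("wall-clock seconds:", time.time() - begin)

# Approach 2: CUDA events measure elapsed time on the GPU itself.
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
optimize(batch)
end.record()
torch.cuda.synchronize()  # wait until the recorded events have completed
print("GPU milliseconds:", start.elapsed_time(end))
```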
st178078
Thank you @mrshenli for your answer! Indeed, the slower run time was caused entirely by the overhead of the ProcessPoolExecutor. It is interesting that this context has implications also for non-asynchronous procedure calls. I measured the entire program again with longer-running tasks, and the overhead of the ProcessPoolExecutor seemed to be constant, but the latency of data loading could be hidden behind the optimize call. Again, thank you for your reply - it helped me a lot!
st178079
I am trying to make use of either distributed or parallel training using fastai and SageMaker notebooks or training jobs (somewhat fixed on using this service based on my team). I am running code on an ml.p3.8xlarge with 4x V100, but I cannot get any speed-ups with any of the approaches I have taken. After spinning up the ml.p3.8xlarge notebook instance, here is the set-up in my notebook using the pytorch env:

```bash
%%bash
pip install fastai==2.0.0 fastcore==1.0.0
sudo mkdir -p /opt/ml/input/data/collab
sudo chmod 777 /opt/ml/input/data/collab
```

Here is the code I am testing:

```python
import fastai, fastcore, torch
print(f'fastai {fastai.__version__}')
print(f'fastcore {fastcore.__version__}')
print(f'torch {torch.__version__}')

from fastai.collab import *
from fastai.tabular.all import *
from fastai.distributed import *

path = untar_data(URLs.ML_100k, dest="/opt/ml/input/data/collab")
ratings = pd.read_csv(
    path/'u.data', delimiter='\t', header=None,
    names=['user','movie','rating','timestamp']
)
movies = pd.read_csv(
    path/'u.item', delimiter='|', encoding='latin-1',
    usecols=(0,1), names=['movie','title'], header=None,
)
ratings = ratings.merge(movies)
dls = CollabDataLoaders.from_df(ratings, item_name='title', bs=64)

n_users = len(dls.classes['user'])
n_movies = len(dls.classes['title'])
n_factors = 64

model = EmbeddingDotBias(n_factors, n_users, n_movies)
learn = Learner(dls, model, loss_func=MSELossFlat())
print(learn.model)

print("rank_distrib():", rank_distrib())
print("num_distrib():", num_distrib())
print("torch.cuda.device_count():", torch.cuda.device_count())

epochs, lr = 5, 5e-3

print('learn.fit_one_cycle')
learn.fit_one_cycle(epochs, lr)

print('with learn.distrib_ctx():')
with learn.distrib_ctx():
    learn.fit_one_cycle(epochs, lr)

print('with learn.distrib_ctx(torch.cuda.device_count()-1):')
with learn.distrib_ctx(torch.cuda.device_count()-1):
    learn.fit_one_cycle(epochs, lr)

print('with learn.parallel_ctx():')
with learn.parallel_ctx():
    learn.fit_one_cycle(epochs, lr)

print('nn.DataParallel(learn.model)')
if torch.cuda.device_count() > 1:
    learn.model = nn.DataParallel(learn.model)
learn.fit_one_cycle(epochs, lr)
```

Here is the output from running the code as a script:

```
sh-4.2$ /home/ec2-user/anaconda3/envs/pytorch_p36/bin/python /home/ec2-user/SageMaker/cf.py
fastai 2.0.0
fastcore 0.1.39
torch 1.6.0
EmbeddingDotBias(
  (u_weight): Embedding(944, 64)
  (i_weight): Embedding(1665, 64)
  (u_bias): Embedding(944, 1)
  (i_bias): Embedding(1665, 1)
)
rank_distrib(): 0
num_distrib(): 0
torch.cuda.device_count(): 4
learn.fit_one_cycle
epoch  train_loss  valid_loss  time
0      1.153435    1.154428    00:11
1      0.957201    0.954827    00:11
2      0.816548    0.878350    00:11
with learn.distrib_ctx():
epoch  train_loss  valid_loss  time
0      0.999254    1.040871    00:11
1      0.821853    0.914921    00:11
2      0.658059    0.845227    00:11
with learn.distrib_ctx(torch.cuda.device_count()-1):
epoch  train_loss  valid_loss  time
0      0.749317    0.997568    00:11
1      0.580846    0.912386    00:11
2      0.381058    0.878295    00:11
with learn.parallel_ctx():
epoch  train_loss  valid_loss  time
0      0.514148    1.025872    00:25
1      0.383893    0.996381    00:18
2      0.204836    0.970403    00:18
nn.DataParallel(learn.model)
epoch  train_loss  valid_loss  time
0      0.341708    1.103849    00:16
1      0.272570    1.067705    00:16
2      0.134262    1.055507    00:16
```

Using the command nvidia-smi dmon -s u to watch GPU usage, I can see that only the training with DataParallel (using with learn.parallel_ctx(): and nn.DataParallel(learn.model)) shows GPU ids 1, 2, 3 being used. The problem is that data parallel is slower, even when I have tried increasing the batch size or embedding size.
Any help with this would be appreciated. I have a much larger collaborative filtering model I would like to use that is experiencing the same issues as this movie example and I need to reduce the training time hopefully with the use of parallel/distributed training.
st178080
Hey @pl3, sorry about the delay. For DataParallel (DP), it can become slow when the model is large, as DP needs to replicate the model in every forward pass. For DistributedDataParallel (DDP), I would expect it to be faster than local training. Which of the numbers shown above are DistributedDataParallel? And how did you initialize the DDP module? When using DDP, did you reduce the per-process batch_size to batch_size / world_size?
st178081
For DataParallel (DP), it can become slow when the model is large, as DP needs to replicate the model in every forward pass.

Ah, that makes sense why that is slower, especially since my models have a couple of large embeddings. The lines after with learn.distrib_ctx(): use DDP under the hood as a context manager that handles setting up and tearing down the distributed model. You can find a link to the code here, though it is a bit abstracted and a little difficult to understand (at least for me), depending on familiarity with the fastai library. I am guessing there might be an issue with fastai functions/defaults for how it reads the number of distributed GPUs available in SageMaker environments.

When using DDP, did you reduce the per-process batch_size to batch_size / world_size?

Slightly unclear what you mean here. I had the same batch size for each training loop, which meant that each GPU in DP would have been receiving 1/4th of the batch size, which I was assuming should have been faster.
st178082
pl3: Slightly unclear what you mean here. I had the same batch size for each training loop which meant that each GPU in the DP would have been receiving 1/4th the batch size which I was assuming should have been faster.

I am not familiar with fastai's DDP wrapper. When using the raw DDP API, applications need to spawn one process per GPU and then create one DDP instance and one dataloader in each process. With this setting, the per-process dataloader should use batch_size/world_size as the new batch size. Given the linked code, it looks like it does not spawn subprocesses for you, and it only calls init_process_group when num_distrib() > 1. So, if you didn't spawn subprocesses explicitly in application code, it might fall back to local training?

github.com fastai/fastai/blob/3c6dca627c1f3812d58b0447bc9a45dd866c601f/fastai/distributed.py#L143-L159

```python
@patch
@contextmanager
def distrib_ctx(self: Learner, cuda_id=None, sync_bn=True):
    "A context manager to adapt a learner to train in distributed data parallel mode."
    # Figure out the GPU to use from rank. Create a dpg if none exists yet.
    if cuda_id is None: cuda_id = rank_distrib()
    if not torch.distributed.is_initialized():
        setup_distrib(cuda_id)
        cleanup_dpg = torch.distributed.is_initialized()
    else: cleanup_dpg = False
    # Adapt self to DistributedDataParallel, yield, and cleanup afterwards.
    try:
        if num_distrib() > 1: self.to_distributed(cuda_id, sync_bn)
        yield self
    finally:
        self.detach_distributed()
        if cleanup_dpg: teardown_distrib()
```

In case this is helpful, here is a quick example with a brief explanation of how DDP works. And this section tries to explain the differences between DP and DDP.
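To make the batch-size point concrete, here is a minimal, CPU-runnable sketch (gloo backend, dummy data, nothing fastai-specific) of the raw DDP pattern described above: one process per replica, one DDP instance and one DataLoader per process, with the per-process batch size set to the global batch size divided by the world size:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def worker(rank, world_size, global_batch_size):
    dist.init_process_group("gloo", rank=rank, world_size=world_size,
                            init_method="tcp://127.0.0.1:29500")
    torch.manual_seed(0)  # same dummy dataset on every rank
    dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    model = DDP(nn.Linear(10, 1))  # one DDP instance per process
    # DistributedSampler gives each rank a disjoint shard of the dataset,
    # so shrink the per-process batch size accordingly.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=global_batch_size // world_size,
                        sampler=sampler)
    loss_fn = nn.MSELoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards every epoch
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # DDP averages gradients here
            opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size, 64), nprocs=world_size, join=True)
```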
st178083
Yeah, you are correct; I just found out I was not implementing DDP correctly. I knew there were extra steps I needed to do to get DDP working, so I was hoping that DP would speed things up, but with the large model that doesn't seem to be the case. I found this example in fastai which uses the distributed context, so I am working on my script to add in the correct functionality. I will review the links you provided as well; it seems I need to get into the docs a little more. I appreciate your help!
st178084
I'm trying to use numba and PyTorch distributed simultaneously. When I create a new tensor on the GPU, I get a CUDA initialization error. Here is my code:

```python
import torch
import numpy as np
import torch.distributed as dist
from torch.multiprocessing import Process
from numba import cuda
import os

def func(idx):
    a = torch.randn([10, 10, 10], device='cuda')
    return

def init_process(idx, size, fn, backend='NCCL'):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=idx, world_size=size)
    fn(idx)
    return

if __name__ == "__main__":
    processes = []
    for idx in range(2):
        p = Process(target=init_process, args=(idx, 2, func))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
```

Here is the error report:

```
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=47 error=3 : initialization error
Process Process-2:
Traceback (most recent call last):
  File "/home/sss/anaconda3/envs/torch_new/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/sss/anaconda3/envs/torch_new/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/sss/Desktop/Experiment/test.py", line 14, in init_process
    fn(idx)
  File "/home/sss/Desktop/Experiment/test.py", line 8, in func
    a = torch.randn([10, 10, 10], device='cuda')
  File "/home/sss/anaconda3/envs/torch_new/lib/python3.8/site-packages/torch/cuda/__init__.py", line 190, in _lazy_init
    torch._C._cuda_init()
RuntimeError: cuda runtime error (3) : initialization error at /pytorch/aten/src/THC/THCGeneral.cpp:47
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=47 error=3 : initialization error
Process Process-1:
Traceback (most recent call last):
  File "/home/sss/anaconda3/envs/torch_new/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/sss/anaconda3/envs/torch_new/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/sss/Desktop/Experiment/test.py", line 14, in init_process
    fn(idx)
  File "/home/sss/Desktop/Experiment/test.py", line 8, in func
    a = torch.randn([10, 10, 10], device='cuda')
  File "/home/sss/anaconda3/envs/torch_new/lib/python3.8/site-packages/torch/cuda/__init__.py", line 190, in _lazy_init
    torch._C._cuda_init()
RuntimeError: cuda runtime error (3) : initialization error at /pytorch/aten/src/THC/THCGeneral.cpp:47
```

When I comment out "from numba import cuda", there is no error reported. Since I really need numba and CUDA, I can't comment it out. Can anyone solve my problem?
st178085
Solved by tom in post #2 You could either see which version of CUDA numba uses and check whether PyTorch offers that, too (or vice versa), or you could self-compile one or the other, or both.
st178086
You could either see which version of CUDA numba uses and check whether PyTorch offers that, too (or vice versa), or you could self-compile one or the other, or both.
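A quick way to compare the CUDA versions the two libraries see (the numba call is from its CUDA runtime utilities and may differ slightly across numba releases, so treat this as a sketch):

```python
import torch
from numba import cuda

print("PyTorch built against CUDA:", torch.version.cuda)          # e.g. '10.2'
print("numba CUDA runtime version:", cuda.runtime.get_version())  # e.g. (10, 2)
```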
st178087
Thank you for the reply. I compiled a new numba for pytorch. It worked ! Thank you !
st178088
I want to run my model on dataset and store all embeddings using DistributedDataParallel. I created dataloader with DistributedSampler and now want to store all embeddings in the form: (image_name, embedding) And after that I want to save them as csv or pickle file. Will it be correct to create a global list and store data there or will there be conflicts with writing to the list?
st178089
RocketFlash: Will it be correct to create a global list and store data there or will there be conflicts with writing to the list? By “global list”, you mean Python global variable? And this will create a global list per process? Who will be writing to the global list? BTW, any reason for not using nn.Embedding 1?
st178090
Yes, by “global list” I mean global python variable. I am using mp.spawn to start distributed training, so I thought that the variables inside the executable file in this case are visible to all ranks. But after executing the code, nothing was written into the dict. What are the benefits of using nn.Embedding? I want to store image_name and embeddings.
st178091
RocketFlash: But after executing the code, nothing was written into the dict.

Right, global vars are per-process, so each spawned child process will have a different global var.

What are the benefits of using nn.Embedding?

One benefit is that you can then run lookup ops on the GPU. And if you need to let the training process update the embedding as well, using nn.Embedding will also make it easier.

I want to store image_name and embeddings.

If you would like to pass those data back to the main process, one option is to use the multiprocessing SimpleQueue. See the example below.

github.com pytorch/pytorch/blob/cb26661fe4faf26386703180a9045e6ac6d157df/test/test_multiprocessing.py#L580-L600

```python
def test_event_multiprocess(self):
    event = torch.cuda.Event(enable_timing=False, interprocess=True)
    self.assertTrue(event.query())

    ctx = mp.get_context('spawn')
    p2c = ctx.SimpleQueue()
    c2p = ctx.SimpleQueue()
    p = ctx.Process(
        target=TestMultiprocessing._test_event_multiprocess_child,
        args=(event, p2c, c2p))
    p.start()

    c2p.get()  # wait for until child process is ready
    torch.cuda._sleep(50000000)  # spin for about 50 ms
    event.record()
    p2c.put(0)  # notify child event is recorded

    self.assertFalse(event.query())
    c2p.get()  # wait for synchronization in child
    self.assertTrue(event.query())
    p.join()
```

I am trying to understand this requirement. In your application, is it like each subprocess will produce some image embedding independently and concurrently, and then you wanna save those?
st178092
mrshenli: In your application, is it like each subprocess will produce some image embedding independently and concurrently, and then you wanna save those?

Yes, each subprocess generates embeddings from dataloader batches. I want to process all my data (generate the embeddings) as fast as possible, which is why I want to use DistributedDataParallel: process, and after that save everything in one file.
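A minimal sketch of the SimpleQueue approach mentioned above, with the model/dataloader part faked so it runs as-is; in the real application each worker would build its DDP model and its DistributedSampler-backed loader and put its (image_name, embedding) pairs on the queue:

```python
import torch
import torch.multiprocessing as mp

def worker(rank, world_size, result_queue):
    # Real code would init_process_group, build the model and the per-rank
    # DataLoader shard here; we just fabricate some (name, embedding) pairs.
    local_results = [(f"img_{rank}_{i}.jpg", torch.randn(8)) for i in range(4)]
    result_queue.put(local_results)  # ship the pairs back to the parent

if __name__ == "__main__":
    world_size = 2
    ctx = mp.get_context("spawn")
    queue = ctx.SimpleQueue()
    procs = [ctx.Process(target=worker, args=(r, world_size, queue))
             for r in range(world_size)]
    for p in procs:
        p.start()
    # drain one result list per worker before joining
    all_results = [pair for _ in procs for pair in queue.get()]
    for p in procs:
        p.join()
    print(len(all_results))  # 8 pairs, ready to be written to csv/pickle
```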
st178093
My model has a few LSTMs which run out of CUDA memory when run on large sequences with one GPU, so I shifted a few components of the model to another GPU. I tried two things with Apex AMP:

1. Move the model components to another GPU before invoking amp.initialize. In this case, I get NaNs soon after the first backpropagation.
2. First invoke amp.initialize, and then move the model components to another GPU. In this case, it's like the model backpropagation runs on a single GPU: it runs out of CUDA memory.

The model training runs fine without Apex, so I suppose I am missing some step where the loss is backpropagated on both GPUs. I looked through the documentation of Apex; however, it only talks about data parallelism, and not model parallelism. Any ideas?
st178094
Solved by mcarilli in post #5 I don’t think the apex amp API supports this without complex/undocumented hacks. Apex amp is in maintenance mode now, no new features will be added. However, torch.cuda.amp is designed to support model parallelism (ie different layers on different device) out of the box. Please consider upgrading…
st178095
Hey @Caesar, have you tried the native AMP in PyTorch? I haven’t tried that with model parallel yet, but if it does not work, that will be a bug that we need to fix. https://pytorch.org/docs/stable/amp.html 6 https://pytorch.org/docs/stable/notes/amp_examples.html#amp-examples 1
st178096
Thanks for your response. I am constrained to use an older version of PyTorch which does not support AMP natively, so I am using NVIDIA Apex: https://github.com/NVIDIA/apex
st178097
I don’t think the apex amp API supports this without complex/undocumented hacks. Apex amp is in maintenance mode now, no new features will be added. However, torch.cuda.amp is designed to support model parallelism (ie different layers on different device) out of the box. Please consider upgrading if at all possible.
st178098
There’s a multi-GPU torch.cuda.amp.GradScaler test case 6 that ensures ordinary GradScaler usage supports networks with layers on different devices. torch.cuda.amp.autocast locally enables/disables autocast for all devices used by the invoking thread. (However, the autocast state is thread local, so if you spawn a thread to control each device, you must re-invoke autocast in the side thread(s). This affects usage with torch.nn.DataParallel and torch.nn.parallel.DistributedDataParallel with multiple GPUs per process 4.)
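As a concrete illustration of the thread-local point (a minimal sketch, not taken verbatim from the linked docs): with torch.nn.DataParallel, decorating the model's forward with autocast, or opening an autocast context inside forward, ensures the side threads that run the replicas also execute in mixed precision:

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)

    @autocast()  # re-enables autocast inside DataParallel's side threads
    def forward(self, x):
        return self.fc(x)

if torch.cuda.is_available():
    model = nn.DataParallel(MyModel().cuda())
    with autocast():  # covers the main thread
        out = model(torch.randn(16, 8, device="cuda"))
    print(out.dtype)  # torch.float16, since nn.Linear ran under autocast
```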
st178099
I have a question for the PyTorch development team. How is the memory consumed by queues in PyTorch implementation of multi-processing libraries managed? If you can point me to the relevant piece of code (if available) and/or provide a textual description, I would appreciate it.
st178100
@VitalyFedyunin Could you help out here since its a torch.multiprocessing question?
st178101
Please check:

github.com/pytorch/pytorch/blob/e75fb4356b752097d093c7013ba85c9eb82961ef/torch/multiprocessing/reductions.py

```python
import torch
import torch.utils.hooks
from torch._namedtensor_internals import check_serializing_named_tensor
import os
import threading
import multiprocessing
from multiprocessing.util import register_after_fork
from multiprocessing.reduction import ForkingPickler

try:
    # Early load resource_sharer to prevent a partially initialized instance
    # from being inherited in a forked child process. The reduce_storage method
    # requires this module indirectly through DupFd(). The built-in mp.Queue
    # class pickles arguments in a background thread which may overlap with the
    # fork.
    import multiprocessing.resource_sharer
except ImportError:
    pass


class StorageWeakRef(object):
    # ... (file truncated in the original quote)
```

and

github.com/pytorch/pytorch/blob/cca247635c6edb323176eeac7a18d3e9ab71c558/torch/multiprocessing/queue.py

```python
import io
import multiprocessing
import multiprocessing.queues
from multiprocessing.reduction import ForkingPickler
import pickle


class ConnectionWrapper(object):
    """Proxy class for _multiprocessing.Connection which uses ForkingPickler to
    serialize objects"""

    def __init__(self, conn):
        self.conn = conn

    def send(self, obj):
        buf = io.BytesIO()
        ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(obj)
        self.send_bytes(buf.getvalue())

    def recv(self):
        # ... (file truncated in the original quote)
```

The methods are different for CPU and GPU tensors; generally speaking, we pass storage descriptors and do usage ref-counting.
st178102
Hi, I have done some experiments on multi-GPU training, and I feel a bit confused about the relationship between GPU status and training speed. My original expectation was that the slowest GPU would be the bottleneck for training speed, since local parameters need to be synced in each step. But my experiment results prove I was wrong. Can anyone explain why the busiest GPU doesn't slow down training speed as expected? I'm using all_reduce to sync parameters:

```python
for param in model.parameters():
    if param.requires_grad and param.grad is not None:
        torch.distributed.all_reduce(param.grad.data, op=torch.distributed.ReduceOp.SUM)
```

I measured GPU-Util, and my guess is that higher GPU-Util means the GPU is busier and should be slower for training the same size of batches. More experiment results for training the same dataset:

- test 1: 4 GPUs with about 95% GPU-Util - training time is 35 sec
- test 2: 2 GPUs with 0% GPU-Util, 2 GPUs with 90% GPU-Util - training time is 18 sec
- test 3: 3 GPUs with 0% GPU-Util, 1 GPU with 97% GPU-Util - training time is 15 sec
- test 4: 4 GPUs with about 0% GPU-Util - training time is 10 sec

If the slowest GPU were the bottleneck, then the training times of test 2 and test 3 should be similar to test 1. But how should I understand this result? Please also let me know if you notice any mistake in my experiment. Thanks.
st178103
One reason might be that a CUDA GPU shows 100% utilization when running NCCL collective communications, even if it is actually blocked waiting for other peers to join and doing nothing. So the GPU utilization number cannot faithfully represent how busy a GPU is.
st178104
@mrshenli, thanks for reply. Then do you know what could be a better way to check a GPU’s status? For example, to compare training speed on each GPU when using multi-gpu training?
st178105
One option might be using nvprof and then visualize the result. It will show time consumed by different comp and comm ops. See the following links: https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/ 2 https://developer.nvidia.com/blog/cuda-pro-tip-nvprof-your-handy-universal-gpu-profiler/ 1 We are also working on extending autograd profiler to work with DDP, but we don’t have a target date for it yet.
st178106
Hi, what is the behavior of using DistributedDataParallel without feeding the training dataset through a DistributedSampler? Will it mean that the models are deployed on multiple GPUs, but they end up working on the same data? I am sort of confused about the behavior. Would be good to have some clarification. Thanks
st178107
Solved by mrshenli in post #2 Hey @Trinayan_Baruah Quote a recent discussion: Comparison Data Parallel Distributed data parallel Please also see this brief note and this full paper What is the behavior of using DistributedDataParallel without running the training dataset using the DistributedSampler? Will it mean that the mo…
st178108
Hey @Trinayan_Baruah, quoting a recent discussion: Comparison Data Parallel vs. Distributed Data Parallel. Please also see this brief note and this full paper.

What is the behavior of using DistributedDataParallel without running the training dataset using the DistributedSampler? Will it mean that the models are deployed on multiple GPUs, but they end up working on the same data?

Yep, if you don't use DistributedSampler or manually shard input data for each process, they will be working on the same data. In this case, every DDP instance in each process will end up with the same gradient in every iteration. As a result, local gradients and synchronized global gradients will be the same, making DDP useless.
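A tiny sketch of what DistributedSampler actually does (rank and num_replicas are passed explicitly here only to illustrate the sharding; inside a DDP process they are picked up from the process group):

```python
import torch
from torch.utils.data import TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(8))
world_size = 2
for rank in range(world_size):
    sampler = DistributedSampler(dataset, num_replicas=world_size,
                                 rank=rank, shuffle=False)
    print(rank, list(sampler))  # rank 0 -> [0, 2, 4, 6], rank 1 -> [1, 3, 5, 7]
```

Without it, every rank would iterate indices 0..7, i.e. the same data.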
st178109
How do I generate a dynamic number of branches? My code uses a list to store the branches, but I got:

RuntimeError: Caught RuntimeError in replica 1 on device 1. RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution)

I think it is because the weights for the branches are not copied to the other GPUs.

```python
class ResNetONE(nn.Module):
    def __init__(self, depth, num_classes=1000, num_branches=3, block_name='BasicBlock'):
        super(ResNetONE, self).__init__()
        # Model type specifies number of layers for CIFAR-10 model
        if block_name.lower() == 'basicblock':
            assert (depth - 2) % 6 == 0, 'When use basicblock, depth should be 6n+2, e.g. 20, 32, 44, 56, 110, 1202'
            n = (depth - 2) // 6
            block = BasicBlock
        elif block_name.lower() == 'bottleneck':
            assert (depth - 2) % 9 == 0, 'When use bottleneck, depth should be 9n + 2, e.g. 20, 29, 47, 56, 110, 1199'
            n = (depth - 2) // 9
            block = Bottleneck
        else:
            raise ValueError('block_name should be Basicblock or Bottleneck')

        self.inplanes = 16
        self.num_branches = num_branches
        self.num_classes = num_classes
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self._make_layer(block, 16, n)
        self.layer2 = self._make_layer(block, 32, n, stride=2)
        self.layer3 = self._make_layer(block, 64, n, stride=2)
        self.avgpool = nn.AvgPool2d(8)
        self.fc = nn.Linear(64 * block.expansion, self.num_classes)
        self.branches = self._make_branches(self.layer3, self.avgpool)
        self.gate = self._make_gate()

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                # "normal_" is defined by torch._C._TensorBase
                # https://pytorch.org/docs/stable/tensors.html#torch.Tensor.normal_
                # https://zhuanlan.zhihu.com/p/100937718
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion)
            )
        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))
        return nn.Sequential(*layers)

    def _make_branches(self, *layers):
        branches = []
        for i in range(self.num_branches):
            branch = nn.Sequential(*layers)
            branches.append(branch)
        return branches
```
st178110
Hey @Erica_Zheng, are you using DataParallel, DistributedDataParallel or torch.distributed? And can you include a min repro? Looks like the above example does not include the forward() function or how the model forward pass was launched?
st178111
@mrshenli Thank you, Shen! No RuntimeError now, by using branches = nn.ModuleList(branches) instead of a Python list. The corresponding file is located in 'models/resnet.py --> ResNetONE()'. More details are in the mini repo. However, all the branches produce the same results.
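For anyone hitting the same error, a minimal sketch of why a plain Python list fails: submodules kept in a list are not registered, so .cuda() and DataParallel's replication never see their parameters, while nn.ModuleList registers them:

```python
import torch.nn as nn

class WithList(nn.Module):
    def __init__(self):
        super().__init__()
        self.branches = [nn.Linear(4, 4) for _ in range(3)]  # NOT registered

class WithModuleList(nn.Module):
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(nn.Linear(4, 4) for _ in range(3))  # registered

print(len(list(WithList().parameters())))        # 0 -> replication misses these weights
print(len(list(WithModuleList().parameters())))  # 6 (weight + bias per branch)
```

On the identical outputs: judging only from the snippet above (an assumption, not verified against the repo), _make_branches wraps the same layer3/avgpool module objects into every nn.Sequential, so the branches share parameters; deep-copying the layers per branch would make them independent.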
st178112
I’m currently trying to run an NLP model using DistributedDataParallel and I’ve been receiving the following error if I use more than one worker for DataLoader (this error appears for each worker process): Traceback (most recent call last): File "<string>", line 1 in <module> File "/opt/conda/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(df) File "/opt/conda/lib/python3.6/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) File "/opt/conda/lib/site-packages/torch/nn/parallel/distributed.py", line 396, in __setstate__ self.process_group = _get_default_group() File "/opt/conda/lib/site-packages/torch/distributed/distributed_c10d.py", line 286, in _get_default_group raise RuntimeError("Default process group has not been initialized, " Default process group has not been initialized, please make sure to call init_process_group. In main() that I call after torch.multiprocessing.spawn(), I use the following call: dist.init_process_group("nccl", rank=rank, world_size=args.gpu, init_method="file:///app/tmp/sharedfile") I don’t receive this error if I set the number of workers to 0. I still receive this error if I change init_method to env:// (and I have the port and address variables set). I would like this to work in file mode though, since I can’t change the size of /dev/shm. The error itself seems to trigger when I start iterating through dataloader for my epoch (which means I don’t begin a single training loop before the error). I’m using 4 GPUs on a single node centos docker image with pytorch 1.4.0 and python 3.6.9. Let me know if you need further info, appreciate any tips!
st178113
Solved by claracurrier in post #7 @mrshenli in the course of building a barebones repro, I discovered the source of the error: I was passing something unpicklable into my Dataset instance (I was passing my model instead of the args by accident) such that it looked like this: def main(rank, args): dist.init_process_group(...) du…
st178114
Hey @claracurrier, could you please share a code snippet? Did you do sth like the following? If so, the default process group is a per-process state, so you will need to call init_process_group in the beginning of the spawn target function (i.e., target_fn), not after spawn in main function. def main(): spawn(target_fn, args=(...)) init_process_group(....)
st178115
Sorry, here’s more code, I’m following guides from tutorials and the documentation. def main(rank, args): dist.init_process_group(...) # ... load data ... train_sampler = torch.utils.data.distributed.DistributedSampler( train_dataset.lengths(), num_replicas = args.gpus, rank = rank, shuffle = True ) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size = args.batch_size, num_workers = args.data_workers, collate_fn = mybatchifyfunc, pin_memory = True, drop_last = True, shuffle = False ) # ... machine learning ... if __name__ == "__main__": # ... set up logging, parse args... torch.multiprocessing.spawn(main, args=(args,), nprocs=args.gpus, join=True)
st178116
@mrshenli Sorry to bump, but I’m still not able to figure out the error - I’ve been cross-checking with examples but I’m following them as far as I can see. If the cause of the error is outside of distributed, I’m not able to tell because the error is thrown on spawn.
st178117
Hey @claracurrier The above code looks correct to me. Is it possible to share a repro so that I can help debug locally? Regarding the original error message you posted, looks like the program is trying to pass a DistributedDataParallel object through the spawn args, and hence the unpickle triggered the error. What’s in the args=(args,) when you call spawn? File "/opt/conda/lib/python3.6/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) File "/opt/conda/lib/site-packages/torch/nn/parallel/distributed.py", line 396, in __setstate__ self.process_group = _get_default_group()
st178118
Thanks for the quick reply @mrshenli For args, it’s a Namespace object from argparse that contains all my ML parameters. It is fairly long, but it only contains ints, floats, str, lists, and bools. The DistributedDataParallel object is not passed. Unfortunately I’m working on a remote instance that makes copying and pasting difficult so it’ll take a little while to get a minimal reproduction. I’ll post here when I have it.
st178119
@mrshenli in the course of building a barebones repro, I discovered the source of the error: I was passing something unpicklable into my Dataset instance (I was passing my model instead of the args by accident) such that it looked like this: def main(rank, args): dist.init_process_group(...) dummy_train = [] model = dummyModel(args) model.parallelize(rank) train_dataset = MyNLPDataset(dummy_train, args) # replace args with something unpicklable to trigger error train_sampler = torch.utils.data.distributed.DistributedSampler(...) train_loader = torch.utils.data.DataLoader(train_dataset ...) for epoch in range(args.num_epochs): for training_ex in train_loader: output = model.update(training_ex) Where the dataset class was: class MyNLPDataset(torch.utils.data.Dataset): def __init__(self, examples, args): self.examples = examples self.args = args # <-- previously this was saving a copy of the model object # ... overrided methods ... I think this could’ve been easier if I had a better error message - your comment about something being unpicklable helped me narrow my search. The error ultimately didn’t involve the process group. I’m now getting a new NCCL backend error that doesn’t affect GLOO, so I’m back to hunting for new issues. Thanks so much for your help!
st178120
Hi. I have a question regarding data parallel (DP) and distributed data parallel (DDP). I have read many articles about DP and understand that the gradient is reduced automatically. However, I could not find an article explaining whether or not the loss is also reduced. For example, I believe the following code is the typical main routine of a DP program:

```python
outputs = model(inputs)
loss = criterion(outputs, targets)
loss.backward()
```

I understand that the split inputs and the model are copied onto each GPU, a forward pass is concurrently computed to yield the loss, then a backward pass is also concurrently computed, and finally all gradients are reduced to one. Is the loss obtained by the above code averaged over all the GPUs, i.e. exactly the same as a loss computed by a serial program? Or is the loss a value from just one GPU (gpu0)? I need to plot a loss chart, so I wonder if the loss is averaged over the GPUs. The same question applies to outputs. I also need to compute training accuracy using outputs in the above code. Does it hold the results of all the GPUs? If so, in what structure of a tensor are they stored? Regarding DDP, the above code is written in each process running on its respective GPU. In this case, how can I access the values on all the GPUs to plot the averaged loss and total accuracy? I appreciate any sources of information. Thank you in advance.
st178121
TT_YY: However, I could not find an article explaining whether or not loss is also reduced.

No, the loss is not reduced, because there is only one loss tensor with DP. Besides, the gradients are actually accumulated automatically by the autograd engine. As DP is single-process multi-thread, all threads share the same autograd engine, and hence ops on different threads will be added to the same autograd graph.

Is the loss obtained by above code averaged over all the GPUs, which is exactly same as a loss computed by a serial program? Or, is the loss a value from just one GPU (gpu0)? I need to plot a loss chart, so I wonder if the loss is averaged over the GPUs.

DP's forward function will gather all outputs to cuda:0 (by default) and then return the gathered result. So, in the code above, outputs is on one GPU and hence loss is also on one GPU.

The same question applies to outputs. I also need to compute training accuracy using outputs in above code. Does it hold the results of all the GPUs? If so, in what structure of a tensor are they stored?

Below is DP's forward function. The outputs var on line 161 holds the output on different GPUs, but the gather function on line 162 copies them to one GPU.

github.com pytorch/pytorch/blob/fa6b34b54c731938327c8e30e08b287a10b86b0a/torch/nn/parallel/data_parallel.py#L147-L162

```python
def forward(self, *inputs, **kwargs):
    if not self.device_ids:
        return self.module(*inputs, **kwargs)

    for t in chain(self.module.parameters(), self.module.buffers()):
        if t.device != self.src_device_obj:
            raise RuntimeError("module must have its parameters and buffers "
                               "on device {} (device_ids[0]) but found one of "
                               "them on device: {}".format(self.src_device_obj, t.device))

    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    if len(self.device_ids) == 1:
        return self.module(*inputs[0], **kwargs[0])
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    outputs = self.parallel_apply(replicas, inputs, kwargs)
    return self.gather(outputs, self.output_device)
```

If you want to access the individual outputs on different GPUs, you can do so in the forward function of your model (the one you passed to the DP ctor). E.g.,

```python
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc = nn.Linear(10, 10)

    def forward(self, input):
        output = self.fc(input)
        print("per-GPU output ", output)
        return output

dp = DataParallel(MyModel())
outputs = dp(inputs)  # this outputs is on one GPU
```

Regarding DDP, above codes are written in each process running on respective GPU. In this case, how can I access the values on all the GPUs to plot the averaged loss and total accuracy?

You can use gather or all_gather or all_reduce to communicate the loss to one process and print it.

BTW, could you please add a "distributed" tag to distributed-training related questions? People working on distributed training monitor that tag and can get back to you promptly.
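For the DDP logging part, a minimal sketch of averaging the per-rank loss with all_reduce so one rank can plot it (global_average_loss is a hypothetical helper; it assumes the default process group is already initialized):

```python
import torch
import torch.distributed as dist

def global_average_loss(local_loss: torch.Tensor) -> torch.Tensor:
    # Detach and clone so the tensor used for backward() is not modified.
    reduced = local_loss.detach().clone()
    dist.all_reduce(reduced, op=dist.ReduceOp.SUM)  # sum across all ranks
    reduced /= dist.get_world_size()                # then average
    return reduced

# usage inside the training loop, after computing `loss` on each rank
# (every rank must call it, since all_reduce is a collective):
#   avg = global_average_loss(loss)
#   if dist.get_rank() == 0:
#       print("avg loss", avg.item())
```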
st178122
Thank you Shen Li for your detail explanation. It is very helpful, and now I understand what’s going on in DP and DDP. I modified your codes to see the order of data so that I can make sure that the output is correctly compared to corresponding labels in loss function. import torch import torch.nn as nn device = "cuda:0" class Model(nn.Module): def __init__(self): super(Model, self).__init__() # forward() outputs the input as it is. def forward(self, input): output = input print("per-GPU output ", output) return output model = Model() model = nn.DataParallel(model) model.to(device) # input is a sequence of integer in 2D shape. input = torch.arange(20 * 5).reshape(20, 5) input = input.to(device) print("total input ", input) output = model(input) print("total output ", output) I was not sure about the “tag” that you pointed out, but I added “distributed” to “Categories”. I still have a related question about DDP. In my understanding, the gradient is a vector that points a direction where the loss increases the most. I learned from your explanation that we don’t have the “total” loss until we “gather”, “all_gather”, or “all_reduce” the loss computed in each GPU. If we use a loss in each process instead of total loss to compute each gradient and average all the gradients, will it be a correct “total” gradient of the total loss? In other words, I wonder if it is mathematically correct that averaging all gradients that increase each of respective loss produces a total gradient that increases the averaged loss. If it is not correct, I think it means that we need to do all_reduce of the loss before we do loss.backward in order to hand total loss information to each process for computing correct gradients. Is my thinking correct? Thank you again for your kind assistance.
st178123
TT_YY: In my understanding, the gradient is a vector that points a direction where the loss increases the most. I learned from your explanation that we don’t have the “total” loss until we “gather”, “all_gather”, or “all_reduce” the loss computed in each GPU. If we use a loss in each process instead of total loss to compute each gradient and average all the gradients, will it be a correct “total” gradient of the total loss? Good question. Instead of communicating loss, DDP communicates gradients. So the loss is local to every process, but after the backward pass, the gradient is globally averaged, so that all processes will see the same gradient. This 51 is brief explanation, and this 16 is a full paper describing the algorithm. If it is not correct, I think it means that we need to do all_reduce of the loss before we do loss.backward in order to hand total loss information to each process for computing correct gradients. Is my thinking correct? The reason we didn’t communicating loss is because that’s not sufficient. When computing gradients, we need both loss and activation, and the activation depends on local inputs. So we need to either communicate loss + activation or gradients. DDP does the later.
st178124
Thank you again. Maybe you have fully answered my question, but I still feel that my point is missing. As I understand, a gradient is computed by the back propagation using the chain rule and first derivative of functions in a model network. Also, as you mentioned, we need the function vales within the network, as well as the loss. Since the method existed far before the parallelism era, the back-prop naturally started from a single “total” or “global” loss in the single processor platform. Therefore, in that case, we use a loss readily averaged over a batch of input. On the other hand, in the multi-GPU platform, a batch input is farther divided into smaller batches each of which is used to produce a “local” loss by a GPU. In that case, when computing the local gradient, the functions, inputs, and function values are exactly same as the case of the single processor platform. Only difference is using the local loss instead of the global loss. My question is; does averaging the local gradients computed from the local losses produce exactly the same one as the global gradient computed from the global loss? If the answer is no, I think that we need to average the local losses to produce a global loss and hand it to all the GPUs to compute correct local gradients that are averaged to produce a correct global gradient. This might be achieved by performing all_reduce() over the local losses before doing loss.backward() on each GPU. The answer could be yes, but I don’t know the mathematical explanation for it. That is my point. If I misunderstand something, please point it out. Thank you.
st178125
In that case, when computing the local gradient, the functions, inputs, and function values are exactly same as the case of the single processor platform. This is actually not true. Say we have a function f(x) = w * x, where w is the weight. Then when you compute gradient (i.e., dw), you will need both df (from loss, which depends on local input) and x (from local input or intermediate local output, which also depend on local input). So, if not communicating gradients, we need to communicate both the final loss and the intermediate outputs of all layers. TT_YY: does averaging the local gradients computed from the local losses produce exactly the same one as the global gradient computed from the global loss? No, this is not guaranteed to be the same, but due to a different reason. If 1) the loss function satisfies the condition loss_fn([x1, x2]) == (loss_fn(x1) + loss_fn(x2)) / 2 and 2) batch size on all processes are the same, then average gradients should be correct. Otherwise, average won’t produce the same result. One example would be, if we use .sum() as the loss function, we should just sum instead of averaging the gradient. If the answer is no, I think that we need to average the local losses to produce a global loss and hand it to all the GPUs to compute correct local gradients that are averaged to produce a correct global gradient. This might be achieved by performing all_reduce() over the local losses before doing loss.backward() on each GPU. I might miss sth. If we do the above, it means we compute the gradients using global loss and local activation (i.e., global df and local x in the f(x)=w*x example above). In this case, what does this gradient mean?
st178126
Thank you for your further explanation. So, if not communicating gradients, we need to communicate both the final loss and the intermediate outputs of all layers. Yes, I agree that we must communicate gradients to have a global gradient. My question is about relationship between the global loss and the local gradients, not about communicating losses instead of gradients. If 1) the loss function satisfies the condition loss_fn([x1, x2]) == (loss_fn(x1) + loss_fn(x2)) / 2 and 2) batch size on all processes are the same, then average gradients should be correct. I understand that, in a parallel process, the losses are locally averaged on a GPU, and the resulting losses can be globally averaged. That is the reason why the condition you explained must hold to have the “average of average” being equal to the global average. My point is based on that a parallel process just does the same thing in parallel as a serial process does, and both of them are supposed to produce identical results. What I am wondering about is that the backward path of the computational graph in a DDP process starts from a local loss, while it starts from a global loss in the serial process, and they are supposed to produce the same result. From your former explanation, I learned that the backward path starts from the global loss in DP, but not DDP. So, I believe that DP will produce the same results as the serial process does, but I wonder about DDP. One thing I have come across is that, if the global loss is computed by sum() / batch_size, the backward path might start from 1 and dividing it by batch_size. If this is true, the only difference between starting from the global loss and the local loss should be difference between dividing by the global batch size and the local per-GPU batch size. So, I suspect that the gradients in those cases have the same direction but different sizes. In particular, the gradient from DDP might be n_gpu times larger than DP, where n_gpu is the number of GPUs. Even if this is true, that will not be a big problem, but DDP may require a different learning rate from DP. I just thought that way, but it needs a confirmation. Is this correct? I appreciate your assistance. Thank you.
st178127
TT_YY: So, I suspect that the gradients in those cases have the same direction but different sizes. Yep, this is true for the sum() / batch_size case you mentioned, on the condition that all processes are using the same batch size. Here is the test to verify that: github.com pytorch/pytorch/blob/97d594b9f72e7c7baf877f2394d8a5aaeda3140d/test/distributed/test_distributed.py#L2033-L2072 32 def _test_DistributedDataParallel(self, gpu_subset, rank, output_device=None): # Run a simple end to end DDP model, use result of single node model # as baseline # cpu training setup model = DDP_NET # single gpu training setup model_gpu = copy.deepcopy(model) model_gpu.cuda(gpu_subset[0]) # DDP training setup model_DDP = copy.deepcopy(model) model_DDP.cuda(gpu_subset[0]) model_DDP = nn.parallel.DistributedDataParallel( model_DDP, device_ids=gpu_subset ) # test serializable/unserializable with tempfile.NamedTemporaryFile() as tmp: This file has been truncated. show original In particular, the gradient from DDP might be n_gpu times larger than DP, where n_gpu is the number of GPUs. Even if this is true, that will not be a big problem, but DDP may require a different learning rate from DP. I just thought that way, but it needs a confirmation. DDP computes the average of all gradients from all processes, so the gradient should be the same value as local training for the sum() / batch_size case. What might affect the learning rate is the batch size you configured for each DDP process. If each process is using the same batch_size as local training, it means that in each iteration the DDP gang collective process world_size * batch_size input data, so you might be more confident on the result gradient compared to local training and might need to set the learning rate to a larger value. But this is not guaranteed. See this discussion: Should we split batch_size according to ngpu_per_node when DistributedDataparallel 24
st178128
Thank you, Shen Li. DDP computes the average of all gradients from all processes, so the gradient should be the same value as local training for the sum() / batch_size case. I interpret it as that the difference is taken care of when computing the global gradient from the local gradients, and we will see no difference from the serial cases. What might affect the learning rate is the batch size you configured for each DDP process. I think that whether or not we expand the global batch size is a choice between computation speed per iteration and algorithmic efficiency of total convergence, with a larger learning rate that you mentioned. Besides, we can make use of the GPU memories if we choose a large batch size. I feel that a larger batch brings about faster convergence even in the wall clock time bases, if we can efficiently utilize the multiple GPUs. That’s what I’m trying to do. Thank you very much. I appreciate your time for this long discussion.
st178129
Hi, I’m making some modifications to MoCo, which runs pytorch multiprocessing. Running the default code leads to very consistent iteration times (between 0.1s and 0.13s). After my modification (essentially adding some optimizable conditional normalization layers as input to the ResNet) the runtime has become stochastic, a bit more than half the iterations take ~0.5-0.6s (I was expecting this increase), but some take 1.5-2s. The dist-backend is nccl and I’m using 7 GPUs, although the same issue appears when using the default 8 GPUs. I’m wondering whether this stochasticity implies some bug in my implementation and, if so, what’s the best way of debugging distributed models. Thanks!
st178130
Hey @alet, can you share the implementation of modified model? Especially the “optimizable conditional normalization layers”. Would I be correct if I assume the program is using DDP both before and after the modification?
st178131
Just like PyTorch provides options like nn.DataParallel() to efficiently make use of multiple GPUs, is there any such option for general-purpose tensor operations that might not require multiple GPUs but rather multiprocessing? For example, I have a function like:

```python
def generate_dataset(dataloader, val=False):
    f = open('new_dataset.txt', 'w')
    for x, y in tqdm(dataloader, total=len(dataloader)):
        pred = []
        for x_d in x:
            pred.append(general_tensor_operations(x_d))  # THIS ISN'T NECESSARILY A nn.Module()
        data = format_to_string_data([pred, y.item()])
        f.write(data)
    f.close()
```

Note that:
- I am making use of a torch dataloader object
- the tensor operation on x is NOT performed by a nn.Module() model
- the tensor operation doesn't work on batch data, but rather on each slice of the mini-batch
- and I write the obtained (pred, y) to a new txt file

Now the processing bottleneck occurs at general_tensor_operations(). So if I am to parallelize such a function (generate_dataset()), how do I do it? Note that if I am using multiple processes, then they should use non-overlapping subsets of the original dataloader (otherwise there will be duplicates in new_dataset.txt). A code snippet would help a ton. Thank you in advance!
st178132
If this is on the same machine, will torch.multiprocessing.queues.SimpleQueue 7 work for you? If you need across-machine communication, you can use torch.distributed.rpc 1.
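A minimal single-machine sketch of the non-overlapping-subset idea with torch.multiprocessing: each spawned worker takes every world_size-th index via Subset and writes its own shard file (the tensor op and the output format here are placeholders for general_tensor_operations and format_to_string_data):

```python
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, Subset, TensorDataset

def generate_shard(rank, world_size, dataset):
    # every world_size-th example goes to this worker, so shards never overlap
    shard = Subset(dataset, list(range(rank, len(dataset), world_size)))
    loader = DataLoader(shard, batch_size=1)
    with open(f"new_dataset_{rank}.txt", "w") as f:  # one output file per worker
        for x, y in loader:
            pred = x.sum()  # placeholder for the real tensor operations
            f.write(f"{pred.item()}\t{y.item()}\n")

if __name__ == "__main__":
    dataset = TensorDataset(torch.randn(100, 5), torch.arange(100))
    world_size = 4
    mp.spawn(generate_shard, args=(world_size, dataset), nprocs=world_size, join=True)
    # concatenate new_dataset_0.txt .. new_dataset_3.txt afterwards if a single
    # file is needed
```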
st178133
Is it possible to make CUDA_VISIBLE_DEVICES and DDP work together? I am trying to run a script on an 8-GPU server like so:

```
CUDA_VISIBLE_DEVICES=0,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=7 --use_env main.py
```

but I always run into:

RuntimeError: CUDA error: invalid device ordinal

Here is the output of nvidia-smi:

```
Tue Aug 18 15:21:16 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  On   | 00000000:04:00.0 Off |                  N/A |
| 20%   13C    P8     7W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  On   | 00000000:05:00.0 Off |                  N/A |
| 23%   18C    P8     8W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  On   | 00000000:08:00.0 Off |                  N/A |
| 23%   20C    P8     8W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 108...  On   | 00000000:09:00.0 Off |                  N/A |
| 23%   23C    P8     8W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  GeForce GTX 108...  On   | 00000000:84:00.0 Off |                  N/A |
| 23%   18C    P8    11W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  GeForce GTX 108...  On   | 00000000:85:00.0 Off |                  N/A |
| 20%   16C    P8     7W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   6  GeForce GTX 108...  On   | 00000000:88:00.0 Off |                  N/A |
| 20%   15C    P8     7W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  GeForce GTX 108...  On   | 00000000:89:00.0 Off |                  N/A |
| 23%   25C    P8     7W / 235W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

What am I missing?
st178134
Solved by Diego in post #3 Sorry, this was a mistake on my part. I had set the device_ids variable in the DDP constructor in addition to using the CUDA_VISIBLE_DEVICES variable; once I removed the former, the script runs as expected.
st178135
Hey @Diego, the launching script will launch multiple sub-processes, which might be inherit the CUDA_VISIBLE_DEVICES value you passed to the command line. A work around would be setting CUDA_VISIBLE_DEVICES in main.py before loading any cuda-related packages. Note that the recommended way to use DDP is one-process-per-device, i.e., each process should exclusively run on one GPU. If you want this, you need to set CUDA_VISIBLE_DEVICES to a different value for each subprocess. BTW, what’s the default CUDA_VISIBLE_DEVICES value in your machine? I would assume the script should be able to see all devices by default if CUDA_VISIBLE_DEVICES wasn’t set. And when the program throws RuntimeError: CUDA error: invalid device ordinal, do you know which device it tries to access?
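One way to implement that suggestion when launching with torch.distributed.launch --use_env (a sketch; PHYSICAL_GPUS and the rest of main.py are assumptions, not the poster's actual script):

```python
# top of main.py, before anything touches CUDA
import os

PHYSICAL_GPUS = ["0", "2", "3", "4", "5", "6", "7"]  # the 7 GPUs to expose
local_rank = int(os.environ.get("LOCAL_RANK", 0))    # set by the launcher with --use_env
os.environ["CUDA_VISIBLE_DEVICES"] = PHYSICAL_GPUS[local_rank]

import torch
import torch.distributed as dist

dist.init_process_group("nccl")  # MASTER_ADDR/PORT/RANK/WORLD_SIZE come from the launcher
torch.cuda.set_device(0)         # each process now sees exactly one device, index 0
```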
st178136
Sorry, this was a mistake on my part. I had set the device_ids variable in the DDP constructor in addition to using the CUDA_VISIBLE_DEVICES variable; once I removed the former, the script runs as expected.
st178137
Hi, currently I have the following train function:

```python
def train(model, dataset, sampler, criterion, optimizer, scheduler, cfg):
    model = DataParallel(model.cuda())
    loader = DataLoader(dataset, batch_size=cfg.BS, num_workers=4, sampler=sampler)
    for epoch_idx in range(cfg.EPOCHS):
        for batch, targets in loader:
            preds = model(batch)
            loss = criterion(preds, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()
```

How can I convert this code to use DistributedDataParallel? I looked in the tutorials, but they initialise model, dataset, etc. inside the train function. I can't do that, since I want the signature to remain the same and have the flexibility of defining model, dataset, etc. outside the train function. Can I just pass all that using arguments in mp.spawn?
st178138
Hey @Rizhiy You might not be able to pass the optimizer like that, because every subprocess needs its own dedicated optimizer. And I am not sure how the dataset/criterion/scheduler in the code behave in multiprocessing use cases. If it is just the model, the following might work.

def train(rank, model):
    model = DistributedDataParallel(model.to(rank), device_ids=[rank], output_device=rank)
    ...

def main():
    model.share_memory()
    mp.spawn(
        train,
        args=(model, dataset, ...),
        nprocs=world_size,
        join=True)

if __name__=="__main__":
    main()

Rank is the subprocess id, which is provided by mp.spawn as the first argument to the target function. If the reason for this is to keep the train() signature intact, is it possible to create another wrapper function to wrap train(), configure everything in the wrapper, call train in the wrapper, and use the wrapper as the spawn target?
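For the wrapper idea, a rough sketch (build_model/build_dataset and the cfg fields are made-up placeholders, and MASTER_ADDR/PORT handling is simplified):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data.distributed import DistributedSampler

def wrapper(rank, world_size, cfg):
    # per-process setup lives here, so train()'s signature stays untouched
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = build_model(cfg)                              # hypothetical helper
    model = DDP(model.to(rank), device_ids=[rank])        # assumes train() no longer wraps the model
    dataset = build_dataset(cfg)                          # hypothetical helper
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=cfg.LR, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30)

    train(model, dataset, sampler, criterion, optimizer, scheduler, cfg)

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(wrapper, args=(world_size, cfg), nprocs=world_size, join=True)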
st178139
Hi @mrshenli, the reason is that this function is part of an internal framework: other programmers should be able to set up the arguments however they require, and train() is only responsible for the loop.
st178140
IIUC, it can accept shared-memory tensors and anything that's picklable with Python multiprocessing.
st178141
I have installed PyTorch using conda, and I can directly use the NCCL backend to do distributed training. However, the internal NCCL library of PyTorch is 2.4.8. If I want to use another, manually installed NCCL library such as version 2.7.8, how can I do it? Is there any way without compiling PyTorch from source?
st178142
Hey @ayl You can export USE_SYSTEM_NCCL=1, and then compile PyTorch from source. See this discussion Torch distributed not working on two machines [nccl backend]
st178143
ayl: Is there any way without compiling pytorch from source? Thank you. Is there any way without compiling PyTorch from source?
st178144
I don’t think there is an easy/safe way to do so, as the NCCL API also changes from release to release. Even if you can dynamically link libnccl, it might not be compatible with the built libtorch.
st178145
Hi, everyone! Recently I have wanted to train a network using 2 GPUs on 1 machine (node). But I am really confused because I cannot find an example that concretely describes training, validation, saving checkpoints, and loading checkpoints. Is there any good example that could help?
st178146
Here is the overview for the distributed training tools offered by PyTorch: https://pytorch.org/tutorials/beginner/dist_overview.html If you are looking for data parallel training, you might want to start from DataParallel?
st178147
Thanks, I found the tutorial for DataParallel much easier to understand and implement. However, using torch.nn.parallel.DistributedDataParallel is much more difficult.
st178148
Yep, DataParallel indeed is an easier entry point, but is not the most efficient solution. If you are looking for faster training speed or scaling to more machines later, DDP would still be the way to go.
st178149
Thanks. I find the example code of DDP on ImageNet is not easy to imitate to fit my code. Is there a more detailed example?
st178150
Here are some more general DDP examples/tutorials:

https://github.com/pytorch/examples (a set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.)
https://pytorch.org/tutorials/intermediate/ddp_tutorial.html

There are also several example projects of varying complexity on GitHub that use DistributedDataParallel. They would be a great reference for pytorch code across a variety of domains.
st178151
Besides the link @osalpekar posted above, here is a summary of all DDP docs we have currently: https://pytorch.org/tutorials/beginner/dist_overview.html#torch-nn-parallel-distributeddataparallel
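If it helps, here is a bare-bones single-machine, two-GPU skeleton covering training, rank-0 validation/checkpointing, and the DistributedSampler (only a sketch, with a toy dataset and model standing in for real ones):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def worker(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # toy data and model; replace with your own
    dataset = TensorDataset(torch.randn(512, 32), torch.randint(0, 2, (512,)))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)
    model = DDP(torch.nn.Linear(32, 2).cuda(), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(5):
        sampler.set_epoch(epoch)           # reshuffle differently each epoch
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        if rank == 0:
            # run validation here, then save one checkpoint per epoch
            torch.save({"epoch": epoch,
                        "state_dict": model.module.state_dict(),
                        "optimizer": opt.state_dict()}, "ckpt.pth")
        dist.barrier()                     # keep ranks in step around the checkpoint

    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2, join=True)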
st178152
Problem

Recently, I have tried to use DDP. I just followed the tutorial to change my original code for DP. I accidentally stopped my DDP training, so I planned to resume the model as I did before. However, the loss becomes much higher after resuming; I think there might be some error in my code.

[two screenshots omitted]

Here are my codes.

save model

# code for resuming after validation
if args.local_rank == 0:
    tf_writer.add_scalar('acc/test_top1_best', best_prec1, epoch)
    output_best = 'Best Prec@1: %.3f\n' % (best_prec1)
    print(output_best)
    log_training.write(output_best + '\n')
    log_training.flush()
    save_checkpoint({
        'epoch': epoch + 1,
        'arch': args.arch,
        'state_dict': model.state_dict(),
        'optimizer': optimizer.state_dict(),
        'best_prec1': best_prec1,
    }, is_best)

def save_checkpoint(state, is_best):
    filename = '%s/%s/ckpt.pth.tar' % (args.root_model, args.store_name)
    torch.save(state, filename)
    if is_best:
        shutil.copyfile(filename, filename.replace('pth.tar', 'best.pth.tar'))

I don't save the scheduler because I manually set the learning rate for the optimizer. And I will set the learning rate before training.

adjust_learning_rate(optimizer, epoch, args.lr_type, args.lr_steps)

resume model

if args.resume:
    if os.path.isfile(args.resume):
        checkpoint = torch.load(args.resume, map_location=torch.device('cpu'))
        pretrained_dict = checkpoint['state_dict']
        new_state_dict = OrderedDict()
        for k, v in pretrained_dict.items():
            if '.total' not in k:
                name = k[7:]  # remove 'module.'
                # name = name.replace('.net', '')
                new_state_dict[name] = v
        model.load_state_dict(new_state_dict)
        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)
        if 'epoch' in checkpoint.keys():
            args.start_epoch = checkpoint['epoch']
            best_prec1 = checkpoint['best_prec1']
            # get_optim_policies is a function to set optimizer policies
            optimizer = torch.optim.SGD(get_optim_policies(model), lr=args.lr,
                                        momentum=args.momentum, weight_decay=args.weight_decay)
            optimizer.load_state_dict(checkpoint['optimizer'])
        print(("=> loaded checkpoint '{}' (epoch {})".format(args.evaluate, checkpoint['epoch'])))
        print(("=> best top1 '{}'".format(best_prec1)))

The above code works well when I use DP. Is there anything I missed while using DDP?
st178153
When you call save_checkpoint, is the model var a DDP instance? If yes, you might need to save model.module instead? But I don't think that's the reason for the jump in the loss. When you load the model, if you do not use DDP or DP (just a local model), is the loss after recovery as expected? I might be missing something, but it looks like in the "resume model" part, the model state is loaded to CPU and not moved to local_rank before being passed to the DDP constructor? Or is the model already on the correct device before loading the state dict?
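In code, the pattern I would expect looks roughly like this (a sketch, not your exact script; local_rank is assumed to be this process's GPU index):

import torch

def save_ckpt(ddp_model, optimizer, path):
    # unwrap DDP so the saved keys have no 'module.' prefix
    torch.save({'state_dict': ddp_model.module.state_dict(),
                'optimizer': optimizer.state_dict()}, path)

def resume(model, optimizer, path, local_rank):
    ckpt = torch.load(path, map_location='cpu')        # load on CPU first
    model.load_state_dict(ckpt['state_dict'])
    model = model.to(local_rank)                       # move to this rank's GPU before wrapping
    model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[local_rank], output_device=local_rank)
    optimizer.load_state_dict(ckpt['optimizer'])
    return model, optimizer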
st178154
When training a model with DDP, the GPU for rank 0 consumes much more memory than the others. Because of that GPU, I cannot increase the batch size for training. Is there a good way to deal with it?

[screenshot of per-GPU memory usage omitted]
st178155
Solved by jaehyung.ca in post #3 Thank for the reply @mrshenli I haven’t set explicitly device cuda:0 at any point. And even in the official DDP example code shows the same unbalanced GPU memory consumption. I solved the issue by setting torch.cuda.set_device(args.local_rank) which works the same as setting CUDA_VISIBLE_DEVICES. …
st178156
Hey @jaehyung.ca, did you intentionally create any tensor on cuda:0 from every process? If not, some lib/code might be accidentally creating state on cuda:0. To avoid this, you can set the CUDA_VISIBLE_DEVICES env var to make sure that each process only sees one GPU.
st178157
Thanks for the reply @mrshenli. I haven't explicitly set device cuda:0 at any point, and even the official DDP example code shows the same unbalanced GPU memory consumption. I solved the issue by calling torch.cuda.set_device(args.local_rank), which works the same as setting CUDA_VISIBLE_DEVICES.
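For anyone hitting the same thing, the fix reduced to a sketch (assuming one process per GPU on a single node, so rank == local_rank):

import torch
import torch.distributed as dist

def setup(local_rank, world_size):
    # pin this process to its own GPU before any other CUDA work,
    # so stray allocations no longer land on cuda:0
    torch.cuda.set_device(local_rank)
    dist.init_process_group("nccl", rank=local_rank, world_size=world_size)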
st178158
Suppose I have a vector of type torch.int32. During an all_reduce operation, do all 32 bits for each coordinate get transmitted, irrespective of the value (at the coordinate)? More specifically, I am interested in how we achieve higher speeds in reducing sparse tensors. (By sparse tensors I mean tensors with a large number of zeroes.)
st178159
We currently support all_reduce on sparse tensors with the Gloo backend (for both CPU and CUDA tensors), but this is not yet supported with the NCCL backend.
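For example, something along these lines should work with Gloo (a small sketch; it assumes the gloo process group is already initialized on every rank):

import torch
import torch.distributed as dist

# each rank contributes a sparse COO tensor; only indices/values are communicated
indices = torch.tensor([[0, 3]])
values = torch.tensor([1.0, 2.0])
sp = torch.sparse_coo_tensor(indices, values, (8,))

dist.all_reduce(sp, op=dist.ReduceOp.SUM)   # sparse all_reduce, supported on gloo
print(sp.to_dense())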
st178160
TensorFlow all_reduces sparse tensors (tf.IndexedSlices) by all_gathering followed by tensor reduction. Does PyTorch do the same (with the Gloo backend), or does it do something different under the hood?
st178161
It’s pretty similar - we all_gather the metadata, indices, and values, and then each node does a local sum of the sparse tensors. Here’s the implementation: https://github.com/pytorch/pytorch/blob/65bd38127a34d428915c88507878b4735edf005f/torch/lib/c10d/ProcessGroupGloo.cpp#L939 1
st178162
Hello everyone. I use MPI to launch multiple processes and use the NCCL backend with DDP. Is this a correct way to combine MPI and NCCL? I'd appreciate it if anybody can help me! Thanks in advance! Here is my sample code:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def dist_train(rank, size):
    local_rank = int(os.environ['OMPI_COMM_WORLD_LOCAL_RANK'])
    if args.gpu:
        torch.cuda.set_device(local_rank)
    # set torch device
    device = torch.device("cuda" if args.gpu and torch.cuda.is_available() else "cpu")
    model = model.to(device)
    model = DDP(model, device_ids=[local_rank])
    '''training code......'''

def init_process(rank, size, fn, backend='gloo'):
    dist.init_process_group(backend, init_method='tcp://master_ip:port',
                            rank=rank, world_size=size)
    fn(rank, size)

world_size = int(os.environ['OMPI_COMM_WORLD_SIZE'])
world_rank = int(os.environ['OMPI_COMM_WORLD_RANK'])
init_process(world_rank, world_size, dist_train, backend='nccl')

My running command is:

mpirun -np ${totals} -H ${slots} ${COMMON_MPI_PARAMETERS} python demo.py
st178163
As far as I know, in DDP (DistributedDataParallel), loss.backward() will automatically synchronize gradients for all nodes in the group through the Reducer. However, if I want to synchronize and update model parameters among only part of the nodes in some epochs, how can I manage to do that? I would appreciate any hints or a concrete code sample.
st178164
Hey @Asta I see two options:

Option 1: create two DDP instances on each process and construct them using different ProcessGroup instances. One DDP instance can use the global ProcessGroup, which will synchronize across all nodes, and another DDP instance can use a different ProcessGroup of a sub-group, which is created using the new_group API.

Option 2: Use the DDP comm hook [code and example]. This is still a prototype feature and might change in the future.

One thing to mention is that, when you do this (sync gradients in a subgroup), it will create gradient inconsistency across processes (as some processes didn't participate in some iterations). This would then lead to inconsistency in model replicas on different processes. DDP only broadcasts the model in its ctor. To keep all model replicas consistent, it relies on the assumption that all processes see the same gradient in all iterations. So, if you do partial sync, you might also need to manually broadcast the model to bring all processes back in sync, otherwise the result might be numerically incorrect.
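A rough sketch of Option 1, assuming 4 processes where ranks 0 and 1 form the sub-group (whether you share one module or keep separate copies, as below, depends on your use case):

import copy
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def build_ddp_models(model, rank):
    # every rank must call new_group, even ranks that are not members
    sub_group = dist.new_group(ranks=[0, 1])

    # instance 1: gradients synchronized across the global group (all ranks)
    ddp_all = DDP(copy.deepcopy(model).to(rank), device_ids=[rank])

    # instance 2: gradients synchronized only inside the sub-group
    ddp_sub = None
    if rank in (0, 1):
        ddp_sub = DDP(copy.deepcopy(model).to(rank), device_ids=[rank],
                      process_group=sub_group)
    return ddp_all, ddp_sub

And for the manual re-sync mentioned above, broadcasting the parameters from one rank (e.g. dist.broadcast(p.data, src=0) over model.parameters()) is one way to bring the replicas back together.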
st178165
I am training VGG11 over 16 nodes with data parallelism and the NCCL backend. However, I found that the training time for one iteration is too long. I break down the time spent on I/O, forward, backward, and optimization. It turns out that the I/O, forward, and optimization phases have similar time durations when compared with 8 nodes. The major increase comes from the gradient synchronization during the backward phase.

I profiled the code. It turns out the NCCL allreduce takes the majority of the time (see figure below, the timeline for 1 iteration over 16 nodes). I think most of the time is spent on the classifier layers. However, the PyTorch NCCL allreduce time of these layers is much longer than the expected original NCCL allreduce performance on the same amount of data.

In addition, I also measured PyTorch NCCL allreduce performance over the model parameters (see code below). It turns out the classifier layers take 280.682 ms (total size: 471MB). However, if I directly use the NCCL allreduce benchmark to report the performance on the same amount of data, the time is about 60ms. I wonder if anyone might know the reason. I am using PyTorch 1.4.

[0] NCCL INFO NET/IB : Using [0]mlx5_1:1/IB [1]mlx5_3:1/IB [2]mlx5_0:1/IB [3]mlx5_2:1/IB ;

for i, param in enumerate(model.parameters()):
    event_start[i].record()
    dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
    event_end[i].record()
st178166
@yzz, thanks for posting and comparing dist.all_reduce with the NCCL allreduce benchmark. May I have your complete code for the comparison? Also, what kind of GPU are you using, and what is the network type (GPUDirect, Ethernet, etc.)? I can try to repro and see what is going on here.
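One thing that may matter for the comparison: the per-parameter loop issues one NCCL call per tensor, while the NCCL benchmark reduces a single large buffer. A quick, hypothetical way to separate the two effects:

import torch
import torch.distributed as dist

def allreduce_per_param(model):
    # one collective per parameter (what the measurement loop above does)
    for p in model.parameters():
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)

def allreduce_flat(model):
    # a single collective over one flattened buffer, closer to the NCCL benchmark
    grads = [p.grad for p in model.parameters()]
    flat = torch.cat([g.reshape(-1) for g in grads])
    dist.all_reduce(flat, op=dist.ReduceOp.SUM)
    offset = 0
    for g in grads:
        n = g.numel()
        g.copy_(flat[offset:offset + n].view_as(g))
        offset += n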
st178167
Sure. The code is attached below. The GPU is NVIDIA V100, and GPUDirect is used. To run the code, a dummy fixed-size sample dataset has to be generated. Sample size is 3 * 224 * 224 * 4 bytes. The attached script can be used to generate this dataset.

#! /bin/bash
base="base.file"
dataset_base="your_dir_path"
truncate -s 602112 $base
for class in {0..9}
do
    dir="$dataset_base/${class}"
    /bin/rm -rf $dir
    mkdir -p $dir
    echo $dir created
    for img_id in {0..1300}
    do
        fpath="${dataset_base}/${class}/${img_id}.fake"
        cp $base $fpath
    done
done

You have to set the master addr and port as env variables, and change the root path in vgg11.py to the created dir path. We have two files (one vgg, one data_loader). My test case: 1 GPU per node, 16 nodes, 128 samples per GPU.

python vgg11.py [batch_size] [rank] [rank_sizes]

The printout at the end of the output is the allreduce time spent for applying allreduce directly on each parameter.

vgg11.py

import torch
import torch.nn as nn
import torch.optim as optim
import torch.distributed as dist
import os
import sys
import time
import data_loader

cfg = {
    'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

class VGG(nn.Module):
    def __init__(self, vgg_name):
        super(VGG, self).__init__()
        self.features = self.large_make_layers(cfg[vgg_name])
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(4096, 1000),
        )

    def forward(self, x):
        out = self.features(x)
        out = out.view(out.size(0), -1)
        out = self.classifier(out)
        return out

    def large_make_layers(self, cfg, batch_norm=False):
        layers = []
        in_channels = 3
        for v in cfg:
            if v == 'M':
                layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
            else:
                conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
                if batch_norm:
                    layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU()]
                else:
                    layers += [conv2d, nn.ReLU()]
                in_channels = v
        return nn.Sequential(*layers)

def sync_gradients(model, batch_idx, timer):
    """ Gradient averaging. """
    global record_event_cnt
    for param in model.parameters():
        print(param.grad.data.shape)
        print("record_event_cnt: %d, batch_idx: %d" % (record_event_cnt, batch_idx))
        if batch_idx > 0:
            put_timer(record_event_cnt, 1, timer)
        dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
        if batch_idx > 0:
            put_timer(record_event_cnt, 0, timer)
        record_event_cnt += 1

def cal_single_time(count, timer):
    tot_time = 0.0
    global para_cnt
    for i in range(count):
        print("i: %d" % (i))
        time = timer[i].elapsed_time(timer[i + para_cnt])
        print("%d time: %lf" % (i, time))

def put_timer(i, start, timer):
    global para_cnt
    if i >= 0:
        if start == 1:
            timer[i].record()
            print("put start for %d " % (i))
        elif start == 0:
            #print("put timer for iteration: " + str(i))
            timer[para_cnt + i].record()
            print("put end for %d" % (para_cnt + i))

N = int(sys.argv[1])
rank = int(sys.argv[2])
world_size = int(sys.argv[3])
record_event_cnt = 0
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # PyTorch device
dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)
net = VGG('VGG11')
net = net.cuda()
#net = torch.nn.parallel.DistributedDataParallel(net, device_ids=[0])
para_cnt = 22
sync_timer = []
for i in range(para_cnt * 2):
    sync_timer.append(torch.cuda.Event(enable_timing=True))
start_event = torch.cuda.Event(enable_timing=True)
end_event = torch.cuda.Event(enable_timing=True)
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()
#inputs = torch.ones([N, 3, 224, 224], device=device)
#labels = torch.empty(N, dtype=torch.long, device=device).random_(1000)
root_path = 'your_dir_path'
res_size = 224
trainset = data_loader.DatasetFolder(root=root_path, loader=data_loader.raw_data_loader, \
    img_size=res_size, extensions=data_loader.IMG_EXTENSIONS, transform=None)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=N, shuffle=True, num_workers=1, pin_memory=True)
torch.cuda.synchronize()
for batch_idx, (inputs, labels) in enumerate(trainloader):
    inputs, labels = inputs.to(device), labels.to(device)
    if batch_idx == 1:
        torch.cuda.synchronize()
        start = time.time()
        start_event.record()
    out = net(inputs)
    loss = criterion(out, labels)
    loss.backward()
    sync_gradients(net, batch_idx, sync_timer)
    print("================")
    optimizer.step()
    optimizer.zero_grad()
    if batch_idx == 2:
        break
    record_event_cnt = 0

end_event.record()
torch.cuda.synchronize()
end = time.time()
print(end-start)
print("iter:%d, %d: %lf, cuda time: %lf"% (batch_idx, N, (end - start), start_event.elapsed_time(end_event)))
print("end record_event_cnt: %d"% (record_event_cnt))
cal_single_time(record_event_cnt, sync_timer)

data_loader.py (some code is borrowed from the original torchvision data loader)

from torchvision import datasets, transforms
import torch
import torchvision
import os
import os.path
import time

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # PyTorch device

def raw_data_loader(path, size, d):
    file_content = torch.from_file(path, dtype=torch.float, size=size)
    #file_content = file_content.to(torch.float)
    file_content.resize_((3, d, d))
    return file_content

IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.tiff', '.webp', '.fake')

def has_file_allowed_extension(filename, extensions):
    """Checks if a file is an allowed extension.

    Args:
        filename (string): path to a file
        extensions (tuple of strings): extensions to consider (lowercase)

    Returns:
        bool: True if the filename ends with one of given extensions
    """
    return filename.lower().endswith(extensions)

def make_dataset(directory, class_to_idx, extensions=None, is_valid_file=None):
    instances = []
    directory = os.path.expanduser(directory)
    both_none = extensions is None and is_valid_file is None
    both_something = extensions is not None and is_valid_file is not None
    if both_none or both_something:
        raise ValueError("Both extensions and is_valid_file cannot be None or not None at the same time")
    if extensions is not None:
        def is_valid_file(x):
            return has_file_allowed_extension(x, extensions)
    for target_class in sorted(class_to_idx.keys()):
        class_index = class_to_idx[target_class]
        target_dir = os.path.join(directory, target_class)
        if not os.path.isdir(target_dir):
            continue
        for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
            for fname in sorted(fnames):
                path = os.path.join(root, fname)
                if is_valid_file(path):
                    item = path, class_index
                    instances.append(item)
    return instances

class DatasetFolder(datasets.VisionDataset):
    def __init__(self, root, loader, img_size, extensions=None, transform=None, target_transform=None, is_valid_file=None):
        super(DatasetFolder, self).__init__(root, transform=transform, target_transform=target_transform)
        classes, class_to_idx = self._find_classes(self.root)
        samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)
        if len(samples) == 0:
            msg = "Found 0 files in subfolders of: {}\n".format(self.root)
            if extensions is not None:
                msg += "Supported extensions are: {}".format(",".join(extensions))
            raise RuntimeError(msg)
        self.loader = loader
        self.extensions = extensions
        self.img_size = img_size * img_size * 3
        self.img_res = img_size
        self.classes = classes
        self.class_to_idx = class_to_idx
        self.samples = samples
        self.targets = [s[1] for s in samples]

    def _find_classes(self, dir):
        """ Finds the class folders in a dataset.

        Args:
            dir (string): Root directory path.

        Returns:
            tuple: (classes, class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.

        Ensures:
            No class is a subdirectory of another.
        """
        classes = [d.name for d in os.scandir(dir) if d.is_dir()]
        classes.sort()
        class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
        return classes, class_to_idx

    def __getitem__(self, index):
        """
        Args:
            index (int): Index

        Returns:
            tuple: (sample, target) where target is class_index of the target class.
        """
        path, target = self.samples[index]
        sample = self.loader(path, self.img_size, self.img_res)
        if self.transform is not None:
            sample = self.transform(sample)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return sample, target

    def __len__(self):
        return len(self.samples)